Test Report: KVM_Linux_crio 19906

5f98292058b4faeaee8bae7d05b64f549b3dcccf:2024-11-04:36938

Failed tests (34/320)

| Order | Failed test | Duration (s) |
|-------|-------------|--------------|
| 36 | TestAddons/parallel/Ingress | 153.74 |
| 38 | TestAddons/parallel/MetricsServer | 346.17 |
| 47 | TestAddons/StoppedEnableDisable | 154.22 |
| 168 | TestMultiControlPlane/serial/StopSecondaryNode | 149.56 |
| 169 | TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop | 5.54 |
| 170 | TestMultiControlPlane/serial/RestartSecondaryNode | 6.49 |
| 171 | TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart | 6.56 |
| 172 | TestMultiControlPlane/serial/RestartClusterKeepsNodes | 416.5 |
| 173 | TestMultiControlPlane/serial/DeleteSecondaryNode | 173.23 |
| 174 | TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete | 57.72 |
| 175 | TestMultiControlPlane/serial/StopCluster | 194.02 |
| 176 | TestMultiControlPlane/serial/RestartCluster | 555.37 |
| 232 | TestMultiNode/serial/RestartKeepsNodes | 318.6 |
| 234 | TestMultiNode/serial/StopMultiNode | 144.92 |
| 241 | TestPreload | 168.8 |
| 249 | TestKubernetesUpgrade | 391.99 |
| 328 | TestStartStop/group/old-k8s-version/serial/FirstStart | 267.62 |
| 338 | TestStartStop/group/no-preload/serial/Stop | 139.49 |
| 342 | TestStartStop/group/embed-certs/serial/Stop | 139.2 |
| 352 | TestStartStop/group/default-k8s-diff-port/serial/Stop | 139.04 |
| 353 | TestStartStop/group/no-preload/serial/EnableAddonAfterStop | 12.38 |
| 354 | TestStartStop/group/embed-certs/serial/EnableAddonAfterStop | 12.38 |
| 356 | TestStartStop/group/old-k8s-version/serial/DeployApp | 0.48 |
| 357 | TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive | 85.5 |
| 359 | TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop | 12.38 |
| 363 | TestStartStop/group/old-k8s-version/serial/SecondStart | 724.94 |
| 364 | TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop | 544.18 |
| 365 | TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop | 544.2 |
| 366 | TestStartStop/group/no-preload/serial/UserAppExistsAfterStop | 544.16 |
| 367 | TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop | 543.37 |
| 368 | TestStartStop/group/embed-certs/serial/AddonExistsAfterStop | 451.11 |
| 369 | TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop | 542.05 |
| 370 | TestStartStop/group/no-preload/serial/AddonExistsAfterStop | 364 |
| 371 | TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop | 175.09 |
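Any failure in the table can usually be re-run in isolation against a locally built binary. A minimal sketch, assuming a checkout of the minikube repository; the exact flag names (e.g. --minikube-start-args) should be confirmed against test/integration/main_test.go:

# build the binary exercised by the integration suite
make out/minikube-linux-amd64

# re-run one failed test by name with the driver/runtime combination from this report
go test ./test/integration -v -timeout 60m \
  -run "TestAddons/parallel/Ingress" \
  -args "--minikube-start-args=--driver=kvm2 --container-runtime=crio"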
TestAddons/parallel/Ingress (153.74s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-746456 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-746456 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-746456 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [0e748c47-c76c-4e32-a421-8bf0ac2fb2f6] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [0e748c47-c76c-4e32-a421-8bf0ac2fb2f6] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.004959937s
I1104 10:40:32.133515   27218 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-746456 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-746456 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m11.84929834s)

** stderr **
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run:  kubectl --context addons-746456 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-746456 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.4
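The failing assertion above is the in-VM curl: the command run through minikube ssh exited with status 28, which is typically curl's operation-timed-out code, meaning the request to the ingress controller on 127.0.0.1:80 inside the VM never completed within curl's default timeout. A rough way to probe the same path by hand once the cluster is up (a sketch, reusing the profile name and selectors from the log above):

# confirm the ingress-nginx controller pod is Ready
kubectl --context addons-746456 -n ingress-nginx get pods -l app.kubernetes.io/component=controller

# repeat the in-VM request the test performs, with an explicit curl timeout
out/minikube-linux-amd64 -p addons-746456 ssh "curl -s -m 10 -H 'Host: nginx.example.com' http://127.0.0.1/"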
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-746456 -n addons-746456
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-746456 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-746456 logs -n 25: (1.160973896s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | --all                                                                                       | minikube             | jenkins | v1.34.0 | 04 Nov 24 10:37 UTC | 04 Nov 24 10:37 UTC |
	| delete  | -p download-only-440707                                                                     | download-only-440707 | jenkins | v1.34.0 | 04 Nov 24 10:37 UTC | 04 Nov 24 10:37 UTC |
	| delete  | -p download-only-779038                                                                     | download-only-779038 | jenkins | v1.34.0 | 04 Nov 24 10:37 UTC | 04 Nov 24 10:37 UTC |
	| delete  | -p download-only-440707                                                                     | download-only-440707 | jenkins | v1.34.0 | 04 Nov 24 10:37 UTC | 04 Nov 24 10:37 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-739738 | jenkins | v1.34.0 | 04 Nov 24 10:37 UTC |                     |
	|         | binary-mirror-739738                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:45149                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-739738                                                                     | binary-mirror-739738 | jenkins | v1.34.0 | 04 Nov 24 10:37 UTC | 04 Nov 24 10:37 UTC |
	| addons  | enable dashboard -p                                                                         | addons-746456        | jenkins | v1.34.0 | 04 Nov 24 10:37 UTC |                     |
	|         | addons-746456                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-746456        | jenkins | v1.34.0 | 04 Nov 24 10:37 UTC |                     |
	|         | addons-746456                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-746456 --wait=true                                                                | addons-746456        | jenkins | v1.34.0 | 04 Nov 24 10:37 UTC | 04 Nov 24 10:39 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	| addons  | addons-746456 addons disable                                                                | addons-746456        | jenkins | v1.34.0 | 04 Nov 24 10:39 UTC | 04 Nov 24 10:39 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-746456 addons disable                                                                | addons-746456        | jenkins | v1.34.0 | 04 Nov 24 10:39 UTC | 04 Nov 24 10:40 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-746456        | jenkins | v1.34.0 | 04 Nov 24 10:40 UTC | 04 Nov 24 10:40 UTC |
	|         | -p addons-746456                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-746456 addons disable                                                                | addons-746456        | jenkins | v1.34.0 | 04 Nov 24 10:40 UTC | 04 Nov 24 10:40 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ip      | addons-746456 ip                                                                            | addons-746456        | jenkins | v1.34.0 | 04 Nov 24 10:40 UTC | 04 Nov 24 10:40 UTC |
	| addons  | addons-746456 addons disable                                                                | addons-746456        | jenkins | v1.34.0 | 04 Nov 24 10:40 UTC | 04 Nov 24 10:40 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-746456 addons                                                                        | addons-746456        | jenkins | v1.34.0 | 04 Nov 24 10:40 UTC | 04 Nov 24 10:40 UTC |
	|         | disable inspektor-gadget                                                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-746456 ssh curl -s                                                                   | addons-746456        | jenkins | v1.34.0 | 04 Nov 24 10:40 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| ssh     | addons-746456 ssh cat                                                                       | addons-746456        | jenkins | v1.34.0 | 04 Nov 24 10:40 UTC | 04 Nov 24 10:40 UTC |
	|         | /opt/local-path-provisioner/pvc-805b188f-c328-4e68-8920-c8c6b1f9c108_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-746456 addons disable                                                                | addons-746456        | jenkins | v1.34.0 | 04 Nov 24 10:40 UTC | 04 Nov 24 10:41 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-746456 addons                                                                        | addons-746456        | jenkins | v1.34.0 | 04 Nov 24 10:40 UTC | 04 Nov 24 10:41 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-746456 addons                                                                        | addons-746456        | jenkins | v1.34.0 | 04 Nov 24 10:41 UTC | 04 Nov 24 10:41 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-746456 addons disable                                                                | addons-746456        | jenkins | v1.34.0 | 04 Nov 24 10:41 UTC | 04 Nov 24 10:41 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| addons  | addons-746456 addons                                                                        | addons-746456        | jenkins | v1.34.0 | 04 Nov 24 10:41 UTC | 04 Nov 24 10:41 UTC |
	|         | disable nvidia-device-plugin                                                                |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-746456 addons                                                                        | addons-746456        | jenkins | v1.34.0 | 04 Nov 24 10:41 UTC | 04 Nov 24 10:41 UTC |
	|         | disable cloud-spanner                                                                       |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-746456 ip                                                                            | addons-746456        | jenkins | v1.34.0 | 04 Nov 24 10:42 UTC | 04 Nov 24 10:42 UTC |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/11/04 10:37:39
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1104 10:37:39.385347   27967 out.go:345] Setting OutFile to fd 1 ...
	I1104 10:37:39.385445   27967 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1104 10:37:39.385453   27967 out.go:358] Setting ErrFile to fd 2...
	I1104 10:37:39.385457   27967 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1104 10:37:39.385619   27967 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19906-19898/.minikube/bin
	I1104 10:37:39.386172   27967 out.go:352] Setting JSON to false
	I1104 10:37:39.387012   27967 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":4810,"bootTime":1730711849,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1104 10:37:39.387076   27967 start.go:139] virtualization: kvm guest
	I1104 10:37:39.390070   27967 out.go:177] * [addons-746456] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1104 10:37:39.391440   27967 notify.go:220] Checking for updates...
	I1104 10:37:39.391455   27967 out.go:177]   - MINIKUBE_LOCATION=19906
	I1104 10:37:39.392960   27967 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1104 10:37:39.394322   27967 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19906-19898/kubeconfig
	I1104 10:37:39.395646   27967 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19906-19898/.minikube
	I1104 10:37:39.396925   27967 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1104 10:37:39.398215   27967 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1104 10:37:39.399788   27967 driver.go:394] Setting default libvirt URI to qemu:///system
	I1104 10:37:39.432037   27967 out.go:177] * Using the kvm2 driver based on user configuration
	I1104 10:37:39.433452   27967 start.go:297] selected driver: kvm2
	I1104 10:37:39.433469   27967 start.go:901] validating driver "kvm2" against <nil>
	I1104 10:37:39.433481   27967 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1104 10:37:39.434265   27967 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1104 10:37:39.434342   27967 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19906-19898/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1104 10:37:39.450411   27967 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1104 10:37:39.450453   27967 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1104 10:37:39.450652   27967 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1104 10:37:39.450677   27967 cni.go:84] Creating CNI manager for ""
	I1104 10:37:39.450701   27967 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1104 10:37:39.450709   27967 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1104 10:37:39.450768   27967 start.go:340] cluster config:
	{Name:addons-746456 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-746456 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAg
entPID:0 GPUs: AutoPauseInterval:1m0s}
	I1104 10:37:39.450853   27967 iso.go:125] acquiring lock: {Name:mk00c7dd6e02d348844f079f7574057e15cae010 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1104 10:37:39.452886   27967 out.go:177] * Starting "addons-746456" primary control-plane node in "addons-746456" cluster
	I1104 10:37:39.454154   27967 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1104 10:37:39.454180   27967 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1104 10:37:39.454186   27967 cache.go:56] Caching tarball of preloaded images
	I1104 10:37:39.454265   27967 preload.go:172] Found /home/jenkins/minikube-integration/19906-19898/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1104 10:37:39.454278   27967 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1104 10:37:39.454553   27967 profile.go:143] Saving config to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/addons-746456/config.json ...
	I1104 10:37:39.454573   27967 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/addons-746456/config.json: {Name:mk7f355297e64314e7f2737f1ad3b6060652fcdd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 10:37:39.454711   27967 start.go:360] acquireMachinesLock for addons-746456: {Name:mka4ed965095cfb58c9b0f5916edff39b4236a2f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1104 10:37:39.454766   27967 start.go:364] duration metric: took 39.347µs to acquireMachinesLock for "addons-746456"
	I1104 10:37:39.454789   27967 start.go:93] Provisioning new machine with config: &{Name:addons-746456 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.2 ClusterName:addons-746456 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1104 10:37:39.454840   27967 start.go:125] createHost starting for "" (driver="kvm2")
	I1104 10:37:39.456617   27967 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I1104 10:37:39.456722   27967 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 10:37:39.456759   27967 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 10:37:39.470888   27967 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45857
	I1104 10:37:39.471452   27967 main.go:141] libmachine: () Calling .GetVersion
	I1104 10:37:39.471973   27967 main.go:141] libmachine: Using API Version  1
	I1104 10:37:39.471993   27967 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 10:37:39.472385   27967 main.go:141] libmachine: () Calling .GetMachineName
	I1104 10:37:39.472550   27967 main.go:141] libmachine: (addons-746456) Calling .GetMachineName
	I1104 10:37:39.472704   27967 main.go:141] libmachine: (addons-746456) Calling .DriverName
	I1104 10:37:39.472856   27967 start.go:159] libmachine.API.Create for "addons-746456" (driver="kvm2")
	I1104 10:37:39.472900   27967 client.go:168] LocalClient.Create starting
	I1104 10:37:39.472948   27967 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem
	I1104 10:37:39.699203   27967 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem
	I1104 10:37:39.841913   27967 main.go:141] libmachine: Running pre-create checks...
	I1104 10:37:39.841935   27967 main.go:141] libmachine: (addons-746456) Calling .PreCreateCheck
	I1104 10:37:39.842401   27967 main.go:141] libmachine: (addons-746456) Calling .GetConfigRaw
	I1104 10:37:39.842807   27967 main.go:141] libmachine: Creating machine...
	I1104 10:37:39.842820   27967 main.go:141] libmachine: (addons-746456) Calling .Create
	I1104 10:37:39.842973   27967 main.go:141] libmachine: (addons-746456) Creating KVM machine...
	I1104 10:37:39.844192   27967 main.go:141] libmachine: (addons-746456) DBG | found existing default KVM network
	I1104 10:37:39.844900   27967 main.go:141] libmachine: (addons-746456) DBG | I1104 10:37:39.844770   27989 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002091f0}
	I1104 10:37:39.844938   27967 main.go:141] libmachine: (addons-746456) DBG | created network xml: 
	I1104 10:37:39.844955   27967 main.go:141] libmachine: (addons-746456) DBG | <network>
	I1104 10:37:39.844965   27967 main.go:141] libmachine: (addons-746456) DBG |   <name>mk-addons-746456</name>
	I1104 10:37:39.844973   27967 main.go:141] libmachine: (addons-746456) DBG |   <dns enable='no'/>
	I1104 10:37:39.844981   27967 main.go:141] libmachine: (addons-746456) DBG |   
	I1104 10:37:39.844990   27967 main.go:141] libmachine: (addons-746456) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1104 10:37:39.845013   27967 main.go:141] libmachine: (addons-746456) DBG |     <dhcp>
	I1104 10:37:39.845029   27967 main.go:141] libmachine: (addons-746456) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1104 10:37:39.845036   27967 main.go:141] libmachine: (addons-746456) DBG |     </dhcp>
	I1104 10:37:39.845041   27967 main.go:141] libmachine: (addons-746456) DBG |   </ip>
	I1104 10:37:39.845047   27967 main.go:141] libmachine: (addons-746456) DBG |   
	I1104 10:37:39.845060   27967 main.go:141] libmachine: (addons-746456) DBG | </network>
	I1104 10:37:39.845068   27967 main.go:141] libmachine: (addons-746456) DBG | 
	I1104 10:37:39.850312   27967 main.go:141] libmachine: (addons-746456) DBG | trying to create private KVM network mk-addons-746456 192.168.39.0/24...
	I1104 10:37:39.908997   27967 main.go:141] libmachine: (addons-746456) DBG | private KVM network mk-addons-746456 192.168.39.0/24 created
	I1104 10:37:39.909028   27967 main.go:141] libmachine: (addons-746456) DBG | I1104 10:37:39.908954   27989 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19906-19898/.minikube
	I1104 10:37:39.909044   27967 main.go:141] libmachine: (addons-746456) Setting up store path in /home/jenkins/minikube-integration/19906-19898/.minikube/machines/addons-746456 ...
	I1104 10:37:39.909061   27967 main.go:141] libmachine: (addons-746456) Building disk image from file:///home/jenkins/minikube-integration/19906-19898/.minikube/cache/iso/amd64/minikube-v1.34.0-1730282777-19883-amd64.iso
	I1104 10:37:39.909077   27967 main.go:141] libmachine: (addons-746456) Downloading /home/jenkins/minikube-integration/19906-19898/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19906-19898/.minikube/cache/iso/amd64/minikube-v1.34.0-1730282777-19883-amd64.iso...
	I1104 10:37:40.160338   27967 main.go:141] libmachine: (addons-746456) DBG | I1104 10:37:40.160211   27989 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/addons-746456/id_rsa...
	I1104 10:37:40.355708   27967 main.go:141] libmachine: (addons-746456) DBG | I1104 10:37:40.355570   27989 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/addons-746456/addons-746456.rawdisk...
	I1104 10:37:40.355736   27967 main.go:141] libmachine: (addons-746456) DBG | Writing magic tar header
	I1104 10:37:40.355747   27967 main.go:141] libmachine: (addons-746456) DBG | Writing SSH key tar header
	I1104 10:37:40.355754   27967 main.go:141] libmachine: (addons-746456) DBG | I1104 10:37:40.355693   27989 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19906-19898/.minikube/machines/addons-746456 ...
	I1104 10:37:40.355867   27967 main.go:141] libmachine: (addons-746456) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/addons-746456
	I1104 10:37:40.355889   27967 main.go:141] libmachine: (addons-746456) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19906-19898/.minikube/machines
	I1104 10:37:40.355901   27967 main.go:141] libmachine: (addons-746456) Setting executable bit set on /home/jenkins/minikube-integration/19906-19898/.minikube/machines/addons-746456 (perms=drwx------)
	I1104 10:37:40.355911   27967 main.go:141] libmachine: (addons-746456) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19906-19898/.minikube
	I1104 10:37:40.355921   27967 main.go:141] libmachine: (addons-746456) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19906-19898
	I1104 10:37:40.355931   27967 main.go:141] libmachine: (addons-746456) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1104 10:37:40.355947   27967 main.go:141] libmachine: (addons-746456) DBG | Checking permissions on dir: /home/jenkins
	I1104 10:37:40.355955   27967 main.go:141] libmachine: (addons-746456) DBG | Checking permissions on dir: /home
	I1104 10:37:40.355968   27967 main.go:141] libmachine: (addons-746456) DBG | Skipping /home - not owner
	I1104 10:37:40.355985   27967 main.go:141] libmachine: (addons-746456) Setting executable bit set on /home/jenkins/minikube-integration/19906-19898/.minikube/machines (perms=drwxr-xr-x)
	I1104 10:37:40.356001   27967 main.go:141] libmachine: (addons-746456) Setting executable bit set on /home/jenkins/minikube-integration/19906-19898/.minikube (perms=drwxr-xr-x)
	I1104 10:37:40.356015   27967 main.go:141] libmachine: (addons-746456) Setting executable bit set on /home/jenkins/minikube-integration/19906-19898 (perms=drwxrwxr-x)
	I1104 10:37:40.356029   27967 main.go:141] libmachine: (addons-746456) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1104 10:37:40.356041   27967 main.go:141] libmachine: (addons-746456) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1104 10:37:40.356057   27967 main.go:141] libmachine: (addons-746456) Creating domain...
	I1104 10:37:40.357036   27967 main.go:141] libmachine: (addons-746456) define libvirt domain using xml: 
	I1104 10:37:40.357063   27967 main.go:141] libmachine: (addons-746456) <domain type='kvm'>
	I1104 10:37:40.357074   27967 main.go:141] libmachine: (addons-746456)   <name>addons-746456</name>
	I1104 10:37:40.357086   27967 main.go:141] libmachine: (addons-746456)   <memory unit='MiB'>4000</memory>
	I1104 10:37:40.357096   27967 main.go:141] libmachine: (addons-746456)   <vcpu>2</vcpu>
	I1104 10:37:40.357103   27967 main.go:141] libmachine: (addons-746456)   <features>
	I1104 10:37:40.357112   27967 main.go:141] libmachine: (addons-746456)     <acpi/>
	I1104 10:37:40.357121   27967 main.go:141] libmachine: (addons-746456)     <apic/>
	I1104 10:37:40.357133   27967 main.go:141] libmachine: (addons-746456)     <pae/>
	I1104 10:37:40.357142   27967 main.go:141] libmachine: (addons-746456)     
	I1104 10:37:40.357151   27967 main.go:141] libmachine: (addons-746456)   </features>
	I1104 10:37:40.357161   27967 main.go:141] libmachine: (addons-746456)   <cpu mode='host-passthrough'>
	I1104 10:37:40.357169   27967 main.go:141] libmachine: (addons-746456)   
	I1104 10:37:40.357181   27967 main.go:141] libmachine: (addons-746456)   </cpu>
	I1104 10:37:40.357189   27967 main.go:141] libmachine: (addons-746456)   <os>
	I1104 10:37:40.357196   27967 main.go:141] libmachine: (addons-746456)     <type>hvm</type>
	I1104 10:37:40.357204   27967 main.go:141] libmachine: (addons-746456)     <boot dev='cdrom'/>
	I1104 10:37:40.357214   27967 main.go:141] libmachine: (addons-746456)     <boot dev='hd'/>
	I1104 10:37:40.357221   27967 main.go:141] libmachine: (addons-746456)     <bootmenu enable='no'/>
	I1104 10:37:40.357245   27967 main.go:141] libmachine: (addons-746456)   </os>
	I1104 10:37:40.357271   27967 main.go:141] libmachine: (addons-746456)   <devices>
	I1104 10:37:40.357294   27967 main.go:141] libmachine: (addons-746456)     <disk type='file' device='cdrom'>
	I1104 10:37:40.357312   27967 main.go:141] libmachine: (addons-746456)       <source file='/home/jenkins/minikube-integration/19906-19898/.minikube/machines/addons-746456/boot2docker.iso'/>
	I1104 10:37:40.357319   27967 main.go:141] libmachine: (addons-746456)       <target dev='hdc' bus='scsi'/>
	I1104 10:37:40.357326   27967 main.go:141] libmachine: (addons-746456)       <readonly/>
	I1104 10:37:40.357332   27967 main.go:141] libmachine: (addons-746456)     </disk>
	I1104 10:37:40.357340   27967 main.go:141] libmachine: (addons-746456)     <disk type='file' device='disk'>
	I1104 10:37:40.357349   27967 main.go:141] libmachine: (addons-746456)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1104 10:37:40.357359   27967 main.go:141] libmachine: (addons-746456)       <source file='/home/jenkins/minikube-integration/19906-19898/.minikube/machines/addons-746456/addons-746456.rawdisk'/>
	I1104 10:37:40.357364   27967 main.go:141] libmachine: (addons-746456)       <target dev='hda' bus='virtio'/>
	I1104 10:37:40.357405   27967 main.go:141] libmachine: (addons-746456)     </disk>
	I1104 10:37:40.357435   27967 main.go:141] libmachine: (addons-746456)     <interface type='network'>
	I1104 10:37:40.357449   27967 main.go:141] libmachine: (addons-746456)       <source network='mk-addons-746456'/>
	I1104 10:37:40.357460   27967 main.go:141] libmachine: (addons-746456)       <model type='virtio'/>
	I1104 10:37:40.357469   27967 main.go:141] libmachine: (addons-746456)     </interface>
	I1104 10:37:40.357481   27967 main.go:141] libmachine: (addons-746456)     <interface type='network'>
	I1104 10:37:40.357494   27967 main.go:141] libmachine: (addons-746456)       <source network='default'/>
	I1104 10:37:40.357505   27967 main.go:141] libmachine: (addons-746456)       <model type='virtio'/>
	I1104 10:37:40.357518   27967 main.go:141] libmachine: (addons-746456)     </interface>
	I1104 10:37:40.357528   27967 main.go:141] libmachine: (addons-746456)     <serial type='pty'>
	I1104 10:37:40.357538   27967 main.go:141] libmachine: (addons-746456)       <target port='0'/>
	I1104 10:37:40.357548   27967 main.go:141] libmachine: (addons-746456)     </serial>
	I1104 10:37:40.357567   27967 main.go:141] libmachine: (addons-746456)     <console type='pty'>
	I1104 10:37:40.357592   27967 main.go:141] libmachine: (addons-746456)       <target type='serial' port='0'/>
	I1104 10:37:40.357603   27967 main.go:141] libmachine: (addons-746456)     </console>
	I1104 10:37:40.357611   27967 main.go:141] libmachine: (addons-746456)     <rng model='virtio'>
	I1104 10:37:40.357626   27967 main.go:141] libmachine: (addons-746456)       <backend model='random'>/dev/random</backend>
	I1104 10:37:40.357634   27967 main.go:141] libmachine: (addons-746456)     </rng>
	I1104 10:37:40.357642   27967 main.go:141] libmachine: (addons-746456)     
	I1104 10:37:40.357647   27967 main.go:141] libmachine: (addons-746456)     
	I1104 10:37:40.357658   27967 main.go:141] libmachine: (addons-746456)   </devices>
	I1104 10:37:40.357671   27967 main.go:141] libmachine: (addons-746456) </domain>
	I1104 10:37:40.357683   27967 main.go:141] libmachine: (addons-746456) 
	I1104 10:37:40.363082   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined MAC address 52:54:00:c6:16:0c in network default
	I1104 10:37:40.363613   27967 main.go:141] libmachine: (addons-746456) Ensuring networks are active...
	I1104 10:37:40.363629   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:37:40.364265   27967 main.go:141] libmachine: (addons-746456) Ensuring network default is active
	I1104 10:37:40.364622   27967 main.go:141] libmachine: (addons-746456) Ensuring network mk-addons-746456 is active
	I1104 10:37:40.365094   27967 main.go:141] libmachine: (addons-746456) Getting domain xml...
	I1104 10:37:40.365658   27967 main.go:141] libmachine: (addons-746456) Creating domain...
	I1104 10:37:41.736908   27967 main.go:141] libmachine: (addons-746456) Waiting to get IP...
	I1104 10:37:41.737735   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:37:41.738240   27967 main.go:141] libmachine: (addons-746456) DBG | unable to find current IP address of domain addons-746456 in network mk-addons-746456
	I1104 10:37:41.738274   27967 main.go:141] libmachine: (addons-746456) DBG | I1104 10:37:41.738228   27989 retry.go:31] will retry after 233.791989ms: waiting for machine to come up
	I1104 10:37:41.973803   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:37:41.974186   27967 main.go:141] libmachine: (addons-746456) DBG | unable to find current IP address of domain addons-746456 in network mk-addons-746456
	I1104 10:37:41.974213   27967 main.go:141] libmachine: (addons-746456) DBG | I1104 10:37:41.974140   27989 retry.go:31] will retry after 264.314556ms: waiting for machine to come up
	I1104 10:37:42.239425   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:37:42.239771   27967 main.go:141] libmachine: (addons-746456) DBG | unable to find current IP address of domain addons-746456 in network mk-addons-746456
	I1104 10:37:42.239793   27967 main.go:141] libmachine: (addons-746456) DBG | I1104 10:37:42.239722   27989 retry.go:31] will retry after 439.256751ms: waiting for machine to come up
	I1104 10:37:42.680467   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:37:42.680862   27967 main.go:141] libmachine: (addons-746456) DBG | unable to find current IP address of domain addons-746456 in network mk-addons-746456
	I1104 10:37:42.680881   27967 main.go:141] libmachine: (addons-746456) DBG | I1104 10:37:42.680824   27989 retry.go:31] will retry after 587.081953ms: waiting for machine to come up
	I1104 10:37:43.269423   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:37:43.269899   27967 main.go:141] libmachine: (addons-746456) DBG | unable to find current IP address of domain addons-746456 in network mk-addons-746456
	I1104 10:37:43.269926   27967 main.go:141] libmachine: (addons-746456) DBG | I1104 10:37:43.269869   27989 retry.go:31] will retry after 569.474968ms: waiting for machine to come up
	I1104 10:37:43.840617   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:37:43.841057   27967 main.go:141] libmachine: (addons-746456) DBG | unable to find current IP address of domain addons-746456 in network mk-addons-746456
	I1104 10:37:43.841085   27967 main.go:141] libmachine: (addons-746456) DBG | I1104 10:37:43.841009   27989 retry.go:31] will retry after 870.179807ms: waiting for machine to come up
	I1104 10:37:44.712711   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:37:44.713106   27967 main.go:141] libmachine: (addons-746456) DBG | unable to find current IP address of domain addons-746456 in network mk-addons-746456
	I1104 10:37:44.713144   27967 main.go:141] libmachine: (addons-746456) DBG | I1104 10:37:44.713077   27989 retry.go:31] will retry after 776.282678ms: waiting for machine to come up
	I1104 10:37:45.490992   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:37:45.491335   27967 main.go:141] libmachine: (addons-746456) DBG | unable to find current IP address of domain addons-746456 in network mk-addons-746456
	I1104 10:37:45.491363   27967 main.go:141] libmachine: (addons-746456) DBG | I1104 10:37:45.491298   27989 retry.go:31] will retry after 1.478494454s: waiting for machine to come up
	I1104 10:37:46.971872   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:37:46.972283   27967 main.go:141] libmachine: (addons-746456) DBG | unable to find current IP address of domain addons-746456 in network mk-addons-746456
	I1104 10:37:46.972310   27967 main.go:141] libmachine: (addons-746456) DBG | I1104 10:37:46.972242   27989 retry.go:31] will retry after 1.61669354s: waiting for machine to come up
	I1104 10:37:48.590204   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:37:48.590636   27967 main.go:141] libmachine: (addons-746456) DBG | unable to find current IP address of domain addons-746456 in network mk-addons-746456
	I1104 10:37:48.590662   27967 main.go:141] libmachine: (addons-746456) DBG | I1104 10:37:48.590606   27989 retry.go:31] will retry after 1.896747776s: waiting for machine to come up
	I1104 10:37:50.488679   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:37:50.489117   27967 main.go:141] libmachine: (addons-746456) DBG | unable to find current IP address of domain addons-746456 in network mk-addons-746456
	I1104 10:37:50.489145   27967 main.go:141] libmachine: (addons-746456) DBG | I1104 10:37:50.489078   27989 retry.go:31] will retry after 2.7039374s: waiting for machine to come up
	I1104 10:37:53.194165   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:37:53.194620   27967 main.go:141] libmachine: (addons-746456) DBG | unable to find current IP address of domain addons-746456 in network mk-addons-746456
	I1104 10:37:53.194642   27967 main.go:141] libmachine: (addons-746456) DBG | I1104 10:37:53.194576   27989 retry.go:31] will retry after 3.066417746s: waiting for machine to come up
	I1104 10:37:56.263682   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:37:56.264117   27967 main.go:141] libmachine: (addons-746456) DBG | unable to find current IP address of domain addons-746456 in network mk-addons-746456
	I1104 10:37:56.264143   27967 main.go:141] libmachine: (addons-746456) DBG | I1104 10:37:56.264078   27989 retry.go:31] will retry after 3.836132986s: waiting for machine to come up
	I1104 10:38:00.101792   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:38:00.102142   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has current primary IP address 192.168.39.4 and MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:38:00.102188   27967 main.go:141] libmachine: (addons-746456) Found IP for machine: 192.168.39.4
	I1104 10:38:00.102205   27967 main.go:141] libmachine: (addons-746456) Reserving static IP address...
	I1104 10:38:00.102545   27967 main.go:141] libmachine: (addons-746456) DBG | unable to find host DHCP lease matching {name: "addons-746456", mac: "52:54:00:a0:d7:13", ip: "192.168.39.4"} in network mk-addons-746456
	I1104 10:38:00.170807   27967 main.go:141] libmachine: (addons-746456) DBG | Getting to WaitForSSH function...
	I1104 10:38:00.170837   27967 main.go:141] libmachine: (addons-746456) Reserved static IP address: 192.168.39.4
	I1104 10:38:00.170850   27967 main.go:141] libmachine: (addons-746456) Waiting for SSH to be available...
	I1104 10:38:00.173084   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:38:00.173495   27967 main.go:141] libmachine: (addons-746456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:d7:13", ip: ""} in network mk-addons-746456: {Iface:virbr1 ExpiryTime:2024-11-04 11:37:54 +0000 UTC Type:0 Mac:52:54:00:a0:d7:13 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:minikube Clientid:01:52:54:00:a0:d7:13}
	I1104 10:38:00.173523   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined IP address 192.168.39.4 and MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:38:00.173668   27967 main.go:141] libmachine: (addons-746456) DBG | Using SSH client type: external
	I1104 10:38:00.173694   27967 main.go:141] libmachine: (addons-746456) DBG | Using SSH private key: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/addons-746456/id_rsa (-rw-------)
	I1104 10:38:00.173726   27967 main.go:141] libmachine: (addons-746456) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.4 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19906-19898/.minikube/machines/addons-746456/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1104 10:38:00.173743   27967 main.go:141] libmachine: (addons-746456) DBG | About to run SSH command:
	I1104 10:38:00.173756   27967 main.go:141] libmachine: (addons-746456) DBG | exit 0
	I1104 10:38:00.301291   27967 main.go:141] libmachine: (addons-746456) DBG | SSH cmd err, output: <nil>: 
	I1104 10:38:00.301594   27967 main.go:141] libmachine: (addons-746456) KVM machine creation complete!
	I1104 10:38:00.301915   27967 main.go:141] libmachine: (addons-746456) Calling .GetConfigRaw
	I1104 10:38:00.309061   27967 main.go:141] libmachine: (addons-746456) Calling .DriverName
	I1104 10:38:00.309331   27967 main.go:141] libmachine: (addons-746456) Calling .DriverName
	I1104 10:38:00.309504   27967 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1104 10:38:00.309520   27967 main.go:141] libmachine: (addons-746456) Calling .GetState
	I1104 10:38:00.310864   27967 main.go:141] libmachine: Detecting operating system of created instance...
	I1104 10:38:00.310877   27967 main.go:141] libmachine: Waiting for SSH to be available...
	I1104 10:38:00.310882   27967 main.go:141] libmachine: Getting to WaitForSSH function...
	I1104 10:38:00.310887   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHHostname
	I1104 10:38:00.313254   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:38:00.313678   27967 main.go:141] libmachine: (addons-746456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:d7:13", ip: ""} in network mk-addons-746456: {Iface:virbr1 ExpiryTime:2024-11-04 11:37:54 +0000 UTC Type:0 Mac:52:54:00:a0:d7:13 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:addons-746456 Clientid:01:52:54:00:a0:d7:13}
	I1104 10:38:00.313701   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined IP address 192.168.39.4 and MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:38:00.313849   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHPort
	I1104 10:38:00.313994   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHKeyPath
	I1104 10:38:00.314118   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHKeyPath
	I1104 10:38:00.314214   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHUsername
	I1104 10:38:00.314360   27967 main.go:141] libmachine: Using SSH client type: native
	I1104 10:38:00.314540   27967 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.4 22 <nil> <nil>}
	I1104 10:38:00.314552   27967 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1104 10:38:00.424313   27967 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1104 10:38:00.424335   27967 main.go:141] libmachine: Detecting the provisioner...
	I1104 10:38:00.424345   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHHostname
	I1104 10:38:00.426998   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:38:00.427330   27967 main.go:141] libmachine: (addons-746456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:d7:13", ip: ""} in network mk-addons-746456: {Iface:virbr1 ExpiryTime:2024-11-04 11:37:54 +0000 UTC Type:0 Mac:52:54:00:a0:d7:13 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:addons-746456 Clientid:01:52:54:00:a0:d7:13}
	I1104 10:38:00.427357   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined IP address 192.168.39.4 and MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:38:00.427572   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHPort
	I1104 10:38:00.427782   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHKeyPath
	I1104 10:38:00.427985   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHKeyPath
	I1104 10:38:00.428113   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHUsername
	I1104 10:38:00.428290   27967 main.go:141] libmachine: Using SSH client type: native
	I1104 10:38:00.428455   27967 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.4 22 <nil> <nil>}
	I1104 10:38:00.428466   27967 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1104 10:38:00.537913   27967 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1104 10:38:00.538003   27967 main.go:141] libmachine: found compatible host: buildroot
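Provisioner detection amounts to reading /etc/os-release (the "cat /etc/os-release" above) and matching the ID/NAME fields, which report "buildroot" on the minikube ISO. A minimal sketch of that parse, assuming a simple key=value format; this is not minikube's own helper.

    // detect_provisioner.go - a minimal sketch of the "detect the provisioner" step:
    // read /etc/os-release and pull out a field such as ID.
    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    func osReleaseField(path, key string) (string, error) {
        f, err := os.Open(path)
        if err != nil {
            return "", err
        }
        defer f.Close()
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            line := sc.Text()
            if strings.HasPrefix(line, key+"=") {
                return strings.Trim(strings.TrimPrefix(line, key+"="), `"`), nil
            }
        }
        return "", fmt.Errorf("%s not found in %s", key, path)
    }

    func main() {
        id, err := osReleaseField("/etc/os-release", "ID")
        if err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("detected provisioner:", id) // "buildroot" on the minikube ISO
    }
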
	I1104 10:38:00.538020   27967 main.go:141] libmachine: Provisioning with buildroot...
	I1104 10:38:00.538032   27967 main.go:141] libmachine: (addons-746456) Calling .GetMachineName
	I1104 10:38:00.538296   27967 buildroot.go:166] provisioning hostname "addons-746456"
	I1104 10:38:00.538320   27967 main.go:141] libmachine: (addons-746456) Calling .GetMachineName
	I1104 10:38:00.538519   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHHostname
	I1104 10:38:00.541142   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:38:00.541538   27967 main.go:141] libmachine: (addons-746456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:d7:13", ip: ""} in network mk-addons-746456: {Iface:virbr1 ExpiryTime:2024-11-04 11:37:54 +0000 UTC Type:0 Mac:52:54:00:a0:d7:13 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:addons-746456 Clientid:01:52:54:00:a0:d7:13}
	I1104 10:38:00.541564   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined IP address 192.168.39.4 and MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:38:00.541744   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHPort
	I1104 10:38:00.541923   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHKeyPath
	I1104 10:38:00.542061   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHKeyPath
	I1104 10:38:00.542190   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHUsername
	I1104 10:38:00.542349   27967 main.go:141] libmachine: Using SSH client type: native
	I1104 10:38:00.542511   27967 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.4 22 <nil> <nil>}
	I1104 10:38:00.542524   27967 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-746456 && echo "addons-746456" | sudo tee /etc/hostname
	I1104 10:38:00.665906   27967 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-746456
	
	I1104 10:38:00.665937   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHHostname
	I1104 10:38:00.668558   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:38:00.668858   27967 main.go:141] libmachine: (addons-746456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:d7:13", ip: ""} in network mk-addons-746456: {Iface:virbr1 ExpiryTime:2024-11-04 11:37:54 +0000 UTC Type:0 Mac:52:54:00:a0:d7:13 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:addons-746456 Clientid:01:52:54:00:a0:d7:13}
	I1104 10:38:00.668892   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined IP address 192.168.39.4 and MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:38:00.669014   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHPort
	I1104 10:38:00.669182   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHKeyPath
	I1104 10:38:00.669352   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHKeyPath
	I1104 10:38:00.669497   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHUsername
	I1104 10:38:00.669659   27967 main.go:141] libmachine: Using SSH client type: native
	I1104 10:38:00.669810   27967 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.4 22 <nil> <nil>}
	I1104 10:38:00.669826   27967 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-746456' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-746456/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-746456' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1104 10:38:00.789259   27967 main.go:141] libmachine: SSH cmd err, output: <nil>: 
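The shell snippet above makes sure a "127.0.1.1 addons-746456" entry exists in /etc/hosts, rewriting an existing 127.0.1.1 line or appending one. A small in-memory sketch of the same idempotent edit; the helper name is hypothetical and file I/O is left out.

    // ensure_hosts_entry.go - a minimal sketch of the /etc/hosts edit shown above.
    package main

    import (
        "fmt"
        "strings"
    )

    func ensureHostsEntry(hosts, hostname string) string {
        lines := strings.Split(hosts, "\n")
        for i, line := range lines {
            trimmed := strings.TrimSpace(line)
            if strings.HasSuffix(trimmed, " "+hostname) || strings.HasSuffix(trimmed, "\t"+hostname) {
                return hosts // an entry for this hostname already exists
            }
            if strings.HasPrefix(line, "127.0.1.1") {
                lines[i] = "127.0.1.1 " + hostname // replace the loopback alias line
                return strings.Join(lines, "\n")
            }
        }
        return hosts + "\n127.0.1.1 " + hostname + "\n" // no match: append
    }

    func main() {
        in := "127.0.0.1 localhost\n127.0.1.1 old-name\n"
        fmt.Print(ensureHostsEntry(in, "addons-746456"))
    }
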
	I1104 10:38:00.789290   27967 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19906-19898/.minikube CaCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19906-19898/.minikube}
	I1104 10:38:00.789330   27967 buildroot.go:174] setting up certificates
	I1104 10:38:00.789348   27967 provision.go:84] configureAuth start
	I1104 10:38:00.789361   27967 main.go:141] libmachine: (addons-746456) Calling .GetMachineName
	I1104 10:38:00.789622   27967 main.go:141] libmachine: (addons-746456) Calling .GetIP
	I1104 10:38:00.792365   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:38:00.792728   27967 main.go:141] libmachine: (addons-746456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:d7:13", ip: ""} in network mk-addons-746456: {Iface:virbr1 ExpiryTime:2024-11-04 11:37:54 +0000 UTC Type:0 Mac:52:54:00:a0:d7:13 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:addons-746456 Clientid:01:52:54:00:a0:d7:13}
	I1104 10:38:00.792755   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined IP address 192.168.39.4 and MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:38:00.792970   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHHostname
	I1104 10:38:00.795459   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:38:00.795802   27967 main.go:141] libmachine: (addons-746456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:d7:13", ip: ""} in network mk-addons-746456: {Iface:virbr1 ExpiryTime:2024-11-04 11:37:54 +0000 UTC Type:0 Mac:52:54:00:a0:d7:13 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:addons-746456 Clientid:01:52:54:00:a0:d7:13}
	I1104 10:38:00.795827   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined IP address 192.168.39.4 and MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:38:00.795978   27967 provision.go:143] copyHostCerts
	I1104 10:38:00.796062   27967 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem (1078 bytes)
	I1104 10:38:00.796199   27967 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem (1123 bytes)
	I1104 10:38:00.796283   27967 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem (1679 bytes)
	I1104 10:38:00.796388   27967 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem org=jenkins.addons-746456 san=[127.0.0.1 192.168.39.4 addons-746456 localhost minikube]
	I1104 10:38:00.877715   27967 provision.go:177] copyRemoteCerts
	I1104 10:38:00.877766   27967 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1104 10:38:00.877790   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHHostname
	I1104 10:38:00.880401   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:38:00.880765   27967 main.go:141] libmachine: (addons-746456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:d7:13", ip: ""} in network mk-addons-746456: {Iface:virbr1 ExpiryTime:2024-11-04 11:37:54 +0000 UTC Type:0 Mac:52:54:00:a0:d7:13 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:addons-746456 Clientid:01:52:54:00:a0:d7:13}
	I1104 10:38:00.880793   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined IP address 192.168.39.4 and MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:38:00.880952   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHPort
	I1104 10:38:00.881094   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHKeyPath
	I1104 10:38:00.881270   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHUsername
	I1104 10:38:00.881385   27967 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/addons-746456/id_rsa Username:docker}
	I1104 10:38:00.966856   27967 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1104 10:38:00.989191   27967 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1104 10:38:01.011071   27967 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1104 10:38:01.033496   27967 provision.go:87] duration metric: took 244.13703ms to configureAuth
	I1104 10:38:01.033525   27967 buildroot.go:189] setting minikube options for container-runtime
	I1104 10:38:01.033705   27967 config.go:182] Loaded profile config "addons-746456": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1104 10:38:01.033792   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHHostname
	I1104 10:38:01.036396   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:38:01.036749   27967 main.go:141] libmachine: (addons-746456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:d7:13", ip: ""} in network mk-addons-746456: {Iface:virbr1 ExpiryTime:2024-11-04 11:37:54 +0000 UTC Type:0 Mac:52:54:00:a0:d7:13 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:addons-746456 Clientid:01:52:54:00:a0:d7:13}
	I1104 10:38:01.036774   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined IP address 192.168.39.4 and MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:38:01.036943   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHPort
	I1104 10:38:01.037095   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHKeyPath
	I1104 10:38:01.037222   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHKeyPath
	I1104 10:38:01.037360   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHUsername
	I1104 10:38:01.037516   27967 main.go:141] libmachine: Using SSH client type: native
	I1104 10:38:01.037666   27967 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.4 22 <nil> <nil>}
	I1104 10:38:01.037680   27967 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1104 10:38:01.444556   27967 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
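The step above writes a CRIO_MINIKUBE_OPTIONS drop-in under /etc/sysconfig and restarts CRI-O so the insecure service CIDR takes effect. A minimal sketch that renders the same remote command string; the function name is illustrative.

    // crio_sysconfig.go - a minimal sketch of the container-runtime option step above:
    // render the /etc/sysconfig/crio.minikube drop-in plus the command that writes it
    // and restarts CRI-O, mirroring the log.
    package main

    import "fmt"

    func crioSysconfigCommand(insecureRegistry string) string {
        content := fmt.Sprintf("\nCRIO_MINIKUBE_OPTIONS='--insecure-registry %s '\n", insecureRegistry)
        return fmt.Sprintf(`sudo mkdir -p /etc/sysconfig && printf %%s "%s" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio`, content)
    }

    func main() {
        fmt.Println(crioSysconfigCommand("10.96.0.0/12"))
    }
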
	I1104 10:38:01.444581   27967 main.go:141] libmachine: Checking connection to Docker...
	I1104 10:38:01.444589   27967 main.go:141] libmachine: (addons-746456) Calling .GetURL
	I1104 10:38:01.445930   27967 main.go:141] libmachine: (addons-746456) DBG | Using libvirt version 6000000
	I1104 10:38:01.447878   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:38:01.448207   27967 main.go:141] libmachine: (addons-746456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:d7:13", ip: ""} in network mk-addons-746456: {Iface:virbr1 ExpiryTime:2024-11-04 11:37:54 +0000 UTC Type:0 Mac:52:54:00:a0:d7:13 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:addons-746456 Clientid:01:52:54:00:a0:d7:13}
	I1104 10:38:01.448237   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined IP address 192.168.39.4 and MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:38:01.448373   27967 main.go:141] libmachine: Docker is up and running!
	I1104 10:38:01.448387   27967 main.go:141] libmachine: Reticulating splines...
	I1104 10:38:01.448394   27967 client.go:171] duration metric: took 21.975483383s to LocalClient.Create
	I1104 10:38:01.448416   27967 start.go:167] duration metric: took 21.975565515s to libmachine.API.Create "addons-746456"
	I1104 10:38:01.448425   27967 start.go:293] postStartSetup for "addons-746456" (driver="kvm2")
	I1104 10:38:01.448444   27967 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1104 10:38:01.448459   27967 main.go:141] libmachine: (addons-746456) Calling .DriverName
	I1104 10:38:01.448722   27967 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1104 10:38:01.448750   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHHostname
	I1104 10:38:01.450692   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:38:01.450971   27967 main.go:141] libmachine: (addons-746456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:d7:13", ip: ""} in network mk-addons-746456: {Iface:virbr1 ExpiryTime:2024-11-04 11:37:54 +0000 UTC Type:0 Mac:52:54:00:a0:d7:13 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:addons-746456 Clientid:01:52:54:00:a0:d7:13}
	I1104 10:38:01.450991   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined IP address 192.168.39.4 and MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:38:01.451136   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHPort
	I1104 10:38:01.451290   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHKeyPath
	I1104 10:38:01.451390   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHUsername
	I1104 10:38:01.451490   27967 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/addons-746456/id_rsa Username:docker}
	I1104 10:38:01.535037   27967 ssh_runner.go:195] Run: cat /etc/os-release
	I1104 10:38:01.539157   27967 info.go:137] Remote host: Buildroot 2023.02.9
	I1104 10:38:01.539184   27967 filesync.go:126] Scanning /home/jenkins/minikube-integration/19906-19898/.minikube/addons for local assets ...
	I1104 10:38:01.539260   27967 filesync.go:126] Scanning /home/jenkins/minikube-integration/19906-19898/.minikube/files for local assets ...
	I1104 10:38:01.539289   27967 start.go:296] duration metric: took 90.850997ms for postStartSetup
	I1104 10:38:01.539327   27967 main.go:141] libmachine: (addons-746456) Calling .GetConfigRaw
	I1104 10:38:01.539870   27967 main.go:141] libmachine: (addons-746456) Calling .GetIP
	I1104 10:38:01.542539   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:38:01.542833   27967 main.go:141] libmachine: (addons-746456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:d7:13", ip: ""} in network mk-addons-746456: {Iface:virbr1 ExpiryTime:2024-11-04 11:37:54 +0000 UTC Type:0 Mac:52:54:00:a0:d7:13 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:addons-746456 Clientid:01:52:54:00:a0:d7:13}
	I1104 10:38:01.542857   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined IP address 192.168.39.4 and MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:38:01.543087   27967 profile.go:143] Saving config to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/addons-746456/config.json ...
	I1104 10:38:01.543252   27967 start.go:128] duration metric: took 22.088404679s to createHost
	I1104 10:38:01.543274   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHHostname
	I1104 10:38:01.545474   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:38:01.545712   27967 main.go:141] libmachine: (addons-746456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:d7:13", ip: ""} in network mk-addons-746456: {Iface:virbr1 ExpiryTime:2024-11-04 11:37:54 +0000 UTC Type:0 Mac:52:54:00:a0:d7:13 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:addons-746456 Clientid:01:52:54:00:a0:d7:13}
	I1104 10:38:01.545747   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined IP address 192.168.39.4 and MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:38:01.545854   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHPort
	I1104 10:38:01.546025   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHKeyPath
	I1104 10:38:01.546127   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHKeyPath
	I1104 10:38:01.546238   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHUsername
	I1104 10:38:01.546374   27967 main.go:141] libmachine: Using SSH client type: native
	I1104 10:38:01.546525   27967 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.4 22 <nil> <nil>}
	I1104 10:38:01.546545   27967 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1104 10:38:01.653337   27967 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730716681.627964793
	
	I1104 10:38:01.653364   27967 fix.go:216] guest clock: 1730716681.627964793
	I1104 10:38:01.653374   27967 fix.go:229] Guest: 2024-11-04 10:38:01.627964793 +0000 UTC Remote: 2024-11-04 10:38:01.543264431 +0000 UTC m=+22.193535591 (delta=84.700362ms)
	I1104 10:38:01.653439   27967 fix.go:200] guest clock delta is within tolerance: 84.700362ms
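The clock check parses the guest's "date +%s.%N" output and compares it with the host clock; here the drift was roughly 85ms, well inside tolerance. A rough sketch of that comparison follows; the 2s tolerance is an assumed value for illustration, not minikube's exact threshold, and float parsing loses some nanosecond precision.

    // clock_delta.go - a minimal sketch of the guest-clock check above.
    package main

    import (
        "fmt"
        "strconv"
        "time"
    )

    func parseGuestClock(out string) (time.Time, error) {
        secs, err := strconv.ParseFloat(out, 64)
        if err != nil {
            return time.Time{}, err
        }
        sec := int64(secs)
        nsec := int64((secs - float64(sec)) * 1e9)
        return time.Unix(sec, nsec), nil
    }

    func main() {
        guest, err := parseGuestClock("1730716681.627964793") // value from the log
        if err != nil {
            fmt.Println(err)
            return
        }
        delta := time.Since(guest)
        const tolerance = 2 * time.Second // illustrative threshold
        if delta < -tolerance || delta > tolerance {
            fmt.Printf("guest clock drift %v exceeds tolerance, would resync\n", delta)
        } else {
            fmt.Printf("guest clock delta %v is within tolerance\n", delta)
        }
    }
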
	I1104 10:38:01.653446   27967 start.go:83] releasing machines lock for "addons-746456", held for 22.198667431s
	I1104 10:38:01.653477   27967 main.go:141] libmachine: (addons-746456) Calling .DriverName
	I1104 10:38:01.653741   27967 main.go:141] libmachine: (addons-746456) Calling .GetIP
	I1104 10:38:01.656183   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:38:01.656615   27967 main.go:141] libmachine: (addons-746456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:d7:13", ip: ""} in network mk-addons-746456: {Iface:virbr1 ExpiryTime:2024-11-04 11:37:54 +0000 UTC Type:0 Mac:52:54:00:a0:d7:13 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:addons-746456 Clientid:01:52:54:00:a0:d7:13}
	I1104 10:38:01.656633   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined IP address 192.168.39.4 and MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:38:01.656822   27967 main.go:141] libmachine: (addons-746456) Calling .DriverName
	I1104 10:38:01.657265   27967 main.go:141] libmachine: (addons-746456) Calling .DriverName
	I1104 10:38:01.657436   27967 main.go:141] libmachine: (addons-746456) Calling .DriverName
	I1104 10:38:01.657529   27967 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1104 10:38:01.657574   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHHostname
	I1104 10:38:01.657632   27967 ssh_runner.go:195] Run: cat /version.json
	I1104 10:38:01.657657   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHHostname
	I1104 10:38:01.659910   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:38:01.660194   27967 main.go:141] libmachine: (addons-746456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:d7:13", ip: ""} in network mk-addons-746456: {Iface:virbr1 ExpiryTime:2024-11-04 11:37:54 +0000 UTC Type:0 Mac:52:54:00:a0:d7:13 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:addons-746456 Clientid:01:52:54:00:a0:d7:13}
	I1104 10:38:01.660230   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined IP address 192.168.39.4 and MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:38:01.660387   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHPort
	I1104 10:38:01.660390   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:38:01.660566   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHKeyPath
	I1104 10:38:01.660699   27967 main.go:141] libmachine: (addons-746456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:d7:13", ip: ""} in network mk-addons-746456: {Iface:virbr1 ExpiryTime:2024-11-04 11:37:54 +0000 UTC Type:0 Mac:52:54:00:a0:d7:13 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:addons-746456 Clientid:01:52:54:00:a0:d7:13}
	I1104 10:38:01.660717   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHUsername
	I1104 10:38:01.660731   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined IP address 192.168.39.4 and MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:38:01.660869   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHPort
	I1104 10:38:01.660865   27967 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/addons-746456/id_rsa Username:docker}
	I1104 10:38:01.661015   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHKeyPath
	I1104 10:38:01.661136   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHUsername
	I1104 10:38:01.661324   27967 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/addons-746456/id_rsa Username:docker}
	I1104 10:38:01.737576   27967 ssh_runner.go:195] Run: systemctl --version
	I1104 10:38:01.763213   27967 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1104 10:38:01.921943   27967 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1104 10:38:01.927445   27967 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1104 10:38:01.927516   27967 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1104 10:38:01.941997   27967 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1104 10:38:01.942023   27967 start.go:495] detecting cgroup driver to use...
	I1104 10:38:01.942090   27967 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1104 10:38:01.956679   27967 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1104 10:38:01.969679   27967 docker.go:217] disabling cri-docker service (if available) ...
	I1104 10:38:01.969736   27967 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1104 10:38:01.982626   27967 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1104 10:38:01.995194   27967 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1104 10:38:02.112459   27967 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1104 10:38:02.251760   27967 docker.go:233] disabling docker service ...
	I1104 10:38:02.251838   27967 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1104 10:38:02.265112   27967 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1104 10:38:02.277265   27967 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1104 10:38:02.420894   27967 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1104 10:38:02.543082   27967 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1104 10:38:02.556733   27967 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1104 10:38:02.574799   27967 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1104 10:38:02.574857   27967 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 10:38:02.584477   27967 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1104 10:38:02.584546   27967 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 10:38:02.594273   27967 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 10:38:02.603748   27967 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 10:38:02.612996   27967 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1104 10:38:02.622244   27967 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 10:38:02.631654   27967 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 10:38:02.647004   27967 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 10:38:02.656322   27967 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1104 10:38:02.664802   27967 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1104 10:38:02.664859   27967 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1104 10:38:02.675911   27967 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
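The sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf with sed: pin the pause image, switch cgroup_manager to cgroupfs, set conmon_cgroup, allow unprivileged low ports, and fall back to loading br_netfilter when the bridge-nf sysctl is absent. A minimal in-memory sketch of the two key substitutions (only a subset of the edits shown, applied to a string rather than the file):

    // crio_conf_edit.go - a minimal sketch of the CRI-O config edits above.
    package main

    import (
        "fmt"
        "regexp"
    )

    func patchCrioConf(conf, pauseImage, cgroupManager string) string {
        pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
        cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
        conf = pause.ReplaceAllString(conf, fmt.Sprintf("pause_image = %q", pauseImage))
        conf = cgroup.ReplaceAllString(conf, fmt.Sprintf("cgroup_manager = %q", cgroupManager))
        return conf
    }

    func main() {
        in := "[crio.image]\npause_image = \"registry.k8s.io/pause:3.9\"\n[crio.runtime]\ncgroup_manager = \"systemd\"\n"
        fmt.Print(patchCrioConf(in, "registry.k8s.io/pause:3.10", "cgroupfs"))
    }
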
	I1104 10:38:02.684891   27967 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1104 10:38:02.804404   27967 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1104 10:38:02.886732   27967 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1104 10:38:02.886811   27967 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1104 10:38:02.890976   27967 start.go:563] Will wait 60s for crictl version
	I1104 10:38:02.891042   27967 ssh_runner.go:195] Run: which crictl
	I1104 10:38:02.894408   27967 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1104 10:38:02.926682   27967 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1104 10:38:02.926793   27967 ssh_runner.go:195] Run: crio --version
	I1104 10:38:02.951789   27967 ssh_runner.go:195] Run: crio --version
	I1104 10:38:02.979627   27967 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1104 10:38:02.980809   27967 main.go:141] libmachine: (addons-746456) Calling .GetIP
	I1104 10:38:02.984143   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:38:02.984516   27967 main.go:141] libmachine: (addons-746456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:d7:13", ip: ""} in network mk-addons-746456: {Iface:virbr1 ExpiryTime:2024-11-04 11:37:54 +0000 UTC Type:0 Mac:52:54:00:a0:d7:13 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:addons-746456 Clientid:01:52:54:00:a0:d7:13}
	I1104 10:38:02.984546   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined IP address 192.168.39.4 and MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:38:02.984700   27967 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1104 10:38:02.988379   27967 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1104 10:38:02.999730   27967 kubeadm.go:883] updating cluster {Name:addons-746456 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-746456 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.4 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
placeholder
	I1104 10:38:02.999851   27967 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1104 10:38:02.999906   27967 ssh_runner.go:195] Run: sudo crictl images --output json
	I1104 10:38:03.028235   27967 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1104 10:38:03.028296   27967 ssh_runner.go:195] Run: which lz4
	I1104 10:38:03.031786   27967 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1104 10:38:03.035391   27967 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1104 10:38:03.035432   27967 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1104 10:38:04.112774   27967 crio.go:462] duration metric: took 1.081023392s to copy over tarball
	I1104 10:38:04.112837   27967 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1104 10:38:06.183806   27967 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.070941022s)
	I1104 10:38:06.183836   27967 crio.go:469] duration metric: took 2.07103873s to extract the tarball
	I1104 10:38:06.183846   27967 ssh_runner.go:146] rm: /preloaded.tar.lz4
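Because no images were preloaded, the cached preload tarball is copied to the VM and unpacked into /var with tar's lz4 filter, preserving xattrs. A minimal local sketch of that check-then-extract step; paths are the ones in the log and the copy itself is omitted.

    // preload_extract.go - a minimal sketch of the preload step above.
    package main

    import (
        "errors"
        "fmt"
        "os"
        "os/exec"
    )

    func extractPreload(tarball string) error {
        if _, err := os.Stat(tarball); errors.Is(err, os.ErrNotExist) {
            return fmt.Errorf("%s not present; copy the cached preload tarball first", tarball)
        }
        // same invocation as the log: unpack with the lz4 filter, keeping xattrs
        cmd := exec.Command("sudo", "tar",
            "--xattrs", "--xattrs-include", "security.capability",
            "-I", "lz4", "-C", "/var", "-xf", tarball)
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        return cmd.Run()
    }

    func main() {
        if err := extractPreload("/preloaded.tar.lz4"); err != nil {
            fmt.Println(err)
        }
    }
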
	I1104 10:38:06.219839   27967 ssh_runner.go:195] Run: sudo crictl images --output json
	I1104 10:38:06.260150   27967 crio.go:514] all images are preloaded for cri-o runtime.
	I1104 10:38:06.260177   27967 cache_images.go:84] Images are preloaded, skipping loading
	I1104 10:38:06.260184   27967 kubeadm.go:934] updating node { 192.168.39.4 8443 v1.31.2 crio true true} ...
	I1104 10:38:06.260308   27967 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-746456 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:addons-746456 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
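The kubelet drop-in above is generated from the node's name, IP and Kubernetes version. A small sketch that renders an equivalent unit with text/template follows; the template mirrors the logged ExecStart line but is not minikube's actual template.

    // kubelet_unit.go - a minimal sketch rendering a kubelet drop-in from node parameters.
    package main

    import (
        "os"
        "text/template"
    )

    const unit = `[Unit]
    Wants=crio.service

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

    [Install]
    `

    func main() {
        t := template.Must(template.New("kubelet").Parse(unit))
        _ = t.Execute(os.Stdout, map[string]string{
            "KubernetesVersion": "v1.31.2",
            "NodeName":          "addons-746456",
            "NodeIP":            "192.168.39.4",
        })
    }
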
	I1104 10:38:06.260398   27967 ssh_runner.go:195] Run: crio config
	I1104 10:38:06.304511   27967 cni.go:84] Creating CNI manager for ""
	I1104 10:38:06.304535   27967 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1104 10:38:06.304545   27967 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1104 10:38:06.304571   27967 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.4 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-746456 NodeName:addons-746456 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.4"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.4 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1104 10:38:06.304715   27967 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.4
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-746456"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.4"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.4"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1104 10:38:06.304788   27967 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1104 10:38:06.314319   27967 binaries.go:44] Found k8s binaries, skipping transfer
	I1104 10:38:06.314382   27967 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1104 10:38:06.323358   27967 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I1104 10:38:06.338806   27967 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1104 10:38:06.353567   27967 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2287 bytes)
	I1104 10:38:06.368096   27967 ssh_runner.go:195] Run: grep 192.168.39.4	control-plane.minikube.internal$ /etc/hosts
	I1104 10:38:06.371617   27967 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.4	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1104 10:38:06.382524   27967 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1104 10:38:06.508109   27967 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1104 10:38:06.524650   27967 certs.go:68] Setting up /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/addons-746456 for IP: 192.168.39.4
	I1104 10:38:06.524676   27967 certs.go:194] generating shared ca certs ...
	I1104 10:38:06.524696   27967 certs.go:226] acquiring lock for ca certs: {Name:mk4fb0469da6697654d0290fd25edddd463f47c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 10:38:06.524856   27967 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.key
	I1104 10:38:06.648082   27967 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt ...
	I1104 10:38:06.648110   27967 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt: {Name:mkc60cfcc3a05532b876cd4acbbfca8a1c8c1878 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 10:38:06.648268   27967 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19906-19898/.minikube/ca.key ...
	I1104 10:38:06.648279   27967 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19906-19898/.minikube/ca.key: {Name:mk3ec4fc3b2268fe8854a1415b7cf1496b552554 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 10:38:06.648352   27967 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.key
	I1104 10:38:06.718168   27967 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.crt ...
	I1104 10:38:06.718198   27967 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.crt: {Name:mke06fb1e1d2874e54d58c110876e45ff172f549 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 10:38:06.718339   27967 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.key ...
	I1104 10:38:06.718348   27967 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.key: {Name:mk2554b1aa340d8e1073dbc7bb4aee16976c2f8b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 10:38:06.718411   27967 certs.go:256] generating profile certs ...
	I1104 10:38:06.718458   27967 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/addons-746456/client.key
	I1104 10:38:06.718471   27967 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/addons-746456/client.crt with IP's: []
	I1104 10:38:07.014113   27967 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/addons-746456/client.crt ...
	I1104 10:38:07.014162   27967 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/addons-746456/client.crt: {Name:mk2dbf6749598cb60b7601bf42ced4198096dc20 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 10:38:07.014361   27967 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/addons-746456/client.key ...
	I1104 10:38:07.014393   27967 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/addons-746456/client.key: {Name:mkdb90e1f72b7bf0594540208f4780ec280e3769 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 10:38:07.014555   27967 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/addons-746456/apiserver.key.40dc9019
	I1104 10:38:07.014598   27967 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/addons-746456/apiserver.crt.40dc9019 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.4]
	I1104 10:38:07.178824   27967 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/addons-746456/apiserver.crt.40dc9019 ...
	I1104 10:38:07.178855   27967 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/addons-746456/apiserver.crt.40dc9019: {Name:mk9e26a02ded78b5d0e82a92927b64b299da376d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 10:38:07.179038   27967 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/addons-746456/apiserver.key.40dc9019 ...
	I1104 10:38:07.179055   27967 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/addons-746456/apiserver.key.40dc9019: {Name:mk52651da068c7b40180a70d72cffe2b6bf68fdf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 10:38:07.179161   27967 certs.go:381] copying /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/addons-746456/apiserver.crt.40dc9019 -> /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/addons-746456/apiserver.crt
	I1104 10:38:07.179255   27967 certs.go:385] copying /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/addons-746456/apiserver.key.40dc9019 -> /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/addons-746456/apiserver.key
	I1104 10:38:07.179305   27967 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/addons-746456/proxy-client.key
	I1104 10:38:07.179322   27967 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/addons-746456/proxy-client.crt with IP's: []
	I1104 10:38:07.540538   27967 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/addons-746456/proxy-client.crt ...
	I1104 10:38:07.540570   27967 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/addons-746456/proxy-client.crt: {Name:mk6aa7552ca33368f073a98292a8c7aa53f742b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 10:38:07.540754   27967 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/addons-746456/proxy-client.key ...
	I1104 10:38:07.540768   27967 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/addons-746456/proxy-client.key: {Name:mk8094241709662feadffcd36b5b489ca95631e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 10:38:07.540962   27967 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem (1675 bytes)
	I1104 10:38:07.541000   27967 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem (1078 bytes)
	I1104 10:38:07.541021   27967 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem (1123 bytes)
	I1104 10:38:07.541040   27967 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem (1679 bytes)
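The certs step generates a shared minikubeCA (plus a proxy-client CA and per-profile apiserver/client certs) before anything is copied to the VM. Below is a compact sketch of creating a self-signed CA with Go's crypto/x509; it illustrates the idea rather than minikube's own crypto helpers, and the validity period and key size are assumed values.

    // selfsigned_ca.go - a minimal sketch of the "generating ca cert" step above.
    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "fmt"
        "math/big"
        "time"
    )

    func newCA(commonName string) (certPEM, keyPEM []byte, err error) {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return nil, nil, err
        }
        tmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: commonName},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(10 * 365 * 24 * time.Hour),
            KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
            BasicConstraintsValid: true,
            IsCA:                  true,
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            return nil, nil, err
        }
        certPEM = pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
        keyPEM = pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
        return certPEM, keyPEM, nil
    }

    func main() {
        cert, _, err := newCA("minikubeCA")
        if err != nil {
            fmt.Println(err)
            return
        }
        fmt.Printf("generated CA cert (%d bytes of PEM)\n", len(cert))
    }
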
	I1104 10:38:07.541620   27967 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1104 10:38:07.566108   27967 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1104 10:38:07.588247   27967 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1104 10:38:07.609550   27967 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1104 10:38:07.631722   27967 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/addons-746456/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1104 10:38:07.653441   27967 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/addons-746456/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1104 10:38:07.674061   27967 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/addons-746456/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1104 10:38:07.694539   27967 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/addons-746456/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1104 10:38:07.715805   27967 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1104 10:38:07.737173   27967 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1104 10:38:07.752395   27967 ssh_runner.go:195] Run: openssl version
	I1104 10:38:07.757975   27967 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1104 10:38:07.768235   27967 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1104 10:38:07.772516   27967 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  4 10:38 /usr/share/ca-certificates/minikubeCA.pem
	I1104 10:38:07.772579   27967 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1104 10:38:07.777968   27967 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
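Installing the CA into the system trust store is done by linking it into /usr/share/ca-certificates and then pointing /etc/ssl/certs/<subject-hash>.0 at it, which is what the "openssl x509 -hash" and "ln -fs" pair above accomplishes. A minimal sketch of that step; it assumes openssl is on PATH and uses the paths from the log.

    // ca_hash_link.go - a minimal sketch of the trust-store link step above.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func linkCAByHash(certPath, certsDir string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out))
        link := filepath.Join(certsDir, hash+".0")
        _ = os.Remove(link) // replace an existing link, mirroring ln -fs
        return os.Symlink(certPath, link)
    }

    func main() {
        if err := linkCAByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
            fmt.Println(err)
        }
    }
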
	I1104 10:38:07.787877   27967 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1104 10:38:07.791626   27967 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1104 10:38:07.791680   27967 kubeadm.go:392] StartCluster: {Name:addons-746456 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-746456 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.4 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1104 10:38:07.791768   27967 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1104 10:38:07.791808   27967 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1104 10:38:07.824226   27967 cri.go:89] found id: ""
	I1104 10:38:07.824299   27967 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1104 10:38:07.833610   27967 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1104 10:38:07.846003   27967 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1104 10:38:07.858921   27967 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1104 10:38:07.858946   27967 kubeadm.go:157] found existing configuration files:
	
	I1104 10:38:07.858995   27967 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1104 10:38:07.869572   27967 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1104 10:38:07.869628   27967 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1104 10:38:07.880432   27967 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1104 10:38:07.889302   27967 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1104 10:38:07.889368   27967 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1104 10:38:07.898007   27967 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1104 10:38:07.906107   27967 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1104 10:38:07.906156   27967 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1104 10:38:07.914542   27967 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1104 10:38:07.922627   27967 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1104 10:38:07.922682   27967 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
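The sequence above is minikube's stale-config check: each expected kubeconfig under /etc/kubernetes is grepped for the control-plane endpoint and removed when the endpoint is not found (here every file is simply missing, so the rm calls are no-ops and kubeadm init proceeds on a clean node). A condensed sketch of the same pattern, not minikube's actual implementation:

  for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
    sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
      || sudo rm -f "/etc/kubernetes/$f"
  done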
	I1104 10:38:07.933698   27967 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1104 10:38:08.106609   27967 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1104 10:38:17.668667   27967 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1104 10:38:17.668742   27967 kubeadm.go:310] [preflight] Running pre-flight checks
	I1104 10:38:17.668854   27967 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1104 10:38:17.668981   27967 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1104 10:38:17.669118   27967 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1104 10:38:17.669209   27967 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1104 10:38:17.670831   27967 out.go:235]   - Generating certificates and keys ...
	I1104 10:38:17.670938   27967 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1104 10:38:17.671032   27967 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1104 10:38:17.671139   27967 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1104 10:38:17.671235   27967 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1104 10:38:17.671321   27967 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1104 10:38:17.671402   27967 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1104 10:38:17.671511   27967 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1104 10:38:17.671674   27967 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-746456 localhost] and IPs [192.168.39.4 127.0.0.1 ::1]
	I1104 10:38:17.671749   27967 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1104 10:38:17.671905   27967 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-746456 localhost] and IPs [192.168.39.4 127.0.0.1 ::1]
	I1104 10:38:17.671993   27967 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1104 10:38:17.672093   27967 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1104 10:38:17.672185   27967 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1104 10:38:17.672276   27967 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1104 10:38:17.672355   27967 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1104 10:38:17.672434   27967 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1104 10:38:17.672520   27967 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1104 10:38:17.672582   27967 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1104 10:38:17.672635   27967 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1104 10:38:17.672707   27967 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1104 10:38:17.672766   27967 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1104 10:38:17.674272   27967 out.go:235]   - Booting up control plane ...
	I1104 10:38:17.674364   27967 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1104 10:38:17.674440   27967 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1104 10:38:17.674542   27967 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1104 10:38:17.674699   27967 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1104 10:38:17.674874   27967 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1104 10:38:17.674945   27967 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1104 10:38:17.675091   27967 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1104 10:38:17.675257   27967 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1104 10:38:17.675368   27967 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001782742s
	I1104 10:38:17.675465   27967 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1104 10:38:17.675542   27967 kubeadm.go:310] [api-check] The API server is healthy after 5.002028453s
	I1104 10:38:17.675680   27967 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1104 10:38:17.675807   27967 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1104 10:38:17.675878   27967 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1104 10:38:17.676105   27967 kubeadm.go:310] [mark-control-plane] Marking the node addons-746456 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1104 10:38:17.676183   27967 kubeadm.go:310] [bootstrap-token] Using token: hati8t.k5vc0b0z4h6bkmvm
	I1104 10:38:17.678284   27967 out.go:235]   - Configuring RBAC rules ...
	I1104 10:38:17.678410   27967 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1104 10:38:17.678508   27967 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1104 10:38:17.678721   27967 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1104 10:38:17.678881   27967 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1104 10:38:17.679000   27967 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1104 10:38:17.679143   27967 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1104 10:38:17.679303   27967 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1104 10:38:17.679364   27967 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1104 10:38:17.679428   27967 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1104 10:38:17.679437   27967 kubeadm.go:310] 
	I1104 10:38:17.679511   27967 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1104 10:38:17.679520   27967 kubeadm.go:310] 
	I1104 10:38:17.679626   27967 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1104 10:38:17.679635   27967 kubeadm.go:310] 
	I1104 10:38:17.679657   27967 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1104 10:38:17.679707   27967 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1104 10:38:17.679783   27967 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1104 10:38:17.679796   27967 kubeadm.go:310] 
	I1104 10:38:17.679874   27967 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1104 10:38:17.679887   27967 kubeadm.go:310] 
	I1104 10:38:17.679957   27967 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1104 10:38:17.679969   27967 kubeadm.go:310] 
	I1104 10:38:17.680044   27967 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1104 10:38:17.680148   27967 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1104 10:38:17.680250   27967 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1104 10:38:17.680265   27967 kubeadm.go:310] 
	I1104 10:38:17.680384   27967 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1104 10:38:17.680506   27967 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1104 10:38:17.680519   27967 kubeadm.go:310] 
	I1104 10:38:17.680627   27967 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token hati8t.k5vc0b0z4h6bkmvm \
	I1104 10:38:17.680781   27967 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:95833ff41306ac800b975e28f2da3aedc13f83db6865dfb0ce3f10d781d75fd9 \
	I1104 10:38:17.680819   27967 kubeadm.go:310] 	--control-plane 
	I1104 10:38:17.680829   27967 kubeadm.go:310] 
	I1104 10:38:17.680933   27967 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1104 10:38:17.680944   27967 kubeadm.go:310] 
	I1104 10:38:17.681057   27967 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token hati8t.k5vc0b0z4h6bkmvm \
	I1104 10:38:17.681184   27967 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:95833ff41306ac800b975e28f2da3aedc13f83db6865dfb0ce3f10d781d75fd9 
	I1104 10:38:17.681196   27967 cni.go:84] Creating CNI manager for ""
	I1104 10:38:17.681203   27967 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1104 10:38:17.683636   27967 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1104 10:38:17.684930   27967 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1104 10:38:17.697356   27967 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
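The 496-byte file pushed to /etc/cni/net.d/1-k8s.conflist is minikube's bridge CNI configuration. Its exact contents are not captured in this log; the snippet below is only an illustrative conflist of the usual shape (bridge plugin with host-local IPAM plus portmap), not the verbatim file written above:

  {
    "cniVersion": "0.3.1",
    "name": "bridge",
    "plugins": [
      { "type": "bridge", "bridge": "bridge", "isDefaultGateway": true, "ipMasq": true,
        "hairpinMode": true, "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
      { "type": "portmap", "capabilities": { "portMappings": true } }
    ]
  }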
	I1104 10:38:17.716046   27967 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1104 10:38:17.716195   27967 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1104 10:38:17.716224   27967 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-746456 minikube.k8s.io/updated_at=2024_11_04T10_38_17_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=c03dd974d73f8853a1a57928c124797a5ae24dc4 minikube.k8s.io/name=addons-746456 minikube.k8s.io/primary=true
	I1104 10:38:17.838363   27967 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1104 10:38:17.838366   27967 ops.go:34] apiserver oom_adj: -16
	I1104 10:38:18.339033   27967 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1104 10:38:18.839016   27967 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1104 10:38:19.339277   27967 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1104 10:38:19.839188   27967 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1104 10:38:20.339076   27967 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1104 10:38:20.839029   27967 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1104 10:38:21.338431   27967 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1104 10:38:21.838926   27967 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1104 10:38:21.914196   27967 kubeadm.go:1113] duration metric: took 4.198038732s to wait for elevateKubeSystemPrivileges
	I1104 10:38:21.914239   27967 kubeadm.go:394] duration metric: took 14.122562515s to StartCluster
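The repeated "kubectl get sa default" calls above are the elevateKubeSystemPrivileges step: after creating the minikube-rbac clusterrolebinding (kube-system:default bound to cluster-admin), minikube polls until the default service account exists, which here took about 4.2s of the 14.1s StartCluster total. A rough shell equivalent of that wait, for illustration only:

  KUBECTL="sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig"
  until $KUBECTL get sa default >/dev/null 2>&1; do sleep 0.5; done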
	I1104 10:38:21.914261   27967 settings.go:142] acquiring lock: {Name:mk5550833c1f1a4ab4fbb2bf42ff8bd7f6341220 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 10:38:21.914409   27967 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19906-19898/kubeconfig
	I1104 10:38:21.914766   27967 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19906-19898/kubeconfig: {Name:mk164657c178e6abe086acb5c1cadb969cb2b0cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 10:38:21.914950   27967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1104 10:38:21.914976   27967 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.4 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1104 10:38:21.915030   27967 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
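The toEnable map above lists which addons this run turns on (ingress, ingress-dns, metrics-server, registry, csi-hostpath-driver, yakd, volcano, volumesnapshots and others) and which stay off. Outside the test harness the same toggles would normally be driven through the minikube CLI, for example:

  minikube addons enable metrics-server -p addons-746456
  minikube addons enable ingress -p addons-746456
  minikube addons list -p addons-746456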
	I1104 10:38:21.915167   27967 addons.go:69] Setting yakd=true in profile "addons-746456"
	I1104 10:38:21.915173   27967 config.go:182] Loaded profile config "addons-746456": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1104 10:38:21.915179   27967 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-746456"
	I1104 10:38:21.915199   27967 addons.go:234] Setting addon amd-gpu-device-plugin=true in "addons-746456"
	I1104 10:38:21.915199   27967 addons.go:69] Setting cloud-spanner=true in profile "addons-746456"
	I1104 10:38:21.915209   27967 addons.go:69] Setting gcp-auth=true in profile "addons-746456"
	I1104 10:38:21.915219   27967 addons.go:234] Setting addon cloud-spanner=true in "addons-746456"
	I1104 10:38:21.915235   27967 addons.go:69] Setting volcano=true in profile "addons-746456"
	I1104 10:38:21.915200   27967 addons.go:234] Setting addon yakd=true in "addons-746456"
	I1104 10:38:21.915245   27967 addons.go:69] Setting storage-provisioner=true in profile "addons-746456"
	I1104 10:38:21.915241   27967 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-746456"
	I1104 10:38:21.915252   27967 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-746456"
	I1104 10:38:21.915256   27967 host.go:66] Checking if "addons-746456" exists ...
	I1104 10:38:21.915260   27967 addons.go:234] Setting addon storage-provisioner=true in "addons-746456"
	I1104 10:38:21.915262   27967 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-746456"
	I1104 10:38:21.915263   27967 host.go:66] Checking if "addons-746456" exists ...
	I1104 10:38:21.915258   27967 addons.go:69] Setting volumesnapshots=true in profile "addons-746456"
	I1104 10:38:21.915282   27967 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-746456"
	I1104 10:38:21.915286   27967 addons.go:234] Setting addon volumesnapshots=true in "addons-746456"
	I1104 10:38:21.915288   27967 host.go:66] Checking if "addons-746456" exists ...
	I1104 10:38:21.915303   27967 host.go:66] Checking if "addons-746456" exists ...
	I1104 10:38:21.915315   27967 host.go:66] Checking if "addons-746456" exists ...
	I1104 10:38:21.915225   27967 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-746456"
	I1104 10:38:21.915237   27967 host.go:66] Checking if "addons-746456" exists ...
	I1104 10:38:21.915345   27967 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-746456"
	I1104 10:38:21.915556   27967 addons.go:69] Setting inspektor-gadget=true in profile "addons-746456"
	I1104 10:38:21.915573   27967 addons.go:234] Setting addon inspektor-gadget=true in "addons-746456"
	I1104 10:38:21.915614   27967 host.go:66] Checking if "addons-746456" exists ...
	I1104 10:38:21.915170   27967 addons.go:69] Setting default-storageclass=true in profile "addons-746456"
	I1104 10:38:21.915715   27967 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-746456"
	I1104 10:38:21.915731   27967 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 10:38:21.915730   27967 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 10:38:21.915740   27967 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 10:38:21.915740   27967 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 10:38:21.915731   27967 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 10:38:21.915746   27967 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 10:38:21.915330   27967 host.go:66] Checking if "addons-746456" exists ...
	I1104 10:38:21.915755   27967 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 10:38:21.915758   27967 addons.go:69] Setting ingress-dns=true in profile "addons-746456"
	I1104 10:38:21.915761   27967 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 10:38:21.915247   27967 addons.go:234] Setting addon volcano=true in "addons-746456"
	I1104 10:38:21.915770   27967 addons.go:234] Setting addon ingress-dns=true in "addons-746456"
	I1104 10:38:21.915779   27967 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 10:38:21.915789   27967 host.go:66] Checking if "addons-746456" exists ...
	I1104 10:38:21.915800   27967 host.go:66] Checking if "addons-746456" exists ...
	I1104 10:38:21.915826   27967 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 10:38:21.916064   27967 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 10:38:21.916074   27967 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 10:38:21.916083   27967 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 10:38:21.916098   27967 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 10:38:21.916106   27967 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 10:38:21.916125   27967 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 10:38:21.916133   27967 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 10:38:21.916140   27967 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 10:38:21.915235   27967 addons.go:69] Setting registry=true in profile "addons-746456"
	I1104 10:38:21.915761   27967 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 10:38:21.916163   27967 addons.go:234] Setting addon registry=true in "addons-746456"
	I1104 10:38:21.915746   27967 addons.go:69] Setting ingress=true in profile "addons-746456"
	I1104 10:38:21.916170   27967 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 10:38:21.916177   27967 addons.go:234] Setting addon ingress=true in "addons-746456"
	I1104 10:38:21.915227   27967 mustload.go:65] Loading cluster: addons-746456
	I1104 10:38:21.916184   27967 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 10:38:21.916186   27967 addons.go:69] Setting metrics-server=true in profile "addons-746456"
	I1104 10:38:21.916197   27967 addons.go:234] Setting addon metrics-server=true in "addons-746456"
	I1104 10:38:21.916210   27967 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 10:38:21.916223   27967 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 10:38:21.916328   27967 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 10:38:21.916488   27967 host.go:66] Checking if "addons-746456" exists ...
	I1104 10:38:21.916563   27967 host.go:66] Checking if "addons-746456" exists ...
	I1104 10:38:21.916743   27967 host.go:66] Checking if "addons-746456" exists ...
	I1104 10:38:21.916898   27967 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 10:38:21.916930   27967 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 10:38:21.917115   27967 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 10:38:21.917134   27967 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 10:38:21.917142   27967 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 10:38:21.917157   27967 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 10:38:21.921719   27967 out.go:177] * Verifying Kubernetes components...
	I1104 10:38:21.923510   27967 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1104 10:38:21.941410   27967 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40645
	I1104 10:38:21.941579   27967 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35975
	I1104 10:38:21.941646   27967 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42159
	I1104 10:38:21.941709   27967 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46679
	I1104 10:38:21.941762   27967 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40653
	I1104 10:38:21.941816   27967 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41923
	I1104 10:38:21.942274   27967 main.go:141] libmachine: () Calling .GetVersion
	I1104 10:38:21.942387   27967 main.go:141] libmachine: () Calling .GetVersion
	I1104 10:38:21.942448   27967 main.go:141] libmachine: () Calling .GetVersion
	I1104 10:38:21.942498   27967 main.go:141] libmachine: () Calling .GetVersion
	I1104 10:38:21.942635   27967 main.go:141] libmachine: () Calling .GetVersion
	I1104 10:38:21.942703   27967 main.go:141] libmachine: () Calling .GetVersion
	I1104 10:38:21.942878   27967 main.go:141] libmachine: Using API Version  1
	I1104 10:38:21.942896   27967 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 10:38:21.943002   27967 main.go:141] libmachine: Using API Version  1
	I1104 10:38:21.943016   27967 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 10:38:21.943108   27967 main.go:141] libmachine: Using API Version  1
	I1104 10:38:21.943118   27967 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 10:38:21.943206   27967 main.go:141] libmachine: Using API Version  1
	I1104 10:38:21.943231   27967 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 10:38:21.943325   27967 main.go:141] libmachine: Using API Version  1
	I1104 10:38:21.943336   27967 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 10:38:21.943378   27967 main.go:141] libmachine: () Calling .GetMachineName
	I1104 10:38:21.943414   27967 main.go:141] libmachine: () Calling .GetMachineName
	I1104 10:38:21.943450   27967 main.go:141] libmachine: () Calling .GetMachineName
	I1104 10:38:21.943787   27967 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 10:38:21.943810   27967 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 10:38:21.944985   27967 config.go:182] Loaded profile config "addons-746456": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1104 10:38:21.945219   27967 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 10:38:21.945266   27967 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 10:38:21.961540   27967 main.go:141] libmachine: () Calling .GetMachineName
	I1104 10:38:21.961614   27967 main.go:141] libmachine: Using API Version  1
	I1104 10:38:21.961639   27967 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 10:38:21.961733   27967 main.go:141] libmachine: () Calling .GetMachineName
	I1104 10:38:21.962011   27967 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 10:38:21.962016   27967 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 10:38:21.962038   27967 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 10:38:21.962077   27967 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 10:38:21.962252   27967 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 10:38:21.962267   27967 main.go:141] libmachine: () Calling .GetMachineName
	I1104 10:38:21.962283   27967 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 10:38:21.964473   27967 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 10:38:21.964519   27967 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 10:38:21.966659   27967 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42965
	I1104 10:38:21.969077   27967 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46211
	I1104 10:38:21.969871   27967 main.go:141] libmachine: () Calling .GetVersion
	I1104 10:38:21.970357   27967 main.go:141] libmachine: Using API Version  1
	I1104 10:38:21.970376   27967 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 10:38:21.970693   27967 main.go:141] libmachine: () Calling .GetMachineName
	I1104 10:38:21.971245   27967 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 10:38:21.971271   27967 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 10:38:21.972719   27967 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33639
	I1104 10:38:21.973047   27967 main.go:141] libmachine: () Calling .GetVersion
	I1104 10:38:21.973526   27967 main.go:141] libmachine: Using API Version  1
	I1104 10:38:21.973552   27967 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 10:38:21.974008   27967 main.go:141] libmachine: () Calling .GetMachineName
	I1104 10:38:21.974532   27967 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 10:38:21.974575   27967 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 10:38:21.981240   27967 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35839
	I1104 10:38:21.981597   27967 main.go:141] libmachine: () Calling .GetVersion
	I1104 10:38:21.981687   27967 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32961
	I1104 10:38:21.982193   27967 main.go:141] libmachine: () Calling .GetVersion
	I1104 10:38:21.982469   27967 main.go:141] libmachine: Using API Version  1
	I1104 10:38:21.982487   27967 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 10:38:21.982876   27967 main.go:141] libmachine: () Calling .GetMachineName
	I1104 10:38:21.982951   27967 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45343
	I1104 10:38:21.983091   27967 main.go:141] libmachine: (addons-746456) Calling .GetState
	I1104 10:38:21.983425   27967 main.go:141] libmachine: Using API Version  1
	I1104 10:38:21.983442   27967 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 10:38:21.983812   27967 main.go:141] libmachine: () Calling .GetMachineName
	I1104 10:38:21.984007   27967 main.go:141] libmachine: (addons-746456) Calling .GetState
	I1104 10:38:21.984941   27967 main.go:141] libmachine: () Calling .GetVersion
	I1104 10:38:21.987390   27967 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-746456"
	I1104 10:38:21.987435   27967 host.go:66] Checking if "addons-746456" exists ...
	I1104 10:38:21.987825   27967 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 10:38:21.987860   27967 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 10:38:21.988070   27967 main.go:141] libmachine: (addons-746456) Calling .DriverName
	I1104 10:38:21.988234   27967 main.go:141] libmachine: Using API Version  1
	I1104 10:38:21.988247   27967 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 10:38:21.988959   27967 main.go:141] libmachine: () Calling .GetMachineName
	I1104 10:38:21.989603   27967 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 10:38:21.989632   27967 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 10:38:21.990273   27967 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1104 10:38:21.991601   27967 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1104 10:38:21.991619   27967 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1104 10:38:21.991640   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHHostname
	I1104 10:38:21.995245   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:38:21.995650   27967 main.go:141] libmachine: (addons-746456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:d7:13", ip: ""} in network mk-addons-746456: {Iface:virbr1 ExpiryTime:2024-11-04 11:37:54 +0000 UTC Type:0 Mac:52:54:00:a0:d7:13 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:addons-746456 Clientid:01:52:54:00:a0:d7:13}
	I1104 10:38:21.995678   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined IP address 192.168.39.4 and MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:38:21.995941   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHPort
	I1104 10:38:21.996131   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHKeyPath
	I1104 10:38:21.996297   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHUsername
	I1104 10:38:21.996450   27967 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/addons-746456/id_rsa Username:docker}
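Each addon follows the pattern visible above for yakd: minikube opens an SSH session to the node (user docker, port 22, the profile's id_rsa key), scp's the manifest into /etc/kubernetes/addons, and later applies it with the bundled kubectl. An equivalent manual application for the yakd namespace manifest would look roughly like:

  sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
    apply -f /etc/kubernetes/addons/yakd-ns.yaml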
	I1104 10:38:21.997688   27967 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 10:38:21.997743   27967 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 10:38:21.997975   27967 main.go:141] libmachine: () Calling .GetVersion
	I1104 10:38:21.998494   27967 main.go:141] libmachine: Using API Version  1
	I1104 10:38:21.998510   27967 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 10:38:21.999149   27967 main.go:141] libmachine: () Calling .GetMachineName
	I1104 10:38:21.999342   27967 main.go:141] libmachine: (addons-746456) Calling .GetState
	I1104 10:38:22.000259   27967 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38823
	I1104 10:38:22.000927   27967 main.go:141] libmachine: () Calling .GetVersion
	I1104 10:38:22.002019   27967 host.go:66] Checking if "addons-746456" exists ...
	I1104 10:38:22.002427   27967 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 10:38:22.002448   27967 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 10:38:22.002730   27967 main.go:141] libmachine: Using API Version  1
	I1104 10:38:22.002744   27967 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 10:38:22.002775   27967 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40563
	I1104 10:38:22.003255   27967 main.go:141] libmachine: () Calling .GetVersion
	I1104 10:38:22.003274   27967 main.go:141] libmachine: () Calling .GetMachineName
	I1104 10:38:22.003353   27967 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45673
	I1104 10:38:22.003831   27967 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 10:38:22.003870   27967 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 10:38:22.004075   27967 main.go:141] libmachine: Using API Version  1
	I1104 10:38:22.004092   27967 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 10:38:22.004161   27967 main.go:141] libmachine: () Calling .GetVersion
	I1104 10:38:22.004685   27967 main.go:141] libmachine: () Calling .GetMachineName
	I1104 10:38:22.017844   27967 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 10:38:22.017904   27967 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 10:38:22.021397   27967 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37481
	I1104 10:38:22.021570   27967 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41671
	I1104 10:38:22.021669   27967 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39637
	I1104 10:38:22.021875   27967 main.go:141] libmachine: Using API Version  1
	I1104 10:38:22.021888   27967 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 10:38:22.022305   27967 main.go:141] libmachine: () Calling .GetMachineName
	I1104 10:38:22.022519   27967 main.go:141] libmachine: () Calling .GetVersion
	I1104 10:38:22.022613   27967 main.go:141] libmachine: () Calling .GetVersion
	I1104 10:38:22.023060   27967 main.go:141] libmachine: Using API Version  1
	I1104 10:38:22.023082   27967 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 10:38:22.023205   27967 main.go:141] libmachine: () Calling .GetVersion
	I1104 10:38:22.023288   27967 main.go:141] libmachine: Using API Version  1
	I1104 10:38:22.023307   27967 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 10:38:22.023623   27967 main.go:141] libmachine: Using API Version  1
	I1104 10:38:22.023641   27967 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 10:38:22.023707   27967 main.go:141] libmachine: () Calling .GetMachineName
	I1104 10:38:22.023866   27967 main.go:141] libmachine: (addons-746456) Calling .GetState
	I1104 10:38:22.023953   27967 main.go:141] libmachine: () Calling .GetMachineName
	I1104 10:38:22.024009   27967 main.go:141] libmachine: () Calling .GetMachineName
	I1104 10:38:22.024872   27967 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 10:38:22.024907   27967 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 10:38:22.025398   27967 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45433
	I1104 10:38:22.025397   27967 main.go:141] libmachine: (addons-746456) Calling .GetState
	I1104 10:38:22.026440   27967 main.go:141] libmachine: () Calling .GetVersion
	I1104 10:38:22.026584   27967 main.go:141] libmachine: (addons-746456) Calling .GetState
	I1104 10:38:22.026598   27967 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46143
	I1104 10:38:22.026954   27967 main.go:141] libmachine: Using API Version  1
	I1104 10:38:22.026990   27967 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 10:38:22.027055   27967 main.go:141] libmachine: () Calling .GetVersion
	I1104 10:38:22.027338   27967 main.go:141] libmachine: () Calling .GetMachineName
	I1104 10:38:22.027974   27967 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 10:38:22.028021   27967 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 10:38:22.028415   27967 main.go:141] libmachine: Using API Version  1
	I1104 10:38:22.028487   27967 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 10:38:22.028467   27967 main.go:141] libmachine: (addons-746456) Calling .DriverName
	I1104 10:38:22.028708   27967 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43123
	I1104 10:38:22.029102   27967 main.go:141] libmachine: () Calling .GetMachineName
	I1104 10:38:22.029122   27967 main.go:141] libmachine: (addons-746456) Calling .DriverName
	I1104 10:38:22.030045   27967 addons.go:234] Setting addon default-storageclass=true in "addons-746456"
	I1104 10:38:22.030088   27967 host.go:66] Checking if "addons-746456" exists ...
	I1104 10:38:22.030535   27967 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 10:38:22.030578   27967 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 10:38:22.030861   27967 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I1104 10:38:22.030925   27967 main.go:141] libmachine: () Calling .GetVersion
	I1104 10:38:22.030871   27967 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.33.0
	I1104 10:38:22.031415   27967 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 10:38:22.031525   27967 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 10:38:22.032687   27967 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I1104 10:38:22.032732   27967 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I1104 10:38:22.032761   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHHostname
	I1104 10:38:22.032792   27967 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1104 10:38:22.032802   27967 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1104 10:38:22.032827   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHHostname
	I1104 10:38:22.033012   27967 main.go:141] libmachine: Using API Version  1
	I1104 10:38:22.033038   27967 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 10:38:22.033380   27967 main.go:141] libmachine: () Calling .GetMachineName
	I1104 10:38:22.033565   27967 main.go:141] libmachine: (addons-746456) Calling .GetState
	I1104 10:38:22.034851   27967 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42733
	I1104 10:38:22.035344   27967 main.go:141] libmachine: (addons-746456) Calling .DriverName
	I1104 10:38:22.035422   27967 main.go:141] libmachine: () Calling .GetVersion
	I1104 10:38:22.035911   27967 main.go:141] libmachine: Using API Version  1
	I1104 10:38:22.035930   27967 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 10:38:22.036350   27967 main.go:141] libmachine: () Calling .GetMachineName
	I1104 10:38:22.036573   27967 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41203
	I1104 10:38:22.036715   27967 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39041
	I1104 10:38:22.036952   27967 main.go:141] libmachine: () Calling .GetVersion
	I1104 10:38:22.037299   27967 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1104 10:38:22.037471   27967 main.go:141] libmachine: Using API Version  1
	I1104 10:38:22.037522   27967 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 10:38:22.037590   27967 main.go:141] libmachine: (addons-746456) Calling .GetState
	I1104 10:38:22.037950   27967 main.go:141] libmachine: () Calling .GetMachineName
	I1104 10:38:22.038093   27967 main.go:141] libmachine: (addons-746456) Calling .GetState
	I1104 10:38:22.038647   27967 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1104 10:38:22.038666   27967 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1104 10:38:22.038705   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHHostname
	I1104 10:38:22.038821   27967 main.go:141] libmachine: () Calling .GetVersion
	I1104 10:38:22.039293   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:38:22.039457   27967 main.go:141] libmachine: Using API Version  1
	I1104 10:38:22.039476   27967 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 10:38:22.040366   27967 main.go:141] libmachine: (addons-746456) Calling .DriverName
	I1104 10:38:22.040719   27967 main.go:141] libmachine: (addons-746456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:d7:13", ip: ""} in network mk-addons-746456: {Iface:virbr1 ExpiryTime:2024-11-04 11:37:54 +0000 UTC Type:0 Mac:52:54:00:a0:d7:13 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:addons-746456 Clientid:01:52:54:00:a0:d7:13}
	I1104 10:38:22.040752   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined IP address 192.168.39.4 and MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:38:22.040788   27967 main.go:141] libmachine: Making call to close driver server
	I1104 10:38:22.040796   27967 main.go:141] libmachine: (addons-746456) Calling .Close
	I1104 10:38:22.040856   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:38:22.040882   27967 main.go:141] libmachine: () Calling .GetMachineName
	I1104 10:38:22.041001   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHPort
	I1104 10:38:22.041078   27967 main.go:141] libmachine: (addons-746456) Calling .DriverName
	I1104 10:38:22.041125   27967 main.go:141] libmachine: (addons-746456) DBG | Closing plugin on server side
	I1104 10:38:22.041167   27967 main.go:141] libmachine: Successfully made call to close driver server
	I1104 10:38:22.041174   27967 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 10:38:22.041181   27967 main.go:141] libmachine: Making call to close driver server
	I1104 10:38:22.041188   27967 main.go:141] libmachine: (addons-746456) Calling .Close
	I1104 10:38:22.041510   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHKeyPath
	I1104 10:38:22.041766   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHUsername
	I1104 10:38:22.041922   27967 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/addons-746456/id_rsa Username:docker}
	I1104 10:38:22.042490   27967 main.go:141] libmachine: (addons-746456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:d7:13", ip: ""} in network mk-addons-746456: {Iface:virbr1 ExpiryTime:2024-11-04 11:37:54 +0000 UTC Type:0 Mac:52:54:00:a0:d7:13 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:addons-746456 Clientid:01:52:54:00:a0:d7:13}
	I1104 10:38:22.042541   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined IP address 192.168.39.4 and MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:38:22.042741   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHPort
	I1104 10:38:22.042889   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHKeyPath
	I1104 10:38:22.043506   27967 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1104 10:38:22.043705   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHUsername
	I1104 10:38:22.043744   27967 main.go:141] libmachine: (addons-746456) Calling .GetState
	I1104 10:38:22.043891   27967 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/addons-746456/id_rsa Username:docker}
	I1104 10:38:22.044289   27967 main.go:141] libmachine: Successfully made call to close driver server
	I1104 10:38:22.044305   27967 main.go:141] libmachine: Making call to close connection to plugin binary
	W1104 10:38:22.044391   27967 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1104 10:38:22.045865   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:38:22.046332   27967 main.go:141] libmachine: (addons-746456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:d7:13", ip: ""} in network mk-addons-746456: {Iface:virbr1 ExpiryTime:2024-11-04 11:37:54 +0000 UTC Type:0 Mac:52:54:00:a0:d7:13 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:addons-746456 Clientid:01:52:54:00:a0:d7:13}
	I1104 10:38:22.046413   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined IP address 192.168.39.4 and MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:38:22.046549   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHPort
	I1104 10:38:22.046672   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHKeyPath
	I1104 10:38:22.046753   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHUsername
	I1104 10:38:22.046826   27967 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/addons-746456/id_rsa Username:docker}
	I1104 10:38:22.047068   27967 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1104 10:38:22.047433   27967 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44051
	I1104 10:38:22.047759   27967 main.go:141] libmachine: () Calling .GetVersion
	I1104 10:38:22.048749   27967 main.go:141] libmachine: Using API Version  1
	I1104 10:38:22.048764   27967 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 10:38:22.049112   27967 main.go:141] libmachine: () Calling .GetMachineName
	I1104 10:38:22.049246   27967 main.go:141] libmachine: (addons-746456) Calling .GetState
	I1104 10:38:22.049472   27967 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1104 10:38:22.050649   27967 main.go:141] libmachine: (addons-746456) Calling .DriverName
	I1104 10:38:22.051935   27967 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1104 10:38:22.051940   27967 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I1104 10:38:22.053571   27967 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1104 10:38:22.053589   27967 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1104 10:38:22.053608   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHHostname
	I1104 10:38:22.054977   27967 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1104 10:38:22.055286   27967 main.go:141] libmachine: (addons-746456) Calling .DriverName
	I1104 10:38:22.055910   27967 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37161
	I1104 10:38:22.056524   27967 main.go:141] libmachine: () Calling .GetVersion
	I1104 10:38:22.056903   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:38:22.057255   27967 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.25
	I1104 10:38:22.057424   27967 main.go:141] libmachine: Using API Version  1
	I1104 10:38:22.057746   27967 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 10:38:22.057767   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHPort
	I1104 10:38:22.057594   27967 main.go:141] libmachine: (addons-746456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:d7:13", ip: ""} in network mk-addons-746456: {Iface:virbr1 ExpiryTime:2024-11-04 11:37:54 +0000 UTC Type:0 Mac:52:54:00:a0:d7:13 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:addons-746456 Clientid:01:52:54:00:a0:d7:13}
	I1104 10:38:22.057808   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined IP address 192.168.39.4 and MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:38:22.057971   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHKeyPath
	I1104 10:38:22.058169   27967 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1104 10:38:22.058264   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHUsername
	I1104 10:38:22.058306   27967 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39313
	I1104 10:38:22.058858   27967 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/addons-746456/id_rsa Username:docker}
	I1104 10:38:22.059003   27967 main.go:141] libmachine: () Calling .GetMachineName
	I1104 10:38:22.059070   27967 main.go:141] libmachine: () Calling .GetVersion
	I1104 10:38:22.059111   27967 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I1104 10:38:22.059124   27967 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1104 10:38:22.059140   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHHostname
	I1104 10:38:22.059357   27967 main.go:141] libmachine: (addons-746456) Calling .GetState
	I1104 10:38:22.059528   27967 main.go:141] libmachine: Using API Version  1
	I1104 10:38:22.059553   27967 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 10:38:22.060552   27967 main.go:141] libmachine: () Calling .GetMachineName
	I1104 10:38:22.060748   27967 main.go:141] libmachine: (addons-746456) Calling .GetState
	I1104 10:38:22.061007   27967 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1104 10:38:22.062128   27967 main.go:141] libmachine: (addons-746456) Calling .DriverName
	I1104 10:38:22.062741   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:38:22.063050   27967 main.go:141] libmachine: (addons-746456) Calling .DriverName
	I1104 10:38:22.063354   27967 main.go:141] libmachine: (addons-746456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:d7:13", ip: ""} in network mk-addons-746456: {Iface:virbr1 ExpiryTime:2024-11-04 11:37:54 +0000 UTC Type:0 Mac:52:54:00:a0:d7:13 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:addons-746456 Clientid:01:52:54:00:a0:d7:13}
	I1104 10:38:22.063371   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined IP address 192.168.39.4 and MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:38:22.063398   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHPort
	I1104 10:38:22.063458   27967 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.0
	I1104 10:38:22.063459   27967 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1104 10:38:22.063544   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHKeyPath
	I1104 10:38:22.064207   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHUsername
	I1104 10:38:22.064333   27967 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/addons-746456/id_rsa Username:docker}
	I1104 10:38:22.064653   27967 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1104 10:38:22.064878   27967 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1104 10:38:22.064893   27967 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1104 10:38:22.064909   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHHostname
	I1104 10:38:22.065694   27967 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1104 10:38:22.065709   27967 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1104 10:38:22.065726   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHHostname
	I1104 10:38:22.066429   27967 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1104 10:38:22.066445   27967 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1104 10:38:22.066463   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHHostname
	I1104 10:38:22.067793   27967 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41257
	I1104 10:38:22.069332   27967 main.go:141] libmachine: () Calling .GetVersion
	I1104 10:38:22.070672   27967 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36291
	I1104 10:38:22.070910   27967 main.go:141] libmachine: Using API Version  1
	I1104 10:38:22.070928   27967 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 10:38:22.071779   27967 main.go:141] libmachine: () Calling .GetMachineName
	I1104 10:38:22.071850   27967 main.go:141] libmachine: () Calling .GetVersion
	I1104 10:38:22.071922   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:38:22.071949   27967 main.go:141] libmachine: (addons-746456) Calling .GetState
	I1104 10:38:22.072084   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:38:22.072299   27967 main.go:141] libmachine: Using API Version  1
	I1104 10:38:22.072316   27967 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 10:38:22.072737   27967 main.go:141] libmachine: () Calling .GetMachineName
	I1104 10:38:22.072792   27967 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37837
	I1104 10:38:22.072942   27967 main.go:141] libmachine: (addons-746456) Calling .DriverName
	I1104 10:38:22.073941   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHPort
	I1104 10:38:22.073954   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHPort
	I1104 10:38:22.074024   27967 main.go:141] libmachine: (addons-746456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:d7:13", ip: ""} in network mk-addons-746456: {Iface:virbr1 ExpiryTime:2024-11-04 11:37:54 +0000 UTC Type:0 Mac:52:54:00:a0:d7:13 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:addons-746456 Clientid:01:52:54:00:a0:d7:13}
	I1104 10:38:22.074023   27967 main.go:141] libmachine: (addons-746456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:d7:13", ip: ""} in network mk-addons-746456: {Iface:virbr1 ExpiryTime:2024-11-04 11:37:54 +0000 UTC Type:0 Mac:52:54:00:a0:d7:13 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:addons-746456 Clientid:01:52:54:00:a0:d7:13}
	I1104 10:38:22.074038   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined IP address 192.168.39.4 and MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:38:22.074040   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined IP address 192.168.39.4 and MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:38:22.074070   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:38:22.074073   27967 main.go:141] libmachine: () Calling .GetVersion
	I1104 10:38:22.074254   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHKeyPath
	I1104 10:38:22.074304   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHKeyPath
	I1104 10:38:22.074430   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHUsername
	I1104 10:38:22.074480   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHUsername
	I1104 10:38:22.074487   27967 main.go:141] libmachine: (addons-746456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:d7:13", ip: ""} in network mk-addons-746456: {Iface:virbr1 ExpiryTime:2024-11-04 11:37:54 +0000 UTC Type:0 Mac:52:54:00:a0:d7:13 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:addons-746456 Clientid:01:52:54:00:a0:d7:13}
	I1104 10:38:22.074515   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined IP address 192.168.39.4 and MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:38:22.074733   27967 main.go:141] libmachine: (addons-746456) Calling .DriverName
	I1104 10:38:22.074732   27967 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/addons-746456/id_rsa Username:docker}
	I1104 10:38:22.074790   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHPort
	I1104 10:38:22.074830   27967 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/addons-746456/id_rsa Username:docker}
	I1104 10:38:22.075067   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHKeyPath
	I1104 10:38:22.075332   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHUsername
	I1104 10:38:22.075483   27967 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/addons-746456/id_rsa Username:docker}
	I1104 10:38:22.076463   27967 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44819
	I1104 10:38:22.076752   27967 main.go:141] libmachine: Using API Version  1
	I1104 10:38:22.076767   27967 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 10:38:22.076816   27967 main.go:141] libmachine: () Calling .GetVersion
	I1104 10:38:22.077010   27967 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.3
	I1104 10:38:22.077386   27967 main.go:141] libmachine: () Calling .GetMachineName
	I1104 10:38:22.077541   27967 main.go:141] libmachine: Using API Version  1
	I1104 10:38:22.077555   27967 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 10:38:22.077960   27967 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 10:38:22.077992   27967 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 10:38:22.078249   27967 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38115
	I1104 10:38:22.078372   27967 main.go:141] libmachine: () Calling .GetMachineName
	I1104 10:38:22.078565   27967 main.go:141] libmachine: (addons-746456) Calling .GetState
	I1104 10:38:22.079143   27967 main.go:141] libmachine: () Calling .GetVersion
	I1104 10:38:22.079625   27967 main.go:141] libmachine: Using API Version  1
	I1104 10:38:22.079642   27967 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 10:38:22.080017   27967 main.go:141] libmachine: () Calling .GetMachineName
	I1104 10:38:22.080062   27967 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1104 10:38:22.080179   27967 main.go:141] libmachine: (addons-746456) Calling .GetState
	I1104 10:38:22.080337   27967 main.go:141] libmachine: (addons-746456) Calling .DriverName
	I1104 10:38:22.080996   27967 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37775
	I1104 10:38:22.081438   27967 main.go:141] libmachine: () Calling .GetVersion
	I1104 10:38:22.081848   27967 main.go:141] libmachine: (addons-746456) Calling .DriverName
	I1104 10:38:22.082046   27967 main.go:141] libmachine: Using API Version  1
	I1104 10:38:22.082064   27967 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 10:38:22.082379   27967 main.go:141] libmachine: () Calling .GetMachineName
	I1104 10:38:22.082772   27967 main.go:141] libmachine: (addons-746456) Calling .GetState
	I1104 10:38:22.083015   27967 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1104 10:38:22.083801   27967 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.8
	I1104 10:38:22.083833   27967 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1104 10:38:22.084129   27967 main.go:141] libmachine: (addons-746456) Calling .DriverName
	I1104 10:38:22.084667   27967 addons.go:431] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1104 10:38:22.084684   27967 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1104 10:38:22.084698   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHHostname
	I1104 10:38:22.086254   27967 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1104 10:38:22.086275   27967 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1104 10:38:22.086291   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHHostname
	I1104 10:38:22.087060   27967 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1104 10:38:22.087681   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:38:22.087931   27967 out.go:177]   - Using image docker.io/registry:2.8.3
	I1104 10:38:22.088164   27967 main.go:141] libmachine: (addons-746456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:d7:13", ip: ""} in network mk-addons-746456: {Iface:virbr1 ExpiryTime:2024-11-04 11:37:54 +0000 UTC Type:0 Mac:52:54:00:a0:d7:13 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:addons-746456 Clientid:01:52:54:00:a0:d7:13}
	I1104 10:38:22.088321   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined IP address 192.168.39.4 and MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:38:22.088385   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHPort
	I1104 10:38:22.088534   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHKeyPath
	I1104 10:38:22.088657   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHUsername
	I1104 10:38:22.088781   27967 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/addons-746456/id_rsa Username:docker}
	I1104 10:38:22.089600   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:38:22.089864   27967 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I1104 10:38:22.089880   27967 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1104 10:38:22.089894   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHHostname
	I1104 10:38:22.090127   27967 main.go:141] libmachine: (addons-746456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:d7:13", ip: ""} in network mk-addons-746456: {Iface:virbr1 ExpiryTime:2024-11-04 11:37:54 +0000 UTC Type:0 Mac:52:54:00:a0:d7:13 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:addons-746456 Clientid:01:52:54:00:a0:d7:13}
	I1104 10:38:22.090141   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined IP address 192.168.39.4 and MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:38:22.090334   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHPort
	I1104 10:38:22.090627   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHKeyPath
	I1104 10:38:22.090781   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHUsername
	I1104 10:38:22.090917   27967 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/addons-746456/id_rsa Username:docker}
	I1104 10:38:22.091172   27967 out.go:177]   - Using image docker.io/busybox:stable
	I1104 10:38:22.092547   27967 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1104 10:38:22.092566   27967 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1104 10:38:22.092583   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHHostname
	I1104 10:38:22.092900   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:38:22.093320   27967 main.go:141] libmachine: (addons-746456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:d7:13", ip: ""} in network mk-addons-746456: {Iface:virbr1 ExpiryTime:2024-11-04 11:37:54 +0000 UTC Type:0 Mac:52:54:00:a0:d7:13 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:addons-746456 Clientid:01:52:54:00:a0:d7:13}
	I1104 10:38:22.093344   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined IP address 192.168.39.4 and MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:38:22.093489   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHPort
	I1104 10:38:22.093667   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHKeyPath
	I1104 10:38:22.093762   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHUsername
	I1104 10:38:22.093860   27967 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/addons-746456/id_rsa Username:docker}
	I1104 10:38:22.095729   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:38:22.096062   27967 main.go:141] libmachine: (addons-746456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:d7:13", ip: ""} in network mk-addons-746456: {Iface:virbr1 ExpiryTime:2024-11-04 11:37:54 +0000 UTC Type:0 Mac:52:54:00:a0:d7:13 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:addons-746456 Clientid:01:52:54:00:a0:d7:13}
	I1104 10:38:22.096082   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined IP address 192.168.39.4 and MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:38:22.096220   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHPort
	I1104 10:38:22.096326   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHKeyPath
	I1104 10:38:22.096484   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHUsername
	I1104 10:38:22.096590   27967 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/addons-746456/id_rsa Username:docker}
	I1104 10:38:22.099128   27967 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44889
	I1104 10:38:22.099468   27967 main.go:141] libmachine: () Calling .GetVersion
	I1104 10:38:22.099934   27967 main.go:141] libmachine: Using API Version  1
	I1104 10:38:22.099950   27967 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 10:38:22.100284   27967 main.go:141] libmachine: () Calling .GetMachineName
	I1104 10:38:22.100466   27967 main.go:141] libmachine: (addons-746456) Calling .GetState
	I1104 10:38:22.101880   27967 main.go:141] libmachine: (addons-746456) Calling .DriverName
	I1104 10:38:22.102115   27967 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1104 10:38:22.102129   27967 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1104 10:38:22.102144   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHHostname
	I1104 10:38:22.105453   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:38:22.105915   27967 main.go:141] libmachine: (addons-746456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:d7:13", ip: ""} in network mk-addons-746456: {Iface:virbr1 ExpiryTime:2024-11-04 11:37:54 +0000 UTC Type:0 Mac:52:54:00:a0:d7:13 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:addons-746456 Clientid:01:52:54:00:a0:d7:13}
	I1104 10:38:22.105974   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined IP address 192.168.39.4 and MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:38:22.106135   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHPort
	I1104 10:38:22.106268   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHKeyPath
	I1104 10:38:22.106434   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHUsername
	I1104 10:38:22.106566   27967 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/addons-746456/id_rsa Username:docker}
	I1104 10:38:22.388670   27967 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1104 10:38:22.388866   27967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1104 10:38:22.398590   27967 addons.go:431] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1104 10:38:22.398611   27967 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14451 bytes)
	I1104 10:38:22.453792   27967 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1104 10:38:22.453817   27967 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1104 10:38:22.455723   27967 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1104 10:38:22.455740   27967 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1104 10:38:22.468989   27967 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1104 10:38:22.469014   27967 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1104 10:38:22.517722   27967 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1104 10:38:22.527473   27967 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1104 10:38:22.547606   27967 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1104 10:38:22.550242   27967 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1104 10:38:22.554589   27967 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1104 10:38:22.573469   27967 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1104 10:38:22.582104   27967 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1104 10:38:22.595927   27967 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1104 10:38:22.595948   27967 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1104 10:38:22.627182   27967 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1104 10:38:22.637254   27967 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I1104 10:38:22.637277   27967 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1104 10:38:22.657320   27967 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1104 10:38:22.657346   27967 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1104 10:38:22.673651   27967 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1104 10:38:22.673685   27967 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1104 10:38:22.676060   27967 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1104 10:38:22.676081   27967 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1104 10:38:22.707920   27967 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1104 10:38:22.789617   27967 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1104 10:38:22.789645   27967 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1104 10:38:22.791851   27967 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1104 10:38:22.791873   27967 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1104 10:38:22.838535   27967 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1104 10:38:22.838561   27967 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1104 10:38:22.914776   27967 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1104 10:38:22.914806   27967 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1104 10:38:22.915813   27967 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1104 10:38:22.915834   27967 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1104 10:38:22.943257   27967 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1104 10:38:22.943284   27967 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1104 10:38:22.984815   27967 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1104 10:38:22.984836   27967 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1104 10:38:23.042920   27967 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1104 10:38:23.043895   27967 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1104 10:38:23.043916   27967 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1104 10:38:23.104575   27967 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1104 10:38:23.170589   27967 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1104 10:38:23.170618   27967 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1104 10:38:23.185105   27967 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1104 10:38:23.219186   27967 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1104 10:38:23.219225   27967 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1104 10:38:23.336819   27967 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1104 10:38:23.336849   27967 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1104 10:38:23.430474   27967 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1104 10:38:23.578649   27967 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1104 10:38:23.578671   27967 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1104 10:38:23.942247   27967 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1104 10:38:23.942274   27967 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1104 10:38:24.302791   27967 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1104 10:38:24.302815   27967 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1104 10:38:24.575251   27967 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1104 10:38:24.575278   27967 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1104 10:38:24.754149   27967 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.365442902s)
	I1104 10:38:24.754213   27967 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.365319916s)
	I1104 10:38:24.754245   27967 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.236492941s)
	I1104 10:38:24.754239   27967 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
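The two commands completed just above (started at 10:38:22.388) patch the coredns ConfigMap in kube-system so that host.minikube.internal resolves inside the cluster to 192.168.39.1, the record noted in the start.go line above. As a rough sketch of what those sed expressions produce (derived from the command itself, not from inspecting the cluster), the relevant portion of the patched Corefile looks like this; only the inserted lines and their anchors are shown, the rest of the default config is untouched:

        log
        errors
        # ... other default plugins unchanged ...
        hosts {
           192.168.39.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf

The "log" directive is inserted immediately before the existing "errors" line, and the "hosts" block immediately before the existing "forward . /etc/resolv.conf" line, so queries for host.minikube.internal are answered from the hosts block while every other name still falls through to the forward plugin.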
	I1104 10:38:24.754281   27967 main.go:141] libmachine: Making call to close driver server
	I1104 10:38:24.754293   27967 main.go:141] libmachine: (addons-746456) Calling .Close
	I1104 10:38:24.754589   27967 main.go:141] libmachine: Successfully made call to close driver server
	I1104 10:38:24.754632   27967 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 10:38:24.754653   27967 main.go:141] libmachine: Making call to close driver server
	I1104 10:38:24.754665   27967 main.go:141] libmachine: (addons-746456) Calling .Close
	I1104 10:38:24.755197   27967 node_ready.go:35] waiting up to 6m0s for node "addons-746456" to be "Ready" ...
	I1104 10:38:24.755400   27967 main.go:141] libmachine: (addons-746456) DBG | Closing plugin on server side
	I1104 10:38:24.755413   27967 main.go:141] libmachine: Successfully made call to close driver server
	I1104 10:38:24.755438   27967 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 10:38:24.760211   27967 node_ready.go:49] node "addons-746456" has status "Ready":"True"
	I1104 10:38:24.760235   27967 node_ready.go:38] duration metric: took 5.018397ms for node "addons-746456" to be "Ready" ...
	I1104 10:38:24.760245   27967 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1104 10:38:24.770168   27967 pod_ready.go:79] waiting up to 6m0s for pod "amd-gpu-device-plugin-g59mv" in "kube-system" namespace to be "Ready" ...
	I1104 10:38:24.827056   27967 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1104 10:38:24.827086   27967 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1104 10:38:25.102897   27967 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1104 10:38:25.259969   27967 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-746456" context rescaled to 1 replicas
	I1104 10:38:25.471016   27967 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.943503689s)
	I1104 10:38:25.471348   27967 main.go:141] libmachine: Making call to close driver server
	I1104 10:38:25.471379   27967 main.go:141] libmachine: (addons-746456) Calling .Close
	I1104 10:38:25.471680   27967 main.go:141] libmachine: Successfully made call to close driver server
	I1104 10:38:25.471704   27967 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 10:38:25.471714   27967 main.go:141] libmachine: Making call to close driver server
	I1104 10:38:25.471723   27967 main.go:141] libmachine: (addons-746456) Calling .Close
	I1104 10:38:25.471938   27967 main.go:141] libmachine: Successfully made call to close driver server
	I1104 10:38:25.471954   27967 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 10:38:25.574460   27967 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (3.024189029s)
	I1104 10:38:25.574495   27967 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.019886591s)
	I1104 10:38:25.574510   27967 main.go:141] libmachine: Making call to close driver server
	I1104 10:38:25.574523   27967 main.go:141] libmachine: (addons-746456) Calling .Close
	I1104 10:38:25.574535   27967 main.go:141] libmachine: Making call to close driver server
	I1104 10:38:25.574547   27967 main.go:141] libmachine: (addons-746456) Calling .Close
	I1104 10:38:25.574456   27967 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.026792971s)
	I1104 10:38:25.574606   27967 main.go:141] libmachine: Making call to close driver server
	I1104 10:38:25.574622   27967 main.go:141] libmachine: (addons-746456) Calling .Close
	I1104 10:38:25.574926   27967 main.go:141] libmachine: (addons-746456) DBG | Closing plugin on server side
	I1104 10:38:25.574951   27967 main.go:141] libmachine: (addons-746456) DBG | Closing plugin on server side
	I1104 10:38:25.574951   27967 main.go:141] libmachine: (addons-746456) DBG | Closing plugin on server side
	I1104 10:38:25.574962   27967 main.go:141] libmachine: Successfully made call to close driver server
	I1104 10:38:25.574973   27967 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 10:38:25.574974   27967 main.go:141] libmachine: Successfully made call to close driver server
	I1104 10:38:25.574981   27967 main.go:141] libmachine: Making call to close driver server
	I1104 10:38:25.574986   27967 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 10:38:25.574990   27967 main.go:141] libmachine: (addons-746456) Calling .Close
	I1104 10:38:25.574995   27967 main.go:141] libmachine: Making call to close driver server
	I1104 10:38:25.575013   27967 main.go:141] libmachine: (addons-746456) Calling .Close
	I1104 10:38:25.575059   27967 main.go:141] libmachine: Successfully made call to close driver server
	I1104 10:38:25.575088   27967 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 10:38:25.575119   27967 main.go:141] libmachine: Making call to close driver server
	I1104 10:38:25.575126   27967 main.go:141] libmachine: (addons-746456) Calling .Close
	I1104 10:38:25.575184   27967 main.go:141] libmachine: (addons-746456) DBG | Closing plugin on server side
	I1104 10:38:25.575202   27967 main.go:141] libmachine: Successfully made call to close driver server
	I1104 10:38:25.575230   27967 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 10:38:25.575405   27967 main.go:141] libmachine: Successfully made call to close driver server
	I1104 10:38:25.575417   27967 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 10:38:25.576742   27967 main.go:141] libmachine: Successfully made call to close driver server
	I1104 10:38:25.576756   27967 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 10:38:25.673762   27967 main.go:141] libmachine: Making call to close driver server
	I1104 10:38:25.673790   27967 main.go:141] libmachine: (addons-746456) Calling .Close
	I1104 10:38:25.674056   27967 main.go:141] libmachine: Successfully made call to close driver server
	I1104 10:38:25.674073   27967 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 10:38:26.963776   27967 pod_ready.go:103] pod "amd-gpu-device-plugin-g59mv" in "kube-system" namespace has status "Ready":"False"
	I1104 10:38:27.280067   27967 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (4.706559624s)
	I1104 10:38:27.280128   27967 main.go:141] libmachine: Making call to close driver server
	I1104 10:38:27.280141   27967 main.go:141] libmachine: (addons-746456) Calling .Close
	I1104 10:38:27.280384   27967 main.go:141] libmachine: Successfully made call to close driver server
	I1104 10:38:27.280427   27967 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 10:38:27.280439   27967 main.go:141] libmachine: Making call to close driver server
	I1104 10:38:27.280447   27967 main.go:141] libmachine: (addons-746456) Calling .Close
	I1104 10:38:27.280408   27967 main.go:141] libmachine: (addons-746456) DBG | Closing plugin on server side
	I1104 10:38:27.280640   27967 main.go:141] libmachine: Successfully made call to close driver server
	I1104 10:38:27.280660   27967 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 10:38:27.280662   27967 main.go:141] libmachine: (addons-746456) DBG | Closing plugin on server side
	I1104 10:38:27.404540   27967 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.82238513s)
	I1104 10:38:27.404599   27967 main.go:141] libmachine: Making call to close driver server
	I1104 10:38:27.404611   27967 main.go:141] libmachine: (addons-746456) Calling .Close
	I1104 10:38:27.404662   27967 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.777433817s)
	I1104 10:38:27.404726   27967 main.go:141] libmachine: Making call to close driver server
	I1104 10:38:27.404740   27967 main.go:141] libmachine: (addons-746456) Calling .Close
	I1104 10:38:27.404941   27967 main.go:141] libmachine: Successfully made call to close driver server
	I1104 10:38:27.404956   27967 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 10:38:27.404964   27967 main.go:141] libmachine: Making call to close driver server
	I1104 10:38:27.404971   27967 main.go:141] libmachine: (addons-746456) Calling .Close
	I1104 10:38:27.405048   27967 main.go:141] libmachine: Successfully made call to close driver server
	I1104 10:38:27.405058   27967 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 10:38:27.405071   27967 main.go:141] libmachine: Making call to close driver server
	I1104 10:38:27.405084   27967 main.go:141] libmachine: (addons-746456) Calling .Close
	I1104 10:38:27.405143   27967 main.go:141] libmachine: (addons-746456) DBG | Closing plugin on server side
	I1104 10:38:27.405180   27967 main.go:141] libmachine: Successfully made call to close driver server
	I1104 10:38:27.405191   27967 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 10:38:27.405270   27967 main.go:141] libmachine: Successfully made call to close driver server
	I1104 10:38:27.405278   27967 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 10:38:27.553684   27967 main.go:141] libmachine: Making call to close driver server
	I1104 10:38:27.553714   27967 main.go:141] libmachine: (addons-746456) Calling .Close
	I1104 10:38:27.553992   27967 main.go:141] libmachine: Successfully made call to close driver server
	I1104 10:38:27.554027   27967 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 10:38:27.554012   27967 main.go:141] libmachine: (addons-746456) DBG | Closing plugin on server side
	I1104 10:38:29.127059   27967 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1104 10:38:29.127100   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHHostname
	I1104 10:38:29.130388   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:38:29.130849   27967 main.go:141] libmachine: (addons-746456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:d7:13", ip: ""} in network mk-addons-746456: {Iface:virbr1 ExpiryTime:2024-11-04 11:37:54 +0000 UTC Type:0 Mac:52:54:00:a0:d7:13 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:addons-746456 Clientid:01:52:54:00:a0:d7:13}
	I1104 10:38:29.130874   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined IP address 192.168.39.4 and MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:38:29.131102   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHPort
	I1104 10:38:29.131375   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHKeyPath
	I1104 10:38:29.131556   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHUsername
	I1104 10:38:29.131711   27967 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/addons-746456/id_rsa Username:docker}
	I1104 10:38:29.348654   27967 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1104 10:38:29.479550   27967 addons.go:234] Setting addon gcp-auth=true in "addons-746456"
	I1104 10:38:29.479603   27967 host.go:66] Checking if "addons-746456" exists ...
	I1104 10:38:29.480016   27967 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 10:38:29.480050   27967 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 10:38:29.493600   27967 pod_ready.go:103] pod "amd-gpu-device-plugin-g59mv" in "kube-system" namespace has status "Ready":"False"
	I1104 10:38:29.495833   27967 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39825
	I1104 10:38:29.496335   27967 main.go:141] libmachine: () Calling .GetVersion
	I1104 10:38:29.496775   27967 main.go:141] libmachine: Using API Version  1
	I1104 10:38:29.496794   27967 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 10:38:29.497217   27967 main.go:141] libmachine: () Calling .GetMachineName
	I1104 10:38:29.497816   27967 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 10:38:29.497857   27967 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 10:38:29.512793   27967 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41689
	I1104 10:38:29.513278   27967 main.go:141] libmachine: () Calling .GetVersion
	I1104 10:38:29.513759   27967 main.go:141] libmachine: Using API Version  1
	I1104 10:38:29.513783   27967 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 10:38:29.514084   27967 main.go:141] libmachine: () Calling .GetMachineName
	I1104 10:38:29.514255   27967 main.go:141] libmachine: (addons-746456) Calling .GetState
	I1104 10:38:29.515840   27967 main.go:141] libmachine: (addons-746456) Calling .DriverName
	I1104 10:38:29.516044   27967 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1104 10:38:29.516063   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHHostname
	I1104 10:38:29.518734   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:38:29.519100   27967 main.go:141] libmachine: (addons-746456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:d7:13", ip: ""} in network mk-addons-746456: {Iface:virbr1 ExpiryTime:2024-11-04 11:37:54 +0000 UTC Type:0 Mac:52:54:00:a0:d7:13 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:addons-746456 Clientid:01:52:54:00:a0:d7:13}
	I1104 10:38:29.519125   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined IP address 192.168.39.4 and MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:38:29.519268   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHPort
	I1104 10:38:29.519490   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHKeyPath
	I1104 10:38:29.519617   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHUsername
	I1104 10:38:29.519777   27967 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/addons-746456/id_rsa Username:docker}
	I1104 10:38:30.304546   27967 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.596586448s)
	I1104 10:38:30.304594   27967 main.go:141] libmachine: Making call to close driver server
	I1104 10:38:30.304605   27967 main.go:141] libmachine: (addons-746456) Calling .Close
	I1104 10:38:30.304608   27967 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.261651532s)
	I1104 10:38:30.304645   27967 main.go:141] libmachine: Making call to close driver server
	I1104 10:38:30.304667   27967 main.go:141] libmachine: (addons-746456) Calling .Close
	I1104 10:38:30.304705   27967 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.119546607s)
	I1104 10:38:30.304654   27967 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.200043561s)
	I1104 10:38:30.304737   27967 main.go:141] libmachine: Making call to close driver server
	I1104 10:38:30.304737   27967 main.go:141] libmachine: Making call to close driver server
	I1104 10:38:30.304745   27967 main.go:141] libmachine: (addons-746456) Calling .Close
	I1104 10:38:30.304748   27967 main.go:141] libmachine: (addons-746456) Calling .Close
	I1104 10:38:30.304858   27967 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.874343579s)
	W1104 10:38:30.304893   27967 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1104 10:38:30.304913   27967 retry.go:31] will retry after 295.173531ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
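The failed apply above is an ordering problem rather than a broken manifest: csi-hostpath-snapshotclass.yaml defines a VolumeSnapshotClass, and it is submitted in the same kubectl invocation as the CRDs that introduce that kind, before the API server has registered the snapshot.storage.k8s.io/v1 mapping. The addon installer simply retries after ~295ms; the re-run at 10:38:30.600 below (with --force added) succeeds, mainly because the CRDs created by the first attempt have become established by then. A minimal manual equivalent, assuming the same manifest paths on the node and writing plain kubectl in place of the full /var/lib/minikube/binaries/v1.31.2/kubectl path, is to let the CRDs settle before applying the classes and the controller:

    # CRDs first
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig kubectl apply \
      -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml \
      -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml \
      -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
    # wait until the API server actually serves the new kind
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig kubectl wait --for=condition=Established --timeout=60s \
      crd/volumesnapshotclasses.snapshot.storage.k8s.io
    # now the snapshot class and the controller that consumes it
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig kubectl apply \
      -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml \
      -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml \
      -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml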
	I1104 10:38:30.305050   27967 main.go:141] libmachine: (addons-746456) DBG | Closing plugin on server side
	I1104 10:38:30.305056   27967 main.go:141] libmachine: (addons-746456) DBG | Closing plugin on server side
	I1104 10:38:30.305058   27967 main.go:141] libmachine: Successfully made call to close driver server
	I1104 10:38:30.305070   27967 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 10:38:30.305072   27967 main.go:141] libmachine: Successfully made call to close driver server
	I1104 10:38:30.305079   27967 main.go:141] libmachine: Making call to close driver server
	I1104 10:38:30.305083   27967 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 10:38:30.305085   27967 main.go:141] libmachine: Successfully made call to close driver server
	I1104 10:38:30.305086   27967 main.go:141] libmachine: (addons-746456) Calling .Close
	I1104 10:38:30.305095   27967 main.go:141] libmachine: Successfully made call to close driver server
	I1104 10:38:30.305103   27967 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 10:38:30.305110   27967 main.go:141] libmachine: (addons-746456) DBG | Closing plugin on server side
	I1104 10:38:30.305110   27967 main.go:141] libmachine: Making call to close driver server
	I1104 10:38:30.305125   27967 main.go:141] libmachine: (addons-746456) Calling .Close
	I1104 10:38:30.305125   27967 main.go:141] libmachine: Making call to close driver server
	I1104 10:38:30.305135   27967 main.go:141] libmachine: (addons-746456) Calling .Close
	I1104 10:38:30.305168   27967 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 10:38:30.305180   27967 main.go:141] libmachine: Making call to close driver server
	I1104 10:38:30.305186   27967 main.go:141] libmachine: (addons-746456) Calling .Close
	I1104 10:38:30.305199   27967 main.go:141] libmachine: (addons-746456) DBG | Closing plugin on server side
	I1104 10:38:30.305463   27967 main.go:141] libmachine: (addons-746456) DBG | Closing plugin on server side
	I1104 10:38:30.305479   27967 main.go:141] libmachine: Successfully made call to close driver server
	I1104 10:38:30.305488   27967 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 10:38:30.305493   27967 main.go:141] libmachine: (addons-746456) DBG | Closing plugin on server side
	I1104 10:38:30.305497   27967 addons.go:475] Verifying addon registry=true in "addons-746456"
	I1104 10:38:30.305515   27967 main.go:141] libmachine: Successfully made call to close driver server
	I1104 10:38:30.305520   27967 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 10:38:30.305526   27967 addons.go:475] Verifying addon metrics-server=true in "addons-746456"
	I1104 10:38:30.305592   27967 main.go:141] libmachine: Successfully made call to close driver server
	I1104 10:38:30.305601   27967 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 10:38:30.305481   27967 main.go:141] libmachine: (addons-746456) DBG | Closing plugin on server side
	I1104 10:38:30.305560   27967 main.go:141] libmachine: Successfully made call to close driver server
	I1104 10:38:30.307439   27967 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 10:38:30.307450   27967 addons.go:475] Verifying addon ingress=true in "addons-746456"
	I1104 10:38:30.305574   27967 main.go:141] libmachine: (addons-746456) DBG | Closing plugin on server side
	I1104 10:38:30.307869   27967 out.go:177] * Verifying registry addon...
	I1104 10:38:30.307867   27967 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-746456 service yakd-dashboard -n yakd-dashboard
	
	I1104 10:38:30.309006   27967 out.go:177] * Verifying ingress addon...
	I1104 10:38:30.310182   27967 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1104 10:38:30.311459   27967 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1104 10:38:30.345538   27967 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1104 10:38:30.345560   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:38:30.346157   27967 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1104 10:38:30.346181   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:38:30.600262   27967 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1104 10:38:30.826962   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:38:30.827118   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:38:30.851348   27967 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.748405797s)
	I1104 10:38:30.851395   27967 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.335329105s)
	I1104 10:38:30.851402   27967 main.go:141] libmachine: Making call to close driver server
	I1104 10:38:30.851416   27967 main.go:141] libmachine: (addons-746456) Calling .Close
	I1104 10:38:30.851724   27967 main.go:141] libmachine: (addons-746456) DBG | Closing plugin on server side
	I1104 10:38:30.851758   27967 main.go:141] libmachine: Successfully made call to close driver server
	I1104 10:38:30.851767   27967 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 10:38:30.851775   27967 main.go:141] libmachine: Making call to close driver server
	I1104 10:38:30.851785   27967 main.go:141] libmachine: (addons-746456) Calling .Close
	I1104 10:38:30.852015   27967 main.go:141] libmachine: Successfully made call to close driver server
	I1104 10:38:30.852029   27967 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 10:38:30.852038   27967 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-746456"
	I1104 10:38:30.852847   27967 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1104 10:38:30.853728   27967 out.go:177] * Verifying csi-hostpath-driver addon...
	I1104 10:38:30.855649   27967 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1104 10:38:30.856708   27967 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1104 10:38:30.857056   27967 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1104 10:38:30.857071   27967 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1104 10:38:30.886780   27967 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1104 10:38:30.886800   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:38:30.963086   27967 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1104 10:38:30.963114   27967 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1104 10:38:31.039711   27967 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1104 10:38:31.039740   27967 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1104 10:38:31.086834   27967 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1104 10:38:31.315487   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:38:31.316260   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:38:31.361772   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:38:31.776007   27967 pod_ready.go:103] pod "amd-gpu-device-plugin-g59mv" in "kube-system" namespace has status "Ready":"False"
	I1104 10:38:31.815114   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:38:31.815572   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:38:31.861450   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:38:32.349204   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:38:32.350157   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:38:32.418902   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:38:32.632721   27967 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.032388557s)
	I1104 10:38:32.632783   27967 main.go:141] libmachine: Making call to close driver server
	I1104 10:38:32.632793   27967 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.545924136s)
	I1104 10:38:32.632836   27967 main.go:141] libmachine: Making call to close driver server
	I1104 10:38:32.632802   27967 main.go:141] libmachine: (addons-746456) Calling .Close
	I1104 10:38:32.632856   27967 main.go:141] libmachine: (addons-746456) Calling .Close
	I1104 10:38:32.633189   27967 main.go:141] libmachine: Successfully made call to close driver server
	I1104 10:38:32.633207   27967 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 10:38:32.633238   27967 main.go:141] libmachine: Making call to close driver server
	I1104 10:38:32.633249   27967 main.go:141] libmachine: (addons-746456) Calling .Close
	I1104 10:38:32.633290   27967 main.go:141] libmachine: (addons-746456) DBG | Closing plugin on server side
	I1104 10:38:32.633327   27967 main.go:141] libmachine: Successfully made call to close driver server
	I1104 10:38:32.633344   27967 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 10:38:32.633359   27967 main.go:141] libmachine: Making call to close driver server
	I1104 10:38:32.633368   27967 main.go:141] libmachine: (addons-746456) Calling .Close
	I1104 10:38:32.634736   27967 main.go:141] libmachine: (addons-746456) DBG | Closing plugin on server side
	I1104 10:38:32.634744   27967 main.go:141] libmachine: (addons-746456) DBG | Closing plugin on server side
	I1104 10:38:32.634762   27967 main.go:141] libmachine: Successfully made call to close driver server
	I1104 10:38:32.634774   27967 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 10:38:32.634742   27967 main.go:141] libmachine: Successfully made call to close driver server
	I1104 10:38:32.634845   27967 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 10:38:32.637764   27967 addons.go:475] Verifying addon gcp-auth=true in "addons-746456"
	I1104 10:38:32.640411   27967 out.go:177] * Verifying gcp-auth addon...
	I1104 10:38:32.642429   27967 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1104 10:38:32.645805   27967 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1104 10:38:32.645821   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:38:32.815767   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:38:32.816238   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:38:32.861468   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:38:33.146052   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:38:33.318523   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:38:33.318761   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:38:33.364934   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:38:33.646814   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:38:33.814842   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:38:33.818462   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:38:33.861363   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:38:34.145838   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:38:34.276011   27967 pod_ready.go:103] pod "amd-gpu-device-plugin-g59mv" in "kube-system" namespace has status "Ready":"False"
	I1104 10:38:34.314925   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:38:34.316042   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:38:34.361848   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:38:34.645573   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:38:34.814946   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:38:34.815267   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:38:34.861791   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:38:35.145309   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:38:35.315014   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:38:35.315759   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:38:35.361023   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:38:35.645836   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:38:35.813871   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:38:35.816048   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:38:35.861154   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:38:36.145678   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:38:36.315082   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:38:36.315549   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:38:36.360965   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:38:36.646307   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:38:37.161190   27967 pod_ready.go:103] pod "amd-gpu-device-plugin-g59mv" in "kube-system" namespace has status "Ready":"False"
	I1104 10:38:37.161260   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:38:37.162075   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:38:37.162199   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:38:37.162740   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:38:37.323978   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:38:37.324129   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:38:37.361435   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:38:37.647008   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:38:37.815445   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:38:37.815739   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:38:37.861015   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:38:38.147128   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:38:38.314167   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:38:38.315929   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:38:38.361294   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:38:38.645639   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:38:38.815335   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:38:38.815757   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:38:38.861587   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:38:39.145957   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:38:39.278864   27967 pod_ready.go:103] pod "amd-gpu-device-plugin-g59mv" in "kube-system" namespace has status "Ready":"False"
	I1104 10:38:39.316221   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:38:39.316675   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:38:39.361258   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:38:39.645753   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:38:39.814544   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:38:39.816608   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:38:39.861834   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:38:40.146481   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:38:40.405421   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:38:40.405978   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:38:40.407279   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:38:40.646103   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:38:40.814047   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:38:40.816540   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:38:40.861150   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:38:41.145915   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:38:41.315530   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:38:41.316265   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:38:41.361510   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:38:41.645706   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:38:42.066102   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:38:42.066491   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:38:42.067551   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:38:42.071354   27967 pod_ready.go:103] pod "amd-gpu-device-plugin-g59mv" in "kube-system" namespace has status "Ready":"False"
	I1104 10:38:42.145443   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:38:42.314734   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:38:42.315348   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:38:42.360265   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:38:42.651277   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:38:42.814782   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:38:42.815860   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:38:42.861142   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:38:43.146740   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:38:43.315647   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:38:43.316007   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:38:43.362006   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:38:43.646598   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:38:43.813334   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:38:43.815354   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:38:43.861355   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:38:44.145698   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:38:44.276426   27967 pod_ready.go:103] pod "amd-gpu-device-plugin-g59mv" in "kube-system" namespace has status "Ready":"False"
	I1104 10:38:44.313477   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:38:44.315901   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:38:44.361245   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:38:44.646663   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:38:44.813903   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:38:44.816644   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:38:44.861510   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:38:45.146198   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:38:45.315376   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:38:45.315959   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:38:45.361450   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:38:45.645910   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:38:45.813641   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:38:45.815235   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:38:45.862849   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:38:46.146404   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:38:46.275375   27967 pod_ready.go:93] pod "amd-gpu-device-plugin-g59mv" in "kube-system" namespace has status "Ready":"True"
	I1104 10:38:46.275410   27967 pod_ready.go:82] duration metric: took 21.505206172s for pod "amd-gpu-device-plugin-g59mv" in "kube-system" namespace to be "Ready" ...
	I1104 10:38:46.275421   27967 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-gx67b" in "kube-system" namespace to be "Ready" ...
	I1104 10:38:46.277374   27967 pod_ready.go:98] error getting pod "coredns-7c65d6cfc9-gx67b" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-gx67b" not found
	I1104 10:38:46.277391   27967 pod_ready.go:82] duration metric: took 1.964714ms for pod "coredns-7c65d6cfc9-gx67b" in "kube-system" namespace to be "Ready" ...
	E1104 10:38:46.277400   27967 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-7c65d6cfc9-gx67b" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-gx67b" not found
	I1104 10:38:46.277406   27967 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-hwwcg" in "kube-system" namespace to be "Ready" ...
	I1104 10:38:46.281385   27967 pod_ready.go:93] pod "coredns-7c65d6cfc9-hwwcg" in "kube-system" namespace has status "Ready":"True"
	I1104 10:38:46.281399   27967 pod_ready.go:82] duration metric: took 3.987491ms for pod "coredns-7c65d6cfc9-hwwcg" in "kube-system" namespace to be "Ready" ...
	I1104 10:38:46.281413   27967 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-746456" in "kube-system" namespace to be "Ready" ...
	I1104 10:38:46.285087   27967 pod_ready.go:93] pod "etcd-addons-746456" in "kube-system" namespace has status "Ready":"True"
	I1104 10:38:46.285104   27967 pod_ready.go:82] duration metric: took 3.684962ms for pod "etcd-addons-746456" in "kube-system" namespace to be "Ready" ...
	I1104 10:38:46.285111   27967 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-746456" in "kube-system" namespace to be "Ready" ...
	I1104 10:38:46.289184   27967 pod_ready.go:93] pod "kube-apiserver-addons-746456" in "kube-system" namespace has status "Ready":"True"
	I1104 10:38:46.289266   27967 pod_ready.go:82] duration metric: took 4.146975ms for pod "kube-apiserver-addons-746456" in "kube-system" namespace to be "Ready" ...
	I1104 10:38:46.289287   27967 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-746456" in "kube-system" namespace to be "Ready" ...
	I1104 10:38:46.313655   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:38:46.314697   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:38:46.360986   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:38:46.479923   27967 pod_ready.go:93] pod "kube-controller-manager-addons-746456" in "kube-system" namespace has status "Ready":"True"
	I1104 10:38:46.479948   27967 pod_ready.go:82] duration metric: took 190.642695ms for pod "kube-controller-manager-addons-746456" in "kube-system" namespace to be "Ready" ...
	I1104 10:38:46.479961   27967 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-s6v2l" in "kube-system" namespace to be "Ready" ...
	I1104 10:38:46.645873   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:38:46.816131   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:38:46.816735   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:38:46.861334   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:38:46.872878   27967 pod_ready.go:93] pod "kube-proxy-s6v2l" in "kube-system" namespace has status "Ready":"True"
	I1104 10:38:46.872909   27967 pod_ready.go:82] duration metric: took 392.939415ms for pod "kube-proxy-s6v2l" in "kube-system" namespace to be "Ready" ...
	I1104 10:38:46.872922   27967 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-746456" in "kube-system" namespace to be "Ready" ...
	I1104 10:38:47.146711   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:38:47.273100   27967 pod_ready.go:93] pod "kube-scheduler-addons-746456" in "kube-system" namespace has status "Ready":"True"
	I1104 10:38:47.273126   27967 pod_ready.go:82] duration metric: took 400.195745ms for pod "kube-scheduler-addons-746456" in "kube-system" namespace to be "Ready" ...
	I1104 10:38:47.273139   27967 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-646xz" in "kube-system" namespace to be "Ready" ...
	I1104 10:38:47.314050   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:38:47.315594   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:38:47.361170   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:38:47.645575   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:38:47.673800   27967 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-646xz" in "kube-system" namespace has status "Ready":"True"
	I1104 10:38:47.673823   27967 pod_ready.go:82] duration metric: took 400.675069ms for pod "nvidia-device-plugin-daemonset-646xz" in "kube-system" namespace to be "Ready" ...
	I1104 10:38:47.673834   27967 pod_ready.go:39] duration metric: took 22.913576674s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1104 10:38:47.673877   27967 api_server.go:52] waiting for apiserver process to appear ...
	I1104 10:38:47.673930   27967 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 10:38:47.709237   27967 api_server.go:72] duration metric: took 25.794219874s to wait for apiserver process to appear ...
	I1104 10:38:47.709263   27967 api_server.go:88] waiting for apiserver healthz status ...
	I1104 10:38:47.709285   27967 api_server.go:253] Checking apiserver healthz at https://192.168.39.4:8443/healthz ...
	I1104 10:38:47.713118   27967 api_server.go:279] https://192.168.39.4:8443/healthz returned 200:
	ok
	I1104 10:38:47.714082   27967 api_server.go:141] control plane version: v1.31.2
	I1104 10:38:47.714100   27967 api_server.go:131] duration metric: took 4.831792ms to wait for apiserver health ...
	I1104 10:38:47.714107   27967 system_pods.go:43] waiting for kube-system pods to appear ...
	I1104 10:38:47.815357   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:38:47.816047   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:38:47.861834   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:38:47.880010   27967 system_pods.go:59] 18 kube-system pods found
	I1104 10:38:47.880037   27967 system_pods.go:61] "amd-gpu-device-plugin-g59mv" [b0defe51-9739-4bbe-b65b-2b4cf8941f5a] Running
	I1104 10:38:47.880043   27967 system_pods.go:61] "coredns-7c65d6cfc9-hwwcg" [82ce98e6-792d-4cf2-80a3-e2e59fd840a1] Running
	I1104 10:38:47.880050   27967 system_pods.go:61] "csi-hostpath-attacher-0" [aba9d3ac-9e13-4702-af54-df0b53064a49] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1104 10:38:47.880056   27967 system_pods.go:61] "csi-hostpath-resizer-0" [c7a256af-f053-46a8-99b2-44e43137ec86] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1104 10:38:47.880064   27967 system_pods.go:61] "csi-hostpathplugin-jrm6t" [57cc4546-427d-4949-9fc9-3e6dac0b0fd8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1104 10:38:47.880069   27967 system_pods.go:61] "etcd-addons-746456" [1a2daee5-f509-4af0-a1bc-ad20c18ae356] Running
	I1104 10:38:47.880073   27967 system_pods.go:61] "kube-apiserver-addons-746456" [db2cdd30-8040-4f66-838b-80c258b94cbe] Running
	I1104 10:38:47.880077   27967 system_pods.go:61] "kube-controller-manager-addons-746456" [3546adf7-6f14-40e8-96a9-9d8f35428855] Running
	I1104 10:38:47.880087   27967 system_pods.go:61] "kube-ingress-dns-minikube" [34b1a1a6-34cd-43a0-a688-fd9bfcab67c4] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1104 10:38:47.880093   27967 system_pods.go:61] "kube-proxy-s6v2l" [db7c73f6-c992-4a9f-bab4-299ffd389484] Running
	I1104 10:38:47.880102   27967 system_pods.go:61] "kube-scheduler-addons-746456" [9efc1274-1eb2-4904-a322-6ab4a661222d] Running
	I1104 10:38:47.880109   27967 system_pods.go:61] "metrics-server-84c5f94fbc-7c9jd" [c431d0a4-e34e-4f14-a95d-3223d4486d7c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1104 10:38:47.880114   27967 system_pods.go:61] "nvidia-device-plugin-daemonset-646xz" [2de93991-ff75-4ba5-814e-4fbe32bd9b24] Running
	I1104 10:38:47.880122   27967 system_pods.go:61] "registry-66c9cd494c-gh6ft" [8fa29892-d576-414b-9dbb-a78812ace5fd] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1104 10:38:47.880132   27967 system_pods.go:61] "registry-proxy-r9qc2" [f8e1cbae-d518-45fa-8228-27e32339f030] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1104 10:38:47.880141   27967 system_pods.go:61] "snapshot-controller-56fcc65765-4l5bn" [8ae86336-7bdb-4245-895c-34b46444de04] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1104 10:38:47.880151   27967 system_pods.go:61] "snapshot-controller-56fcc65765-5dbpr" [3e7db880-cb20-42ce-9854-c64a11ee5a9c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1104 10:38:47.880160   27967 system_pods.go:61] "storage-provisioner" [c7696953-ca67-4d3c-a7ba-6a6538b9589a] Running
	I1104 10:38:47.880169   27967 system_pods.go:74] duration metric: took 166.056255ms to wait for pod list to return data ...
	I1104 10:38:47.880181   27967 default_sa.go:34] waiting for default service account to be created ...
	I1104 10:38:48.073371   27967 default_sa.go:45] found service account: "default"
	I1104 10:38:48.073394   27967 default_sa.go:55] duration metric: took 193.207402ms for default service account to be created ...
	I1104 10:38:48.073402   27967 system_pods.go:116] waiting for k8s-apps to be running ...
	I1104 10:38:48.146857   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:38:48.279838   27967 system_pods.go:86] 18 kube-system pods found
	I1104 10:38:48.279866   27967 system_pods.go:89] "amd-gpu-device-plugin-g59mv" [b0defe51-9739-4bbe-b65b-2b4cf8941f5a] Running
	I1104 10:38:48.279872   27967 system_pods.go:89] "coredns-7c65d6cfc9-hwwcg" [82ce98e6-792d-4cf2-80a3-e2e59fd840a1] Running
	I1104 10:38:48.279880   27967 system_pods.go:89] "csi-hostpath-attacher-0" [aba9d3ac-9e13-4702-af54-df0b53064a49] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1104 10:38:48.279887   27967 system_pods.go:89] "csi-hostpath-resizer-0" [c7a256af-f053-46a8-99b2-44e43137ec86] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1104 10:38:48.279894   27967 system_pods.go:89] "csi-hostpathplugin-jrm6t" [57cc4546-427d-4949-9fc9-3e6dac0b0fd8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1104 10:38:48.279951   27967 system_pods.go:89] "etcd-addons-746456" [1a2daee5-f509-4af0-a1bc-ad20c18ae356] Running
	I1104 10:38:48.279977   27967 system_pods.go:89] "kube-apiserver-addons-746456" [db2cdd30-8040-4f66-838b-80c258b94cbe] Running
	I1104 10:38:48.279983   27967 system_pods.go:89] "kube-controller-manager-addons-746456" [3546adf7-6f14-40e8-96a9-9d8f35428855] Running
	I1104 10:38:48.279994   27967 system_pods.go:89] "kube-ingress-dns-minikube" [34b1a1a6-34cd-43a0-a688-fd9bfcab67c4] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1104 10:38:48.280001   27967 system_pods.go:89] "kube-proxy-s6v2l" [db7c73f6-c992-4a9f-bab4-299ffd389484] Running
	I1104 10:38:48.280006   27967 system_pods.go:89] "kube-scheduler-addons-746456" [9efc1274-1eb2-4904-a322-6ab4a661222d] Running
	I1104 10:38:48.280012   27967 system_pods.go:89] "metrics-server-84c5f94fbc-7c9jd" [c431d0a4-e34e-4f14-a95d-3223d4486d7c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1104 10:38:48.280019   27967 system_pods.go:89] "nvidia-device-plugin-daemonset-646xz" [2de93991-ff75-4ba5-814e-4fbe32bd9b24] Running
	I1104 10:38:48.280027   27967 system_pods.go:89] "registry-66c9cd494c-gh6ft" [8fa29892-d576-414b-9dbb-a78812ace5fd] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1104 10:38:48.280037   27967 system_pods.go:89] "registry-proxy-r9qc2" [f8e1cbae-d518-45fa-8228-27e32339f030] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1104 10:38:48.280050   27967 system_pods.go:89] "snapshot-controller-56fcc65765-4l5bn" [8ae86336-7bdb-4245-895c-34b46444de04] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1104 10:38:48.280062   27967 system_pods.go:89] "snapshot-controller-56fcc65765-5dbpr" [3e7db880-cb20-42ce-9854-c64a11ee5a9c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1104 10:38:48.280072   27967 system_pods.go:89] "storage-provisioner" [c7696953-ca67-4d3c-a7ba-6a6538b9589a] Running
	I1104 10:38:48.280084   27967 system_pods.go:126] duration metric: took 206.676825ms to wait for k8s-apps to be running ...
	I1104 10:38:48.280095   27967 system_svc.go:44] waiting for kubelet service to be running ....
	I1104 10:38:48.280142   27967 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1104 10:38:48.300493   27967 system_svc.go:56] duration metric: took 20.388134ms WaitForService to wait for kubelet
	I1104 10:38:48.300524   27967 kubeadm.go:582] duration metric: took 26.385522166s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1104 10:38:48.300540   27967 node_conditions.go:102] verifying NodePressure condition ...
	I1104 10:38:48.313999   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:38:48.316348   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:38:48.360363   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:38:48.477053   27967 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1104 10:38:48.477126   27967 node_conditions.go:123] node cpu capacity is 2
	I1104 10:38:48.477147   27967 node_conditions.go:105] duration metric: took 176.601598ms to run NodePressure ...
	I1104 10:38:48.477162   27967 start.go:241] waiting for startup goroutines ...
	I1104 10:38:48.647417   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:38:48.813256   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:38:48.815488   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:38:48.861009   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:38:49.146731   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:38:49.313792   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:38:49.315113   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:38:49.667894   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:38:49.669454   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:38:49.813914   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:38:49.815637   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:38:49.861589   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:38:50.145477   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:38:50.313539   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:38:50.315480   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:38:50.361058   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:38:50.646192   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:38:50.815860   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:38:50.816533   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:38:50.861140   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:38:51.145296   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:38:51.317687   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:38:51.318577   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:38:51.362834   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:38:51.646818   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:38:51.815337   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:38:51.815416   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:38:51.860922   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:38:52.147402   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:38:52.318240   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:38:52.318764   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:38:52.361386   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:38:52.645660   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:38:52.815491   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:38:52.815578   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:38:52.860542   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:38:53.146641   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:38:53.313640   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:38:53.314927   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:38:53.361119   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:38:53.648301   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:38:53.814842   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:38:53.816604   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:38:53.860806   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:38:54.146312   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:38:54.315244   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:38:54.315400   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:38:54.361278   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:38:54.646675   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:38:54.816853   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:38:54.817978   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:38:54.864835   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:38:55.147073   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:38:55.314032   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:38:55.315590   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:38:55.361378   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:38:55.645834   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:38:55.816982   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:38:55.817008   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:38:55.861337   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:38:56.146886   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:38:56.315745   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:38:56.315769   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:38:56.361745   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:38:56.646634   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:38:56.814853   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:38:56.816102   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:38:56.861793   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:38:57.146977   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:38:57.315604   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:38:57.315756   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:38:57.360809   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:38:57.646185   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:38:57.815638   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:38:57.817025   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:38:57.861602   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:38:58.146468   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:38:58.313704   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:38:58.315785   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:38:58.361668   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:38:58.647726   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:38:58.813831   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:38:58.815668   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:38:58.861244   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:38:59.146435   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:38:59.313863   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:38:59.318161   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:38:59.361355   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:38:59.646288   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:38:59.816000   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:38:59.816187   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:38:59.861268   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:39:00.145602   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:00.314073   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:39:00.315390   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:00.361549   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:39:00.646089   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:00.814339   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:39:00.815898   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:00.861015   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:39:01.145485   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:01.313751   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:39:01.316482   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:01.360943   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:39:01.646864   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:01.814950   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:01.814957   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:39:01.861238   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:39:02.146239   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:02.316877   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:39:02.316887   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:02.361820   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:39:02.646740   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:02.815054   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:39:02.816500   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:02.861500   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:39:03.146400   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:03.314470   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:39:03.316431   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:03.361523   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:39:03.646118   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:03.816323   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:39:03.816673   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:03.860921   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:39:04.146227   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:04.315905   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:39:04.316023   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:04.362266   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:39:04.646673   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:04.813901   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:39:04.816412   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:04.861936   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:39:05.145682   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:05.328783   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:05.329268   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:39:05.361783   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:39:05.645866   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:06.109322   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:39:06.109936   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:06.111944   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:39:06.206947   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:06.316258   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:39:06.316608   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:06.417754   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:39:06.646104   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:06.813357   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:39:06.815456   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:06.860868   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:39:07.147889   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:07.316784   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:39:07.317219   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:07.363630   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:39:07.645931   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:07.816175   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:07.816240   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:39:07.862154   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:39:08.218990   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:08.314560   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:39:08.315623   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:08.364481   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:39:08.645411   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:08.813620   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:39:08.816475   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:08.861304   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:39:09.145710   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:09.314161   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:39:09.315799   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:09.360811   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:39:09.646105   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:09.814596   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:39:09.815743   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:09.861298   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:39:10.145948   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:10.314383   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:39:10.315764   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:10.361146   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:39:10.647145   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:10.814123   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:39:10.815526   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:10.860532   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:39:11.146612   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:11.315252   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:39:11.316452   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:11.360463   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:39:11.645644   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:11.820259   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:39:11.820337   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:11.862158   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:39:12.146736   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:12.315165   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:39:12.315393   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:12.360486   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:39:12.645944   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:12.813901   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:39:12.815319   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:12.861356   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:39:13.145543   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:13.314264   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:39:13.316036   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:13.361770   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:39:13.646094   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:13.816490   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:39:13.817315   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:13.916754   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:39:14.145827   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:14.314639   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:39:14.315580   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:14.362250   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:39:14.647075   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:14.814623   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:39:14.815772   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:14.861549   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:39:15.146090   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:15.314395   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:39:15.317168   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:15.362904   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:39:15.646444   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:15.815488   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:39:15.816067   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:15.861558   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:39:16.145951   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:16.315254   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:16.315913   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:39:16.361888   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:39:16.645805   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:16.813903   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:39:16.816031   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:16.861377   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:39:17.146815   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:17.318842   27967 kapi.go:107] duration metric: took 47.008658121s to wait for kubernetes.io/minikube-addons=registry ...
	I1104 10:39:17.319015   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:17.361872   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:39:17.646265   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:17.815780   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:17.860848   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:39:18.146992   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:18.315654   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:18.545545   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:39:18.650873   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:18.817320   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:18.862390   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:39:19.146721   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:19.318091   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:19.420045   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:39:19.645527   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:19.816110   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:19.862538   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:39:20.146568   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:20.316355   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:20.641153   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:39:20.646880   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:20.815534   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:20.917063   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:39:21.146307   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:21.316803   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:21.369895   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:39:21.646393   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:21.816281   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:21.861934   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:39:22.146082   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:22.315899   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:22.362428   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:39:22.646250   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:22.816663   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:22.860980   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:39:23.146923   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:23.315610   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:23.360751   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:39:23.649189   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:23.816026   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:23.861338   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:39:24.147538   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:24.316487   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:24.362660   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:39:24.646304   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:24.815786   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:24.860876   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:39:25.146267   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:25.316228   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:25.364023   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:39:25.646989   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:25.817034   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:25.921582   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:39:26.146709   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:26.315133   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:26.362005   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:39:26.646116   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:26.816057   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:26.862035   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:39:27.147062   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:27.316095   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:27.361493   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:39:27.645751   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:27.816655   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:27.860720   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:39:28.145891   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:28.315400   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:28.361276   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:39:28.646309   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:28.816082   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:28.861141   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:39:29.147299   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:29.315839   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:29.361759   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:39:29.647858   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:29.816888   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:29.861906   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:39:30.146197   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:30.315964   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:30.361298   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:39:30.645763   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:30.815779   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:30.860681   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:39:31.147021   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:31.315394   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:31.360780   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:39:31.645977   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:31.815551   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:31.860650   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:39:32.146425   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:32.316360   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:32.362355   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:39:32.645941   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:32.817609   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:32.923975   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:39:33.146710   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:33.315626   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:33.360507   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:39:33.646092   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:33.816510   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:33.861576   27967 kapi.go:107] duration metric: took 1m3.00486764s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1104 10:39:34.146565   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:34.315301   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:34.646171   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:34.816489   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:35.145599   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:35.316430   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:35.647212   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:35.818133   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:36.147956   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:36.316202   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:36.648619   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:36.815719   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:37.147189   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:37.316697   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:37.646856   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:37.815139   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:38.145342   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:38.315828   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:38.646561   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:38.818050   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:39.146360   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:39.316266   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:39.645842   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:39.815179   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:40.145937   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:40.315453   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:40.646893   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:40.815300   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:41.229036   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:41.315255   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:41.649342   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:41.816454   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:42.146902   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:42.316575   27967 kapi.go:107] duration metric: took 1m12.005114963s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1104 10:39:42.646190   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:43.146842   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:43.646543   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:44.146001   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:44.645491   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:45.146499   27967 kapi.go:107] duration metric: took 1m12.504065259s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1104 10:39:45.148256   27967 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-746456 cluster.
	I1104 10:39:45.149641   27967 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1104 10:39:45.150768   27967 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1104 10:39:45.152077   27967 out.go:177] * Enabled addons: nvidia-device-plugin, cloud-spanner, ingress-dns, amd-gpu-device-plugin, default-storageclass, inspektor-gadget, storage-provisioner, storage-provisioner-rancher, metrics-server, yakd, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I1104 10:39:45.153248   27967 addons.go:510] duration metric: took 1m23.238217592s for enable addons: enabled=[nvidia-device-plugin cloud-spanner ingress-dns amd-gpu-device-plugin default-storageclass inspektor-gadget storage-provisioner storage-provisioner-rancher metrics-server yakd volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I1104 10:39:45.153293   27967 start.go:246] waiting for cluster config update ...
	I1104 10:39:45.153311   27967 start.go:255] writing updated cluster config ...
	I1104 10:39:45.153555   27967 ssh_runner.go:195] Run: rm -f paused
	I1104 10:39:45.204906   27967 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1104 10:39:45.206645   27967 out.go:177] * Done! kubectl is now configured to use "addons-746456" cluster and "default" namespace by default
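	[editor's note] The gcp-auth advisory lines above mention opting a pod out of credential mounting by adding a label with the `gcp-auth-skip-secret` key. As a minimal, hypothetical sketch only (the pod name, image, and the "true" label value are assumptions for illustration and do not come from this test run), such a pod manifest might look like:

	# Hypothetical pod spec: opts out of gcp-auth credential mounting.
	# Label key taken from the minikube output above; the "true" value is an assumption.
	apiVersion: v1
	kind: Pod
	metadata:
	  name: no-gcp-creds            # illustrative name, not from the test run
	  labels:
	    gcp-auth-skip-secret: "true"
	spec:
	  containers:
	    - name: app
	      image: busybox            # placeholder image
	      command: ["sleep", "3600"]

	As the log notes, pods that already exist would need to be recreated, or the addon re-enabled with --refresh, for a change in mounting behavior to apply.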
	
	
	==> CRI-O <==
	Nov 04 10:42:45 addons-746456 crio[654]: time="2024-11-04 10:42:45.240199323Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d6fb1bca-0f55-414e-827a-d8e72603a61a name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 10:42:45 addons-746456 crio[654]: time="2024-11-04 10:42:45.241023425Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8c8247b219665e7af79b49bd811705a6f00d7664e4e6a19b057b565a7419fcca,PodSandboxId:07d9887c045b77b40580d5f537bb1e4fd98735cb712fdef7e37219efdfcdb2cf,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cb8f91112b6b50ead202f48bbf81cec4b34c254417254efd94c803f7dd718045,State:CONTAINER_RUNNING,CreatedAt:1730716824544235430,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0e748c47-c76c-4e32-a421-8bf0ac2fb2f6,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22a0933d1011da06e981b3d5a509bfb8f08b4d690e7f8e003abde640bfc7a20a,PodSandboxId:2b901bc38beda3e1cc44ffaa17ae41a1aea0a9903762b28e34cc7472c851d0ba,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1730716789949158008,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: cbb88fd7-9ca0-443f-811a-4fb498e9f134,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55427e41c205252b78e05235b9253feb0354799dca1f8c532129f78b6980a3e0,PodSandboxId:ca45cc89b4c660747b69876cbd471c69871b18f5e842ae19068ad669c748d77d,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1730716781355027089,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-5f85ff4588-x2rwk,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0d9d84ec-4a0d-4f6c-8289-0fbc2143768d,},Annotations:map[string]string{io.kubernetes.
container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:1dcb05d25a2df9d5fc3c6f14f8248c00efd17f612e50475d5165fdd3789348fd,PodSandboxId:fe78ff6d940e546965b7b91adaaf18ad06fe6d06214bf895921348da2c5a4a8e,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05
ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1730716765654467332,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-mpkx9,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 08863e3a-9245-4b36-a15f-1e29e2ecbaae,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0244a78608844ab2da31ca98b4e417fac487c500d58329905de812e09ba63fb,PodSandboxId:fd2026ddcb7426a3a0b08c185a2e40296fb91b084e5d5814be51f0502c7462b2,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,
RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1730716765532550444,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-d5k74,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 5e83a6d2-4d3b-44da-92b3-c492cb9163c2,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ca0ab3f19cacf965b6ae92bb488de26b67d0e6d4f126dbf7a12c20412f2d7ab,PodSandboxId:7441fd79a6caa23d8de0cc270be08c6bde16f1aa96383e2ad3e66128f583f8f2,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations
:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1730716736300926357,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-7c9jd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c431d0a4-e34e-4f14-a95d-3223d4486d7c,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5da218876a65635d2e1bdd5cc8048de9ad4762342998262bb1f9daf55cf6b45,PodSandboxId:9494341bee1d45170d0d1e2f530acd60b5b4fb4c241da2135b4b4f56047eebc4,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image
:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1730716731959033070,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34b1a1a6-34cd-43a0-a688-fd9bfcab67c4,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83f0bb1d840b34350d980761bff477027f88382946432d75ae93f8f88ab79e1e,PodSandboxId
:477489465d8e446c8befb47de4f7b75176648f950a188aad9bf04416bc1731b4,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1730716724912612237,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-g59mv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0defe51-9739-4bbe-b65b-2b4cf8941f5a,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:937a061c836e4f55dcbe4ded8cfc61ace0b16
d090889344de6647c05a5621b3c,PodSandboxId:eac6eb82fe6b9169d2c640bdedeedb960c65589a79e3cebe1f4bf28b4e718d01,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730716708237243547,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7696953-ca67-4d3c-a7ba-6a6538b9589a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f21710ba22a25e0b7ba912ffbb2a8216d81babaf26cdc2737
466634f337b3fa3,PodSandboxId:8a7c42c912620f50badbab272913bbd7da64acead4b62d4aec6e41af6213ffb2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730716705884220687,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-hwwcg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82ce98e6-792d-4cf2-80a3-e2e59fd840a1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6744bb877dd8872eaf6f3be107bfe149f989a2a495d09a2c1969a4438d36e62,PodSandboxId:ce2cee411f82e6c7701905f668a83b9ff4a8baefcac0b49549c379713beb0c23,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730716703208979193,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s6v2l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db7c73f6-c992-4a9f-bab4-299ffd389484,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e248e3297d1fda4a00b162a352356438ee94390c14eb5308505a4e49043096b5,PodSandboxId:1229cf81ec6fe9f869051608e3eb17303a9f8905ed7ec9d2320f7bae37d00ca0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730716691768610331,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-746456,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5037ea39efb47267e351c80eb85421d2,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.containe
r.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9519febde6ea5698d00bda0cef2d9f74a934c6d5e398abf71a162a7bca55abc0,PodSandboxId:1d53b2c4b0afd21c395952fb466c7b15091e38f4c46aa96d1f40f3807a6d500b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730716691770001047,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-746456,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc4d821b6fade2fb24822ab63a9657a9,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.
pod.terminationGracePeriod: 30,},},&Container{Id:b37d172943b0ded3845df617c273978f49a44cb3cbbf8228c8bd37f84ebd8d01,PodSandboxId:2c9e0009ce343fd64540d89da20303d8f93c7dbaabe7811fb85c2d72e8bc7092,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730716691751330311,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-746456,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e3932d85034570fdb4ca99178ea7d10,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2078670156607445e3f69e0c7d2edf82ea10c4a02877028154c691b079b3e25,PodSandboxId:d307e637dbda51467eee47aaa737a2d96eb4d154258389904bcb782839402f41,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730716691740600539,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-746456,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d028583daf790ca45711d2f2b6ff7f8,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.ter
minationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d6fb1bca-0f55-414e-827a-d8e72603a61a name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 10:42:45 addons-746456 crio[654]: time="2024-11-04 10:42:45.279283633Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=798e78b4-f04c-418f-8f2a-96875f1be15d name=/runtime.v1.RuntimeService/Version
	Nov 04 10:42:45 addons-746456 crio[654]: time="2024-11-04 10:42:45.279553162Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=798e78b4-f04c-418f-8f2a-96875f1be15d name=/runtime.v1.RuntimeService/Version
	Nov 04 10:42:45 addons-746456 crio[654]: time="2024-11-04 10:42:45.280481128Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e2cc9a6f-bf91-4f1b-a640-46ac81178f21 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 04 10:42:45 addons-746456 crio[654]: time="2024-11-04 10:42:45.281583730Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730716965281556360,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:594744,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e2cc9a6f-bf91-4f1b-a640-46ac81178f21 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 04 10:42:45 addons-746456 crio[654]: time="2024-11-04 10:42:45.282048311Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fd304b0c-eb15-44a4-8de8-f75f60164b4d name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 10:42:45 addons-746456 crio[654]: time="2024-11-04 10:42:45.282110252Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fd304b0c-eb15-44a4-8de8-f75f60164b4d name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 10:42:45 addons-746456 crio[654]: time="2024-11-04 10:42:45.282451317Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8c8247b219665e7af79b49bd811705a6f00d7664e4e6a19b057b565a7419fcca,PodSandboxId:07d9887c045b77b40580d5f537bb1e4fd98735cb712fdef7e37219efdfcdb2cf,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cb8f91112b6b50ead202f48bbf81cec4b34c254417254efd94c803f7dd718045,State:CONTAINER_RUNNING,CreatedAt:1730716824544235430,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0e748c47-c76c-4e32-a421-8bf0ac2fb2f6,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22a0933d1011da06e981b3d5a509bfb8f08b4d690e7f8e003abde640bfc7a20a,PodSandboxId:2b901bc38beda3e1cc44ffaa17ae41a1aea0a9903762b28e34cc7472c851d0ba,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1730716789949158008,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: cbb88fd7-9ca0-443f-811a-4fb498e9f134,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55427e41c205252b78e05235b9253feb0354799dca1f8c532129f78b6980a3e0,PodSandboxId:ca45cc89b4c660747b69876cbd471c69871b18f5e842ae19068ad669c748d77d,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1730716781355027089,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-5f85ff4588-x2rwk,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0d9d84ec-4a0d-4f6c-8289-0fbc2143768d,},Annotations:map[string]string{io.kubernetes.
container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:1dcb05d25a2df9d5fc3c6f14f8248c00efd17f612e50475d5165fdd3789348fd,PodSandboxId:fe78ff6d940e546965b7b91adaaf18ad06fe6d06214bf895921348da2c5a4a8e,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05
ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1730716765654467332,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-mpkx9,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 08863e3a-9245-4b36-a15f-1e29e2ecbaae,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0244a78608844ab2da31ca98b4e417fac487c500d58329905de812e09ba63fb,PodSandboxId:fd2026ddcb7426a3a0b08c185a2e40296fb91b084e5d5814be51f0502c7462b2,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,
RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1730716765532550444,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-d5k74,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 5e83a6d2-4d3b-44da-92b3-c492cb9163c2,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ca0ab3f19cacf965b6ae92bb488de26b67d0e6d4f126dbf7a12c20412f2d7ab,PodSandboxId:7441fd79a6caa23d8de0cc270be08c6bde16f1aa96383e2ad3e66128f583f8f2,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations
:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1730716736300926357,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-7c9jd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c431d0a4-e34e-4f14-a95d-3223d4486d7c,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5da218876a65635d2e1bdd5cc8048de9ad4762342998262bb1f9daf55cf6b45,PodSandboxId:9494341bee1d45170d0d1e2f530acd60b5b4fb4c241da2135b4b4f56047eebc4,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image
:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1730716731959033070,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34b1a1a6-34cd-43a0-a688-fd9bfcab67c4,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83f0bb1d840b34350d980761bff477027f88382946432d75ae93f8f88ab79e1e,PodSandboxId
:477489465d8e446c8befb47de4f7b75176648f950a188aad9bf04416bc1731b4,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1730716724912612237,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-g59mv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0defe51-9739-4bbe-b65b-2b4cf8941f5a,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:937a061c836e4f55dcbe4ded8cfc61ace0b16
d090889344de6647c05a5621b3c,PodSandboxId:eac6eb82fe6b9169d2c640bdedeedb960c65589a79e3cebe1f4bf28b4e718d01,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730716708237243547,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7696953-ca67-4d3c-a7ba-6a6538b9589a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f21710ba22a25e0b7ba912ffbb2a8216d81babaf26cdc2737
466634f337b3fa3,PodSandboxId:8a7c42c912620f50badbab272913bbd7da64acead4b62d4aec6e41af6213ffb2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730716705884220687,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-hwwcg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82ce98e6-792d-4cf2-80a3-e2e59fd840a1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6744bb877dd8872eaf6f3be107bfe149f989a2a495d09a2c1969a4438d36e62,PodSandboxId:ce2cee411f82e6c7701905f668a83b9ff4a8baefcac0b49549c379713beb0c23,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730716703208979193,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s6v2l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db7c73f6-c992-4a9f-bab4-299ffd389484,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e248e3297d1fda4a00b162a352356438ee94390c14eb5308505a4e49043096b5,PodSandboxId:1229cf81ec6fe9f869051608e3eb17303a9f8905ed7ec9d2320f7bae37d00ca0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730716691768610331,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-746456,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5037ea39efb47267e351c80eb85421d2,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.containe
r.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9519febde6ea5698d00bda0cef2d9f74a934c6d5e398abf71a162a7bca55abc0,PodSandboxId:1d53b2c4b0afd21c395952fb466c7b15091e38f4c46aa96d1f40f3807a6d500b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730716691770001047,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-746456,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc4d821b6fade2fb24822ab63a9657a9,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.
pod.terminationGracePeriod: 30,},},&Container{Id:b37d172943b0ded3845df617c273978f49a44cb3cbbf8228c8bd37f84ebd8d01,PodSandboxId:2c9e0009ce343fd64540d89da20303d8f93c7dbaabe7811fb85c2d72e8bc7092,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730716691751330311,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-746456,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e3932d85034570fdb4ca99178ea7d10,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2078670156607445e3f69e0c7d2edf82ea10c4a02877028154c691b079b3e25,PodSandboxId:d307e637dbda51467eee47aaa737a2d96eb4d154258389904bcb782839402f41,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730716691740600539,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-746456,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d028583daf790ca45711d2f2b6ff7f8,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.ter
minationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fd304b0c-eb15-44a4-8de8-f75f60164b4d name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 10:42:45 addons-746456 crio[654]: time="2024-11-04 10:42:45.289928576Z" level=debug msg="Content-Type from manifest GET is \"application/vnd.docker.distribution.manifest.list.v2+json\"" file="docker/docker_client.go:964"
	Nov 04 10:42:45 addons-746456 crio[654]: time="2024-11-04 10:42:45.290168752Z" level=debug msg="GET https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86" file="docker/docker_client.go:631"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	8c8247b219665       docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250                              2 minutes ago       Running             nginx                     0                   07d9887c045b7       nginx
	22a0933d1011d       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          2 minutes ago       Running             busybox                   0                   2b901bc38beda       busybox
	55427e41c2052       registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b             3 minutes ago       Running             controller                0                   ca45cc89b4c66       ingress-nginx-controller-5f85ff4588-x2rwk
	1dcb05d25a2df       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f   3 minutes ago       Exited              patch                     0                   fe78ff6d940e5       ingress-nginx-admission-patch-mpkx9
	c0244a7860884       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f   3 minutes ago       Exited              create                    0                   fd2026ddcb742       ingress-nginx-admission-create-d5k74
	7ca0ab3f19cac       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a        3 minutes ago       Running             metrics-server            0                   7441fd79a6caa       metrics-server-84c5f94fbc-7c9jd
	b5da218876a65       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab             3 minutes ago       Running             minikube-ingress-dns      0                   9494341bee1d4       kube-ingress-dns-minikube
	83f0bb1d840b3       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                     4 minutes ago       Running             amd-gpu-device-plugin     0                   477489465d8e4       amd-gpu-device-plugin-g59mv
	937a061c836e4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago       Running             storage-provisioner       0                   eac6eb82fe6b9       storage-provisioner
	f21710ba22a25       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                             4 minutes ago       Running             coredns                   0                   8a7c42c912620       coredns-7c65d6cfc9-hwwcg
	f6744bb877dd8       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                                             4 minutes ago       Running             kube-proxy                0                   ce2cee411f82e       kube-proxy-s6v2l
	9519febde6ea5       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                             4 minutes ago       Running             etcd                      0                   1d53b2c4b0afd       etcd-addons-746456
	e248e3297d1fd       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                                             4 minutes ago       Running             kube-scheduler            0                   1229cf81ec6fe       kube-scheduler-addons-746456
	b37d172943b0d       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                                             4 minutes ago       Running             kube-controller-manager   0                   2c9e0009ce343       kube-controller-manager-addons-746456
	a207867015660       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                                             4 minutes ago       Running             kube-apiserver            0                   d307e637dbda5       kube-apiserver-addons-746456
	
	
	==> coredns [f21710ba22a25e0b7ba912ffbb2a8216d81babaf26cdc2737466634f337b3fa3] <==
	[INFO] 10.244.0.9:58695 - 40016 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000598802s
	[INFO] 10.244.0.9:58695 - 58676 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000259884s
	[INFO] 10.244.0.9:58695 - 29326 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000360839s
	[INFO] 10.244.0.9:58695 - 11996 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000177767s
	[INFO] 10.244.0.9:58695 - 52404 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000183357s
	[INFO] 10.244.0.9:58695 - 48341 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000367085s
	[INFO] 10.244.0.9:58695 - 21397 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000365562s
	[INFO] 10.244.0.9:49861 - 36387 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000309666s
	[INFO] 10.244.0.9:49861 - 36751 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000128948s
	[INFO] 10.244.0.9:34839 - 27236 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00011201s
	[INFO] 10.244.0.9:34839 - 27584 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000408683s
	[INFO] 10.244.0.9:48711 - 48526 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00010482s
	[INFO] 10.244.0.9:48711 - 48223 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000229649s
	[INFO] 10.244.0.9:56975 - 2734 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000186636s
	[INFO] 10.244.0.9:56975 - 3007 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000345557s
	[INFO] 10.244.0.23:48888 - 46086 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000353454s
	[INFO] 10.244.0.23:45011 - 54839 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000154064s
	[INFO] 10.244.0.23:45135 - 59659 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000164412s
	[INFO] 10.244.0.23:45771 - 18086 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000149455s
	[INFO] 10.244.0.23:34513 - 9995 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000163501s
	[INFO] 10.244.0.23:56168 - 52977 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000111895s
	[INFO] 10.244.0.23:42304 - 38923 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 496 0.00132138s
	[INFO] 10.244.0.23:46450 - 35371 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001236283s
	[INFO] 10.244.0.26:32803 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000296636s
	[INFO] 10.244.0.26:34514 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000300551s
	
	
	==> describe nodes <==
	Name:               addons-746456
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-746456
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c03dd974d73f8853a1a57928c124797a5ae24dc4
	                    minikube.k8s.io/name=addons-746456
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_11_04T10_38_17_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-746456
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 04 Nov 2024 10:38:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-746456
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 04 Nov 2024 10:42:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 04 Nov 2024 10:40:51 +0000   Mon, 04 Nov 2024 10:38:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 04 Nov 2024 10:40:51 +0000   Mon, 04 Nov 2024 10:38:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 04 Nov 2024 10:40:51 +0000   Mon, 04 Nov 2024 10:38:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 04 Nov 2024 10:40:51 +0000   Mon, 04 Nov 2024 10:38:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.4
	  Hostname:    addons-746456
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 824bf4a601f5426ab4bb582ae703a9d2
	  System UUID:                824bf4a6-01f5-426a-b4bb-582ae703a9d2
	  Boot ID:                    43a61baf-811b-45a7-8f72-715fdd200ed5
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m58s
	  default                     hello-world-app-55bf9c44b4-ldhdr             0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m23s
	  ingress-nginx               ingress-nginx-controller-5f85ff4588-x2rwk    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         4m15s
	  kube-system                 amd-gpu-device-plugin-g59mv                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m21s
	  kube-system                 coredns-7c65d6cfc9-hwwcg                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m23s
	  kube-system                 etcd-addons-746456                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m28s
	  kube-system                 kube-apiserver-addons-746456                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m28s
	  kube-system                 kube-controller-manager-addons-746456        200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m28s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m20s
	  kube-system                 kube-proxy-s6v2l                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m23s
	  kube-system                 kube-scheduler-addons-746456                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m28s
	  kube-system                 metrics-server-84c5f94fbc-7c9jd              100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         4m19s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m19s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             460Mi (12%)  170Mi (4%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m21s                  kube-proxy       
	  Normal  Starting                 4m34s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m34s (x8 over 4m34s)  kubelet          Node addons-746456 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m34s (x8 over 4m34s)  kubelet          Node addons-746456 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m34s (x7 over 4m34s)  kubelet          Node addons-746456 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m34s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 4m29s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m29s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m28s                  kubelet          Node addons-746456 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m28s                  kubelet          Node addons-746456 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m28s                  kubelet          Node addons-746456 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m28s                  kubelet          Node addons-746456 status is now: NodeReady
	  Normal  RegisteredNode           4m24s                  node-controller  Node addons-746456 event: Registered Node addons-746456 in Controller
	
	
	==> dmesg <==
	[  +5.017993] kauditd_printk_skb: 121 callbacks suppressed
	[  +5.030257] kauditd_printk_skb: 153 callbacks suppressed
	[ +10.298373] kauditd_printk_skb: 64 callbacks suppressed
	[Nov 4 10:39] kauditd_printk_skb: 9 callbacks suppressed
	[  +5.598826] kauditd_printk_skb: 2 callbacks suppressed
	[  +6.667129] kauditd_printk_skb: 24 callbacks suppressed
	[  +5.088108] kauditd_printk_skb: 7 callbacks suppressed
	[  +6.708059] kauditd_printk_skb: 10 callbacks suppressed
	[  +5.467297] kauditd_printk_skb: 59 callbacks suppressed
	[  +5.748897] kauditd_printk_skb: 13 callbacks suppressed
	[  +5.303890] kauditd_printk_skb: 7 callbacks suppressed
	[  +5.582707] kauditd_printk_skb: 1 callbacks suppressed
	[  +6.757078] kauditd_printk_skb: 7 callbacks suppressed
	[  +5.785488] kauditd_printk_skb: 6 callbacks suppressed
	[Nov 4 10:40] kauditd_printk_skb: 7 callbacks suppressed
	[ +10.019674] kauditd_printk_skb: 28 callbacks suppressed
	[  +5.129459] kauditd_printk_skb: 30 callbacks suppressed
	[  +7.001996] kauditd_printk_skb: 5 callbacks suppressed
	[  +5.692326] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.310848] kauditd_printk_skb: 8 callbacks suppressed
	[  +5.316145] kauditd_printk_skb: 21 callbacks suppressed
	[  +9.644215] kauditd_printk_skb: 34 callbacks suppressed
	[Nov 4 10:41] kauditd_printk_skb: 49 callbacks suppressed
	[  +5.950723] kauditd_printk_skb: 2 callbacks suppressed
	[Nov 4 10:42] kauditd_printk_skb: 2 callbacks suppressed
	
	
	==> etcd [9519febde6ea5698d00bda0cef2d9f74a934c6d5e398abf71a162a7bca55abc0] <==
	{"level":"info","ts":"2024-11-04T10:39:20.626626Z","caller":"traceutil/trace.go:171","msg":"trace[751242300] linearizableReadLoop","detail":"{readStateIndex:1050; appliedIndex:1050; }","duration":"277.124131ms","start":"2024-11-04T10:39:20.349486Z","end":"2024-11-04T10:39:20.626610Z","steps":["trace[751242300] 'read index received'  (duration: 277.119255ms)","trace[751242300] 'applied index is now lower than readState.Index'  (duration: 4.14µs)"],"step_count":2}
	{"level":"warn","ts":"2024-11-04T10:39:20.627227Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"269.867607ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/poddisruptionbudgets/\" range_end:\"/registry/poddisruptionbudgets0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-11-04T10:39:20.627267Z","caller":"traceutil/trace.go:171","msg":"trace[1054398833] range","detail":"{range_begin:/registry/poddisruptionbudgets/; range_end:/registry/poddisruptionbudgets0; response_count:0; response_revision:1021; }","duration":"269.909743ms","start":"2024-11-04T10:39:20.357348Z","end":"2024-11-04T10:39:20.627258Z","steps":["trace[1054398833] 'agreement among raft nodes before linearized reading'  (duration: 269.856024ms)"],"step_count":1}
	{"level":"info","ts":"2024-11-04T10:39:41.214046Z","caller":"traceutil/trace.go:171","msg":"trace[17301424] transaction","detail":"{read_only:false; response_revision:1129; number_of_response:1; }","duration":"407.959557ms","start":"2024-11-04T10:39:40.806070Z","end":"2024-11-04T10:39:41.214030Z","steps":["trace[17301424] 'process raft request'  (duration: 407.656159ms)"],"step_count":1}
	{"level":"warn","ts":"2024-11-04T10:39:41.216299Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-11-04T10:39:40.806057Z","time spent":"408.964021ms","remote":"127.0.0.1:44728","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1125 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2024-11-04T10:40:09.320079Z","caller":"traceutil/trace.go:171","msg":"trace[1466107482] transaction","detail":"{read_only:false; response_revision:1316; number_of_response:1; }","duration":"330.002262ms","start":"2024-11-04T10:40:08.990058Z","end":"2024-11-04T10:40:09.320060Z","steps":["trace[1466107482] 'process raft request'  (duration: 329.825308ms)"],"step_count":1}
	{"level":"warn","ts":"2024-11-04T10:40:09.320243Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-11-04T10:40:08.990045Z","time spent":"330.123874ms","remote":"127.0.0.1:44834","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":484,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/snapshot-controller-leader\" mod_revision:1280 > success:<request_put:<key:\"/registry/leases/kube-system/snapshot-controller-leader\" value_size:421 >> failure:<request_range:<key:\"/registry/leases/kube-system/snapshot-controller-leader\" > >"}
	{"level":"info","ts":"2024-11-04T10:40:09.320668Z","caller":"traceutil/trace.go:171","msg":"trace[688125957] linearizableReadLoop","detail":"{readStateIndex:1356; appliedIndex:1356; }","duration":"323.106599ms","start":"2024-11-04T10:40:08.997547Z","end":"2024-11-04T10:40:09.320654Z","steps":["trace[688125957] 'read index received'  (duration: 323.101104ms)","trace[688125957] 'applied index is now lower than readState.Index'  (duration: 4.574µs)"],"step_count":2}
	{"level":"warn","ts":"2024-11-04T10:40:09.320772Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"323.344852ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-11-04T10:40:09.320813Z","caller":"traceutil/trace.go:171","msg":"trace[86850128] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1316; }","duration":"323.393937ms","start":"2024-11-04T10:40:08.997411Z","end":"2024-11-04T10:40:09.320805Z","steps":["trace[86850128] 'agreement among raft nodes before linearized reading'  (duration: 323.323043ms)"],"step_count":1}
	{"level":"warn","ts":"2024-11-04T10:40:09.320841Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-11-04T10:40:08.997371Z","time spent":"323.462652ms","remote":"127.0.0.1:44742","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":27,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2024-11-04T10:40:09.324706Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"172.914071ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/persistentvolumeclaims/default/hpvc\" ","response":"range_response_count:1 size:822"}
	{"level":"info","ts":"2024-11-04T10:40:09.325823Z","caller":"traceutil/trace.go:171","msg":"trace[1506822405] range","detail":"{range_begin:/registry/persistentvolumeclaims/default/hpvc; range_end:; response_count:1; response_revision:1317; }","duration":"174.032171ms","start":"2024-11-04T10:40:09.151778Z","end":"2024-11-04T10:40:09.325810Z","steps":["trace[1506822405] 'agreement among raft nodes before linearized reading'  (duration: 172.847378ms)"],"step_count":1}
	{"level":"warn","ts":"2024-11-04T10:40:09.326215Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"262.579621ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-11-04T10:40:09.326305Z","caller":"traceutil/trace.go:171","msg":"trace[495762556] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1317; }","duration":"262.632406ms","start":"2024-11-04T10:40:09.063621Z","end":"2024-11-04T10:40:09.326254Z","steps":["trace[495762556] 'agreement among raft nodes before linearized reading'  (duration: 260.648697ms)"],"step_count":1}
	{"level":"info","ts":"2024-11-04T10:40:24.499339Z","caller":"traceutil/trace.go:171","msg":"trace[960604166] linearizableReadLoop","detail":"{readStateIndex:1465; appliedIndex:1464; }","duration":"123.868211ms","start":"2024-11-04T10:40:24.375458Z","end":"2024-11-04T10:40:24.499326Z","steps":["trace[960604166] 'read index received'  (duration: 123.686319ms)","trace[960604166] 'applied index is now lower than readState.Index'  (duration: 181.502µs)"],"step_count":2}
	{"level":"warn","ts":"2024-11-04T10:40:24.499534Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"124.059239ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-11-04T10:40:24.499579Z","caller":"traceutil/trace.go:171","msg":"trace[2098967026] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1417; }","duration":"124.118529ms","start":"2024-11-04T10:40:24.375453Z","end":"2024-11-04T10:40:24.499572Z","steps":["trace[2098967026] 'agreement among raft nodes before linearized reading'  (duration: 124.040367ms)"],"step_count":1}
	{"level":"info","ts":"2024-11-04T10:40:24.499625Z","caller":"traceutil/trace.go:171","msg":"trace[1274175943] transaction","detail":"{read_only:false; response_revision:1417; number_of_response:1; }","duration":"149.228373ms","start":"2024-11-04T10:40:24.350384Z","end":"2024-11-04T10:40:24.499612Z","steps":["trace[1274175943] 'process raft request'  (duration: 148.82834ms)"],"step_count":1}
	{"level":"info","ts":"2024-11-04T10:40:37.692221Z","caller":"traceutil/trace.go:171","msg":"trace[887757847] transaction","detail":"{read_only:false; response_revision:1493; number_of_response:1; }","duration":"162.406133ms","start":"2024-11-04T10:40:37.529799Z","end":"2024-11-04T10:40:37.692205Z","steps":["trace[887757847] 'process raft request'  (duration: 162.251205ms)"],"step_count":1}
	{"level":"info","ts":"2024-11-04T10:40:37.766262Z","caller":"traceutil/trace.go:171","msg":"trace[1915448977] transaction","detail":"{read_only:false; response_revision:1494; number_of_response:1; }","duration":"126.900823ms","start":"2024-11-04T10:40:37.639346Z","end":"2024-11-04T10:40:37.766247Z","steps":["trace[1915448977] 'process raft request'  (duration: 120.863274ms)"],"step_count":1}
	{"level":"info","ts":"2024-11-04T10:41:04.121740Z","caller":"traceutil/trace.go:171","msg":"trace[1620520275] linearizableReadLoop","detail":"{readStateIndex:1813; appliedIndex:1812; }","duration":"139.992221ms","start":"2024-11-04T10:41:03.981724Z","end":"2024-11-04T10:41:04.121717Z","steps":["trace[1620520275] 'read index received'  (duration: 139.864148ms)","trace[1620520275] 'applied index is now lower than readState.Index'  (duration: 127.334µs)"],"step_count":2}
	{"level":"warn","ts":"2024-11-04T10:41:04.122110Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"140.367844ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/external-resizer-runner\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-11-04T10:41:04.122146Z","caller":"traceutil/trace.go:171","msg":"trace[1955318676] range","detail":"{range_begin:/registry/clusterroles/external-resizer-runner; range_end:; response_count:0; response_revision:1751; }","duration":"140.41307ms","start":"2024-11-04T10:41:03.981719Z","end":"2024-11-04T10:41:04.122132Z","steps":["trace[1955318676] 'agreement among raft nodes before linearized reading'  (duration: 140.345887ms)"],"step_count":1}
	{"level":"info","ts":"2024-11-04T10:41:04.121921Z","caller":"traceutil/trace.go:171","msg":"trace[195758964] transaction","detail":"{read_only:false; response_revision:1751; number_of_response:1; }","duration":"295.707696ms","start":"2024-11-04T10:41:03.826201Z","end":"2024-11-04T10:41:04.121909Z","steps":["trace[195758964] 'process raft request'  (duration: 295.419514ms)"],"step_count":1}
	
	
	==> kernel <==
	 10:42:45 up 5 min,  0 users,  load average: 0.48, 0.80, 0.43
	Linux addons-746456 5.10.207 #1 SMP Wed Oct 30 13:38:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [a2078670156607445e3f69e0c7d2edf82ea10c4a02877028154c691b079b3e25] <==
	E1104 10:39:57.198734       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.199.111:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.107.199.111:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.107.199.111:443: connect: connection refused" logger="UnhandledError"
	E1104 10:39:57.203177       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.199.111:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.107.199.111:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.107.199.111:443: connect: connection refused" logger="UnhandledError"
	E1104 10:39:57.207996       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.199.111:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.107.199.111:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.107.199.111:443: connect: connection refused" logger="UnhandledError"
	I1104 10:39:57.274726       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1104 10:40:04.752617       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.97.63.69"}
	I1104 10:40:21.924570       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I1104 10:40:22.108458       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.108.199.171"}
	I1104 10:40:26.843205       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W1104 10:40:27.970112       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I1104 10:40:44.357954       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1104 10:40:59.877756       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1104 10:40:59.877792       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1104 10:40:59.904246       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1104 10:40:59.904279       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1104 10:40:59.926706       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1104 10:40:59.926762       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1104 10:40:59.966887       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1104 10:40:59.966992       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1104 10:41:00.019514       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1104 10:41:00.019651       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1104 10:41:00.967256       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W1104 10:41:01.020562       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1104 10:41:01.063092       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	E1104 10:41:04.758097       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I1104 10:42:44.266505       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.101.226.196"}
	
	
	==> kube-controller-manager [b37d172943b0ded3845df617c273978f49a44cb3cbbf8228c8bd37f84ebd8d01] <==
	W1104 10:41:21.584769       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1104 10:41:21.584885       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I1104 10:41:21.835841       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I1104 10:41:21.835892       1 shared_informer.go:320] Caches are synced for garbage collector
	I1104 10:41:23.702317       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="yakd-dashboard"
	I1104 10:41:31.689586       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/cloud-spanner-emulator-dc5db94f4" duration="4.247µs"
	I1104 10:41:36.989848       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="local-path-storage"
	W1104 10:41:37.975962       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1104 10:41:37.976096       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1104 10:41:42.505701       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1104 10:41:42.505809       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1104 10:41:43.928410       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1104 10:41:43.928565       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1104 10:41:53.665025       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1104 10:41:53.665073       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1104 10:42:12.654058       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1104 10:42:12.654210       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1104 10:42:14.790345       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1104 10:42:14.790513       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1104 10:42:25.096976       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1104 10:42:25.097115       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I1104 10:42:44.108463       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="31.040203ms"
	I1104 10:42:44.134713       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="26.035113ms"
	I1104 10:42:44.134927       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="166.521µs"
	I1104 10:42:44.135119       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="16.484µs"
	
	
	==> kube-proxy [f6744bb877dd8872eaf6f3be107bfe149f989a2a495d09a2c1969a4438d36e62] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1104 10:38:24.021958       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1104 10:38:24.033132       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.4"]
	E1104 10:38:24.033192       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1104 10:38:24.123673       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1104 10:38:24.123744       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1104 10:38:24.123777       1 server_linux.go:169] "Using iptables Proxier"
	I1104 10:38:24.132353       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1104 10:38:24.132636       1 server.go:483] "Version info" version="v1.31.2"
	I1104 10:38:24.132662       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1104 10:38:24.136027       1 config.go:199] "Starting service config controller"
	I1104 10:38:24.136064       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1104 10:38:24.136116       1 config.go:105] "Starting endpoint slice config controller"
	I1104 10:38:24.136121       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1104 10:38:24.143300       1 config.go:328] "Starting node config controller"
	I1104 10:38:24.143335       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1104 10:38:24.237060       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1104 10:38:24.237138       1 shared_informer.go:320] Caches are synced for service config
	I1104 10:38:24.244934       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [e248e3297d1fda4a00b162a352356438ee94390c14eb5308505a4e49043096b5] <==
	W1104 10:38:15.180272       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1104 10:38:15.180316       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1104 10:38:15.195094       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1104 10:38:15.195211       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1104 10:38:15.229202       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1104 10:38:15.229247       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1104 10:38:15.269472       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1104 10:38:15.269508       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1104 10:38:15.298741       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1104 10:38:15.298920       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1104 10:38:15.381712       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1104 10:38:15.381760       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1104 10:38:15.391842       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1104 10:38:15.391950       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1104 10:38:15.402986       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1104 10:38:15.403119       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1104 10:38:15.429914       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1104 10:38:15.430050       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1104 10:38:15.495323       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1104 10:38:15.495450       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1104 10:38:15.562639       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1104 10:38:15.562734       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1104 10:38:15.603526       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1104 10:38:15.603571       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1104 10:38:17.752620       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 04 10:42:44 addons-746456 kubelet[1190]: E1104 10:42:44.098382    1190 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="57cc4546-427d-4949-9fc9-3e6dac0b0fd8" containerName="hostpath"
	Nov 04 10:42:44 addons-746456 kubelet[1190]: E1104 10:42:44.098415    1190 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2de93991-ff75-4ba5-814e-4fbe32bd9b24" containerName="nvidia-device-plugin-ctr"
	Nov 04 10:42:44 addons-746456 kubelet[1190]: E1104 10:42:44.098492    1190 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="639b74bc-1563-4f77-a0bd-0e15cd9a35da" containerName="local-path-provisioner"
	Nov 04 10:42:44 addons-746456 kubelet[1190]: E1104 10:42:44.098523    1190 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9923c3c7-3ae3-4254-a6cf-5a747b90f240" containerName="cloud-spanner-emulator"
	Nov 04 10:42:44 addons-746456 kubelet[1190]: E1104 10:42:44.098554    1190 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="57cc4546-427d-4949-9fc9-3e6dac0b0fd8" containerName="node-driver-registrar"
	Nov 04 10:42:44 addons-746456 kubelet[1190]: E1104 10:42:44.098584    1190 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="57cc4546-427d-4949-9fc9-3e6dac0b0fd8" containerName="liveness-probe"
	Nov 04 10:42:44 addons-746456 kubelet[1190]: E1104 10:42:44.098615    1190 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ca745810-fc5d-4ea0-99a5-5e2abe634e9a" containerName="task-pv-container"
	Nov 04 10:42:44 addons-746456 kubelet[1190]: E1104 10:42:44.098646    1190 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c7a256af-f053-46a8-99b2-44e43137ec86" containerName="csi-resizer"
	Nov 04 10:42:44 addons-746456 kubelet[1190]: E1104 10:42:44.098676    1190 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="57cc4546-427d-4949-9fc9-3e6dac0b0fd8" containerName="csi-snapshotter"
	Nov 04 10:42:44 addons-746456 kubelet[1190]: I1104 10:42:44.098748    1190 memory_manager.go:354] "RemoveStaleState removing state" podUID="aba9d3ac-9e13-4702-af54-df0b53064a49" containerName="csi-attacher"
	Nov 04 10:42:44 addons-746456 kubelet[1190]: I1104 10:42:44.098786    1190 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e7db880-cb20-42ce-9854-c64a11ee5a9c" containerName="volume-snapshot-controller"
	Nov 04 10:42:44 addons-746456 kubelet[1190]: I1104 10:42:44.098816    1190 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ae86336-7bdb-4245-895c-34b46444de04" containerName="volume-snapshot-controller"
	Nov 04 10:42:44 addons-746456 kubelet[1190]: I1104 10:42:44.098846    1190 memory_manager.go:354] "RemoveStaleState removing state" podUID="57cc4546-427d-4949-9fc9-3e6dac0b0fd8" containerName="csi-snapshotter"
	Nov 04 10:42:44 addons-746456 kubelet[1190]: I1104 10:42:44.098877    1190 memory_manager.go:354] "RemoveStaleState removing state" podUID="57cc4546-427d-4949-9fc9-3e6dac0b0fd8" containerName="hostpath"
	Nov 04 10:42:44 addons-746456 kubelet[1190]: I1104 10:42:44.098906    1190 memory_manager.go:354] "RemoveStaleState removing state" podUID="2de93991-ff75-4ba5-814e-4fbe32bd9b24" containerName="nvidia-device-plugin-ctr"
	Nov 04 10:42:44 addons-746456 kubelet[1190]: I1104 10:42:44.098936    1190 memory_manager.go:354] "RemoveStaleState removing state" podUID="9923c3c7-3ae3-4254-a6cf-5a747b90f240" containerName="cloud-spanner-emulator"
	Nov 04 10:42:44 addons-746456 kubelet[1190]: I1104 10:42:44.098982    1190 memory_manager.go:354] "RemoveStaleState removing state" podUID="639b74bc-1563-4f77-a0bd-0e15cd9a35da" containerName="local-path-provisioner"
	Nov 04 10:42:44 addons-746456 kubelet[1190]: I1104 10:42:44.099012    1190 memory_manager.go:354] "RemoveStaleState removing state" podUID="57cc4546-427d-4949-9fc9-3e6dac0b0fd8" containerName="csi-provisioner"
	Nov 04 10:42:44 addons-746456 kubelet[1190]: I1104 10:42:44.099042    1190 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7a256af-f053-46a8-99b2-44e43137ec86" containerName="csi-resizer"
	Nov 04 10:42:44 addons-746456 kubelet[1190]: I1104 10:42:44.099077    1190 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca745810-fc5d-4ea0-99a5-5e2abe634e9a" containerName="task-pv-container"
	Nov 04 10:42:44 addons-746456 kubelet[1190]: I1104 10:42:44.099106    1190 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9ac7507-cc18-43fb-b54b-82f4de9ba4a8" containerName="yakd"
	Nov 04 10:42:44 addons-746456 kubelet[1190]: I1104 10:42:44.099139    1190 memory_manager.go:354] "RemoveStaleState removing state" podUID="57cc4546-427d-4949-9fc9-3e6dac0b0fd8" containerName="node-driver-registrar"
	Nov 04 10:42:44 addons-746456 kubelet[1190]: I1104 10:42:44.099168    1190 memory_manager.go:354] "RemoveStaleState removing state" podUID="57cc4546-427d-4949-9fc9-3e6dac0b0fd8" containerName="csi-external-health-monitor-controller"
	Nov 04 10:42:44 addons-746456 kubelet[1190]: I1104 10:42:44.099199    1190 memory_manager.go:354] "RemoveStaleState removing state" podUID="57cc4546-427d-4949-9fc9-3e6dac0b0fd8" containerName="liveness-probe"
	Nov 04 10:42:44 addons-746456 kubelet[1190]: I1104 10:42:44.185366    1190 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4565\" (UniqueName: \"kubernetes.io/projected/47d471ae-ef33-496e-9841-7d205c707c80-kube-api-access-p4565\") pod \"hello-world-app-55bf9c44b4-ldhdr\" (UID: \"47d471ae-ef33-496e-9841-7d205c707c80\") " pod="default/hello-world-app-55bf9c44b4-ldhdr"
	
	
	==> storage-provisioner [937a061c836e4f55dcbe4ded8cfc61ace0b16d090889344de6647c05a5621b3c] <==
	I1104 10:38:28.574713       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1104 10:38:28.590316       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1104 10:38:28.590379       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1104 10:38:28.610704       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1104 10:38:28.611513       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-746456_38e99173-d61b-4158-83c8-1b141f1705e4!
	I1104 10:38:28.612014       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"288eb3de-b2e3-4aa2-a502-19d22fabbb8b", APIVersion:"v1", ResourceVersion:"644", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-746456_38e99173-d61b-4158-83c8-1b141f1705e4 became leader
	I1104 10:38:28.711788       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-746456_38e99173-d61b-4158-83c8-1b141f1705e4!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-746456 -n addons-746456
helpers_test.go:261: (dbg) Run:  kubectl --context addons-746456 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: hello-world-app-55bf9c44b4-ldhdr ingress-nginx-admission-create-d5k74 ingress-nginx-admission-patch-mpkx9
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-746456 describe pod hello-world-app-55bf9c44b4-ldhdr ingress-nginx-admission-create-d5k74 ingress-nginx-admission-patch-mpkx9
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-746456 describe pod hello-world-app-55bf9c44b4-ldhdr ingress-nginx-admission-create-d5k74 ingress-nginx-admission-patch-mpkx9: exit status 1 (80.763701ms)

                                                
                                                
-- stdout --
	Name:             hello-world-app-55bf9c44b4-ldhdr
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-746456/192.168.39.4
	Start Time:       Mon, 04 Nov 2024 10:42:44 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=55bf9c44b4
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-world-app-55bf9c44b4
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-p4565 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-p4565:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  2s    default-scheduler  Successfully assigned default/hello-world-app-55bf9c44b4-ldhdr to addons-746456
	  Normal  Pulling    2s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-d5k74" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-mpkx9" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-746456 describe pod hello-world-app-55bf9c44b4-ldhdr ingress-nginx-admission-create-d5k74 ingress-nginx-admission-patch-mpkx9: exit status 1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-746456 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-746456 addons disable ingress-dns --alsologtostderr -v=1: (1.212459321s)
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-746456 addons disable ingress --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-746456 addons disable ingress --alsologtostderr -v=1: (7.705706906s)
--- FAIL: TestAddons/parallel/Ingress (153.74s)
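For reference, a minimal sketch of reproducing the post-mortem queries above by hand (assuming the addons-746456 profile is still running and kubectl points at its context; the pod name is the one reported in this failure):

	# list pods that are not in the Running phase, across all namespaces
	kubectl --context addons-746456 get po -A \
	  --field-selector=status.phase!=Running \
	  -o=jsonpath='{.items[*].metadata.name}'
	# then describe any pod named in that list, e.g.:
	kubectl --context addons-746456 describe pod hello-world-app-55bf9c44b4-ldhdr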

                                                
                                    
TestAddons/parallel/MetricsServer (346.17s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 4.060485ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
I1104 10:40:03.997723   27218 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1104 10:40:03.997745   27218 kapi.go:107] duration metric: took 7.10995ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
helpers_test.go:344: "metrics-server-84c5f94fbc-7c9jd" [c431d0a4-e34e-4f14-a95d-3223d4486d7c] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.010068765s
addons_test.go:402: (dbg) Run:  kubectl --context addons-746456 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-746456 top pods -n kube-system: exit status 1 (437.702925ms)

                                                
                                                
** stderr ** 
	error: metrics not available yet

                                                
                                                
** /stderr **
I1104 10:40:09.443621   27218 retry.go:31] will retry after 3.098488s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-746456 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-746456 top pods -n kube-system: exit status 1 (61.78128ms)

                                                
                                                
** stderr ** 
	error: metrics not available yet

                                                
                                                
** /stderr **
I1104 10:40:12.604407   27218 retry.go:31] will retry after 4.823755s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-746456 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-746456 top pods -n kube-system: exit status 1 (83.158121ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/etcd-addons-746456, age: 2m0.510473667s

                                                
                                                
** /stderr **
I1104 10:40:17.512530   27218 retry.go:31] will retry after 8.252843858s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-746456 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-746456 top pods -n kube-system: exit status 1 (63.0539ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-g59mv, age: 2m1.827430416s

                                                
                                                
** /stderr **
I1104 10:40:25.829108   27218 retry.go:31] will retry after 12.855284419s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-746456 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-746456 top pods -n kube-system: exit status 1 (62.775367ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-g59mv, age: 2m14.746779288s

                                                
                                                
** /stderr **
I1104 10:40:38.748417   27218 retry.go:31] will retry after 14.979305303s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-746456 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-746456 top pods -n kube-system: exit status 1 (70.186611ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-g59mv, age: 2m29.796736826s

                                                
                                                
** /stderr **
I1104 10:40:53.798632   27218 retry.go:31] will retry after 12.141458318s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-746456 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-746456 top pods -n kube-system: exit status 1 (64.016322ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-g59mv, age: 2m42.006990281s

                                                
                                                
** /stderr **
I1104 10:41:06.008644   27218 retry.go:31] will retry after 29.496277069s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-746456 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-746456 top pods -n kube-system: exit status 1 (62.03051ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-g59mv, age: 3m11.568871519s

                                                
                                                
** /stderr **
I1104 10:41:35.570615   27218 retry.go:31] will retry after 28.541188722s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-746456 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-746456 top pods -n kube-system: exit status 1 (60.000503ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-g59mv, age: 3m40.17226155s

                                                
                                                
** /stderr **
I1104 10:42:04.174035   27218 retry.go:31] will retry after 56.470144001s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-746456 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-746456 top pods -n kube-system: exit status 1 (65.44735ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-g59mv, age: 4m36.709808411s

                                                
                                                
** /stderr **
I1104 10:43:00.711734   27218 retry.go:31] will retry after 35.840094533s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-746456 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-746456 top pods -n kube-system: exit status 1 (61.502237ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-g59mv, age: 5m12.616019457s

                                                
                                                
** /stderr **
I1104 10:43:36.618147   27218 retry.go:31] will retry after 30.89378131s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-746456 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-746456 top pods -n kube-system: exit status 1 (62.673149ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-g59mv, age: 5m43.573987348s

                                                
                                                
** /stderr **
I1104 10:44:07.575903   27218 retry.go:31] will retry after 1m0.999104632s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-746456 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-746456 top pods -n kube-system: exit status 1 (59.99602ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-g59mv, age: 6m44.633973439s

                                                
                                                
** /stderr **
I1104 10:45:08.636138   27218 retry.go:31] will retry after 39.091129745s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-746456 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-746456 top pods -n kube-system: exit status 1 (65.861676ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-g59mv, age: 7m23.792084891s

                                                
                                                
** /stderr **
addons_test.go:416: failed checking metric server: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-746456 -n addons-746456
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-746456 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-746456 logs -n 25: (1.089111316s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-779038                                                                     | download-only-779038 | jenkins | v1.34.0 | 04 Nov 24 10:37 UTC | 04 Nov 24 10:37 UTC |
	| delete  | -p download-only-440707                                                                     | download-only-440707 | jenkins | v1.34.0 | 04 Nov 24 10:37 UTC | 04 Nov 24 10:37 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-739738 | jenkins | v1.34.0 | 04 Nov 24 10:37 UTC |                     |
	|         | binary-mirror-739738                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:45149                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-739738                                                                     | binary-mirror-739738 | jenkins | v1.34.0 | 04 Nov 24 10:37 UTC | 04 Nov 24 10:37 UTC |
	| addons  | enable dashboard -p                                                                         | addons-746456        | jenkins | v1.34.0 | 04 Nov 24 10:37 UTC |                     |
	|         | addons-746456                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-746456        | jenkins | v1.34.0 | 04 Nov 24 10:37 UTC |                     |
	|         | addons-746456                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-746456 --wait=true                                                                | addons-746456        | jenkins | v1.34.0 | 04 Nov 24 10:37 UTC | 04 Nov 24 10:39 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	| addons  | addons-746456 addons disable                                                                | addons-746456        | jenkins | v1.34.0 | 04 Nov 24 10:39 UTC | 04 Nov 24 10:39 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-746456 addons disable                                                                | addons-746456        | jenkins | v1.34.0 | 04 Nov 24 10:39 UTC | 04 Nov 24 10:40 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-746456        | jenkins | v1.34.0 | 04 Nov 24 10:40 UTC | 04 Nov 24 10:40 UTC |
	|         | -p addons-746456                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-746456 addons disable                                                                | addons-746456        | jenkins | v1.34.0 | 04 Nov 24 10:40 UTC | 04 Nov 24 10:40 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ip      | addons-746456 ip                                                                            | addons-746456        | jenkins | v1.34.0 | 04 Nov 24 10:40 UTC | 04 Nov 24 10:40 UTC |
	| addons  | addons-746456 addons disable                                                                | addons-746456        | jenkins | v1.34.0 | 04 Nov 24 10:40 UTC | 04 Nov 24 10:40 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-746456 addons                                                                        | addons-746456        | jenkins | v1.34.0 | 04 Nov 24 10:40 UTC | 04 Nov 24 10:40 UTC |
	|         | disable inspektor-gadget                                                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-746456 ssh curl -s                                                                   | addons-746456        | jenkins | v1.34.0 | 04 Nov 24 10:40 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| ssh     | addons-746456 ssh cat                                                                       | addons-746456        | jenkins | v1.34.0 | 04 Nov 24 10:40 UTC | 04 Nov 24 10:40 UTC |
	|         | /opt/local-path-provisioner/pvc-805b188f-c328-4e68-8920-c8c6b1f9c108_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-746456 addons disable                                                                | addons-746456        | jenkins | v1.34.0 | 04 Nov 24 10:40 UTC | 04 Nov 24 10:41 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-746456 addons                                                                        | addons-746456        | jenkins | v1.34.0 | 04 Nov 24 10:40 UTC | 04 Nov 24 10:41 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-746456 addons                                                                        | addons-746456        | jenkins | v1.34.0 | 04 Nov 24 10:41 UTC | 04 Nov 24 10:41 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-746456 addons disable                                                                | addons-746456        | jenkins | v1.34.0 | 04 Nov 24 10:41 UTC | 04 Nov 24 10:41 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| addons  | addons-746456 addons                                                                        | addons-746456        | jenkins | v1.34.0 | 04 Nov 24 10:41 UTC | 04 Nov 24 10:41 UTC |
	|         | disable nvidia-device-plugin                                                                |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-746456 addons                                                                        | addons-746456        | jenkins | v1.34.0 | 04 Nov 24 10:41 UTC | 04 Nov 24 10:41 UTC |
	|         | disable cloud-spanner                                                                       |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-746456 ip                                                                            | addons-746456        | jenkins | v1.34.0 | 04 Nov 24 10:42 UTC | 04 Nov 24 10:42 UTC |
	| addons  | addons-746456 addons disable                                                                | addons-746456        | jenkins | v1.34.0 | 04 Nov 24 10:42 UTC | 04 Nov 24 10:42 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-746456 addons disable                                                                | addons-746456        | jenkins | v1.34.0 | 04 Nov 24 10:42 UTC | 04 Nov 24 10:42 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/11/04 10:37:39
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1104 10:37:39.385347   27967 out.go:345] Setting OutFile to fd 1 ...
	I1104 10:37:39.385445   27967 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1104 10:37:39.385453   27967 out.go:358] Setting ErrFile to fd 2...
	I1104 10:37:39.385457   27967 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1104 10:37:39.385619   27967 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19906-19898/.minikube/bin
	I1104 10:37:39.386172   27967 out.go:352] Setting JSON to false
	I1104 10:37:39.387012   27967 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":4810,"bootTime":1730711849,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1104 10:37:39.387076   27967 start.go:139] virtualization: kvm guest
	I1104 10:37:39.390070   27967 out.go:177] * [addons-746456] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1104 10:37:39.391440   27967 notify.go:220] Checking for updates...
	I1104 10:37:39.391455   27967 out.go:177]   - MINIKUBE_LOCATION=19906
	I1104 10:37:39.392960   27967 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1104 10:37:39.394322   27967 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19906-19898/kubeconfig
	I1104 10:37:39.395646   27967 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19906-19898/.minikube
	I1104 10:37:39.396925   27967 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1104 10:37:39.398215   27967 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1104 10:37:39.399788   27967 driver.go:394] Setting default libvirt URI to qemu:///system
	I1104 10:37:39.432037   27967 out.go:177] * Using the kvm2 driver based on user configuration
	I1104 10:37:39.433452   27967 start.go:297] selected driver: kvm2
	I1104 10:37:39.433469   27967 start.go:901] validating driver "kvm2" against <nil>
	I1104 10:37:39.433481   27967 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1104 10:37:39.434265   27967 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1104 10:37:39.434342   27967 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19906-19898/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1104 10:37:39.450411   27967 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1104 10:37:39.450453   27967 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1104 10:37:39.450652   27967 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1104 10:37:39.450677   27967 cni.go:84] Creating CNI manager for ""
	I1104 10:37:39.450701   27967 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1104 10:37:39.450709   27967 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1104 10:37:39.450768   27967 start.go:340] cluster config:
	{Name:addons-746456 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-746456 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1104 10:37:39.450853   27967 iso.go:125] acquiring lock: {Name:mk00c7dd6e02d348844f079f7574057e15cae010 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1104 10:37:39.452886   27967 out.go:177] * Starting "addons-746456" primary control-plane node in "addons-746456" cluster
	I1104 10:37:39.454154   27967 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1104 10:37:39.454180   27967 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1104 10:37:39.454186   27967 cache.go:56] Caching tarball of preloaded images
	I1104 10:37:39.454265   27967 preload.go:172] Found /home/jenkins/minikube-integration/19906-19898/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1104 10:37:39.454278   27967 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1104 10:37:39.454553   27967 profile.go:143] Saving config to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/addons-746456/config.json ...
	I1104 10:37:39.454573   27967 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/addons-746456/config.json: {Name:mk7f355297e64314e7f2737f1ad3b6060652fcdd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 10:37:39.454711   27967 start.go:360] acquireMachinesLock for addons-746456: {Name:mka4ed965095cfb58c9b0f5916edff39b4236a2f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1104 10:37:39.454766   27967 start.go:364] duration metric: took 39.347µs to acquireMachinesLock for "addons-746456"
	I1104 10:37:39.454789   27967 start.go:93] Provisioning new machine with config: &{Name:addons-746456 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-746456 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1104 10:37:39.454840   27967 start.go:125] createHost starting for "" (driver="kvm2")
	I1104 10:37:39.456617   27967 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I1104 10:37:39.456722   27967 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 10:37:39.456759   27967 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 10:37:39.470888   27967 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45857
	I1104 10:37:39.471452   27967 main.go:141] libmachine: () Calling .GetVersion
	I1104 10:37:39.471973   27967 main.go:141] libmachine: Using API Version  1
	I1104 10:37:39.471993   27967 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 10:37:39.472385   27967 main.go:141] libmachine: () Calling .GetMachineName
	I1104 10:37:39.472550   27967 main.go:141] libmachine: (addons-746456) Calling .GetMachineName
	I1104 10:37:39.472704   27967 main.go:141] libmachine: (addons-746456) Calling .DriverName
	I1104 10:37:39.472856   27967 start.go:159] libmachine.API.Create for "addons-746456" (driver="kvm2")
	I1104 10:37:39.472900   27967 client.go:168] LocalClient.Create starting
	I1104 10:37:39.472948   27967 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem
	I1104 10:37:39.699203   27967 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem
	I1104 10:37:39.841913   27967 main.go:141] libmachine: Running pre-create checks...
	I1104 10:37:39.841935   27967 main.go:141] libmachine: (addons-746456) Calling .PreCreateCheck
	I1104 10:37:39.842401   27967 main.go:141] libmachine: (addons-746456) Calling .GetConfigRaw
	I1104 10:37:39.842807   27967 main.go:141] libmachine: Creating machine...
	I1104 10:37:39.842820   27967 main.go:141] libmachine: (addons-746456) Calling .Create
	I1104 10:37:39.842973   27967 main.go:141] libmachine: (addons-746456) Creating KVM machine...
	I1104 10:37:39.844192   27967 main.go:141] libmachine: (addons-746456) DBG | found existing default KVM network
	I1104 10:37:39.844900   27967 main.go:141] libmachine: (addons-746456) DBG | I1104 10:37:39.844770   27989 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002091f0}
	I1104 10:37:39.844938   27967 main.go:141] libmachine: (addons-746456) DBG | created network xml: 
	I1104 10:37:39.844955   27967 main.go:141] libmachine: (addons-746456) DBG | <network>
	I1104 10:37:39.844965   27967 main.go:141] libmachine: (addons-746456) DBG |   <name>mk-addons-746456</name>
	I1104 10:37:39.844973   27967 main.go:141] libmachine: (addons-746456) DBG |   <dns enable='no'/>
	I1104 10:37:39.844981   27967 main.go:141] libmachine: (addons-746456) DBG |   
	I1104 10:37:39.844990   27967 main.go:141] libmachine: (addons-746456) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1104 10:37:39.845013   27967 main.go:141] libmachine: (addons-746456) DBG |     <dhcp>
	I1104 10:37:39.845029   27967 main.go:141] libmachine: (addons-746456) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1104 10:37:39.845036   27967 main.go:141] libmachine: (addons-746456) DBG |     </dhcp>
	I1104 10:37:39.845041   27967 main.go:141] libmachine: (addons-746456) DBG |   </ip>
	I1104 10:37:39.845047   27967 main.go:141] libmachine: (addons-746456) DBG |   
	I1104 10:37:39.845060   27967 main.go:141] libmachine: (addons-746456) DBG | </network>
	I1104 10:37:39.845068   27967 main.go:141] libmachine: (addons-746456) DBG | 
	I1104 10:37:39.850312   27967 main.go:141] libmachine: (addons-746456) DBG | trying to create private KVM network mk-addons-746456 192.168.39.0/24...
	I1104 10:37:39.908997   27967 main.go:141] libmachine: (addons-746456) DBG | private KVM network mk-addons-746456 192.168.39.0/24 created
	I1104 10:37:39.909028   27967 main.go:141] libmachine: (addons-746456) DBG | I1104 10:37:39.908954   27989 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19906-19898/.minikube
	I1104 10:37:39.909044   27967 main.go:141] libmachine: (addons-746456) Setting up store path in /home/jenkins/minikube-integration/19906-19898/.minikube/machines/addons-746456 ...
	I1104 10:37:39.909061   27967 main.go:141] libmachine: (addons-746456) Building disk image from file:///home/jenkins/minikube-integration/19906-19898/.minikube/cache/iso/amd64/minikube-v1.34.0-1730282777-19883-amd64.iso
	I1104 10:37:39.909077   27967 main.go:141] libmachine: (addons-746456) Downloading /home/jenkins/minikube-integration/19906-19898/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19906-19898/.minikube/cache/iso/amd64/minikube-v1.34.0-1730282777-19883-amd64.iso...
	I1104 10:37:40.160338   27967 main.go:141] libmachine: (addons-746456) DBG | I1104 10:37:40.160211   27989 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/addons-746456/id_rsa...
	I1104 10:37:40.355708   27967 main.go:141] libmachine: (addons-746456) DBG | I1104 10:37:40.355570   27989 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/addons-746456/addons-746456.rawdisk...
	I1104 10:37:40.355736   27967 main.go:141] libmachine: (addons-746456) DBG | Writing magic tar header
	I1104 10:37:40.355747   27967 main.go:141] libmachine: (addons-746456) DBG | Writing SSH key tar header
	I1104 10:37:40.355754   27967 main.go:141] libmachine: (addons-746456) DBG | I1104 10:37:40.355693   27989 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19906-19898/.minikube/machines/addons-746456 ...
	I1104 10:37:40.355867   27967 main.go:141] libmachine: (addons-746456) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/addons-746456
	I1104 10:37:40.355889   27967 main.go:141] libmachine: (addons-746456) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19906-19898/.minikube/machines
	I1104 10:37:40.355901   27967 main.go:141] libmachine: (addons-746456) Setting executable bit set on /home/jenkins/minikube-integration/19906-19898/.minikube/machines/addons-746456 (perms=drwx------)
	I1104 10:37:40.355911   27967 main.go:141] libmachine: (addons-746456) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19906-19898/.minikube
	I1104 10:37:40.355921   27967 main.go:141] libmachine: (addons-746456) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19906-19898
	I1104 10:37:40.355931   27967 main.go:141] libmachine: (addons-746456) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1104 10:37:40.355947   27967 main.go:141] libmachine: (addons-746456) DBG | Checking permissions on dir: /home/jenkins
	I1104 10:37:40.355955   27967 main.go:141] libmachine: (addons-746456) DBG | Checking permissions on dir: /home
	I1104 10:37:40.355968   27967 main.go:141] libmachine: (addons-746456) DBG | Skipping /home - not owner
	I1104 10:37:40.355985   27967 main.go:141] libmachine: (addons-746456) Setting executable bit set on /home/jenkins/minikube-integration/19906-19898/.minikube/machines (perms=drwxr-xr-x)
	I1104 10:37:40.356001   27967 main.go:141] libmachine: (addons-746456) Setting executable bit set on /home/jenkins/minikube-integration/19906-19898/.minikube (perms=drwxr-xr-x)
	I1104 10:37:40.356015   27967 main.go:141] libmachine: (addons-746456) Setting executable bit set on /home/jenkins/minikube-integration/19906-19898 (perms=drwxrwxr-x)
	I1104 10:37:40.356029   27967 main.go:141] libmachine: (addons-746456) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1104 10:37:40.356041   27967 main.go:141] libmachine: (addons-746456) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1104 10:37:40.356057   27967 main.go:141] libmachine: (addons-746456) Creating domain...
	I1104 10:37:40.357036   27967 main.go:141] libmachine: (addons-746456) define libvirt domain using xml: 
	I1104 10:37:40.357063   27967 main.go:141] libmachine: (addons-746456) <domain type='kvm'>
	I1104 10:37:40.357074   27967 main.go:141] libmachine: (addons-746456)   <name>addons-746456</name>
	I1104 10:37:40.357086   27967 main.go:141] libmachine: (addons-746456)   <memory unit='MiB'>4000</memory>
	I1104 10:37:40.357096   27967 main.go:141] libmachine: (addons-746456)   <vcpu>2</vcpu>
	I1104 10:37:40.357103   27967 main.go:141] libmachine: (addons-746456)   <features>
	I1104 10:37:40.357112   27967 main.go:141] libmachine: (addons-746456)     <acpi/>
	I1104 10:37:40.357121   27967 main.go:141] libmachine: (addons-746456)     <apic/>
	I1104 10:37:40.357133   27967 main.go:141] libmachine: (addons-746456)     <pae/>
	I1104 10:37:40.357142   27967 main.go:141] libmachine: (addons-746456)     
	I1104 10:37:40.357151   27967 main.go:141] libmachine: (addons-746456)   </features>
	I1104 10:37:40.357161   27967 main.go:141] libmachine: (addons-746456)   <cpu mode='host-passthrough'>
	I1104 10:37:40.357169   27967 main.go:141] libmachine: (addons-746456)   
	I1104 10:37:40.357181   27967 main.go:141] libmachine: (addons-746456)   </cpu>
	I1104 10:37:40.357189   27967 main.go:141] libmachine: (addons-746456)   <os>
	I1104 10:37:40.357196   27967 main.go:141] libmachine: (addons-746456)     <type>hvm</type>
	I1104 10:37:40.357204   27967 main.go:141] libmachine: (addons-746456)     <boot dev='cdrom'/>
	I1104 10:37:40.357214   27967 main.go:141] libmachine: (addons-746456)     <boot dev='hd'/>
	I1104 10:37:40.357221   27967 main.go:141] libmachine: (addons-746456)     <bootmenu enable='no'/>
	I1104 10:37:40.357245   27967 main.go:141] libmachine: (addons-746456)   </os>
	I1104 10:37:40.357271   27967 main.go:141] libmachine: (addons-746456)   <devices>
	I1104 10:37:40.357294   27967 main.go:141] libmachine: (addons-746456)     <disk type='file' device='cdrom'>
	I1104 10:37:40.357312   27967 main.go:141] libmachine: (addons-746456)       <source file='/home/jenkins/minikube-integration/19906-19898/.minikube/machines/addons-746456/boot2docker.iso'/>
	I1104 10:37:40.357319   27967 main.go:141] libmachine: (addons-746456)       <target dev='hdc' bus='scsi'/>
	I1104 10:37:40.357326   27967 main.go:141] libmachine: (addons-746456)       <readonly/>
	I1104 10:37:40.357332   27967 main.go:141] libmachine: (addons-746456)     </disk>
	I1104 10:37:40.357340   27967 main.go:141] libmachine: (addons-746456)     <disk type='file' device='disk'>
	I1104 10:37:40.357349   27967 main.go:141] libmachine: (addons-746456)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1104 10:37:40.357359   27967 main.go:141] libmachine: (addons-746456)       <source file='/home/jenkins/minikube-integration/19906-19898/.minikube/machines/addons-746456/addons-746456.rawdisk'/>
	I1104 10:37:40.357364   27967 main.go:141] libmachine: (addons-746456)       <target dev='hda' bus='virtio'/>
	I1104 10:37:40.357405   27967 main.go:141] libmachine: (addons-746456)     </disk>
	I1104 10:37:40.357435   27967 main.go:141] libmachine: (addons-746456)     <interface type='network'>
	I1104 10:37:40.357449   27967 main.go:141] libmachine: (addons-746456)       <source network='mk-addons-746456'/>
	I1104 10:37:40.357460   27967 main.go:141] libmachine: (addons-746456)       <model type='virtio'/>
	I1104 10:37:40.357469   27967 main.go:141] libmachine: (addons-746456)     </interface>
	I1104 10:37:40.357481   27967 main.go:141] libmachine: (addons-746456)     <interface type='network'>
	I1104 10:37:40.357494   27967 main.go:141] libmachine: (addons-746456)       <source network='default'/>
	I1104 10:37:40.357505   27967 main.go:141] libmachine: (addons-746456)       <model type='virtio'/>
	I1104 10:37:40.357518   27967 main.go:141] libmachine: (addons-746456)     </interface>
	I1104 10:37:40.357528   27967 main.go:141] libmachine: (addons-746456)     <serial type='pty'>
	I1104 10:37:40.357538   27967 main.go:141] libmachine: (addons-746456)       <target port='0'/>
	I1104 10:37:40.357548   27967 main.go:141] libmachine: (addons-746456)     </serial>
	I1104 10:37:40.357567   27967 main.go:141] libmachine: (addons-746456)     <console type='pty'>
	I1104 10:37:40.357592   27967 main.go:141] libmachine: (addons-746456)       <target type='serial' port='0'/>
	I1104 10:37:40.357603   27967 main.go:141] libmachine: (addons-746456)     </console>
	I1104 10:37:40.357611   27967 main.go:141] libmachine: (addons-746456)     <rng model='virtio'>
	I1104 10:37:40.357626   27967 main.go:141] libmachine: (addons-746456)       <backend model='random'>/dev/random</backend>
	I1104 10:37:40.357634   27967 main.go:141] libmachine: (addons-746456)     </rng>
	I1104 10:37:40.357642   27967 main.go:141] libmachine: (addons-746456)     
	I1104 10:37:40.357647   27967 main.go:141] libmachine: (addons-746456)     
	I1104 10:37:40.357658   27967 main.go:141] libmachine: (addons-746456)   </devices>
	I1104 10:37:40.357671   27967 main.go:141] libmachine: (addons-746456) </domain>
	I1104 10:37:40.357683   27967 main.go:141] libmachine: (addons-746456) 
	I1104 10:37:40.363082   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined MAC address 52:54:00:c6:16:0c in network default
	I1104 10:37:40.363613   27967 main.go:141] libmachine: (addons-746456) Ensuring networks are active...
	I1104 10:37:40.363629   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:37:40.364265   27967 main.go:141] libmachine: (addons-746456) Ensuring network default is active
	I1104 10:37:40.364622   27967 main.go:141] libmachine: (addons-746456) Ensuring network mk-addons-746456 is active
	I1104 10:37:40.365094   27967 main.go:141] libmachine: (addons-746456) Getting domain xml...
	I1104 10:37:40.365658   27967 main.go:141] libmachine: (addons-746456) Creating domain...
	I1104 10:37:41.736908   27967 main.go:141] libmachine: (addons-746456) Waiting to get IP...
	I1104 10:37:41.737735   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:37:41.738240   27967 main.go:141] libmachine: (addons-746456) DBG | unable to find current IP address of domain addons-746456 in network mk-addons-746456
	I1104 10:37:41.738274   27967 main.go:141] libmachine: (addons-746456) DBG | I1104 10:37:41.738228   27989 retry.go:31] will retry after 233.791989ms: waiting for machine to come up
	I1104 10:37:41.973803   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:37:41.974186   27967 main.go:141] libmachine: (addons-746456) DBG | unable to find current IP address of domain addons-746456 in network mk-addons-746456
	I1104 10:37:41.974213   27967 main.go:141] libmachine: (addons-746456) DBG | I1104 10:37:41.974140   27989 retry.go:31] will retry after 264.314556ms: waiting for machine to come up
	I1104 10:37:42.239425   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:37:42.239771   27967 main.go:141] libmachine: (addons-746456) DBG | unable to find current IP address of domain addons-746456 in network mk-addons-746456
	I1104 10:37:42.239793   27967 main.go:141] libmachine: (addons-746456) DBG | I1104 10:37:42.239722   27989 retry.go:31] will retry after 439.256751ms: waiting for machine to come up
	I1104 10:37:42.680467   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:37:42.680862   27967 main.go:141] libmachine: (addons-746456) DBG | unable to find current IP address of domain addons-746456 in network mk-addons-746456
	I1104 10:37:42.680881   27967 main.go:141] libmachine: (addons-746456) DBG | I1104 10:37:42.680824   27989 retry.go:31] will retry after 587.081953ms: waiting for machine to come up
	I1104 10:37:43.269423   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:37:43.269899   27967 main.go:141] libmachine: (addons-746456) DBG | unable to find current IP address of domain addons-746456 in network mk-addons-746456
	I1104 10:37:43.269926   27967 main.go:141] libmachine: (addons-746456) DBG | I1104 10:37:43.269869   27989 retry.go:31] will retry after 569.474968ms: waiting for machine to come up
	I1104 10:37:43.840617   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:37:43.841057   27967 main.go:141] libmachine: (addons-746456) DBG | unable to find current IP address of domain addons-746456 in network mk-addons-746456
	I1104 10:37:43.841085   27967 main.go:141] libmachine: (addons-746456) DBG | I1104 10:37:43.841009   27989 retry.go:31] will retry after 870.179807ms: waiting for machine to come up
	I1104 10:37:44.712711   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:37:44.713106   27967 main.go:141] libmachine: (addons-746456) DBG | unable to find current IP address of domain addons-746456 in network mk-addons-746456
	I1104 10:37:44.713144   27967 main.go:141] libmachine: (addons-746456) DBG | I1104 10:37:44.713077   27989 retry.go:31] will retry after 776.282678ms: waiting for machine to come up
	I1104 10:37:45.490992   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:37:45.491335   27967 main.go:141] libmachine: (addons-746456) DBG | unable to find current IP address of domain addons-746456 in network mk-addons-746456
	I1104 10:37:45.491363   27967 main.go:141] libmachine: (addons-746456) DBG | I1104 10:37:45.491298   27989 retry.go:31] will retry after 1.478494454s: waiting for machine to come up
	I1104 10:37:46.971872   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:37:46.972283   27967 main.go:141] libmachine: (addons-746456) DBG | unable to find current IP address of domain addons-746456 in network mk-addons-746456
	I1104 10:37:46.972310   27967 main.go:141] libmachine: (addons-746456) DBG | I1104 10:37:46.972242   27989 retry.go:31] will retry after 1.61669354s: waiting for machine to come up
	I1104 10:37:48.590204   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:37:48.590636   27967 main.go:141] libmachine: (addons-746456) DBG | unable to find current IP address of domain addons-746456 in network mk-addons-746456
	I1104 10:37:48.590662   27967 main.go:141] libmachine: (addons-746456) DBG | I1104 10:37:48.590606   27989 retry.go:31] will retry after 1.896747776s: waiting for machine to come up
	I1104 10:37:50.488679   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:37:50.489117   27967 main.go:141] libmachine: (addons-746456) DBG | unable to find current IP address of domain addons-746456 in network mk-addons-746456
	I1104 10:37:50.489145   27967 main.go:141] libmachine: (addons-746456) DBG | I1104 10:37:50.489078   27989 retry.go:31] will retry after 2.7039374s: waiting for machine to come up
	I1104 10:37:53.194165   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:37:53.194620   27967 main.go:141] libmachine: (addons-746456) DBG | unable to find current IP address of domain addons-746456 in network mk-addons-746456
	I1104 10:37:53.194642   27967 main.go:141] libmachine: (addons-746456) DBG | I1104 10:37:53.194576   27989 retry.go:31] will retry after 3.066417746s: waiting for machine to come up
	I1104 10:37:56.263682   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:37:56.264117   27967 main.go:141] libmachine: (addons-746456) DBG | unable to find current IP address of domain addons-746456 in network mk-addons-746456
	I1104 10:37:56.264143   27967 main.go:141] libmachine: (addons-746456) DBG | I1104 10:37:56.264078   27989 retry.go:31] will retry after 3.836132986s: waiting for machine to come up
	I1104 10:38:00.101792   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:38:00.102142   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has current primary IP address 192.168.39.4 and MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:38:00.102188   27967 main.go:141] libmachine: (addons-746456) Found IP for machine: 192.168.39.4
	I1104 10:38:00.102205   27967 main.go:141] libmachine: (addons-746456) Reserving static IP address...
	I1104 10:38:00.102545   27967 main.go:141] libmachine: (addons-746456) DBG | unable to find host DHCP lease matching {name: "addons-746456", mac: "52:54:00:a0:d7:13", ip: "192.168.39.4"} in network mk-addons-746456
	I1104 10:38:00.170807   27967 main.go:141] libmachine: (addons-746456) DBG | Getting to WaitForSSH function...
	I1104 10:38:00.170837   27967 main.go:141] libmachine: (addons-746456) Reserved static IP address: 192.168.39.4
	I1104 10:38:00.170850   27967 main.go:141] libmachine: (addons-746456) Waiting for SSH to be available...
	I1104 10:38:00.173084   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:38:00.173495   27967 main.go:141] libmachine: (addons-746456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:d7:13", ip: ""} in network mk-addons-746456: {Iface:virbr1 ExpiryTime:2024-11-04 11:37:54 +0000 UTC Type:0 Mac:52:54:00:a0:d7:13 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:minikube Clientid:01:52:54:00:a0:d7:13}
	I1104 10:38:00.173523   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined IP address 192.168.39.4 and MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:38:00.173668   27967 main.go:141] libmachine: (addons-746456) DBG | Using SSH client type: external
	I1104 10:38:00.173694   27967 main.go:141] libmachine: (addons-746456) DBG | Using SSH private key: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/addons-746456/id_rsa (-rw-------)
	I1104 10:38:00.173726   27967 main.go:141] libmachine: (addons-746456) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.4 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19906-19898/.minikube/machines/addons-746456/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1104 10:38:00.173743   27967 main.go:141] libmachine: (addons-746456) DBG | About to run SSH command:
	I1104 10:38:00.173756   27967 main.go:141] libmachine: (addons-746456) DBG | exit 0
	I1104 10:38:00.301291   27967 main.go:141] libmachine: (addons-746456) DBG | SSH cmd err, output: <nil>: 
	I1104 10:38:00.301594   27967 main.go:141] libmachine: (addons-746456) KVM machine creation complete!
	I1104 10:38:00.301915   27967 main.go:141] libmachine: (addons-746456) Calling .GetConfigRaw
	I1104 10:38:00.309061   27967 main.go:141] libmachine: (addons-746456) Calling .DriverName
	I1104 10:38:00.309331   27967 main.go:141] libmachine: (addons-746456) Calling .DriverName
	I1104 10:38:00.309504   27967 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1104 10:38:00.309520   27967 main.go:141] libmachine: (addons-746456) Calling .GetState
	I1104 10:38:00.310864   27967 main.go:141] libmachine: Detecting operating system of created instance...
	I1104 10:38:00.310877   27967 main.go:141] libmachine: Waiting for SSH to be available...
	I1104 10:38:00.310882   27967 main.go:141] libmachine: Getting to WaitForSSH function...
	I1104 10:38:00.310887   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHHostname
	I1104 10:38:00.313254   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:38:00.313678   27967 main.go:141] libmachine: (addons-746456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:d7:13", ip: ""} in network mk-addons-746456: {Iface:virbr1 ExpiryTime:2024-11-04 11:37:54 +0000 UTC Type:0 Mac:52:54:00:a0:d7:13 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:addons-746456 Clientid:01:52:54:00:a0:d7:13}
	I1104 10:38:00.313701   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined IP address 192.168.39.4 and MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:38:00.313849   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHPort
	I1104 10:38:00.313994   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHKeyPath
	I1104 10:38:00.314118   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHKeyPath
	I1104 10:38:00.314214   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHUsername
	I1104 10:38:00.314360   27967 main.go:141] libmachine: Using SSH client type: native
	I1104 10:38:00.314540   27967 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.4 22 <nil> <nil>}
	I1104 10:38:00.314552   27967 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1104 10:38:00.424313   27967 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1104 10:38:00.424335   27967 main.go:141] libmachine: Detecting the provisioner...
	I1104 10:38:00.424345   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHHostname
	I1104 10:38:00.426998   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:38:00.427330   27967 main.go:141] libmachine: (addons-746456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:d7:13", ip: ""} in network mk-addons-746456: {Iface:virbr1 ExpiryTime:2024-11-04 11:37:54 +0000 UTC Type:0 Mac:52:54:00:a0:d7:13 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:addons-746456 Clientid:01:52:54:00:a0:d7:13}
	I1104 10:38:00.427357   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined IP address 192.168.39.4 and MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:38:00.427572   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHPort
	I1104 10:38:00.427782   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHKeyPath
	I1104 10:38:00.427985   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHKeyPath
	I1104 10:38:00.428113   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHUsername
	I1104 10:38:00.428290   27967 main.go:141] libmachine: Using SSH client type: native
	I1104 10:38:00.428455   27967 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.4 22 <nil> <nil>}
	I1104 10:38:00.428466   27967 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1104 10:38:00.537913   27967 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1104 10:38:00.538003   27967 main.go:141] libmachine: found compatible host: buildroot
	I1104 10:38:00.538020   27967 main.go:141] libmachine: Provisioning with buildroot...
	I1104 10:38:00.538032   27967 main.go:141] libmachine: (addons-746456) Calling .GetMachineName
	I1104 10:38:00.538296   27967 buildroot.go:166] provisioning hostname "addons-746456"
	I1104 10:38:00.538320   27967 main.go:141] libmachine: (addons-746456) Calling .GetMachineName
	I1104 10:38:00.538519   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHHostname
	I1104 10:38:00.541142   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:38:00.541538   27967 main.go:141] libmachine: (addons-746456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:d7:13", ip: ""} in network mk-addons-746456: {Iface:virbr1 ExpiryTime:2024-11-04 11:37:54 +0000 UTC Type:0 Mac:52:54:00:a0:d7:13 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:addons-746456 Clientid:01:52:54:00:a0:d7:13}
	I1104 10:38:00.541564   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined IP address 192.168.39.4 and MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:38:00.541744   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHPort
	I1104 10:38:00.541923   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHKeyPath
	I1104 10:38:00.542061   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHKeyPath
	I1104 10:38:00.542190   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHUsername
	I1104 10:38:00.542349   27967 main.go:141] libmachine: Using SSH client type: native
	I1104 10:38:00.542511   27967 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.4 22 <nil> <nil>}
	I1104 10:38:00.542524   27967 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-746456 && echo "addons-746456" | sudo tee /etc/hostname
	I1104 10:38:00.665906   27967 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-746456
	
	I1104 10:38:00.665937   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHHostname
	I1104 10:38:00.668558   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:38:00.668858   27967 main.go:141] libmachine: (addons-746456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:d7:13", ip: ""} in network mk-addons-746456: {Iface:virbr1 ExpiryTime:2024-11-04 11:37:54 +0000 UTC Type:0 Mac:52:54:00:a0:d7:13 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:addons-746456 Clientid:01:52:54:00:a0:d7:13}
	I1104 10:38:00.668892   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined IP address 192.168.39.4 and MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:38:00.669014   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHPort
	I1104 10:38:00.669182   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHKeyPath
	I1104 10:38:00.669352   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHKeyPath
	I1104 10:38:00.669497   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHUsername
	I1104 10:38:00.669659   27967 main.go:141] libmachine: Using SSH client type: native
	I1104 10:38:00.669810   27967 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.4 22 <nil> <nil>}
	I1104 10:38:00.669826   27967 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-746456' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-746456/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-746456' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1104 10:38:00.789259   27967 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1104 10:38:00.789290   27967 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19906-19898/.minikube CaCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19906-19898/.minikube}
	I1104 10:38:00.789330   27967 buildroot.go:174] setting up certificates
	I1104 10:38:00.789348   27967 provision.go:84] configureAuth start
	I1104 10:38:00.789361   27967 main.go:141] libmachine: (addons-746456) Calling .GetMachineName
	I1104 10:38:00.789622   27967 main.go:141] libmachine: (addons-746456) Calling .GetIP
	I1104 10:38:00.792365   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:38:00.792728   27967 main.go:141] libmachine: (addons-746456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:d7:13", ip: ""} in network mk-addons-746456: {Iface:virbr1 ExpiryTime:2024-11-04 11:37:54 +0000 UTC Type:0 Mac:52:54:00:a0:d7:13 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:addons-746456 Clientid:01:52:54:00:a0:d7:13}
	I1104 10:38:00.792755   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined IP address 192.168.39.4 and MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:38:00.792970   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHHostname
	I1104 10:38:00.795459   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:38:00.795802   27967 main.go:141] libmachine: (addons-746456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:d7:13", ip: ""} in network mk-addons-746456: {Iface:virbr1 ExpiryTime:2024-11-04 11:37:54 +0000 UTC Type:0 Mac:52:54:00:a0:d7:13 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:addons-746456 Clientid:01:52:54:00:a0:d7:13}
	I1104 10:38:00.795827   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined IP address 192.168.39.4 and MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:38:00.795978   27967 provision.go:143] copyHostCerts
	I1104 10:38:00.796062   27967 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem (1078 bytes)
	I1104 10:38:00.796199   27967 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem (1123 bytes)
	I1104 10:38:00.796283   27967 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem (1679 bytes)
	I1104 10:38:00.796388   27967 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem org=jenkins.addons-746456 san=[127.0.0.1 192.168.39.4 addons-746456 localhost minikube]
	I1104 10:38:00.877715   27967 provision.go:177] copyRemoteCerts
	I1104 10:38:00.877766   27967 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1104 10:38:00.877790   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHHostname
	I1104 10:38:00.880401   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:38:00.880765   27967 main.go:141] libmachine: (addons-746456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:d7:13", ip: ""} in network mk-addons-746456: {Iface:virbr1 ExpiryTime:2024-11-04 11:37:54 +0000 UTC Type:0 Mac:52:54:00:a0:d7:13 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:addons-746456 Clientid:01:52:54:00:a0:d7:13}
	I1104 10:38:00.880793   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined IP address 192.168.39.4 and MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:38:00.880952   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHPort
	I1104 10:38:00.881094   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHKeyPath
	I1104 10:38:00.881270   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHUsername
	I1104 10:38:00.881385   27967 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/addons-746456/id_rsa Username:docker}
	I1104 10:38:00.966856   27967 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1104 10:38:00.989191   27967 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1104 10:38:01.011071   27967 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1104 10:38:01.033496   27967 provision.go:87] duration metric: took 244.13703ms to configureAuth
	I1104 10:38:01.033525   27967 buildroot.go:189] setting minikube options for container-runtime
	I1104 10:38:01.033705   27967 config.go:182] Loaded profile config "addons-746456": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1104 10:38:01.033792   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHHostname
	I1104 10:38:01.036396   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:38:01.036749   27967 main.go:141] libmachine: (addons-746456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:d7:13", ip: ""} in network mk-addons-746456: {Iface:virbr1 ExpiryTime:2024-11-04 11:37:54 +0000 UTC Type:0 Mac:52:54:00:a0:d7:13 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:addons-746456 Clientid:01:52:54:00:a0:d7:13}
	I1104 10:38:01.036774   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined IP address 192.168.39.4 and MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:38:01.036943   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHPort
	I1104 10:38:01.037095   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHKeyPath
	I1104 10:38:01.037222   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHKeyPath
	I1104 10:38:01.037360   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHUsername
	I1104 10:38:01.037516   27967 main.go:141] libmachine: Using SSH client type: native
	I1104 10:38:01.037666   27967 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.4 22 <nil> <nil>}
	I1104 10:38:01.037680   27967 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1104 10:38:01.444556   27967 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1104 10:38:01.444581   27967 main.go:141] libmachine: Checking connection to Docker...
	I1104 10:38:01.444589   27967 main.go:141] libmachine: (addons-746456) Calling .GetURL
	I1104 10:38:01.445930   27967 main.go:141] libmachine: (addons-746456) DBG | Using libvirt version 6000000
	I1104 10:38:01.447878   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:38:01.448207   27967 main.go:141] libmachine: (addons-746456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:d7:13", ip: ""} in network mk-addons-746456: {Iface:virbr1 ExpiryTime:2024-11-04 11:37:54 +0000 UTC Type:0 Mac:52:54:00:a0:d7:13 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:addons-746456 Clientid:01:52:54:00:a0:d7:13}
	I1104 10:38:01.448237   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined IP address 192.168.39.4 and MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:38:01.448373   27967 main.go:141] libmachine: Docker is up and running!
	I1104 10:38:01.448387   27967 main.go:141] libmachine: Reticulating splines...
	I1104 10:38:01.448394   27967 client.go:171] duration metric: took 21.975483383s to LocalClient.Create
	I1104 10:38:01.448416   27967 start.go:167] duration metric: took 21.975565515s to libmachine.API.Create "addons-746456"
	I1104 10:38:01.448425   27967 start.go:293] postStartSetup for "addons-746456" (driver="kvm2")
	I1104 10:38:01.448444   27967 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1104 10:38:01.448459   27967 main.go:141] libmachine: (addons-746456) Calling .DriverName
	I1104 10:38:01.448722   27967 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1104 10:38:01.448750   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHHostname
	I1104 10:38:01.450692   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:38:01.450971   27967 main.go:141] libmachine: (addons-746456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:d7:13", ip: ""} in network mk-addons-746456: {Iface:virbr1 ExpiryTime:2024-11-04 11:37:54 +0000 UTC Type:0 Mac:52:54:00:a0:d7:13 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:addons-746456 Clientid:01:52:54:00:a0:d7:13}
	I1104 10:38:01.450991   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined IP address 192.168.39.4 and MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:38:01.451136   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHPort
	I1104 10:38:01.451290   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHKeyPath
	I1104 10:38:01.451390   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHUsername
	I1104 10:38:01.451490   27967 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/addons-746456/id_rsa Username:docker}
	I1104 10:38:01.535037   27967 ssh_runner.go:195] Run: cat /etc/os-release
	I1104 10:38:01.539157   27967 info.go:137] Remote host: Buildroot 2023.02.9
	I1104 10:38:01.539184   27967 filesync.go:126] Scanning /home/jenkins/minikube-integration/19906-19898/.minikube/addons for local assets ...
	I1104 10:38:01.539260   27967 filesync.go:126] Scanning /home/jenkins/minikube-integration/19906-19898/.minikube/files for local assets ...
	I1104 10:38:01.539289   27967 start.go:296] duration metric: took 90.850997ms for postStartSetup
	I1104 10:38:01.539327   27967 main.go:141] libmachine: (addons-746456) Calling .GetConfigRaw
	I1104 10:38:01.539870   27967 main.go:141] libmachine: (addons-746456) Calling .GetIP
	I1104 10:38:01.542539   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:38:01.542833   27967 main.go:141] libmachine: (addons-746456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:d7:13", ip: ""} in network mk-addons-746456: {Iface:virbr1 ExpiryTime:2024-11-04 11:37:54 +0000 UTC Type:0 Mac:52:54:00:a0:d7:13 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:addons-746456 Clientid:01:52:54:00:a0:d7:13}
	I1104 10:38:01.542857   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined IP address 192.168.39.4 and MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:38:01.543087   27967 profile.go:143] Saving config to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/addons-746456/config.json ...
	I1104 10:38:01.543252   27967 start.go:128] duration metric: took 22.088404679s to createHost
	I1104 10:38:01.543274   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHHostname
	I1104 10:38:01.545474   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:38:01.545712   27967 main.go:141] libmachine: (addons-746456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:d7:13", ip: ""} in network mk-addons-746456: {Iface:virbr1 ExpiryTime:2024-11-04 11:37:54 +0000 UTC Type:0 Mac:52:54:00:a0:d7:13 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:addons-746456 Clientid:01:52:54:00:a0:d7:13}
	I1104 10:38:01.545747   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined IP address 192.168.39.4 and MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:38:01.545854   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHPort
	I1104 10:38:01.546025   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHKeyPath
	I1104 10:38:01.546127   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHKeyPath
	I1104 10:38:01.546238   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHUsername
	I1104 10:38:01.546374   27967 main.go:141] libmachine: Using SSH client type: native
	I1104 10:38:01.546525   27967 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.4 22 <nil> <nil>}
	I1104 10:38:01.546545   27967 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1104 10:38:01.653337   27967 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730716681.627964793
	
	I1104 10:38:01.653364   27967 fix.go:216] guest clock: 1730716681.627964793
	I1104 10:38:01.653374   27967 fix.go:229] Guest: 2024-11-04 10:38:01.627964793 +0000 UTC Remote: 2024-11-04 10:38:01.543264431 +0000 UTC m=+22.193535591 (delta=84.700362ms)
	I1104 10:38:01.653439   27967 fix.go:200] guest clock delta is within tolerance: 84.700362ms
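	(For reference, the delta above is simply guest minus remote: 1730716681.627964793 − 1730716681.543264431 = 0.084700362 s, i.e. the 84.700362ms that fix.go reports as within tolerance.)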
	I1104 10:38:01.653446   27967 start.go:83] releasing machines lock for "addons-746456", held for 22.198667431s
	I1104 10:38:01.653477   27967 main.go:141] libmachine: (addons-746456) Calling .DriverName
	I1104 10:38:01.653741   27967 main.go:141] libmachine: (addons-746456) Calling .GetIP
	I1104 10:38:01.656183   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:38:01.656615   27967 main.go:141] libmachine: (addons-746456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:d7:13", ip: ""} in network mk-addons-746456: {Iface:virbr1 ExpiryTime:2024-11-04 11:37:54 +0000 UTC Type:0 Mac:52:54:00:a0:d7:13 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:addons-746456 Clientid:01:52:54:00:a0:d7:13}
	I1104 10:38:01.656633   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined IP address 192.168.39.4 and MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:38:01.656822   27967 main.go:141] libmachine: (addons-746456) Calling .DriverName
	I1104 10:38:01.657265   27967 main.go:141] libmachine: (addons-746456) Calling .DriverName
	I1104 10:38:01.657436   27967 main.go:141] libmachine: (addons-746456) Calling .DriverName
	I1104 10:38:01.657529   27967 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1104 10:38:01.657574   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHHostname
	I1104 10:38:01.657632   27967 ssh_runner.go:195] Run: cat /version.json
	I1104 10:38:01.657657   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHHostname
	I1104 10:38:01.659910   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:38:01.660194   27967 main.go:141] libmachine: (addons-746456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:d7:13", ip: ""} in network mk-addons-746456: {Iface:virbr1 ExpiryTime:2024-11-04 11:37:54 +0000 UTC Type:0 Mac:52:54:00:a0:d7:13 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:addons-746456 Clientid:01:52:54:00:a0:d7:13}
	I1104 10:38:01.660230   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined IP address 192.168.39.4 and MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:38:01.660387   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHPort
	I1104 10:38:01.660390   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:38:01.660566   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHKeyPath
	I1104 10:38:01.660699   27967 main.go:141] libmachine: (addons-746456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:d7:13", ip: ""} in network mk-addons-746456: {Iface:virbr1 ExpiryTime:2024-11-04 11:37:54 +0000 UTC Type:0 Mac:52:54:00:a0:d7:13 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:addons-746456 Clientid:01:52:54:00:a0:d7:13}
	I1104 10:38:01.660717   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHUsername
	I1104 10:38:01.660731   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined IP address 192.168.39.4 and MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:38:01.660869   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHPort
	I1104 10:38:01.660865   27967 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/addons-746456/id_rsa Username:docker}
	I1104 10:38:01.661015   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHKeyPath
	I1104 10:38:01.661136   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHUsername
	I1104 10:38:01.661324   27967 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/addons-746456/id_rsa Username:docker}
	I1104 10:38:01.737576   27967 ssh_runner.go:195] Run: systemctl --version
	I1104 10:38:01.763213   27967 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1104 10:38:01.921943   27967 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1104 10:38:01.927445   27967 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1104 10:38:01.927516   27967 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1104 10:38:01.941997   27967 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1104 10:38:01.942023   27967 start.go:495] detecting cgroup driver to use...
	I1104 10:38:01.942090   27967 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1104 10:38:01.956679   27967 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1104 10:38:01.969679   27967 docker.go:217] disabling cri-docker service (if available) ...
	I1104 10:38:01.969736   27967 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1104 10:38:01.982626   27967 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1104 10:38:01.995194   27967 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1104 10:38:02.112459   27967 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1104 10:38:02.251760   27967 docker.go:233] disabling docker service ...
	I1104 10:38:02.251838   27967 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1104 10:38:02.265112   27967 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1104 10:38:02.277265   27967 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1104 10:38:02.420894   27967 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1104 10:38:02.543082   27967 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1104 10:38:02.556733   27967 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1104 10:38:02.574799   27967 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1104 10:38:02.574857   27967 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 10:38:02.584477   27967 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1104 10:38:02.584546   27967 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 10:38:02.594273   27967 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 10:38:02.603748   27967 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 10:38:02.612996   27967 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1104 10:38:02.622244   27967 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 10:38:02.631654   27967 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 10:38:02.647004   27967 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 10:38:02.656322   27967 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1104 10:38:02.664802   27967 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1104 10:38:02.664859   27967 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1104 10:38:02.675911   27967 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1104 10:38:02.684891   27967 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1104 10:38:02.804404   27967 ssh_runner.go:195] Run: sudo systemctl restart crio
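	Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with the minikube pause image, the cgroupfs cgroup manager, a pod-scoped conmon cgroup, and the unprivileged-port sysctl. A quick way to confirm the result on the guest (the grep below is our own sketch, not part of the minikube flow) would be:

		sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
		# expected to show, per the commands above:
		#   pause_image = "registry.k8s.io/pause:3.10"
		#   cgroup_manager = "cgroupfs"
		#   conmon_cgroup = "pod"
		#   "net.ipv4.ip_unprivileged_port_start=0",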
	I1104 10:38:02.886732   27967 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1104 10:38:02.886811   27967 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1104 10:38:02.890976   27967 start.go:563] Will wait 60s for crictl version
	I1104 10:38:02.891042   27967 ssh_runner.go:195] Run: which crictl
	I1104 10:38:02.894408   27967 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1104 10:38:02.926682   27967 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1104 10:38:02.926793   27967 ssh_runner.go:195] Run: crio --version
	I1104 10:38:02.951789   27967 ssh_runner.go:195] Run: crio --version
	I1104 10:38:02.979627   27967 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1104 10:38:02.980809   27967 main.go:141] libmachine: (addons-746456) Calling .GetIP
	I1104 10:38:02.984143   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:38:02.984516   27967 main.go:141] libmachine: (addons-746456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:d7:13", ip: ""} in network mk-addons-746456: {Iface:virbr1 ExpiryTime:2024-11-04 11:37:54 +0000 UTC Type:0 Mac:52:54:00:a0:d7:13 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:addons-746456 Clientid:01:52:54:00:a0:d7:13}
	I1104 10:38:02.984546   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined IP address 192.168.39.4 and MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:38:02.984700   27967 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1104 10:38:02.988379   27967 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1104 10:38:02.999730   27967 kubeadm.go:883] updating cluster {Name:addons-746456 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-746456 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.4 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1104 10:38:02.999851   27967 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1104 10:38:02.999906   27967 ssh_runner.go:195] Run: sudo crictl images --output json
	I1104 10:38:03.028235   27967 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1104 10:38:03.028296   27967 ssh_runner.go:195] Run: which lz4
	I1104 10:38:03.031786   27967 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1104 10:38:03.035391   27967 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1104 10:38:03.035432   27967 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1104 10:38:04.112774   27967 crio.go:462] duration metric: took 1.081023392s to copy over tarball
	I1104 10:38:04.112837   27967 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1104 10:38:06.183806   27967 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.070941022s)
	I1104 10:38:06.183836   27967 crio.go:469] duration metric: took 2.07103873s to extract the tarball
	I1104 10:38:06.183846   27967 ssh_runner.go:146] rm: /preloaded.tar.lz4
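	(For scale: the 392,059,347-byte preload is roughly 374 MiB, so the 1.08s copy above works out to about 345 MiB/s over the host-to-guest link, and the 2.07s extraction to about 180 MiB/s.)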
	I1104 10:38:06.219839   27967 ssh_runner.go:195] Run: sudo crictl images --output json
	I1104 10:38:06.260150   27967 crio.go:514] all images are preloaded for cri-o runtime.
	I1104 10:38:06.260177   27967 cache_images.go:84] Images are preloaded, skipping loading
	I1104 10:38:06.260184   27967 kubeadm.go:934] updating node { 192.168.39.4 8443 v1.31.2 crio true true} ...
	I1104 10:38:06.260308   27967 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-746456 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:addons-746456 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1104 10:38:06.260398   27967 ssh_runner.go:195] Run: crio config
	I1104 10:38:06.304511   27967 cni.go:84] Creating CNI manager for ""
	I1104 10:38:06.304535   27967 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1104 10:38:06.304545   27967 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1104 10:38:06.304571   27967 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.4 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-746456 NodeName:addons-746456 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.4"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.4 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1104 10:38:06.304715   27967 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.4
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-746456"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.4"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.4"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1104 10:38:06.304788   27967 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1104 10:38:06.314319   27967 binaries.go:44] Found k8s binaries, skipping transfer
	I1104 10:38:06.314382   27967 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1104 10:38:06.323358   27967 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I1104 10:38:06.338806   27967 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1104 10:38:06.353567   27967 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2287 bytes)
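	The 2287-byte kubeadm.yaml.new written here is the kubeadm config rendered above. If one wanted to sanity-check it by hand before init runs, a hedged option (assuming the "kubeadm config validate" subcommand available in recent kubeadm releases, and reusing the versioned binary path from this run) is:

		sudo /var/lib/minikube/binaries/v1.31.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new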
	I1104 10:38:06.368096   27967 ssh_runner.go:195] Run: grep 192.168.39.4	control-plane.minikube.internal$ /etc/hosts
	I1104 10:38:06.371617   27967 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.4	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1104 10:38:06.382524   27967 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1104 10:38:06.508109   27967 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1104 10:38:06.524650   27967 certs.go:68] Setting up /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/addons-746456 for IP: 192.168.39.4
	I1104 10:38:06.524676   27967 certs.go:194] generating shared ca certs ...
	I1104 10:38:06.524696   27967 certs.go:226] acquiring lock for ca certs: {Name:mk4fb0469da6697654d0290fd25edddd463f47c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 10:38:06.524856   27967 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.key
	I1104 10:38:06.648082   27967 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt ...
	I1104 10:38:06.648110   27967 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt: {Name:mkc60cfcc3a05532b876cd4acbbfca8a1c8c1878 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 10:38:06.648268   27967 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19906-19898/.minikube/ca.key ...
	I1104 10:38:06.648279   27967 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19906-19898/.minikube/ca.key: {Name:mk3ec4fc3b2268fe8854a1415b7cf1496b552554 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 10:38:06.648352   27967 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.key
	I1104 10:38:06.718168   27967 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.crt ...
	I1104 10:38:06.718198   27967 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.crt: {Name:mke06fb1e1d2874e54d58c110876e45ff172f549 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 10:38:06.718339   27967 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.key ...
	I1104 10:38:06.718348   27967 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.key: {Name:mk2554b1aa340d8e1073dbc7bb4aee16976c2f8b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 10:38:06.718411   27967 certs.go:256] generating profile certs ...
	I1104 10:38:06.718458   27967 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/addons-746456/client.key
	I1104 10:38:06.718471   27967 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/addons-746456/client.crt with IP's: []
	I1104 10:38:07.014113   27967 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/addons-746456/client.crt ...
	I1104 10:38:07.014162   27967 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/addons-746456/client.crt: {Name:mk2dbf6749598cb60b7601bf42ced4198096dc20 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 10:38:07.014361   27967 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/addons-746456/client.key ...
	I1104 10:38:07.014393   27967 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/addons-746456/client.key: {Name:mkdb90e1f72b7bf0594540208f4780ec280e3769 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 10:38:07.014555   27967 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/addons-746456/apiserver.key.40dc9019
	I1104 10:38:07.014598   27967 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/addons-746456/apiserver.crt.40dc9019 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.4]
	I1104 10:38:07.178824   27967 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/addons-746456/apiserver.crt.40dc9019 ...
	I1104 10:38:07.178855   27967 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/addons-746456/apiserver.crt.40dc9019: {Name:mk9e26a02ded78b5d0e82a92927b64b299da376d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 10:38:07.179038   27967 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/addons-746456/apiserver.key.40dc9019 ...
	I1104 10:38:07.179055   27967 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/addons-746456/apiserver.key.40dc9019: {Name:mk52651da068c7b40180a70d72cffe2b6bf68fdf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 10:38:07.179161   27967 certs.go:381] copying /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/addons-746456/apiserver.crt.40dc9019 -> /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/addons-746456/apiserver.crt
	I1104 10:38:07.179255   27967 certs.go:385] copying /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/addons-746456/apiserver.key.40dc9019 -> /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/addons-746456/apiserver.key
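	The apiserver profile cert assembled here was generated with the SAN IPs listed above (10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.4). A hedged way to inspect them (assuming OpenSSL 1.1.1+ for the -ext flag) is:

		openssl x509 -noout -ext subjectAltName -in /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/addons-746456/apiserver.crt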
	I1104 10:38:07.179305   27967 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/addons-746456/proxy-client.key
	I1104 10:38:07.179322   27967 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/addons-746456/proxy-client.crt with IP's: []
	I1104 10:38:07.540538   27967 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/addons-746456/proxy-client.crt ...
	I1104 10:38:07.540570   27967 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/addons-746456/proxy-client.crt: {Name:mk6aa7552ca33368f073a98292a8c7aa53f742b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 10:38:07.540754   27967 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/addons-746456/proxy-client.key ...
	I1104 10:38:07.540768   27967 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/addons-746456/proxy-client.key: {Name:mk8094241709662feadffcd36b5b489ca95631e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 10:38:07.540962   27967 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem (1675 bytes)
	I1104 10:38:07.541000   27967 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem (1078 bytes)
	I1104 10:38:07.541021   27967 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem (1123 bytes)
	I1104 10:38:07.541040   27967 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem (1679 bytes)
	I1104 10:38:07.541620   27967 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1104 10:38:07.566108   27967 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1104 10:38:07.588247   27967 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1104 10:38:07.609550   27967 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1104 10:38:07.631722   27967 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/addons-746456/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1104 10:38:07.653441   27967 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/addons-746456/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1104 10:38:07.674061   27967 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/addons-746456/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1104 10:38:07.694539   27967 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/addons-746456/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1104 10:38:07.715805   27967 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1104 10:38:07.737173   27967 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1104 10:38:07.752395   27967 ssh_runner.go:195] Run: openssl version
	I1104 10:38:07.757975   27967 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1104 10:38:07.768235   27967 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1104 10:38:07.772516   27967 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  4 10:38 /usr/share/ca-certificates/minikubeCA.pem
	I1104 10:38:07.772579   27967 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1104 10:38:07.777968   27967 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
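	The b5213941.0 link name is the OpenSSL subject hash of minikubeCA.pem, which is what the openssl x509 -hash run above computes. An equivalent manual sequence, as a sketch of the same idea rather than minikube's own code, would be:

		HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # b5213941 for this CA, per the log above
		sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"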
	I1104 10:38:07.787877   27967 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1104 10:38:07.791626   27967 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1104 10:38:07.791680   27967 kubeadm.go:392] StartCluster: {Name:addons-746456 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-746456 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.4 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1104 10:38:07.791768   27967 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1104 10:38:07.791808   27967 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1104 10:38:07.824226   27967 cri.go:89] found id: ""
	I1104 10:38:07.824299   27967 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1104 10:38:07.833610   27967 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1104 10:38:07.846003   27967 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1104 10:38:07.858921   27967 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1104 10:38:07.858946   27967 kubeadm.go:157] found existing configuration files:
	
	I1104 10:38:07.858995   27967 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1104 10:38:07.869572   27967 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1104 10:38:07.869628   27967 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1104 10:38:07.880432   27967 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1104 10:38:07.889302   27967 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1104 10:38:07.889368   27967 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1104 10:38:07.898007   27967 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1104 10:38:07.906107   27967 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1104 10:38:07.906156   27967 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1104 10:38:07.914542   27967 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1104 10:38:07.922627   27967 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1104 10:38:07.922682   27967 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1104 10:38:07.933698   27967 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1104 10:38:08.106609   27967 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1104 10:38:17.668667   27967 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1104 10:38:17.668742   27967 kubeadm.go:310] [preflight] Running pre-flight checks
	I1104 10:38:17.668854   27967 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1104 10:38:17.668981   27967 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1104 10:38:17.669118   27967 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1104 10:38:17.669209   27967 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1104 10:38:17.670831   27967 out.go:235]   - Generating certificates and keys ...
	I1104 10:38:17.670938   27967 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1104 10:38:17.671032   27967 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1104 10:38:17.671139   27967 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1104 10:38:17.671235   27967 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1104 10:38:17.671321   27967 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1104 10:38:17.671402   27967 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1104 10:38:17.671511   27967 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1104 10:38:17.671674   27967 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-746456 localhost] and IPs [192.168.39.4 127.0.0.1 ::1]
	I1104 10:38:17.671749   27967 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1104 10:38:17.671905   27967 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-746456 localhost] and IPs [192.168.39.4 127.0.0.1 ::1]
	I1104 10:38:17.671993   27967 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1104 10:38:17.672093   27967 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1104 10:38:17.672185   27967 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1104 10:38:17.672276   27967 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1104 10:38:17.672355   27967 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1104 10:38:17.672434   27967 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1104 10:38:17.672520   27967 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1104 10:38:17.672582   27967 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1104 10:38:17.672635   27967 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1104 10:38:17.672707   27967 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1104 10:38:17.672766   27967 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1104 10:38:17.674272   27967 out.go:235]   - Booting up control plane ...
	I1104 10:38:17.674364   27967 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1104 10:38:17.674440   27967 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1104 10:38:17.674542   27967 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1104 10:38:17.674699   27967 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1104 10:38:17.674874   27967 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1104 10:38:17.674945   27967 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1104 10:38:17.675091   27967 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1104 10:38:17.675257   27967 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1104 10:38:17.675368   27967 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001782742s
	I1104 10:38:17.675465   27967 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1104 10:38:17.675542   27967 kubeadm.go:310] [api-check] The API server is healthy after 5.002028453s
	I1104 10:38:17.675680   27967 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1104 10:38:17.675807   27967 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1104 10:38:17.675878   27967 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1104 10:38:17.676105   27967 kubeadm.go:310] [mark-control-plane] Marking the node addons-746456 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1104 10:38:17.676183   27967 kubeadm.go:310] [bootstrap-token] Using token: hati8t.k5vc0b0z4h6bkmvm
	I1104 10:38:17.678284   27967 out.go:235]   - Configuring RBAC rules ...
	I1104 10:38:17.678410   27967 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1104 10:38:17.678508   27967 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1104 10:38:17.678721   27967 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1104 10:38:17.678881   27967 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1104 10:38:17.679000   27967 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1104 10:38:17.679143   27967 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1104 10:38:17.679303   27967 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1104 10:38:17.679364   27967 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1104 10:38:17.679428   27967 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1104 10:38:17.679437   27967 kubeadm.go:310] 
	I1104 10:38:17.679511   27967 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1104 10:38:17.679520   27967 kubeadm.go:310] 
	I1104 10:38:17.679626   27967 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1104 10:38:17.679635   27967 kubeadm.go:310] 
	I1104 10:38:17.679657   27967 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1104 10:38:17.679707   27967 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1104 10:38:17.679783   27967 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1104 10:38:17.679796   27967 kubeadm.go:310] 
	I1104 10:38:17.679874   27967 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1104 10:38:17.679887   27967 kubeadm.go:310] 
	I1104 10:38:17.679957   27967 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1104 10:38:17.679969   27967 kubeadm.go:310] 
	I1104 10:38:17.680044   27967 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1104 10:38:17.680148   27967 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1104 10:38:17.680250   27967 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1104 10:38:17.680265   27967 kubeadm.go:310] 
	I1104 10:38:17.680384   27967 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1104 10:38:17.680506   27967 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1104 10:38:17.680519   27967 kubeadm.go:310] 
	I1104 10:38:17.680627   27967 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token hati8t.k5vc0b0z4h6bkmvm \
	I1104 10:38:17.680781   27967 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:95833ff41306ac800b975e28f2da3aedc13f83db6865dfb0ce3f10d781d75fd9 \
	I1104 10:38:17.680819   27967 kubeadm.go:310] 	--control-plane 
	I1104 10:38:17.680829   27967 kubeadm.go:310] 
	I1104 10:38:17.680933   27967 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1104 10:38:17.680944   27967 kubeadm.go:310] 
	I1104 10:38:17.681057   27967 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token hati8t.k5vc0b0z4h6bkmvm \
	I1104 10:38:17.681184   27967 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:95833ff41306ac800b975e28f2da3aedc13f83db6865dfb0ce3f10d781d75fd9 
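The join commands above are verbatim kubeadm output; for a single-node minikube profile they are informational only. As a rough sanity check not captured in this log, the admin kubeconfig kubeadm just wrote can be queried directly on the node:

	sudo KUBECONFIG=/etc/kubernetes/admin.conf kubectl get nodes -o wide
	sudo KUBECONFIG=/etc/kubernetes/admin.conf kubectl -n kube-system get pods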
	I1104 10:38:17.681196   27967 cni.go:84] Creating CNI manager for ""
	I1104 10:38:17.681203   27967 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1104 10:38:17.683636   27967 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1104 10:38:17.684930   27967 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1104 10:38:17.697356   27967 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
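The 496-byte /etc/cni/net.d/1-k8s.conflist copied above is not reproduced in the log. A hand-written conflist of the same general shape (field values illustrative, written to a scratch path rather than minikube's actual file) would look roughly like:

	cat > /tmp/bridge-example.conflist <<'EOF'
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isGateway": true,
	      "ipMasq": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF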
	I1104 10:38:17.716046   27967 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1104 10:38:17.716195   27967 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1104 10:38:17.716224   27967 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-746456 minikube.k8s.io/updated_at=2024_11_04T10_38_17_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=c03dd974d73f8853a1a57928c124797a5ae24dc4 minikube.k8s.io/name=addons-746456 minikube.k8s.io/primary=true
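The two kubectl invocations above bind kube-system's default service account to cluster-admin and stamp the node with minikube's bookkeeping labels. Outside the test harness the result can be inspected with commands along these lines (illustrative, not part of this run):

	kubectl --context addons-746456 get clusterrolebinding minikube-rbac -o wide
	kubectl --context addons-746456 get node addons-746456 --show-labels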
	I1104 10:38:17.838363   27967 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1104 10:38:17.838366   27967 ops.go:34] apiserver oom_adj: -16
	I1104 10:38:18.339033   27967 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1104 10:38:18.839016   27967 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1104 10:38:19.339277   27967 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1104 10:38:19.839188   27967 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1104 10:38:20.339076   27967 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1104 10:38:20.839029   27967 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1104 10:38:21.338431   27967 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1104 10:38:21.838926   27967 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1104 10:38:21.914196   27967 kubeadm.go:1113] duration metric: took 4.198038732s to wait for elevateKubeSystemPrivileges
	I1104 10:38:21.914239   27967 kubeadm.go:394] duration metric: took 14.122562515s to StartCluster
	I1104 10:38:21.914261   27967 settings.go:142] acquiring lock: {Name:mk5550833c1f1a4ab4fbb2bf42ff8bd7f6341220 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 10:38:21.914409   27967 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19906-19898/kubeconfig
	I1104 10:38:21.914766   27967 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19906-19898/kubeconfig: {Name:mk164657c178e6abe086acb5c1cadb969cb2b0cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 10:38:21.914950   27967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1104 10:38:21.914976   27967 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.4 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1104 10:38:21.915030   27967 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
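The toEnable map above is the resolved addon set for this profile. In interactive use the same toggles are driven through the minikube addons CLI, roughly as follows (shown for orientation; these commands were not executed in this run):

	minikube addons list -p addons-746456
	minikube addons enable metrics-server -p addons-746456
	minikube addons disable volcano -p addons-746456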
	I1104 10:38:21.915167   27967 addons.go:69] Setting yakd=true in profile "addons-746456"
	I1104 10:38:21.915173   27967 config.go:182] Loaded profile config "addons-746456": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1104 10:38:21.915179   27967 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-746456"
	I1104 10:38:21.915199   27967 addons.go:234] Setting addon amd-gpu-device-plugin=true in "addons-746456"
	I1104 10:38:21.915199   27967 addons.go:69] Setting cloud-spanner=true in profile "addons-746456"
	I1104 10:38:21.915209   27967 addons.go:69] Setting gcp-auth=true in profile "addons-746456"
	I1104 10:38:21.915219   27967 addons.go:234] Setting addon cloud-spanner=true in "addons-746456"
	I1104 10:38:21.915235   27967 addons.go:69] Setting volcano=true in profile "addons-746456"
	I1104 10:38:21.915200   27967 addons.go:234] Setting addon yakd=true in "addons-746456"
	I1104 10:38:21.915245   27967 addons.go:69] Setting storage-provisioner=true in profile "addons-746456"
	I1104 10:38:21.915241   27967 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-746456"
	I1104 10:38:21.915252   27967 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-746456"
	I1104 10:38:21.915256   27967 host.go:66] Checking if "addons-746456" exists ...
	I1104 10:38:21.915260   27967 addons.go:234] Setting addon storage-provisioner=true in "addons-746456"
	I1104 10:38:21.915262   27967 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-746456"
	I1104 10:38:21.915263   27967 host.go:66] Checking if "addons-746456" exists ...
	I1104 10:38:21.915258   27967 addons.go:69] Setting volumesnapshots=true in profile "addons-746456"
	I1104 10:38:21.915282   27967 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-746456"
	I1104 10:38:21.915286   27967 addons.go:234] Setting addon volumesnapshots=true in "addons-746456"
	I1104 10:38:21.915288   27967 host.go:66] Checking if "addons-746456" exists ...
	I1104 10:38:21.915303   27967 host.go:66] Checking if "addons-746456" exists ...
	I1104 10:38:21.915315   27967 host.go:66] Checking if "addons-746456" exists ...
	I1104 10:38:21.915225   27967 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-746456"
	I1104 10:38:21.915237   27967 host.go:66] Checking if "addons-746456" exists ...
	I1104 10:38:21.915345   27967 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-746456"
	I1104 10:38:21.915556   27967 addons.go:69] Setting inspektor-gadget=true in profile "addons-746456"
	I1104 10:38:21.915573   27967 addons.go:234] Setting addon inspektor-gadget=true in "addons-746456"
	I1104 10:38:21.915614   27967 host.go:66] Checking if "addons-746456" exists ...
	I1104 10:38:21.915170   27967 addons.go:69] Setting default-storageclass=true in profile "addons-746456"
	I1104 10:38:21.915715   27967 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-746456"
	I1104 10:38:21.915731   27967 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 10:38:21.915730   27967 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 10:38:21.915740   27967 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 10:38:21.915740   27967 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 10:38:21.915731   27967 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 10:38:21.915746   27967 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 10:38:21.915330   27967 host.go:66] Checking if "addons-746456" exists ...
	I1104 10:38:21.915755   27967 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 10:38:21.915758   27967 addons.go:69] Setting ingress-dns=true in profile "addons-746456"
	I1104 10:38:21.915761   27967 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 10:38:21.915247   27967 addons.go:234] Setting addon volcano=true in "addons-746456"
	I1104 10:38:21.915770   27967 addons.go:234] Setting addon ingress-dns=true in "addons-746456"
	I1104 10:38:21.915779   27967 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 10:38:21.915789   27967 host.go:66] Checking if "addons-746456" exists ...
	I1104 10:38:21.915800   27967 host.go:66] Checking if "addons-746456" exists ...
	I1104 10:38:21.915826   27967 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 10:38:21.916064   27967 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 10:38:21.916074   27967 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 10:38:21.916083   27967 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 10:38:21.916098   27967 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 10:38:21.916106   27967 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 10:38:21.916125   27967 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 10:38:21.916133   27967 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 10:38:21.916140   27967 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 10:38:21.915235   27967 addons.go:69] Setting registry=true in profile "addons-746456"
	I1104 10:38:21.915761   27967 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 10:38:21.916163   27967 addons.go:234] Setting addon registry=true in "addons-746456"
	I1104 10:38:21.915746   27967 addons.go:69] Setting ingress=true in profile "addons-746456"
	I1104 10:38:21.916170   27967 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 10:38:21.916177   27967 addons.go:234] Setting addon ingress=true in "addons-746456"
	I1104 10:38:21.915227   27967 mustload.go:65] Loading cluster: addons-746456
	I1104 10:38:21.916184   27967 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 10:38:21.916186   27967 addons.go:69] Setting metrics-server=true in profile "addons-746456"
	I1104 10:38:21.916197   27967 addons.go:234] Setting addon metrics-server=true in "addons-746456"
	I1104 10:38:21.916210   27967 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 10:38:21.916223   27967 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 10:38:21.916328   27967 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 10:38:21.916488   27967 host.go:66] Checking if "addons-746456" exists ...
	I1104 10:38:21.916563   27967 host.go:66] Checking if "addons-746456" exists ...
	I1104 10:38:21.916743   27967 host.go:66] Checking if "addons-746456" exists ...
	I1104 10:38:21.916898   27967 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 10:38:21.916930   27967 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 10:38:21.917115   27967 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 10:38:21.917134   27967 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 10:38:21.917142   27967 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 10:38:21.917157   27967 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 10:38:21.921719   27967 out.go:177] * Verifying Kubernetes components...
	I1104 10:38:21.923510   27967 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1104 10:38:21.941410   27967 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40645
	I1104 10:38:21.941579   27967 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35975
	I1104 10:38:21.941646   27967 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42159
	I1104 10:38:21.941709   27967 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46679
	I1104 10:38:21.941762   27967 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40653
	I1104 10:38:21.941816   27967 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41923
	I1104 10:38:21.942274   27967 main.go:141] libmachine: () Calling .GetVersion
	I1104 10:38:21.942387   27967 main.go:141] libmachine: () Calling .GetVersion
	I1104 10:38:21.942448   27967 main.go:141] libmachine: () Calling .GetVersion
	I1104 10:38:21.942498   27967 main.go:141] libmachine: () Calling .GetVersion
	I1104 10:38:21.942635   27967 main.go:141] libmachine: () Calling .GetVersion
	I1104 10:38:21.942703   27967 main.go:141] libmachine: () Calling .GetVersion
	I1104 10:38:21.942878   27967 main.go:141] libmachine: Using API Version  1
	I1104 10:38:21.942896   27967 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 10:38:21.943002   27967 main.go:141] libmachine: Using API Version  1
	I1104 10:38:21.943016   27967 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 10:38:21.943108   27967 main.go:141] libmachine: Using API Version  1
	I1104 10:38:21.943118   27967 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 10:38:21.943206   27967 main.go:141] libmachine: Using API Version  1
	I1104 10:38:21.943231   27967 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 10:38:21.943325   27967 main.go:141] libmachine: Using API Version  1
	I1104 10:38:21.943336   27967 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 10:38:21.943378   27967 main.go:141] libmachine: () Calling .GetMachineName
	I1104 10:38:21.943414   27967 main.go:141] libmachine: () Calling .GetMachineName
	I1104 10:38:21.943450   27967 main.go:141] libmachine: () Calling .GetMachineName
	I1104 10:38:21.943787   27967 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 10:38:21.943810   27967 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 10:38:21.944985   27967 config.go:182] Loaded profile config "addons-746456": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1104 10:38:21.945219   27967 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 10:38:21.945266   27967 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 10:38:21.961540   27967 main.go:141] libmachine: () Calling .GetMachineName
	I1104 10:38:21.961614   27967 main.go:141] libmachine: Using API Version  1
	I1104 10:38:21.961639   27967 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 10:38:21.961733   27967 main.go:141] libmachine: () Calling .GetMachineName
	I1104 10:38:21.962011   27967 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 10:38:21.962016   27967 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 10:38:21.962038   27967 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 10:38:21.962077   27967 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 10:38:21.962252   27967 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 10:38:21.962267   27967 main.go:141] libmachine: () Calling .GetMachineName
	I1104 10:38:21.962283   27967 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 10:38:21.964473   27967 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 10:38:21.964519   27967 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 10:38:21.966659   27967 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42965
	I1104 10:38:21.969077   27967 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46211
	I1104 10:38:21.969871   27967 main.go:141] libmachine: () Calling .GetVersion
	I1104 10:38:21.970357   27967 main.go:141] libmachine: Using API Version  1
	I1104 10:38:21.970376   27967 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 10:38:21.970693   27967 main.go:141] libmachine: () Calling .GetMachineName
	I1104 10:38:21.971245   27967 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 10:38:21.971271   27967 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 10:38:21.972719   27967 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33639
	I1104 10:38:21.973047   27967 main.go:141] libmachine: () Calling .GetVersion
	I1104 10:38:21.973526   27967 main.go:141] libmachine: Using API Version  1
	I1104 10:38:21.973552   27967 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 10:38:21.974008   27967 main.go:141] libmachine: () Calling .GetMachineName
	I1104 10:38:21.974532   27967 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 10:38:21.974575   27967 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 10:38:21.981240   27967 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35839
	I1104 10:38:21.981597   27967 main.go:141] libmachine: () Calling .GetVersion
	I1104 10:38:21.981687   27967 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32961
	I1104 10:38:21.982193   27967 main.go:141] libmachine: () Calling .GetVersion
	I1104 10:38:21.982469   27967 main.go:141] libmachine: Using API Version  1
	I1104 10:38:21.982487   27967 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 10:38:21.982876   27967 main.go:141] libmachine: () Calling .GetMachineName
	I1104 10:38:21.982951   27967 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45343
	I1104 10:38:21.983091   27967 main.go:141] libmachine: (addons-746456) Calling .GetState
	I1104 10:38:21.983425   27967 main.go:141] libmachine: Using API Version  1
	I1104 10:38:21.983442   27967 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 10:38:21.983812   27967 main.go:141] libmachine: () Calling .GetMachineName
	I1104 10:38:21.984007   27967 main.go:141] libmachine: (addons-746456) Calling .GetState
	I1104 10:38:21.984941   27967 main.go:141] libmachine: () Calling .GetVersion
	I1104 10:38:21.987390   27967 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-746456"
	I1104 10:38:21.987435   27967 host.go:66] Checking if "addons-746456" exists ...
	I1104 10:38:21.987825   27967 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 10:38:21.987860   27967 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 10:38:21.988070   27967 main.go:141] libmachine: (addons-746456) Calling .DriverName
	I1104 10:38:21.988234   27967 main.go:141] libmachine: Using API Version  1
	I1104 10:38:21.988247   27967 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 10:38:21.988959   27967 main.go:141] libmachine: () Calling .GetMachineName
	I1104 10:38:21.989603   27967 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 10:38:21.989632   27967 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 10:38:21.990273   27967 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1104 10:38:21.991601   27967 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1104 10:38:21.991619   27967 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1104 10:38:21.991640   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHHostname
	I1104 10:38:21.995245   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:38:21.995650   27967 main.go:141] libmachine: (addons-746456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:d7:13", ip: ""} in network mk-addons-746456: {Iface:virbr1 ExpiryTime:2024-11-04 11:37:54 +0000 UTC Type:0 Mac:52:54:00:a0:d7:13 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:addons-746456 Clientid:01:52:54:00:a0:d7:13}
	I1104 10:38:21.995678   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined IP address 192.168.39.4 and MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:38:21.995941   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHPort
	I1104 10:38:21.996131   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHKeyPath
	I1104 10:38:21.996297   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHUsername
	I1104 10:38:21.996450   27967 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/addons-746456/id_rsa Username:docker}
	I1104 10:38:21.997688   27967 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 10:38:21.997743   27967 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 10:38:21.997975   27967 main.go:141] libmachine: () Calling .GetVersion
	I1104 10:38:21.998494   27967 main.go:141] libmachine: Using API Version  1
	I1104 10:38:21.998510   27967 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 10:38:21.999149   27967 main.go:141] libmachine: () Calling .GetMachineName
	I1104 10:38:21.999342   27967 main.go:141] libmachine: (addons-746456) Calling .GetState
	I1104 10:38:22.000259   27967 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38823
	I1104 10:38:22.000927   27967 main.go:141] libmachine: () Calling .GetVersion
	I1104 10:38:22.002019   27967 host.go:66] Checking if "addons-746456" exists ...
	I1104 10:38:22.002427   27967 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 10:38:22.002448   27967 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 10:38:22.002730   27967 main.go:141] libmachine: Using API Version  1
	I1104 10:38:22.002744   27967 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 10:38:22.002775   27967 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40563
	I1104 10:38:22.003255   27967 main.go:141] libmachine: () Calling .GetVersion
	I1104 10:38:22.003274   27967 main.go:141] libmachine: () Calling .GetMachineName
	I1104 10:38:22.003353   27967 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45673
	I1104 10:38:22.003831   27967 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 10:38:22.003870   27967 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 10:38:22.004075   27967 main.go:141] libmachine: Using API Version  1
	I1104 10:38:22.004092   27967 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 10:38:22.004161   27967 main.go:141] libmachine: () Calling .GetVersion
	I1104 10:38:22.004685   27967 main.go:141] libmachine: () Calling .GetMachineName
	I1104 10:38:22.017844   27967 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 10:38:22.017904   27967 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 10:38:22.021397   27967 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37481
	I1104 10:38:22.021570   27967 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41671
	I1104 10:38:22.021669   27967 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39637
	I1104 10:38:22.021875   27967 main.go:141] libmachine: Using API Version  1
	I1104 10:38:22.021888   27967 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 10:38:22.022305   27967 main.go:141] libmachine: () Calling .GetMachineName
	I1104 10:38:22.022519   27967 main.go:141] libmachine: () Calling .GetVersion
	I1104 10:38:22.022613   27967 main.go:141] libmachine: () Calling .GetVersion
	I1104 10:38:22.023060   27967 main.go:141] libmachine: Using API Version  1
	I1104 10:38:22.023082   27967 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 10:38:22.023205   27967 main.go:141] libmachine: () Calling .GetVersion
	I1104 10:38:22.023288   27967 main.go:141] libmachine: Using API Version  1
	I1104 10:38:22.023307   27967 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 10:38:22.023623   27967 main.go:141] libmachine: Using API Version  1
	I1104 10:38:22.023641   27967 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 10:38:22.023707   27967 main.go:141] libmachine: () Calling .GetMachineName
	I1104 10:38:22.023866   27967 main.go:141] libmachine: (addons-746456) Calling .GetState
	I1104 10:38:22.023953   27967 main.go:141] libmachine: () Calling .GetMachineName
	I1104 10:38:22.024009   27967 main.go:141] libmachine: () Calling .GetMachineName
	I1104 10:38:22.024872   27967 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 10:38:22.024907   27967 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 10:38:22.025398   27967 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45433
	I1104 10:38:22.025397   27967 main.go:141] libmachine: (addons-746456) Calling .GetState
	I1104 10:38:22.026440   27967 main.go:141] libmachine: () Calling .GetVersion
	I1104 10:38:22.026584   27967 main.go:141] libmachine: (addons-746456) Calling .GetState
	I1104 10:38:22.026598   27967 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46143
	I1104 10:38:22.026954   27967 main.go:141] libmachine: Using API Version  1
	I1104 10:38:22.026990   27967 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 10:38:22.027055   27967 main.go:141] libmachine: () Calling .GetVersion
	I1104 10:38:22.027338   27967 main.go:141] libmachine: () Calling .GetMachineName
	I1104 10:38:22.027974   27967 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 10:38:22.028021   27967 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 10:38:22.028415   27967 main.go:141] libmachine: Using API Version  1
	I1104 10:38:22.028487   27967 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 10:38:22.028467   27967 main.go:141] libmachine: (addons-746456) Calling .DriverName
	I1104 10:38:22.028708   27967 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43123
	I1104 10:38:22.029102   27967 main.go:141] libmachine: () Calling .GetMachineName
	I1104 10:38:22.029122   27967 main.go:141] libmachine: (addons-746456) Calling .DriverName
	I1104 10:38:22.030045   27967 addons.go:234] Setting addon default-storageclass=true in "addons-746456"
	I1104 10:38:22.030088   27967 host.go:66] Checking if "addons-746456" exists ...
	I1104 10:38:22.030535   27967 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 10:38:22.030578   27967 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 10:38:22.030861   27967 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I1104 10:38:22.030925   27967 main.go:141] libmachine: () Calling .GetVersion
	I1104 10:38:22.030871   27967 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.33.0
	I1104 10:38:22.031415   27967 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 10:38:22.031525   27967 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 10:38:22.032687   27967 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I1104 10:38:22.032732   27967 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I1104 10:38:22.032761   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHHostname
	I1104 10:38:22.032792   27967 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1104 10:38:22.032802   27967 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1104 10:38:22.032827   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHHostname
	I1104 10:38:22.033012   27967 main.go:141] libmachine: Using API Version  1
	I1104 10:38:22.033038   27967 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 10:38:22.033380   27967 main.go:141] libmachine: () Calling .GetMachineName
	I1104 10:38:22.033565   27967 main.go:141] libmachine: (addons-746456) Calling .GetState
	I1104 10:38:22.034851   27967 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42733
	I1104 10:38:22.035344   27967 main.go:141] libmachine: (addons-746456) Calling .DriverName
	I1104 10:38:22.035422   27967 main.go:141] libmachine: () Calling .GetVersion
	I1104 10:38:22.035911   27967 main.go:141] libmachine: Using API Version  1
	I1104 10:38:22.035930   27967 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 10:38:22.036350   27967 main.go:141] libmachine: () Calling .GetMachineName
	I1104 10:38:22.036573   27967 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41203
	I1104 10:38:22.036715   27967 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39041
	I1104 10:38:22.036952   27967 main.go:141] libmachine: () Calling .GetVersion
	I1104 10:38:22.037299   27967 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1104 10:38:22.037471   27967 main.go:141] libmachine: Using API Version  1
	I1104 10:38:22.037522   27967 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 10:38:22.037590   27967 main.go:141] libmachine: (addons-746456) Calling .GetState
	I1104 10:38:22.037950   27967 main.go:141] libmachine: () Calling .GetMachineName
	I1104 10:38:22.038093   27967 main.go:141] libmachine: (addons-746456) Calling .GetState
	I1104 10:38:22.038647   27967 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1104 10:38:22.038666   27967 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1104 10:38:22.038705   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHHostname
	I1104 10:38:22.038821   27967 main.go:141] libmachine: () Calling .GetVersion
	I1104 10:38:22.039293   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:38:22.039457   27967 main.go:141] libmachine: Using API Version  1
	I1104 10:38:22.039476   27967 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 10:38:22.040366   27967 main.go:141] libmachine: (addons-746456) Calling .DriverName
	I1104 10:38:22.040719   27967 main.go:141] libmachine: (addons-746456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:d7:13", ip: ""} in network mk-addons-746456: {Iface:virbr1 ExpiryTime:2024-11-04 11:37:54 +0000 UTC Type:0 Mac:52:54:00:a0:d7:13 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:addons-746456 Clientid:01:52:54:00:a0:d7:13}
	I1104 10:38:22.040752   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined IP address 192.168.39.4 and MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:38:22.040788   27967 main.go:141] libmachine: Making call to close driver server
	I1104 10:38:22.040796   27967 main.go:141] libmachine: (addons-746456) Calling .Close
	I1104 10:38:22.040856   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:38:22.040882   27967 main.go:141] libmachine: () Calling .GetMachineName
	I1104 10:38:22.041001   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHPort
	I1104 10:38:22.041078   27967 main.go:141] libmachine: (addons-746456) Calling .DriverName
	I1104 10:38:22.041125   27967 main.go:141] libmachine: (addons-746456) DBG | Closing plugin on server side
	I1104 10:38:22.041167   27967 main.go:141] libmachine: Successfully made call to close driver server
	I1104 10:38:22.041174   27967 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 10:38:22.041181   27967 main.go:141] libmachine: Making call to close driver server
	I1104 10:38:22.041188   27967 main.go:141] libmachine: (addons-746456) Calling .Close
	I1104 10:38:22.041510   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHKeyPath
	I1104 10:38:22.041766   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHUsername
	I1104 10:38:22.041922   27967 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/addons-746456/id_rsa Username:docker}
	I1104 10:38:22.042490   27967 main.go:141] libmachine: (addons-746456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:d7:13", ip: ""} in network mk-addons-746456: {Iface:virbr1 ExpiryTime:2024-11-04 11:37:54 +0000 UTC Type:0 Mac:52:54:00:a0:d7:13 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:addons-746456 Clientid:01:52:54:00:a0:d7:13}
	I1104 10:38:22.042541   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined IP address 192.168.39.4 and MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:38:22.042741   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHPort
	I1104 10:38:22.042889   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHKeyPath
	I1104 10:38:22.043506   27967 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1104 10:38:22.043705   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHUsername
	I1104 10:38:22.043744   27967 main.go:141] libmachine: (addons-746456) Calling .GetState
	I1104 10:38:22.043891   27967 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/addons-746456/id_rsa Username:docker}
	I1104 10:38:22.044289   27967 main.go:141] libmachine: Successfully made call to close driver server
	I1104 10:38:22.044305   27967 main.go:141] libmachine: Making call to close connection to plugin binary
	W1104 10:38:22.044391   27967 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1104 10:38:22.045865   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:38:22.046332   27967 main.go:141] libmachine: (addons-746456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:d7:13", ip: ""} in network mk-addons-746456: {Iface:virbr1 ExpiryTime:2024-11-04 11:37:54 +0000 UTC Type:0 Mac:52:54:00:a0:d7:13 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:addons-746456 Clientid:01:52:54:00:a0:d7:13}
	I1104 10:38:22.046413   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined IP address 192.168.39.4 and MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:38:22.046549   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHPort
	I1104 10:38:22.046672   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHKeyPath
	I1104 10:38:22.046753   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHUsername
	I1104 10:38:22.046826   27967 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/addons-746456/id_rsa Username:docker}
	I1104 10:38:22.047068   27967 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1104 10:38:22.047433   27967 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44051
	I1104 10:38:22.047759   27967 main.go:141] libmachine: () Calling .GetVersion
	I1104 10:38:22.048749   27967 main.go:141] libmachine: Using API Version  1
	I1104 10:38:22.048764   27967 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 10:38:22.049112   27967 main.go:141] libmachine: () Calling .GetMachineName
	I1104 10:38:22.049246   27967 main.go:141] libmachine: (addons-746456) Calling .GetState
	I1104 10:38:22.049472   27967 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1104 10:38:22.050649   27967 main.go:141] libmachine: (addons-746456) Calling .DriverName
	I1104 10:38:22.051935   27967 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1104 10:38:22.051940   27967 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I1104 10:38:22.053571   27967 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1104 10:38:22.053589   27967 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1104 10:38:22.053608   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHHostname
	I1104 10:38:22.054977   27967 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1104 10:38:22.055286   27967 main.go:141] libmachine: (addons-746456) Calling .DriverName
	I1104 10:38:22.055910   27967 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37161
	I1104 10:38:22.056524   27967 main.go:141] libmachine: () Calling .GetVersion
	I1104 10:38:22.056903   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:38:22.057255   27967 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.25
	I1104 10:38:22.057424   27967 main.go:141] libmachine: Using API Version  1
	I1104 10:38:22.057746   27967 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 10:38:22.057767   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHPort
	I1104 10:38:22.057594   27967 main.go:141] libmachine: (addons-746456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:d7:13", ip: ""} in network mk-addons-746456: {Iface:virbr1 ExpiryTime:2024-11-04 11:37:54 +0000 UTC Type:0 Mac:52:54:00:a0:d7:13 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:addons-746456 Clientid:01:52:54:00:a0:d7:13}
	I1104 10:38:22.057808   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined IP address 192.168.39.4 and MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:38:22.057971   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHKeyPath
	I1104 10:38:22.058169   27967 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1104 10:38:22.058264   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHUsername
	I1104 10:38:22.058306   27967 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39313
	I1104 10:38:22.058858   27967 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/addons-746456/id_rsa Username:docker}
	I1104 10:38:22.059003   27967 main.go:141] libmachine: () Calling .GetMachineName
	I1104 10:38:22.059070   27967 main.go:141] libmachine: () Calling .GetVersion
	I1104 10:38:22.059111   27967 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I1104 10:38:22.059124   27967 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1104 10:38:22.059140   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHHostname
	I1104 10:38:22.059357   27967 main.go:141] libmachine: (addons-746456) Calling .GetState
	I1104 10:38:22.059528   27967 main.go:141] libmachine: Using API Version  1
	I1104 10:38:22.059553   27967 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 10:38:22.060552   27967 main.go:141] libmachine: () Calling .GetMachineName
	I1104 10:38:22.060748   27967 main.go:141] libmachine: (addons-746456) Calling .GetState
	I1104 10:38:22.061007   27967 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1104 10:38:22.062128   27967 main.go:141] libmachine: (addons-746456) Calling .DriverName
	I1104 10:38:22.062741   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:38:22.063050   27967 main.go:141] libmachine: (addons-746456) Calling .DriverName
	I1104 10:38:22.063354   27967 main.go:141] libmachine: (addons-746456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:d7:13", ip: ""} in network mk-addons-746456: {Iface:virbr1 ExpiryTime:2024-11-04 11:37:54 +0000 UTC Type:0 Mac:52:54:00:a0:d7:13 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:addons-746456 Clientid:01:52:54:00:a0:d7:13}
	I1104 10:38:22.063371   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined IP address 192.168.39.4 and MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:38:22.063398   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHPort
	I1104 10:38:22.063458   27967 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.0
	I1104 10:38:22.063459   27967 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1104 10:38:22.063544   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHKeyPath
	I1104 10:38:22.064207   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHUsername
	I1104 10:38:22.064333   27967 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/addons-746456/id_rsa Username:docker}
	I1104 10:38:22.064653   27967 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1104 10:38:22.064878   27967 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1104 10:38:22.064893   27967 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1104 10:38:22.064909   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHHostname
	I1104 10:38:22.065694   27967 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1104 10:38:22.065709   27967 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1104 10:38:22.065726   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHHostname
	I1104 10:38:22.066429   27967 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1104 10:38:22.066445   27967 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1104 10:38:22.066463   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHHostname
	I1104 10:38:22.067793   27967 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41257
	I1104 10:38:22.069332   27967 main.go:141] libmachine: () Calling .GetVersion
	I1104 10:38:22.070672   27967 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36291
	I1104 10:38:22.070910   27967 main.go:141] libmachine: Using API Version  1
	I1104 10:38:22.070928   27967 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 10:38:22.071779   27967 main.go:141] libmachine: () Calling .GetMachineName
	I1104 10:38:22.071850   27967 main.go:141] libmachine: () Calling .GetVersion
	I1104 10:38:22.071922   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:38:22.071949   27967 main.go:141] libmachine: (addons-746456) Calling .GetState
	I1104 10:38:22.072084   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:38:22.072299   27967 main.go:141] libmachine: Using API Version  1
	I1104 10:38:22.072316   27967 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 10:38:22.072737   27967 main.go:141] libmachine: () Calling .GetMachineName
	I1104 10:38:22.072792   27967 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37837
	I1104 10:38:22.072942   27967 main.go:141] libmachine: (addons-746456) Calling .DriverName
	I1104 10:38:22.073941   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHPort
	I1104 10:38:22.073954   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHPort
	I1104 10:38:22.074024   27967 main.go:141] libmachine: (addons-746456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:d7:13", ip: ""} in network mk-addons-746456: {Iface:virbr1 ExpiryTime:2024-11-04 11:37:54 +0000 UTC Type:0 Mac:52:54:00:a0:d7:13 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:addons-746456 Clientid:01:52:54:00:a0:d7:13}
	I1104 10:38:22.074023   27967 main.go:141] libmachine: (addons-746456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:d7:13", ip: ""} in network mk-addons-746456: {Iface:virbr1 ExpiryTime:2024-11-04 11:37:54 +0000 UTC Type:0 Mac:52:54:00:a0:d7:13 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:addons-746456 Clientid:01:52:54:00:a0:d7:13}
	I1104 10:38:22.074038   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined IP address 192.168.39.4 and MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:38:22.074040   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined IP address 192.168.39.4 and MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:38:22.074070   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:38:22.074073   27967 main.go:141] libmachine: () Calling .GetVersion
	I1104 10:38:22.074254   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHKeyPath
	I1104 10:38:22.074304   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHKeyPath
	I1104 10:38:22.074430   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHUsername
	I1104 10:38:22.074480   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHUsername
	I1104 10:38:22.074487   27967 main.go:141] libmachine: (addons-746456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:d7:13", ip: ""} in network mk-addons-746456: {Iface:virbr1 ExpiryTime:2024-11-04 11:37:54 +0000 UTC Type:0 Mac:52:54:00:a0:d7:13 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:addons-746456 Clientid:01:52:54:00:a0:d7:13}
	I1104 10:38:22.074515   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined IP address 192.168.39.4 and MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:38:22.074733   27967 main.go:141] libmachine: (addons-746456) Calling .DriverName
	I1104 10:38:22.074732   27967 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/addons-746456/id_rsa Username:docker}
	I1104 10:38:22.074790   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHPort
	I1104 10:38:22.074830   27967 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/addons-746456/id_rsa Username:docker}
	I1104 10:38:22.075067   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHKeyPath
	I1104 10:38:22.075332   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHUsername
	I1104 10:38:22.075483   27967 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/addons-746456/id_rsa Username:docker}
	I1104 10:38:22.076463   27967 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44819
	I1104 10:38:22.076752   27967 main.go:141] libmachine: Using API Version  1
	I1104 10:38:22.076767   27967 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 10:38:22.076816   27967 main.go:141] libmachine: () Calling .GetVersion
	I1104 10:38:22.077010   27967 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.3
	I1104 10:38:22.077386   27967 main.go:141] libmachine: () Calling .GetMachineName
	I1104 10:38:22.077541   27967 main.go:141] libmachine: Using API Version  1
	I1104 10:38:22.077555   27967 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 10:38:22.077960   27967 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 10:38:22.077992   27967 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 10:38:22.078249   27967 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38115
	I1104 10:38:22.078372   27967 main.go:141] libmachine: () Calling .GetMachineName
	I1104 10:38:22.078565   27967 main.go:141] libmachine: (addons-746456) Calling .GetState
	I1104 10:38:22.079143   27967 main.go:141] libmachine: () Calling .GetVersion
	I1104 10:38:22.079625   27967 main.go:141] libmachine: Using API Version  1
	I1104 10:38:22.079642   27967 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 10:38:22.080017   27967 main.go:141] libmachine: () Calling .GetMachineName
	I1104 10:38:22.080062   27967 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1104 10:38:22.080179   27967 main.go:141] libmachine: (addons-746456) Calling .GetState
	I1104 10:38:22.080337   27967 main.go:141] libmachine: (addons-746456) Calling .DriverName
	I1104 10:38:22.080996   27967 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37775
	I1104 10:38:22.081438   27967 main.go:141] libmachine: () Calling .GetVersion
	I1104 10:38:22.081848   27967 main.go:141] libmachine: (addons-746456) Calling .DriverName
	I1104 10:38:22.082046   27967 main.go:141] libmachine: Using API Version  1
	I1104 10:38:22.082064   27967 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 10:38:22.082379   27967 main.go:141] libmachine: () Calling .GetMachineName
	I1104 10:38:22.082772   27967 main.go:141] libmachine: (addons-746456) Calling .GetState
	I1104 10:38:22.083015   27967 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1104 10:38:22.083801   27967 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.8
	I1104 10:38:22.083833   27967 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1104 10:38:22.084129   27967 main.go:141] libmachine: (addons-746456) Calling .DriverName
	I1104 10:38:22.084667   27967 addons.go:431] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1104 10:38:22.084684   27967 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1104 10:38:22.084698   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHHostname
	I1104 10:38:22.086254   27967 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1104 10:38:22.086275   27967 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1104 10:38:22.086291   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHHostname
	I1104 10:38:22.087060   27967 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1104 10:38:22.087681   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:38:22.087931   27967 out.go:177]   - Using image docker.io/registry:2.8.3
	I1104 10:38:22.088164   27967 main.go:141] libmachine: (addons-746456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:d7:13", ip: ""} in network mk-addons-746456: {Iface:virbr1 ExpiryTime:2024-11-04 11:37:54 +0000 UTC Type:0 Mac:52:54:00:a0:d7:13 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:addons-746456 Clientid:01:52:54:00:a0:d7:13}
	I1104 10:38:22.088321   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined IP address 192.168.39.4 and MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:38:22.088385   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHPort
	I1104 10:38:22.088534   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHKeyPath
	I1104 10:38:22.088657   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHUsername
	I1104 10:38:22.088781   27967 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/addons-746456/id_rsa Username:docker}
	I1104 10:38:22.089600   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:38:22.089864   27967 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I1104 10:38:22.089880   27967 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1104 10:38:22.089894   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHHostname
	I1104 10:38:22.090127   27967 main.go:141] libmachine: (addons-746456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:d7:13", ip: ""} in network mk-addons-746456: {Iface:virbr1 ExpiryTime:2024-11-04 11:37:54 +0000 UTC Type:0 Mac:52:54:00:a0:d7:13 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:addons-746456 Clientid:01:52:54:00:a0:d7:13}
	I1104 10:38:22.090141   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined IP address 192.168.39.4 and MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:38:22.090334   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHPort
	I1104 10:38:22.090627   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHKeyPath
	I1104 10:38:22.090781   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHUsername
	I1104 10:38:22.090917   27967 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/addons-746456/id_rsa Username:docker}
	I1104 10:38:22.091172   27967 out.go:177]   - Using image docker.io/busybox:stable
	I1104 10:38:22.092547   27967 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1104 10:38:22.092566   27967 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1104 10:38:22.092583   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHHostname
	I1104 10:38:22.092900   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:38:22.093320   27967 main.go:141] libmachine: (addons-746456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:d7:13", ip: ""} in network mk-addons-746456: {Iface:virbr1 ExpiryTime:2024-11-04 11:37:54 +0000 UTC Type:0 Mac:52:54:00:a0:d7:13 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:addons-746456 Clientid:01:52:54:00:a0:d7:13}
	I1104 10:38:22.093344   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined IP address 192.168.39.4 and MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:38:22.093489   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHPort
	I1104 10:38:22.093667   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHKeyPath
	I1104 10:38:22.093762   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHUsername
	I1104 10:38:22.093860   27967 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/addons-746456/id_rsa Username:docker}
	I1104 10:38:22.095729   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:38:22.096062   27967 main.go:141] libmachine: (addons-746456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:d7:13", ip: ""} in network mk-addons-746456: {Iface:virbr1 ExpiryTime:2024-11-04 11:37:54 +0000 UTC Type:0 Mac:52:54:00:a0:d7:13 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:addons-746456 Clientid:01:52:54:00:a0:d7:13}
	I1104 10:38:22.096082   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined IP address 192.168.39.4 and MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:38:22.096220   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHPort
	I1104 10:38:22.096326   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHKeyPath
	I1104 10:38:22.096484   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHUsername
	I1104 10:38:22.096590   27967 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/addons-746456/id_rsa Username:docker}
	I1104 10:38:22.099128   27967 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44889
	I1104 10:38:22.099468   27967 main.go:141] libmachine: () Calling .GetVersion
	I1104 10:38:22.099934   27967 main.go:141] libmachine: Using API Version  1
	I1104 10:38:22.099950   27967 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 10:38:22.100284   27967 main.go:141] libmachine: () Calling .GetMachineName
	I1104 10:38:22.100466   27967 main.go:141] libmachine: (addons-746456) Calling .GetState
	I1104 10:38:22.101880   27967 main.go:141] libmachine: (addons-746456) Calling .DriverName
	I1104 10:38:22.102115   27967 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1104 10:38:22.102129   27967 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1104 10:38:22.102144   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHHostname
	I1104 10:38:22.105453   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:38:22.105915   27967 main.go:141] libmachine: (addons-746456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:d7:13", ip: ""} in network mk-addons-746456: {Iface:virbr1 ExpiryTime:2024-11-04 11:37:54 +0000 UTC Type:0 Mac:52:54:00:a0:d7:13 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:addons-746456 Clientid:01:52:54:00:a0:d7:13}
	I1104 10:38:22.105974   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined IP address 192.168.39.4 and MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:38:22.106135   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHPort
	I1104 10:38:22.106268   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHKeyPath
	I1104 10:38:22.106434   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHUsername
	I1104 10:38:22.106566   27967 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/addons-746456/id_rsa Username:docker}
	I1104 10:38:22.388670   27967 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1104 10:38:22.388866   27967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1104 10:38:22.398590   27967 addons.go:431] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1104 10:38:22.398611   27967 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14451 bytes)
	I1104 10:38:22.453792   27967 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1104 10:38:22.453817   27967 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1104 10:38:22.455723   27967 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1104 10:38:22.455740   27967 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1104 10:38:22.468989   27967 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1104 10:38:22.469014   27967 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1104 10:38:22.517722   27967 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1104 10:38:22.527473   27967 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1104 10:38:22.547606   27967 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1104 10:38:22.550242   27967 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1104 10:38:22.554589   27967 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1104 10:38:22.573469   27967 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1104 10:38:22.582104   27967 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1104 10:38:22.595927   27967 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1104 10:38:22.595948   27967 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1104 10:38:22.627182   27967 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1104 10:38:22.637254   27967 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I1104 10:38:22.637277   27967 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1104 10:38:22.657320   27967 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1104 10:38:22.657346   27967 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1104 10:38:22.673651   27967 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1104 10:38:22.673685   27967 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1104 10:38:22.676060   27967 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1104 10:38:22.676081   27967 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1104 10:38:22.707920   27967 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1104 10:38:22.789617   27967 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1104 10:38:22.789645   27967 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1104 10:38:22.791851   27967 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1104 10:38:22.791873   27967 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1104 10:38:22.838535   27967 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1104 10:38:22.838561   27967 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1104 10:38:22.914776   27967 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1104 10:38:22.914806   27967 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1104 10:38:22.915813   27967 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1104 10:38:22.915834   27967 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1104 10:38:22.943257   27967 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1104 10:38:22.943284   27967 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1104 10:38:22.984815   27967 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1104 10:38:22.984836   27967 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1104 10:38:23.042920   27967 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1104 10:38:23.043895   27967 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1104 10:38:23.043916   27967 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1104 10:38:23.104575   27967 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1104 10:38:23.170589   27967 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1104 10:38:23.170618   27967 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1104 10:38:23.185105   27967 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1104 10:38:23.219186   27967 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1104 10:38:23.219225   27967 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1104 10:38:23.336819   27967 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1104 10:38:23.336849   27967 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1104 10:38:23.430474   27967 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1104 10:38:23.578649   27967 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1104 10:38:23.578671   27967 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1104 10:38:23.942247   27967 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1104 10:38:23.942274   27967 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1104 10:38:24.302791   27967 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1104 10:38:24.302815   27967 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1104 10:38:24.575251   27967 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1104 10:38:24.575278   27967 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1104 10:38:24.754149   27967 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.365442902s)
	I1104 10:38:24.754213   27967 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.365319916s)
	I1104 10:38:24.754245   27967 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.236492941s)
	I1104 10:38:24.754239   27967 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1104 10:38:24.754281   27967 main.go:141] libmachine: Making call to close driver server
	I1104 10:38:24.754293   27967 main.go:141] libmachine: (addons-746456) Calling .Close
	I1104 10:38:24.754589   27967 main.go:141] libmachine: Successfully made call to close driver server
	I1104 10:38:24.754632   27967 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 10:38:24.754653   27967 main.go:141] libmachine: Making call to close driver server
	I1104 10:38:24.754665   27967 main.go:141] libmachine: (addons-746456) Calling .Close
	I1104 10:38:24.755197   27967 node_ready.go:35] waiting up to 6m0s for node "addons-746456" to be "Ready" ...
	I1104 10:38:24.755400   27967 main.go:141] libmachine: (addons-746456) DBG | Closing plugin on server side
	I1104 10:38:24.755413   27967 main.go:141] libmachine: Successfully made call to close driver server
	I1104 10:38:24.755438   27967 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 10:38:24.760211   27967 node_ready.go:49] node "addons-746456" has status "Ready":"True"
	I1104 10:38:24.760235   27967 node_ready.go:38] duration metric: took 5.018397ms for node "addons-746456" to be "Ready" ...
	I1104 10:38:24.760245   27967 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1104 10:38:24.770168   27967 pod_ready.go:79] waiting up to 6m0s for pod "amd-gpu-device-plugin-g59mv" in "kube-system" namespace to be "Ready" ...
	I1104 10:38:24.827056   27967 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1104 10:38:24.827086   27967 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1104 10:38:25.102897   27967 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1104 10:38:25.259969   27967 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-746456" context rescaled to 1 replicas
	I1104 10:38:25.471016   27967 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.943503689s)
	I1104 10:38:25.471348   27967 main.go:141] libmachine: Making call to close driver server
	I1104 10:38:25.471379   27967 main.go:141] libmachine: (addons-746456) Calling .Close
	I1104 10:38:25.471680   27967 main.go:141] libmachine: Successfully made call to close driver server
	I1104 10:38:25.471704   27967 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 10:38:25.471714   27967 main.go:141] libmachine: Making call to close driver server
	I1104 10:38:25.471723   27967 main.go:141] libmachine: (addons-746456) Calling .Close
	I1104 10:38:25.471938   27967 main.go:141] libmachine: Successfully made call to close driver server
	I1104 10:38:25.471954   27967 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 10:38:25.574460   27967 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (3.024189029s)
	I1104 10:38:25.574495   27967 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.019886591s)
	I1104 10:38:25.574510   27967 main.go:141] libmachine: Making call to close driver server
	I1104 10:38:25.574523   27967 main.go:141] libmachine: (addons-746456) Calling .Close
	I1104 10:38:25.574535   27967 main.go:141] libmachine: Making call to close driver server
	I1104 10:38:25.574547   27967 main.go:141] libmachine: (addons-746456) Calling .Close
	I1104 10:38:25.574456   27967 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.026792971s)
	I1104 10:38:25.574606   27967 main.go:141] libmachine: Making call to close driver server
	I1104 10:38:25.574622   27967 main.go:141] libmachine: (addons-746456) Calling .Close
	I1104 10:38:25.574926   27967 main.go:141] libmachine: (addons-746456) DBG | Closing plugin on server side
	I1104 10:38:25.574951   27967 main.go:141] libmachine: (addons-746456) DBG | Closing plugin on server side
	I1104 10:38:25.574951   27967 main.go:141] libmachine: (addons-746456) DBG | Closing plugin on server side
	I1104 10:38:25.574962   27967 main.go:141] libmachine: Successfully made call to close driver server
	I1104 10:38:25.574973   27967 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 10:38:25.574974   27967 main.go:141] libmachine: Successfully made call to close driver server
	I1104 10:38:25.574981   27967 main.go:141] libmachine: Making call to close driver server
	I1104 10:38:25.574986   27967 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 10:38:25.574990   27967 main.go:141] libmachine: (addons-746456) Calling .Close
	I1104 10:38:25.574995   27967 main.go:141] libmachine: Making call to close driver server
	I1104 10:38:25.575013   27967 main.go:141] libmachine: (addons-746456) Calling .Close
	I1104 10:38:25.575059   27967 main.go:141] libmachine: Successfully made call to close driver server
	I1104 10:38:25.575088   27967 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 10:38:25.575119   27967 main.go:141] libmachine: Making call to close driver server
	I1104 10:38:25.575126   27967 main.go:141] libmachine: (addons-746456) Calling .Close
	I1104 10:38:25.575184   27967 main.go:141] libmachine: (addons-746456) DBG | Closing plugin on server side
	I1104 10:38:25.575202   27967 main.go:141] libmachine: Successfully made call to close driver server
	I1104 10:38:25.575230   27967 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 10:38:25.575405   27967 main.go:141] libmachine: Successfully made call to close driver server
	I1104 10:38:25.575417   27967 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 10:38:25.576742   27967 main.go:141] libmachine: Successfully made call to close driver server
	I1104 10:38:25.576756   27967 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 10:38:25.673762   27967 main.go:141] libmachine: Making call to close driver server
	I1104 10:38:25.673790   27967 main.go:141] libmachine: (addons-746456) Calling .Close
	I1104 10:38:25.674056   27967 main.go:141] libmachine: Successfully made call to close driver server
	I1104 10:38:25.674073   27967 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 10:38:26.963776   27967 pod_ready.go:103] pod "amd-gpu-device-plugin-g59mv" in "kube-system" namespace has status "Ready":"False"
	I1104 10:38:27.280067   27967 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (4.706559624s)
	I1104 10:38:27.280128   27967 main.go:141] libmachine: Making call to close driver server
	I1104 10:38:27.280141   27967 main.go:141] libmachine: (addons-746456) Calling .Close
	I1104 10:38:27.280384   27967 main.go:141] libmachine: Successfully made call to close driver server
	I1104 10:38:27.280427   27967 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 10:38:27.280439   27967 main.go:141] libmachine: Making call to close driver server
	I1104 10:38:27.280447   27967 main.go:141] libmachine: (addons-746456) Calling .Close
	I1104 10:38:27.280408   27967 main.go:141] libmachine: (addons-746456) DBG | Closing plugin on server side
	I1104 10:38:27.280640   27967 main.go:141] libmachine: Successfully made call to close driver server
	I1104 10:38:27.280660   27967 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 10:38:27.280662   27967 main.go:141] libmachine: (addons-746456) DBG | Closing plugin on server side
	I1104 10:38:27.404540   27967 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.82238513s)
	I1104 10:38:27.404599   27967 main.go:141] libmachine: Making call to close driver server
	I1104 10:38:27.404611   27967 main.go:141] libmachine: (addons-746456) Calling .Close
	I1104 10:38:27.404662   27967 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.777433817s)
	I1104 10:38:27.404726   27967 main.go:141] libmachine: Making call to close driver server
	I1104 10:38:27.404740   27967 main.go:141] libmachine: (addons-746456) Calling .Close
	I1104 10:38:27.404941   27967 main.go:141] libmachine: Successfully made call to close driver server
	I1104 10:38:27.404956   27967 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 10:38:27.404964   27967 main.go:141] libmachine: Making call to close driver server
	I1104 10:38:27.404971   27967 main.go:141] libmachine: (addons-746456) Calling .Close
	I1104 10:38:27.405048   27967 main.go:141] libmachine: Successfully made call to close driver server
	I1104 10:38:27.405058   27967 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 10:38:27.405071   27967 main.go:141] libmachine: Making call to close driver server
	I1104 10:38:27.405084   27967 main.go:141] libmachine: (addons-746456) Calling .Close
	I1104 10:38:27.405143   27967 main.go:141] libmachine: (addons-746456) DBG | Closing plugin on server side
	I1104 10:38:27.405180   27967 main.go:141] libmachine: Successfully made call to close driver server
	I1104 10:38:27.405191   27967 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 10:38:27.405270   27967 main.go:141] libmachine: Successfully made call to close driver server
	I1104 10:38:27.405278   27967 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 10:38:27.553684   27967 main.go:141] libmachine: Making call to close driver server
	I1104 10:38:27.553714   27967 main.go:141] libmachine: (addons-746456) Calling .Close
	I1104 10:38:27.553992   27967 main.go:141] libmachine: Successfully made call to close driver server
	I1104 10:38:27.554027   27967 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 10:38:27.554012   27967 main.go:141] libmachine: (addons-746456) DBG | Closing plugin on server side
	I1104 10:38:29.127059   27967 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1104 10:38:29.127100   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHHostname
	I1104 10:38:29.130388   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:38:29.130849   27967 main.go:141] libmachine: (addons-746456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:d7:13", ip: ""} in network mk-addons-746456: {Iface:virbr1 ExpiryTime:2024-11-04 11:37:54 +0000 UTC Type:0 Mac:52:54:00:a0:d7:13 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:addons-746456 Clientid:01:52:54:00:a0:d7:13}
	I1104 10:38:29.130874   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined IP address 192.168.39.4 and MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:38:29.131102   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHPort
	I1104 10:38:29.131375   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHKeyPath
	I1104 10:38:29.131556   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHUsername
	I1104 10:38:29.131711   27967 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/addons-746456/id_rsa Username:docker}
	I1104 10:38:29.348654   27967 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1104 10:38:29.479550   27967 addons.go:234] Setting addon gcp-auth=true in "addons-746456"
	I1104 10:38:29.479603   27967 host.go:66] Checking if "addons-746456" exists ...
	I1104 10:38:29.480016   27967 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 10:38:29.480050   27967 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 10:38:29.493600   27967 pod_ready.go:103] pod "amd-gpu-device-plugin-g59mv" in "kube-system" namespace has status "Ready":"False"
	I1104 10:38:29.495833   27967 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39825
	I1104 10:38:29.496335   27967 main.go:141] libmachine: () Calling .GetVersion
	I1104 10:38:29.496775   27967 main.go:141] libmachine: Using API Version  1
	I1104 10:38:29.496794   27967 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 10:38:29.497217   27967 main.go:141] libmachine: () Calling .GetMachineName
	I1104 10:38:29.497816   27967 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 10:38:29.497857   27967 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 10:38:29.512793   27967 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41689
	I1104 10:38:29.513278   27967 main.go:141] libmachine: () Calling .GetVersion
	I1104 10:38:29.513759   27967 main.go:141] libmachine: Using API Version  1
	I1104 10:38:29.513783   27967 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 10:38:29.514084   27967 main.go:141] libmachine: () Calling .GetMachineName
	I1104 10:38:29.514255   27967 main.go:141] libmachine: (addons-746456) Calling .GetState
	I1104 10:38:29.515840   27967 main.go:141] libmachine: (addons-746456) Calling .DriverName
	I1104 10:38:29.516044   27967 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1104 10:38:29.516063   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHHostname
	I1104 10:38:29.518734   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:38:29.519100   27967 main.go:141] libmachine: (addons-746456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:d7:13", ip: ""} in network mk-addons-746456: {Iface:virbr1 ExpiryTime:2024-11-04 11:37:54 +0000 UTC Type:0 Mac:52:54:00:a0:d7:13 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:addons-746456 Clientid:01:52:54:00:a0:d7:13}
	I1104 10:38:29.519125   27967 main.go:141] libmachine: (addons-746456) DBG | domain addons-746456 has defined IP address 192.168.39.4 and MAC address 52:54:00:a0:d7:13 in network mk-addons-746456
	I1104 10:38:29.519268   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHPort
	I1104 10:38:29.519490   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHKeyPath
	I1104 10:38:29.519617   27967 main.go:141] libmachine: (addons-746456) Calling .GetSSHUsername
	I1104 10:38:29.519777   27967 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/addons-746456/id_rsa Username:docker}
	I1104 10:38:30.304546   27967 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.596586448s)
	I1104 10:38:30.304594   27967 main.go:141] libmachine: Making call to close driver server
	I1104 10:38:30.304605   27967 main.go:141] libmachine: (addons-746456) Calling .Close
	I1104 10:38:30.304608   27967 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.261651532s)
	I1104 10:38:30.304645   27967 main.go:141] libmachine: Making call to close driver server
	I1104 10:38:30.304667   27967 main.go:141] libmachine: (addons-746456) Calling .Close
	I1104 10:38:30.304705   27967 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.119546607s)
	I1104 10:38:30.304654   27967 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.200043561s)
	I1104 10:38:30.304737   27967 main.go:141] libmachine: Making call to close driver server
	I1104 10:38:30.304737   27967 main.go:141] libmachine: Making call to close driver server
	I1104 10:38:30.304745   27967 main.go:141] libmachine: (addons-746456) Calling .Close
	I1104 10:38:30.304748   27967 main.go:141] libmachine: (addons-746456) Calling .Close
	I1104 10:38:30.304858   27967 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.874343579s)
	W1104 10:38:30.304893   27967 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1104 10:38:30.304913   27967 retry.go:31] will retry after 295.173531ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1104 10:38:30.305050   27967 main.go:141] libmachine: (addons-746456) DBG | Closing plugin on server side
	I1104 10:38:30.305056   27967 main.go:141] libmachine: (addons-746456) DBG | Closing plugin on server side
	I1104 10:38:30.305058   27967 main.go:141] libmachine: Successfully made call to close driver server
	I1104 10:38:30.305070   27967 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 10:38:30.305072   27967 main.go:141] libmachine: Successfully made call to close driver server
	I1104 10:38:30.305079   27967 main.go:141] libmachine: Making call to close driver server
	I1104 10:38:30.305083   27967 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 10:38:30.305085   27967 main.go:141] libmachine: Successfully made call to close driver server
	I1104 10:38:30.305086   27967 main.go:141] libmachine: (addons-746456) Calling .Close
	I1104 10:38:30.305095   27967 main.go:141] libmachine: Successfully made call to close driver server
	I1104 10:38:30.305103   27967 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 10:38:30.305110   27967 main.go:141] libmachine: (addons-746456) DBG | Closing plugin on server side
	I1104 10:38:30.305110   27967 main.go:141] libmachine: Making call to close driver server
	I1104 10:38:30.305125   27967 main.go:141] libmachine: (addons-746456) Calling .Close
	I1104 10:38:30.305125   27967 main.go:141] libmachine: Making call to close driver server
	I1104 10:38:30.305135   27967 main.go:141] libmachine: (addons-746456) Calling .Close
	I1104 10:38:30.305168   27967 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 10:38:30.305180   27967 main.go:141] libmachine: Making call to close driver server
	I1104 10:38:30.305186   27967 main.go:141] libmachine: (addons-746456) Calling .Close
	I1104 10:38:30.305199   27967 main.go:141] libmachine: (addons-746456) DBG | Closing plugin on server side
	I1104 10:38:30.305463   27967 main.go:141] libmachine: (addons-746456) DBG | Closing plugin on server side
	I1104 10:38:30.305479   27967 main.go:141] libmachine: Successfully made call to close driver server
	I1104 10:38:30.305488   27967 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 10:38:30.305493   27967 main.go:141] libmachine: (addons-746456) DBG | Closing plugin on server side
	I1104 10:38:30.305497   27967 addons.go:475] Verifying addon registry=true in "addons-746456"
	I1104 10:38:30.305515   27967 main.go:141] libmachine: Successfully made call to close driver server
	I1104 10:38:30.305520   27967 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 10:38:30.305526   27967 addons.go:475] Verifying addon metrics-server=true in "addons-746456"
	I1104 10:38:30.305592   27967 main.go:141] libmachine: Successfully made call to close driver server
	I1104 10:38:30.305601   27967 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 10:38:30.305481   27967 main.go:141] libmachine: (addons-746456) DBG | Closing plugin on server side
	I1104 10:38:30.305560   27967 main.go:141] libmachine: Successfully made call to close driver server
	I1104 10:38:30.307439   27967 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 10:38:30.307450   27967 addons.go:475] Verifying addon ingress=true in "addons-746456"
	I1104 10:38:30.305574   27967 main.go:141] libmachine: (addons-746456) DBG | Closing plugin on server side
	I1104 10:38:30.307869   27967 out.go:177] * Verifying registry addon...
	I1104 10:38:30.307867   27967 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-746456 service yakd-dashboard -n yakd-dashboard
	
	I1104 10:38:30.309006   27967 out.go:177] * Verifying ingress addon...
	I1104 10:38:30.310182   27967 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1104 10:38:30.311459   27967 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1104 10:38:30.345538   27967 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1104 10:38:30.345560   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:38:30.346157   27967 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1104 10:38:30.346181   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:38:30.600262   27967 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1104 10:38:30.826962   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:38:30.827118   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:38:30.851348   27967 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.748405797s)
	I1104 10:38:30.851395   27967 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.335329105s)
	I1104 10:38:30.851402   27967 main.go:141] libmachine: Making call to close driver server
	I1104 10:38:30.851416   27967 main.go:141] libmachine: (addons-746456) Calling .Close
	I1104 10:38:30.851724   27967 main.go:141] libmachine: (addons-746456) DBG | Closing plugin on server side
	I1104 10:38:30.851758   27967 main.go:141] libmachine: Successfully made call to close driver server
	I1104 10:38:30.851767   27967 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 10:38:30.851775   27967 main.go:141] libmachine: Making call to close driver server
	I1104 10:38:30.851785   27967 main.go:141] libmachine: (addons-746456) Calling .Close
	I1104 10:38:30.852015   27967 main.go:141] libmachine: Successfully made call to close driver server
	I1104 10:38:30.852029   27967 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 10:38:30.852038   27967 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-746456"
	I1104 10:38:30.852847   27967 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1104 10:38:30.853728   27967 out.go:177] * Verifying csi-hostpath-driver addon...
	I1104 10:38:30.855649   27967 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1104 10:38:30.856708   27967 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1104 10:38:30.857056   27967 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1104 10:38:30.857071   27967 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1104 10:38:30.886780   27967 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1104 10:38:30.886800   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:38:30.963086   27967 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1104 10:38:30.963114   27967 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1104 10:38:31.039711   27967 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1104 10:38:31.039740   27967 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1104 10:38:31.086834   27967 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1104 10:38:31.315487   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:38:31.316260   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:38:31.361772   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:38:31.776007   27967 pod_ready.go:103] pod "amd-gpu-device-plugin-g59mv" in "kube-system" namespace has status "Ready":"False"
	I1104 10:38:31.815114   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:38:31.815572   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:38:31.861450   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:38:32.349204   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:38:32.350157   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:38:32.418902   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:38:32.632721   27967 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.032388557s)
	I1104 10:38:32.632783   27967 main.go:141] libmachine: Making call to close driver server
	I1104 10:38:32.632793   27967 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.545924136s)
	I1104 10:38:32.632836   27967 main.go:141] libmachine: Making call to close driver server
	I1104 10:38:32.632802   27967 main.go:141] libmachine: (addons-746456) Calling .Close
	I1104 10:38:32.632856   27967 main.go:141] libmachine: (addons-746456) Calling .Close
	I1104 10:38:32.633189   27967 main.go:141] libmachine: Successfully made call to close driver server
	I1104 10:38:32.633207   27967 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 10:38:32.633238   27967 main.go:141] libmachine: Making call to close driver server
	I1104 10:38:32.633249   27967 main.go:141] libmachine: (addons-746456) Calling .Close
	I1104 10:38:32.633290   27967 main.go:141] libmachine: (addons-746456) DBG | Closing plugin on server side
	I1104 10:38:32.633327   27967 main.go:141] libmachine: Successfully made call to close driver server
	I1104 10:38:32.633344   27967 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 10:38:32.633359   27967 main.go:141] libmachine: Making call to close driver server
	I1104 10:38:32.633368   27967 main.go:141] libmachine: (addons-746456) Calling .Close
	I1104 10:38:32.634736   27967 main.go:141] libmachine: (addons-746456) DBG | Closing plugin on server side
	I1104 10:38:32.634744   27967 main.go:141] libmachine: (addons-746456) DBG | Closing plugin on server side
	I1104 10:38:32.634762   27967 main.go:141] libmachine: Successfully made call to close driver server
	I1104 10:38:32.634774   27967 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 10:38:32.634742   27967 main.go:141] libmachine: Successfully made call to close driver server
	I1104 10:38:32.634845   27967 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 10:38:32.637764   27967 addons.go:475] Verifying addon gcp-auth=true in "addons-746456"
	I1104 10:38:32.640411   27967 out.go:177] * Verifying gcp-auth addon...
	I1104 10:38:32.642429   27967 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1104 10:38:32.645805   27967 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1104 10:38:32.645821   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:38:32.815767   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:38:32.816238   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:38:32.861468   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:38:33.146052   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:38:33.318523   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:38:33.318761   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:38:33.364934   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:38:33.646814   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:38:33.814842   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:38:33.818462   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:38:33.861363   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:38:34.145838   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:38:34.276011   27967 pod_ready.go:103] pod "amd-gpu-device-plugin-g59mv" in "kube-system" namespace has status "Ready":"False"
	I1104 10:38:34.314925   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:38:34.316042   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:38:34.361848   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:38:34.645573   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:38:34.814946   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:38:34.815267   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:38:34.861791   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:38:35.145309   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:38:35.315014   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:38:35.315759   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:38:35.361023   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:38:35.645836   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:38:35.813871   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:38:35.816048   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:38:35.861154   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:38:36.145678   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:38:36.315082   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:38:36.315549   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:38:36.360965   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:38:36.646307   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:38:37.161190   27967 pod_ready.go:103] pod "amd-gpu-device-plugin-g59mv" in "kube-system" namespace has status "Ready":"False"
	I1104 10:38:37.161260   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:38:37.162075   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:38:37.162199   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:38:37.162740   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:38:37.323978   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:38:37.324129   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:38:37.361435   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:38:37.647008   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:38:37.815445   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:38:37.815739   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:38:37.861015   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:38:38.147128   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:38:38.314167   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:38:38.315929   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:38:38.361294   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:38:38.645639   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:38:38.815335   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:38:38.815757   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:38:38.861587   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:38:39.145957   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:38:39.278864   27967 pod_ready.go:103] pod "amd-gpu-device-plugin-g59mv" in "kube-system" namespace has status "Ready":"False"
	I1104 10:38:39.316221   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:38:39.316675   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:38:39.361258   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:38:39.645753   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:38:39.814544   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:38:39.816608   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:38:39.861834   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:38:40.146481   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:38:40.405421   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:38:40.405978   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:38:40.407279   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:38:40.646103   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:38:40.814047   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:38:40.816540   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:38:40.861150   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:38:41.145915   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:38:41.315530   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:38:41.316265   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:38:41.361510   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:38:41.645706   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:38:42.066102   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:38:42.066491   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:38:42.067551   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:38:42.071354   27967 pod_ready.go:103] pod "amd-gpu-device-plugin-g59mv" in "kube-system" namespace has status "Ready":"False"
	I1104 10:38:42.145443   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:38:42.314734   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:38:42.315348   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:38:42.360265   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:38:42.651277   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:38:42.814782   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:38:42.815860   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:38:42.861142   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:38:43.146740   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:38:43.315647   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:38:43.316007   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:38:43.362006   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:38:43.646598   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:38:43.813334   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:38:43.815354   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:38:43.861355   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:38:44.145698   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:38:44.276426   27967 pod_ready.go:103] pod "amd-gpu-device-plugin-g59mv" in "kube-system" namespace has status "Ready":"False"
	I1104 10:38:44.313477   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:38:44.315901   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:38:44.361245   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:38:44.646663   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:38:44.813903   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:38:44.816644   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:38:44.861510   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:38:45.146198   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:38:45.315376   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:38:45.315959   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:38:45.361450   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:38:45.645910   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:38:45.813641   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:38:45.815235   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:38:45.862849   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:38:46.146404   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:38:46.275375   27967 pod_ready.go:93] pod "amd-gpu-device-plugin-g59mv" in "kube-system" namespace has status "Ready":"True"
	I1104 10:38:46.275410   27967 pod_ready.go:82] duration metric: took 21.505206172s for pod "amd-gpu-device-plugin-g59mv" in "kube-system" namespace to be "Ready" ...
	I1104 10:38:46.275421   27967 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-gx67b" in "kube-system" namespace to be "Ready" ...
	I1104 10:38:46.277374   27967 pod_ready.go:98] error getting pod "coredns-7c65d6cfc9-gx67b" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-gx67b" not found
	I1104 10:38:46.277391   27967 pod_ready.go:82] duration metric: took 1.964714ms for pod "coredns-7c65d6cfc9-gx67b" in "kube-system" namespace to be "Ready" ...
	E1104 10:38:46.277400   27967 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-7c65d6cfc9-gx67b" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-gx67b" not found
	I1104 10:38:46.277406   27967 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-hwwcg" in "kube-system" namespace to be "Ready" ...
	I1104 10:38:46.281385   27967 pod_ready.go:93] pod "coredns-7c65d6cfc9-hwwcg" in "kube-system" namespace has status "Ready":"True"
	I1104 10:38:46.281399   27967 pod_ready.go:82] duration metric: took 3.987491ms for pod "coredns-7c65d6cfc9-hwwcg" in "kube-system" namespace to be "Ready" ...
	I1104 10:38:46.281413   27967 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-746456" in "kube-system" namespace to be "Ready" ...
	I1104 10:38:46.285087   27967 pod_ready.go:93] pod "etcd-addons-746456" in "kube-system" namespace has status "Ready":"True"
	I1104 10:38:46.285104   27967 pod_ready.go:82] duration metric: took 3.684962ms for pod "etcd-addons-746456" in "kube-system" namespace to be "Ready" ...
	I1104 10:38:46.285111   27967 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-746456" in "kube-system" namespace to be "Ready" ...
	I1104 10:38:46.289184   27967 pod_ready.go:93] pod "kube-apiserver-addons-746456" in "kube-system" namespace has status "Ready":"True"
	I1104 10:38:46.289266   27967 pod_ready.go:82] duration metric: took 4.146975ms for pod "kube-apiserver-addons-746456" in "kube-system" namespace to be "Ready" ...
	I1104 10:38:46.289287   27967 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-746456" in "kube-system" namespace to be "Ready" ...
	I1104 10:38:46.313655   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:38:46.314697   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:38:46.360986   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:38:46.479923   27967 pod_ready.go:93] pod "kube-controller-manager-addons-746456" in "kube-system" namespace has status "Ready":"True"
	I1104 10:38:46.479948   27967 pod_ready.go:82] duration metric: took 190.642695ms for pod "kube-controller-manager-addons-746456" in "kube-system" namespace to be "Ready" ...
	I1104 10:38:46.479961   27967 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-s6v2l" in "kube-system" namespace to be "Ready" ...
	I1104 10:38:46.645873   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:38:46.816131   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:38:46.816735   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:38:46.861334   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:38:46.872878   27967 pod_ready.go:93] pod "kube-proxy-s6v2l" in "kube-system" namespace has status "Ready":"True"
	I1104 10:38:46.872909   27967 pod_ready.go:82] duration metric: took 392.939415ms for pod "kube-proxy-s6v2l" in "kube-system" namespace to be "Ready" ...
	I1104 10:38:46.872922   27967 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-746456" in "kube-system" namespace to be "Ready" ...
	I1104 10:38:47.146711   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:38:47.273100   27967 pod_ready.go:93] pod "kube-scheduler-addons-746456" in "kube-system" namespace has status "Ready":"True"
	I1104 10:38:47.273126   27967 pod_ready.go:82] duration metric: took 400.195745ms for pod "kube-scheduler-addons-746456" in "kube-system" namespace to be "Ready" ...
	I1104 10:38:47.273139   27967 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-646xz" in "kube-system" namespace to be "Ready" ...
	I1104 10:38:47.314050   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:38:47.315594   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:38:47.361170   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:38:47.645575   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:38:47.673800   27967 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-646xz" in "kube-system" namespace has status "Ready":"True"
	I1104 10:38:47.673823   27967 pod_ready.go:82] duration metric: took 400.675069ms for pod "nvidia-device-plugin-daemonset-646xz" in "kube-system" namespace to be "Ready" ...
	I1104 10:38:47.673834   27967 pod_ready.go:39] duration metric: took 22.913576674s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1104 10:38:47.673877   27967 api_server.go:52] waiting for apiserver process to appear ...
	I1104 10:38:47.673930   27967 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 10:38:47.709237   27967 api_server.go:72] duration metric: took 25.794219874s to wait for apiserver process to appear ...
	I1104 10:38:47.709263   27967 api_server.go:88] waiting for apiserver healthz status ...
	I1104 10:38:47.709285   27967 api_server.go:253] Checking apiserver healthz at https://192.168.39.4:8443/healthz ...
	I1104 10:38:47.713118   27967 api_server.go:279] https://192.168.39.4:8443/healthz returned 200:
	ok
	I1104 10:38:47.714082   27967 api_server.go:141] control plane version: v1.31.2
	I1104 10:38:47.714100   27967 api_server.go:131] duration metric: took 4.831792ms to wait for apiserver health ...
	I1104 10:38:47.714107   27967 system_pods.go:43] waiting for kube-system pods to appear ...
	I1104 10:38:47.815357   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:38:47.816047   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:38:47.861834   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:38:47.880010   27967 system_pods.go:59] 18 kube-system pods found
	I1104 10:38:47.880037   27967 system_pods.go:61] "amd-gpu-device-plugin-g59mv" [b0defe51-9739-4bbe-b65b-2b4cf8941f5a] Running
	I1104 10:38:47.880043   27967 system_pods.go:61] "coredns-7c65d6cfc9-hwwcg" [82ce98e6-792d-4cf2-80a3-e2e59fd840a1] Running
	I1104 10:38:47.880050   27967 system_pods.go:61] "csi-hostpath-attacher-0" [aba9d3ac-9e13-4702-af54-df0b53064a49] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1104 10:38:47.880056   27967 system_pods.go:61] "csi-hostpath-resizer-0" [c7a256af-f053-46a8-99b2-44e43137ec86] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1104 10:38:47.880064   27967 system_pods.go:61] "csi-hostpathplugin-jrm6t" [57cc4546-427d-4949-9fc9-3e6dac0b0fd8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1104 10:38:47.880069   27967 system_pods.go:61] "etcd-addons-746456" [1a2daee5-f509-4af0-a1bc-ad20c18ae356] Running
	I1104 10:38:47.880073   27967 system_pods.go:61] "kube-apiserver-addons-746456" [db2cdd30-8040-4f66-838b-80c258b94cbe] Running
	I1104 10:38:47.880077   27967 system_pods.go:61] "kube-controller-manager-addons-746456" [3546adf7-6f14-40e8-96a9-9d8f35428855] Running
	I1104 10:38:47.880087   27967 system_pods.go:61] "kube-ingress-dns-minikube" [34b1a1a6-34cd-43a0-a688-fd9bfcab67c4] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1104 10:38:47.880093   27967 system_pods.go:61] "kube-proxy-s6v2l" [db7c73f6-c992-4a9f-bab4-299ffd389484] Running
	I1104 10:38:47.880102   27967 system_pods.go:61] "kube-scheduler-addons-746456" [9efc1274-1eb2-4904-a322-6ab4a661222d] Running
	I1104 10:38:47.880109   27967 system_pods.go:61] "metrics-server-84c5f94fbc-7c9jd" [c431d0a4-e34e-4f14-a95d-3223d4486d7c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1104 10:38:47.880114   27967 system_pods.go:61] "nvidia-device-plugin-daemonset-646xz" [2de93991-ff75-4ba5-814e-4fbe32bd9b24] Running
	I1104 10:38:47.880122   27967 system_pods.go:61] "registry-66c9cd494c-gh6ft" [8fa29892-d576-414b-9dbb-a78812ace5fd] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1104 10:38:47.880132   27967 system_pods.go:61] "registry-proxy-r9qc2" [f8e1cbae-d518-45fa-8228-27e32339f030] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1104 10:38:47.880141   27967 system_pods.go:61] "snapshot-controller-56fcc65765-4l5bn" [8ae86336-7bdb-4245-895c-34b46444de04] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1104 10:38:47.880151   27967 system_pods.go:61] "snapshot-controller-56fcc65765-5dbpr" [3e7db880-cb20-42ce-9854-c64a11ee5a9c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1104 10:38:47.880160   27967 system_pods.go:61] "storage-provisioner" [c7696953-ca67-4d3c-a7ba-6a6538b9589a] Running
	I1104 10:38:47.880169   27967 system_pods.go:74] duration metric: took 166.056255ms to wait for pod list to return data ...
	I1104 10:38:47.880181   27967 default_sa.go:34] waiting for default service account to be created ...
	I1104 10:38:48.073371   27967 default_sa.go:45] found service account: "default"
	I1104 10:38:48.073394   27967 default_sa.go:55] duration metric: took 193.207402ms for default service account to be created ...
	I1104 10:38:48.073402   27967 system_pods.go:116] waiting for k8s-apps to be running ...
	I1104 10:38:48.146857   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:38:48.279838   27967 system_pods.go:86] 18 kube-system pods found
	I1104 10:38:48.279866   27967 system_pods.go:89] "amd-gpu-device-plugin-g59mv" [b0defe51-9739-4bbe-b65b-2b4cf8941f5a] Running
	I1104 10:38:48.279872   27967 system_pods.go:89] "coredns-7c65d6cfc9-hwwcg" [82ce98e6-792d-4cf2-80a3-e2e59fd840a1] Running
	I1104 10:38:48.279880   27967 system_pods.go:89] "csi-hostpath-attacher-0" [aba9d3ac-9e13-4702-af54-df0b53064a49] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1104 10:38:48.279887   27967 system_pods.go:89] "csi-hostpath-resizer-0" [c7a256af-f053-46a8-99b2-44e43137ec86] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1104 10:38:48.279894   27967 system_pods.go:89] "csi-hostpathplugin-jrm6t" [57cc4546-427d-4949-9fc9-3e6dac0b0fd8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1104 10:38:48.279951   27967 system_pods.go:89] "etcd-addons-746456" [1a2daee5-f509-4af0-a1bc-ad20c18ae356] Running
	I1104 10:38:48.279977   27967 system_pods.go:89] "kube-apiserver-addons-746456" [db2cdd30-8040-4f66-838b-80c258b94cbe] Running
	I1104 10:38:48.279983   27967 system_pods.go:89] "kube-controller-manager-addons-746456" [3546adf7-6f14-40e8-96a9-9d8f35428855] Running
	I1104 10:38:48.279994   27967 system_pods.go:89] "kube-ingress-dns-minikube" [34b1a1a6-34cd-43a0-a688-fd9bfcab67c4] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1104 10:38:48.280001   27967 system_pods.go:89] "kube-proxy-s6v2l" [db7c73f6-c992-4a9f-bab4-299ffd389484] Running
	I1104 10:38:48.280006   27967 system_pods.go:89] "kube-scheduler-addons-746456" [9efc1274-1eb2-4904-a322-6ab4a661222d] Running
	I1104 10:38:48.280012   27967 system_pods.go:89] "metrics-server-84c5f94fbc-7c9jd" [c431d0a4-e34e-4f14-a95d-3223d4486d7c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1104 10:38:48.280019   27967 system_pods.go:89] "nvidia-device-plugin-daemonset-646xz" [2de93991-ff75-4ba5-814e-4fbe32bd9b24] Running
	I1104 10:38:48.280027   27967 system_pods.go:89] "registry-66c9cd494c-gh6ft" [8fa29892-d576-414b-9dbb-a78812ace5fd] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1104 10:38:48.280037   27967 system_pods.go:89] "registry-proxy-r9qc2" [f8e1cbae-d518-45fa-8228-27e32339f030] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1104 10:38:48.280050   27967 system_pods.go:89] "snapshot-controller-56fcc65765-4l5bn" [8ae86336-7bdb-4245-895c-34b46444de04] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1104 10:38:48.280062   27967 system_pods.go:89] "snapshot-controller-56fcc65765-5dbpr" [3e7db880-cb20-42ce-9854-c64a11ee5a9c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1104 10:38:48.280072   27967 system_pods.go:89] "storage-provisioner" [c7696953-ca67-4d3c-a7ba-6a6538b9589a] Running
	I1104 10:38:48.280084   27967 system_pods.go:126] duration metric: took 206.676825ms to wait for k8s-apps to be running ...
	I1104 10:38:48.280095   27967 system_svc.go:44] waiting for kubelet service to be running ....
	I1104 10:38:48.280142   27967 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1104 10:38:48.300493   27967 system_svc.go:56] duration metric: took 20.388134ms WaitForService to wait for kubelet
	I1104 10:38:48.300524   27967 kubeadm.go:582] duration metric: took 26.385522166s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1104 10:38:48.300540   27967 node_conditions.go:102] verifying NodePressure condition ...
	I1104 10:38:48.313999   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:38:48.316348   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:38:48.360363   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:38:48.477053   27967 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1104 10:38:48.477126   27967 node_conditions.go:123] node cpu capacity is 2
	I1104 10:38:48.477147   27967 node_conditions.go:105] duration metric: took 176.601598ms to run NodePressure ...
	I1104 10:38:48.477162   27967 start.go:241] waiting for startup goroutines ...
	I1104 10:38:48.647417   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:38:48.813256   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:38:48.815488   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:38:48.861009   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:38:49.146731   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:38:49.313792   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:38:49.315113   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:38:49.667894   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:38:49.669454   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:38:49.813914   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:38:49.815637   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:38:49.861589   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:38:50.145477   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:38:50.313539   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:38:50.315480   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:38:50.361058   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:38:50.646192   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:38:50.815860   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:38:50.816533   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:38:50.861140   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:38:51.145296   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:38:51.317687   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:38:51.318577   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:38:51.362834   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:38:51.646818   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:38:51.815337   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:38:51.815416   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:38:51.860922   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:38:52.147402   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:38:52.318240   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:38:52.318764   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:38:52.361386   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:38:52.645660   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:38:52.815491   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:38:52.815578   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:38:52.860542   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:38:53.146641   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:38:53.313640   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:38:53.314927   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:38:53.361119   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:38:53.648301   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:38:53.814842   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:38:53.816604   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:38:53.860806   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:38:54.146312   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:38:54.315244   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:38:54.315400   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:38:54.361278   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:38:54.646675   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:38:54.816853   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:38:54.817978   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:38:54.864835   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:38:55.147073   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:38:55.314032   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:38:55.315590   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:38:55.361378   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:38:55.645834   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:38:55.816982   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:38:55.817008   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:38:55.861337   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:38:56.146886   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:38:56.315745   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:38:56.315769   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:38:56.361745   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:38:56.646634   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:38:56.814853   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:38:56.816102   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:38:56.861793   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:38:57.146977   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:38:57.315604   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:38:57.315756   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:38:57.360809   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:38:57.646185   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:38:57.815638   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:38:57.817025   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:38:57.861602   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:38:58.146468   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:38:58.313704   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:38:58.315785   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:38:58.361668   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:38:58.647726   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:38:58.813831   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:38:58.815668   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:38:58.861244   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:38:59.146435   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:38:59.313863   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:38:59.318161   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:38:59.361355   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:38:59.646288   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:38:59.816000   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:38:59.816187   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:38:59.861268   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:39:00.145602   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:00.314073   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:39:00.315390   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:00.361549   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:39:00.646089   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:00.814339   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:39:00.815898   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:00.861015   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:39:01.145485   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:01.313751   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:39:01.316482   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:01.360943   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:39:01.646864   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:01.814950   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:01.814957   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:39:01.861238   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:39:02.146239   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:02.316877   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:39:02.316887   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:02.361820   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:39:02.646740   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:02.815054   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:39:02.816500   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:02.861500   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:39:03.146400   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:03.314470   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:39:03.316431   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:03.361523   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:39:03.646118   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:03.816323   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:39:03.816673   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:03.860921   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:39:04.146227   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:04.315905   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:39:04.316023   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:04.362266   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:39:04.646673   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:04.813901   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:39:04.816412   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:04.861936   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:39:05.145682   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:05.328783   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:05.329268   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:39:05.361783   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:39:05.645866   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:06.109322   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:39:06.109936   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:06.111944   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:39:06.206947   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:06.316258   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:39:06.316608   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:06.417754   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:39:06.646104   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:06.813357   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:39:06.815456   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:06.860868   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:39:07.147889   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:07.316784   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:39:07.317219   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:07.363630   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:39:07.645931   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:07.816175   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:07.816240   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:39:07.862154   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:39:08.218990   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:08.314560   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:39:08.315623   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:08.364481   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:39:08.645411   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:08.813620   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:39:08.816475   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:08.861304   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:39:09.145710   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:09.314161   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:39:09.315799   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:09.360811   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:39:09.646105   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:09.814596   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:39:09.815743   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:09.861298   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:39:10.145948   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:10.314383   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:39:10.315764   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:10.361146   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:39:10.647145   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:10.814123   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:39:10.815526   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:10.860532   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:39:11.146612   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:11.315252   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:39:11.316452   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:11.360463   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:39:11.645644   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:11.820259   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:39:11.820337   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:11.862158   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:39:12.146736   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:12.315165   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:39:12.315393   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:12.360486   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:39:12.645944   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:12.813901   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:39:12.815319   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:12.861356   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:39:13.145543   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:13.314264   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:39:13.316036   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:13.361770   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:39:13.646094   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:13.816490   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:39:13.817315   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:13.916754   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:39:14.145827   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:14.314639   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:39:14.315580   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:14.362250   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:39:14.647075   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:14.814623   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:39:14.815772   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:14.861549   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:39:15.146090   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:15.314395   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:39:15.317168   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:15.362904   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:39:15.646444   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:15.815488   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:39:15.816067   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:15.861558   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:39:16.145951   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:16.315254   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:16.315913   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:39:16.361888   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:39:16.645805   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:16.813903   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1104 10:39:16.816031   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:16.861377   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:39:17.146815   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:17.318842   27967 kapi.go:107] duration metric: took 47.008658121s to wait for kubernetes.io/minikube-addons=registry ...
	I1104 10:39:17.319015   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:17.361872   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:39:17.646265   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:17.815780   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:17.860848   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:39:18.146992   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:18.315654   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:18.545545   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:39:18.650873   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:18.817320   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:18.862390   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:39:19.146721   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:19.318091   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:19.420045   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:39:19.645527   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:19.816110   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:19.862538   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:39:20.146568   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:20.316355   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:20.641153   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:39:20.646880   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:20.815534   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:20.917063   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:39:21.146307   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:21.316803   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:21.369895   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:39:21.646393   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:21.816281   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:21.861934   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:39:22.146082   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:22.315899   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:22.362428   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:39:22.646250   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:22.816663   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:22.860980   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:39:23.146923   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:23.315610   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:23.360751   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:39:23.649189   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:23.816026   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:23.861338   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:39:24.147538   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:24.316487   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:24.362660   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:39:24.646304   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:24.815786   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:24.860876   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:39:25.146267   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:25.316228   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:25.364023   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:39:25.646989   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:25.817034   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:25.921582   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:39:26.146709   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:26.315133   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:26.362005   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:39:26.646116   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:26.816057   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:26.862035   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:39:27.147062   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:27.316095   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:27.361493   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:39:27.645751   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:27.816655   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:27.860720   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:39:28.145891   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:28.315400   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:28.361276   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:39:28.646309   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:28.816082   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:28.861141   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:39:29.147299   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:29.315839   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:29.361759   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:39:29.647858   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:29.816888   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:29.861906   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:39:30.146197   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:30.315964   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:30.361298   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:39:30.645763   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:30.815779   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:30.860681   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:39:31.147021   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:31.315394   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:31.360780   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:39:31.645977   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:31.815551   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:31.860650   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:39:32.146425   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:32.316360   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:32.362355   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:39:32.645941   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:32.817609   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:32.923975   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:39:33.146710   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:33.315626   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:33.360507   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1104 10:39:33.646092   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:33.816510   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:33.861576   27967 kapi.go:107] duration metric: took 1m3.00486764s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1104 10:39:34.146565   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:34.315301   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:34.646171   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:34.816489   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:35.145599   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:35.316430   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:35.647212   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:35.818133   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:36.147956   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:36.316202   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:36.648619   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:36.815719   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:37.147189   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:37.316697   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:37.646856   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:37.815139   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:38.145342   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:38.315828   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:38.646561   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:38.818050   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:39.146360   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:39.316266   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:39.645842   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:39.815179   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:40.145937   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:40.315453   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:40.646893   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:40.815300   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:41.229036   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:41.315255   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:41.649342   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:41.816454   27967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1104 10:39:42.146902   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:42.316575   27967 kapi.go:107] duration metric: took 1m12.005114963s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1104 10:39:42.646190   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:43.146842   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:43.646543   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:44.146001   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:44.645491   27967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1104 10:39:45.146499   27967 kapi.go:107] duration metric: took 1m12.504065259s to wait for kubernetes.io/minikube-addons=gcp-auth ...
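The kapi.go lines above show minikube polling each addon's pods by label selector roughly twice a second until every matching pod reports Ready, then logging a duration metric for that selector. Below is a minimal client-go sketch of that polling pattern; it is not minikube's actual kapi helper, and the selector, namespace, poll interval, and timeout are illustrative assumptions.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether a pod's Ready condition is True.
func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
			return true
		}
	}
	return false
}

// waitForSelector polls every 500ms (roughly the cadence visible in the
// timestamps above) until every pod matching selector in namespace is Ready,
// and returns how long that took.
func waitForSelector(ctx context.Context, cs *kubernetes.Clientset, namespace, selector string) (time.Duration, error) {
	start := time.Now()
	tick := time.NewTicker(500 * time.Millisecond)
	defer tick.Stop()
	for {
		pods, err := cs.CoreV1().Pods(namespace).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err == nil && len(pods.Items) > 0 {
			allReady := true
			for i := range pods.Items {
				if !podReady(&pods.Items[i]) {
					allReady = false
					break
				}
			}
			if allReady {
				return time.Since(start), nil
			}
		}
		select {
		case <-ctx.Done():
			return time.Since(start), ctx.Err()
		case <-tick.C:
		}
	}
}

func main() {
	// Assumes a kubeconfig at the default location pointing at the test cluster.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	// One of the selectors waited on above; the namespace is an assumption here.
	took, err := waitForSelector(ctx, cs, "ingress-nginx", "app.kubernetes.io/name=ingress-nginx")
	fmt.Printf("took %s to wait for app.kubernetes.io/name=ingress-nginx (err: %v)\n", took, err)
}

The 500 ms ticker simply mirrors the roughly half-second spacing of the poll timestamps above; it is not a documented minikube setting.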
	I1104 10:39:45.148256   27967 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-746456 cluster.
	I1104 10:39:45.149641   27967 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1104 10:39:45.150768   27967 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1104 10:39:45.152077   27967 out.go:177] * Enabled addons: nvidia-device-plugin, cloud-spanner, ingress-dns, amd-gpu-device-plugin, default-storageclass, inspektor-gadget, storage-provisioner, storage-provisioner-rancher, metrics-server, yakd, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I1104 10:39:45.153248   27967 addons.go:510] duration metric: took 1m23.238217592s for enable addons: enabled=[nvidia-device-plugin cloud-spanner ingress-dns amd-gpu-device-plugin default-storageclass inspektor-gadget storage-provisioner storage-provisioner-rancher metrics-server yakd volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I1104 10:39:45.153293   27967 start.go:246] waiting for cluster config update ...
	I1104 10:39:45.153311   27967 start.go:255] writing updated cluster config ...
	I1104 10:39:45.153555   27967 ssh_runner.go:195] Run: rm -f paused
	I1104 10:39:45.204906   27967 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1104 10:39:45.206645   27967 out.go:177] * Done! kubectl is now configured to use "addons-746456" cluster and "default" namespace by default
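The gcp-auth hint printed above says a pod can opt out of the credential mount by carrying a label with the `gcp-auth-skip-secret` key. A minimal client-go sketch of creating such a pod follows; the label value "true" is an assumption (the hint only names the key), and the image is the busybox image already present in this cluster's container list.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "no-gcp-creds",
			Namespace: "default",
			// The gcp-auth-skip-secret label opts this pod out of the credential mount.
			// The value "true" is an assumption; the hint above only names the key.
			Labels: map[string]string{"gcp-auth-skip-secret": "true"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "app",
				Image:   "gcr.io/k8s-minikube/busybox",
				Command: []string{"sleep", "3600"},
			}},
		},
	}

	created, err := cs.CoreV1().Pods(pod.Namespace).Create(context.Background(), pod, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("created pod", created.Name)
}

As the output above notes, pods that already exist keep their current mounts unless they are recreated or the addon is re-enabled with --refresh.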
	
	
	==> CRI-O <==
	Nov 04 10:45:48 addons-746456 crio[654]: time="2024-11-04 10:45:48.451341638Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730717148451315682,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:603350,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d3b82237-0e32-47c5-af5f-6adef60b9c39 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 04 10:45:48 addons-746456 crio[654]: time="2024-11-04 10:45:48.451975141Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b1fc2630-2066-4a95-a72c-dc70e966bc31 name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 10:45:48 addons-746456 crio[654]: time="2024-11-04 10:45:48.452028909Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b1fc2630-2066-4a95-a72c-dc70e966bc31 name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 10:45:48 addons-746456 crio[654]: time="2024-11-04 10:45:48.452265011Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b2770d4ba74d6a6cbb4834920f6a83f4a78443dbc0fefca298d4aa86cc1aa854,PodSandboxId:6bc318d738dbf35df8b4e79ebee664d3ce8c0bf9f4545c92dc3183b7d6c50ea3,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1730716966658682223,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-ldhdr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 47d471ae-ef33-496e-9841-7d205c707c80,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c8247b219665e7af79b49bd811705a6f00d7664e4e6a19b057b565a7419fcca,PodSandboxId:07d9887c045b77b40580d5f537bb1e4fd98735cb712fdef7e37219efdfcdb2cf,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cb8f91112b6b50ead202f48bbf81cec4b34c254417254efd94c803f7dd718045,State:CONTAINER_RUNNING,CreatedAt:1730716824544235430,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0e748c47-c76c-4e32-a421-8bf0ac2fb2f6,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22a0933d1011da06e981b3d5a509bfb8f08b4d690e7f8e003abde640bfc7a20a,PodSandboxId:2b901bc38beda3e1cc44ffaa17ae41a1aea0a9903762b28e34cc7472c851d0ba,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1730716789949158008,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: cbb88fd7-9ca0-443f-8
11a-4fb498e9f134,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ca0ab3f19cacf965b6ae92bb488de26b67d0e6d4f126dbf7a12c20412f2d7ab,PodSandboxId:7441fd79a6caa23d8de0cc270be08c6bde16f1aa96383e2ad3e66128f583f8f2,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1730716736300926357,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-7c9jd,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: c431d0a4-e34e-4f14-a95d-3223d4486d7c,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83f0bb1d840b34350d980761bff477027f88382946432d75ae93f8f88ab79e1e,PodSandboxId:477489465d8e446c8befb47de4f7b75176648f950a188aad9bf04416bc1731b4,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1730716724912612237,Labels:map[string]string{io.kube
rnetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-g59mv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0defe51-9739-4bbe-b65b-2b4cf8941f5a,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:937a061c836e4f55dcbe4ded8cfc61ace0b16d090889344de6647c05a5621b3c,PodSandboxId:eac6eb82fe6b9169d2c640bdedeedb960c65589a79e3cebe1f4bf28b4e718d01,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730716708237243547,Labels:map[string]string{io.kubern
etes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7696953-ca67-4d3c-a7ba-6a6538b9589a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f21710ba22a25e0b7ba912ffbb2a8216d81babaf26cdc2737466634f337b3fa3,PodSandboxId:8a7c42c912620f50badbab272913bbd7da64acead4b62d4aec6e41af6213ffb2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730716705884220687,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-hwwcg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82ce98e6-792d-4cf2-80a3-e2e59fd840a1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6744bb877dd8872eaf6f3be107bfe149f989a2a495d09a2c1969a4438d36e62,PodSandboxId:ce2cee411f82e6c7701905f668a83b9ff4a8baefcac0b49549c379713beb0c23,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,Runtim
eHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730716703208979193,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s6v2l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db7c73f6-c992-4a9f-bab4-299ffd389484,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e248e3297d1fda4a00b162a352356438ee94390c14eb5308505a4e49043096b5,PodSandboxId:1229cf81ec6fe9f869051608e3eb17303a9f8905ed7ec9d2320f7bae37d00ca0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1
a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730716691768610331,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-746456,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5037ea39efb47267e351c80eb85421d2,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9519febde6ea5698d00bda0cef2d9f74a934c6d5e398abf71a162a7bca55abc0,PodSandboxId:1d53b2c4b0afd21c395952fb466c7b15091e38f4c46aa96d1f40f3807a6d500b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048c
c4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730716691770001047,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-746456,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc4d821b6fade2fb24822ab63a9657a9,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b37d172943b0ded3845df617c273978f49a44cb3cbbf8228c8bd37f84ebd8d01,PodSandboxId:2c9e0009ce343fd64540d89da20303d8f93c7dbaabe7811fb85c2d72e8bc7092,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b4450
3,State:CONTAINER_RUNNING,CreatedAt:1730716691751330311,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-746456,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e3932d85034570fdb4ca99178ea7d10,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2078670156607445e3f69e0c7d2edf82ea10c4a02877028154c691b079b3e25,PodSandboxId:d307e637dbda51467eee47aaa737a2d96eb4d154258389904bcb782839402f41,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a491
73,State:CONTAINER_RUNNING,CreatedAt:1730716691740600539,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-746456,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d028583daf790ca45711d2f2b6ff7f8,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b1fc2630-2066-4a95-a72c-dc70e966bc31 name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 10:45:48 addons-746456 crio[654]: time="2024-11-04 10:45:48.486867972Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=09914799-098d-4a09-846b-a494c253af3e name=/runtime.v1.RuntimeService/Version
	Nov 04 10:45:48 addons-746456 crio[654]: time="2024-11-04 10:45:48.486956100Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=09914799-098d-4a09-846b-a494c253af3e name=/runtime.v1.RuntimeService/Version
	Nov 04 10:45:48 addons-746456 crio[654]: time="2024-11-04 10:45:48.489092041Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d7347f2a-d90d-4c63-ac5c-4571cf16672c name=/runtime.v1.ImageService/ImageFsInfo
	Nov 04 10:45:48 addons-746456 crio[654]: time="2024-11-04 10:45:48.490225658Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730717148490198941,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:603350,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d7347f2a-d90d-4c63-ac5c-4571cf16672c name=/runtime.v1.ImageService/ImageFsInfo
	Nov 04 10:45:48 addons-746456 crio[654]: time="2024-11-04 10:45:48.490814748Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=425c1a92-ab76-4334-99fe-ee721b6e10dc name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 10:45:48 addons-746456 crio[654]: time="2024-11-04 10:45:48.490866884Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=425c1a92-ab76-4334-99fe-ee721b6e10dc name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 10:45:48 addons-746456 crio[654]: time="2024-11-04 10:45:48.491113337Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b2770d4ba74d6a6cbb4834920f6a83f4a78443dbc0fefca298d4aa86cc1aa854,PodSandboxId:6bc318d738dbf35df8b4e79ebee664d3ce8c0bf9f4545c92dc3183b7d6c50ea3,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1730716966658682223,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-ldhdr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 47d471ae-ef33-496e-9841-7d205c707c80,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c8247b219665e7af79b49bd811705a6f00d7664e4e6a19b057b565a7419fcca,PodSandboxId:07d9887c045b77b40580d5f537bb1e4fd98735cb712fdef7e37219efdfcdb2cf,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cb8f91112b6b50ead202f48bbf81cec4b34c254417254efd94c803f7dd718045,State:CONTAINER_RUNNING,CreatedAt:1730716824544235430,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0e748c47-c76c-4e32-a421-8bf0ac2fb2f6,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22a0933d1011da06e981b3d5a509bfb8f08b4d690e7f8e003abde640bfc7a20a,PodSandboxId:2b901bc38beda3e1cc44ffaa17ae41a1aea0a9903762b28e34cc7472c851d0ba,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1730716789949158008,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: cbb88fd7-9ca0-443f-8
11a-4fb498e9f134,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ca0ab3f19cacf965b6ae92bb488de26b67d0e6d4f126dbf7a12c20412f2d7ab,PodSandboxId:7441fd79a6caa23d8de0cc270be08c6bde16f1aa96383e2ad3e66128f583f8f2,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1730716736300926357,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-7c9jd,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: c431d0a4-e34e-4f14-a95d-3223d4486d7c,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83f0bb1d840b34350d980761bff477027f88382946432d75ae93f8f88ab79e1e,PodSandboxId:477489465d8e446c8befb47de4f7b75176648f950a188aad9bf04416bc1731b4,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1730716724912612237,Labels:map[string]string{io.kube
rnetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-g59mv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0defe51-9739-4bbe-b65b-2b4cf8941f5a,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:937a061c836e4f55dcbe4ded8cfc61ace0b16d090889344de6647c05a5621b3c,PodSandboxId:eac6eb82fe6b9169d2c640bdedeedb960c65589a79e3cebe1f4bf28b4e718d01,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730716708237243547,Labels:map[string]string{io.kubern
etes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7696953-ca67-4d3c-a7ba-6a6538b9589a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f21710ba22a25e0b7ba912ffbb2a8216d81babaf26cdc2737466634f337b3fa3,PodSandboxId:8a7c42c912620f50badbab272913bbd7da64acead4b62d4aec6e41af6213ffb2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730716705884220687,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-hwwcg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82ce98e6-792d-4cf2-80a3-e2e59fd840a1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6744bb877dd8872eaf6f3be107bfe149f989a2a495d09a2c1969a4438d36e62,PodSandboxId:ce2cee411f82e6c7701905f668a83b9ff4a8baefcac0b49549c379713beb0c23,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,Runtim
eHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730716703208979193,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s6v2l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db7c73f6-c992-4a9f-bab4-299ffd389484,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e248e3297d1fda4a00b162a352356438ee94390c14eb5308505a4e49043096b5,PodSandboxId:1229cf81ec6fe9f869051608e3eb17303a9f8905ed7ec9d2320f7bae37d00ca0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1
a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730716691768610331,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-746456,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5037ea39efb47267e351c80eb85421d2,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9519febde6ea5698d00bda0cef2d9f74a934c6d5e398abf71a162a7bca55abc0,PodSandboxId:1d53b2c4b0afd21c395952fb466c7b15091e38f4c46aa96d1f40f3807a6d500b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048c
c4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730716691770001047,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-746456,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc4d821b6fade2fb24822ab63a9657a9,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b37d172943b0ded3845df617c273978f49a44cb3cbbf8228c8bd37f84ebd8d01,PodSandboxId:2c9e0009ce343fd64540d89da20303d8f93c7dbaabe7811fb85c2d72e8bc7092,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b4450
3,State:CONTAINER_RUNNING,CreatedAt:1730716691751330311,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-746456,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e3932d85034570fdb4ca99178ea7d10,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2078670156607445e3f69e0c7d2edf82ea10c4a02877028154c691b079b3e25,PodSandboxId:d307e637dbda51467eee47aaa737a2d96eb4d154258389904bcb782839402f41,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a491
73,State:CONTAINER_RUNNING,CreatedAt:1730716691740600539,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-746456,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d028583daf790ca45711d2f2b6ff7f8,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=425c1a92-ab76-4334-99fe-ee721b6e10dc name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 10:45:48 addons-746456 crio[654]: time="2024-11-04 10:45:48.530885118Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b7e0b1f0-801a-4360-85f9-4a72e1c8cce7 name=/runtime.v1.RuntimeService/Version
	Nov 04 10:45:48 addons-746456 crio[654]: time="2024-11-04 10:45:48.530971877Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b7e0b1f0-801a-4360-85f9-4a72e1c8cce7 name=/runtime.v1.RuntimeService/Version
	Nov 04 10:45:48 addons-746456 crio[654]: time="2024-11-04 10:45:48.532242486Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4eddf3c5-f868-491f-9466-fd1f93d57287 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 04 10:45:48 addons-746456 crio[654]: time="2024-11-04 10:45:48.533677631Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730717148533650257,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:603350,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4eddf3c5-f868-491f-9466-fd1f93d57287 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 04 10:45:48 addons-746456 crio[654]: time="2024-11-04 10:45:48.534268925Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2820e6ae-0792-4cc3-a50f-2e11ce9914f2 name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 10:45:48 addons-746456 crio[654]: time="2024-11-04 10:45:48.534366763Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2820e6ae-0792-4cc3-a50f-2e11ce9914f2 name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 10:45:48 addons-746456 crio[654]: time="2024-11-04 10:45:48.534647850Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b2770d4ba74d6a6cbb4834920f6a83f4a78443dbc0fefca298d4aa86cc1aa854,PodSandboxId:6bc318d738dbf35df8b4e79ebee664d3ce8c0bf9f4545c92dc3183b7d6c50ea3,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1730716966658682223,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-ldhdr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 47d471ae-ef33-496e-9841-7d205c707c80,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c8247b219665e7af79b49bd811705a6f00d7664e4e6a19b057b565a7419fcca,PodSandboxId:07d9887c045b77b40580d5f537bb1e4fd98735cb712fdef7e37219efdfcdb2cf,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cb8f91112b6b50ead202f48bbf81cec4b34c254417254efd94c803f7dd718045,State:CONTAINER_RUNNING,CreatedAt:1730716824544235430,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0e748c47-c76c-4e32-a421-8bf0ac2fb2f6,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22a0933d1011da06e981b3d5a509bfb8f08b4d690e7f8e003abde640bfc7a20a,PodSandboxId:2b901bc38beda3e1cc44ffaa17ae41a1aea0a9903762b28e34cc7472c851d0ba,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1730716789949158008,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: cbb88fd7-9ca0-443f-8
11a-4fb498e9f134,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ca0ab3f19cacf965b6ae92bb488de26b67d0e6d4f126dbf7a12c20412f2d7ab,PodSandboxId:7441fd79a6caa23d8de0cc270be08c6bde16f1aa96383e2ad3e66128f583f8f2,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1730716736300926357,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-7c9jd,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: c431d0a4-e34e-4f14-a95d-3223d4486d7c,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83f0bb1d840b34350d980761bff477027f88382946432d75ae93f8f88ab79e1e,PodSandboxId:477489465d8e446c8befb47de4f7b75176648f950a188aad9bf04416bc1731b4,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1730716724912612237,Labels:map[string]string{io.kube
rnetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-g59mv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0defe51-9739-4bbe-b65b-2b4cf8941f5a,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:937a061c836e4f55dcbe4ded8cfc61ace0b16d090889344de6647c05a5621b3c,PodSandboxId:eac6eb82fe6b9169d2c640bdedeedb960c65589a79e3cebe1f4bf28b4e718d01,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730716708237243547,Labels:map[string]string{io.kubern
etes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7696953-ca67-4d3c-a7ba-6a6538b9589a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f21710ba22a25e0b7ba912ffbb2a8216d81babaf26cdc2737466634f337b3fa3,PodSandboxId:8a7c42c912620f50badbab272913bbd7da64acead4b62d4aec6e41af6213ffb2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730716705884220687,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-hwwcg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82ce98e6-792d-4cf2-80a3-e2e59fd840a1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6744bb877dd8872eaf6f3be107bfe149f989a2a495d09a2c1969a4438d36e62,PodSandboxId:ce2cee411f82e6c7701905f668a83b9ff4a8baefcac0b49549c379713beb0c23,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,Runtim
eHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730716703208979193,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s6v2l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db7c73f6-c992-4a9f-bab4-299ffd389484,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e248e3297d1fda4a00b162a352356438ee94390c14eb5308505a4e49043096b5,PodSandboxId:1229cf81ec6fe9f869051608e3eb17303a9f8905ed7ec9d2320f7bae37d00ca0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1
a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730716691768610331,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-746456,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5037ea39efb47267e351c80eb85421d2,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9519febde6ea5698d00bda0cef2d9f74a934c6d5e398abf71a162a7bca55abc0,PodSandboxId:1d53b2c4b0afd21c395952fb466c7b15091e38f4c46aa96d1f40f3807a6d500b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048c
c4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730716691770001047,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-746456,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc4d821b6fade2fb24822ab63a9657a9,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b37d172943b0ded3845df617c273978f49a44cb3cbbf8228c8bd37f84ebd8d01,PodSandboxId:2c9e0009ce343fd64540d89da20303d8f93c7dbaabe7811fb85c2d72e8bc7092,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b4450
3,State:CONTAINER_RUNNING,CreatedAt:1730716691751330311,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-746456,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e3932d85034570fdb4ca99178ea7d10,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2078670156607445e3f69e0c7d2edf82ea10c4a02877028154c691b079b3e25,PodSandboxId:d307e637dbda51467eee47aaa737a2d96eb4d154258389904bcb782839402f41,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a491
73,State:CONTAINER_RUNNING,CreatedAt:1730716691740600539,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-746456,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d028583daf790ca45711d2f2b6ff7f8,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2820e6ae-0792-4cc3-a50f-2e11ce9914f2 name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 10:45:48 addons-746456 crio[654]: time="2024-11-04 10:45:48.566959294Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=78d0c76a-f390-4155-b849-782ebbdd84ec name=/runtime.v1.RuntimeService/Version
	Nov 04 10:45:48 addons-746456 crio[654]: time="2024-11-04 10:45:48.567046393Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=78d0c76a-f390-4155-b849-782ebbdd84ec name=/runtime.v1.RuntimeService/Version
	Nov 04 10:45:48 addons-746456 crio[654]: time="2024-11-04 10:45:48.567988594Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=52064ca9-2c9c-44a5-9b2a-bfce308e9e1d name=/runtime.v1.ImageService/ImageFsInfo
	Nov 04 10:45:48 addons-746456 crio[654]: time="2024-11-04 10:45:48.569214605Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730717148569186977,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:603350,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=52064ca9-2c9c-44a5-9b2a-bfce308e9e1d name=/runtime.v1.ImageService/ImageFsInfo
	Nov 04 10:45:48 addons-746456 crio[654]: time="2024-11-04 10:45:48.569773150Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a2287071-ef96-45ef-85cb-86eb8bc49af8 name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 10:45:48 addons-746456 crio[654]: time="2024-11-04 10:45:48.569840654Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a2287071-ef96-45ef-85cb-86eb8bc49af8 name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 10:45:48 addons-746456 crio[654]: time="2024-11-04 10:45:48.570091487Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b2770d4ba74d6a6cbb4834920f6a83f4a78443dbc0fefca298d4aa86cc1aa854,PodSandboxId:6bc318d738dbf35df8b4e79ebee664d3ce8c0bf9f4545c92dc3183b7d6c50ea3,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1730716966658682223,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-ldhdr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 47d471ae-ef33-496e-9841-7d205c707c80,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c8247b219665e7af79b49bd811705a6f00d7664e4e6a19b057b565a7419fcca,PodSandboxId:07d9887c045b77b40580d5f537bb1e4fd98735cb712fdef7e37219efdfcdb2cf,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cb8f91112b6b50ead202f48bbf81cec4b34c254417254efd94c803f7dd718045,State:CONTAINER_RUNNING,CreatedAt:1730716824544235430,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0e748c47-c76c-4e32-a421-8bf0ac2fb2f6,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22a0933d1011da06e981b3d5a509bfb8f08b4d690e7f8e003abde640bfc7a20a,PodSandboxId:2b901bc38beda3e1cc44ffaa17ae41a1aea0a9903762b28e34cc7472c851d0ba,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1730716789949158008,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: cbb88fd7-9ca0-443f-8
11a-4fb498e9f134,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ca0ab3f19cacf965b6ae92bb488de26b67d0e6d4f126dbf7a12c20412f2d7ab,PodSandboxId:7441fd79a6caa23d8de0cc270be08c6bde16f1aa96383e2ad3e66128f583f8f2,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1730716736300926357,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-7c9jd,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: c431d0a4-e34e-4f14-a95d-3223d4486d7c,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83f0bb1d840b34350d980761bff477027f88382946432d75ae93f8f88ab79e1e,PodSandboxId:477489465d8e446c8befb47de4f7b75176648f950a188aad9bf04416bc1731b4,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1730716724912612237,Labels:map[string]string{io.kube
rnetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-g59mv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0defe51-9739-4bbe-b65b-2b4cf8941f5a,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:937a061c836e4f55dcbe4ded8cfc61ace0b16d090889344de6647c05a5621b3c,PodSandboxId:eac6eb82fe6b9169d2c640bdedeedb960c65589a79e3cebe1f4bf28b4e718d01,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730716708237243547,Labels:map[string]string{io.kubern
etes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7696953-ca67-4d3c-a7ba-6a6538b9589a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f21710ba22a25e0b7ba912ffbb2a8216d81babaf26cdc2737466634f337b3fa3,PodSandboxId:8a7c42c912620f50badbab272913bbd7da64acead4b62d4aec6e41af6213ffb2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730716705884220687,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-hwwcg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82ce98e6-792d-4cf2-80a3-e2e59fd840a1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6744bb877dd8872eaf6f3be107bfe149f989a2a495d09a2c1969a4438d36e62,PodSandboxId:ce2cee411f82e6c7701905f668a83b9ff4a8baefcac0b49549c379713beb0c23,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,Runtim
eHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730716703208979193,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s6v2l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db7c73f6-c992-4a9f-bab4-299ffd389484,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e248e3297d1fda4a00b162a352356438ee94390c14eb5308505a4e49043096b5,PodSandboxId:1229cf81ec6fe9f869051608e3eb17303a9f8905ed7ec9d2320f7bae37d00ca0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1
a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730716691768610331,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-746456,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5037ea39efb47267e351c80eb85421d2,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9519febde6ea5698d00bda0cef2d9f74a934c6d5e398abf71a162a7bca55abc0,PodSandboxId:1d53b2c4b0afd21c395952fb466c7b15091e38f4c46aa96d1f40f3807a6d500b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048c
c4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730716691770001047,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-746456,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc4d821b6fade2fb24822ab63a9657a9,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b37d172943b0ded3845df617c273978f49a44cb3cbbf8228c8bd37f84ebd8d01,PodSandboxId:2c9e0009ce343fd64540d89da20303d8f93c7dbaabe7811fb85c2d72e8bc7092,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b4450
3,State:CONTAINER_RUNNING,CreatedAt:1730716691751330311,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-746456,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e3932d85034570fdb4ca99178ea7d10,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2078670156607445e3f69e0c7d2edf82ea10c4a02877028154c691b079b3e25,PodSandboxId:d307e637dbda51467eee47aaa737a2d96eb4d154258389904bcb782839402f41,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a491
73,State:CONTAINER_RUNNING,CreatedAt:1730716691740600539,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-746456,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d028583daf790ca45711d2f2b6ff7f8,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a2287071-ef96-45ef-85cb-86eb8bc49af8 name=/runtime.v1.RuntimeService/ListContainers
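
Note: the Version, ImageFsInfo and ListContainers request/response pairs above are CRI-O answering CRI calls (from the kubelet and/or CRI tooling) on unix:///var/run/crio/crio.sock. As a rough, illustrative way to pull the same data interactively, assuming crictl is available inside the minikube VM (reachable with minikube ssh), one could run:

  crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a        # container list, as in ListContainers
  crictl --runtime-endpoint unix:///var/run/crio/crio.sock imagefsinfo  # image filesystem usage, as in ImageFsInfo
  crictl --runtime-endpoint unix:///var/run/crio/crio.sock version      # runtime name and version, as in Version

The condensed form of the first command is essentially the "container status" table that follows.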
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	b2770d4ba74d6       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   3 minutes ago       Running             hello-world-app           0                   6bc318d738dbf       hello-world-app-55bf9c44b4-ldhdr
	8c8247b219665       docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250                         5 minutes ago       Running             nginx                     0                   07d9887c045b7       nginx
	22a0933d1011d       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                     5 minutes ago       Running             busybox                   0                   2b901bc38beda       busybox
	7ca0ab3f19cac       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a   6 minutes ago       Running             metrics-server            0                   7441fd79a6caa       metrics-server-84c5f94fbc-7c9jd
	83f0bb1d840b3       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                7 minutes ago       Running             amd-gpu-device-plugin     0                   477489465d8e4       amd-gpu-device-plugin-g59mv
	937a061c836e4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                        7 minutes ago       Running             storage-provisioner       0                   eac6eb82fe6b9       storage-provisioner
	f21710ba22a25       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                        7 minutes ago       Running             coredns                   0                   8a7c42c912620       coredns-7c65d6cfc9-hwwcg
	f6744bb877dd8       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                                        7 minutes ago       Running             kube-proxy                0                   ce2cee411f82e       kube-proxy-s6v2l
	9519febde6ea5       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                        7 minutes ago       Running             etcd                      0                   1d53b2c4b0afd       etcd-addons-746456
	e248e3297d1fd       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                                        7 minutes ago       Running             kube-scheduler            0                   1229cf81ec6fe       kube-scheduler-addons-746456
	b37d172943b0d       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                                        7 minutes ago       Running             kube-controller-manager   0                   2c9e0009ce343       kube-controller-manager-addons-746456
	a207867015660       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                                        7 minutes ago       Running             kube-apiserver            0                   d307e637dbda5       kube-apiserver-addons-746456
	
	
	==> coredns [f21710ba22a25e0b7ba912ffbb2a8216d81babaf26cdc2737466634f337b3fa3] <==
	[INFO] 10.244.0.22:59911 - 60076 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000070895s
	[INFO] 10.244.0.22:59911 - 22557 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000087612s
	[INFO] 10.244.0.22:59911 - 39542 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000113042s
	[INFO] 10.244.0.22:59911 - 39586 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000112268s
	[INFO] 10.244.0.22:52437 - 37759 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000051748s
	[INFO] 10.244.0.22:52437 - 25557 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000038668s
	[INFO] 10.244.0.22:52437 - 3666 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000035784s
	[INFO] 10.244.0.22:52437 - 50095 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000034063s
	[INFO] 10.244.0.22:52437 - 38592 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000034095s
	[INFO] 10.244.0.22:52437 - 51264 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000032853s
	[INFO] 10.244.0.22:52437 - 9682 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000042496s
	[INFO] 10.244.0.22:45066 - 4451 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000100734s
	[INFO] 10.244.0.22:45066 - 15376 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000098331s
	[INFO] 10.244.0.22:42480 - 19683 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.00004341s
	[INFO] 10.244.0.22:42480 - 32006 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000052023s
	[INFO] 10.244.0.22:45066 - 18326 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000033044s
	[INFO] 10.244.0.22:42480 - 60944 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00003822s
	[INFO] 10.244.0.22:45066 - 42261 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000030716s
	[INFO] 10.244.0.22:42480 - 53262 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000030271s
	[INFO] 10.244.0.22:45066 - 19701 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000022621s
	[INFO] 10.244.0.22:42480 - 36597 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000034048s
	[INFO] 10.244.0.22:45066 - 6417 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000023121s
	[INFO] 10.244.0.22:42480 - 63099 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000036262s
	[INFO] 10.244.0.22:45066 - 14222 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000031671s
	[INFO] 10.244.0.22:42480 - 46699 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000067644s
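
Note: the lookup pattern above is ordinary cluster-DNS search-path expansion. hello-world-app.default.svc.cluster.local has fewer dots than the default pod resolv.conf ndots:5 threshold, so the client (10.244.0.22, whose first search suffix is ingress-nginx.svc.cluster.local, i.e. apparently a pod in the ingress-nginx namespace) first tries the name with each search suffix appended and gets NXDOMAIN, and only the unsuffixed name resolves with NOERROR. Illustrative commands to observe the same behaviour from this cluster's existing busybox pod:

  kubectl --context addons-746456 exec busybox -- cat /etc/resolv.conf
  kubectl --context addons-746456 exec busybox -- nslookup hello-world-app.default.svc.cluster.local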
	
	
	==> describe nodes <==
	Name:               addons-746456
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-746456
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c03dd974d73f8853a1a57928c124797a5ae24dc4
	                    minikube.k8s.io/name=addons-746456
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_11_04T10_38_17_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-746456
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 04 Nov 2024 10:38:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-746456
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 04 Nov 2024 10:45:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 04 Nov 2024 10:42:53 +0000   Mon, 04 Nov 2024 10:38:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 04 Nov 2024 10:42:53 +0000   Mon, 04 Nov 2024 10:38:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 04 Nov 2024 10:42:53 +0000   Mon, 04 Nov 2024 10:38:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 04 Nov 2024 10:42:53 +0000   Mon, 04 Nov 2024 10:38:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.4
	  Hostname:    addons-746456
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 824bf4a601f5426ab4bb582ae703a9d2
	  System UUID:                824bf4a6-01f5-426a-b4bb-582ae703a9d2
	  Boot ID:                    43a61baf-811b-45a7-8f72-715fdd200ed5
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m1s
	  default                     hello-world-app-55bf9c44b4-ldhdr         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m4s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m26s
	  kube-system                 amd-gpu-device-plugin-g59mv              0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m24s
	  kube-system                 coredns-7c65d6cfc9-hwwcg                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     7m26s
	  kube-system                 etcd-addons-746456                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         7m31s
	  kube-system                 kube-apiserver-addons-746456             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m31s
	  kube-system                 kube-controller-manager-addons-746456    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m31s
	  kube-system                 kube-proxy-s6v2l                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m26s
	  kube-system                 kube-scheduler-addons-746456             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m31s
	  kube-system                 metrics-server-84c5f94fbc-7c9jd          100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         7m22s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m22s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             370Mi (9%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 7m24s                  kube-proxy       
	  Normal  Starting                 7m37s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  7m37s (x8 over 7m37s)  kubelet          Node addons-746456 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m37s (x8 over 7m37s)  kubelet          Node addons-746456 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m37s (x7 over 7m37s)  kubelet          Node addons-746456 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m37s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 7m32s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m32s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m31s                  kubelet          Node addons-746456 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m31s                  kubelet          Node addons-746456 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m31s                  kubelet          Node addons-746456 status is now: NodeHasSufficientPID
	  Normal  NodeReady                7m31s                  kubelet          Node addons-746456 status is now: NodeReady
	  Normal  RegisteredNode           7m27s                  node-controller  Node addons-746456 event: Registered Node addons-746456 in Controller
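
Note: the Allocated resources figures above are the sums of the per-pod requests and limits listed. CPU requests: 100m (coredns) + 100m (etcd) + 250m (kube-apiserver) + 200m (kube-controller-manager) + 100m (kube-scheduler) + 100m (metrics-server) = 850m, about 42% of the 2 allocatable CPUs. Memory requests: 70Mi + 100Mi + 200Mi = 370Mi (roughly 9% of 3912780Ki); the only memory limit set is coredns' 170Mi (roughly 4%). The whole section corresponds to the output of kubectl --context addons-746456 describe node addons-746456.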
	
	
	==> dmesg <==
	[  +5.030257] kauditd_printk_skb: 153 callbacks suppressed
	[ +10.298373] kauditd_printk_skb: 64 callbacks suppressed
	[Nov 4 10:39] kauditd_printk_skb: 9 callbacks suppressed
	[  +5.598826] kauditd_printk_skb: 2 callbacks suppressed
	[  +6.667129] kauditd_printk_skb: 24 callbacks suppressed
	[  +5.088108] kauditd_printk_skb: 7 callbacks suppressed
	[  +6.708059] kauditd_printk_skb: 10 callbacks suppressed
	[  +5.467297] kauditd_printk_skb: 59 callbacks suppressed
	[  +5.748897] kauditd_printk_skb: 13 callbacks suppressed
	[  +5.303890] kauditd_printk_skb: 7 callbacks suppressed
	[  +5.582707] kauditd_printk_skb: 1 callbacks suppressed
	[  +6.757078] kauditd_printk_skb: 7 callbacks suppressed
	[  +5.785488] kauditd_printk_skb: 6 callbacks suppressed
	[Nov 4 10:40] kauditd_printk_skb: 7 callbacks suppressed
	[ +10.019674] kauditd_printk_skb: 28 callbacks suppressed
	[  +5.129459] kauditd_printk_skb: 30 callbacks suppressed
	[  +7.001996] kauditd_printk_skb: 5 callbacks suppressed
	[  +5.692326] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.310848] kauditd_printk_skb: 8 callbacks suppressed
	[  +5.316145] kauditd_printk_skb: 21 callbacks suppressed
	[  +9.644215] kauditd_printk_skb: 34 callbacks suppressed
	[Nov 4 10:41] kauditd_printk_skb: 49 callbacks suppressed
	[  +5.950723] kauditd_printk_skb: 2 callbacks suppressed
	[Nov 4 10:42] kauditd_printk_skb: 2 callbacks suppressed
	[  +7.071899] kauditd_printk_skb: 19 callbacks suppressed
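
Note: the kauditd_printk_skb lines mean the kernel suppressed (rate-limited) that many audit messages; they are informational, not errors. If the full kernel ring buffer is needed, it can be read from inside the VM, for example (illustrative):

  minikube -p addons-746456 ssh     # then run: sudo dmesg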
	
	
	==> etcd [9519febde6ea5698d00bda0cef2d9f74a934c6d5e398abf71a162a7bca55abc0] <==
	{"level":"info","ts":"2024-11-04T10:39:20.626626Z","caller":"traceutil/trace.go:171","msg":"trace[751242300] linearizableReadLoop","detail":"{readStateIndex:1050; appliedIndex:1050; }","duration":"277.124131ms","start":"2024-11-04T10:39:20.349486Z","end":"2024-11-04T10:39:20.626610Z","steps":["trace[751242300] 'read index received'  (duration: 277.119255ms)","trace[751242300] 'applied index is now lower than readState.Index'  (duration: 4.14µs)"],"step_count":2}
	{"level":"warn","ts":"2024-11-04T10:39:20.627227Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"269.867607ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/poddisruptionbudgets/\" range_end:\"/registry/poddisruptionbudgets0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-11-04T10:39:20.627267Z","caller":"traceutil/trace.go:171","msg":"trace[1054398833] range","detail":"{range_begin:/registry/poddisruptionbudgets/; range_end:/registry/poddisruptionbudgets0; response_count:0; response_revision:1021; }","duration":"269.909743ms","start":"2024-11-04T10:39:20.357348Z","end":"2024-11-04T10:39:20.627258Z","steps":["trace[1054398833] 'agreement among raft nodes before linearized reading'  (duration: 269.856024ms)"],"step_count":1}
	{"level":"info","ts":"2024-11-04T10:39:41.214046Z","caller":"traceutil/trace.go:171","msg":"trace[17301424] transaction","detail":"{read_only:false; response_revision:1129; number_of_response:1; }","duration":"407.959557ms","start":"2024-11-04T10:39:40.806070Z","end":"2024-11-04T10:39:41.214030Z","steps":["trace[17301424] 'process raft request'  (duration: 407.656159ms)"],"step_count":1}
	{"level":"warn","ts":"2024-11-04T10:39:41.216299Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-11-04T10:39:40.806057Z","time spent":"408.964021ms","remote":"127.0.0.1:44728","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1125 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2024-11-04T10:40:09.320079Z","caller":"traceutil/trace.go:171","msg":"trace[1466107482] transaction","detail":"{read_only:false; response_revision:1316; number_of_response:1; }","duration":"330.002262ms","start":"2024-11-04T10:40:08.990058Z","end":"2024-11-04T10:40:09.320060Z","steps":["trace[1466107482] 'process raft request'  (duration: 329.825308ms)"],"step_count":1}
	{"level":"warn","ts":"2024-11-04T10:40:09.320243Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-11-04T10:40:08.990045Z","time spent":"330.123874ms","remote":"127.0.0.1:44834","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":484,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/snapshot-controller-leader\" mod_revision:1280 > success:<request_put:<key:\"/registry/leases/kube-system/snapshot-controller-leader\" value_size:421 >> failure:<request_range:<key:\"/registry/leases/kube-system/snapshot-controller-leader\" > >"}
	{"level":"info","ts":"2024-11-04T10:40:09.320668Z","caller":"traceutil/trace.go:171","msg":"trace[688125957] linearizableReadLoop","detail":"{readStateIndex:1356; appliedIndex:1356; }","duration":"323.106599ms","start":"2024-11-04T10:40:08.997547Z","end":"2024-11-04T10:40:09.320654Z","steps":["trace[688125957] 'read index received'  (duration: 323.101104ms)","trace[688125957] 'applied index is now lower than readState.Index'  (duration: 4.574µs)"],"step_count":2}
	{"level":"warn","ts":"2024-11-04T10:40:09.320772Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"323.344852ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-11-04T10:40:09.320813Z","caller":"traceutil/trace.go:171","msg":"trace[86850128] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1316; }","duration":"323.393937ms","start":"2024-11-04T10:40:08.997411Z","end":"2024-11-04T10:40:09.320805Z","steps":["trace[86850128] 'agreement among raft nodes before linearized reading'  (duration: 323.323043ms)"],"step_count":1}
	{"level":"warn","ts":"2024-11-04T10:40:09.320841Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-11-04T10:40:08.997371Z","time spent":"323.462652ms","remote":"127.0.0.1:44742","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":27,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2024-11-04T10:40:09.324706Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"172.914071ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/persistentvolumeclaims/default/hpvc\" ","response":"range_response_count:1 size:822"}
	{"level":"info","ts":"2024-11-04T10:40:09.325823Z","caller":"traceutil/trace.go:171","msg":"trace[1506822405] range","detail":"{range_begin:/registry/persistentvolumeclaims/default/hpvc; range_end:; response_count:1; response_revision:1317; }","duration":"174.032171ms","start":"2024-11-04T10:40:09.151778Z","end":"2024-11-04T10:40:09.325810Z","steps":["trace[1506822405] 'agreement among raft nodes before linearized reading'  (duration: 172.847378ms)"],"step_count":1}
	{"level":"warn","ts":"2024-11-04T10:40:09.326215Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"262.579621ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-11-04T10:40:09.326305Z","caller":"traceutil/trace.go:171","msg":"trace[495762556] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1317; }","duration":"262.632406ms","start":"2024-11-04T10:40:09.063621Z","end":"2024-11-04T10:40:09.326254Z","steps":["trace[495762556] 'agreement among raft nodes before linearized reading'  (duration: 260.648697ms)"],"step_count":1}
	{"level":"info","ts":"2024-11-04T10:40:24.499339Z","caller":"traceutil/trace.go:171","msg":"trace[960604166] linearizableReadLoop","detail":"{readStateIndex:1465; appliedIndex:1464; }","duration":"123.868211ms","start":"2024-11-04T10:40:24.375458Z","end":"2024-11-04T10:40:24.499326Z","steps":["trace[960604166] 'read index received'  (duration: 123.686319ms)","trace[960604166] 'applied index is now lower than readState.Index'  (duration: 181.502µs)"],"step_count":2}
	{"level":"warn","ts":"2024-11-04T10:40:24.499534Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"124.059239ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-11-04T10:40:24.499579Z","caller":"traceutil/trace.go:171","msg":"trace[2098967026] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1417; }","duration":"124.118529ms","start":"2024-11-04T10:40:24.375453Z","end":"2024-11-04T10:40:24.499572Z","steps":["trace[2098967026] 'agreement among raft nodes before linearized reading'  (duration: 124.040367ms)"],"step_count":1}
	{"level":"info","ts":"2024-11-04T10:40:24.499625Z","caller":"traceutil/trace.go:171","msg":"trace[1274175943] transaction","detail":"{read_only:false; response_revision:1417; number_of_response:1; }","duration":"149.228373ms","start":"2024-11-04T10:40:24.350384Z","end":"2024-11-04T10:40:24.499612Z","steps":["trace[1274175943] 'process raft request'  (duration: 148.82834ms)"],"step_count":1}
	{"level":"info","ts":"2024-11-04T10:40:37.692221Z","caller":"traceutil/trace.go:171","msg":"trace[887757847] transaction","detail":"{read_only:false; response_revision:1493; number_of_response:1; }","duration":"162.406133ms","start":"2024-11-04T10:40:37.529799Z","end":"2024-11-04T10:40:37.692205Z","steps":["trace[887757847] 'process raft request'  (duration: 162.251205ms)"],"step_count":1}
	{"level":"info","ts":"2024-11-04T10:40:37.766262Z","caller":"traceutil/trace.go:171","msg":"trace[1915448977] transaction","detail":"{read_only:false; response_revision:1494; number_of_response:1; }","duration":"126.900823ms","start":"2024-11-04T10:40:37.639346Z","end":"2024-11-04T10:40:37.766247Z","steps":["trace[1915448977] 'process raft request'  (duration: 120.863274ms)"],"step_count":1}
	{"level":"info","ts":"2024-11-04T10:41:04.121740Z","caller":"traceutil/trace.go:171","msg":"trace[1620520275] linearizableReadLoop","detail":"{readStateIndex:1813; appliedIndex:1812; }","duration":"139.992221ms","start":"2024-11-04T10:41:03.981724Z","end":"2024-11-04T10:41:04.121717Z","steps":["trace[1620520275] 'read index received'  (duration: 139.864148ms)","trace[1620520275] 'applied index is now lower than readState.Index'  (duration: 127.334µs)"],"step_count":2}
	{"level":"warn","ts":"2024-11-04T10:41:04.122110Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"140.367844ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/external-resizer-runner\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-11-04T10:41:04.122146Z","caller":"traceutil/trace.go:171","msg":"trace[1955318676] range","detail":"{range_begin:/registry/clusterroles/external-resizer-runner; range_end:; response_count:0; response_revision:1751; }","duration":"140.41307ms","start":"2024-11-04T10:41:03.981719Z","end":"2024-11-04T10:41:04.122132Z","steps":["trace[1955318676] 'agreement among raft nodes before linearized reading'  (duration: 140.345887ms)"],"step_count":1}
	{"level":"info","ts":"2024-11-04T10:41:04.121921Z","caller":"traceutil/trace.go:171","msg":"trace[195758964] transaction","detail":"{read_only:false; response_revision:1751; number_of_response:1; }","duration":"295.707696ms","start":"2024-11-04T10:41:03.826201Z","end":"2024-11-04T10:41:04.121909Z","steps":["trace[195758964] 'process raft request'  (duration: 295.419514ms)"],"step_count":1}
	
	
	==> kernel <==
	 10:45:48 up 8 min,  0 users,  load average: 0.09, 0.50, 0.38
	Linux addons-746456 5.10.207 #1 SMP Wed Oct 30 13:38:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [a2078670156607445e3f69e0c7d2edf82ea10c4a02877028154c691b079b3e25] <==
	E1104 10:39:57.198734       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.199.111:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.107.199.111:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.107.199.111:443: connect: connection refused" logger="UnhandledError"
	E1104 10:39:57.203177       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.199.111:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.107.199.111:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.107.199.111:443: connect: connection refused" logger="UnhandledError"
	E1104 10:39:57.207996       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.199.111:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.107.199.111:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.107.199.111:443: connect: connection refused" logger="UnhandledError"
	I1104 10:39:57.274726       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1104 10:40:04.752617       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.97.63.69"}
	I1104 10:40:21.924570       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I1104 10:40:22.108458       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.108.199.171"}
	I1104 10:40:26.843205       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W1104 10:40:27.970112       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I1104 10:40:44.357954       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1104 10:40:59.877756       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1104 10:40:59.877792       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1104 10:40:59.904246       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1104 10:40:59.904279       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1104 10:40:59.926706       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1104 10:40:59.926762       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1104 10:40:59.966887       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1104 10:40:59.966992       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1104 10:41:00.019514       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1104 10:41:00.019651       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1104 10:41:00.967256       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W1104 10:41:01.020562       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1104 10:41:01.063092       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	E1104 10:41:04.758097       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I1104 10:42:44.266505       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.101.226.196"}
	
	
	==> kube-controller-manager [b37d172943b0ded3845df617c273978f49a44cb3cbbf8228c8bd37f84ebd8d01] <==
	E1104 10:43:48.344692       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1104 10:43:50.671604       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1104 10:43:50.671653       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1104 10:44:02.394494       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1104 10:44:02.394545       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1104 10:44:13.403577       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1104 10:44:13.403623       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1104 10:44:23.686073       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1104 10:44:23.686125       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1104 10:44:27.294192       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1104 10:44:27.294238       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1104 10:44:58.743792       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1104 10:44:58.743934       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1104 10:45:02.770932       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1104 10:45:02.770992       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1104 10:45:06.657625       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1104 10:45:06.657677       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1104 10:45:13.865388       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1104 10:45:13.865462       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1104 10:45:41.826292       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1104 10:45:41.826341       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1104 10:45:47.377015       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1104 10:45:47.377119       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1104 10:45:47.914198       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1104 10:45:47.914267       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [f6744bb877dd8872eaf6f3be107bfe149f989a2a495d09a2c1969a4438d36e62] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1104 10:38:24.021958       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1104 10:38:24.033132       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.4"]
	E1104 10:38:24.033192       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1104 10:38:24.123673       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1104 10:38:24.123744       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1104 10:38:24.123777       1 server_linux.go:169] "Using iptables Proxier"
	I1104 10:38:24.132353       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1104 10:38:24.132636       1 server.go:483] "Version info" version="v1.31.2"
	I1104 10:38:24.132662       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1104 10:38:24.136027       1 config.go:199] "Starting service config controller"
	I1104 10:38:24.136064       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1104 10:38:24.136116       1 config.go:105] "Starting endpoint slice config controller"
	I1104 10:38:24.136121       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1104 10:38:24.143300       1 config.go:328] "Starting node config controller"
	I1104 10:38:24.143335       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1104 10:38:24.237060       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1104 10:38:24.237138       1 shared_informer.go:320] Caches are synced for service config
	I1104 10:38:24.244934       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [e248e3297d1fda4a00b162a352356438ee94390c14eb5308505a4e49043096b5] <==
	W1104 10:38:15.180272       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1104 10:38:15.180316       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1104 10:38:15.195094       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1104 10:38:15.195211       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1104 10:38:15.229202       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1104 10:38:15.229247       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1104 10:38:15.269472       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1104 10:38:15.269508       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1104 10:38:15.298741       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1104 10:38:15.298920       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1104 10:38:15.381712       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1104 10:38:15.381760       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1104 10:38:15.391842       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1104 10:38:15.391950       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1104 10:38:15.402986       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1104 10:38:15.403119       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1104 10:38:15.429914       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1104 10:38:15.430050       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1104 10:38:15.495323       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1104 10:38:15.495450       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1104 10:38:15.562639       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1104 10:38:15.562734       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1104 10:38:15.603526       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1104 10:38:15.603571       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1104 10:38:17.752620       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 04 10:44:27 addons-746456 kubelet[1190]: E1104 10:44:27.816664    1190 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730717067816299912,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:603350,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 04 10:44:27 addons-746456 kubelet[1190]: E1104 10:44:27.816701    1190 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730717067816299912,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:603350,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 04 10:44:37 addons-746456 kubelet[1190]: E1104 10:44:37.818977    1190 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730717077818381247,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:603350,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 04 10:44:37 addons-746456 kubelet[1190]: E1104 10:44:37.819045    1190 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730717077818381247,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:603350,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 04 10:44:40 addons-746456 kubelet[1190]: I1104 10:44:40.955620    1190 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Nov 04 10:44:47 addons-746456 kubelet[1190]: E1104 10:44:47.821638    1190 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730717087821229840,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:603350,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 04 10:44:47 addons-746456 kubelet[1190]: E1104 10:44:47.821932    1190 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730717087821229840,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:603350,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 04 10:44:48 addons-746456 kubelet[1190]: I1104 10:44:48.955771    1190 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-g59mv" secret="" err="secret \"gcp-auth\" not found"
	Nov 04 10:44:57 addons-746456 kubelet[1190]: E1104 10:44:57.824290    1190 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730717097823779276,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:603350,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 04 10:44:57 addons-746456 kubelet[1190]: E1104 10:44:57.824764    1190 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730717097823779276,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:603350,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 04 10:45:07 addons-746456 kubelet[1190]: E1104 10:45:07.827984    1190 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730717107827381446,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:603350,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 04 10:45:07 addons-746456 kubelet[1190]: E1104 10:45:07.828301    1190 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730717107827381446,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:603350,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 04 10:45:16 addons-746456 kubelet[1190]: E1104 10:45:16.968990    1190 iptables.go:577] "Could not set up iptables canary" err=<
	Nov 04 10:45:16 addons-746456 kubelet[1190]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Nov 04 10:45:16 addons-746456 kubelet[1190]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 04 10:45:16 addons-746456 kubelet[1190]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 04 10:45:16 addons-746456 kubelet[1190]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Nov 04 10:45:17 addons-746456 kubelet[1190]: E1104 10:45:17.833258    1190 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730717117832773979,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:603350,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 04 10:45:17 addons-746456 kubelet[1190]: E1104 10:45:17.833283    1190 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730717117832773979,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:603350,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 04 10:45:27 addons-746456 kubelet[1190]: E1104 10:45:27.835773    1190 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730717127835326008,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:603350,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 04 10:45:27 addons-746456 kubelet[1190]: E1104 10:45:27.835817    1190 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730717127835326008,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:603350,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 04 10:45:37 addons-746456 kubelet[1190]: E1104 10:45:37.838482    1190 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730717137838024812,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:603350,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 04 10:45:37 addons-746456 kubelet[1190]: E1104 10:45:37.838819    1190 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730717137838024812,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:603350,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 04 10:45:47 addons-746456 kubelet[1190]: E1104 10:45:47.841123    1190 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730717147840749949,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:603350,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 04 10:45:47 addons-746456 kubelet[1190]: E1104 10:45:47.841167    1190 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730717147840749949,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:603350,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [937a061c836e4f55dcbe4ded8cfc61ace0b16d090889344de6647c05a5621b3c] <==
	I1104 10:38:28.574713       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1104 10:38:28.590316       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1104 10:38:28.590379       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1104 10:38:28.610704       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1104 10:38:28.611513       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-746456_38e99173-d61b-4158-83c8-1b141f1705e4!
	I1104 10:38:28.612014       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"288eb3de-b2e3-4aa2-a502-19d22fabbb8b", APIVersion:"v1", ResourceVersion:"644", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-746456_38e99173-d61b-4158-83c8-1b141f1705e4 became leader
	I1104 10:38:28.711788       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-746456_38e99173-d61b-4158-83c8-1b141f1705e4!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-746456 -n addons-746456
helpers_test.go:261: (dbg) Run:  kubectl --context addons-746456 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-746456 addons disable metrics-server --alsologtostderr -v=1
--- FAIL: TestAddons/parallel/MetricsServer (346.17s)
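The kube-apiserver log above shows the aggregated metrics API (v1beta1.metrics.k8s.io backed by 10.107.199.111:443) repeatedly failing with "connection refused", which is consistent with the metrics-server addon never becoming available within the test's timeout. A minimal manual check outside the test harness might look like the following; the k8s-app=metrics-server label selector is an assumption about the addon's manifest, not something taken from this report:
	kubectl --context addons-746456 get apiservice v1beta1.metrics.k8s.io            # is the aggregated API reported Available?
	kubectl --context addons-746456 -n kube-system get pods -l k8s-app=metrics-server  # assumed label selector for the addon pod
	kubectl --context addons-746456 top nodes                                         # fails until metrics-server is actually serving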

                                                
                                    
TestAddons/StoppedEnableDisable (154.22s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-746456
addons_test.go:170: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p addons-746456: exit status 82 (2m0.459078304s)

                                                
                                                
-- stdout --
	* Stopping node "addons-746456"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:172: failed to stop minikube. args "out/minikube-linux-amd64 stop -p addons-746456" : exit status 82
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-746456
addons_test.go:174: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-746456: exit status 11 (21.477114594s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.4:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:176: failed to enable dashboard addon: args "out/minikube-linux-amd64 addons enable dashboard -p addons-746456" : exit status 11
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-746456
addons_test.go:178: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-746456: exit status 11 (6.143400592s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.4:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_7b2045b3edf32de99b3c34afdc43bfaabe8aa3c2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:180: failed to disable dashboard addon: args "out/minikube-linux-amd64 addons disable dashboard -p addons-746456" : exit status 11
addons_test.go:183: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-746456
addons_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable gvisor -p addons-746456: exit status 11 (6.143385337s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.4:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_8dd43b2cee45a94e37dbac1dd983966d1c97e7d4_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:185: failed to disable non-enabled addon: args "out/minikube-linux-amd64 addons disable gvisor -p addons-746456" : exit status 11
--- FAIL: TestAddons/StoppedEnableDisable (154.22s)
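Here the stop itself timed out (exit status 82, GUEST_STOP_TIMEOUT, with the VM still reported "Running"), and every follow-up addons command then failed to reach 192.168.39.4:22 with "no route to host", so the guest appears stuck between running and shut off. A hedged sketch of how one might inspect and clean up such a stuck KVM profile by hand, assuming the libvirt domain is named after the minikube profile as the kvm2 driver normally does:
	sudo virsh list --all | grep addons-746456                          # confirm the state libvirt reports for the domain
	out/minikube-linux-amd64 -p addons-746456 logs --file=logs.txt      # collect the logs the error message asks for
	sudo virsh destroy addons-746456                                    # force the domain off if it never stops on its own
	out/minikube-linux-amd64 delete -p addons-746456                    # remove the stuck profile afterwards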

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (149.56s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-931571 node stop m02 -v=7 --alsologtostderr
E1104 10:56:53.659044   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/functional-762465/client.crt: no such file or directory" logger="UnhandledError"
E1104 10:57:14.141164   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/functional-762465/client.crt: no such file or directory" logger="UnhandledError"
E1104 10:57:55.102907   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/functional-762465/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:365: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-931571 node stop m02 -v=7 --alsologtostderr: exit status 30 (2m0.472478397s)

                                                
                                                
-- stdout --
	* Stopping node "ha-931571-m02"  ...

                                                
                                                
-- /stdout --
** stderr ** 
	I1104 10:56:46.214613   41763 out.go:345] Setting OutFile to fd 1 ...
	I1104 10:56:46.214763   41763 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1104 10:56:46.214773   41763 out.go:358] Setting ErrFile to fd 2...
	I1104 10:56:46.214777   41763 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1104 10:56:46.214962   41763 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19906-19898/.minikube/bin
	I1104 10:56:46.215220   41763 mustload.go:65] Loading cluster: ha-931571
	I1104 10:56:46.215651   41763 config.go:182] Loaded profile config "ha-931571": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1104 10:56:46.215667   41763 stop.go:39] StopHost: ha-931571-m02
	I1104 10:56:46.216045   41763 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 10:56:46.216091   41763 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 10:56:46.231998   41763 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33091
	I1104 10:56:46.232517   41763 main.go:141] libmachine: () Calling .GetVersion
	I1104 10:56:46.233020   41763 main.go:141] libmachine: Using API Version  1
	I1104 10:56:46.233041   41763 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 10:56:46.233410   41763 main.go:141] libmachine: () Calling .GetMachineName
	I1104 10:56:46.235758   41763 out.go:177] * Stopping node "ha-931571-m02"  ...
	I1104 10:56:46.237026   41763 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1104 10:56:46.237067   41763 main.go:141] libmachine: (ha-931571-m02) Calling .DriverName
	I1104 10:56:46.237289   41763 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1104 10:56:46.237325   41763 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHHostname
	I1104 10:56:46.240294   41763 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:56:46.240698   41763 main.go:141] libmachine: (ha-931571-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:86:6b", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:53:18 +0000 UTC Type:0 Mac:52:54:00:5c:86:6b Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-931571-m02 Clientid:01:52:54:00:5c:86:6b}
	I1104 10:56:46.240729   41763 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:56:46.240889   41763 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHPort
	I1104 10:56:46.241043   41763 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHKeyPath
	I1104 10:56:46.241171   41763 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHUsername
	I1104 10:56:46.241317   41763 sshutil.go:53] new ssh client: &{IP:192.168.39.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571-m02/id_rsa Username:docker}
	I1104 10:56:46.335772   41763 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1104 10:56:46.388226   41763 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1104 10:56:46.441924   41763 main.go:141] libmachine: Stopping "ha-931571-m02"...
	I1104 10:56:46.441987   41763 main.go:141] libmachine: (ha-931571-m02) Calling .GetState
	I1104 10:56:46.443917   41763 main.go:141] libmachine: (ha-931571-m02) Calling .Stop
	I1104 10:56:46.448014   41763 main.go:141] libmachine: (ha-931571-m02) Waiting for machine to stop 0/120
	I1104 10:56:47.449428   41763 main.go:141] libmachine: (ha-931571-m02) Waiting for machine to stop 1/120
	I1104 10:56:48.451655   41763 main.go:141] libmachine: (ha-931571-m02) Waiting for machine to stop 2/120
	I1104 10:56:49.453006   41763 main.go:141] libmachine: (ha-931571-m02) Waiting for machine to stop 3/120
	I1104 10:56:50.454568   41763 main.go:141] libmachine: (ha-931571-m02) Waiting for machine to stop 4/120
	I1104 10:56:51.456420   41763 main.go:141] libmachine: (ha-931571-m02) Waiting for machine to stop 5/120
	I1104 10:56:52.457867   41763 main.go:141] libmachine: (ha-931571-m02) Waiting for machine to stop 6/120
	I1104 10:56:53.459541   41763 main.go:141] libmachine: (ha-931571-m02) Waiting for machine to stop 7/120
	I1104 10:56:54.460803   41763 main.go:141] libmachine: (ha-931571-m02) Waiting for machine to stop 8/120
	I1104 10:56:55.462197   41763 main.go:141] libmachine: (ha-931571-m02) Waiting for machine to stop 9/120
	I1104 10:56:56.464410   41763 main.go:141] libmachine: (ha-931571-m02) Waiting for machine to stop 10/120
	I1104 10:56:57.465591   41763 main.go:141] libmachine: (ha-931571-m02) Waiting for machine to stop 11/120
	I1104 10:56:58.467858   41763 main.go:141] libmachine: (ha-931571-m02) Waiting for machine to stop 12/120
	I1104 10:56:59.468995   41763 main.go:141] libmachine: (ha-931571-m02) Waiting for machine to stop 13/120
	I1104 10:57:00.470396   41763 main.go:141] libmachine: (ha-931571-m02) Waiting for machine to stop 14/120
	I1104 10:57:01.472332   41763 main.go:141] libmachine: (ha-931571-m02) Waiting for machine to stop 15/120
	I1104 10:57:02.473653   41763 main.go:141] libmachine: (ha-931571-m02) Waiting for machine to stop 16/120
	I1104 10:57:03.475599   41763 main.go:141] libmachine: (ha-931571-m02) Waiting for machine to stop 17/120
	I1104 10:57:04.476882   41763 main.go:141] libmachine: (ha-931571-m02) Waiting for machine to stop 18/120
	I1104 10:57:05.478308   41763 main.go:141] libmachine: (ha-931571-m02) Waiting for machine to stop 19/120
	I1104 10:57:06.480499   41763 main.go:141] libmachine: (ha-931571-m02) Waiting for machine to stop 20/120
	I1104 10:57:07.481912   41763 main.go:141] libmachine: (ha-931571-m02) Waiting for machine to stop 21/120
	I1104 10:57:08.483608   41763 main.go:141] libmachine: (ha-931571-m02) Waiting for machine to stop 22/120
	I1104 10:57:09.484964   41763 main.go:141] libmachine: (ha-931571-m02) Waiting for machine to stop 23/120
	I1104 10:57:10.486155   41763 main.go:141] libmachine: (ha-931571-m02) Waiting for machine to stop 24/120
	I1104 10:57:11.487930   41763 main.go:141] libmachine: (ha-931571-m02) Waiting for machine to stop 25/120
	I1104 10:57:12.490197   41763 main.go:141] libmachine: (ha-931571-m02) Waiting for machine to stop 26/120
	I1104 10:57:13.491524   41763 main.go:141] libmachine: (ha-931571-m02) Waiting for machine to stop 27/120
	I1104 10:57:14.492820   41763 main.go:141] libmachine: (ha-931571-m02) Waiting for machine to stop 28/120
	I1104 10:57:15.494037   41763 main.go:141] libmachine: (ha-931571-m02) Waiting for machine to stop 29/120
	I1104 10:57:16.496495   41763 main.go:141] libmachine: (ha-931571-m02) Waiting for machine to stop 30/120
	I1104 10:57:17.497864   41763 main.go:141] libmachine: (ha-931571-m02) Waiting for machine to stop 31/120
	I1104 10:57:18.499697   41763 main.go:141] libmachine: (ha-931571-m02) Waiting for machine to stop 32/120
	I1104 10:57:19.501824   41763 main.go:141] libmachine: (ha-931571-m02) Waiting for machine to stop 33/120
	I1104 10:57:20.503689   41763 main.go:141] libmachine: (ha-931571-m02) Waiting for machine to stop 34/120
	I1104 10:57:21.505792   41763 main.go:141] libmachine: (ha-931571-m02) Waiting for machine to stop 35/120
	I1104 10:57:22.507713   41763 main.go:141] libmachine: (ha-931571-m02) Waiting for machine to stop 36/120
	I1104 10:57:23.508896   41763 main.go:141] libmachine: (ha-931571-m02) Waiting for machine to stop 37/120
	I1104 10:57:24.510225   41763 main.go:141] libmachine: (ha-931571-m02) Waiting for machine to stop 38/120
	I1104 10:57:25.511928   41763 main.go:141] libmachine: (ha-931571-m02) Waiting for machine to stop 39/120
	I1104 10:57:26.513649   41763 main.go:141] libmachine: (ha-931571-m02) Waiting for machine to stop 40/120
	I1104 10:57:27.515581   41763 main.go:141] libmachine: (ha-931571-m02) Waiting for machine to stop 41/120
	I1104 10:57:28.517862   41763 main.go:141] libmachine: (ha-931571-m02) Waiting for machine to stop 42/120
	I1104 10:57:29.519558   41763 main.go:141] libmachine: (ha-931571-m02) Waiting for machine to stop 43/120
	I1104 10:57:30.520774   41763 main.go:141] libmachine: (ha-931571-m02) Waiting for machine to stop 44/120
	I1104 10:57:31.522563   41763 main.go:141] libmachine: (ha-931571-m02) Waiting for machine to stop 45/120
	I1104 10:57:32.523810   41763 main.go:141] libmachine: (ha-931571-m02) Waiting for machine to stop 46/120
	I1104 10:57:33.525132   41763 main.go:141] libmachine: (ha-931571-m02) Waiting for machine to stop 47/120
	I1104 10:57:34.526662   41763 main.go:141] libmachine: (ha-931571-m02) Waiting for machine to stop 48/120
	I1104 10:57:35.527919   41763 main.go:141] libmachine: (ha-931571-m02) Waiting for machine to stop 49/120
	I1104 10:57:36.529939   41763 main.go:141] libmachine: (ha-931571-m02) Waiting for machine to stop 50/120
	I1104 10:57:37.531633   41763 main.go:141] libmachine: (ha-931571-m02) Waiting for machine to stop 51/120
	I1104 10:57:38.533004   41763 main.go:141] libmachine: (ha-931571-m02) Waiting for machine to stop 52/120
	I1104 10:57:39.534457   41763 main.go:141] libmachine: (ha-931571-m02) Waiting for machine to stop 53/120
	I1104 10:57:40.536048   41763 main.go:141] libmachine: (ha-931571-m02) Waiting for machine to stop 54/120
	I1104 10:57:41.537912   41763 main.go:141] libmachine: (ha-931571-m02) Waiting for machine to stop 55/120
	I1104 10:57:42.539705   41763 main.go:141] libmachine: (ha-931571-m02) Waiting for machine to stop 56/120
	I1104 10:57:43.541397   41763 main.go:141] libmachine: (ha-931571-m02) Waiting for machine to stop 57/120
	I1104 10:57:44.543818   41763 main.go:141] libmachine: (ha-931571-m02) Waiting for machine to stop 58/120
	I1104 10:57:45.545377   41763 main.go:141] libmachine: (ha-931571-m02) Waiting for machine to stop 59/120
	I1104 10:57:46.547458   41763 main.go:141] libmachine: (ha-931571-m02) Waiting for machine to stop 60/120
	I1104 10:57:47.548932   41763 main.go:141] libmachine: (ha-931571-m02) Waiting for machine to stop 61/120
	I1104 10:57:48.550378   41763 main.go:141] libmachine: (ha-931571-m02) Waiting for machine to stop 62/120
	I1104 10:57:49.551934   41763 main.go:141] libmachine: (ha-931571-m02) Waiting for machine to stop 63/120
	I1104 10:57:50.553052   41763 main.go:141] libmachine: (ha-931571-m02) Waiting for machine to stop 64/120
	I1104 10:57:51.554876   41763 main.go:141] libmachine: (ha-931571-m02) Waiting for machine to stop 65/120
	I1104 10:57:52.556206   41763 main.go:141] libmachine: (ha-931571-m02) Waiting for machine to stop 66/120
	I1104 10:57:53.558616   41763 main.go:141] libmachine: (ha-931571-m02) Waiting for machine to stop 67/120
	I1104 10:57:54.560079   41763 main.go:141] libmachine: (ha-931571-m02) Waiting for machine to stop 68/120
	I1104 10:57:55.561458   41763 main.go:141] libmachine: (ha-931571-m02) Waiting for machine to stop 69/120
	I1104 10:57:56.563462   41763 main.go:141] libmachine: (ha-931571-m02) Waiting for machine to stop 70/120
	I1104 10:57:57.564753   41763 main.go:141] libmachine: (ha-931571-m02) Waiting for machine to stop 71/120
	I1104 10:57:58.566322   41763 main.go:141] libmachine: (ha-931571-m02) Waiting for machine to stop 72/120
	I1104 10:57:59.567889   41763 main.go:141] libmachine: (ha-931571-m02) Waiting for machine to stop 73/120
	I1104 10:58:00.569208   41763 main.go:141] libmachine: (ha-931571-m02) Waiting for machine to stop 74/120
	I1104 10:58:01.571060   41763 main.go:141] libmachine: (ha-931571-m02) Waiting for machine to stop 75/120
	I1104 10:58:02.572264   41763 main.go:141] libmachine: (ha-931571-m02) Waiting for machine to stop 76/120
	I1104 10:58:03.573637   41763 main.go:141] libmachine: (ha-931571-m02) Waiting for machine to stop 77/120
	I1104 10:58:04.575739   41763 main.go:141] libmachine: (ha-931571-m02) Waiting for machine to stop 78/120
	I1104 10:58:05.577194   41763 main.go:141] libmachine: (ha-931571-m02) Waiting for machine to stop 79/120
	I1104 10:58:06.579132   41763 main.go:141] libmachine: (ha-931571-m02) Waiting for machine to stop 80/120
	I1104 10:58:07.580555   41763 main.go:141] libmachine: (ha-931571-m02) Waiting for machine to stop 81/120
	I1104 10:58:08.582087   41763 main.go:141] libmachine: (ha-931571-m02) Waiting for machine to stop 82/120
	I1104 10:58:09.583445   41763 main.go:141] libmachine: (ha-931571-m02) Waiting for machine to stop 83/120
	I1104 10:58:10.584864   41763 main.go:141] libmachine: (ha-931571-m02) Waiting for machine to stop 84/120
	I1104 10:58:11.586949   41763 main.go:141] libmachine: (ha-931571-m02) Waiting for machine to stop 85/120
	I1104 10:58:12.588461   41763 main.go:141] libmachine: (ha-931571-m02) Waiting for machine to stop 86/120
	I1104 10:58:13.590059   41763 main.go:141] libmachine: (ha-931571-m02) Waiting for machine to stop 87/120
	I1104 10:58:14.591359   41763 main.go:141] libmachine: (ha-931571-m02) Waiting for machine to stop 88/120
	I1104 10:58:15.592741   41763 main.go:141] libmachine: (ha-931571-m02) Waiting for machine to stop 89/120
	I1104 10:58:16.594849   41763 main.go:141] libmachine: (ha-931571-m02) Waiting for machine to stop 90/120
	I1104 10:58:17.597156   41763 main.go:141] libmachine: (ha-931571-m02) Waiting for machine to stop 91/120
	I1104 10:58:18.598730   41763 main.go:141] libmachine: (ha-931571-m02) Waiting for machine to stop 92/120
	I1104 10:58:19.599984   41763 main.go:141] libmachine: (ha-931571-m02) Waiting for machine to stop 93/120
	I1104 10:58:20.601561   41763 main.go:141] libmachine: (ha-931571-m02) Waiting for machine to stop 94/120
	I1104 10:58:21.603028   41763 main.go:141] libmachine: (ha-931571-m02) Waiting for machine to stop 95/120
	I1104 10:58:22.604560   41763 main.go:141] libmachine: (ha-931571-m02) Waiting for machine to stop 96/120
	I1104 10:58:23.605950   41763 main.go:141] libmachine: (ha-931571-m02) Waiting for machine to stop 97/120
	I1104 10:58:24.607811   41763 main.go:141] libmachine: (ha-931571-m02) Waiting for machine to stop 98/120
	I1104 10:58:25.609209   41763 main.go:141] libmachine: (ha-931571-m02) Waiting for machine to stop 99/120
	I1104 10:58:26.611212   41763 main.go:141] libmachine: (ha-931571-m02) Waiting for machine to stop 100/120
	I1104 10:58:27.612941   41763 main.go:141] libmachine: (ha-931571-m02) Waiting for machine to stop 101/120
	I1104 10:58:28.614333   41763 main.go:141] libmachine: (ha-931571-m02) Waiting for machine to stop 102/120
	I1104 10:58:29.615772   41763 main.go:141] libmachine: (ha-931571-m02) Waiting for machine to stop 103/120
	I1104 10:58:30.617365   41763 main.go:141] libmachine: (ha-931571-m02) Waiting for machine to stop 104/120
	I1104 10:58:31.618901   41763 main.go:141] libmachine: (ha-931571-m02) Waiting for machine to stop 105/120
	I1104 10:58:32.620109   41763 main.go:141] libmachine: (ha-931571-m02) Waiting for machine to stop 106/120
	I1104 10:58:33.621449   41763 main.go:141] libmachine: (ha-931571-m02) Waiting for machine to stop 107/120
	I1104 10:58:34.623712   41763 main.go:141] libmachine: (ha-931571-m02) Waiting for machine to stop 108/120
	I1104 10:58:35.624939   41763 main.go:141] libmachine: (ha-931571-m02) Waiting for machine to stop 109/120
	I1104 10:58:36.627336   41763 main.go:141] libmachine: (ha-931571-m02) Waiting for machine to stop 110/120
	I1104 10:58:37.628714   41763 main.go:141] libmachine: (ha-931571-m02) Waiting for machine to stop 111/120
	I1104 10:58:38.630177   41763 main.go:141] libmachine: (ha-931571-m02) Waiting for machine to stop 112/120
	I1104 10:58:39.631789   41763 main.go:141] libmachine: (ha-931571-m02) Waiting for machine to stop 113/120
	I1104 10:58:40.633238   41763 main.go:141] libmachine: (ha-931571-m02) Waiting for machine to stop 114/120
	I1104 10:58:41.635190   41763 main.go:141] libmachine: (ha-931571-m02) Waiting for machine to stop 115/120
	I1104 10:58:42.636814   41763 main.go:141] libmachine: (ha-931571-m02) Waiting for machine to stop 116/120
	I1104 10:58:43.638232   41763 main.go:141] libmachine: (ha-931571-m02) Waiting for machine to stop 117/120
	I1104 10:58:44.640062   41763 main.go:141] libmachine: (ha-931571-m02) Waiting for machine to stop 118/120
	I1104 10:58:45.641579   41763 main.go:141] libmachine: (ha-931571-m02) Waiting for machine to stop 119/120
	I1104 10:58:46.642715   41763 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1104 10:58:46.642935   41763 out.go:270] X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"
	X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"

                                                
                                                
** /stderr **
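The stderr above ends with the driver giving up after a bounded poll: the VM state is checked roughly once per second for 120 iterations and the stop fails if the machine is still Running. A minimal sketch of that kind of wait loop (hypothetical names, not the actual libmachine code):

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// vmState is a stand-in for the state string the kvm2 driver reports.
type vmState string

const running vmState = "Running"

// waitForStop polls getState up to maxTries times, sleeping interval
// between checks, and fails if the machine never leaves "Running".
func waitForStop(getState func() vmState, maxTries int, interval time.Duration) error {
	for i := 0; i < maxTries; i++ {
		if getState() != running {
			return nil
		}
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, maxTries)
		time.Sleep(interval)
	}
	return errors.New(`unable to stop vm, current state "Running"`)
}

func main() {
	// A fake VM that never stops, mirroring the failure logged above.
	err := waitForStop(func() vmState { return running }, 5, 10*time.Millisecond)
	fmt.Println("stop err:", err)
}
```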
ha_test.go:367: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-931571 node stop m02 -v=7 --alsologtostderr": exit status 30
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-931571 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Done: out/minikube-linux-amd64 -p ha-931571 status -v=7 --alsologtostderr: (26.943460386s)
ha_test.go:377: status says not all three control-plane nodes are present: args "out/minikube-linux-amd64 -p ha-931571 status -v=7 --alsologtostderr": 
ha_test.go:380: status says not three hosts are running: args "out/minikube-linux-amd64 -p ha-931571 status -v=7 --alsologtostderr": 
ha_test.go:383: status says not three kubelets are running: args "out/minikube-linux-amd64 -p ha-931571 status -v=7 --alsologtostderr": 
ha_test.go:386: status says not two apiservers are running: args "out/minikube-linux-amd64 -p ha-931571 status -v=7 --alsologtostderr": 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-931571 -n ha-931571
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-931571 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-931571 logs -n 25: (1.292937325s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-931571 cp ha-931571-m03:/home/docker/cp-test.txt                              | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile2369318263/001/cp-test_ha-931571-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-931571 ssh -n                                                                 | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | ha-931571-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-931571 cp ha-931571-m03:/home/docker/cp-test.txt                              | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | ha-931571:/home/docker/cp-test_ha-931571-m03_ha-931571.txt                       |           |         |         |                     |                     |
	| ssh     | ha-931571 ssh -n                                                                 | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | ha-931571-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-931571 ssh -n ha-931571 sudo cat                                              | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | /home/docker/cp-test_ha-931571-m03_ha-931571.txt                                 |           |         |         |                     |                     |
	| cp      | ha-931571 cp ha-931571-m03:/home/docker/cp-test.txt                              | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | ha-931571-m02:/home/docker/cp-test_ha-931571-m03_ha-931571-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-931571 ssh -n                                                                 | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | ha-931571-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-931571 ssh -n ha-931571-m02 sudo cat                                          | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | /home/docker/cp-test_ha-931571-m03_ha-931571-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-931571 cp ha-931571-m03:/home/docker/cp-test.txt                              | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | ha-931571-m04:/home/docker/cp-test_ha-931571-m03_ha-931571-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-931571 ssh -n                                                                 | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | ha-931571-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-931571 ssh -n ha-931571-m04 sudo cat                                          | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | /home/docker/cp-test_ha-931571-m03_ha-931571-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-931571 cp testdata/cp-test.txt                                                | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | ha-931571-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-931571 ssh -n                                                                 | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | ha-931571-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-931571 cp ha-931571-m04:/home/docker/cp-test.txt                              | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile2369318263/001/cp-test_ha-931571-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-931571 ssh -n                                                                 | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | ha-931571-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-931571 cp ha-931571-m04:/home/docker/cp-test.txt                              | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | ha-931571:/home/docker/cp-test_ha-931571-m04_ha-931571.txt                       |           |         |         |                     |                     |
	| ssh     | ha-931571 ssh -n                                                                 | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | ha-931571-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-931571 ssh -n ha-931571 sudo cat                                              | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | /home/docker/cp-test_ha-931571-m04_ha-931571.txt                                 |           |         |         |                     |                     |
	| cp      | ha-931571 cp ha-931571-m04:/home/docker/cp-test.txt                              | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | ha-931571-m02:/home/docker/cp-test_ha-931571-m04_ha-931571-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-931571 ssh -n                                                                 | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | ha-931571-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-931571 ssh -n ha-931571-m02 sudo cat                                          | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | /home/docker/cp-test_ha-931571-m04_ha-931571-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-931571 cp ha-931571-m04:/home/docker/cp-test.txt                              | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | ha-931571-m03:/home/docker/cp-test_ha-931571-m04_ha-931571-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-931571 ssh -n                                                                 | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | ha-931571-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-931571 ssh -n ha-931571-m03 sudo cat                                          | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | /home/docker/cp-test_ha-931571-m04_ha-931571-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-931571 node stop m02 -v=7                                                     | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
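The cp/ssh pairs in the audit table follow one pattern: push a file into a node with `minikube cp`, then read it back with `minikube ssh -n <node> sudo cat` to verify the copy. A hedged sketch of driving that round trip from Go with os/exec (binary path, profile and node names are taken from the table above):

```go
package main

import (
	"fmt"
	"os/exec"
)

// run invokes the same binary the test harness uses.
func run(args ...string) ([]byte, error) {
	return exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
}

func main() {
	// Copy a local test file into the m04 node.
	if out, err := run("-p", "ha-931571", "cp", "testdata/cp-test.txt",
		"ha-931571-m04:/home/docker/cp-test.txt"); err != nil {
		fmt.Printf("cp failed: %v\n%s", err, out)
		return
	}
	// Read it back over ssh to confirm the contents landed.
	out, err := run("-p", "ha-931571", "ssh", "-n", "ha-931571-m04",
		"sudo cat /home/docker/cp-test.txt")
	if err != nil {
		fmt.Printf("ssh failed: %v\n%s", err, out)
		return
	}
	fmt.Printf("round-tripped contents:\n%s", out)
}
```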
	
	
	==> Last Start <==
	Log file created at: 2024/11/04 10:52:21
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
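Every entry below follows the format documented in that header line. A small sketch of a parser for it, assuming exactly the layout stated above:

```go
package main

import (
	"fmt"
	"regexp"
)

// klogLine matches: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
var klogLine = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+) ([^:]+):(\d+)\] (.*)$`)

func main() {
	sample := "I1104 10:52:21.364935   37715 out.go:345] Setting OutFile to fd 1 ..."
	m := klogLine.FindStringSubmatch(sample)
	if m == nil {
		fmt.Println("no match")
		return
	}
	fmt.Printf("level=%s date=%s time=%s tid=%s source=%s:%s msg=%q\n",
		m[1], m[2], m[3], m[4], m[5], m[6], m[7])
}
```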
	I1104 10:52:21.364935   37715 out.go:345] Setting OutFile to fd 1 ...
	I1104 10:52:21.365025   37715 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1104 10:52:21.365032   37715 out.go:358] Setting ErrFile to fd 2...
	I1104 10:52:21.365036   37715 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1104 10:52:21.365213   37715 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19906-19898/.minikube/bin
	I1104 10:52:21.365784   37715 out.go:352] Setting JSON to false
	I1104 10:52:21.366601   37715 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":5692,"bootTime":1730711849,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1104 10:52:21.366686   37715 start.go:139] virtualization: kvm guest
	I1104 10:52:21.368805   37715 out.go:177] * [ha-931571] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1104 10:52:21.370048   37715 out.go:177]   - MINIKUBE_LOCATION=19906
	I1104 10:52:21.370105   37715 notify.go:220] Checking for updates...
	I1104 10:52:21.372521   37715 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1104 10:52:21.373968   37715 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19906-19898/kubeconfig
	I1104 10:52:21.375378   37715 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19906-19898/.minikube
	I1104 10:52:21.376837   37715 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1104 10:52:21.378230   37715 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1104 10:52:21.379614   37715 driver.go:394] Setting default libvirt URI to qemu:///system
	I1104 10:52:21.414672   37715 out.go:177] * Using the kvm2 driver based on user configuration
	I1104 10:52:21.416078   37715 start.go:297] selected driver: kvm2
	I1104 10:52:21.416092   37715 start.go:901] validating driver "kvm2" against <nil>
	I1104 10:52:21.416103   37715 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1104 10:52:21.416883   37715 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1104 10:52:21.416970   37715 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19906-19898/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1104 10:52:21.432886   37715 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1104 10:52:21.432946   37715 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1104 10:52:21.433171   37715 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1104 10:52:21.433208   37715 cni.go:84] Creating CNI manager for ""
	I1104 10:52:21.433267   37715 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1104 10:52:21.433278   37715 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1104 10:52:21.433324   37715 start.go:340] cluster config:
	{Name:ha-931571 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-931571 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1104 10:52:21.433412   37715 iso.go:125] acquiring lock: {Name:mk00c7dd6e02d348844f079f7574057e15cae010 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1104 10:52:21.435216   37715 out.go:177] * Starting "ha-931571" primary control-plane node in "ha-931571" cluster
	I1104 10:52:21.436574   37715 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1104 10:52:21.436609   37715 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1104 10:52:21.436618   37715 cache.go:56] Caching tarball of preloaded images
	I1104 10:52:21.436693   37715 preload.go:172] Found /home/jenkins/minikube-integration/19906-19898/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1104 10:52:21.436705   37715 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1104 10:52:21.436992   37715 profile.go:143] Saving config to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/config.json ...
	I1104 10:52:21.437018   37715 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/config.json: {Name:mke118782614f4d89fa0f6507dfdc64c536a0e87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 10:52:21.437163   37715 start.go:360] acquireMachinesLock for ha-931571: {Name:mka4ed965095cfb58c9b0f5916edff39b4236a2f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1104 10:52:21.437221   37715 start.go:364] duration metric: took 42.218µs to acquireMachinesLock for "ha-931571"
	I1104 10:52:21.437267   37715 start.go:93] Provisioning new machine with config: &{Name:ha-931571 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-931571 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1104 10:52:21.437337   37715 start.go:125] createHost starting for "" (driver="kvm2")
	I1104 10:52:21.438936   37715 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1104 10:52:21.439063   37715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 10:52:21.439107   37715 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 10:52:21.453699   37715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36911
	I1104 10:52:21.454132   37715 main.go:141] libmachine: () Calling .GetVersion
	I1104 10:52:21.454653   37715 main.go:141] libmachine: Using API Version  1
	I1104 10:52:21.454675   37715 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 10:52:21.455002   37715 main.go:141] libmachine: () Calling .GetMachineName
	I1104 10:52:21.455150   37715 main.go:141] libmachine: (ha-931571) Calling .GetMachineName
	I1104 10:52:21.455275   37715 main.go:141] libmachine: (ha-931571) Calling .DriverName
	I1104 10:52:21.455438   37715 start.go:159] libmachine.API.Create for "ha-931571" (driver="kvm2")
	I1104 10:52:21.455470   37715 client.go:168] LocalClient.Create starting
	I1104 10:52:21.455500   37715 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem
	I1104 10:52:21.455528   37715 main.go:141] libmachine: Decoding PEM data...
	I1104 10:52:21.455541   37715 main.go:141] libmachine: Parsing certificate...
	I1104 10:52:21.455581   37715 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem
	I1104 10:52:21.455599   37715 main.go:141] libmachine: Decoding PEM data...
	I1104 10:52:21.455610   37715 main.go:141] libmachine: Parsing certificate...
	I1104 10:52:21.455624   37715 main.go:141] libmachine: Running pre-create checks...
	I1104 10:52:21.455633   37715 main.go:141] libmachine: (ha-931571) Calling .PreCreateCheck
	I1104 10:52:21.455911   37715 main.go:141] libmachine: (ha-931571) Calling .GetConfigRaw
	I1104 10:52:21.456291   37715 main.go:141] libmachine: Creating machine...
	I1104 10:52:21.456304   37715 main.go:141] libmachine: (ha-931571) Calling .Create
	I1104 10:52:21.456440   37715 main.go:141] libmachine: (ha-931571) Creating KVM machine...
	I1104 10:52:21.457741   37715 main.go:141] libmachine: (ha-931571) DBG | found existing default KVM network
	I1104 10:52:21.458392   37715 main.go:141] libmachine: (ha-931571) DBG | I1104 10:52:21.458262   37738 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0001231e0}
	I1104 10:52:21.458442   37715 main.go:141] libmachine: (ha-931571) DBG | created network xml: 
	I1104 10:52:21.458465   37715 main.go:141] libmachine: (ha-931571) DBG | <network>
	I1104 10:52:21.458474   37715 main.go:141] libmachine: (ha-931571) DBG |   <name>mk-ha-931571</name>
	I1104 10:52:21.458487   37715 main.go:141] libmachine: (ha-931571) DBG |   <dns enable='no'/>
	I1104 10:52:21.458498   37715 main.go:141] libmachine: (ha-931571) DBG |   
	I1104 10:52:21.458510   37715 main.go:141] libmachine: (ha-931571) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1104 10:52:21.458517   37715 main.go:141] libmachine: (ha-931571) DBG |     <dhcp>
	I1104 10:52:21.458526   37715 main.go:141] libmachine: (ha-931571) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1104 10:52:21.458536   37715 main.go:141] libmachine: (ha-931571) DBG |     </dhcp>
	I1104 10:52:21.458547   37715 main.go:141] libmachine: (ha-931571) DBG |   </ip>
	I1104 10:52:21.458556   37715 main.go:141] libmachine: (ha-931571) DBG |   
	I1104 10:52:21.458566   37715 main.go:141] libmachine: (ha-931571) DBG | </network>
	I1104 10:52:21.458577   37715 main.go:141] libmachine: (ha-931571) DBG | 
	I1104 10:52:21.463306   37715 main.go:141] libmachine: (ha-931571) DBG | trying to create private KVM network mk-ha-931571 192.168.39.0/24...
	I1104 10:52:21.529269   37715 main.go:141] libmachine: (ha-931571) DBG | private KVM network mk-ha-931571 192.168.39.0/24 created
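The network definition printed above is plain XML derived from the profile name and the free subnet chosen earlier (192.168.39.0/24). A sketch of rendering the same shape of definition with text/template (the template and field names are illustrative, not minikube's actual code):

```go
package main

import (
	"os"
	"text/template"
)

// networkTmpl mirrors the <network> XML shown in the log above.
const networkTmpl = `<network>
  <name>mk-{{.Profile}}</name>
  <dns enable='no'/>
  <ip address='{{.Gateway}}' netmask='255.255.255.0'>
    <dhcp>
      <range start='{{.ClientMin}}' end='{{.ClientMax}}'/>
    </dhcp>
  </ip>
</network>
`

func main() {
	t := template.Must(template.New("net").Parse(networkTmpl))
	// Values match the subnet the log reports for mk-ha-931571.
	_ = t.Execute(os.Stdout, map[string]string{
		"Profile":   "ha-931571",
		"Gateway":   "192.168.39.1",
		"ClientMin": "192.168.39.2",
		"ClientMax": "192.168.39.253",
	})
}
```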
	I1104 10:52:21.529311   37715 main.go:141] libmachine: (ha-931571) DBG | I1104 10:52:21.529188   37738 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19906-19898/.minikube
	I1104 10:52:21.529329   37715 main.go:141] libmachine: (ha-931571) Setting up store path in /home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571 ...
	I1104 10:52:21.529347   37715 main.go:141] libmachine: (ha-931571) Building disk image from file:///home/jenkins/minikube-integration/19906-19898/.minikube/cache/iso/amd64/minikube-v1.34.0-1730282777-19883-amd64.iso
	I1104 10:52:21.529364   37715 main.go:141] libmachine: (ha-931571) Downloading /home/jenkins/minikube-integration/19906-19898/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19906-19898/.minikube/cache/iso/amd64/minikube-v1.34.0-1730282777-19883-amd64.iso...
	I1104 10:52:21.775859   37715 main.go:141] libmachine: (ha-931571) DBG | I1104 10:52:21.775727   37738 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571/id_rsa...
	I1104 10:52:21.860057   37715 main.go:141] libmachine: (ha-931571) DBG | I1104 10:52:21.859924   37738 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571/ha-931571.rawdisk...
	I1104 10:52:21.860086   37715 main.go:141] libmachine: (ha-931571) DBG | Writing magic tar header
	I1104 10:52:21.860102   37715 main.go:141] libmachine: (ha-931571) DBG | Writing SSH key tar header
	I1104 10:52:21.860115   37715 main.go:141] libmachine: (ha-931571) DBG | I1104 10:52:21.860035   37738 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571 ...
	I1104 10:52:21.860131   37715 main.go:141] libmachine: (ha-931571) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571
	I1104 10:52:21.860191   37715 main.go:141] libmachine: (ha-931571) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19906-19898/.minikube/machines
	I1104 10:52:21.860213   37715 main.go:141] libmachine: (ha-931571) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19906-19898/.minikube
	I1104 10:52:21.860225   37715 main.go:141] libmachine: (ha-931571) Setting executable bit set on /home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571 (perms=drwx------)
	I1104 10:52:21.860235   37715 main.go:141] libmachine: (ha-931571) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19906-19898
	I1104 10:52:21.860254   37715 main.go:141] libmachine: (ha-931571) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1104 10:52:21.860267   37715 main.go:141] libmachine: (ha-931571) Setting executable bit set on /home/jenkins/minikube-integration/19906-19898/.minikube/machines (perms=drwxr-xr-x)
	I1104 10:52:21.860276   37715 main.go:141] libmachine: (ha-931571) DBG | Checking permissions on dir: /home/jenkins
	I1104 10:52:21.860287   37715 main.go:141] libmachine: (ha-931571) DBG | Checking permissions on dir: /home
	I1104 10:52:21.860298   37715 main.go:141] libmachine: (ha-931571) DBG | Skipping /home - not owner
	I1104 10:52:21.860370   37715 main.go:141] libmachine: (ha-931571) Setting executable bit set on /home/jenkins/minikube-integration/19906-19898/.minikube (perms=drwxr-xr-x)
	I1104 10:52:21.860424   37715 main.go:141] libmachine: (ha-931571) Setting executable bit set on /home/jenkins/minikube-integration/19906-19898 (perms=drwxrwxr-x)
	I1104 10:52:21.860440   37715 main.go:141] libmachine: (ha-931571) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1104 10:52:21.860450   37715 main.go:141] libmachine: (ha-931571) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1104 10:52:21.860468   37715 main.go:141] libmachine: (ha-931571) Creating domain...
	I1104 10:52:21.861289   37715 main.go:141] libmachine: (ha-931571) define libvirt domain using xml: 
	I1104 10:52:21.861306   37715 main.go:141] libmachine: (ha-931571) <domain type='kvm'>
	I1104 10:52:21.861313   37715 main.go:141] libmachine: (ha-931571)   <name>ha-931571</name>
	I1104 10:52:21.861320   37715 main.go:141] libmachine: (ha-931571)   <memory unit='MiB'>2200</memory>
	I1104 10:52:21.861328   37715 main.go:141] libmachine: (ha-931571)   <vcpu>2</vcpu>
	I1104 10:52:21.861340   37715 main.go:141] libmachine: (ha-931571)   <features>
	I1104 10:52:21.861356   37715 main.go:141] libmachine: (ha-931571)     <acpi/>
	I1104 10:52:21.861372   37715 main.go:141] libmachine: (ha-931571)     <apic/>
	I1104 10:52:21.861380   37715 main.go:141] libmachine: (ha-931571)     <pae/>
	I1104 10:52:21.861396   37715 main.go:141] libmachine: (ha-931571)     
	I1104 10:52:21.861404   37715 main.go:141] libmachine: (ha-931571)   </features>
	I1104 10:52:21.861416   37715 main.go:141] libmachine: (ha-931571)   <cpu mode='host-passthrough'>
	I1104 10:52:21.861423   37715 main.go:141] libmachine: (ha-931571)   
	I1104 10:52:21.861426   37715 main.go:141] libmachine: (ha-931571)   </cpu>
	I1104 10:52:21.861433   37715 main.go:141] libmachine: (ha-931571)   <os>
	I1104 10:52:21.861437   37715 main.go:141] libmachine: (ha-931571)     <type>hvm</type>
	I1104 10:52:21.861444   37715 main.go:141] libmachine: (ha-931571)     <boot dev='cdrom'/>
	I1104 10:52:21.861448   37715 main.go:141] libmachine: (ha-931571)     <boot dev='hd'/>
	I1104 10:52:21.861452   37715 main.go:141] libmachine: (ha-931571)     <bootmenu enable='no'/>
	I1104 10:52:21.861458   37715 main.go:141] libmachine: (ha-931571)   </os>
	I1104 10:52:21.861462   37715 main.go:141] libmachine: (ha-931571)   <devices>
	I1104 10:52:21.861469   37715 main.go:141] libmachine: (ha-931571)     <disk type='file' device='cdrom'>
	I1104 10:52:21.861476   37715 main.go:141] libmachine: (ha-931571)       <source file='/home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571/boot2docker.iso'/>
	I1104 10:52:21.861488   37715 main.go:141] libmachine: (ha-931571)       <target dev='hdc' bus='scsi'/>
	I1104 10:52:21.861492   37715 main.go:141] libmachine: (ha-931571)       <readonly/>
	I1104 10:52:21.861495   37715 main.go:141] libmachine: (ha-931571)     </disk>
	I1104 10:52:21.861500   37715 main.go:141] libmachine: (ha-931571)     <disk type='file' device='disk'>
	I1104 10:52:21.861506   37715 main.go:141] libmachine: (ha-931571)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1104 10:52:21.861513   37715 main.go:141] libmachine: (ha-931571)       <source file='/home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571/ha-931571.rawdisk'/>
	I1104 10:52:21.861520   37715 main.go:141] libmachine: (ha-931571)       <target dev='hda' bus='virtio'/>
	I1104 10:52:21.861524   37715 main.go:141] libmachine: (ha-931571)     </disk>
	I1104 10:52:21.861533   37715 main.go:141] libmachine: (ha-931571)     <interface type='network'>
	I1104 10:52:21.861538   37715 main.go:141] libmachine: (ha-931571)       <source network='mk-ha-931571'/>
	I1104 10:52:21.861547   37715 main.go:141] libmachine: (ha-931571)       <model type='virtio'/>
	I1104 10:52:21.861557   37715 main.go:141] libmachine: (ha-931571)     </interface>
	I1104 10:52:21.861566   37715 main.go:141] libmachine: (ha-931571)     <interface type='network'>
	I1104 10:52:21.861571   37715 main.go:141] libmachine: (ha-931571)       <source network='default'/>
	I1104 10:52:21.861580   37715 main.go:141] libmachine: (ha-931571)       <model type='virtio'/>
	I1104 10:52:21.861584   37715 main.go:141] libmachine: (ha-931571)     </interface>
	I1104 10:52:21.861591   37715 main.go:141] libmachine: (ha-931571)     <serial type='pty'>
	I1104 10:52:21.861645   37715 main.go:141] libmachine: (ha-931571)       <target port='0'/>
	I1104 10:52:21.861685   37715 main.go:141] libmachine: (ha-931571)     </serial>
	I1104 10:52:21.861703   37715 main.go:141] libmachine: (ha-931571)     <console type='pty'>
	I1104 10:52:21.861714   37715 main.go:141] libmachine: (ha-931571)       <target type='serial' port='0'/>
	I1104 10:52:21.861735   37715 main.go:141] libmachine: (ha-931571)     </console>
	I1104 10:52:21.861744   37715 main.go:141] libmachine: (ha-931571)     <rng model='virtio'>
	I1104 10:52:21.861753   37715 main.go:141] libmachine: (ha-931571)       <backend model='random'>/dev/random</backend>
	I1104 10:52:21.861765   37715 main.go:141] libmachine: (ha-931571)     </rng>
	I1104 10:52:21.861773   37715 main.go:141] libmachine: (ha-931571)     
	I1104 10:52:21.861783   37715 main.go:141] libmachine: (ha-931571)     
	I1104 10:52:21.861791   37715 main.go:141] libmachine: (ha-931571)   </devices>
	I1104 10:52:21.861799   37715 main.go:141] libmachine: (ha-931571) </domain>
	I1104 10:52:21.861809   37715 main.go:141] libmachine: (ha-931571) 
	I1104 10:52:21.865935   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:cf:c5:1d in network default
	I1104 10:52:21.866504   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:21.866522   37715 main.go:141] libmachine: (ha-931571) Ensuring networks are active...
	I1104 10:52:21.866948   37715 main.go:141] libmachine: (ha-931571) Ensuring network default is active
	I1104 10:52:21.867232   37715 main.go:141] libmachine: (ha-931571) Ensuring network mk-ha-931571 is active
	I1104 10:52:21.867627   37715 main.go:141] libmachine: (ha-931571) Getting domain xml...
	I1104 10:52:21.868256   37715 main.go:141] libmachine: (ha-931571) Creating domain...
	I1104 10:52:23.049161   37715 main.go:141] libmachine: (ha-931571) Waiting to get IP...
	I1104 10:52:23.050233   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:23.050623   37715 main.go:141] libmachine: (ha-931571) DBG | unable to find current IP address of domain ha-931571 in network mk-ha-931571
	I1104 10:52:23.050643   37715 main.go:141] libmachine: (ha-931571) DBG | I1104 10:52:23.050602   37738 retry.go:31] will retry after 245.530574ms: waiting for machine to come up
	I1104 10:52:23.298185   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:23.298678   37715 main.go:141] libmachine: (ha-931571) DBG | unable to find current IP address of domain ha-931571 in network mk-ha-931571
	I1104 10:52:23.298704   37715 main.go:141] libmachine: (ha-931571) DBG | I1104 10:52:23.298589   37738 retry.go:31] will retry after 317.376406ms: waiting for machine to come up
	I1104 10:52:23.617020   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:23.617577   37715 main.go:141] libmachine: (ha-931571) DBG | unable to find current IP address of domain ha-931571 in network mk-ha-931571
	I1104 10:52:23.617605   37715 main.go:141] libmachine: (ha-931571) DBG | I1104 10:52:23.617514   37738 retry.go:31] will retry after 370.038267ms: waiting for machine to come up
	I1104 10:52:23.988831   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:23.989190   37715 main.go:141] libmachine: (ha-931571) DBG | unable to find current IP address of domain ha-931571 in network mk-ha-931571
	I1104 10:52:23.989220   37715 main.go:141] libmachine: (ha-931571) DBG | I1104 10:52:23.989148   37738 retry.go:31] will retry after 538.152632ms: waiting for machine to come up
	I1104 10:52:24.528804   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:24.529210   37715 main.go:141] libmachine: (ha-931571) DBG | unable to find current IP address of domain ha-931571 in network mk-ha-931571
	I1104 10:52:24.529252   37715 main.go:141] libmachine: (ha-931571) DBG | I1104 10:52:24.529162   37738 retry.go:31] will retry after 731.07349ms: waiting for machine to come up
	I1104 10:52:25.262048   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:25.262502   37715 main.go:141] libmachine: (ha-931571) DBG | unable to find current IP address of domain ha-931571 in network mk-ha-931571
	I1104 10:52:25.262519   37715 main.go:141] libmachine: (ha-931571) DBG | I1104 10:52:25.262462   37738 retry.go:31] will retry after 741.011273ms: waiting for machine to come up
	I1104 10:52:26.005553   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:26.005942   37715 main.go:141] libmachine: (ha-931571) DBG | unable to find current IP address of domain ha-931571 in network mk-ha-931571
	I1104 10:52:26.005976   37715 main.go:141] libmachine: (ha-931571) DBG | I1104 10:52:26.005909   37738 retry.go:31] will retry after 743.777795ms: waiting for machine to come up
	I1104 10:52:26.751254   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:26.751560   37715 main.go:141] libmachine: (ha-931571) DBG | unable to find current IP address of domain ha-931571 in network mk-ha-931571
	I1104 10:52:26.751581   37715 main.go:141] libmachine: (ha-931571) DBG | I1104 10:52:26.751519   37738 retry.go:31] will retry after 895.955115ms: waiting for machine to come up
	I1104 10:52:27.648705   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:27.649070   37715 main.go:141] libmachine: (ha-931571) DBG | unable to find current IP address of domain ha-931571 in network mk-ha-931571
	I1104 10:52:27.649096   37715 main.go:141] libmachine: (ha-931571) DBG | I1104 10:52:27.649040   37738 retry.go:31] will retry after 1.225419017s: waiting for machine to come up
	I1104 10:52:28.876413   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:28.876806   37715 main.go:141] libmachine: (ha-931571) DBG | unable to find current IP address of domain ha-931571 in network mk-ha-931571
	I1104 10:52:28.876829   37715 main.go:141] libmachine: (ha-931571) DBG | I1104 10:52:28.876782   37738 retry.go:31] will retry after 1.631823926s: waiting for machine to come up
	I1104 10:52:30.510636   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:30.511147   37715 main.go:141] libmachine: (ha-931571) DBG | unable to find current IP address of domain ha-931571 in network mk-ha-931571
	I1104 10:52:30.511177   37715 main.go:141] libmachine: (ha-931571) DBG | I1104 10:52:30.511093   37738 retry.go:31] will retry after 1.798258408s: waiting for machine to come up
	I1104 10:52:32.311067   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:32.311528   37715 main.go:141] libmachine: (ha-931571) DBG | unable to find current IP address of domain ha-931571 in network mk-ha-931571
	I1104 10:52:32.311574   37715 main.go:141] libmachine: (ha-931571) DBG | I1104 10:52:32.311491   37738 retry.go:31] will retry after 3.573429436s: waiting for machine to come up
	I1104 10:52:35.889088   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:35.889552   37715 main.go:141] libmachine: (ha-931571) DBG | unable to find current IP address of domain ha-931571 in network mk-ha-931571
	I1104 10:52:35.889578   37715 main.go:141] libmachine: (ha-931571) DBG | I1104 10:52:35.889516   37738 retry.go:31] will retry after 4.488251667s: waiting for machine to come up
	I1104 10:52:40.382173   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:40.382599   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has current primary IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:40.382621   37715 main.go:141] libmachine: (ha-931571) Found IP for machine: 192.168.39.67
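The repeated "will retry after …" lines above come from re-checking the DHCP leases with a growing, jittered delay until the new domain reports an address. A minimal sketch of that retry pattern (function names are hypothetical):

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff keeps calling fn until it succeeds or attempts run out,
// sleeping a growing, jittered interval between tries.
func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
	delay := base
	for i := 0; i < attempts; i++ {
		if err := fn(); err == nil {
			return nil
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		delay *= 2
	}
	return errors.New("machine never reported an IP address")
}

func main() {
	tries := 0
	err := retryWithBackoff(5, 10*time.Millisecond, func() error {
		tries++
		if tries < 3 {
			return errors.New("unable to find current IP address")
		}
		return nil
	})
	fmt.Println("result:", err)
}
```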
	I1104 10:52:40.382633   37715 main.go:141] libmachine: (ha-931571) Reserving static IP address...
	I1104 10:52:40.383033   37715 main.go:141] libmachine: (ha-931571) DBG | unable to find host DHCP lease matching {name: "ha-931571", mac: "52:54:00:2c:cb:16", ip: "192.168.39.67"} in network mk-ha-931571
	I1104 10:52:40.452346   37715 main.go:141] libmachine: (ha-931571) DBG | Getting to WaitForSSH function...
	I1104 10:52:40.452379   37715 main.go:141] libmachine: (ha-931571) Reserved static IP address: 192.168.39.67
	I1104 10:52:40.452392   37715 main.go:141] libmachine: (ha-931571) Waiting for SSH to be available...
	I1104 10:52:40.456018   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:40.456490   37715 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:minikube Clientid:01:52:54:00:2c:cb:16}
	I1104 10:52:40.456515   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:40.456627   37715 main.go:141] libmachine: (ha-931571) DBG | Using SSH client type: external
	I1104 10:52:40.456650   37715 main.go:141] libmachine: (ha-931571) DBG | Using SSH private key: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571/id_rsa (-rw-------)
	I1104 10:52:40.456681   37715 main.go:141] libmachine: (ha-931571) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.67 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1104 10:52:40.456700   37715 main.go:141] libmachine: (ha-931571) DBG | About to run SSH command:
	I1104 10:52:40.456715   37715 main.go:141] libmachine: (ha-931571) DBG | exit 0
	I1104 10:52:40.580862   37715 main.go:141] libmachine: (ha-931571) DBG | SSH cmd err, output: <nil>: 
	I1104 10:52:40.581146   37715 main.go:141] libmachine: (ha-931571) KVM machine creation complete!
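WaitForSSH above shells out to the system ssh client with host-key checking disabled and runs `exit 0` until the command succeeds. A hedged sketch of that probe, reusing the options visible in the log (the key path and address are the ones reported above):

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// sshReachable runs `ssh ... exit 0` and reports whether it succeeded.
func sshReachable(keyPath, addr string) bool {
	cmd := exec.Command("ssh",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "ConnectTimeout=10",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"docker@"+addr,
		"exit 0")
	return cmd.Run() == nil
}

func main() {
	key := "/home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571/id_rsa"
	for i := 0; i < 3; i++ {
		if sshReachable(key, "192.168.39.67") {
			fmt.Println("SSH is available")
			return
		}
		time.Sleep(time.Second)
	}
	fmt.Println("SSH never became available")
}
```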
	I1104 10:52:40.581410   37715 main.go:141] libmachine: (ha-931571) Calling .GetConfigRaw
	I1104 10:52:40.581936   37715 main.go:141] libmachine: (ha-931571) Calling .DriverName
	I1104 10:52:40.582130   37715 main.go:141] libmachine: (ha-931571) Calling .DriverName
	I1104 10:52:40.582294   37715 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1104 10:52:40.582307   37715 main.go:141] libmachine: (ha-931571) Calling .GetState
	I1104 10:52:40.583398   37715 main.go:141] libmachine: Detecting operating system of created instance...
	I1104 10:52:40.583412   37715 main.go:141] libmachine: Waiting for SSH to be available...
	I1104 10:52:40.583418   37715 main.go:141] libmachine: Getting to WaitForSSH function...
	I1104 10:52:40.583425   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHHostname
	I1104 10:52:40.585558   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:40.585865   37715 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 10:52:40.585891   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:40.585991   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHPort
	I1104 10:52:40.586130   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 10:52:40.586272   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 10:52:40.586383   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHUsername
	I1104 10:52:40.586519   37715 main.go:141] libmachine: Using SSH client type: native
	I1104 10:52:40.586723   37715 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I1104 10:52:40.586734   37715 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1104 10:52:40.692229   37715 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1104 10:52:40.692248   37715 main.go:141] libmachine: Detecting the provisioner...
	I1104 10:52:40.692257   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHHostname
	I1104 10:52:40.695010   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:40.695388   37715 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 10:52:40.695411   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:40.695556   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHPort
	I1104 10:52:40.695751   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 10:52:40.695899   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 10:52:40.696052   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHUsername
	I1104 10:52:40.696188   37715 main.go:141] libmachine: Using SSH client type: native
	I1104 10:52:40.696868   37715 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I1104 10:52:40.696890   37715 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1104 10:52:40.801468   37715 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1104 10:52:40.801552   37715 main.go:141] libmachine: found compatible host: buildroot
	I1104 10:52:40.801563   37715 main.go:141] libmachine: Provisioning with buildroot...
	I1104 10:52:40.801571   37715 main.go:141] libmachine: (ha-931571) Calling .GetMachineName
	I1104 10:52:40.801814   37715 buildroot.go:166] provisioning hostname "ha-931571"
	I1104 10:52:40.801836   37715 main.go:141] libmachine: (ha-931571) Calling .GetMachineName
	I1104 10:52:40.801992   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHHostname
	I1104 10:52:40.804318   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:40.804694   37715 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 10:52:40.804723   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:40.804889   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHPort
	I1104 10:52:40.805051   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 10:52:40.805262   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 10:52:40.805439   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHUsername
	I1104 10:52:40.805644   37715 main.go:141] libmachine: Using SSH client type: native
	I1104 10:52:40.805826   37715 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I1104 10:52:40.805838   37715 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-931571 && echo "ha-931571" | sudo tee /etc/hostname
	I1104 10:52:40.921516   37715 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-931571
	
	I1104 10:52:40.921540   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHHostname
	I1104 10:52:40.924174   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:40.924514   37715 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 10:52:40.924541   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:40.924675   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHPort
	I1104 10:52:40.924825   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 10:52:40.924941   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 10:52:40.925052   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHUsername
	I1104 10:52:40.925210   37715 main.go:141] libmachine: Using SSH client type: native
	I1104 10:52:40.925423   37715 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I1104 10:52:40.925448   37715 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-931571' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-931571/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-931571' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1104 10:52:41.036770   37715 main.go:141] libmachine: SSH cmd err, output: <nil>: 
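The SSH snippet above either rewrites an existing 127.0.1.1 entry for the new hostname or appends one if /etc/hosts has neither. A rough local equivalent of that hosts-file edit, for illustration only (minikube runs it remotely via the shell shown above):

package main

import (
	"fmt"
	"regexp"
	"strings"
)

// ensureHostname mirrors the shell logic above: leave /etc/hosts alone if the
// hostname is already present, rewrite an existing 127.0.1.1 line, or append one.
func ensureHostname(hosts, name string) string {
	if regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(name) + `$`).MatchString(hosts) {
		return hosts
	}
	re := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if re.MatchString(hosts) {
		return re.ReplaceAllString(hosts, "127.0.1.1 "+name)
	}
	return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + name + "\n"
}

func main() {
	fmt.Print(ensureHostname("127.0.0.1 localhost\n", "ha-931571"))
}
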
	I1104 10:52:41.036799   37715 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19906-19898/.minikube CaCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19906-19898/.minikube}
	I1104 10:52:41.036830   37715 buildroot.go:174] setting up certificates
	I1104 10:52:41.036839   37715 provision.go:84] configureAuth start
	I1104 10:52:41.036848   37715 main.go:141] libmachine: (ha-931571) Calling .GetMachineName
	I1104 10:52:41.037164   37715 main.go:141] libmachine: (ha-931571) Calling .GetIP
	I1104 10:52:41.039662   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:41.040007   37715 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 10:52:41.040032   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:41.040164   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHHostname
	I1104 10:52:41.042288   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:41.042624   37715 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 10:52:41.042652   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:41.042756   37715 provision.go:143] copyHostCerts
	I1104 10:52:41.042779   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem
	I1104 10:52:41.042808   37715 exec_runner.go:144] found /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem, removing ...
	I1104 10:52:41.042823   37715 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem
	I1104 10:52:41.042880   37715 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem (1078 bytes)
	I1104 10:52:41.042955   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem
	I1104 10:52:41.042972   37715 exec_runner.go:144] found /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem, removing ...
	I1104 10:52:41.042979   37715 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem
	I1104 10:52:41.043001   37715 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem (1123 bytes)
	I1104 10:52:41.043042   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem
	I1104 10:52:41.043058   37715 exec_runner.go:144] found /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem, removing ...
	I1104 10:52:41.043064   37715 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem
	I1104 10:52:41.043084   37715 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem (1679 bytes)
	I1104 10:52:41.043133   37715 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem org=jenkins.ha-931571 san=[127.0.0.1 192.168.39.67 ha-931571 localhost minikube]
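provision.go:117 above generates a server certificate signed by the minikube CA with the listed IP and DNS SANs. A condensed, self-contained sketch of that kind of SAN-bearing certificate generation with crypto/x509; it creates a throwaway CA in place of ca.pem/ca-key.pem, and the org and lifetimes are assumptions, not minikube's exact parameters:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Throwaway CA standing in for .minikube/certs/ca.pem / ca-key.pem.
	// Errors are elided for brevity in this sketch.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert with the same SANs the log shows for ha-931571.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-931571"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.67")},
		DNSNames:     []string{"ha-931571", "localhost", "minikube"},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: srvDER})))
}
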
	I1104 10:52:41.275942   37715 provision.go:177] copyRemoteCerts
	I1104 10:52:41.275998   37715 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1104 10:52:41.276018   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHHostname
	I1104 10:52:41.278984   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:41.279300   37715 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 10:52:41.279324   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:41.279438   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHPort
	I1104 10:52:41.279611   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 10:52:41.279754   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHUsername
	I1104 10:52:41.279862   37715 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571/id_rsa Username:docker}
	I1104 10:52:41.362606   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1104 10:52:41.362673   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1104 10:52:41.384103   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1104 10:52:41.384170   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1104 10:52:41.405170   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1104 10:52:41.405259   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1104 10:52:41.426285   37715 provision.go:87] duration metric: took 389.43394ms to configureAuth
	I1104 10:52:41.426311   37715 buildroot.go:189] setting minikube options for container-runtime
	I1104 10:52:41.426499   37715 config.go:182] Loaded profile config "ha-931571": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1104 10:52:41.426580   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHHostname
	I1104 10:52:41.429219   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:41.429514   37715 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 10:52:41.429539   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:41.429751   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHPort
	I1104 10:52:41.429959   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 10:52:41.430107   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 10:52:41.430247   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHUsername
	I1104 10:52:41.430417   37715 main.go:141] libmachine: Using SSH client type: native
	I1104 10:52:41.430644   37715 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I1104 10:52:41.430666   37715 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1104 10:52:41.649262   37715 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1104 10:52:41.649291   37715 main.go:141] libmachine: Checking connection to Docker...
	I1104 10:52:41.649300   37715 main.go:141] libmachine: (ha-931571) Calling .GetURL
	I1104 10:52:41.650723   37715 main.go:141] libmachine: (ha-931571) DBG | Using libvirt version 6000000
	I1104 10:52:41.653499   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:41.653913   37715 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 10:52:41.653943   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:41.654070   37715 main.go:141] libmachine: Docker is up and running!
	I1104 10:52:41.654084   37715 main.go:141] libmachine: Reticulating splines...
	I1104 10:52:41.654091   37715 client.go:171] duration metric: took 20.198612513s to LocalClient.Create
	I1104 10:52:41.654124   37715 start.go:167] duration metric: took 20.198697894s to libmachine.API.Create "ha-931571"
	I1104 10:52:41.654168   37715 start.go:293] postStartSetup for "ha-931571" (driver="kvm2")
	I1104 10:52:41.654182   37715 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1104 10:52:41.654199   37715 main.go:141] libmachine: (ha-931571) Calling .DriverName
	I1104 10:52:41.654448   37715 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1104 10:52:41.654477   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHHostname
	I1104 10:52:41.656689   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:41.657007   37715 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 10:52:41.657028   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:41.657279   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHPort
	I1104 10:52:41.657484   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 10:52:41.657648   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHUsername
	I1104 10:52:41.657776   37715 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571/id_rsa Username:docker}
	I1104 10:52:41.738934   37715 ssh_runner.go:195] Run: cat /etc/os-release
	I1104 10:52:41.742902   37715 info.go:137] Remote host: Buildroot 2023.02.9
	I1104 10:52:41.742925   37715 filesync.go:126] Scanning /home/jenkins/minikube-integration/19906-19898/.minikube/addons for local assets ...
	I1104 10:52:41.742997   37715 filesync.go:126] Scanning /home/jenkins/minikube-integration/19906-19898/.minikube/files for local assets ...
	I1104 10:52:41.743084   37715 filesync.go:149] local asset: /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem -> 272182.pem in /etc/ssl/certs
	I1104 10:52:41.743095   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem -> /etc/ssl/certs/272182.pem
	I1104 10:52:41.743212   37715 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1104 10:52:41.752124   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem --> /etc/ssl/certs/272182.pem (1708 bytes)
	I1104 10:52:41.774335   37715 start.go:296] duration metric: took 120.149038ms for postStartSetup
	I1104 10:52:41.774411   37715 main.go:141] libmachine: (ha-931571) Calling .GetConfigRaw
	I1104 10:52:41.775008   37715 main.go:141] libmachine: (ha-931571) Calling .GetIP
	I1104 10:52:41.777422   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:41.777754   37715 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 10:52:41.777776   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:41.778012   37715 profile.go:143] Saving config to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/config.json ...
	I1104 10:52:41.778186   37715 start.go:128] duration metric: took 20.340838176s to createHost
	I1104 10:52:41.778221   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHHostname
	I1104 10:52:41.780525   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:41.780784   37715 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 10:52:41.780805   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:41.780933   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHPort
	I1104 10:52:41.781101   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 10:52:41.781264   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 10:52:41.781386   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHUsername
	I1104 10:52:41.781512   37715 main.go:141] libmachine: Using SSH client type: native
	I1104 10:52:41.781672   37715 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I1104 10:52:41.781683   37715 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1104 10:52:41.885593   37715 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730717561.859087710
	
	I1104 10:52:41.885616   37715 fix.go:216] guest clock: 1730717561.859087710
	I1104 10:52:41.885624   37715 fix.go:229] Guest: 2024-11-04 10:52:41.85908771 +0000 UTC Remote: 2024-11-04 10:52:41.778208592 +0000 UTC m=+20.449726833 (delta=80.879118ms)
	I1104 10:52:41.885647   37715 fix.go:200] guest clock delta is within tolerance: 80.879118ms
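fix.go above compares the guest clock, read over SSH with date +%s.%N, against the local clock and only resynchronizes when the absolute delta exceeds a tolerance. The arithmetic is a plain absolute-difference check; in this sketch the 2s threshold is an assumption, since the log does not state the actual tolerance value:

package main

import (
	"fmt"
	"math"
	"time"
)

// clockWithinTolerance reports the guest/host skew and whether it is acceptable.
func clockWithinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	return delta, math.Abs(float64(delta)) <= float64(tolerance)
}

func main() {
	guest := time.Date(2024, 11, 4, 10, 52, 41, 859087710, time.UTC)
	host := time.Date(2024, 11, 4, 10, 52, 41, 778208592, time.UTC)
	delta, ok := clockWithinTolerance(guest, host, 2*time.Second) // 2s threshold assumed
	fmt.Printf("delta=%v within tolerance=%v\n", delta, ok)       // delta=80.879118ms, as in the log
}
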
	I1104 10:52:41.885653   37715 start.go:83] releasing machines lock for "ha-931571", held for 20.448400301s
	I1104 10:52:41.885675   37715 main.go:141] libmachine: (ha-931571) Calling .DriverName
	I1104 10:52:41.885953   37715 main.go:141] libmachine: (ha-931571) Calling .GetIP
	I1104 10:52:41.888489   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:41.888887   37715 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 10:52:41.888909   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:41.889131   37715 main.go:141] libmachine: (ha-931571) Calling .DriverName
	I1104 10:52:41.889647   37715 main.go:141] libmachine: (ha-931571) Calling .DriverName
	I1104 10:52:41.889819   37715 main.go:141] libmachine: (ha-931571) Calling .DriverName
	I1104 10:52:41.889899   37715 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1104 10:52:41.889945   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHHostname
	I1104 10:52:41.890032   37715 ssh_runner.go:195] Run: cat /version.json
	I1104 10:52:41.890047   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHHostname
	I1104 10:52:41.892621   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:41.893038   37715 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 10:52:41.893065   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:41.893082   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:41.893208   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHPort
	I1104 10:52:41.893350   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 10:52:41.893498   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHUsername
	I1104 10:52:41.893582   37715 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 10:52:41.893589   37715 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571/id_rsa Username:docker}
	I1104 10:52:41.893613   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:41.893793   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHPort
	I1104 10:52:41.893936   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 10:52:41.894105   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHUsername
	I1104 10:52:41.894263   37715 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571/id_rsa Username:docker}
	I1104 10:52:41.988130   37715 ssh_runner.go:195] Run: systemctl --version
	I1104 10:52:41.993656   37715 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1104 10:52:42.142615   37715 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1104 10:52:42.148950   37715 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1104 10:52:42.149023   37715 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1104 10:52:42.163368   37715 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1104 10:52:42.163399   37715 start.go:495] detecting cgroup driver to use...
	I1104 10:52:42.163459   37715 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1104 10:52:42.178011   37715 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1104 10:52:42.190311   37715 docker.go:217] disabling cri-docker service (if available) ...
	I1104 10:52:42.190363   37715 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1104 10:52:42.202494   37715 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1104 10:52:42.215234   37715 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1104 10:52:42.322933   37715 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1104 10:52:42.465367   37715 docker.go:233] disabling docker service ...
	I1104 10:52:42.465435   37715 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1104 10:52:42.478799   37715 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1104 10:52:42.490748   37715 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1104 10:52:42.621810   37715 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1104 10:52:42.721588   37715 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1104 10:52:42.734181   37715 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1104 10:52:42.750278   37715 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1104 10:52:42.750346   37715 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 10:52:42.759509   37715 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1104 10:52:42.759569   37715 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 10:52:42.768912   37715 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 10:52:42.778275   37715 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 10:52:42.791011   37715 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1104 10:52:42.801155   37715 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 10:52:42.810365   37715 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 10:52:42.825204   37715 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
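The sed one-liners above edit /etc/crio/crio.conf.d/02-crio.conf in place: pause image, cgroup manager, conmon cgroup, and the unprivileged-port sysctl. A few of the same substitutions expressed as Go regexp rewrites over the config text, purely as an illustration (the real flow shells out to sed as shown, and the sample config contents here are assumed):

package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.9"
[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
`
	// pause_image -> registry.k8s.io/pause:3.10
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
	// cgroup_manager -> cgroupfs
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	// drop any existing conmon_cgroup line, then re-add it as "pod" after cgroup_manager
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
		ReplaceAllString(conf, "$1\nconmon_cgroup = \"pod\"")
	fmt.Print(conf)
}
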
	I1104 10:52:42.834333   37715 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1104 10:52:42.842438   37715 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1104 10:52:42.842479   37715 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1104 10:52:42.853336   37715 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
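The netfilter check above is deliberately tolerant: a failing sysctl net.bridge.bridge-nf-call-iptables "might be okay" and simply triggers a modprobe br_netfilter fallback, after which IP forwarding is enabled. A sketch of that check-then-fallback sequence with os/exec (must run as root; command names match the log):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Verify the bridge netfilter sysctl exists; a non-zero exit here just means
	// the module is not loaded yet.
	if err := exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		fmt.Println("couldn't verify netfilter, loading br_netfilter:", err)
		if err := exec.Command("modprobe", "br_netfilter").Run(); err != nil {
			fmt.Println("modprobe br_netfilter failed:", err)
		}
	}
	// Equivalent of: echo 1 > /proc/sys/net/ipv4/ip_forward
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644); err != nil {
		fmt.Println("enabling ip_forward failed:", err)
	}
}
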
	I1104 10:52:42.861893   37715 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1104 10:52:42.966759   37715 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1104 10:52:43.051148   37715 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1104 10:52:43.051245   37715 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1104 10:52:43.055605   37715 start.go:563] Will wait 60s for crictl version
	I1104 10:52:43.055660   37715 ssh_runner.go:195] Run: which crictl
	I1104 10:52:43.058970   37715 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1104 10:52:43.092206   37715 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1104 10:52:43.092300   37715 ssh_runner.go:195] Run: crio --version
	I1104 10:52:43.119216   37715 ssh_runner.go:195] Run: crio --version
	I1104 10:52:43.149822   37715 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1104 10:52:43.150920   37715 main.go:141] libmachine: (ha-931571) Calling .GetIP
	I1104 10:52:43.153539   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:43.153876   37715 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 10:52:43.153903   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:43.154148   37715 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1104 10:52:43.157775   37715 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1104 10:52:43.169819   37715 kubeadm.go:883] updating cluster {Name:ha-931571 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 Cl
usterName:ha-931571 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.67 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mo
untType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1104 10:52:43.169924   37715 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1104 10:52:43.169983   37715 ssh_runner.go:195] Run: sudo crictl images --output json
	I1104 10:52:43.198885   37715 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1104 10:52:43.198949   37715 ssh_runner.go:195] Run: which lz4
	I1104 10:52:43.202346   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1104 10:52:43.202439   37715 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1104 10:52:43.206081   37715 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1104 10:52:43.206107   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1104 10:52:44.348916   37715 crio.go:462] duration metric: took 1.146501805s to copy over tarball
	I1104 10:52:44.348982   37715 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1104 10:52:46.326500   37715 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.97746722s)
	I1104 10:52:46.326527   37715 crio.go:469] duration metric: took 1.977583171s to extract the tarball
	I1104 10:52:46.326535   37715 ssh_runner.go:146] rm: /preloaded.tar.lz4
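The preload path above is: stat /preloaded.tar.lz4, scp the ~392 MB tarball over when it is missing, extract it with lz4-aware tar into /var, then delete it, with duration metrics around the copy and the extract. A compact sketch of the extract-and-time step, using the same paths as the log and assuming tar and lz4 are on PATH:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

func main() {
	const tarball = "/preloaded.tar.lz4"
	if _, err := os.Stat(tarball); err != nil {
		fmt.Println("preload tarball missing, it would be copied over first:", err)
		return
	}
	start := time.Now()
	cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	if err := cmd.Run(); err != nil {
		fmt.Println("extract failed:", err)
		return
	}
	fmt.Printf("duration metric: took %s to extract the tarball\n", time.Since(start))
	_ = os.Remove(tarball) // mirrors ssh_runner.go:146 rm: /preloaded.tar.lz4
}
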
	I1104 10:52:46.361867   37715 ssh_runner.go:195] Run: sudo crictl images --output json
	I1104 10:52:46.402887   37715 crio.go:514] all images are preloaded for cri-o runtime.
	I1104 10:52:46.402909   37715 cache_images.go:84] Images are preloaded, skipping loading
	I1104 10:52:46.402919   37715 kubeadm.go:934] updating node { 192.168.39.67 8443 v1.31.2 crio true true} ...
	I1104 10:52:46.403024   37715 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-931571 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.67
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-931571 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
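kubeadm.go:946 renders the kubelet systemd drop-in shown above; later in the log it is copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes) and applied with systemctl daemon-reload and systemctl start kubelet. A sketch of writing such a drop-in and reloading systemd, assuming root on the guest and using the unit text from the log:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
)

func main() {
	dropIn := `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-931571 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.67

[Install]
`
	dir := "/etc/systemd/system/kubelet.service.d"
	if err := os.MkdirAll(dir, 0755); err != nil {
		fmt.Println("mkdir:", err)
		return
	}
	if err := os.WriteFile(filepath.Join(dir, "10-kubeadm.conf"), []byte(dropIn), 0644); err != nil {
		fmt.Println("write drop-in:", err)
		return
	}
	for _, args := range [][]string{{"daemon-reload"}, {"start", "kubelet"}} {
		if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
			fmt.Println("systemctl", args, "failed:", err, string(out))
		}
	}
}
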
	I1104 10:52:46.403102   37715 ssh_runner.go:195] Run: crio config
	I1104 10:52:46.448114   37715 cni.go:84] Creating CNI manager for ""
	I1104 10:52:46.448134   37715 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1104 10:52:46.448143   37715 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1104 10:52:46.448161   37715 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.67 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-931571 NodeName:ha-931571 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.67"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.67 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1104 10:52:46.448276   37715 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.67
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-931571"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.67"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.67"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1104 10:52:46.448297   37715 kube-vip.go:115] generating kube-vip config ...
	I1104 10:52:46.448333   37715 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1104 10:52:46.464928   37715 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1104 10:52:46.465022   37715 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.5
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I1104 10:52:46.465069   37715 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1104 10:52:46.473864   37715 binaries.go:44] Found k8s binaries, skipping transfer
	I1104 10:52:46.473931   37715 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1104 10:52:46.482366   37715 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I1104 10:52:46.497386   37715 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1104 10:52:46.512146   37715 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2286 bytes)
	I1104 10:52:46.528415   37715 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I1104 10:52:46.544798   37715 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1104 10:52:46.548212   37715 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1104 10:52:46.559488   37715 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1104 10:52:46.692494   37715 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1104 10:52:46.708806   37715 certs.go:68] Setting up /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571 for IP: 192.168.39.67
	I1104 10:52:46.708830   37715 certs.go:194] generating shared ca certs ...
	I1104 10:52:46.708849   37715 certs.go:226] acquiring lock for ca certs: {Name:mk4fb0469da6697654d0290fd25edddd463f47c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 10:52:46.709027   37715 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.key
	I1104 10:52:46.709089   37715 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.key
	I1104 10:52:46.709102   37715 certs.go:256] generating profile certs ...
	I1104 10:52:46.709156   37715 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/client.key
	I1104 10:52:46.709175   37715 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/client.crt with IP's: []
	I1104 10:52:46.835505   37715 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/client.crt ...
	I1104 10:52:46.835534   37715 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/client.crt: {Name:mk61f73d1cdbaea56c4e3a41bf4d8a8e998c4601 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 10:52:46.835713   37715 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/client.key ...
	I1104 10:52:46.835728   37715 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/client.key: {Name:mk3a1e70b98b06ffcf80cad3978790ca4b634404 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 10:52:46.835832   37715 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.key.db135e66
	I1104 10:52:46.835851   37715 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.crt.db135e66 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.67 192.168.39.254]
	I1104 10:52:46.955700   37715 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.crt.db135e66 ...
	I1104 10:52:46.955730   37715 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.crt.db135e66: {Name:mk7e52761b5f3a6915e1cf90cd8ace0ff40a1698 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 10:52:46.955903   37715 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.key.db135e66 ...
	I1104 10:52:46.955919   37715 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.key.db135e66: {Name:mk473e5ea437641c8d6be7c8c672068a3ffc879a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 10:52:46.956011   37715 certs.go:381] copying /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.crt.db135e66 -> /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.crt
	I1104 10:52:46.956221   37715 certs.go:385] copying /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.key.db135e66 -> /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.key
	I1104 10:52:46.956356   37715 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/proxy-client.key
	I1104 10:52:46.956379   37715 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/proxy-client.crt with IP's: []
	I1104 10:52:47.101236   37715 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/proxy-client.crt ...
	I1104 10:52:47.101269   37715 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/proxy-client.crt: {Name:mk407ac3d668cf899822db436da4d41618f60b31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 10:52:47.101451   37715 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/proxy-client.key ...
	I1104 10:52:47.101466   37715 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/proxy-client.key: {Name:mk67291900fae9d34a6dbb5f9ac6f9eff95090cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 10:52:47.101560   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1104 10:52:47.101583   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1104 10:52:47.101600   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1104 10:52:47.101617   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1104 10:52:47.101636   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1104 10:52:47.101656   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1104 10:52:47.101675   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1104 10:52:47.101692   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1104 10:52:47.101753   37715 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218.pem (1338 bytes)
	W1104 10:52:47.101799   37715 certs.go:480] ignoring /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218_empty.pem, impossibly tiny 0 bytes
	I1104 10:52:47.101812   37715 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem (1675 bytes)
	I1104 10:52:47.101846   37715 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem (1078 bytes)
	I1104 10:52:47.101884   37715 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem (1123 bytes)
	I1104 10:52:47.101916   37715 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem (1679 bytes)
	I1104 10:52:47.101975   37715 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem (1708 bytes)
	I1104 10:52:47.102014   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218.pem -> /usr/share/ca-certificates/27218.pem
	I1104 10:52:47.102035   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem -> /usr/share/ca-certificates/272182.pem
	I1104 10:52:47.102054   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1104 10:52:47.102621   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1104 10:52:47.126053   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1104 10:52:47.148030   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1104 10:52:47.169097   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1104 10:52:47.190790   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1104 10:52:47.211485   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1104 10:52:47.233064   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1104 10:52:47.254438   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1104 10:52:47.275584   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218.pem --> /usr/share/ca-certificates/27218.pem (1338 bytes)
	I1104 10:52:47.296496   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem --> /usr/share/ca-certificates/272182.pem (1708 bytes)
	I1104 10:52:47.316993   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1104 10:52:47.338085   37715 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1104 10:52:47.352830   37715 ssh_runner.go:195] Run: openssl version
	I1104 10:52:47.357992   37715 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/27218.pem && ln -fs /usr/share/ca-certificates/27218.pem /etc/ssl/certs/27218.pem"
	I1104 10:52:47.367171   37715 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/27218.pem
	I1104 10:52:47.371139   37715 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  4 10:49 /usr/share/ca-certificates/27218.pem
	I1104 10:52:47.371175   37715 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/27218.pem
	I1104 10:52:47.376056   37715 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/27218.pem /etc/ssl/certs/51391683.0"
	I1104 10:52:47.385217   37715 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/272182.pem && ln -fs /usr/share/ca-certificates/272182.pem /etc/ssl/certs/272182.pem"
	I1104 10:52:47.394305   37715 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/272182.pem
	I1104 10:52:47.398184   37715 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  4 10:49 /usr/share/ca-certificates/272182.pem
	I1104 10:52:47.398229   37715 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/272182.pem
	I1104 10:52:47.403221   37715 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/272182.pem /etc/ssl/certs/3ec20f2e.0"
	I1104 10:52:47.412407   37715 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1104 10:52:47.421725   37715 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1104 10:52:47.425673   37715 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  4 10:38 /usr/share/ca-certificates/minikubeCA.pem
	I1104 10:52:47.425724   37715 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1104 10:52:47.430774   37715 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
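The sequence above hashes each CA bundle with openssl and links it into /etc/ssl/certs under its subject hash (51391683.0, 3ec20f2e.0, b5213941.0) so the system trust store picks it up. A sketch of that hash-and-symlink step, shelling out to openssl x509 -hash -noout exactly as the log does; linkCert is an illustrative helper:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCert installs certPath into /etc/ssl/certs as <subject-hash>.0,
// the naming scheme OpenSSL uses to look up CA certificates.
func linkCert(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // replace any stale link, like `ln -fs`
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println(err)
	}
}
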
	I1104 10:52:47.442891   37715 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1104 10:52:47.448916   37715 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1104 10:52:47.448963   37715 kubeadm.go:392] StartCluster: {Name:ha-931571 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-931571 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.67 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1104 10:52:47.449026   37715 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1104 10:52:47.449081   37715 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1104 10:52:47.493313   37715 cri.go:89] found id: ""
	I1104 10:52:47.493388   37715 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1104 10:52:47.505853   37715 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1104 10:52:47.514358   37715 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1104 10:52:47.522614   37715 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1104 10:52:47.522633   37715 kubeadm.go:157] found existing configuration files:
	
	I1104 10:52:47.522685   37715 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1104 10:52:47.530458   37715 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1104 10:52:47.530497   37715 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1104 10:52:47.538766   37715 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1104 10:52:47.546614   37715 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1104 10:52:47.546656   37715 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1104 10:52:47.554873   37715 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1104 10:52:47.562800   37715 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1104 10:52:47.562860   37715 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1104 10:52:47.571095   37715 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1104 10:52:47.578946   37715 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1104 10:52:47.578986   37715 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
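The four grep/rm pairs above are minikube's stale-config check: each kubeconfig-style file under /etc/kubernetes is kept only if it already points at the expected control-plane endpoint, and is removed otherwise so that `kubeadm init` can regenerate it. A condensed shell sketch of the same logic, with the endpoint and file names taken from the log:

    ENDPOINT="https://control-plane.minikube.internal:8443"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      # keep the file only if it already references the expected endpoint
      sudo grep -q "$ENDPOINT" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
    done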
	I1104 10:52:47.587002   37715 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1104 10:52:47.774250   37715 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1104 10:52:59.162857   37715 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1104 10:52:59.162909   37715 kubeadm.go:310] [preflight] Running pre-flight checks
	I1104 10:52:59.162992   37715 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1104 10:52:59.163126   37715 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1104 10:52:59.163235   37715 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1104 10:52:59.163321   37715 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1104 10:52:59.164884   37715 out.go:235]   - Generating certificates and keys ...
	I1104 10:52:59.164965   37715 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1104 10:52:59.165051   37715 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1104 10:52:59.165154   37715 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1104 10:52:59.165262   37715 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1104 10:52:59.165355   37715 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1104 10:52:59.165433   37715 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1104 10:52:59.165512   37715 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1104 10:52:59.165644   37715 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-931571 localhost] and IPs [192.168.39.67 127.0.0.1 ::1]
	I1104 10:52:59.165719   37715 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1104 10:52:59.165854   37715 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-931571 localhost] and IPs [192.168.39.67 127.0.0.1 ::1]
	I1104 10:52:59.165939   37715 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1104 10:52:59.166039   37715 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1104 10:52:59.166120   37715 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1104 10:52:59.166198   37715 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1104 10:52:59.166277   37715 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1104 10:52:59.166352   37715 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1104 10:52:59.166437   37715 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1104 10:52:59.166524   37715 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1104 10:52:59.166602   37715 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1104 10:52:59.166715   37715 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1104 10:52:59.166813   37715 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1104 10:52:59.168314   37715 out.go:235]   - Booting up control plane ...
	I1104 10:52:59.168430   37715 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1104 10:52:59.168528   37715 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1104 10:52:59.168619   37715 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1104 10:52:59.168745   37715 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1104 10:52:59.168864   37715 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1104 10:52:59.168907   37715 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1104 10:52:59.169020   37715 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1104 10:52:59.169142   37715 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1104 10:52:59.169244   37715 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.501850183s
	I1104 10:52:59.169346   37715 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1104 10:52:59.169435   37715 kubeadm.go:310] [api-check] The API server is healthy after 5.721436597s
	I1104 10:52:59.169568   37715 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1104 10:52:59.169699   37715 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1104 10:52:59.169786   37715 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1104 10:52:59.169979   37715 kubeadm.go:310] [mark-control-plane] Marking the node ha-931571 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1104 10:52:59.170060   37715 kubeadm.go:310] [bootstrap-token] Using token: x3krps.xtycqe6w7psx61o7
	I1104 10:52:59.171278   37715 out.go:235]   - Configuring RBAC rules ...
	I1104 10:52:59.171366   37715 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1104 10:52:59.171442   37715 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1104 10:52:59.171566   37715 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1104 10:52:59.171689   37715 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1104 10:52:59.171828   37715 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1104 10:52:59.171935   37715 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1104 10:52:59.172086   37715 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1104 10:52:59.172158   37715 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1104 10:52:59.172220   37715 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1104 10:52:59.172232   37715 kubeadm.go:310] 
	I1104 10:52:59.172322   37715 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1104 10:52:59.172332   37715 kubeadm.go:310] 
	I1104 10:52:59.172461   37715 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1104 10:52:59.172471   37715 kubeadm.go:310] 
	I1104 10:52:59.172512   37715 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1104 10:52:59.172591   37715 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1104 10:52:59.172657   37715 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1104 10:52:59.172671   37715 kubeadm.go:310] 
	I1104 10:52:59.172727   37715 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1104 10:52:59.172733   37715 kubeadm.go:310] 
	I1104 10:52:59.172772   37715 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1104 10:52:59.172780   37715 kubeadm.go:310] 
	I1104 10:52:59.172823   37715 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1104 10:52:59.172919   37715 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1104 10:52:59.173013   37715 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1104 10:52:59.173027   37715 kubeadm.go:310] 
	I1104 10:52:59.173126   37715 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1104 10:52:59.173242   37715 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1104 10:52:59.173250   37715 kubeadm.go:310] 
	I1104 10:52:59.173349   37715 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token x3krps.xtycqe6w7psx61o7 \
	I1104 10:52:59.173475   37715 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:95833ff41306ac800b975e28f2da3aedc13f83db6865dfb0ce3f10d781d75fd9 \
	I1104 10:52:59.173512   37715 kubeadm.go:310] 	--control-plane 
	I1104 10:52:59.173521   37715 kubeadm.go:310] 
	I1104 10:52:59.173615   37715 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1104 10:52:59.173622   37715 kubeadm.go:310] 
	I1104 10:52:59.173728   37715 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token x3krps.xtycqe6w7psx61o7 \
	I1104 10:52:59.173851   37715 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:95833ff41306ac800b975e28f2da3aedc13f83db6865dfb0ce3f10d781d75fd9 
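The join commands kubeadm prints above are the standard way to grow this cluster. The token and CA-cert hash shown are the real values from this run but are short-lived, so the placeholders below stand in for whatever `kubeadm token create --print-join-command` would return later. Because the upload-certs phase was skipped, a control-plane join additionally requires the CA certificates and service-account keys to be copied to the new node first (which is what minikube does itself for ha-931571-m02 further down):

    # Worker node:
    kubeadm join control-plane.minikube.internal:8443 \
      --token <token> --discovery-token-ca-cert-hash sha256:<hash>
    # Additional control-plane node (CA certs and sa keys must already be on the node):
    kubeadm join control-plane.minikube.internal:8443 \
      --token <token> --discovery-token-ca-cert-hash sha256:<hash> --control-plane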
	I1104 10:52:59.173864   37715 cni.go:84] Creating CNI manager for ""
	I1104 10:52:59.173870   37715 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1104 10:52:59.175270   37715 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1104 10:52:59.176515   37715 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1104 10:52:59.181311   37715 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.2/kubectl ...
	I1104 10:52:59.181330   37715 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1104 10:52:59.199374   37715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1104 10:52:59.595605   37715 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1104 10:52:59.595735   37715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1104 10:52:59.595746   37715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-931571 minikube.k8s.io/updated_at=2024_11_04T10_52_59_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=c03dd974d73f8853a1a57928c124797a5ae24dc4 minikube.k8s.io/name=ha-931571 minikube.k8s.io/primary=true
	I1104 10:52:59.607016   37715 ops.go:34] apiserver oom_adj: -16
	I1104 10:52:59.726325   37715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1104 10:53:00.227237   37715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1104 10:53:00.727360   37715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1104 10:53:01.226637   37715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1104 10:53:01.727035   37715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1104 10:53:02.226405   37715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1104 10:53:02.727470   37715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1104 10:53:03.227029   37715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1104 10:53:03.337760   37715 kubeadm.go:1113] duration metric: took 3.742086638s to wait for elevateKubeSystemPrivileges
	I1104 10:53:03.337799   37715 kubeadm.go:394] duration metric: took 15.888837987s to StartCluster
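The repeated `kubectl get sa default` calls above poll until the "default" ServiceAccount exists; minikube waits for it (alongside creating the minikube-rbac ClusterRoleBinding) before declaring the privilege-elevation step done. A hedged shell equivalent of that wait loop, with the binary and kubeconfig paths as in the log and an illustrative fixed sleep instead of minikube's own timing:

    until sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done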
	I1104 10:53:03.337821   37715 settings.go:142] acquiring lock: {Name:mk5550833c1f1a4ab4fbb2bf42ff8bd7f6341220 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 10:53:03.337905   37715 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19906-19898/kubeconfig
	I1104 10:53:03.338737   37715 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19906-19898/kubeconfig: {Name:mk164657c178e6abe086acb5c1cadb969cb2b0cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 10:53:03.338982   37715 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1104 10:53:03.338988   37715 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.67 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1104 10:53:03.339014   37715 start.go:241] waiting for startup goroutines ...
	I1104 10:53:03.339062   37715 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1104 10:53:03.339167   37715 addons.go:69] Setting default-storageclass=true in profile "ha-931571"
	I1104 10:53:03.339173   37715 addons.go:69] Setting storage-provisioner=true in profile "ha-931571"
	I1104 10:53:03.339185   37715 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-931571"
	I1104 10:53:03.339200   37715 addons.go:234] Setting addon storage-provisioner=true in "ha-931571"
	I1104 10:53:03.339229   37715 config.go:182] Loaded profile config "ha-931571": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1104 10:53:03.339239   37715 host.go:66] Checking if "ha-931571" exists ...
	I1104 10:53:03.339632   37715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 10:53:03.339672   37715 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 10:53:03.339677   37715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 10:53:03.339713   37715 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 10:53:03.360893   37715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40211
	I1104 10:53:03.360926   37715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37163
	I1104 10:53:03.361436   37715 main.go:141] libmachine: () Calling .GetVersion
	I1104 10:53:03.361473   37715 main.go:141] libmachine: () Calling .GetVersion
	I1104 10:53:03.361990   37715 main.go:141] libmachine: Using API Version  1
	I1104 10:53:03.362007   37715 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 10:53:03.362132   37715 main.go:141] libmachine: Using API Version  1
	I1104 10:53:03.362158   37715 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 10:53:03.362362   37715 main.go:141] libmachine: () Calling .GetMachineName
	I1104 10:53:03.362495   37715 main.go:141] libmachine: () Calling .GetMachineName
	I1104 10:53:03.362668   37715 main.go:141] libmachine: (ha-931571) Calling .GetState
	I1104 10:53:03.362891   37715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 10:53:03.362932   37715 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 10:53:03.365045   37715 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19906-19898/kubeconfig
	I1104 10:53:03.365435   37715 kapi.go:59] client config for ha-931571: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/client.crt", KeyFile:"/home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/client.key", CAFile:"/home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2437da0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1104 10:53:03.365987   37715 cert_rotation.go:140] Starting client certificate rotation controller
	I1104 10:53:03.366272   37715 addons.go:234] Setting addon default-storageclass=true in "ha-931571"
	I1104 10:53:03.366318   37715 host.go:66] Checking if "ha-931571" exists ...
	I1104 10:53:03.366699   37715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 10:53:03.366738   37715 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 10:53:03.381218   37715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35443
	I1104 10:53:03.381322   37715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38027
	I1104 10:53:03.381713   37715 main.go:141] libmachine: () Calling .GetVersion
	I1104 10:53:03.381719   37715 main.go:141] libmachine: () Calling .GetVersion
	I1104 10:53:03.382205   37715 main.go:141] libmachine: Using API Version  1
	I1104 10:53:03.382227   37715 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 10:53:03.382357   37715 main.go:141] libmachine: Using API Version  1
	I1104 10:53:03.382372   37715 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 10:53:03.382534   37715 main.go:141] libmachine: () Calling .GetMachineName
	I1104 10:53:03.383016   37715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 10:53:03.383048   37715 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 10:53:03.383535   37715 main.go:141] libmachine: () Calling .GetMachineName
	I1104 10:53:03.383708   37715 main.go:141] libmachine: (ha-931571) Calling .GetState
	I1104 10:53:03.385592   37715 main.go:141] libmachine: (ha-931571) Calling .DriverName
	I1104 10:53:03.387622   37715 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1104 10:53:03.388963   37715 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1104 10:53:03.388985   37715 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1104 10:53:03.389004   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHHostname
	I1104 10:53:03.392017   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:53:03.392435   37715 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 10:53:03.392480   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:53:03.392570   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHPort
	I1104 10:53:03.392752   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 10:53:03.392874   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHUsername
	I1104 10:53:03.393020   37715 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571/id_rsa Username:docker}
	I1104 10:53:03.398269   37715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34587
	I1104 10:53:03.398748   37715 main.go:141] libmachine: () Calling .GetVersion
	I1104 10:53:03.399262   37715 main.go:141] libmachine: Using API Version  1
	I1104 10:53:03.399294   37715 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 10:53:03.399614   37715 main.go:141] libmachine: () Calling .GetMachineName
	I1104 10:53:03.399786   37715 main.go:141] libmachine: (ha-931571) Calling .GetState
	I1104 10:53:03.401287   37715 main.go:141] libmachine: (ha-931571) Calling .DriverName
	I1104 10:53:03.401486   37715 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1104 10:53:03.401502   37715 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1104 10:53:03.401529   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHHostname
	I1104 10:53:03.404218   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:53:03.404573   37715 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 10:53:03.404595   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:53:03.404677   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHPort
	I1104 10:53:03.404848   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 10:53:03.404981   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHUsername
	I1104 10:53:03.405135   37715 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571/id_rsa Username:docker}
	I1104 10:53:03.489842   37715 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1104 10:53:03.554612   37715 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1104 10:53:03.583845   37715 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1104 10:53:03.952361   37715 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
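The long sed pipeline a few lines above rewrites the coredns ConfigMap in place so that host.minikube.internal resolves to the host-side gateway, and also inserts the `log` plugin line ahead of `errors`. Assuming the stock Corefile layout, the injected stanza looks roughly like this (IP taken from the log):

    hosts {
       192.168.39.1 host.minikube.internal
       fallthrough
    }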
	I1104 10:53:03.952436   37715 main.go:141] libmachine: Making call to close driver server
	I1104 10:53:03.952460   37715 main.go:141] libmachine: (ha-931571) Calling .Close
	I1104 10:53:03.952742   37715 main.go:141] libmachine: Successfully made call to close driver server
	I1104 10:53:03.952762   37715 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 10:53:03.952762   37715 main.go:141] libmachine: (ha-931571) DBG | Closing plugin on server side
	I1104 10:53:03.952772   37715 main.go:141] libmachine: Making call to close driver server
	I1104 10:53:03.952781   37715 main.go:141] libmachine: (ha-931571) Calling .Close
	I1104 10:53:03.952966   37715 main.go:141] libmachine: Successfully made call to close driver server
	I1104 10:53:03.952981   37715 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 10:53:03.953045   37715 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1104 10:53:03.953065   37715 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1104 10:53:03.953164   37715 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I1104 10:53:03.953175   37715 round_trippers.go:469] Request Headers:
	I1104 10:53:03.953187   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:53:03.953195   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:53:03.960797   37715 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1104 10:53:03.961342   37715 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1104 10:53:03.961355   37715 round_trippers.go:469] Request Headers:
	I1104 10:53:03.961363   37715 round_trippers.go:473]     Content-Type: application/json
	I1104 10:53:03.961367   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:53:03.961369   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:53:03.963493   37715 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1104 10:53:03.963694   37715 main.go:141] libmachine: Making call to close driver server
	I1104 10:53:03.963715   37715 main.go:141] libmachine: (ha-931571) Calling .Close
	I1104 10:53:03.964004   37715 main.go:141] libmachine: Successfully made call to close driver server
	I1104 10:53:03.964021   37715 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 10:53:03.964021   37715 main.go:141] libmachine: (ha-931571) DBG | Closing plugin on server side
	I1104 10:53:04.222705   37715 main.go:141] libmachine: Making call to close driver server
	I1104 10:53:04.222735   37715 main.go:141] libmachine: (ha-931571) Calling .Close
	I1104 10:53:04.223063   37715 main.go:141] libmachine: (ha-931571) DBG | Closing plugin on server side
	I1104 10:53:04.223090   37715 main.go:141] libmachine: Successfully made call to close driver server
	I1104 10:53:04.223120   37715 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 10:53:04.223137   37715 main.go:141] libmachine: Making call to close driver server
	I1104 10:53:04.223149   37715 main.go:141] libmachine: (ha-931571) Calling .Close
	I1104 10:53:04.223361   37715 main.go:141] libmachine: Successfully made call to close driver server
	I1104 10:53:04.223375   37715 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 10:53:04.225261   37715 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I1104 10:53:04.226730   37715 addons.go:510] duration metric: took 887.697522ms for enable addons: enabled=[default-storageclass storage-provisioner]
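With the two addons enabled, a quick way to confirm they are healthy is to check the provisioner pod and the default StorageClass. The commands below are standard kubectl; the pod name is assumed from the storage-provisioner manifest, while the StorageClass name "standard" comes from the PUT request logged above:

    kubectl --context ha-931571 -n kube-system get pod storage-provisioner
    kubectl --context ha-931571 get storageclass standard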
	I1104 10:53:04.226762   37715 start.go:246] waiting for cluster config update ...
	I1104 10:53:04.226778   37715 start.go:255] writing updated cluster config ...
	I1104 10:53:04.228532   37715 out.go:201] 
	I1104 10:53:04.229911   37715 config.go:182] Loaded profile config "ha-931571": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1104 10:53:04.229982   37715 profile.go:143] Saving config to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/config.json ...
	I1104 10:53:04.231623   37715 out.go:177] * Starting "ha-931571-m02" control-plane node in "ha-931571" cluster
	I1104 10:53:04.233345   37715 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1104 10:53:04.233368   37715 cache.go:56] Caching tarball of preloaded images
	I1104 10:53:04.233465   37715 preload.go:172] Found /home/jenkins/minikube-integration/19906-19898/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1104 10:53:04.233476   37715 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1104 10:53:04.233547   37715 profile.go:143] Saving config to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/config.json ...
	I1104 10:53:04.233880   37715 start.go:360] acquireMachinesLock for ha-931571-m02: {Name:mka4ed965095cfb58c9b0f5916edff39b4236a2f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1104 10:53:04.233922   37715 start.go:364] duration metric: took 22.549µs to acquireMachinesLock for "ha-931571-m02"
	I1104 10:53:04.233935   37715 start.go:93] Provisioning new machine with config: &{Name:ha-931571 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-931571 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.67 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1104 10:53:04.234001   37715 start.go:125] createHost starting for "m02" (driver="kvm2")
	I1104 10:53:04.235719   37715 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1104 10:53:04.235815   37715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 10:53:04.235858   37715 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 10:53:04.250864   37715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34407
	I1104 10:53:04.251327   37715 main.go:141] libmachine: () Calling .GetVersion
	I1104 10:53:04.251891   37715 main.go:141] libmachine: Using API Version  1
	I1104 10:53:04.251920   37715 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 10:53:04.252265   37715 main.go:141] libmachine: () Calling .GetMachineName
	I1104 10:53:04.252475   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetMachineName
	I1104 10:53:04.252609   37715 main.go:141] libmachine: (ha-931571-m02) Calling .DriverName
	I1104 10:53:04.252797   37715 start.go:159] libmachine.API.Create for "ha-931571" (driver="kvm2")
	I1104 10:53:04.252829   37715 client.go:168] LocalClient.Create starting
	I1104 10:53:04.252866   37715 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem
	I1104 10:53:04.252907   37715 main.go:141] libmachine: Decoding PEM data...
	I1104 10:53:04.252928   37715 main.go:141] libmachine: Parsing certificate...
	I1104 10:53:04.252995   37715 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem
	I1104 10:53:04.253023   37715 main.go:141] libmachine: Decoding PEM data...
	I1104 10:53:04.253038   37715 main.go:141] libmachine: Parsing certificate...
	I1104 10:53:04.253066   37715 main.go:141] libmachine: Running pre-create checks...
	I1104 10:53:04.253077   37715 main.go:141] libmachine: (ha-931571-m02) Calling .PreCreateCheck
	I1104 10:53:04.253220   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetConfigRaw
	I1104 10:53:04.253654   37715 main.go:141] libmachine: Creating machine...
	I1104 10:53:04.253672   37715 main.go:141] libmachine: (ha-931571-m02) Calling .Create
	I1104 10:53:04.253800   37715 main.go:141] libmachine: (ha-931571-m02) Creating KVM machine...
	I1104 10:53:04.254992   37715 main.go:141] libmachine: (ha-931571-m02) DBG | found existing default KVM network
	I1104 10:53:04.255150   37715 main.go:141] libmachine: (ha-931571-m02) DBG | found existing private KVM network mk-ha-931571
	I1104 10:53:04.255299   37715 main.go:141] libmachine: (ha-931571-m02) Setting up store path in /home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571-m02 ...
	I1104 10:53:04.255322   37715 main.go:141] libmachine: (ha-931571-m02) Building disk image from file:///home/jenkins/minikube-integration/19906-19898/.minikube/cache/iso/amd64/minikube-v1.34.0-1730282777-19883-amd64.iso
	I1104 10:53:04.255385   37715 main.go:141] libmachine: (ha-931571-m02) DBG | I1104 10:53:04.255280   38069 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19906-19898/.minikube
	I1104 10:53:04.255479   37715 main.go:141] libmachine: (ha-931571-m02) Downloading /home/jenkins/minikube-integration/19906-19898/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19906-19898/.minikube/cache/iso/amd64/minikube-v1.34.0-1730282777-19883-amd64.iso...
	I1104 10:53:04.500647   37715 main.go:141] libmachine: (ha-931571-m02) DBG | I1104 10:53:04.500534   38069 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571-m02/id_rsa...
	I1104 10:53:04.797066   37715 main.go:141] libmachine: (ha-931571-m02) DBG | I1104 10:53:04.796939   38069 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571-m02/ha-931571-m02.rawdisk...
	I1104 10:53:04.797094   37715 main.go:141] libmachine: (ha-931571-m02) DBG | Writing magic tar header
	I1104 10:53:04.797104   37715 main.go:141] libmachine: (ha-931571-m02) DBG | Writing SSH key tar header
	I1104 10:53:04.797111   37715 main.go:141] libmachine: (ha-931571-m02) DBG | I1104 10:53:04.797059   38069 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571-m02 ...
	I1104 10:53:04.797220   37715 main.go:141] libmachine: (ha-931571-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571-m02
	I1104 10:53:04.797261   37715 main.go:141] libmachine: (ha-931571-m02) Setting executable bit set on /home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571-m02 (perms=drwx------)
	I1104 10:53:04.797271   37715 main.go:141] libmachine: (ha-931571-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19906-19898/.minikube/machines
	I1104 10:53:04.797289   37715 main.go:141] libmachine: (ha-931571-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19906-19898/.minikube
	I1104 10:53:04.797298   37715 main.go:141] libmachine: (ha-931571-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19906-19898
	I1104 10:53:04.797310   37715 main.go:141] libmachine: (ha-931571-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1104 10:53:04.797318   37715 main.go:141] libmachine: (ha-931571-m02) DBG | Checking permissions on dir: /home/jenkins
	I1104 10:53:04.797331   37715 main.go:141] libmachine: (ha-931571-m02) DBG | Checking permissions on dir: /home
	I1104 10:53:04.797349   37715 main.go:141] libmachine: (ha-931571-m02) Setting executable bit set on /home/jenkins/minikube-integration/19906-19898/.minikube/machines (perms=drwxr-xr-x)
	I1104 10:53:04.797357   37715 main.go:141] libmachine: (ha-931571-m02) DBG | Skipping /home - not owner
	I1104 10:53:04.797376   37715 main.go:141] libmachine: (ha-931571-m02) Setting executable bit set on /home/jenkins/minikube-integration/19906-19898/.minikube (perms=drwxr-xr-x)
	I1104 10:53:04.797389   37715 main.go:141] libmachine: (ha-931571-m02) Setting executable bit set on /home/jenkins/minikube-integration/19906-19898 (perms=drwxrwxr-x)
	I1104 10:53:04.797401   37715 main.go:141] libmachine: (ha-931571-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1104 10:53:04.797412   37715 main.go:141] libmachine: (ha-931571-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1104 10:53:04.797440   37715 main.go:141] libmachine: (ha-931571-m02) Creating domain...
	I1104 10:53:04.798407   37715 main.go:141] libmachine: (ha-931571-m02) define libvirt domain using xml: 
	I1104 10:53:04.798425   37715 main.go:141] libmachine: (ha-931571-m02) <domain type='kvm'>
	I1104 10:53:04.798436   37715 main.go:141] libmachine: (ha-931571-m02)   <name>ha-931571-m02</name>
	I1104 10:53:04.798449   37715 main.go:141] libmachine: (ha-931571-m02)   <memory unit='MiB'>2200</memory>
	I1104 10:53:04.798465   37715 main.go:141] libmachine: (ha-931571-m02)   <vcpu>2</vcpu>
	I1104 10:53:04.798472   37715 main.go:141] libmachine: (ha-931571-m02)   <features>
	I1104 10:53:04.798477   37715 main.go:141] libmachine: (ha-931571-m02)     <acpi/>
	I1104 10:53:04.798481   37715 main.go:141] libmachine: (ha-931571-m02)     <apic/>
	I1104 10:53:04.798486   37715 main.go:141] libmachine: (ha-931571-m02)     <pae/>
	I1104 10:53:04.798492   37715 main.go:141] libmachine: (ha-931571-m02)     
	I1104 10:53:04.798498   37715 main.go:141] libmachine: (ha-931571-m02)   </features>
	I1104 10:53:04.798502   37715 main.go:141] libmachine: (ha-931571-m02)   <cpu mode='host-passthrough'>
	I1104 10:53:04.798507   37715 main.go:141] libmachine: (ha-931571-m02)   
	I1104 10:53:04.798512   37715 main.go:141] libmachine: (ha-931571-m02)   </cpu>
	I1104 10:53:04.798522   37715 main.go:141] libmachine: (ha-931571-m02)   <os>
	I1104 10:53:04.798534   37715 main.go:141] libmachine: (ha-931571-m02)     <type>hvm</type>
	I1104 10:53:04.798546   37715 main.go:141] libmachine: (ha-931571-m02)     <boot dev='cdrom'/>
	I1104 10:53:04.798552   37715 main.go:141] libmachine: (ha-931571-m02)     <boot dev='hd'/>
	I1104 10:53:04.798564   37715 main.go:141] libmachine: (ha-931571-m02)     <bootmenu enable='no'/>
	I1104 10:53:04.798571   37715 main.go:141] libmachine: (ha-931571-m02)   </os>
	I1104 10:53:04.798580   37715 main.go:141] libmachine: (ha-931571-m02)   <devices>
	I1104 10:53:04.798585   37715 main.go:141] libmachine: (ha-931571-m02)     <disk type='file' device='cdrom'>
	I1104 10:53:04.798596   37715 main.go:141] libmachine: (ha-931571-m02)       <source file='/home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571-m02/boot2docker.iso'/>
	I1104 10:53:04.798601   37715 main.go:141] libmachine: (ha-931571-m02)       <target dev='hdc' bus='scsi'/>
	I1104 10:53:04.798630   37715 main.go:141] libmachine: (ha-931571-m02)       <readonly/>
	I1104 10:53:04.798653   37715 main.go:141] libmachine: (ha-931571-m02)     </disk>
	I1104 10:53:04.798678   37715 main.go:141] libmachine: (ha-931571-m02)     <disk type='file' device='disk'>
	I1104 10:53:04.798702   37715 main.go:141] libmachine: (ha-931571-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1104 10:53:04.798718   37715 main.go:141] libmachine: (ha-931571-m02)       <source file='/home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571-m02/ha-931571-m02.rawdisk'/>
	I1104 10:53:04.798732   37715 main.go:141] libmachine: (ha-931571-m02)       <target dev='hda' bus='virtio'/>
	I1104 10:53:04.798747   37715 main.go:141] libmachine: (ha-931571-m02)     </disk>
	I1104 10:53:04.798763   37715 main.go:141] libmachine: (ha-931571-m02)     <interface type='network'>
	I1104 10:53:04.798783   37715 main.go:141] libmachine: (ha-931571-m02)       <source network='mk-ha-931571'/>
	I1104 10:53:04.798799   37715 main.go:141] libmachine: (ha-931571-m02)       <model type='virtio'/>
	I1104 10:53:04.798811   37715 main.go:141] libmachine: (ha-931571-m02)     </interface>
	I1104 10:53:04.798822   37715 main.go:141] libmachine: (ha-931571-m02)     <interface type='network'>
	I1104 10:53:04.798835   37715 main.go:141] libmachine: (ha-931571-m02)       <source network='default'/>
	I1104 10:53:04.798846   37715 main.go:141] libmachine: (ha-931571-m02)       <model type='virtio'/>
	I1104 10:53:04.798858   37715 main.go:141] libmachine: (ha-931571-m02)     </interface>
	I1104 10:53:04.798868   37715 main.go:141] libmachine: (ha-931571-m02)     <serial type='pty'>
	I1104 10:53:04.798881   37715 main.go:141] libmachine: (ha-931571-m02)       <target port='0'/>
	I1104 10:53:04.798892   37715 main.go:141] libmachine: (ha-931571-m02)     </serial>
	I1104 10:53:04.798901   37715 main.go:141] libmachine: (ha-931571-m02)     <console type='pty'>
	I1104 10:53:04.798910   37715 main.go:141] libmachine: (ha-931571-m02)       <target type='serial' port='0'/>
	I1104 10:53:04.798916   37715 main.go:141] libmachine: (ha-931571-m02)     </console>
	I1104 10:53:04.798925   37715 main.go:141] libmachine: (ha-931571-m02)     <rng model='virtio'>
	I1104 10:53:04.798938   37715 main.go:141] libmachine: (ha-931571-m02)       <backend model='random'>/dev/random</backend>
	I1104 10:53:04.798948   37715 main.go:141] libmachine: (ha-931571-m02)     </rng>
	I1104 10:53:04.798958   37715 main.go:141] libmachine: (ha-931571-m02)     
	I1104 10:53:04.798967   37715 main.go:141] libmachine: (ha-931571-m02)     
	I1104 10:53:04.798977   37715 main.go:141] libmachine: (ha-931571-m02)   </devices>
	I1104 10:53:04.798990   37715 main.go:141] libmachine: (ha-931571-m02) </domain>
	I1104 10:53:04.799001   37715 main.go:141] libmachine: (ha-931571-m02) 
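The XML above is the libvirt domain minikube defines for the second control-plane VM: 2 vCPUs, 2200 MiB of RAM, a boot ISO plus a raw disk, and two virtio NICs (one on the private mk-ha-931571 network, one on libvirt's default network). A sketch of inspecting the defined domain with stock virsh commands against the same qemu:///system URI from the cluster config, assuming virsh is available on the host:

    virsh -c qemu:///system dumpxml ha-931571-m02     # show the defined domain XML
    virsh -c qemu:///system domifaddr ha-931571-m02   # list its DHCP-assigned addresses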
	I1104 10:53:04.805977   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined MAC address 52:54:00:5e:b4:47 in network default
	I1104 10:53:04.806519   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:04.806536   37715 main.go:141] libmachine: (ha-931571-m02) Ensuring networks are active...
	I1104 10:53:04.807291   37715 main.go:141] libmachine: (ha-931571-m02) Ensuring network default is active
	I1104 10:53:04.807614   37715 main.go:141] libmachine: (ha-931571-m02) Ensuring network mk-ha-931571 is active
	I1104 10:53:04.807998   37715 main.go:141] libmachine: (ha-931571-m02) Getting domain xml...
	I1104 10:53:04.808751   37715 main.go:141] libmachine: (ha-931571-m02) Creating domain...
	I1104 10:53:06.037689   37715 main.go:141] libmachine: (ha-931571-m02) Waiting to get IP...
	I1104 10:53:06.038416   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:06.038827   37715 main.go:141] libmachine: (ha-931571-m02) DBG | unable to find current IP address of domain ha-931571-m02 in network mk-ha-931571
	I1104 10:53:06.038856   37715 main.go:141] libmachine: (ha-931571-m02) DBG | I1104 10:53:06.038804   38069 retry.go:31] will retry after 244.727015ms: waiting for machine to come up
	I1104 10:53:06.285395   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:06.285853   37715 main.go:141] libmachine: (ha-931571-m02) DBG | unable to find current IP address of domain ha-931571-m02 in network mk-ha-931571
	I1104 10:53:06.285879   37715 main.go:141] libmachine: (ha-931571-m02) DBG | I1104 10:53:06.285815   38069 retry.go:31] will retry after 291.944786ms: waiting for machine to come up
	I1104 10:53:06.579413   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:06.579939   37715 main.go:141] libmachine: (ha-931571-m02) DBG | unable to find current IP address of domain ha-931571-m02 in network mk-ha-931571
	I1104 10:53:06.579964   37715 main.go:141] libmachine: (ha-931571-m02) DBG | I1104 10:53:06.579896   38069 retry.go:31] will retry after 446.911163ms: waiting for machine to come up
	I1104 10:53:07.028452   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:07.028838   37715 main.go:141] libmachine: (ha-931571-m02) DBG | unable to find current IP address of domain ha-931571-m02 in network mk-ha-931571
	I1104 10:53:07.028870   37715 main.go:141] libmachine: (ha-931571-m02) DBG | I1104 10:53:07.028792   38069 retry.go:31] will retry after 472.390697ms: waiting for machine to come up
	I1104 10:53:07.502204   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:07.502568   37715 main.go:141] libmachine: (ha-931571-m02) DBG | unable to find current IP address of domain ha-931571-m02 in network mk-ha-931571
	I1104 10:53:07.502592   37715 main.go:141] libmachine: (ha-931571-m02) DBG | I1104 10:53:07.502526   38069 retry.go:31] will retry after 662.15145ms: waiting for machine to come up
	I1104 10:53:08.166152   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:08.166583   37715 main.go:141] libmachine: (ha-931571-m02) DBG | unable to find current IP address of domain ha-931571-m02 in network mk-ha-931571
	I1104 10:53:08.166609   37715 main.go:141] libmachine: (ha-931571-m02) DBG | I1104 10:53:08.166538   38069 retry.go:31] will retry after 886.374206ms: waiting for machine to come up
	I1104 10:53:09.054240   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:09.054689   37715 main.go:141] libmachine: (ha-931571-m02) DBG | unable to find current IP address of domain ha-931571-m02 in network mk-ha-931571
	I1104 10:53:09.054715   37715 main.go:141] libmachine: (ha-931571-m02) DBG | I1104 10:53:09.054670   38069 retry.go:31] will retry after 963.475989ms: waiting for machine to come up
	I1104 10:53:10.020142   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:10.020587   37715 main.go:141] libmachine: (ha-931571-m02) DBG | unable to find current IP address of domain ha-931571-m02 in network mk-ha-931571
	I1104 10:53:10.020630   37715 main.go:141] libmachine: (ha-931571-m02) DBG | I1104 10:53:10.020571   38069 retry.go:31] will retry after 1.332433034s: waiting for machine to come up
	I1104 10:53:11.354908   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:11.355309   37715 main.go:141] libmachine: (ha-931571-m02) DBG | unable to find current IP address of domain ha-931571-m02 in network mk-ha-931571
	I1104 10:53:11.355331   37715 main.go:141] libmachine: (ha-931571-m02) DBG | I1104 10:53:11.355273   38069 retry.go:31] will retry after 1.652203867s: waiting for machine to come up
	I1104 10:53:13.009876   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:13.010297   37715 main.go:141] libmachine: (ha-931571-m02) DBG | unable to find current IP address of domain ha-931571-m02 in network mk-ha-931571
	I1104 10:53:13.010319   37715 main.go:141] libmachine: (ha-931571-m02) DBG | I1104 10:53:13.010254   38069 retry.go:31] will retry after 2.320402176s: waiting for machine to come up
	I1104 10:53:15.332045   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:15.332414   37715 main.go:141] libmachine: (ha-931571-m02) DBG | unable to find current IP address of domain ha-931571-m02 in network mk-ha-931571
	I1104 10:53:15.332441   37715 main.go:141] libmachine: (ha-931571-m02) DBG | I1104 10:53:15.332356   38069 retry.go:31] will retry after 2.652871808s: waiting for machine to come up
	I1104 10:53:17.987774   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:17.988211   37715 main.go:141] libmachine: (ha-931571-m02) DBG | unable to find current IP address of domain ha-931571-m02 in network mk-ha-931571
	I1104 10:53:17.988231   37715 main.go:141] libmachine: (ha-931571-m02) DBG | I1104 10:53:17.988174   38069 retry.go:31] will retry after 3.518414185s: waiting for machine to come up
	I1104 10:53:21.508515   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:21.508901   37715 main.go:141] libmachine: (ha-931571-m02) DBG | unable to find current IP address of domain ha-931571-m02 in network mk-ha-931571
	I1104 10:53:21.508926   37715 main.go:141] libmachine: (ha-931571-m02) DBG | I1104 10:53:21.508866   38069 retry.go:31] will retry after 4.345855832s: waiting for machine to come up
	I1104 10:53:25.856753   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:25.857143   37715 main.go:141] libmachine: (ha-931571-m02) Found IP for machine: 192.168.39.245
	I1104 10:53:25.857167   37715 main.go:141] libmachine: (ha-931571-m02) Reserving static IP address...
	I1104 10:53:25.857181   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has current primary IP address 192.168.39.245 and MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:25.857621   37715 main.go:141] libmachine: (ha-931571-m02) DBG | unable to find host DHCP lease matching {name: "ha-931571-m02", mac: "52:54:00:5c:86:6b", ip: "192.168.39.245"} in network mk-ha-931571
	I1104 10:53:25.931250   37715 main.go:141] libmachine: (ha-931571-m02) DBG | Getting to WaitForSSH function...
	I1104 10:53:25.931278   37715 main.go:141] libmachine: (ha-931571-m02) Reserved static IP address: 192.168.39.245
	I1104 10:53:25.931296   37715 main.go:141] libmachine: (ha-931571-m02) Waiting for SSH to be available...
	I1104 10:53:25.933968   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:25.934431   37715 main.go:141] libmachine: (ha-931571-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:86:6b", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:53:18 +0000 UTC Type:0 Mac:52:54:00:5c:86:6b Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:minikube Clientid:01:52:54:00:5c:86:6b}
	I1104 10:53:25.934489   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:25.934562   37715 main.go:141] libmachine: (ha-931571-m02) DBG | Using SSH client type: external
	I1104 10:53:25.934591   37715 main.go:141] libmachine: (ha-931571-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571-m02/id_rsa (-rw-------)
	I1104 10:53:25.934652   37715 main.go:141] libmachine: (ha-931571-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.245 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1104 10:53:25.934674   37715 main.go:141] libmachine: (ha-931571-m02) DBG | About to run SSH command:
	I1104 10:53:25.934692   37715 main.go:141] libmachine: (ha-931571-m02) DBG | exit 0
	I1104 10:53:26.068913   37715 main.go:141] libmachine: (ha-931571-m02) DBG | SSH cmd err, output: <nil>: 
	I1104 10:53:26.069182   37715 main.go:141] libmachine: (ha-931571-m02) KVM machine creation complete!
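	The retry.go lines above show the pattern minikube's KVM driver follows while the new VM boots: poll libvirt for the domain's DHCP lease and, when no IP is found yet, sleep for a growing interval before polling again. A minimal sketch of that wait-with-backoff pattern in Go (illustrative only; the function name, intervals, and simulated lookup are assumptions, not minikube's actual retry.go):

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// waitForIP polls lookup until it returns an address or attempts run out,
	// growing the delay between polls much like the retry intervals in the log.
	func waitForIP(lookup func() (string, error), attempts int) (string, error) {
		delay := 500 * time.Millisecond
		for i := 0; i < attempts; i++ {
			if ip, err := lookup(); err == nil {
				return ip, nil
			}
			fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
			time.Sleep(delay)
			delay = delay * 3 / 2 // back off a little more each round
		}
		return "", errors.New("timed out waiting for machine to come up")
	}

	func main() {
		calls := 0
		// Simulated lookup: the DHCP lease "appears" on the fourth poll.
		ip, err := waitForIP(func() (string, error) {
			calls++
			if calls < 4 {
				return "", errors.New("unable to find current IP address")
			}
			return "192.168.39.245", nil
		}, 10)
		fmt.Println(ip, err)
	}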
	I1104 10:53:26.069569   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetConfigRaw
	I1104 10:53:26.070061   37715 main.go:141] libmachine: (ha-931571-m02) Calling .DriverName
	I1104 10:53:26.070245   37715 main.go:141] libmachine: (ha-931571-m02) Calling .DriverName
	I1104 10:53:26.070421   37715 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1104 10:53:26.070438   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetState
	I1104 10:53:26.071961   37715 main.go:141] libmachine: Detecting operating system of created instance...
	I1104 10:53:26.071975   37715 main.go:141] libmachine: Waiting for SSH to be available...
	I1104 10:53:26.071980   37715 main.go:141] libmachine: Getting to WaitForSSH function...
	I1104 10:53:26.071985   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHHostname
	I1104 10:53:26.074060   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:26.074383   37715 main.go:141] libmachine: (ha-931571-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:86:6b", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:53:18 +0000 UTC Type:0 Mac:52:54:00:5c:86:6b Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-931571-m02 Clientid:01:52:54:00:5c:86:6b}
	I1104 10:53:26.074403   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:26.074574   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHPort
	I1104 10:53:26.074737   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHKeyPath
	I1104 10:53:26.074878   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHKeyPath
	I1104 10:53:26.074976   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHUsername
	I1104 10:53:26.075126   37715 main.go:141] libmachine: Using SSH client type: native
	I1104 10:53:26.075361   37715 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.245 22 <nil> <nil>}
	I1104 10:53:26.075377   37715 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1104 10:53:26.184350   37715 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1104 10:53:26.184379   37715 main.go:141] libmachine: Detecting the provisioner...
	I1104 10:53:26.184395   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHHostname
	I1104 10:53:26.186866   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:26.187176   37715 main.go:141] libmachine: (ha-931571-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:86:6b", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:53:18 +0000 UTC Type:0 Mac:52:54:00:5c:86:6b Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-931571-m02 Clientid:01:52:54:00:5c:86:6b}
	I1104 10:53:26.187196   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:26.187362   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHPort
	I1104 10:53:26.187546   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHKeyPath
	I1104 10:53:26.187699   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHKeyPath
	I1104 10:53:26.187825   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHUsername
	I1104 10:53:26.187985   37715 main.go:141] libmachine: Using SSH client type: native
	I1104 10:53:26.188193   37715 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.245 22 <nil> <nil>}
	I1104 10:53:26.188204   37715 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1104 10:53:26.301614   37715 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1104 10:53:26.301685   37715 main.go:141] libmachine: found compatible host: buildroot
	I1104 10:53:26.301699   37715 main.go:141] libmachine: Provisioning with buildroot...
	I1104 10:53:26.301711   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetMachineName
	I1104 10:53:26.301942   37715 buildroot.go:166] provisioning hostname "ha-931571-m02"
	I1104 10:53:26.301964   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetMachineName
	I1104 10:53:26.302139   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHHostname
	I1104 10:53:26.304767   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:26.305309   37715 main.go:141] libmachine: (ha-931571-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:86:6b", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:53:18 +0000 UTC Type:0 Mac:52:54:00:5c:86:6b Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-931571-m02 Clientid:01:52:54:00:5c:86:6b}
	I1104 10:53:26.305334   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:26.305470   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHPort
	I1104 10:53:26.305626   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHKeyPath
	I1104 10:53:26.305790   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHKeyPath
	I1104 10:53:26.305931   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHUsername
	I1104 10:53:26.306093   37715 main.go:141] libmachine: Using SSH client type: native
	I1104 10:53:26.306297   37715 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.245 22 <nil> <nil>}
	I1104 10:53:26.306310   37715 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-931571-m02 && echo "ha-931571-m02" | sudo tee /etc/hostname
	I1104 10:53:26.430814   37715 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-931571-m02
	
	I1104 10:53:26.430842   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHHostname
	I1104 10:53:26.433622   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:26.433925   37715 main.go:141] libmachine: (ha-931571-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:86:6b", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:53:18 +0000 UTC Type:0 Mac:52:54:00:5c:86:6b Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-931571-m02 Clientid:01:52:54:00:5c:86:6b}
	I1104 10:53:26.433953   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:26.434109   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHPort
	I1104 10:53:26.434330   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHKeyPath
	I1104 10:53:26.434473   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHKeyPath
	I1104 10:53:26.434584   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHUsername
	I1104 10:53:26.434716   37715 main.go:141] libmachine: Using SSH client type: native
	I1104 10:53:26.434907   37715 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.245 22 <nil> <nil>}
	I1104 10:53:26.434931   37715 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-931571-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-931571-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-931571-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1104 10:53:26.553495   37715 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1104 10:53:26.553519   37715 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19906-19898/.minikube CaCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19906-19898/.minikube}
	I1104 10:53:26.553534   37715 buildroot.go:174] setting up certificates
	I1104 10:53:26.553543   37715 provision.go:84] configureAuth start
	I1104 10:53:26.553551   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetMachineName
	I1104 10:53:26.553773   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetIP
	I1104 10:53:26.556203   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:26.556500   37715 main.go:141] libmachine: (ha-931571-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:86:6b", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:53:18 +0000 UTC Type:0 Mac:52:54:00:5c:86:6b Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-931571-m02 Clientid:01:52:54:00:5c:86:6b}
	I1104 10:53:26.556519   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:26.556610   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHHostname
	I1104 10:53:26.558806   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:26.559168   37715 main.go:141] libmachine: (ha-931571-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:86:6b", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:53:18 +0000 UTC Type:0 Mac:52:54:00:5c:86:6b Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-931571-m02 Clientid:01:52:54:00:5c:86:6b}
	I1104 10:53:26.559194   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:26.559467   37715 provision.go:143] copyHostCerts
	I1104 10:53:26.559496   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem
	I1104 10:53:26.559535   37715 exec_runner.go:144] found /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem, removing ...
	I1104 10:53:26.559546   37715 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem
	I1104 10:53:26.559623   37715 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem (1078 bytes)
	I1104 10:53:26.559707   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem
	I1104 10:53:26.559732   37715 exec_runner.go:144] found /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem, removing ...
	I1104 10:53:26.559741   37715 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem
	I1104 10:53:26.559778   37715 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem (1123 bytes)
	I1104 10:53:26.559830   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem
	I1104 10:53:26.559853   37715 exec_runner.go:144] found /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem, removing ...
	I1104 10:53:26.559865   37715 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem
	I1104 10:53:26.559899   37715 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem (1679 bytes)
	I1104 10:53:26.559968   37715 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem org=jenkins.ha-931571-m02 san=[127.0.0.1 192.168.39.245 ha-931571-m02 localhost minikube]
	I1104 10:53:26.827173   37715 provision.go:177] copyRemoteCerts
	I1104 10:53:26.827226   37715 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1104 10:53:26.827248   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHHostname
	I1104 10:53:26.829975   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:26.830343   37715 main.go:141] libmachine: (ha-931571-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:86:6b", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:53:18 +0000 UTC Type:0 Mac:52:54:00:5c:86:6b Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-931571-m02 Clientid:01:52:54:00:5c:86:6b}
	I1104 10:53:26.830372   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:26.830576   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHPort
	I1104 10:53:26.830763   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHKeyPath
	I1104 10:53:26.830912   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHUsername
	I1104 10:53:26.831022   37715 sshutil.go:53] new ssh client: &{IP:192.168.39.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571-m02/id_rsa Username:docker}
	I1104 10:53:26.923318   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1104 10:53:26.923390   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1104 10:53:26.950708   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1104 10:53:26.950773   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1104 10:53:26.976975   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1104 10:53:26.977045   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1104 10:53:27.002230   37715 provision.go:87] duration metric: took 448.676469ms to configureAuth
	I1104 10:53:27.002252   37715 buildroot.go:189] setting minikube options for container-runtime
	I1104 10:53:27.002404   37715 config.go:182] Loaded profile config "ha-931571": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1104 10:53:27.002475   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHHostname
	I1104 10:53:27.005273   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:27.005618   37715 main.go:141] libmachine: (ha-931571-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:86:6b", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:53:18 +0000 UTC Type:0 Mac:52:54:00:5c:86:6b Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-931571-m02 Clientid:01:52:54:00:5c:86:6b}
	I1104 10:53:27.005646   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:27.005772   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHPort
	I1104 10:53:27.005978   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHKeyPath
	I1104 10:53:27.006123   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHKeyPath
	I1104 10:53:27.006279   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHUsername
	I1104 10:53:27.006465   37715 main.go:141] libmachine: Using SSH client type: native
	I1104 10:53:27.006627   37715 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.245 22 <nil> <nil>}
	I1104 10:53:27.006641   37715 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1104 10:53:27.235271   37715 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1104 10:53:27.235297   37715 main.go:141] libmachine: Checking connection to Docker...
	I1104 10:53:27.235305   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetURL
	I1104 10:53:27.236550   37715 main.go:141] libmachine: (ha-931571-m02) DBG | Using libvirt version 6000000
	I1104 10:53:27.238826   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:27.239189   37715 main.go:141] libmachine: (ha-931571-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:86:6b", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:53:18 +0000 UTC Type:0 Mac:52:54:00:5c:86:6b Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-931571-m02 Clientid:01:52:54:00:5c:86:6b}
	I1104 10:53:27.239220   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:27.239401   37715 main.go:141] libmachine: Docker is up and running!
	I1104 10:53:27.239418   37715 main.go:141] libmachine: Reticulating splines...
	I1104 10:53:27.239426   37715 client.go:171] duration metric: took 22.986586779s to LocalClient.Create
	I1104 10:53:27.239451   37715 start.go:167] duration metric: took 22.986656312s to libmachine.API.Create "ha-931571"
	I1104 10:53:27.239472   37715 start.go:293] postStartSetup for "ha-931571-m02" (driver="kvm2")
	I1104 10:53:27.239488   37715 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1104 10:53:27.239510   37715 main.go:141] libmachine: (ha-931571-m02) Calling .DriverName
	I1104 10:53:27.239721   37715 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1104 10:53:27.239747   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHHostname
	I1104 10:53:27.241968   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:27.242332   37715 main.go:141] libmachine: (ha-931571-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:86:6b", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:53:18 +0000 UTC Type:0 Mac:52:54:00:5c:86:6b Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-931571-m02 Clientid:01:52:54:00:5c:86:6b}
	I1104 10:53:27.242352   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:27.242491   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHPort
	I1104 10:53:27.242658   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHKeyPath
	I1104 10:53:27.242769   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHUsername
	I1104 10:53:27.242872   37715 sshutil.go:53] new ssh client: &{IP:192.168.39.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571-m02/id_rsa Username:docker}
	I1104 10:53:27.327061   37715 ssh_runner.go:195] Run: cat /etc/os-release
	I1104 10:53:27.331021   37715 info.go:137] Remote host: Buildroot 2023.02.9
	I1104 10:53:27.331050   37715 filesync.go:126] Scanning /home/jenkins/minikube-integration/19906-19898/.minikube/addons for local assets ...
	I1104 10:53:27.331133   37715 filesync.go:126] Scanning /home/jenkins/minikube-integration/19906-19898/.minikube/files for local assets ...
	I1104 10:53:27.331207   37715 filesync.go:149] local asset: /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem -> 272182.pem in /etc/ssl/certs
	I1104 10:53:27.331218   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem -> /etc/ssl/certs/272182.pem
	I1104 10:53:27.331300   37715 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1104 10:53:27.341280   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem --> /etc/ssl/certs/272182.pem (1708 bytes)
	I1104 10:53:27.363737   37715 start.go:296] duration metric: took 124.248011ms for postStartSetup
	I1104 10:53:27.363783   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetConfigRaw
	I1104 10:53:27.364431   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetIP
	I1104 10:53:27.367195   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:27.367660   37715 main.go:141] libmachine: (ha-931571-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:86:6b", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:53:18 +0000 UTC Type:0 Mac:52:54:00:5c:86:6b Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-931571-m02 Clientid:01:52:54:00:5c:86:6b}
	I1104 10:53:27.367698   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:27.367926   37715 profile.go:143] Saving config to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/config.json ...
	I1104 10:53:27.368121   37715 start.go:128] duration metric: took 23.134111471s to createHost
	I1104 10:53:27.368147   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHHostname
	I1104 10:53:27.370510   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:27.370846   37715 main.go:141] libmachine: (ha-931571-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:86:6b", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:53:18 +0000 UTC Type:0 Mac:52:54:00:5c:86:6b Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-931571-m02 Clientid:01:52:54:00:5c:86:6b}
	I1104 10:53:27.370881   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:27.371043   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHPort
	I1104 10:53:27.371226   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHKeyPath
	I1104 10:53:27.371432   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHKeyPath
	I1104 10:53:27.371573   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHUsername
	I1104 10:53:27.371728   37715 main.go:141] libmachine: Using SSH client type: native
	I1104 10:53:27.371899   37715 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.245 22 <nil> <nil>}
	I1104 10:53:27.371912   37715 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1104 10:53:27.485557   37715 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730717607.449108710
	
	I1104 10:53:27.485578   37715 fix.go:216] guest clock: 1730717607.449108710
	I1104 10:53:27.485585   37715 fix.go:229] Guest: 2024-11-04 10:53:27.44910871 +0000 UTC Remote: 2024-11-04 10:53:27.368133628 +0000 UTC m=+66.039651871 (delta=80.975082ms)
	I1104 10:53:27.485600   37715 fix.go:200] guest clock delta is within tolerance: 80.975082ms
	I1104 10:53:27.485605   37715 start.go:83] releasing machines lock for "ha-931571-m02", held for 23.251676872s
	I1104 10:53:27.485620   37715 main.go:141] libmachine: (ha-931571-m02) Calling .DriverName
	I1104 10:53:27.485857   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetIP
	I1104 10:53:27.488648   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:27.489014   37715 main.go:141] libmachine: (ha-931571-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:86:6b", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:53:18 +0000 UTC Type:0 Mac:52:54:00:5c:86:6b Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-931571-m02 Clientid:01:52:54:00:5c:86:6b}
	I1104 10:53:27.489041   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:27.491305   37715 out.go:177] * Found network options:
	I1104 10:53:27.492602   37715 out.go:177]   - NO_PROXY=192.168.39.67
	W1104 10:53:27.493715   37715 proxy.go:119] fail to check proxy env: Error ip not in block
	I1104 10:53:27.493752   37715 main.go:141] libmachine: (ha-931571-m02) Calling .DriverName
	I1104 10:53:27.494253   37715 main.go:141] libmachine: (ha-931571-m02) Calling .DriverName
	I1104 10:53:27.494447   37715 main.go:141] libmachine: (ha-931571-m02) Calling .DriverName
	I1104 10:53:27.494556   37715 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1104 10:53:27.494595   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHHostname
	W1104 10:53:27.494597   37715 proxy.go:119] fail to check proxy env: Error ip not in block
	I1104 10:53:27.494657   37715 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1104 10:53:27.494679   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHHostname
	I1104 10:53:27.497460   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:27.497637   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:27.497850   37715 main.go:141] libmachine: (ha-931571-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:86:6b", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:53:18 +0000 UTC Type:0 Mac:52:54:00:5c:86:6b Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-931571-m02 Clientid:01:52:54:00:5c:86:6b}
	I1104 10:53:27.497871   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:27.497991   37715 main.go:141] libmachine: (ha-931571-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:86:6b", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:53:18 +0000 UTC Type:0 Mac:52:54:00:5c:86:6b Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-931571-m02 Clientid:01:52:54:00:5c:86:6b}
	I1104 10:53:27.498003   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:27.498025   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHPort
	I1104 10:53:27.498232   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHKeyPath
	I1104 10:53:27.498254   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHPort
	I1104 10:53:27.498403   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHKeyPath
	I1104 10:53:27.498437   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHUsername
	I1104 10:53:27.498538   37715 sshutil.go:53] new ssh client: &{IP:192.168.39.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571-m02/id_rsa Username:docker}
	I1104 10:53:27.498550   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHUsername
	I1104 10:53:27.498773   37715 sshutil.go:53] new ssh client: &{IP:192.168.39.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571-m02/id_rsa Username:docker}
	I1104 10:53:27.735755   37715 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1104 10:53:27.742047   37715 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1104 10:53:27.742118   37715 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1104 10:53:27.757546   37715 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1104 10:53:27.757568   37715 start.go:495] detecting cgroup driver to use...
	I1104 10:53:27.757654   37715 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1104 10:53:27.775341   37715 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1104 10:53:27.789267   37715 docker.go:217] disabling cri-docker service (if available) ...
	I1104 10:53:27.789322   37715 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1104 10:53:27.802395   37715 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1104 10:53:27.815846   37715 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1104 10:53:27.932464   37715 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1104 10:53:28.072054   37715 docker.go:233] disabling docker service ...
	I1104 10:53:28.072113   37715 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1104 10:53:28.085955   37715 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1104 10:53:28.098515   37715 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1104 10:53:28.231393   37715 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1104 10:53:28.348075   37715 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1104 10:53:28.360668   37715 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1104 10:53:28.377621   37715 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1104 10:53:28.377680   37715 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 10:53:28.387614   37715 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1104 10:53:28.387678   37715 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 10:53:28.397527   37715 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 10:53:28.406950   37715 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 10:53:28.416691   37715 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1104 10:53:28.426696   37715 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 10:53:28.436536   37715 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 10:53:28.452706   37715 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
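	The sed commands above edit CRI-O's 02-crio.conf drop-in so the runtime matches what kubeadm expects on this node: the pause image is pinned, the cgroup driver is set to cgroupfs, conmon is moved into the pod cgroup, and unprivileged processes are allowed to bind low ports. After those edits the relevant lines of /etc/crio/crio.conf.d/02-crio.conf would look roughly like this (an illustrative sketch reconstructed from the commands; the section headers are the usual CRI-O ones and are an assumption here, and the real file carries other settings as well):

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]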
	I1104 10:53:28.462377   37715 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1104 10:53:28.471479   37715 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1104 10:53:28.471541   37715 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1104 10:53:28.484536   37715 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1104 10:53:28.493914   37715 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1104 10:53:28.602971   37715 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1104 10:53:28.692433   37715 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1104 10:53:28.692522   37715 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1104 10:53:28.696783   37715 start.go:563] Will wait 60s for crictl version
	I1104 10:53:28.696822   37715 ssh_runner.go:195] Run: which crictl
	I1104 10:53:28.700013   37715 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1104 10:53:28.734056   37715 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1104 10:53:28.734128   37715 ssh_runner.go:195] Run: crio --version
	I1104 10:53:28.760475   37715 ssh_runner.go:195] Run: crio --version
	I1104 10:53:28.789783   37715 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1104 10:53:28.791233   37715 out.go:177]   - env NO_PROXY=192.168.39.67
	I1104 10:53:28.792582   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetIP
	I1104 10:53:28.795120   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:28.795494   37715 main.go:141] libmachine: (ha-931571-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:86:6b", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:53:18 +0000 UTC Type:0 Mac:52:54:00:5c:86:6b Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-931571-m02 Clientid:01:52:54:00:5c:86:6b}
	I1104 10:53:28.795520   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:28.795759   37715 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1104 10:53:28.799797   37715 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1104 10:53:28.811896   37715 mustload.go:65] Loading cluster: ha-931571
	I1104 10:53:28.812115   37715 config.go:182] Loaded profile config "ha-931571": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1104 10:53:28.812360   37715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 10:53:28.812401   37715 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 10:53:28.826717   37715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34275
	I1104 10:53:28.827181   37715 main.go:141] libmachine: () Calling .GetVersion
	I1104 10:53:28.827674   37715 main.go:141] libmachine: Using API Version  1
	I1104 10:53:28.827693   37715 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 10:53:28.828004   37715 main.go:141] libmachine: () Calling .GetMachineName
	I1104 10:53:28.828173   37715 main.go:141] libmachine: (ha-931571) Calling .GetState
	I1104 10:53:28.829698   37715 host.go:66] Checking if "ha-931571" exists ...
	I1104 10:53:28.829978   37715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 10:53:28.830013   37715 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 10:53:28.844302   37715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41319
	I1104 10:53:28.844715   37715 main.go:141] libmachine: () Calling .GetVersion
	I1104 10:53:28.845157   37715 main.go:141] libmachine: Using API Version  1
	I1104 10:53:28.845180   37715 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 10:53:28.845561   37715 main.go:141] libmachine: () Calling .GetMachineName
	I1104 10:53:28.845729   37715 main.go:141] libmachine: (ha-931571) Calling .DriverName
	I1104 10:53:28.845886   37715 certs.go:68] Setting up /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571 for IP: 192.168.39.245
	I1104 10:53:28.845896   37715 certs.go:194] generating shared ca certs ...
	I1104 10:53:28.845908   37715 certs.go:226] acquiring lock for ca certs: {Name:mk4fb0469da6697654d0290fd25edddd463f47c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 10:53:28.846013   37715 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.key
	I1104 10:53:28.846050   37715 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.key
	I1104 10:53:28.846056   37715 certs.go:256] generating profile certs ...
	I1104 10:53:28.846117   37715 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/client.key
	I1104 10:53:28.846138   37715 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.key.44df713a
	I1104 10:53:28.846149   37715 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.crt.44df713a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.67 192.168.39.245 192.168.39.254]
	I1104 10:53:28.973533   37715 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.crt.44df713a ...
	I1104 10:53:28.973558   37715 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.crt.44df713a: {Name:mk251fe01c9791f2c1df00673ac1979d7532e3b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 10:53:28.973716   37715 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.key.44df713a ...
	I1104 10:53:28.973729   37715 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.key.44df713a: {Name:mkef3dc2affbfe3d37549d8d043a12581b7267b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 10:53:28.973806   37715 certs.go:381] copying /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.crt.44df713a -> /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.crt
	I1104 10:53:28.973935   37715 certs.go:385] copying /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.key.44df713a -> /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.key
	I1104 10:53:28.974053   37715 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/proxy-client.key
	I1104 10:53:28.974067   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1104 10:53:28.974079   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1104 10:53:28.974092   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1104 10:53:28.974103   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1104 10:53:28.974114   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1104 10:53:28.974127   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1104 10:53:28.974139   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1104 10:53:28.974151   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1104 10:53:28.974191   37715 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218.pem (1338 bytes)
	W1104 10:53:28.974219   37715 certs.go:480] ignoring /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218_empty.pem, impossibly tiny 0 bytes
	I1104 10:53:28.974228   37715 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem (1675 bytes)
	I1104 10:53:28.974249   37715 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem (1078 bytes)
	I1104 10:53:28.974273   37715 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem (1123 bytes)
	I1104 10:53:28.974294   37715 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem (1679 bytes)
	I1104 10:53:28.974329   37715 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem (1708 bytes)
	I1104 10:53:28.974353   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218.pem -> /usr/share/ca-certificates/27218.pem
	I1104 10:53:28.974366   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem -> /usr/share/ca-certificates/272182.pem
	I1104 10:53:28.974379   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1104 10:53:28.974408   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHHostname
	I1104 10:53:28.977338   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:53:28.977742   37715 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 10:53:28.977776   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:53:28.977945   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHPort
	I1104 10:53:28.978138   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 10:53:28.978269   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHUsername
	I1104 10:53:28.978403   37715 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571/id_rsa Username:docker}
	I1104 10:53:29.049594   37715 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1104 10:53:29.054655   37715 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1104 10:53:29.065445   37715 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1104 10:53:29.070822   37715 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1104 10:53:29.082304   37715 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1104 10:53:29.086563   37715 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1104 10:53:29.098922   37715 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1104 10:53:29.103085   37715 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1104 10:53:29.113035   37715 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1104 10:53:29.117456   37715 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1104 10:53:29.127764   37715 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1104 10:53:29.131629   37715 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1104 10:53:29.143522   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1104 10:53:29.167376   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1104 10:53:29.189625   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1104 10:53:29.212768   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1104 10:53:29.235967   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1104 10:53:29.263247   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1104 10:53:29.285302   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1104 10:53:29.306703   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1104 10:53:29.328748   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218.pem --> /usr/share/ca-certificates/27218.pem (1338 bytes)
	I1104 10:53:29.350648   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem --> /usr/share/ca-certificates/272182.pem (1708 bytes)
	I1104 10:53:29.372264   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1104 10:53:29.395406   37715 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1104 10:53:29.410777   37715 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1104 10:53:29.427042   37715 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1104 10:53:29.443978   37715 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1104 10:53:29.460125   37715 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1104 10:53:29.475628   37715 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1104 10:53:29.491185   37715 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1104 10:53:29.507040   37715 ssh_runner.go:195] Run: openssl version
	I1104 10:53:29.512376   37715 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1104 10:53:29.522746   37715 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1104 10:53:29.526894   37715 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  4 10:38 /usr/share/ca-certificates/minikubeCA.pem
	I1104 10:53:29.526950   37715 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1104 10:53:29.532557   37715 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1104 10:53:29.543248   37715 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/27218.pem && ln -fs /usr/share/ca-certificates/27218.pem /etc/ssl/certs/27218.pem"
	I1104 10:53:29.553302   37715 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/27218.pem
	I1104 10:53:29.557429   37715 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  4 10:49 /usr/share/ca-certificates/27218.pem
	I1104 10:53:29.557475   37715 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/27218.pem
	I1104 10:53:29.562752   37715 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/27218.pem /etc/ssl/certs/51391683.0"
	I1104 10:53:29.573585   37715 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/272182.pem && ln -fs /usr/share/ca-certificates/272182.pem /etc/ssl/certs/272182.pem"
	I1104 10:53:29.583479   37715 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/272182.pem
	I1104 10:53:29.587879   37715 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  4 10:49 /usr/share/ca-certificates/272182.pem
	I1104 10:53:29.587928   37715 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/272182.pem
	I1104 10:53:29.594267   37715 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/272182.pem /etc/ssl/certs/3ec20f2e.0"
	I1104 10:53:29.605746   37715 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1104 10:53:29.609628   37715 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1104 10:53:29.609689   37715 kubeadm.go:934] updating node {m02 192.168.39.245 8443 v1.31.2 crio true true} ...
	I1104 10:53:29.609774   37715 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-931571-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.245
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-931571 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1104 10:53:29.609799   37715 kube-vip.go:115] generating kube-vip config ...
	I1104 10:53:29.609830   37715 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1104 10:53:29.626833   37715 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1104 10:53:29.626905   37715 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.5
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
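
The manifest above is the kube-vip static pod that gets written to /etc/kubernetes/manifests: it advertises the cluster VIP 192.168.39.254 on eth0 via ARP, runs leader election in kube-system, and, because lb_enable is set, load-balances port 8443 across control-plane members. A rough sketch of rendering such a manifest from a Go text/template follows; the trimmed template and the vipConfig struct are illustrative stand-ins, not minikube's kube-vip.go:

package main

import (
	"os"
	"text/template"
)

// Trimmed-down template keeping only the fields discussed above.
const kubeVipTmpl = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - name: kube-vip
    image: ghcr.io/kube-vip/kube-vip:v0.8.5
    args:
    - manager
    env:
    - name: vip_interface
      value: {{.Interface}}
    - name: address
      value: {{.Address}}
    - name: port
      value: "{{.Port}}"
    - name: cp_enable
      value: "true"
    - name: lb_enable
      value: "{{.EnableLB}}"
  hostNetwork: true
`

type vipConfig struct {
	Interface string
	Address   string
	Port      int
	EnableLB  bool
}

func main() {
	t := template.Must(template.New("kube-vip").Parse(kubeVipTmpl))
	// Values as logged: VIP 192.168.39.254 on eth0, API server port 8443.
	if err := t.Execute(os.Stdout, vipConfig{Interface: "eth0", Address: "192.168.39.254", Port: 8443, EnableLB: true}); err != nil {
		panic(err)
	}
}
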
	I1104 10:53:29.626952   37715 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1104 10:53:29.636985   37715 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.2': No such file or directory
	
	Initiating transfer...
	I1104 10:53:29.637050   37715 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.2
	I1104 10:53:29.646235   37715 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
	I1104 10:53:29.646266   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/linux/amd64/v1.31.2/kubectl -> /var/lib/minikube/binaries/v1.31.2/kubectl
	I1104 10:53:29.646297   37715 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19906-19898/.minikube/cache/linux/amd64/v1.31.2/kubelet
	I1104 10:53:29.646318   37715 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19906-19898/.minikube/cache/linux/amd64/v1.31.2/kubeadm
	I1104 10:53:29.646321   37715 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl
	I1104 10:53:29.650548   37715 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubectl': No such file or directory
	I1104 10:53:29.650575   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/cache/linux/amd64/v1.31.2/kubectl --> /var/lib/minikube/binaries/v1.31.2/kubectl (56381592 bytes)
	I1104 10:53:30.395926   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/linux/amd64/v1.31.2/kubeadm -> /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1104 10:53:30.396007   37715 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1104 10:53:30.400715   37715 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubeadm': No such file or directory
	I1104 10:53:30.400746   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/cache/linux/amd64/v1.31.2/kubeadm --> /var/lib/minikube/binaries/v1.31.2/kubeadm (58290328 bytes)
	I1104 10:53:30.426541   37715 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1104 10:53:30.447212   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/linux/amd64/v1.31.2/kubelet -> /var/lib/minikube/binaries/v1.31.2/kubelet
	I1104 10:53:30.447328   37715 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet
	I1104 10:53:30.458650   37715 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubelet': No such file or directory
	I1104 10:53:30.458689   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/cache/linux/amd64/v1.31.2/kubelet --> /var/lib/minikube/binaries/v1.31.2/kubelet (76902744 bytes)
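
Each binary (kubectl, kubeadm, kubelet) is first stat'ed on the node and only copied from the local cache when that check fails, so a warm node skips roughly 190 MB of transfer. The same "copy only if missing" pattern, reduced to a local-filesystem Go sketch (ensureBinary is a hypothetical helper; the real transfer goes over SSH via ssh_runner):

package main

import (
	"fmt"
	"io"
	"os"
	"path/filepath"
)

// ensureBinary copies the cached binary to target only when target is absent.
func ensureBinary(cached, target string) error {
	if _, err := os.Stat(target); err == nil {
		return nil // already present, skip the copy
	}
	if err := os.MkdirAll(filepath.Dir(target), 0o755); err != nil {
		return err
	}
	src, err := os.Open(cached)
	if err != nil {
		return err
	}
	defer src.Close()
	dst, err := os.OpenFile(target, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, 0o755)
	if err != nil {
		return err
	}
	defer dst.Close()
	_, err = io.Copy(dst, src)
	return err
}

func main() {
	// Paths from the log; in the real flow the target lives on the remote node.
	if err := ensureBinary(
		"/home/jenkins/minikube-integration/19906-19898/.minikube/cache/linux/amd64/v1.31.2/kubectl",
		"/var/lib/minikube/binaries/v1.31.2/kubectl",
	); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
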
	I1104 10:53:30.919365   37715 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1104 10:53:30.928897   37715 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1104 10:53:30.946677   37715 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1104 10:53:30.963726   37715 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1104 10:53:30.981653   37715 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1104 10:53:30.985571   37715 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1104 10:53:30.998898   37715 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1104 10:53:31.132385   37715 ssh_runner.go:195] Run: sudo systemctl start kubelet
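
The /etc/hosts update above is made idempotent by filtering out any existing "control-plane.minikube.internal" line before appending the VIP mapping, so repeated starts never accumulate duplicates; kubelet is then restarted to pick up the new drop-in. A Go equivalent of that bash one-liner (upsertHostsEntry is illustrative only):

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHostsEntry removes any stale mapping for host and appends ip<TAB>host.
func upsertHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // drop the previous mapping, as grep -v does in the log
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	// Values from the log; writing /etc/hosts requires root.
	if err := upsertHostsEntry("/etc/hosts", "192.168.39.254", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
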
	I1104 10:53:31.149804   37715 host.go:66] Checking if "ha-931571" exists ...
	I1104 10:53:31.150291   37715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 10:53:31.150345   37715 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 10:53:31.165094   37715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39235
	I1104 10:53:31.165587   37715 main.go:141] libmachine: () Calling .GetVersion
	I1104 10:53:31.166163   37715 main.go:141] libmachine: Using API Version  1
	I1104 10:53:31.166186   37715 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 10:53:31.166555   37715 main.go:141] libmachine: () Calling .GetMachineName
	I1104 10:53:31.166779   37715 main.go:141] libmachine: (ha-931571) Calling .DriverName
	I1104 10:53:31.166958   37715 start.go:317] joinCluster: &{Name:ha-931571 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 Cluster
Name:ha-931571 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.67 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.245 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1104 10:53:31.167051   37715 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1104 10:53:31.167067   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHHostname
	I1104 10:53:31.169771   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:53:31.170152   37715 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 10:53:31.170182   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:53:31.170376   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHPort
	I1104 10:53:31.170562   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 10:53:31.170687   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHUsername
	I1104 10:53:31.170781   37715 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571/id_rsa Username:docker}
	I1104 10:53:31.306325   37715 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.245 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1104 10:53:31.306377   37715 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token kmocbz.ds2v3q10rcir1aso --discovery-token-ca-cert-hash sha256:95833ff41306ac800b975e28f2da3aedc13f83db6865dfb0ce3f10d781d75fd9 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-931571-m02 --control-plane --apiserver-advertise-address=192.168.39.245 --apiserver-bind-port=8443"
	I1104 10:53:52.004440   37715 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token kmocbz.ds2v3q10rcir1aso --discovery-token-ca-cert-hash sha256:95833ff41306ac800b975e28f2da3aedc13f83db6865dfb0ce3f10d781d75fd9 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-931571-m02 --control-plane --apiserver-advertise-address=192.168.39.245 --apiserver-bind-port=8443": (20.698039868s)
	I1104 10:53:52.004481   37715 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1104 10:53:52.565954   37715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-931571-m02 minikube.k8s.io/updated_at=2024_11_04T10_53_52_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=c03dd974d73f8853a1a57928c124797a5ae24dc4 minikube.k8s.io/name=ha-931571 minikube.k8s.io/primary=false
	I1104 10:53:52.722802   37715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-931571-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I1104 10:53:52.847701   37715 start.go:319] duration metric: took 21.680738209s to joinCluster
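
The join sequence is: generate a fresh join command on the primary with "kubeadm token create --print-join-command --ttl=0", run it on m02 with --control-plane and the node's own advertise address so it comes up as a second API server behind the VIP, then label the new node and remove the control-plane NoSchedule taint. A sketch that just assembles the join invocation from its parts (buildJoinCommand is a hypothetical helper; the token and CA hash are placeholders for the values printed by the primary):

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// buildJoinCommand assembles the control-plane join command seen in the log.
func buildJoinCommand(endpoint, token, caCertHash, nodeName, advertiseIP string, port int) string {
	return strings.Join([]string{
		"kubeadm join", endpoint,
		"--token", token,
		"--discovery-token-ca-cert-hash", caCertHash,
		"--control-plane",
		"--node-name", nodeName,
		"--apiserver-advertise-address", advertiseIP,
		"--apiserver-bind-port", strconv.Itoa(port),
		"--cri-socket", "unix:///var/run/crio/crio.sock",
		"--ignore-preflight-errors=all",
	}, " ")
}

func main() {
	fmt.Println(buildJoinCommand(
		"control-plane.minikube.internal:8443",
		"<token>", "sha256:<ca-cert-hash>",
		"ha-931571-m02", "192.168.39.245", 8443,
	))
}
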
	I1104 10:53:52.847788   37715 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.245 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1104 10:53:52.848131   37715 config.go:182] Loaded profile config "ha-931571": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1104 10:53:52.849508   37715 out.go:177] * Verifying Kubernetes components...
	I1104 10:53:52.850857   37715 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1104 10:53:53.114403   37715 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1104 10:53:53.138620   37715 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19906-19898/kubeconfig
	I1104 10:53:53.138881   37715 kapi.go:59] client config for ha-931571: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/client.crt", KeyFile:"/home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/client.key", CAFile:"/home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)
}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2437da0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1104 10:53:53.138942   37715 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.67:8443
	I1104 10:53:53.139141   37715 node_ready.go:35] waiting up to 6m0s for node "ha-931571-m02" to be "Ready" ...
	I1104 10:53:53.139247   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:53:53.139257   37715 round_trippers.go:469] Request Headers:
	I1104 10:53:53.139269   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:53:53.139278   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:53:53.152136   37715 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I1104 10:53:53.639369   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:53:53.639392   37715 round_trippers.go:469] Request Headers:
	I1104 10:53:53.639401   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:53:53.639405   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:53:53.643203   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:53:54.140047   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:53:54.140070   37715 round_trippers.go:469] Request Headers:
	I1104 10:53:54.140084   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:53:54.140089   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:53:54.147092   37715 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1104 10:53:54.639335   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:53:54.639355   37715 round_trippers.go:469] Request Headers:
	I1104 10:53:54.639363   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:53:54.639367   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:53:54.642506   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:53:55.140245   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:53:55.140265   37715 round_trippers.go:469] Request Headers:
	I1104 10:53:55.140273   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:53:55.140277   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:53:55.143824   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:53:55.144458   37715 node_ready.go:53] node "ha-931571-m02" has status "Ready":"False"
	I1104 10:53:55.639804   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:53:55.639830   37715 round_trippers.go:469] Request Headers:
	I1104 10:53:55.639841   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:53:55.639846   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:53:55.643096   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:53:56.140054   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:53:56.140078   37715 round_trippers.go:469] Request Headers:
	I1104 10:53:56.140089   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:53:56.140095   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:53:56.142960   37715 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1104 10:53:56.639891   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:53:56.639912   37715 round_trippers.go:469] Request Headers:
	I1104 10:53:56.639923   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:53:56.639928   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:53:56.642755   37715 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1104 10:53:57.139690   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:53:57.139713   37715 round_trippers.go:469] Request Headers:
	I1104 10:53:57.139725   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:53:57.139730   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:53:57.143324   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:53:57.639441   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:53:57.639460   37715 round_trippers.go:469] Request Headers:
	I1104 10:53:57.639469   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:53:57.639473   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:53:57.642433   37715 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1104 10:53:57.642947   37715 node_ready.go:53] node "ha-931571-m02" has status "Ready":"False"
	I1104 10:53:58.140368   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:53:58.140388   37715 round_trippers.go:469] Request Headers:
	I1104 10:53:58.140399   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:53:58.140404   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:53:58.144117   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:53:58.640193   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:53:58.640215   37715 round_trippers.go:469] Request Headers:
	I1104 10:53:58.640223   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:53:58.640227   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:53:58.643667   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:53:59.139304   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:53:59.139323   37715 round_trippers.go:469] Request Headers:
	I1104 10:53:59.139331   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:53:59.139335   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:53:59.142878   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:53:59.639323   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:53:59.639344   37715 round_trippers.go:469] Request Headers:
	I1104 10:53:59.639353   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:53:59.639357   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:53:59.642391   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:54:00.140288   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:54:00.140314   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:00.140323   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:00.140328   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:00.143357   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:54:00.143948   37715 node_ready.go:53] node "ha-931571-m02" has status "Ready":"False"
	I1104 10:54:00.639324   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:54:00.639348   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:00.639358   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:00.639365   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:00.643179   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:54:01.140315   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:54:01.140337   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:01.140345   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:01.140349   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:01.143491   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:54:01.639485   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:54:01.639510   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:01.639517   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:01.639522   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:01.642450   37715 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1104 10:54:02.140259   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:54:02.140291   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:02.140299   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:02.140304   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:02.143695   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:54:02.144128   37715 node_ready.go:53] node "ha-931571-m02" has status "Ready":"False"
	I1104 10:54:02.639414   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:54:02.639433   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:02.639442   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:02.639447   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:02.642409   37715 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1104 10:54:03.140294   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:54:03.140314   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:03.140327   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:03.140333   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:03.143301   37715 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1104 10:54:03.639404   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:54:03.639426   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:03.639437   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:03.639445   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:03.642367   37715 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1104 10:54:04.139716   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:54:04.139740   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:04.139750   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:04.139754   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:04.143000   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:54:04.640219   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:54:04.640245   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:04.640256   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:04.640262   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:04.643232   37715 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1104 10:54:04.643667   37715 node_ready.go:53] node "ha-931571-m02" has status "Ready":"False"
	I1104 10:54:05.140138   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:54:05.140162   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:05.140173   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:05.140178   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:05.142993   37715 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1104 10:54:05.639755   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:54:05.639775   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:05.639783   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:05.639802   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:05.643475   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:54:06.139372   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:54:06.139394   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:06.139402   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:06.139405   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:06.142509   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:54:06.639413   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:54:06.639442   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:06.639451   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:06.639456   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:06.642592   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:54:07.139655   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:54:07.139684   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:07.139694   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:07.139699   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:07.143170   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:54:07.143728   37715 node_ready.go:53] node "ha-931571-m02" has status "Ready":"False"
	I1104 10:54:07.640208   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:54:07.640228   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:07.640235   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:07.640240   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:07.643154   37715 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1104 10:54:08.140228   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:54:08.140261   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:08.140273   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:08.140278   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:08.142997   37715 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1104 10:54:08.639828   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:54:08.639854   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:08.639862   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:08.639866   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:08.643244   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:54:09.140126   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:54:09.140153   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:09.140166   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:09.140172   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:09.143278   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:54:09.143950   37715 node_ready.go:53] node "ha-931571-m02" has status "Ready":"False"
	I1104 10:54:09.639588   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:54:09.639610   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:09.639618   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:09.639623   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:09.642343   37715 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1104 10:54:10.139875   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:54:10.139898   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:10.139905   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:10.139909   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:10.143037   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:54:10.640013   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:54:10.640033   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:10.640042   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:10.640045   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:10.643833   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:54:10.644423   37715 node_ready.go:49] node "ha-931571-m02" has status "Ready":"True"
	I1104 10:54:10.644446   37715 node_ready.go:38] duration metric: took 17.505281339s for node "ha-931571-m02" to be "Ready" ...
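
The preceding loop polls GET /api/v1/nodes/ha-931571-m02 roughly twice a second until the node reports the Ready condition True, which here took about 17.5 seconds after the join. A compact client-go sketch of the same wait (waitForNodeReady is illustrative, not minikube's node_ready.go; the kubeconfig path is the one logged above):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForNodeReady blocks until the named node has condition Ready=True.
func waitForNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
	for {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(500 * time.Millisecond): // roughly the poll interval in the log
		}
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19906-19898/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	if err := waitForNodeReady(ctx, cs, "ha-931571-m02"); err != nil {
		panic(err)
	}
	fmt.Println("node ha-931571-m02 is Ready")
}
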
	I1104 10:54:10.644459   37715 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1104 10:54:10.644564   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods
	I1104 10:54:10.644577   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:10.644587   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:10.644591   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:10.649476   37715 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1104 10:54:10.656031   37715 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-5ss4v" in "kube-system" namespace to be "Ready" ...
	I1104 10:54:10.656110   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ss4v
	I1104 10:54:10.656129   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:10.656138   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:10.656144   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:10.659282   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:54:10.659928   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571
	I1104 10:54:10.659944   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:10.659953   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:10.659958   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:10.662844   37715 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1104 10:54:10.663378   37715 pod_ready.go:93] pod "coredns-7c65d6cfc9-5ss4v" in "kube-system" namespace has status "Ready":"True"
	I1104 10:54:10.663402   37715 pod_ready.go:82] duration metric: took 7.344091ms for pod "coredns-7c65d6cfc9-5ss4v" in "kube-system" namespace to be "Ready" ...
	I1104 10:54:10.663423   37715 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-s9wb4" in "kube-system" namespace to be "Ready" ...
	I1104 10:54:10.663492   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s9wb4
	I1104 10:54:10.663502   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:10.663512   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:10.663521   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:10.666287   37715 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1104 10:54:10.666934   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571
	I1104 10:54:10.666950   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:10.666957   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:10.666960   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:10.669169   37715 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1104 10:54:10.669739   37715 pod_ready.go:93] pod "coredns-7c65d6cfc9-s9wb4" in "kube-system" namespace has status "Ready":"True"
	I1104 10:54:10.669760   37715 pod_ready.go:82] duration metric: took 6.3295ms for pod "coredns-7c65d6cfc9-s9wb4" in "kube-system" namespace to be "Ready" ...
	I1104 10:54:10.669770   37715 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-931571" in "kube-system" namespace to be "Ready" ...
	I1104 10:54:10.669830   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/etcd-ha-931571
	I1104 10:54:10.669842   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:10.669852   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:10.669859   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:10.672042   37715 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1104 10:54:10.672626   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571
	I1104 10:54:10.672642   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:10.672650   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:10.672653   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:10.674766   37715 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1104 10:54:10.675295   37715 pod_ready.go:93] pod "etcd-ha-931571" in "kube-system" namespace has status "Ready":"True"
	I1104 10:54:10.675317   37715 pod_ready.go:82] duration metric: took 5.539368ms for pod "etcd-ha-931571" in "kube-system" namespace to be "Ready" ...
	I1104 10:54:10.675329   37715 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-931571-m02" in "kube-system" namespace to be "Ready" ...
	I1104 10:54:10.675390   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/etcd-ha-931571-m02
	I1104 10:54:10.675398   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:10.675405   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:10.675410   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:10.677591   37715 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1104 10:54:10.678184   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:54:10.678197   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:10.678204   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:10.678208   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:10.680155   37715 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1104 10:54:10.680700   37715 pod_ready.go:93] pod "etcd-ha-931571-m02" in "kube-system" namespace has status "Ready":"True"
	I1104 10:54:10.680721   37715 pod_ready.go:82] duration metric: took 5.381074ms for pod "etcd-ha-931571-m02" in "kube-system" namespace to be "Ready" ...
	I1104 10:54:10.680737   37715 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-931571" in "kube-system" namespace to be "Ready" ...
	I1104 10:54:10.840055   37715 request.go:632] Waited for 159.25235ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-931571
	I1104 10:54:10.840140   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-931571
	I1104 10:54:10.840150   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:10.840160   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:10.840171   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:10.843356   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:54:11.040534   37715 request.go:632] Waited for 196.430173ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/nodes/ha-931571
	I1104 10:54:11.040604   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571
	I1104 10:54:11.040615   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:11.040623   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:11.040630   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:11.043768   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:54:11.044382   37715 pod_ready.go:93] pod "kube-apiserver-ha-931571" in "kube-system" namespace has status "Ready":"True"
	I1104 10:54:11.044403   37715 pod_ready.go:82] duration metric: took 363.65714ms for pod "kube-apiserver-ha-931571" in "kube-system" namespace to be "Ready" ...
	I1104 10:54:11.044412   37715 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-931571-m02" in "kube-system" namespace to be "Ready" ...
	I1104 10:54:11.240746   37715 request.go:632] Waited for 196.265081ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-931571-m02
	I1104 10:54:11.240800   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-931571-m02
	I1104 10:54:11.240805   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:11.240812   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:11.240823   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:11.244055   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:54:11.441020   37715 request.go:632] Waited for 196.31895ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:54:11.441076   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:54:11.441082   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:11.441089   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:11.441092   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:11.443940   37715 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1104 10:54:11.444396   37715 pod_ready.go:93] pod "kube-apiserver-ha-931571-m02" in "kube-system" namespace has status "Ready":"True"
	I1104 10:54:11.444417   37715 pod_ready.go:82] duration metric: took 399.997294ms for pod "kube-apiserver-ha-931571-m02" in "kube-system" namespace to be "Ready" ...
	I1104 10:54:11.444431   37715 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-931571" in "kube-system" namespace to be "Ready" ...
	I1104 10:54:11.640978   37715 request.go:632] Waited for 196.455451ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-931571
	I1104 10:54:11.641045   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-931571
	I1104 10:54:11.641052   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:11.641063   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:11.641068   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:11.644104   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:54:11.840124   37715 request.go:632] Waited for 195.279381ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/nodes/ha-931571
	I1104 10:54:11.840175   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571
	I1104 10:54:11.840180   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:11.840189   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:11.840204   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:11.843139   37715 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1104 10:54:11.843784   37715 pod_ready.go:93] pod "kube-controller-manager-ha-931571" in "kube-system" namespace has status "Ready":"True"
	I1104 10:54:11.843806   37715 pod_ready.go:82] duration metric: took 399.367004ms for pod "kube-controller-manager-ha-931571" in "kube-system" namespace to be "Ready" ...
	I1104 10:54:11.843816   37715 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-931571-m02" in "kube-system" namespace to be "Ready" ...
	I1104 10:54:12.040826   37715 request.go:632] Waited for 196.934959ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-931571-m02
	I1104 10:54:12.040888   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-931571-m02
	I1104 10:54:12.040896   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:12.040905   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:12.040912   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:12.044321   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:54:12.240220   37715 request.go:632] Waited for 195.323321ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:54:12.240295   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:54:12.240302   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:12.240311   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:12.240340   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:12.243972   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:54:12.244423   37715 pod_ready.go:93] pod "kube-controller-manager-ha-931571-m02" in "kube-system" namespace has status "Ready":"True"
	I1104 10:54:12.244441   37715 pod_ready.go:82] duration metric: took 400.61624ms for pod "kube-controller-manager-ha-931571-m02" in "kube-system" namespace to be "Ready" ...
	I1104 10:54:12.244452   37715 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-bvk6r" in "kube-system" namespace to be "Ready" ...
	I1104 10:54:12.440627   37715 request.go:632] Waited for 196.096769ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bvk6r
	I1104 10:54:12.440687   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bvk6r
	I1104 10:54:12.440692   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:12.440700   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:12.440704   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:12.443759   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:54:12.640675   37715 request.go:632] Waited for 196.368451ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/nodes/ha-931571
	I1104 10:54:12.640746   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571
	I1104 10:54:12.640753   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:12.640764   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:12.640771   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:12.645533   37715 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1104 10:54:12.646078   37715 pod_ready.go:93] pod "kube-proxy-bvk6r" in "kube-system" namespace has status "Ready":"True"
	I1104 10:54:12.646098   37715 pod_ready.go:82] duration metric: took 401.639494ms for pod "kube-proxy-bvk6r" in "kube-system" namespace to be "Ready" ...
	I1104 10:54:12.646111   37715 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-wz92s" in "kube-system" namespace to be "Ready" ...
	I1104 10:54:12.840342   37715 request.go:632] Waited for 194.16235ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wz92s
	I1104 10:54:12.840395   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wz92s
	I1104 10:54:12.840400   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:12.840407   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:12.840413   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:12.844505   37715 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1104 10:54:13.040627   37715 request.go:632] Waited for 195.405277ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:54:13.040697   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:54:13.040706   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:13.040713   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:13.040717   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:13.043654   37715 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1104 10:54:13.044440   37715 pod_ready.go:93] pod "kube-proxy-wz92s" in "kube-system" namespace has status "Ready":"True"
	I1104 10:54:13.044461   37715 pod_ready.go:82] duration metric: took 398.343689ms for pod "kube-proxy-wz92s" in "kube-system" namespace to be "Ready" ...
	I1104 10:54:13.044472   37715 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-931571" in "kube-system" namespace to be "Ready" ...
	I1104 10:54:13.240500   37715 request.go:632] Waited for 195.966375ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-931571
	I1104 10:54:13.240580   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-931571
	I1104 10:54:13.240589   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:13.240599   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:13.240606   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:13.243607   37715 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1104 10:54:13.440419   37715 request.go:632] Waited for 196.059783ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/nodes/ha-931571
	I1104 10:54:13.440489   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571
	I1104 10:54:13.440495   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:13.440502   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:13.440507   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:13.443953   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:54:13.444535   37715 pod_ready.go:93] pod "kube-scheduler-ha-931571" in "kube-system" namespace has status "Ready":"True"
	I1104 10:54:13.444560   37715 pod_ready.go:82] duration metric: took 400.080635ms for pod "kube-scheduler-ha-931571" in "kube-system" namespace to be "Ready" ...
	I1104 10:54:13.444575   37715 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-931571-m02" in "kube-system" namespace to be "Ready" ...
	I1104 10:54:13.640646   37715 request.go:632] Waited for 195.95641ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-931571-m02
	I1104 10:54:13.640702   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-931571-m02
	I1104 10:54:13.640707   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:13.640716   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:13.640720   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:13.644170   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:54:13.840111   37715 request.go:632] Waited for 195.309512ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:54:13.840184   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:54:13.840189   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:13.840197   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:13.840205   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:13.843622   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:54:13.844295   37715 pod_ready.go:93] pod "kube-scheduler-ha-931571-m02" in "kube-system" namespace has status "Ready":"True"
	I1104 10:54:13.844319   37715 pod_ready.go:82] duration metric: took 399.734957ms for pod "kube-scheduler-ha-931571-m02" in "kube-system" namespace to be "Ready" ...
	I1104 10:54:13.844333   37715 pod_ready.go:39] duration metric: took 3.199846594s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
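
Once the node is Ready, every system-critical pod (CoreDNS, etcd, kube-apiserver, kube-controller-manager, kube-scheduler on both control planes, and kube-proxy) is fetched individually and judged by its PodReady condition before the test proceeds. The per-pod predicate behind those "Ready":"True" lines, as a small sketch over the upstream corev1 types (podReady is illustrative, not minikube's pod_ready.go):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// podReady reports whether the pod's PodReady condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	p := &corev1.Pod{Status: corev1.PodStatus{
		Conditions: []corev1.PodCondition{{Type: corev1.PodReady, Status: corev1.ConditionTrue}},
	}}
	fmt.Println(podReady(p)) // true
}
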
	I1104 10:54:13.844350   37715 api_server.go:52] waiting for apiserver process to appear ...
	I1104 10:54:13.844417   37715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 10:54:13.858847   37715 api_server.go:72] duration metric: took 21.011018077s to wait for apiserver process to appear ...
	I1104 10:54:13.858869   37715 api_server.go:88] waiting for apiserver healthz status ...
	I1104 10:54:13.858890   37715 api_server.go:253] Checking apiserver healthz at https://192.168.39.67:8443/healthz ...
	I1104 10:54:13.863051   37715 api_server.go:279] https://192.168.39.67:8443/healthz returned 200:
	ok
	I1104 10:54:13.863110   37715 round_trippers.go:463] GET https://192.168.39.67:8443/version
	I1104 10:54:13.863115   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:13.863122   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:13.863126   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:13.864098   37715 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1104 10:54:13.864181   37715 api_server.go:141] control plane version: v1.31.2
	I1104 10:54:13.864195   37715 api_server.go:131] duration metric: took 5.319439ms to wait for apiserver health ...
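The apiserver health wait above is essentially an HTTPS GET against /healthz on the control-plane endpoint until it answers 200 "ok", followed by a GET /version to read the control-plane version. Below is a minimal standalone sketch of that probe in Go; it is not minikube's api_server.go code, and the hard-coded endpoint plus the InsecureSkipVerify shortcut are illustrative assumptions only.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// probeAPIServer polls <base>/healthz until it returns 200 with body "ok"
// (or the deadline passes), then fetches /version, mirroring the
// healthz/version sequence in the log above.
func probeAPIServer(base string) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver cert is signed by minikubeCA, which this sketch does
		// not load, so verification is skipped purely for illustration.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(base + "/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				if v, err := client.Get(base + "/version"); err == nil {
					info, _ := io.ReadAll(v.Body)
					v.Body.Close()
					fmt.Printf("healthz ok, version payload: %s\n", info)
				}
				return nil
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("apiserver at %s never became healthy", base)
}

func main() {
	if err := probeAPIServer("https://192.168.39.67:8443"); err != nil {
		fmt.Println(err)
	}
}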
	I1104 10:54:13.864202   37715 system_pods.go:43] waiting for kube-system pods to appear ...
	I1104 10:54:14.040623   37715 request.go:632] Waited for 176.353381ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods
	I1104 10:54:14.040696   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods
	I1104 10:54:14.040702   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:14.040709   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:14.040714   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:14.045262   37715 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1104 10:54:14.050254   37715 system_pods.go:59] 17 kube-system pods found
	I1104 10:54:14.050280   37715 system_pods.go:61] "coredns-7c65d6cfc9-5ss4v" [b1994bcf-ce9e-4a5e-90e0-5f3e284218f4] Running
	I1104 10:54:14.050285   37715 system_pods.go:61] "coredns-7c65d6cfc9-s9wb4" [fd497087-82a1-4173-a1ca-87f47225cd80] Running
	I1104 10:54:14.050289   37715 system_pods.go:61] "etcd-ha-931571" [fdadf64d-457c-4f54-8824-770c47938a4d] Running
	I1104 10:54:14.050292   37715 system_pods.go:61] "etcd-ha-931571-m02" [b40b2a26-19b6-47f9-af25-dcbffbe55156] Running
	I1104 10:54:14.050296   37715 system_pods.go:61] "kindnet-2n2ws" [f43095ed-404a-4c99-a271-a8c7fb6a3559] Running
	I1104 10:54:14.050301   37715 system_pods.go:61] "kindnet-bg4z6" [43eed78a-1357-4607-bff5-a1c896da4af2] Running
	I1104 10:54:14.050305   37715 system_pods.go:61] "kube-apiserver-ha-931571" [2ba59318-d54d-4948-8133-2ff2afa001e5] Running
	I1104 10:54:14.050310   37715 system_pods.go:61] "kube-apiserver-ha-931571-m02" [6a6bfd7d-cec1-4e07-90bf-c933f871eef1] Running
	I1104 10:54:14.050315   37715 system_pods.go:61] "kube-controller-manager-ha-931571" [62d03af1-aa91-4ebf-af21-19f760956cf5] Running
	I1104 10:54:14.050320   37715 system_pods.go:61] "kube-controller-manager-ha-931571-m02" [96d65b2a-66c8-411a-bb4b-5ff222b7832d] Running
	I1104 10:54:14.050327   37715 system_pods.go:61] "kube-proxy-bvk6r" [5f293726-a3a3-4398-9b70-ca8f83c66d7c] Running
	I1104 10:54:14.050332   37715 system_pods.go:61] "kube-proxy-wz92s" [a2e065c2-9645-44e4-b4e8-dc787b0c6662] Running
	I1104 10:54:14.050340   37715 system_pods.go:61] "kube-scheduler-ha-931571" [8bc3d9c3-2b41-4f54-a511-34939218fa5b] Running
	I1104 10:54:14.050345   37715 system_pods.go:61] "kube-scheduler-ha-931571-m02" [4329adba-71fa-425a-b379-6e52af90b458] Running
	I1104 10:54:14.050354   37715 system_pods.go:61] "kube-vip-ha-931571" [f9948426-2770-47cf-b610-ecfea5b17be9] Running / Ready:ContainersNotReady (containers with unready status: [kube-vip]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-vip])
	I1104 10:54:14.050364   37715 system_pods.go:61] "kube-vip-ha-931571-m02" [860a8a9e-b839-4c23-80b5-415a62fca083] Running / Ready:ContainersNotReady (containers with unready status: [kube-vip]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-vip])
	I1104 10:54:14.050370   37715 system_pods.go:61] "storage-provisioner" [3eb09a1d-0033-428a-a305-aa2901b20566] Running
	I1104 10:54:14.050377   37715 system_pods.go:74] duration metric: took 186.169669ms to wait for pod list to return data ...
	I1104 10:54:14.050387   37715 default_sa.go:34] waiting for default service account to be created ...
	I1104 10:54:14.240854   37715 request.go:632] Waited for 190.370277ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/default/serviceaccounts
	I1104 10:54:14.240922   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/default/serviceaccounts
	I1104 10:54:14.240929   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:14.240940   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:14.240963   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:14.244687   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:54:14.244932   37715 default_sa.go:45] found service account: "default"
	I1104 10:54:14.244952   37715 default_sa.go:55] duration metric: took 194.560071ms for default service account to be created ...
	I1104 10:54:14.244961   37715 system_pods.go:116] waiting for k8s-apps to be running ...
	I1104 10:54:14.440692   37715 request.go:632] Waited for 195.67345ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods
	I1104 10:54:14.440751   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods
	I1104 10:54:14.440757   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:14.440772   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:14.440780   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:14.444830   37715 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1104 10:54:14.449745   37715 system_pods.go:86] 17 kube-system pods found
	I1104 10:54:14.449772   37715 system_pods.go:89] "coredns-7c65d6cfc9-5ss4v" [b1994bcf-ce9e-4a5e-90e0-5f3e284218f4] Running
	I1104 10:54:14.449778   37715 system_pods.go:89] "coredns-7c65d6cfc9-s9wb4" [fd497087-82a1-4173-a1ca-87f47225cd80] Running
	I1104 10:54:14.449783   37715 system_pods.go:89] "etcd-ha-931571" [fdadf64d-457c-4f54-8824-770c47938a4d] Running
	I1104 10:54:14.449789   37715 system_pods.go:89] "etcd-ha-931571-m02" [b40b2a26-19b6-47f9-af25-dcbffbe55156] Running
	I1104 10:54:14.449795   37715 system_pods.go:89] "kindnet-2n2ws" [f43095ed-404a-4c99-a271-a8c7fb6a3559] Running
	I1104 10:54:14.449800   37715 system_pods.go:89] "kindnet-bg4z6" [43eed78a-1357-4607-bff5-a1c896da4af2] Running
	I1104 10:54:14.449807   37715 system_pods.go:89] "kube-apiserver-ha-931571" [2ba59318-d54d-4948-8133-2ff2afa001e5] Running
	I1104 10:54:14.449812   37715 system_pods.go:89] "kube-apiserver-ha-931571-m02" [6a6bfd7d-cec1-4e07-90bf-c933f871eef1] Running
	I1104 10:54:14.449816   37715 system_pods.go:89] "kube-controller-manager-ha-931571" [62d03af1-aa91-4ebf-af21-19f760956cf5] Running
	I1104 10:54:14.449821   37715 system_pods.go:89] "kube-controller-manager-ha-931571-m02" [96d65b2a-66c8-411a-bb4b-5ff222b7832d] Running
	I1104 10:54:14.449826   37715 system_pods.go:89] "kube-proxy-bvk6r" [5f293726-a3a3-4398-9b70-ca8f83c66d7c] Running
	I1104 10:54:14.449834   37715 system_pods.go:89] "kube-proxy-wz92s" [a2e065c2-9645-44e4-b4e8-dc787b0c6662] Running
	I1104 10:54:14.449839   37715 system_pods.go:89] "kube-scheduler-ha-931571" [8bc3d9c3-2b41-4f54-a511-34939218fa5b] Running
	I1104 10:54:14.449848   37715 system_pods.go:89] "kube-scheduler-ha-931571-m02" [4329adba-71fa-425a-b379-6e52af90b458] Running
	I1104 10:54:14.449857   37715 system_pods.go:89] "kube-vip-ha-931571" [f9948426-2770-47cf-b610-ecfea5b17be9] Running / Ready:ContainersNotReady (containers with unready status: [kube-vip]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-vip])
	I1104 10:54:14.449870   37715 system_pods.go:89] "kube-vip-ha-931571-m02" [860a8a9e-b839-4c23-80b5-415a62fca083] Running / Ready:ContainersNotReady (containers with unready status: [kube-vip]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-vip])
	I1104 10:54:14.449878   37715 system_pods.go:89] "storage-provisioner" [3eb09a1d-0033-428a-a305-aa2901b20566] Running
	I1104 10:54:14.449891   37715 system_pods.go:126] duration metric: took 204.923702ms to wait for k8s-apps to be running ...
	I1104 10:54:14.449903   37715 system_svc.go:44] waiting for kubelet service to be running ....
	I1104 10:54:14.449956   37715 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1104 10:54:14.464950   37715 system_svc.go:56] duration metric: took 15.038755ms WaitForService to wait for kubelet
	I1104 10:54:14.464983   37715 kubeadm.go:582] duration metric: took 21.617159665s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1104 10:54:14.465005   37715 node_conditions.go:102] verifying NodePressure condition ...
	I1104 10:54:14.640444   37715 request.go:632] Waited for 175.359531ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/nodes
	I1104 10:54:14.640495   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes
	I1104 10:54:14.640507   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:14.640514   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:14.640531   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:14.644308   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:54:14.645138   37715 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1104 10:54:14.645162   37715 node_conditions.go:123] node cpu capacity is 2
	I1104 10:54:14.645172   37715 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1104 10:54:14.645175   37715 node_conditions.go:123] node cpu capacity is 2
	I1104 10:54:14.645180   37715 node_conditions.go:105] duration metric: took 180.169842ms to run NodePressure ...
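The NodePressure step lists every node and reads the reported ephemeral-storage and CPU capacity (17734596Ki and 2 for both nodes above). Roughly the same information can be pulled with client-go; the sketch below assumes a local kubeconfig at the default path rather than the client minikube builds internally.

package main

import (
	"context"
	"fmt"
	"path/filepath"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	// Build a client from ~/.kube/config (assumption: a local kubeconfig exists).
	cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(homedir.HomeDir(), ".kube", "config"))
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	// Print the same per-node capacity values the log reports.
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
	}
}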
	I1104 10:54:14.645191   37715 start.go:241] waiting for startup goroutines ...
	I1104 10:54:14.645220   37715 start.go:255] writing updated cluster config ...
	I1104 10:54:14.647434   37715 out.go:201] 
	I1104 10:54:14.649030   37715 config.go:182] Loaded profile config "ha-931571": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1104 10:54:14.649124   37715 profile.go:143] Saving config to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/config.json ...
	I1104 10:54:14.650881   37715 out.go:177] * Starting "ha-931571-m03" control-plane node in "ha-931571" cluster
	I1104 10:54:14.652021   37715 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1104 10:54:14.652041   37715 cache.go:56] Caching tarball of preloaded images
	I1104 10:54:14.652128   37715 preload.go:172] Found /home/jenkins/minikube-integration/19906-19898/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1104 10:54:14.652138   37715 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1104 10:54:14.652229   37715 profile.go:143] Saving config to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/config.json ...
	I1104 10:54:14.652384   37715 start.go:360] acquireMachinesLock for ha-931571-m03: {Name:mka4ed965095cfb58c9b0f5916edff39b4236a2f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1104 10:54:14.652421   37715 start.go:364] duration metric: took 20.345µs to acquireMachinesLock for "ha-931571-m03"
	I1104 10:54:14.652440   37715 start.go:93] Provisioning new machine with config: &{Name:ha-931571 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-931571 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.67 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.245 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1104 10:54:14.652552   37715 start.go:125] createHost starting for "m03" (driver="kvm2")
	I1104 10:54:14.653932   37715 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1104 10:54:14.654009   37715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 10:54:14.654042   37715 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 10:54:14.669012   37715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35959
	I1104 10:54:14.669516   37715 main.go:141] libmachine: () Calling .GetVersion
	I1104 10:54:14.669968   37715 main.go:141] libmachine: Using API Version  1
	I1104 10:54:14.669986   37715 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 10:54:14.670370   37715 main.go:141] libmachine: () Calling .GetMachineName
	I1104 10:54:14.670550   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetMachineName
	I1104 10:54:14.670697   37715 main.go:141] libmachine: (ha-931571-m03) Calling .DriverName
	I1104 10:54:14.670887   37715 start.go:159] libmachine.API.Create for "ha-931571" (driver="kvm2")
	I1104 10:54:14.670919   37715 client.go:168] LocalClient.Create starting
	I1104 10:54:14.670952   37715 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem
	I1104 10:54:14.670990   37715 main.go:141] libmachine: Decoding PEM data...
	I1104 10:54:14.671004   37715 main.go:141] libmachine: Parsing certificate...
	I1104 10:54:14.671047   37715 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem
	I1104 10:54:14.671066   37715 main.go:141] libmachine: Decoding PEM data...
	I1104 10:54:14.671074   37715 main.go:141] libmachine: Parsing certificate...
	I1104 10:54:14.671092   37715 main.go:141] libmachine: Running pre-create checks...
	I1104 10:54:14.671100   37715 main.go:141] libmachine: (ha-931571-m03) Calling .PreCreateCheck
	I1104 10:54:14.671295   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetConfigRaw
	I1104 10:54:14.671735   37715 main.go:141] libmachine: Creating machine...
	I1104 10:54:14.671748   37715 main.go:141] libmachine: (ha-931571-m03) Calling .Create
	I1104 10:54:14.671896   37715 main.go:141] libmachine: (ha-931571-m03) Creating KVM machine...
	I1104 10:54:14.673127   37715 main.go:141] libmachine: (ha-931571-m03) DBG | found existing default KVM network
	I1104 10:54:14.673275   37715 main.go:141] libmachine: (ha-931571-m03) DBG | found existing private KVM network mk-ha-931571
	I1104 10:54:14.673433   37715 main.go:141] libmachine: (ha-931571-m03) Setting up store path in /home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571-m03 ...
	I1104 10:54:14.673458   37715 main.go:141] libmachine: (ha-931571-m03) Building disk image from file:///home/jenkins/minikube-integration/19906-19898/.minikube/cache/iso/amd64/minikube-v1.34.0-1730282777-19883-amd64.iso
	I1104 10:54:14.673532   37715 main.go:141] libmachine: (ha-931571-m03) DBG | I1104 10:54:14.673413   38465 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19906-19898/.minikube
	I1104 10:54:14.673618   37715 main.go:141] libmachine: (ha-931571-m03) Downloading /home/jenkins/minikube-integration/19906-19898/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19906-19898/.minikube/cache/iso/amd64/minikube-v1.34.0-1730282777-19883-amd64.iso...
	I1104 10:54:14.913416   37715 main.go:141] libmachine: (ha-931571-m03) DBG | I1104 10:54:14.913288   38465 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571-m03/id_rsa...
	I1104 10:54:15.078787   37715 main.go:141] libmachine: (ha-931571-m03) DBG | I1104 10:54:15.078642   38465 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571-m03/ha-931571-m03.rawdisk...
	I1104 10:54:15.078832   37715 main.go:141] libmachine: (ha-931571-m03) DBG | Writing magic tar header
	I1104 10:54:15.078845   37715 main.go:141] libmachine: (ha-931571-m03) DBG | Writing SSH key tar header
	I1104 10:54:15.078858   37715 main.go:141] libmachine: (ha-931571-m03) DBG | I1104 10:54:15.078756   38465 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571-m03 ...
	I1104 10:54:15.078874   37715 main.go:141] libmachine: (ha-931571-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571-m03
	I1104 10:54:15.078881   37715 main.go:141] libmachine: (ha-931571-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19906-19898/.minikube/machines
	I1104 10:54:15.078888   37715 main.go:141] libmachine: (ha-931571-m03) Setting executable bit set on /home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571-m03 (perms=drwx------)
	I1104 10:54:15.078896   37715 main.go:141] libmachine: (ha-931571-m03) Setting executable bit set on /home/jenkins/minikube-integration/19906-19898/.minikube/machines (perms=drwxr-xr-x)
	I1104 10:54:15.078902   37715 main.go:141] libmachine: (ha-931571-m03) Setting executable bit set on /home/jenkins/minikube-integration/19906-19898/.minikube (perms=drwxr-xr-x)
	I1104 10:54:15.078911   37715 main.go:141] libmachine: (ha-931571-m03) Setting executable bit set on /home/jenkins/minikube-integration/19906-19898 (perms=drwxrwxr-x)
	I1104 10:54:15.078919   37715 main.go:141] libmachine: (ha-931571-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1104 10:54:15.078931   37715 main.go:141] libmachine: (ha-931571-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1104 10:54:15.078951   37715 main.go:141] libmachine: (ha-931571-m03) Creating domain...
	I1104 10:54:15.078968   37715 main.go:141] libmachine: (ha-931571-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19906-19898/.minikube
	I1104 10:54:15.078978   37715 main.go:141] libmachine: (ha-931571-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19906-19898
	I1104 10:54:15.078985   37715 main.go:141] libmachine: (ha-931571-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1104 10:54:15.078991   37715 main.go:141] libmachine: (ha-931571-m03) DBG | Checking permissions on dir: /home/jenkins
	I1104 10:54:15.078997   37715 main.go:141] libmachine: (ha-931571-m03) DBG | Checking permissions on dir: /home
	I1104 10:54:15.079003   37715 main.go:141] libmachine: (ha-931571-m03) DBG | Skipping /home - not owner
	I1104 10:54:15.079942   37715 main.go:141] libmachine: (ha-931571-m03) define libvirt domain using xml: 
	I1104 10:54:15.079975   37715 main.go:141] libmachine: (ha-931571-m03) <domain type='kvm'>
	I1104 10:54:15.079986   37715 main.go:141] libmachine: (ha-931571-m03)   <name>ha-931571-m03</name>
	I1104 10:54:15.079997   37715 main.go:141] libmachine: (ha-931571-m03)   <memory unit='MiB'>2200</memory>
	I1104 10:54:15.080003   37715 main.go:141] libmachine: (ha-931571-m03)   <vcpu>2</vcpu>
	I1104 10:54:15.080007   37715 main.go:141] libmachine: (ha-931571-m03)   <features>
	I1104 10:54:15.080011   37715 main.go:141] libmachine: (ha-931571-m03)     <acpi/>
	I1104 10:54:15.080015   37715 main.go:141] libmachine: (ha-931571-m03)     <apic/>
	I1104 10:54:15.080020   37715 main.go:141] libmachine: (ha-931571-m03)     <pae/>
	I1104 10:54:15.080024   37715 main.go:141] libmachine: (ha-931571-m03)     
	I1104 10:54:15.080028   37715 main.go:141] libmachine: (ha-931571-m03)   </features>
	I1104 10:54:15.080032   37715 main.go:141] libmachine: (ha-931571-m03)   <cpu mode='host-passthrough'>
	I1104 10:54:15.080037   37715 main.go:141] libmachine: (ha-931571-m03)   
	I1104 10:54:15.080040   37715 main.go:141] libmachine: (ha-931571-m03)   </cpu>
	I1104 10:54:15.080045   37715 main.go:141] libmachine: (ha-931571-m03)   <os>
	I1104 10:54:15.080049   37715 main.go:141] libmachine: (ha-931571-m03)     <type>hvm</type>
	I1104 10:54:15.080054   37715 main.go:141] libmachine: (ha-931571-m03)     <boot dev='cdrom'/>
	I1104 10:54:15.080061   37715 main.go:141] libmachine: (ha-931571-m03)     <boot dev='hd'/>
	I1104 10:54:15.080066   37715 main.go:141] libmachine: (ha-931571-m03)     <bootmenu enable='no'/>
	I1104 10:54:15.080070   37715 main.go:141] libmachine: (ha-931571-m03)   </os>
	I1104 10:54:15.080075   37715 main.go:141] libmachine: (ha-931571-m03)   <devices>
	I1104 10:54:15.080079   37715 main.go:141] libmachine: (ha-931571-m03)     <disk type='file' device='cdrom'>
	I1104 10:54:15.080088   37715 main.go:141] libmachine: (ha-931571-m03)       <source file='/home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571-m03/boot2docker.iso'/>
	I1104 10:54:15.080096   37715 main.go:141] libmachine: (ha-931571-m03)       <target dev='hdc' bus='scsi'/>
	I1104 10:54:15.080101   37715 main.go:141] libmachine: (ha-931571-m03)       <readonly/>
	I1104 10:54:15.080106   37715 main.go:141] libmachine: (ha-931571-m03)     </disk>
	I1104 10:54:15.080111   37715 main.go:141] libmachine: (ha-931571-m03)     <disk type='file' device='disk'>
	I1104 10:54:15.080119   37715 main.go:141] libmachine: (ha-931571-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1104 10:54:15.080127   37715 main.go:141] libmachine: (ha-931571-m03)       <source file='/home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571-m03/ha-931571-m03.rawdisk'/>
	I1104 10:54:15.080134   37715 main.go:141] libmachine: (ha-931571-m03)       <target dev='hda' bus='virtio'/>
	I1104 10:54:15.080145   37715 main.go:141] libmachine: (ha-931571-m03)     </disk>
	I1104 10:54:15.080149   37715 main.go:141] libmachine: (ha-931571-m03)     <interface type='network'>
	I1104 10:54:15.080154   37715 main.go:141] libmachine: (ha-931571-m03)       <source network='mk-ha-931571'/>
	I1104 10:54:15.080163   37715 main.go:141] libmachine: (ha-931571-m03)       <model type='virtio'/>
	I1104 10:54:15.080168   37715 main.go:141] libmachine: (ha-931571-m03)     </interface>
	I1104 10:54:15.080172   37715 main.go:141] libmachine: (ha-931571-m03)     <interface type='network'>
	I1104 10:54:15.080177   37715 main.go:141] libmachine: (ha-931571-m03)       <source network='default'/>
	I1104 10:54:15.080181   37715 main.go:141] libmachine: (ha-931571-m03)       <model type='virtio'/>
	I1104 10:54:15.080186   37715 main.go:141] libmachine: (ha-931571-m03)     </interface>
	I1104 10:54:15.080191   37715 main.go:141] libmachine: (ha-931571-m03)     <serial type='pty'>
	I1104 10:54:15.080196   37715 main.go:141] libmachine: (ha-931571-m03)       <target port='0'/>
	I1104 10:54:15.080200   37715 main.go:141] libmachine: (ha-931571-m03)     </serial>
	I1104 10:54:15.080205   37715 main.go:141] libmachine: (ha-931571-m03)     <console type='pty'>
	I1104 10:54:15.080209   37715 main.go:141] libmachine: (ha-931571-m03)       <target type='serial' port='0'/>
	I1104 10:54:15.080214   37715 main.go:141] libmachine: (ha-931571-m03)     </console>
	I1104 10:54:15.080218   37715 main.go:141] libmachine: (ha-931571-m03)     <rng model='virtio'>
	I1104 10:54:15.080224   37715 main.go:141] libmachine: (ha-931571-m03)       <backend model='random'>/dev/random</backend>
	I1104 10:54:15.080230   37715 main.go:141] libmachine: (ha-931571-m03)     </rng>
	I1104 10:54:15.080236   37715 main.go:141] libmachine: (ha-931571-m03)     
	I1104 10:54:15.080243   37715 main.go:141] libmachine: (ha-931571-m03)     
	I1104 10:54:15.080248   37715 main.go:141] libmachine: (ha-931571-m03)   </devices>
	I1104 10:54:15.080254   37715 main.go:141] libmachine: (ha-931571-m03) </domain>
	I1104 10:54:15.080261   37715 main.go:141] libmachine: (ha-931571-m03) 
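The block above is the libvirt domain XML the kvm2 driver defines for ha-931571-m03: boot from the boot2docker ISO on a SCSI cdrom, the raw disk as a virtio device, two virtio NICs (one on the private mk-ha-931571 network, one on the default network), a serial console, and a virtio RNG. Outside minikube an equivalent domain could be registered with virsh; the helper below is a hypothetical sketch using os/exec, not the driver's actual libvirt bindings.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// defineAndStart writes the domain XML to a temp file, registers it with
// libvirt via `virsh define`, and boots it with `virsh start`.
func defineAndStart(name, domainXML string) error {
	f, err := os.CreateTemp("", name+"-*.xml")
	if err != nil {
		return err
	}
	defer os.Remove(f.Name())
	if _, err := f.WriteString(domainXML); err != nil {
		return err
	}
	f.Close()

	for _, args := range [][]string{
		{"define", f.Name()},
		{"start", name},
	} {
		cmd := exec.Command("virsh", append([]string{"-c", "qemu:///system"}, args...)...)
		if out, err := cmd.CombinedOutput(); err != nil {
			return fmt.Errorf("virsh %v: %v: %s", args, err, out)
		}
	}
	return nil
}

func main() {
	if len(os.Args) != 2 {
		fmt.Fprintln(os.Stderr, "usage: definevm <domain.xml>")
		os.Exit(1)
	}
	xmlBytes, err := os.ReadFile(os.Args[1])
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if err := defineAndStart("ha-931571-m03", string(xmlBytes)); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}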
	I1104 10:54:15.087034   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined MAC address 52:54:00:1d:68:f5 in network default
	I1104 10:54:15.087544   37715 main.go:141] libmachine: (ha-931571-m03) Ensuring networks are active...
	I1104 10:54:15.087568   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:15.088354   37715 main.go:141] libmachine: (ha-931571-m03) Ensuring network default is active
	I1104 10:54:15.088653   37715 main.go:141] libmachine: (ha-931571-m03) Ensuring network mk-ha-931571 is active
	I1104 10:54:15.089053   37715 main.go:141] libmachine: (ha-931571-m03) Getting domain xml...
	I1104 10:54:15.089835   37715 main.go:141] libmachine: (ha-931571-m03) Creating domain...
	I1104 10:54:16.314267   37715 main.go:141] libmachine: (ha-931571-m03) Waiting to get IP...
	I1104 10:54:16.315295   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:16.315802   37715 main.go:141] libmachine: (ha-931571-m03) DBG | unable to find current IP address of domain ha-931571-m03 in network mk-ha-931571
	I1104 10:54:16.315837   37715 main.go:141] libmachine: (ha-931571-m03) DBG | I1104 10:54:16.315784   38465 retry.go:31] will retry after 211.49676ms: waiting for machine to come up
	I1104 10:54:16.528417   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:16.528897   37715 main.go:141] libmachine: (ha-931571-m03) DBG | unable to find current IP address of domain ha-931571-m03 in network mk-ha-931571
	I1104 10:54:16.528927   37715 main.go:141] libmachine: (ha-931571-m03) DBG | I1104 10:54:16.528846   38465 retry.go:31] will retry after 340.441068ms: waiting for machine to come up
	I1104 10:54:16.871525   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:16.871971   37715 main.go:141] libmachine: (ha-931571-m03) DBG | unable to find current IP address of domain ha-931571-m03 in network mk-ha-931571
	I1104 10:54:16.871997   37715 main.go:141] libmachine: (ha-931571-m03) DBG | I1104 10:54:16.871910   38465 retry.go:31] will retry after 446.439393ms: waiting for machine to come up
	I1104 10:54:17.319543   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:17.320106   37715 main.go:141] libmachine: (ha-931571-m03) DBG | unable to find current IP address of domain ha-931571-m03 in network mk-ha-931571
	I1104 10:54:17.320137   37715 main.go:141] libmachine: (ha-931571-m03) DBG | I1104 10:54:17.320042   38465 retry.go:31] will retry after 381.839641ms: waiting for machine to come up
	I1104 10:54:17.703288   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:17.703811   37715 main.go:141] libmachine: (ha-931571-m03) DBG | unable to find current IP address of domain ha-931571-m03 in network mk-ha-931571
	I1104 10:54:17.703840   37715 main.go:141] libmachine: (ha-931571-m03) DBG | I1104 10:54:17.703750   38465 retry.go:31] will retry after 593.813893ms: waiting for machine to come up
	I1104 10:54:18.299510   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:18.300023   37715 main.go:141] libmachine: (ha-931571-m03) DBG | unable to find current IP address of domain ha-931571-m03 in network mk-ha-931571
	I1104 10:54:18.300055   37715 main.go:141] libmachine: (ha-931571-m03) DBG | I1104 10:54:18.299939   38465 retry.go:31] will retry after 849.789348ms: waiting for machine to come up
	I1104 10:54:19.151490   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:19.151964   37715 main.go:141] libmachine: (ha-931571-m03) DBG | unable to find current IP address of domain ha-931571-m03 in network mk-ha-931571
	I1104 10:54:19.151988   37715 main.go:141] libmachine: (ha-931571-m03) DBG | I1104 10:54:19.151922   38465 retry.go:31] will retry after 1.150337712s: waiting for machine to come up
	I1104 10:54:20.303915   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:20.304325   37715 main.go:141] libmachine: (ha-931571-m03) DBG | unable to find current IP address of domain ha-931571-m03 in network mk-ha-931571
	I1104 10:54:20.304357   37715 main.go:141] libmachine: (ha-931571-m03) DBG | I1104 10:54:20.304278   38465 retry.go:31] will retry after 1.472559033s: waiting for machine to come up
	I1104 10:54:21.778305   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:21.778784   37715 main.go:141] libmachine: (ha-931571-m03) DBG | unable to find current IP address of domain ha-931571-m03 in network mk-ha-931571
	I1104 10:54:21.778810   37715 main.go:141] libmachine: (ha-931571-m03) DBG | I1104 10:54:21.778723   38465 retry.go:31] will retry after 1.37004444s: waiting for machine to come up
	I1104 10:54:23.150404   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:23.150868   37715 main.go:141] libmachine: (ha-931571-m03) DBG | unable to find current IP address of domain ha-931571-m03 in network mk-ha-931571
	I1104 10:54:23.150895   37715 main.go:141] libmachine: (ha-931571-m03) DBG | I1104 10:54:23.150820   38465 retry.go:31] will retry after 1.893583796s: waiting for machine to come up
	I1104 10:54:25.045832   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:25.046288   37715 main.go:141] libmachine: (ha-931571-m03) DBG | unable to find current IP address of domain ha-931571-m03 in network mk-ha-931571
	I1104 10:54:25.046327   37715 main.go:141] libmachine: (ha-931571-m03) DBG | I1104 10:54:25.046279   38465 retry.go:31] will retry after 2.056345872s: waiting for machine to come up
	I1104 10:54:27.105382   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:27.105822   37715 main.go:141] libmachine: (ha-931571-m03) DBG | unable to find current IP address of domain ha-931571-m03 in network mk-ha-931571
	I1104 10:54:27.105853   37715 main.go:141] libmachine: (ha-931571-m03) DBG | I1104 10:54:27.105789   38465 retry.go:31] will retry after 3.414780128s: waiting for machine to come up
	I1104 10:54:30.521832   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:30.522159   37715 main.go:141] libmachine: (ha-931571-m03) DBG | unable to find current IP address of domain ha-931571-m03 in network mk-ha-931571
	I1104 10:54:30.522181   37715 main.go:141] libmachine: (ha-931571-m03) DBG | I1104 10:54:30.522080   38465 retry.go:31] will retry after 3.340201347s: waiting for machine to come up
	I1104 10:54:33.865562   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:33.865973   37715 main.go:141] libmachine: (ha-931571-m03) DBG | unable to find current IP address of domain ha-931571-m03 in network mk-ha-931571
	I1104 10:54:33.866003   37715 main.go:141] libmachine: (ha-931571-m03) DBG | I1104 10:54:33.865938   38465 retry.go:31] will retry after 5.278208954s: waiting for machine to come up
	I1104 10:54:39.149712   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:39.150250   37715 main.go:141] libmachine: (ha-931571-m03) Found IP for machine: 192.168.39.57
	I1104 10:54:39.150283   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has current primary IP address 192.168.39.57 and MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:39.150292   37715 main.go:141] libmachine: (ha-931571-m03) Reserving static IP address...
	I1104 10:54:39.150676   37715 main.go:141] libmachine: (ha-931571-m03) DBG | unable to find host DHCP lease matching {name: "ha-931571-m03", mac: "52:54:00:30:f5:de", ip: "192.168.39.57"} in network mk-ha-931571
	I1104 10:54:39.223412   37715 main.go:141] libmachine: (ha-931571-m03) DBG | Getting to WaitForSSH function...
	I1104 10:54:39.223438   37715 main.go:141] libmachine: (ha-931571-m03) Reserved static IP address: 192.168.39.57
	I1104 10:54:39.223450   37715 main.go:141] libmachine: (ha-931571-m03) Waiting for SSH to be available...
	I1104 10:54:39.226810   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:39.227204   37715 main.go:141] libmachine: (ha-931571-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f5:de", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:54:28 +0000 UTC Type:0 Mac:52:54:00:30:f5:de Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:minikube Clientid:01:52:54:00:30:f5:de}
	I1104 10:54:39.227229   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined IP address 192.168.39.57 and MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:39.227416   37715 main.go:141] libmachine: (ha-931571-m03) DBG | Using SSH client type: external
	I1104 10:54:39.227440   37715 main.go:141] libmachine: (ha-931571-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571-m03/id_rsa (-rw-------)
	I1104 10:54:39.227467   37715 main.go:141] libmachine: (ha-931571-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.57 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1104 10:54:39.227480   37715 main.go:141] libmachine: (ha-931571-m03) DBG | About to run SSH command:
	I1104 10:54:39.227493   37715 main.go:141] libmachine: (ha-931571-m03) DBG | exit 0
	I1104 10:54:39.348849   37715 main.go:141] libmachine: (ha-931571-m03) DBG | SSH cmd err, output: <nil>: 
	I1104 10:54:39.349130   37715 main.go:141] libmachine: (ha-931571-m03) KVM machine creation complete!
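Between 10:54:16 and 10:54:39 the driver repeatedly polls libvirt's DHCP leases for the new MAC address, sleeping a little longer after each miss ("will retry after ..."), and then confirms the guest is reachable by running exit 0 over SSH. A generic sketch of that retry-with-backoff shape follows; the probe closure is a stand-in for the lease lookup, not minikube's retry.go.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff keeps calling probe until it succeeds or the deadline
// passes, sleeping a growing, jittered delay after each failure -- the same
// shape as the "will retry after ..." lines in the log above.
func retryWithBackoff(deadline time.Duration, probe func() error) error {
	stop := time.Now().Add(deadline)
	wait := 200 * time.Millisecond
	for attempt := 1; ; attempt++ {
		err := probe()
		if err == nil {
			return nil
		}
		if time.Now().After(stop) {
			return fmt.Errorf("gave up after %d attempts: %w", attempt, err)
		}
		sleep := wait + time.Duration(rand.Int63n(int64(wait)))
		fmt.Printf("attempt %d failed (%v), will retry after %v\n", attempt, err, sleep)
		time.Sleep(sleep)
		if wait < 5*time.Second {
			wait *= 2
		}
	}
}

func main() {
	start := time.Now()
	_ = retryWithBackoff(10*time.Second, func() error {
		// Stand-in for "look up the DHCP lease for MAC 52:54:00:30:f5:de".
		if time.Since(start) < 3*time.Second {
			return errors.New("unable to find current IP address")
		}
		return nil
	})
}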
	I1104 10:54:39.349458   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetConfigRaw
	I1104 10:54:39.350011   37715 main.go:141] libmachine: (ha-931571-m03) Calling .DriverName
	I1104 10:54:39.350175   37715 main.go:141] libmachine: (ha-931571-m03) Calling .DriverName
	I1104 10:54:39.350318   37715 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1104 10:54:39.350330   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetState
	I1104 10:54:39.351463   37715 main.go:141] libmachine: Detecting operating system of created instance...
	I1104 10:54:39.351478   37715 main.go:141] libmachine: Waiting for SSH to be available...
	I1104 10:54:39.351482   37715 main.go:141] libmachine: Getting to WaitForSSH function...
	I1104 10:54:39.351487   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHHostname
	I1104 10:54:39.353807   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:39.354106   37715 main.go:141] libmachine: (ha-931571-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f5:de", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:54:28 +0000 UTC Type:0 Mac:52:54:00:30:f5:de Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-931571-m03 Clientid:01:52:54:00:30:f5:de}
	I1104 10:54:39.354143   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined IP address 192.168.39.57 and MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:39.354349   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHPort
	I1104 10:54:39.354557   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHKeyPath
	I1104 10:54:39.354742   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHKeyPath
	I1104 10:54:39.354871   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHUsername
	I1104 10:54:39.355021   37715 main.go:141] libmachine: Using SSH client type: native
	I1104 10:54:39.355223   37715 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.57 22 <nil> <nil>}
	I1104 10:54:39.355234   37715 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1104 10:54:39.452207   37715 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1104 10:54:39.452228   37715 main.go:141] libmachine: Detecting the provisioner...
	I1104 10:54:39.452237   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHHostname
	I1104 10:54:39.455314   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:39.455778   37715 main.go:141] libmachine: (ha-931571-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f5:de", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:54:28 +0000 UTC Type:0 Mac:52:54:00:30:f5:de Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-931571-m03 Clientid:01:52:54:00:30:f5:de}
	I1104 10:54:39.455805   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined IP address 192.168.39.57 and MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:39.456043   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHPort
	I1104 10:54:39.456250   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHKeyPath
	I1104 10:54:39.456440   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHKeyPath
	I1104 10:54:39.456603   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHUsername
	I1104 10:54:39.456750   37715 main.go:141] libmachine: Using SSH client type: native
	I1104 10:54:39.456931   37715 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.57 22 <nil> <nil>}
	I1104 10:54:39.456953   37715 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1104 10:54:39.553854   37715 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1104 10:54:39.553946   37715 main.go:141] libmachine: found compatible host: buildroot
	I1104 10:54:39.553963   37715 main.go:141] libmachine: Provisioning with buildroot...
	I1104 10:54:39.553975   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetMachineName
	I1104 10:54:39.554231   37715 buildroot.go:166] provisioning hostname "ha-931571-m03"
	I1104 10:54:39.554253   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetMachineName
	I1104 10:54:39.554456   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHHostname
	I1104 10:54:39.556992   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:39.557348   37715 main.go:141] libmachine: (ha-931571-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f5:de", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:54:28 +0000 UTC Type:0 Mac:52:54:00:30:f5:de Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-931571-m03 Clientid:01:52:54:00:30:f5:de}
	I1104 10:54:39.557377   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined IP address 192.168.39.57 and MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:39.557532   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHPort
	I1104 10:54:39.557736   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHKeyPath
	I1104 10:54:39.557887   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHKeyPath
	I1104 10:54:39.558007   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHUsername
	I1104 10:54:39.558172   37715 main.go:141] libmachine: Using SSH client type: native
	I1104 10:54:39.558399   37715 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.57 22 <nil> <nil>}
	I1104 10:54:39.558418   37715 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-931571-m03 && echo "ha-931571-m03" | sudo tee /etc/hostname
	I1104 10:54:39.670668   37715 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-931571-m03
	
	I1104 10:54:39.670701   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHHostname
	I1104 10:54:39.674148   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:39.674467   37715 main.go:141] libmachine: (ha-931571-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f5:de", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:54:28 +0000 UTC Type:0 Mac:52:54:00:30:f5:de Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-931571-m03 Clientid:01:52:54:00:30:f5:de}
	I1104 10:54:39.674492   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined IP address 192.168.39.57 and MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:39.674738   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHPort
	I1104 10:54:39.674887   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHKeyPath
	I1104 10:54:39.675053   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHKeyPath
	I1104 10:54:39.675250   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHUsername
	I1104 10:54:39.675459   37715 main.go:141] libmachine: Using SSH client type: native
	I1104 10:54:39.675678   37715 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.57 22 <nil> <nil>}
	I1104 10:54:39.675703   37715 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-931571-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-931571-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-931571-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1104 10:54:39.782022   37715 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1104 10:54:39.782049   37715 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19906-19898/.minikube CaCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19906-19898/.minikube}
	I1104 10:54:39.782068   37715 buildroot.go:174] setting up certificates
	I1104 10:54:39.782080   37715 provision.go:84] configureAuth start
	I1104 10:54:39.782091   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetMachineName
	I1104 10:54:39.782349   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetIP
	I1104 10:54:39.785051   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:39.785459   37715 main.go:141] libmachine: (ha-931571-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f5:de", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:54:28 +0000 UTC Type:0 Mac:52:54:00:30:f5:de Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-931571-m03 Clientid:01:52:54:00:30:f5:de}
	I1104 10:54:39.785488   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined IP address 192.168.39.57 and MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:39.785656   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHHostname
	I1104 10:54:39.787833   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:39.788124   37715 main.go:141] libmachine: (ha-931571-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f5:de", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:54:28 +0000 UTC Type:0 Mac:52:54:00:30:f5:de Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-931571-m03 Clientid:01:52:54:00:30:f5:de}
	I1104 10:54:39.788141   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined IP address 192.168.39.57 and MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:39.788305   37715 provision.go:143] copyHostCerts
	I1104 10:54:39.788334   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem
	I1104 10:54:39.788369   37715 exec_runner.go:144] found /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem, removing ...
	I1104 10:54:39.788378   37715 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem
	I1104 10:54:39.788442   37715 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem (1078 bytes)
	I1104 10:54:39.788557   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem
	I1104 10:54:39.788577   37715 exec_runner.go:144] found /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem, removing ...
	I1104 10:54:39.788584   37715 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem
	I1104 10:54:39.788610   37715 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem (1123 bytes)
	I1104 10:54:39.788656   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem
	I1104 10:54:39.788673   37715 exec_runner.go:144] found /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem, removing ...
	I1104 10:54:39.788679   37715 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem
	I1104 10:54:39.788700   37715 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem (1679 bytes)
	I1104 10:54:39.788771   37715 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem org=jenkins.ha-931571-m03 san=[127.0.0.1 192.168.39.57 ha-931571-m03 localhost minikube]
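configureAuth issues a per-machine server certificate signed by the shared minikube CA, with the machine's address and names in the SAN list (127.0.0.1, 192.168.39.57, ha-931571-m03, localhost, minikube above). The sketch below shows how such a SAN-bearing certificate can be produced with crypto/x509; it creates a throwaway CA in place of loading ca.pem/ca-key.pem, so it is illustrative only and not minikube's provision code.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA standing in for minikube's ca.pem / ca-key.pem.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with the SANs used for ha-931571-m03.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-931571-m03"}},
		DNSNames:     []string{"ha-931571-m03", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.57")},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config above
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}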
	I1104 10:54:39.906066   37715 provision.go:177] copyRemoteCerts
	I1104 10:54:39.906121   37715 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1104 10:54:39.906156   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHHostname
	I1104 10:54:39.909171   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:39.909602   37715 main.go:141] libmachine: (ha-931571-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f5:de", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:54:28 +0000 UTC Type:0 Mac:52:54:00:30:f5:de Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-931571-m03 Clientid:01:52:54:00:30:f5:de}
	I1104 10:54:39.909633   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined IP address 192.168.39.57 and MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:39.909904   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHPort
	I1104 10:54:39.910114   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHKeyPath
	I1104 10:54:39.910451   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHUsername
	I1104 10:54:39.910562   37715 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571-m03/id_rsa Username:docker}
	I1104 10:54:39.986932   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1104 10:54:39.986995   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1104 10:54:40.011798   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1104 10:54:40.011899   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1104 10:54:40.035728   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1104 10:54:40.035811   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1104 10:54:40.058737   37715 provision.go:87] duration metric: took 276.643486ms to configureAuth
	I1104 10:54:40.058767   37715 buildroot.go:189] setting minikube options for container-runtime
	I1104 10:54:40.058982   37715 config.go:182] Loaded profile config "ha-931571": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1104 10:54:40.059060   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHHostname
	I1104 10:54:40.061592   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:40.061918   37715 main.go:141] libmachine: (ha-931571-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f5:de", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:54:28 +0000 UTC Type:0 Mac:52:54:00:30:f5:de Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-931571-m03 Clientid:01:52:54:00:30:f5:de}
	I1104 10:54:40.061947   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined IP address 192.168.39.57 and MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:40.062136   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHPort
	I1104 10:54:40.062313   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHKeyPath
	I1104 10:54:40.062493   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHKeyPath
	I1104 10:54:40.062627   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHUsername
	I1104 10:54:40.062779   37715 main.go:141] libmachine: Using SSH client type: native
	I1104 10:54:40.062931   37715 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.57 22 <nil> <nil>}
	I1104 10:54:40.062946   37715 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1104 10:54:40.285341   37715 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1104 10:54:40.285362   37715 main.go:141] libmachine: Checking connection to Docker...
	I1104 10:54:40.285369   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetURL
	I1104 10:54:40.286607   37715 main.go:141] libmachine: (ha-931571-m03) DBG | Using libvirt version 6000000
	I1104 10:54:40.288784   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:40.289099   37715 main.go:141] libmachine: (ha-931571-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f5:de", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:54:28 +0000 UTC Type:0 Mac:52:54:00:30:f5:de Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-931571-m03 Clientid:01:52:54:00:30:f5:de}
	I1104 10:54:40.289130   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined IP address 192.168.39.57 and MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:40.289303   37715 main.go:141] libmachine: Docker is up and running!
	I1104 10:54:40.289319   37715 main.go:141] libmachine: Reticulating splines...
	I1104 10:54:40.289326   37715 client.go:171] duration metric: took 25.618399312s to LocalClient.Create
	I1104 10:54:40.289350   37715 start.go:167] duration metric: took 25.618478892s to libmachine.API.Create "ha-931571"
	I1104 10:54:40.289362   37715 start.go:293] postStartSetup for "ha-931571-m03" (driver="kvm2")
	I1104 10:54:40.289391   37715 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1104 10:54:40.289407   37715 main.go:141] libmachine: (ha-931571-m03) Calling .DriverName
	I1104 10:54:40.289628   37715 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1104 10:54:40.289653   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHHostname
	I1104 10:54:40.291922   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:40.292338   37715 main.go:141] libmachine: (ha-931571-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f5:de", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:54:28 +0000 UTC Type:0 Mac:52:54:00:30:f5:de Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-931571-m03 Clientid:01:52:54:00:30:f5:de}
	I1104 10:54:40.292358   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined IP address 192.168.39.57 and MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:40.292590   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHPort
	I1104 10:54:40.292774   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHKeyPath
	I1104 10:54:40.292922   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHUsername
	I1104 10:54:40.293081   37715 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571-m03/id_rsa Username:docker}
	I1104 10:54:40.371198   37715 ssh_runner.go:195] Run: cat /etc/os-release
	I1104 10:54:40.375533   37715 info.go:137] Remote host: Buildroot 2023.02.9
	I1104 10:54:40.375563   37715 filesync.go:126] Scanning /home/jenkins/minikube-integration/19906-19898/.minikube/addons for local assets ...
	I1104 10:54:40.375682   37715 filesync.go:126] Scanning /home/jenkins/minikube-integration/19906-19898/.minikube/files for local assets ...
	I1104 10:54:40.375780   37715 filesync.go:149] local asset: /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem -> 272182.pem in /etc/ssl/certs
	I1104 10:54:40.375790   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem -> /etc/ssl/certs/272182.pem
	I1104 10:54:40.375871   37715 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1104 10:54:40.385684   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem --> /etc/ssl/certs/272182.pem (1708 bytes)
	I1104 10:54:40.408674   37715 start.go:296] duration metric: took 119.284792ms for postStartSetup
	I1104 10:54:40.408723   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetConfigRaw
	I1104 10:54:40.409449   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetIP
	I1104 10:54:40.412211   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:40.412561   37715 main.go:141] libmachine: (ha-931571-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f5:de", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:54:28 +0000 UTC Type:0 Mac:52:54:00:30:f5:de Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-931571-m03 Clientid:01:52:54:00:30:f5:de}
	I1104 10:54:40.412589   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined IP address 192.168.39.57 and MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:40.412888   37715 profile.go:143] Saving config to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/config.json ...
	I1104 10:54:40.413122   37715 start.go:128] duration metric: took 25.760559258s to createHost
	I1104 10:54:40.413150   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHHostname
	I1104 10:54:40.415473   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:40.415825   37715 main.go:141] libmachine: (ha-931571-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f5:de", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:54:28 +0000 UTC Type:0 Mac:52:54:00:30:f5:de Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-931571-m03 Clientid:01:52:54:00:30:f5:de}
	I1104 10:54:40.415846   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined IP address 192.168.39.57 and MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:40.415970   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHPort
	I1104 10:54:40.416207   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHKeyPath
	I1104 10:54:40.416371   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHKeyPath
	I1104 10:54:40.416538   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHUsername
	I1104 10:54:40.416702   37715 main.go:141] libmachine: Using SSH client type: native
	I1104 10:54:40.416875   37715 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.57 22 <nil> <nil>}
	I1104 10:54:40.416888   37715 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1104 10:54:40.513907   37715 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730717680.493900775
	
	I1104 10:54:40.513930   37715 fix.go:216] guest clock: 1730717680.493900775
	I1104 10:54:40.513937   37715 fix.go:229] Guest: 2024-11-04 10:54:40.493900775 +0000 UTC Remote: 2024-11-04 10:54:40.413138421 +0000 UTC m=+139.084656658 (delta=80.762354ms)
	I1104 10:54:40.513952   37715 fix.go:200] guest clock delta is within tolerance: 80.762354ms
	I1104 10:54:40.513957   37715 start.go:83] releasing machines lock for "ha-931571-m03", held for 25.861527752s
	I1104 10:54:40.513977   37715 main.go:141] libmachine: (ha-931571-m03) Calling .DriverName
	I1104 10:54:40.514219   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetIP
	I1104 10:54:40.516861   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:40.517293   37715 main.go:141] libmachine: (ha-931571-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f5:de", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:54:28 +0000 UTC Type:0 Mac:52:54:00:30:f5:de Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-931571-m03 Clientid:01:52:54:00:30:f5:de}
	I1104 10:54:40.517318   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined IP address 192.168.39.57 and MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:40.519824   37715 out.go:177] * Found network options:
	I1104 10:54:40.521282   37715 out.go:177]   - NO_PROXY=192.168.39.67,192.168.39.245
	W1104 10:54:40.522546   37715 proxy.go:119] fail to check proxy env: Error ip not in block
	W1104 10:54:40.522569   37715 proxy.go:119] fail to check proxy env: Error ip not in block
	I1104 10:54:40.522586   37715 main.go:141] libmachine: (ha-931571-m03) Calling .DriverName
	I1104 10:54:40.523178   37715 main.go:141] libmachine: (ha-931571-m03) Calling .DriverName
	I1104 10:54:40.523386   37715 main.go:141] libmachine: (ha-931571-m03) Calling .DriverName
	I1104 10:54:40.523502   37715 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1104 10:54:40.523543   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHHostname
	W1104 10:54:40.523621   37715 proxy.go:119] fail to check proxy env: Error ip not in block
	W1104 10:54:40.523648   37715 proxy.go:119] fail to check proxy env: Error ip not in block
	I1104 10:54:40.523705   37715 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1104 10:54:40.523726   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHHostname
	I1104 10:54:40.526526   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:40.526600   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:40.526878   37715 main.go:141] libmachine: (ha-931571-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f5:de", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:54:28 +0000 UTC Type:0 Mac:52:54:00:30:f5:de Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-931571-m03 Clientid:01:52:54:00:30:f5:de}
	I1104 10:54:40.526907   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined IP address 192.168.39.57 and MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:40.526933   37715 main.go:141] libmachine: (ha-931571-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f5:de", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:54:28 +0000 UTC Type:0 Mac:52:54:00:30:f5:de Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-931571-m03 Clientid:01:52:54:00:30:f5:de}
	I1104 10:54:40.526947   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined IP address 192.168.39.57 and MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:40.527005   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHPort
	I1104 10:54:40.527178   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHKeyPath
	I1104 10:54:40.527307   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHPort
	I1104 10:54:40.527380   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHUsername
	I1104 10:54:40.527467   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHKeyPath
	I1104 10:54:40.527533   37715 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571-m03/id_rsa Username:docker}
	I1104 10:54:40.527573   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHUsername
	I1104 10:54:40.527722   37715 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571-m03/id_rsa Username:docker}
	I1104 10:54:40.761284   37715 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1104 10:54:40.766951   37715 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1104 10:54:40.767028   37715 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1104 10:54:40.784061   37715 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1104 10:54:40.784083   37715 start.go:495] detecting cgroup driver to use...
	I1104 10:54:40.784139   37715 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1104 10:54:40.799767   37715 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1104 10:54:40.814033   37715 docker.go:217] disabling cri-docker service (if available) ...
	I1104 10:54:40.814100   37715 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1104 10:54:40.828095   37715 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1104 10:54:40.843053   37715 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1104 10:54:40.959422   37715 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1104 10:54:41.119792   37715 docker.go:233] disabling docker service ...
	I1104 10:54:41.119859   37715 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1104 10:54:41.134123   37715 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1104 10:54:41.147262   37715 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1104 10:54:41.281486   37715 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1104 10:54:41.401330   37715 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1104 10:54:41.415018   37715 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1104 10:54:41.433640   37715 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1104 10:54:41.433713   37715 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 10:54:41.444506   37715 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1104 10:54:41.444582   37715 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 10:54:41.456767   37715 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 10:54:41.467306   37715 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 10:54:41.477809   37715 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1104 10:54:41.488160   37715 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 10:54:41.498689   37715 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 10:54:41.515679   37715 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 10:54:41.526763   37715 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1104 10:54:41.536412   37715 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1104 10:54:41.536469   37715 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1104 10:54:41.549448   37715 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1104 10:54:41.559807   37715 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1104 10:54:41.665655   37715 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1104 10:54:41.758091   37715 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1104 10:54:41.758187   37715 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1104 10:54:41.762517   37715 start.go:563] Will wait 60s for crictl version
	I1104 10:54:41.762572   37715 ssh_runner.go:195] Run: which crictl
	I1104 10:54:41.766429   37715 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1104 10:54:41.804303   37715 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1104 10:54:41.804420   37715 ssh_runner.go:195] Run: crio --version
	I1104 10:54:41.830473   37715 ssh_runner.go:195] Run: crio --version
	I1104 10:54:41.860302   37715 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1104 10:54:41.861621   37715 out.go:177]   - env NO_PROXY=192.168.39.67
	I1104 10:54:41.863004   37715 out.go:177]   - env NO_PROXY=192.168.39.67,192.168.39.245
	I1104 10:54:41.864263   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetIP
	I1104 10:54:41.867052   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:41.867423   37715 main.go:141] libmachine: (ha-931571-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f5:de", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:54:28 +0000 UTC Type:0 Mac:52:54:00:30:f5:de Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-931571-m03 Clientid:01:52:54:00:30:f5:de}
	I1104 10:54:41.867446   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined IP address 192.168.39.57 and MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:41.867651   37715 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1104 10:54:41.871716   37715 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
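The one-liner above is an idempotent hosts-file update: grep first checks whether host.minikube.internal is already mapped, and if not the file is rewritten with any stale entry filtered out and the fresh mapping appended. A rough Go equivalent of that ensure-entry pattern, written against a /tmp path so it can run harmlessly; the helper name is illustrative, not minikube's code:

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // ensureHostsEntry makes sure the hosts file maps name to ip exactly once,
    // mirroring the grep-then-rewrite pattern of the shell one-liner above.
    func ensureHostsEntry(path, ip, name string) error {
    	data, err := os.ReadFile(path)
    	if err != nil && !os.IsNotExist(err) {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		fields := strings.Fields(line)
    		if len(fields) >= 2 && fields[len(fields)-1] == name {
    			continue // drop any stale entry for this hostname
    		}
    		kept = append(kept, line)
    	}
    	kept = append(kept, ip+"\t"+name)
    	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
    	// A /tmp path is used here so the sketch does not touch the real /etc/hosts.
    	err := ensureHostsEntry("/tmp/hosts.example", "192.168.39.1", "host.minikube.internal")
    	fmt.Println(err)
    }
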
	I1104 10:54:41.884015   37715 mustload.go:65] Loading cluster: ha-931571
	I1104 10:54:41.884230   37715 config.go:182] Loaded profile config "ha-931571": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1104 10:54:41.884480   37715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 10:54:41.884518   37715 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 10:54:41.900117   37715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41207
	I1104 10:54:41.900610   37715 main.go:141] libmachine: () Calling .GetVersion
	I1104 10:54:41.901163   37715 main.go:141] libmachine: Using API Version  1
	I1104 10:54:41.901184   37715 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 10:54:41.901516   37715 main.go:141] libmachine: () Calling .GetMachineName
	I1104 10:54:41.901701   37715 main.go:141] libmachine: (ha-931571) Calling .GetState
	I1104 10:54:41.903124   37715 host.go:66] Checking if "ha-931571" exists ...
	I1104 10:54:41.903396   37715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 10:54:41.903433   37715 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 10:54:41.918029   37715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40437
	I1104 10:54:41.918566   37715 main.go:141] libmachine: () Calling .GetVersion
	I1104 10:54:41.919028   37715 main.go:141] libmachine: Using API Version  1
	I1104 10:54:41.919050   37715 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 10:54:41.919333   37715 main.go:141] libmachine: () Calling .GetMachineName
	I1104 10:54:41.919520   37715 main.go:141] libmachine: (ha-931571) Calling .DriverName
	I1104 10:54:41.919673   37715 certs.go:68] Setting up /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571 for IP: 192.168.39.57
	I1104 10:54:41.919684   37715 certs.go:194] generating shared ca certs ...
	I1104 10:54:41.919697   37715 certs.go:226] acquiring lock for ca certs: {Name:mk4fb0469da6697654d0290fd25edddd463f47c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 10:54:41.919810   37715 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.key
	I1104 10:54:41.919845   37715 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.key
	I1104 10:54:41.919854   37715 certs.go:256] generating profile certs ...
	I1104 10:54:41.919922   37715 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/client.key
	I1104 10:54:41.919946   37715 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.key.a50c38dd
	I1104 10:54:41.919960   37715 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.crt.a50c38dd with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.67 192.168.39.245 192.168.39.57 192.168.39.254]
	I1104 10:54:42.049039   37715 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.crt.a50c38dd ...
	I1104 10:54:42.049068   37715 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.crt.a50c38dd: {Name:mk425b204dd51c6129591dbbf4cda0b66e34eb56 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 10:54:42.049239   37715 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.key.a50c38dd ...
	I1104 10:54:42.049250   37715 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.key.a50c38dd: {Name:mk1230635dbd65cb8c7d025a3549f17dc35e060e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 10:54:42.049322   37715 certs.go:381] copying /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.crt.a50c38dd -> /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.crt
	I1104 10:54:42.049449   37715 certs.go:385] copying /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.key.a50c38dd -> /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.key
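The apiserver serving certificate generated above has to list every address a client might dial: the in-cluster service IP, localhost, the three control-plane node IPs, and the kube-vip VIP 192.168.39.254. A self-contained sketch of issuing such a SAN-bearing certificate from a CA with Go's crypto/x509; the throwaway CA, ECDSA keys, and validity period are illustrative only, not how minikube's certs.go does it:

    package main

    import (
    	"crypto/ecdsa"
    	"crypto/elliptic"
    	"crypto/rand"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// Throwaway CA standing in for minikubeCA; the real one is loaded from disk.
    	caKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().Add(24 * time.Hour),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	caCert, _ := x509.ParseCertificate(caDER)

    	// Serving certificate carrying the same kind of SAN list seen in the log.
    	srvKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
    	srvTmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{CommonName: "minikube"},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		IPAddresses: []net.IP{
    			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
    			net.ParseIP("192.168.39.67"), net.ParseIP("192.168.39.245"),
    			net.ParseIP("192.168.39.57"), net.ParseIP("192.168.39.254"),
    		},
    	}
    	der, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
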
	I1104 10:54:42.049564   37715 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/proxy-client.key
	I1104 10:54:42.049580   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1104 10:54:42.049595   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1104 10:54:42.049608   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1104 10:54:42.049621   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1104 10:54:42.049634   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1104 10:54:42.049647   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1104 10:54:42.049657   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1104 10:54:42.049669   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1104 10:54:42.049713   37715 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218.pem (1338 bytes)
	W1104 10:54:42.049741   37715 certs.go:480] ignoring /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218_empty.pem, impossibly tiny 0 bytes
	I1104 10:54:42.049750   37715 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem (1675 bytes)
	I1104 10:54:42.049771   37715 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem (1078 bytes)
	I1104 10:54:42.049799   37715 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem (1123 bytes)
	I1104 10:54:42.049819   37715 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem (1679 bytes)
	I1104 10:54:42.049855   37715 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem (1708 bytes)
	I1104 10:54:42.049880   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem -> /usr/share/ca-certificates/272182.pem
	I1104 10:54:42.049893   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1104 10:54:42.049905   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218.pem -> /usr/share/ca-certificates/27218.pem
	I1104 10:54:42.049934   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHHostname
	I1104 10:54:42.052637   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:54:42.053074   37715 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 10:54:42.053102   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:54:42.053289   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHPort
	I1104 10:54:42.053475   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 10:54:42.053607   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHUsername
	I1104 10:54:42.053769   37715 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571/id_rsa Username:docker}
	I1104 10:54:42.125617   37715 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1104 10:54:42.129901   37715 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1104 10:54:42.141111   37715 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1104 10:54:42.145054   37715 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1104 10:54:42.154954   37715 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1104 10:54:42.158822   37715 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1104 10:54:42.168976   37715 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1104 10:54:42.172887   37715 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1104 10:54:42.182649   37715 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1104 10:54:42.186455   37715 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1104 10:54:42.196466   37715 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1104 10:54:42.200376   37715 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1104 10:54:42.211239   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1104 10:54:42.236618   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1104 10:54:42.260726   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1104 10:54:42.283147   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1104 10:54:42.305271   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I1104 10:54:42.327703   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1104 10:54:42.350340   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1104 10:54:42.372114   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1104 10:54:42.394125   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem --> /usr/share/ca-certificates/272182.pem (1708 bytes)
	I1104 10:54:42.415761   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1104 10:54:42.437284   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218.pem --> /usr/share/ca-certificates/27218.pem (1338 bytes)
	I1104 10:54:42.458545   37715 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1104 10:54:42.474091   37715 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1104 10:54:42.489871   37715 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1104 10:54:42.505378   37715 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1104 10:54:42.521116   37715 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1104 10:54:42.537323   37715 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1104 10:54:42.553306   37715 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1104 10:54:42.569157   37715 ssh_runner.go:195] Run: openssl version
	I1104 10:54:42.574422   37715 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/27218.pem && ln -fs /usr/share/ca-certificates/27218.pem /etc/ssl/certs/27218.pem"
	I1104 10:54:42.584560   37715 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/27218.pem
	I1104 10:54:42.588538   37715 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  4 10:49 /usr/share/ca-certificates/27218.pem
	I1104 10:54:42.588592   37715 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/27218.pem
	I1104 10:54:42.594056   37715 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/27218.pem /etc/ssl/certs/51391683.0"
	I1104 10:54:42.604559   37715 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/272182.pem && ln -fs /usr/share/ca-certificates/272182.pem /etc/ssl/certs/272182.pem"
	I1104 10:54:42.615717   37715 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/272182.pem
	I1104 10:54:42.619821   37715 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  4 10:49 /usr/share/ca-certificates/272182.pem
	I1104 10:54:42.619868   37715 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/272182.pem
	I1104 10:54:42.625153   37715 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/272182.pem /etc/ssl/certs/3ec20f2e.0"
	I1104 10:54:42.638993   37715 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1104 10:54:42.649427   37715 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1104 10:54:42.653431   37715 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  4 10:38 /usr/share/ca-certificates/minikubeCA.pem
	I1104 10:54:42.653483   37715 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1104 10:54:42.658834   37715 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
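Each CA file copied into /usr/share/ca-certificates is linked into /etc/ssl/certs under its OpenSSL subject hash (b5213941.0 for minikubeCA.pem above) so the system trust store can resolve it by hash. A small sketch of deriving that link name by shelling out to openssl, analogous to the `openssl x509 -hash` plus `ln -fs` pair in the log; paths are illustrative:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // linkBySubjectHash creates <certsDir>/<hash>.0 -> certPath, the naming
    // scheme OpenSSL uses to look up trusted CAs.
    func linkBySubjectHash(certPath, certsDir string) (string, error) {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return "", err
    	}
    	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
    	// Replace any existing link, as "ln -fs" would.
    	_ = os.Remove(link)
    	return link, os.Symlink(certPath, link)
    }

    func main() {
    	link, err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
    	fmt.Println(link, err)
    }
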
	I1104 10:54:42.670960   37715 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1104 10:54:42.675173   37715 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1104 10:54:42.675237   37715 kubeadm.go:934] updating node {m03 192.168.39.57 8443 v1.31.2 crio true true} ...
	I1104 10:54:42.675332   37715 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-931571-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.57
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-931571 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1104 10:54:42.675370   37715 kube-vip.go:115] generating kube-vip config ...
	I1104 10:54:42.675419   37715 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1104 10:54:42.692549   37715 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1104 10:54:42.692627   37715 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.5
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
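This static pod is what keeps the shared control-plane endpoint 192.168.39.254:8443 reachable: every control plane node runs kube-vip, the instances elect a leader through the plndr-cp-lock lease, and the leader answers ARP for the VIP and load-balances port 8443. The per-node values (interface, VIP, port) are the parts that get filled in when the manifest is generated; a tiny text/template sketch of stamping them out, using an illustrative fragment rather than minikube's embedded template:

    package main

    import (
    	"os"
    	"text/template"
    )

    // A trimmed-down stand-in for the manifest above; only the values that vary
    // per cluster are templated.
    const vipFragment = `    - name: vip_interface
          value: {{ .Interface }}
        - name: address
          value: {{ .VIP }}
        - name: port
          value: "{{ .Port }}"
    `

    func main() {
    	tmpl := template.Must(template.New("kube-vip").Parse(vipFragment))
    	_ = tmpl.Execute(os.Stdout, struct {
    		Interface, VIP string
    		Port           int
    	}{Interface: "eth0", VIP: "192.168.39.254", Port: 8443})
    }
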
	I1104 10:54:42.692680   37715 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1104 10:54:42.702705   37715 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.2': No such file or directory
	
	Initiating transfer...
	I1104 10:54:42.702768   37715 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.2
	I1104 10:54:42.712640   37715 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
	I1104 10:54:42.712662   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/linux/amd64/v1.31.2/kubectl -> /var/lib/minikube/binaries/v1.31.2/kubectl
	I1104 10:54:42.712660   37715 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm.sha256
	I1104 10:54:42.712682   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/linux/amd64/v1.31.2/kubeadm -> /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1104 10:54:42.712648   37715 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256
	I1104 10:54:42.712715   37715 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl
	I1104 10:54:42.712727   37715 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1104 10:54:42.712752   37715 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1104 10:54:42.718694   37715 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubectl': No such file or directory
	I1104 10:54:42.718732   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/cache/linux/amd64/v1.31.2/kubectl --> /var/lib/minikube/binaries/v1.31.2/kubectl (56381592 bytes)
	I1104 10:54:42.746213   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/linux/amd64/v1.31.2/kubelet -> /var/lib/minikube/binaries/v1.31.2/kubelet
	I1104 10:54:42.746221   37715 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubeadm': No such file or directory
	I1104 10:54:42.746258   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/cache/linux/amd64/v1.31.2/kubeadm --> /var/lib/minikube/binaries/v1.31.2/kubeadm (58290328 bytes)
	I1104 10:54:42.746334   37715 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet
	I1104 10:54:42.789088   37715 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubelet': No such file or directory
	I1104 10:54:42.789130   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/cache/linux/amd64/v1.31.2/kubelet --> /var/lib/minikube/binaries/v1.31.2/kubelet (76902744 bytes)
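Because the fresh node has no cached Kubernetes binaries, kubectl, kubeadm and kubelet are downloaded from dl.k8s.io, checked against the published .sha256 files referenced in the log, and then pushed to /var/lib/minikube/binaries. A condensed sketch of that verify-then-use pattern; the URL comes from the log, while the helper itself is illustrative:

    package main

    import (
    	"crypto/sha256"
    	"encoding/hex"
    	"fmt"
    	"io"
    	"net/http"
    	"strings"
    )

    func get(url string) ([]byte, error) {
    	resp, err := http.Get(url)
    	if err != nil {
    		return nil, err
    	}
    	defer resp.Body.Close()
    	if resp.StatusCode != http.StatusOK {
    		return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
    	}
    	return io.ReadAll(resp.Body)
    }

    // fetchVerified downloads url and checks it against the hex digest published
    // at url+".sha256", the checksum convention referenced in the log above.
    func fetchVerified(url string) ([]byte, error) {
    	body, err := get(url)
    	if err != nil {
    		return nil, err
    	}
    	published, err := get(url + ".sha256")
    	if err != nil {
    		return nil, err
    	}
    	fields := strings.Fields(string(published))
    	sum := sha256.Sum256(body)
    	if len(fields) == 0 || hex.EncodeToString(sum[:]) != fields[0] {
    		return nil, fmt.Errorf("checksum mismatch for %s", url)
    	}
    	return body, nil
    }

    func main() {
    	bin, err := fetchVerified("https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl")
    	fmt.Println(len(bin), err)
    }
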
	I1104 10:54:43.556894   37715 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1104 10:54:43.566649   37715 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1104 10:54:43.583297   37715 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1104 10:54:43.599783   37715 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1104 10:54:43.615935   37715 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1104 10:54:43.619736   37715 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1104 10:54:43.632102   37715 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1104 10:54:43.769468   37715 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1104 10:54:43.787176   37715 host.go:66] Checking if "ha-931571" exists ...
	I1104 10:54:43.787522   37715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 10:54:43.787559   37715 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 10:54:43.803438   37715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37129
	I1104 10:54:43.803811   37715 main.go:141] libmachine: () Calling .GetVersion
	I1104 10:54:43.804247   37715 main.go:141] libmachine: Using API Version  1
	I1104 10:54:43.804266   37715 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 10:54:43.804582   37715 main.go:141] libmachine: () Calling .GetMachineName
	I1104 10:54:43.804752   37715 main.go:141] libmachine: (ha-931571) Calling .DriverName
	I1104 10:54:43.804873   37715 start.go:317] joinCluster: &{Name:ha-931571 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-931571 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.67 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.245 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.57 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1104 10:54:43.805017   37715 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1104 10:54:43.805035   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHHostname
	I1104 10:54:43.808407   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:54:43.808840   37715 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 10:54:43.808868   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:54:43.808996   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHPort
	I1104 10:54:43.809168   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 10:54:43.809326   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHUsername
	I1104 10:54:43.809457   37715 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571/id_rsa Username:docker}
	I1104 10:54:43.953404   37715 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.57 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1104 10:54:43.953450   37715 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token cjywwd.x031qjjoquz98pue --discovery-token-ca-cert-hash sha256:95833ff41306ac800b975e28f2da3aedc13f83db6865dfb0ce3f10d781d75fd9 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-931571-m03 --control-plane --apiserver-advertise-address=192.168.39.57 --apiserver-bind-port=8443"
	I1104 10:55:05.442467   37715 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token cjywwd.x031qjjoquz98pue --discovery-token-ca-cert-hash sha256:95833ff41306ac800b975e28f2da3aedc13f83db6865dfb0ce3f10d781d75fd9 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-931571-m03 --control-plane --apiserver-advertise-address=192.168.39.57 --apiserver-bind-port=8443": (21.488974658s)
	I1104 10:55:05.442503   37715 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1104 10:55:05.990844   37715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-931571-m03 minikube.k8s.io/updated_at=2024_11_04T10_55_05_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=c03dd974d73f8853a1a57928c124797a5ae24dc4 minikube.k8s.io/name=ha-931571 minikube.k8s.io/primary=false
	I1104 10:55:06.139537   37715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-931571-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I1104 10:55:06.285616   37715 start.go:319] duration metric: took 22.480737326s to joinCluster
	I1104 10:55:06.285694   37715 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.57 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1104 10:55:06.286003   37715 config.go:182] Loaded profile config "ha-931571": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1104 10:55:06.288554   37715 out.go:177] * Verifying Kubernetes components...
	I1104 10:55:06.289975   37715 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1104 10:55:06.546650   37715 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1104 10:55:06.605631   37715 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19906-19898/kubeconfig
	I1104 10:55:06.605981   37715 kapi.go:59] client config for ha-931571: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/client.crt", KeyFile:"/home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/client.key", CAFile:"/home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2437da0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1104 10:55:06.606063   37715 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.67:8443
	I1104 10:55:06.606329   37715 node_ready.go:35] waiting up to 6m0s for node "ha-931571-m03" to be "Ready" ...
	I1104 10:55:06.606418   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:06.606434   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:06.606445   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:06.606456   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:06.609914   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:07.107514   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:07.107534   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:07.107542   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:07.107546   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:07.111083   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:07.606560   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:07.606587   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:07.606600   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:07.606605   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:07.613411   37715 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1104 10:55:08.107538   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:08.107560   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:08.107567   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:08.107570   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:08.110694   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:08.606539   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:08.606559   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:08.606567   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:08.606571   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:08.609675   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:08.610356   37715 node_ready.go:53] node "ha-931571-m03" has status "Ready":"False"
	I1104 10:55:09.106606   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:09.106630   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:09.106639   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:09.106644   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:09.109657   37715 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1104 10:55:09.607102   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:09.607123   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:09.607131   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:09.607135   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:09.610601   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:10.106839   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:10.106861   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:10.106872   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:10.106887   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:10.110421   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:10.607151   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:10.607178   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:10.607190   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:10.607195   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:10.610313   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:10.611052   37715 node_ready.go:53] node "ha-931571-m03" has status "Ready":"False"
	I1104 10:55:11.107465   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:11.107489   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:11.107500   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:11.107505   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:11.134933   37715 round_trippers.go:574] Response Status: 200 OK in 27 milliseconds
	I1104 10:55:11.607114   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:11.607137   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:11.607145   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:11.607149   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:11.610404   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:12.107512   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:12.107532   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:12.107542   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:12.107546   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:12.110694   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:12.606667   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:12.606689   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:12.606701   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:12.606705   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:12.609952   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:13.106734   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:13.106769   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:13.106780   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:13.106786   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:13.110063   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:13.110550   37715 node_ready.go:53] node "ha-931571-m03" has status "Ready":"False"
	I1104 10:55:13.607192   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:13.607222   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:13.607237   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:13.607241   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:13.610250   37715 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1104 10:55:14.106526   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:14.106548   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:14.106556   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:14.106560   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:14.110076   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:14.606584   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:14.606604   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:14.606612   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:14.606622   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:14.609643   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:15.106797   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:15.106819   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:15.106826   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:15.106830   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:15.110526   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:15.111303   37715 node_ready.go:53] node "ha-931571-m03" has status "Ready":"False"
	I1104 10:55:15.606581   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:15.606631   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:15.606643   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:15.606648   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:15.609879   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:16.107000   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:16.107025   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:16.107036   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:16.107042   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:16.110279   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:16.607359   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:16.607381   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:16.607391   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:16.607398   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:16.610655   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:17.106684   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:17.106706   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:17.106716   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:17.106722   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:17.109976   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:17.607162   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:17.607182   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:17.607190   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:17.607194   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:17.610739   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:17.611443   37715 node_ready.go:53] node "ha-931571-m03" has status "Ready":"False"
	I1104 10:55:18.106827   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:18.106850   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:18.106858   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:18.106862   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:18.110271   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:18.607389   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:18.607411   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:18.607419   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:18.607422   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:18.612587   37715 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1104 10:55:19.106763   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:19.106784   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:19.106791   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:19.106795   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:19.110156   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:19.607506   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:19.607532   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:19.607540   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:19.607545   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:19.611651   37715 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1104 10:55:19.612446   37715 node_ready.go:53] node "ha-931571-m03" has status "Ready":"False"
	I1104 10:55:20.107336   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:20.107356   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:20.107364   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:20.107368   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:20.110541   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:20.607455   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:20.607477   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:20.607485   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:20.607488   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:20.610742   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:21.106794   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:21.106815   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:21.106823   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:21.106827   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:21.109773   37715 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1104 10:55:21.607002   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:21.607022   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:21.607030   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:21.607033   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:21.609863   37715 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1104 10:55:22.106940   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:22.106962   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:22.106970   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:22.106981   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:22.110219   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:22.110873   37715 node_ready.go:53] node "ha-931571-m03" has status "Ready":"False"
	I1104 10:55:22.607233   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:22.607256   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:22.607267   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:22.607272   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:22.610320   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:23.107234   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:23.107261   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:23.107272   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:23.107278   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:23.110559   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:23.607522   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:23.607544   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:23.607552   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:23.607557   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:23.610843   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:23.611437   37715 node_ready.go:49] node "ha-931571-m03" has status "Ready":"True"
	I1104 10:55:23.611454   37715 node_ready.go:38] duration metric: took 17.005106707s for node "ha-931571-m03" to be "Ready" ...
	I1104 10:55:23.611469   37715 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1104 10:55:23.611529   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods
	I1104 10:55:23.611538   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:23.611545   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:23.611550   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:23.616487   37715 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1104 10:55:23.623329   37715 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-5ss4v" in "kube-system" namespace to be "Ready" ...
	I1104 10:55:23.623422   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ss4v
	I1104 10:55:23.623428   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:23.623436   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:23.623440   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:23.626812   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:23.627478   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571
	I1104 10:55:23.627500   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:23.627509   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:23.627513   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:23.630024   37715 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1104 10:55:23.630705   37715 pod_ready.go:93] pod "coredns-7c65d6cfc9-5ss4v" in "kube-system" namespace has status "Ready":"True"
	I1104 10:55:23.630725   37715 pod_ready.go:82] duration metric: took 7.365313ms for pod "coredns-7c65d6cfc9-5ss4v" in "kube-system" namespace to be "Ready" ...
	I1104 10:55:23.630737   37715 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-s9wb4" in "kube-system" namespace to be "Ready" ...
	I1104 10:55:23.630804   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s9wb4
	I1104 10:55:23.630815   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:23.630826   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:23.630835   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:23.633089   37715 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1104 10:55:23.633668   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571
	I1104 10:55:23.633688   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:23.633703   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:23.633714   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:23.635922   37715 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1104 10:55:23.636490   37715 pod_ready.go:93] pod "coredns-7c65d6cfc9-s9wb4" in "kube-system" namespace has status "Ready":"True"
	I1104 10:55:23.636510   37715 pod_ready.go:82] duration metric: took 5.760939ms for pod "coredns-7c65d6cfc9-s9wb4" in "kube-system" namespace to be "Ready" ...
	I1104 10:55:23.636522   37715 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-931571" in "kube-system" namespace to be "Ready" ...
	I1104 10:55:23.636583   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/etcd-ha-931571
	I1104 10:55:23.636592   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:23.636602   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:23.636610   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:23.639359   37715 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1104 10:55:23.639900   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571
	I1104 10:55:23.639915   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:23.639922   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:23.639925   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:23.642474   37715 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1104 10:55:23.642946   37715 pod_ready.go:93] pod "etcd-ha-931571" in "kube-system" namespace has status "Ready":"True"
	I1104 10:55:23.642963   37715 pod_ready.go:82] duration metric: took 6.432226ms for pod "etcd-ha-931571" in "kube-system" namespace to be "Ready" ...
	I1104 10:55:23.642971   37715 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-931571-m02" in "kube-system" namespace to be "Ready" ...
	I1104 10:55:23.643028   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/etcd-ha-931571-m02
	I1104 10:55:23.643036   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:23.643043   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:23.643047   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:23.645331   37715 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1104 10:55:23.646060   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:55:23.646073   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:23.646080   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:23.646084   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:23.648315   37715 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1104 10:55:23.648847   37715 pod_ready.go:93] pod "etcd-ha-931571-m02" in "kube-system" namespace has status "Ready":"True"
	I1104 10:55:23.648862   37715 pod_ready.go:82] duration metric: took 5.88444ms for pod "etcd-ha-931571-m02" in "kube-system" namespace to be "Ready" ...
	I1104 10:55:23.648869   37715 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-931571-m03" in "kube-system" namespace to be "Ready" ...
	I1104 10:55:23.808246   37715 request.go:632] Waited for 159.312664ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/etcd-ha-931571-m03
	I1104 10:55:23.808304   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/etcd-ha-931571-m03
	I1104 10:55:23.808309   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:23.808316   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:23.808320   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:23.811540   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:24.007952   37715 request.go:632] Waited for 195.768208ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:24.008033   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:24.008045   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:24.008056   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:24.008066   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:24.011083   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:24.011703   37715 pod_ready.go:93] pod "etcd-ha-931571-m03" in "kube-system" namespace has status "Ready":"True"
	I1104 10:55:24.011724   37715 pod_ready.go:82] duration metric: took 362.848542ms for pod "etcd-ha-931571-m03" in "kube-system" namespace to be "Ready" ...
	I1104 10:55:24.011739   37715 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-931571" in "kube-system" namespace to be "Ready" ...
	I1104 10:55:24.207843   37715 request.go:632] Waited for 196.043868ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-931571
	I1104 10:55:24.207918   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-931571
	I1104 10:55:24.207925   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:24.207937   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:24.207947   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:24.211127   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:24.408352   37715 request.go:632] Waited for 196.308065ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/nodes/ha-931571
	I1104 10:55:24.408442   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571
	I1104 10:55:24.408450   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:24.408460   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:24.408469   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:24.411644   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:24.412279   37715 pod_ready.go:93] pod "kube-apiserver-ha-931571" in "kube-system" namespace has status "Ready":"True"
	I1104 10:55:24.412297   37715 pod_ready.go:82] duration metric: took 400.550124ms for pod "kube-apiserver-ha-931571" in "kube-system" namespace to be "Ready" ...
	I1104 10:55:24.412310   37715 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-931571-m02" in "kube-system" namespace to be "Ready" ...
	I1104 10:55:24.608501   37715 request.go:632] Waited for 196.123497ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-931571-m02
	I1104 10:55:24.608572   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-931571-m02
	I1104 10:55:24.608580   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:24.608590   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:24.608596   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:24.612062   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:24.808253   37715 request.go:632] Waited for 195.326237ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:55:24.808332   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:55:24.808343   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:24.808352   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:24.808358   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:24.811435   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:24.811848   37715 pod_ready.go:93] pod "kube-apiserver-ha-931571-m02" in "kube-system" namespace has status "Ready":"True"
	I1104 10:55:24.811868   37715 pod_ready.go:82] duration metric: took 399.549963ms for pod "kube-apiserver-ha-931571-m02" in "kube-system" namespace to be "Ready" ...
	I1104 10:55:24.811877   37715 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-931571-m03" in "kube-system" namespace to be "Ready" ...
	I1104 10:55:25.008126   37715 request.go:632] Waited for 196.158524ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-931571-m03
	I1104 10:55:25.008216   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-931571-m03
	I1104 10:55:25.008224   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:25.008232   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:25.008237   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:25.011898   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:25.207886   37715 request.go:632] Waited for 195.224715ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:25.207967   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:25.207975   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:25.207983   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:25.207987   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:25.211174   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:25.211794   37715 pod_ready.go:93] pod "kube-apiserver-ha-931571-m03" in "kube-system" namespace has status "Ready":"True"
	I1104 10:55:25.211815   37715 pod_ready.go:82] duration metric: took 399.930178ms for pod "kube-apiserver-ha-931571-m03" in "kube-system" namespace to be "Ready" ...
	I1104 10:55:25.211828   37715 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-931571" in "kube-system" namespace to be "Ready" ...
	I1104 10:55:25.407990   37715 request.go:632] Waited for 196.084804ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-931571
	I1104 10:55:25.408049   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-931571
	I1104 10:55:25.408054   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:25.408062   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:25.408065   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:25.411212   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:25.608267   37715 request.go:632] Waited for 196.399136ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/nodes/ha-931571
	I1104 10:55:25.608341   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571
	I1104 10:55:25.608348   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:25.608358   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:25.608363   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:25.611599   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:25.612277   37715 pod_ready.go:93] pod "kube-controller-manager-ha-931571" in "kube-system" namespace has status "Ready":"True"
	I1104 10:55:25.612297   37715 pod_ready.go:82] duration metric: took 400.459599ms for pod "kube-controller-manager-ha-931571" in "kube-system" namespace to be "Ready" ...
	I1104 10:55:25.612307   37715 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-931571-m02" in "kube-system" namespace to be "Ready" ...
	I1104 10:55:25.808295   37715 request.go:632] Waited for 195.907201ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-931571-m02
	I1104 10:55:25.808358   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-931571-m02
	I1104 10:55:25.808364   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:25.808371   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:25.808379   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:25.811856   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:26.007942   37715 request.go:632] Waited for 195.386929ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:55:26.008009   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:55:26.008020   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:26.008034   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:26.008043   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:26.010794   37715 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1104 10:55:26.011251   37715 pod_ready.go:93] pod "kube-controller-manager-ha-931571-m02" in "kube-system" namespace has status "Ready":"True"
	I1104 10:55:26.011269   37715 pod_ready.go:82] duration metric: took 398.955793ms for pod "kube-controller-manager-ha-931571-m02" in "kube-system" namespace to be "Ready" ...
	I1104 10:55:26.011279   37715 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-931571-m03" in "kube-system" namespace to be "Ready" ...
	I1104 10:55:26.207834   37715 request.go:632] Waited for 196.482261ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-931571-m03
	I1104 10:55:26.207909   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-931571-m03
	I1104 10:55:26.207922   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:26.207934   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:26.207939   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:26.211083   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:26.407914   37715 request.go:632] Waited for 196.093119ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:26.407994   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:26.407999   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:26.408006   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:26.408012   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:26.411522   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:26.412011   37715 pod_ready.go:93] pod "kube-controller-manager-ha-931571-m03" in "kube-system" namespace has status "Ready":"True"
	I1104 10:55:26.412034   37715 pod_ready.go:82] duration metric: took 400.747328ms for pod "kube-controller-manager-ha-931571-m03" in "kube-system" namespace to be "Ready" ...
	I1104 10:55:26.412048   37715 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-bvk6r" in "kube-system" namespace to be "Ready" ...
	I1104 10:55:26.608324   37715 request.go:632] Waited for 196.200888ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bvk6r
	I1104 10:55:26.608407   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bvk6r
	I1104 10:55:26.608414   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:26.608430   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:26.608437   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:26.611990   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:26.808246   37715 request.go:632] Waited for 195.355588ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/nodes/ha-931571
	I1104 10:55:26.808295   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571
	I1104 10:55:26.808300   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:26.808308   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:26.808311   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:26.811118   37715 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1104 10:55:26.811682   37715 pod_ready.go:93] pod "kube-proxy-bvk6r" in "kube-system" namespace has status "Ready":"True"
	I1104 10:55:26.811705   37715 pod_ready.go:82] duration metric: took 399.648214ms for pod "kube-proxy-bvk6r" in "kube-system" namespace to be "Ready" ...
	I1104 10:55:26.811718   37715 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-ttq4z" in "kube-system" namespace to be "Ready" ...
	I1104 10:55:27.008596   37715 request.go:632] Waited for 196.775543ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ttq4z
	I1104 10:55:27.008670   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ttq4z
	I1104 10:55:27.008677   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:27.008685   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:27.008691   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:27.012209   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:27.208175   37715 request.go:632] Waited for 195.363562ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:27.208234   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:27.208240   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:27.208247   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:27.208250   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:27.211552   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:27.212061   37715 pod_ready.go:93] pod "kube-proxy-ttq4z" in "kube-system" namespace has status "Ready":"True"
	I1104 10:55:27.212084   37715 pod_ready.go:82] duration metric: took 400.357853ms for pod "kube-proxy-ttq4z" in "kube-system" namespace to be "Ready" ...
	I1104 10:55:27.212098   37715 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-wz92s" in "kube-system" namespace to be "Ready" ...
	I1104 10:55:27.408120   37715 request.go:632] Waited for 195.934645ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wz92s
	I1104 10:55:27.408175   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wz92s
	I1104 10:55:27.408180   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:27.408188   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:27.408194   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:27.411594   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:27.607502   37715 request.go:632] Waited for 195.309631ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:55:27.607589   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:55:27.607599   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:27.607611   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:27.607621   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:27.610707   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:27.611551   37715 pod_ready.go:93] pod "kube-proxy-wz92s" in "kube-system" namespace has status "Ready":"True"
	I1104 10:55:27.611571   37715 pod_ready.go:82] duration metric: took 399.465223ms for pod "kube-proxy-wz92s" in "kube-system" namespace to be "Ready" ...
	I1104 10:55:27.611584   37715 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-931571" in "kube-system" namespace to be "Ready" ...
	I1104 10:55:27.807587   37715 request.go:632] Waited for 195.935372ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-931571
	I1104 10:55:27.807677   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-931571
	I1104 10:55:27.807686   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:27.807694   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:27.807697   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:27.810852   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:28.007894   37715 request.go:632] Waited for 196.377136ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/nodes/ha-931571
	I1104 10:55:28.007943   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571
	I1104 10:55:28.007948   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:28.007955   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:28.007959   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:28.010780   37715 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1104 10:55:28.011225   37715 pod_ready.go:93] pod "kube-scheduler-ha-931571" in "kube-system" namespace has status "Ready":"True"
	I1104 10:55:28.011242   37715 pod_ready.go:82] duration metric: took 399.65101ms for pod "kube-scheduler-ha-931571" in "kube-system" namespace to be "Ready" ...
	I1104 10:55:28.011252   37715 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-931571-m02" in "kube-system" namespace to be "Ready" ...
	I1104 10:55:28.208327   37715 request.go:632] Waited for 197.007106ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-931571-m02
	I1104 10:55:28.208398   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-931571-m02
	I1104 10:55:28.208406   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:28.208412   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:28.208417   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:28.211868   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:28.407823   37715 request.go:632] Waited for 195.386338ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:55:28.407915   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:55:28.407922   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:28.407929   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:28.407936   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:28.411100   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:28.411750   37715 pod_ready.go:93] pod "kube-scheduler-ha-931571-m02" in "kube-system" namespace has status "Ready":"True"
	I1104 10:55:28.411766   37715 pod_ready.go:82] duration metric: took 400.505326ms for pod "kube-scheduler-ha-931571-m02" in "kube-system" namespace to be "Ready" ...
	I1104 10:55:28.411776   37715 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-931571-m03" in "kube-system" namespace to be "Ready" ...
	I1104 10:55:28.607873   37715 request.go:632] Waited for 196.030747ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-931571-m03
	I1104 10:55:28.607978   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-931571-m03
	I1104 10:55:28.607989   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:28.607996   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:28.607999   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:28.611695   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:28.807696   37715 request.go:632] Waited for 195.284295ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:28.807770   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:28.807776   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:28.807783   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:28.807788   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:28.811278   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:28.812008   37715 pod_ready.go:93] pod "kube-scheduler-ha-931571-m03" in "kube-system" namespace has status "Ready":"True"
	I1104 10:55:28.812025   37715 pod_ready.go:82] duration metric: took 400.242831ms for pod "kube-scheduler-ha-931571-m03" in "kube-system" namespace to be "Ready" ...
	I1104 10:55:28.812037   37715 pod_ready.go:39] duration metric: took 5.200555034s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
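The wait loop logged above issues a GET against /api/v1/nodes/ha-931571-m03 roughly every 500ms until the node reports Ready, then walks each system-critical pod in kube-system the same way. A minimal client-go sketch of that polling pattern (a hypothetical standalone program, not minikube's node_ready/pod_ready code; the kubeconfig path, node name, and timings are assumptions taken from the timestamps above):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeIsReady reports whether the node's NodeReady condition is True.
func nodeIsReady(node *corev1.Node) bool {
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Assumed kubeconfig location; minikube uses its own generated config instead.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// Poll every 500ms, as the request timestamps above suggest, for up to 6 minutes.
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := client.CoreV1().Nodes().Get(ctx, "ha-931571-m03", metav1.GetOptions{})
			if err != nil {
				return false, nil // transient API errors: keep polling
			}
			return nodeIsReady(node), nil
		})
	if err != nil {
		panic(err)
	}
	fmt.Println("node ha-931571-m03 is Ready")
}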
	I1104 10:55:28.812050   37715 api_server.go:52] waiting for apiserver process to appear ...
	I1104 10:55:28.812101   37715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 10:55:28.825529   37715 api_server.go:72] duration metric: took 22.539799278s to wait for apiserver process to appear ...
	I1104 10:55:28.825558   37715 api_server.go:88] waiting for apiserver healthz status ...
	I1104 10:55:28.825578   37715 api_server.go:253] Checking apiserver healthz at https://192.168.39.67:8443/healthz ...
	I1104 10:55:28.829724   37715 api_server.go:279] https://192.168.39.67:8443/healthz returned 200:
	ok
	I1104 10:55:28.829787   37715 round_trippers.go:463] GET https://192.168.39.67:8443/version
	I1104 10:55:28.829795   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:28.829803   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:28.829807   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:28.830888   37715 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1104 10:55:28.830964   37715 api_server.go:141] control plane version: v1.31.2
	I1104 10:55:28.830984   37715 api_server.go:131] duration metric: took 5.41894ms to wait for apiserver health ...
	I1104 10:55:28.830996   37715 system_pods.go:43] waiting for kube-system pods to appear ...
	I1104 10:55:29.008134   37715 request.go:632] Waited for 177.060621ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods
	I1104 10:55:29.008207   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods
	I1104 10:55:29.008237   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:29.008252   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:29.008298   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:29.014200   37715 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1104 10:55:29.021556   37715 system_pods.go:59] 24 kube-system pods found
	I1104 10:55:29.021592   37715 system_pods.go:61] "coredns-7c65d6cfc9-5ss4v" [b1994bcf-ce9e-4a5e-90e0-5f3e284218f4] Running
	I1104 10:55:29.021600   37715 system_pods.go:61] "coredns-7c65d6cfc9-s9wb4" [fd497087-82a1-4173-a1ca-87f47225cd80] Running
	I1104 10:55:29.021611   37715 system_pods.go:61] "etcd-ha-931571" [fdadf64d-457c-4f54-8824-770c47938a4d] Running
	I1104 10:55:29.021616   37715 system_pods.go:61] "etcd-ha-931571-m02" [b40b2a26-19b6-47f9-af25-dcbffbe55156] Running
	I1104 10:55:29.021627   37715 system_pods.go:61] "etcd-ha-931571-m03" [8bda5677-cbd9-4c5c-9a71-4d7d4ca3796b] Running
	I1104 10:55:29.021633   37715 system_pods.go:61] "kindnet-2n2ws" [f43095ed-404a-4c99-a271-a8c7fb6a3559] Running
	I1104 10:55:29.021643   37715 system_pods.go:61] "kindnet-bg4z6" [43eed78a-1357-4607-bff5-a1c896da4af2] Running
	I1104 10:55:29.021649   37715 system_pods.go:61] "kindnet-w2jwt" [be594a41-9200-4e2b-a8df-057c381bc0f7] Running
	I1104 10:55:29.021653   37715 system_pods.go:61] "kube-apiserver-ha-931571" [2ba59318-d54d-4948-8133-2ff2afa001e5] Running
	I1104 10:55:29.021658   37715 system_pods.go:61] "kube-apiserver-ha-931571-m02" [6a6bfd7d-cec1-4e07-90bf-c933f871eef1] Running
	I1104 10:55:29.021673   37715 system_pods.go:61] "kube-apiserver-ha-931571-m03" [cc3a9082-873f-4426-98a3-5fcafd0ecc49] Running
	I1104 10:55:29.021679   37715 system_pods.go:61] "kube-controller-manager-ha-931571" [62d03af1-aa91-4ebf-af21-19f760956cf5] Running
	I1104 10:55:29.021684   37715 system_pods.go:61] "kube-controller-manager-ha-931571-m02" [96d65b2a-66c8-411a-bb4b-5ff222b7832d] Running
	I1104 10:55:29.021689   37715 system_pods.go:61] "kube-controller-manager-ha-931571-m03" [a52ddcf8-6212-4701-823d-5d88f1291d38] Running
	I1104 10:55:29.021694   37715 system_pods.go:61] "kube-proxy-bvk6r" [5f293726-a3a3-4398-9b70-ca8f83c66d7c] Running
	I1104 10:55:29.021703   37715 system_pods.go:61] "kube-proxy-ttq4z" [115ca0e9-7fd8-4cbc-8f2a-ec4edfea2b2b] Running
	I1104 10:55:29.021708   37715 system_pods.go:61] "kube-proxy-wz92s" [a2e065c2-9645-44e4-b4e8-dc787b0c6662] Running
	I1104 10:55:29.021714   37715 system_pods.go:61] "kube-scheduler-ha-931571" [8bc3d9c3-2b41-4f54-a511-34939218fa5b] Running
	I1104 10:55:29.021718   37715 system_pods.go:61] "kube-scheduler-ha-931571-m02" [4329adba-71fa-425a-b379-6e52af90b458] Running
	I1104 10:55:29.021723   37715 system_pods.go:61] "kube-scheduler-ha-931571-m03" [db854b86-c89b-43a8-b3c4-e1cca5033fca] Running
	I1104 10:55:29.021739   37715 system_pods.go:61] "kube-vip-ha-931571" [f9948426-2770-47cf-b610-ecfea5b17be9] Running / Ready:ContainersNotReady (containers with unready status: [kube-vip]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-vip])
	I1104 10:55:29.021748   37715 system_pods.go:61] "kube-vip-ha-931571-m02" [860a8a9e-b839-4c23-80b5-415a62fca083] Running / Ready:ContainersNotReady (containers with unready status: [kube-vip]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-vip])
	I1104 10:55:29.021757   37715 system_pods.go:61] "kube-vip-ha-931571-m03" [cca6009a-1a2e-418c-8507-ced1c3c73333] Running / Ready:ContainersNotReady (containers with unready status: [kube-vip]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-vip])
	I1104 10:55:29.021768   37715 system_pods.go:61] "storage-provisioner" [3eb09a1d-0033-428a-a305-aa2901b20566] Running
	I1104 10:55:29.021776   37715 system_pods.go:74] duration metric: took 190.77233ms to wait for pod list to return data ...
	I1104 10:55:29.021785   37715 default_sa.go:34] waiting for default service account to be created ...
	I1104 10:55:29.207606   37715 request.go:632] Waited for 185.728415ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/default/serviceaccounts
	I1104 10:55:29.207670   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/default/serviceaccounts
	I1104 10:55:29.207676   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:29.207686   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:29.207695   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:29.218692   37715 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I1104 10:55:29.218828   37715 default_sa.go:45] found service account: "default"
	I1104 10:55:29.218847   37715 default_sa.go:55] duration metric: took 197.054864ms for default service account to be created ...
	I1104 10:55:29.218857   37715 system_pods.go:116] waiting for k8s-apps to be running ...
	I1104 10:55:29.408474   37715 request.go:632] Waited for 189.535523ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods
	I1104 10:55:29.408534   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods
	I1104 10:55:29.408539   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:29.408546   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:29.408550   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:29.414296   37715 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1104 10:55:29.422499   37715 system_pods.go:86] 24 kube-system pods found
	I1104 10:55:29.422532   37715 system_pods.go:89] "coredns-7c65d6cfc9-5ss4v" [b1994bcf-ce9e-4a5e-90e0-5f3e284218f4] Running
	I1104 10:55:29.422537   37715 system_pods.go:89] "coredns-7c65d6cfc9-s9wb4" [fd497087-82a1-4173-a1ca-87f47225cd80] Running
	I1104 10:55:29.422541   37715 system_pods.go:89] "etcd-ha-931571" [fdadf64d-457c-4f54-8824-770c47938a4d] Running
	I1104 10:55:29.422545   37715 system_pods.go:89] "etcd-ha-931571-m02" [b40b2a26-19b6-47f9-af25-dcbffbe55156] Running
	I1104 10:55:29.422549   37715 system_pods.go:89] "etcd-ha-931571-m03" [8bda5677-cbd9-4c5c-9a71-4d7d4ca3796b] Running
	I1104 10:55:29.422553   37715 system_pods.go:89] "kindnet-2n2ws" [f43095ed-404a-4c99-a271-a8c7fb6a3559] Running
	I1104 10:55:29.422557   37715 system_pods.go:89] "kindnet-bg4z6" [43eed78a-1357-4607-bff5-a1c896da4af2] Running
	I1104 10:55:29.422560   37715 system_pods.go:89] "kindnet-w2jwt" [be594a41-9200-4e2b-a8df-057c381bc0f7] Running
	I1104 10:55:29.422563   37715 system_pods.go:89] "kube-apiserver-ha-931571" [2ba59318-d54d-4948-8133-2ff2afa001e5] Running
	I1104 10:55:29.422567   37715 system_pods.go:89] "kube-apiserver-ha-931571-m02" [6a6bfd7d-cec1-4e07-90bf-c933f871eef1] Running
	I1104 10:55:29.422571   37715 system_pods.go:89] "kube-apiserver-ha-931571-m03" [cc3a9082-873f-4426-98a3-5fcafd0ecc49] Running
	I1104 10:55:29.422576   37715 system_pods.go:89] "kube-controller-manager-ha-931571" [62d03af1-aa91-4ebf-af21-19f760956cf5] Running
	I1104 10:55:29.422582   37715 system_pods.go:89] "kube-controller-manager-ha-931571-m02" [96d65b2a-66c8-411a-bb4b-5ff222b7832d] Running
	I1104 10:55:29.422588   37715 system_pods.go:89] "kube-controller-manager-ha-931571-m03" [a52ddcf8-6212-4701-823d-5d88f1291d38] Running
	I1104 10:55:29.422593   37715 system_pods.go:89] "kube-proxy-bvk6r" [5f293726-a3a3-4398-9b70-ca8f83c66d7c] Running
	I1104 10:55:29.422598   37715 system_pods.go:89] "kube-proxy-ttq4z" [115ca0e9-7fd8-4cbc-8f2a-ec4edfea2b2b] Running
	I1104 10:55:29.422604   37715 system_pods.go:89] "kube-proxy-wz92s" [a2e065c2-9645-44e4-b4e8-dc787b0c6662] Running
	I1104 10:55:29.422614   37715 system_pods.go:89] "kube-scheduler-ha-931571" [8bc3d9c3-2b41-4f54-a511-34939218fa5b] Running
	I1104 10:55:29.422621   37715 system_pods.go:89] "kube-scheduler-ha-931571-m02" [4329adba-71fa-425a-b379-6e52af90b458] Running
	I1104 10:55:29.422624   37715 system_pods.go:89] "kube-scheduler-ha-931571-m03" [db854b86-c89b-43a8-b3c4-e1cca5033fca] Running
	I1104 10:55:29.422633   37715 system_pods.go:89] "kube-vip-ha-931571" [f9948426-2770-47cf-b610-ecfea5b17be9] Running / Ready:ContainersNotReady (containers with unready status: [kube-vip]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-vip])
	I1104 10:55:29.422642   37715 system_pods.go:89] "kube-vip-ha-931571-m02" [860a8a9e-b839-4c23-80b5-415a62fca083] Running / Ready:ContainersNotReady (containers with unready status: [kube-vip]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-vip])
	I1104 10:55:29.422650   37715 system_pods.go:89] "kube-vip-ha-931571-m03" [cca6009a-1a2e-418c-8507-ced1c3c73333] Running / Ready:ContainersNotReady (containers with unready status: [kube-vip]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-vip])
	I1104 10:55:29.422656   37715 system_pods.go:89] "storage-provisioner" [3eb09a1d-0033-428a-a305-aa2901b20566] Running
	I1104 10:55:29.422665   37715 system_pods.go:126] duration metric: took 203.801845ms to wait for k8s-apps to be running ...
	I1104 10:55:29.422676   37715 system_svc.go:44] waiting for kubelet service to be running ....
	I1104 10:55:29.422727   37715 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1104 10:55:29.439259   37715 system_svc.go:56] duration metric: took 16.56809ms WaitForService to wait for kubelet
	I1104 10:55:29.439296   37715 kubeadm.go:582] duration metric: took 23.153569026s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1104 10:55:29.439318   37715 node_conditions.go:102] verifying NodePressure condition ...
	I1104 10:55:29.607660   37715 request.go:632] Waited for 168.244277ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/nodes
	I1104 10:55:29.607713   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes
	I1104 10:55:29.607718   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:29.607726   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:29.607732   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:29.611371   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:29.612755   37715 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1104 10:55:29.612781   37715 node_conditions.go:123] node cpu capacity is 2
	I1104 10:55:29.612794   37715 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1104 10:55:29.612800   37715 node_conditions.go:123] node cpu capacity is 2
	I1104 10:55:29.612807   37715 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1104 10:55:29.612811   37715 node_conditions.go:123] node cpu capacity is 2
	I1104 10:55:29.612817   37715 node_conditions.go:105] duration metric: took 173.492197ms to run NodePressure ...
	I1104 10:55:29.612832   37715 start.go:241] waiting for startup goroutines ...
	I1104 10:55:29.612860   37715 start.go:255] writing updated cluster config ...
	I1104 10:55:29.613201   37715 ssh_runner.go:195] Run: rm -f paused
	I1104 10:55:29.662232   37715 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1104 10:55:29.664453   37715 out.go:177] * Done! kubectl is now configured to use "ha-931571" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 04 10:59:14 ha-931571 crio[659]: time="2024-11-04 10:59:14.300175015Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730717954300151937,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156098,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=724a2003-36c6-4751-88a4-59f711f56243 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 04 10:59:14 ha-931571 crio[659]: time="2024-11-04 10:59:14.300754356Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ebff1a51-9dbe-4981-ac47-070dc19e0c44 name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 10:59:14 ha-931571 crio[659]: time="2024-11-04 10:59:14.300819862Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ebff1a51-9dbe-4981-ac47-070dc19e0c44 name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 10:59:14 ha-931571 crio[659]: time="2024-11-04 10:59:14.301467788Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:801830521b8c68ec780508f94b0f3d8c52a6c3d5458328e719c2ce3178c47cc3,PodSandboxId:c376c65bb2b6ba1d92a006e61c82e1ca033b12c8a5bfc737dbac753ed4190360,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:7,},Image:&ImageSpec{Image:77fa55e9c991e50f69a8af41fbbbe0cd8a6fa6fd87327b07ed933c1c02a4f488,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:77fa55e9c991e50f69a8af41fbbbe0cd8a6fa6fd87327b07ed933c1c02a4f488,State:CONTAINER_EXITED,CreatedAt:1730717933792975882,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-931571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7bfae2f58ae7de463dba4b274c633ef,},Annotations:map[string]string{io.kubernetes.container.hash: 633bdfb,io.kubernetes.container.restartCount: 7,io.kubernetes.container.terminationMessagePath: /dev/te
rmination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ecc02a44b9547818a8aaa2b603bb97e4465acb589e9938089cc84862bb537651,PodSandboxId:ca422d1f835b462e7c44e7832053f6b8843511d5eeba3ced31c8b0b6f51661ee,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1730717733201575265,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-nslmz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 68017266-8187-488d-ab36-2a5af294fa2e,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/ter
mination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:400aa38b5335627cc08143a9b2a5627b7fee85d555c2cc40a536ce98a76dc457,PodSandboxId:c6e22705ccc1865b8bc5effb151c1f9d726558ad88b6a3bcf86428c0e051f88a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730717598667544377,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-s9wb4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd497087-82a1-4173-a1ca-87f47225cd80,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerP
ort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49e75724c5eada14bded615bd1ba93f602ae8bcdf4c5ed95b646991a79dc403c,PodSandboxId:bcbca8745afa774e9251a00635a6a08e6f86c862db07fa69ac81ee2c0b157967,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730717598624298430,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-5ss4v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1994bcf-ce9e-4a5e-90e0-5f3e284218f4,},A
nnotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8efbd7a72ea51074ffa14c6c164b0072c5d57e24d1bd5b6d1a123aa8216069c,PodSandboxId:b15baa796a09ec04b514d2061ed59422516c1f7e4439ba3fcbebb73cbd3afa05,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730717598609872957,Labels:ma
p[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3eb09a1d-0033-428a-a305-aa2901b20566,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4401315f385bf2a6e5d875c655d4d61ff68d76ccf428956355ffd500a9ce2bc0,PodSandboxId:220337aaf496c29271e7e054b3cdfea66b7c252c48cb49a49e7654fb61d21a91,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5,State:CONTAINER_RUNNING,CreatedAt:173071758708362
2058,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2n2ws,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f43095ed-404a-4c99-a271-a8c7fb6a3559,},Annotations:map[string]string{io.kubernetes.container.hash: e48ee0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e592fe17c5f71c11cf25a871efff88360e0232b8ea66e460109b68f40581ba8,PodSandboxId:88e06a89dd6f22e1089e72d0e95bb740d4472413789aed6751e5201c34bce07d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730717583914338539,Labels:map[string]string{io.kub
ernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bvk6r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f293726-a3a3-4398-9b70-ca8f83c66d7c,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e50ab0290e7c2f22e6df511793236d28a6e0f3e1d5dbdcdd2e3447e4c26c2e6c,PodSandboxId:b36f0d25b985ad35c72d61e5d419af4761c0ed5584860b2c0eda0017653cfaa5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730717572302806843,Labels:map[string]string{io.kubernetes.container.name: kube-
scheduler,io.kubernetes.pod.name: kube-scheduler-ha-931571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04abf0ed929591b9a922eba9b45e06b4,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4572c8bcb28cdf71917ee1df07e150610c3e183aaa1243eb84ab3c083f31f7bc,PodSandboxId:9659e6073c7aea4a2bc7bbd2bc5081cfaf29c86595120748fa2b6d637cfd0405,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730717572280739492,Labels:map[string]string{io.kubernetes.container.name: kube-controller-m
anager,io.kubernetes.pod.name: kube-controller-manager-ha-931571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4685ec45b7a2365863fd185bc1066ff5,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82e4be064be10644428d59bf1bc4467a8666cf78ec7b830a51e614de7c4b3150,PodSandboxId:d779a632ccdcabf2a834569e1b03676bb2cb2ecac031cdb417048bfd227afd27,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730717572221533934,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.ku
bernetes.pod.name: kube-apiserver-ha-931571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 488ad91ee064d442db18849afe83c778,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2d32daf142ba5b73e2f0491ea4ad911e8456739f191faf1cc001827eb91790c,PodSandboxId:76529e2f353a6384d08c629e08edb56d628147ffb7c9b12a3b4fd7f6b94b2b61,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730717572176692911,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-931571,io.kube
rnetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bdade1472bd07799de85a7bf300c651f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ebff1a51-9dbe-4981-ac47-070dc19e0c44 name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 10:59:14 ha-931571 crio[659]: time="2024-11-04 10:59:14.337088994Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f31cd0b1-94c0-471a-be5c-b4d613f938d0 name=/runtime.v1.RuntimeService/Version
	Nov 04 10:59:14 ha-931571 crio[659]: time="2024-11-04 10:59:14.337163339Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f31cd0b1-94c0-471a-be5c-b4d613f938d0 name=/runtime.v1.RuntimeService/Version
	Nov 04 10:59:14 ha-931571 crio[659]: time="2024-11-04 10:59:14.338071808Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=044cba12-e3c5-47b4-81d5-b7ac9767619a name=/runtime.v1.ImageService/ImageFsInfo
	Nov 04 10:59:14 ha-931571 crio[659]: time="2024-11-04 10:59:14.338489797Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730717954338470892,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156098,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=044cba12-e3c5-47b4-81d5-b7ac9767619a name=/runtime.v1.ImageService/ImageFsInfo
	Nov 04 10:59:14 ha-931571 crio[659]: time="2024-11-04 10:59:14.339057594Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4c9455a0-2980-4447-b9d6-5f38d8575d1e name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 10:59:14 ha-931571 crio[659]: time="2024-11-04 10:59:14.339113049Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4c9455a0-2980-4447-b9d6-5f38d8575d1e name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 10:59:14 ha-931571 crio[659]: time="2024-11-04 10:59:14.339338906Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:801830521b8c68ec780508f94b0f3d8c52a6c3d5458328e719c2ce3178c47cc3,PodSandboxId:c376c65bb2b6ba1d92a006e61c82e1ca033b12c8a5bfc737dbac753ed4190360,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:7,},Image:&ImageSpec{Image:77fa55e9c991e50f69a8af41fbbbe0cd8a6fa6fd87327b07ed933c1c02a4f488,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:77fa55e9c991e50f69a8af41fbbbe0cd8a6fa6fd87327b07ed933c1c02a4f488,State:CONTAINER_EXITED,CreatedAt:1730717933792975882,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-931571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7bfae2f58ae7de463dba4b274c633ef,},Annotations:map[string]string{io.kubernetes.container.hash: 633bdfb,io.kubernetes.container.restartCount: 7,io.kubernetes.container.terminationMessagePath: /dev/te
rmination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ecc02a44b9547818a8aaa2b603bb97e4465acb589e9938089cc84862bb537651,PodSandboxId:ca422d1f835b462e7c44e7832053f6b8843511d5eeba3ced31c8b0b6f51661ee,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1730717733201575265,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-nslmz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 68017266-8187-488d-ab36-2a5af294fa2e,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/ter
mination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:400aa38b5335627cc08143a9b2a5627b7fee85d555c2cc40a536ce98a76dc457,PodSandboxId:c6e22705ccc1865b8bc5effb151c1f9d726558ad88b6a3bcf86428c0e051f88a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730717598667544377,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-s9wb4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd497087-82a1-4173-a1ca-87f47225cd80,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerP
ort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49e75724c5eada14bded615bd1ba93f602ae8bcdf4c5ed95b646991a79dc403c,PodSandboxId:bcbca8745afa774e9251a00635a6a08e6f86c862db07fa69ac81ee2c0b157967,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730717598624298430,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-5ss4v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1994bcf-ce9e-4a5e-90e0-5f3e284218f4,},A
nnotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8efbd7a72ea51074ffa14c6c164b0072c5d57e24d1bd5b6d1a123aa8216069c,PodSandboxId:b15baa796a09ec04b514d2061ed59422516c1f7e4439ba3fcbebb73cbd3afa05,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730717598609872957,Labels:ma
p[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3eb09a1d-0033-428a-a305-aa2901b20566,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4401315f385bf2a6e5d875c655d4d61ff68d76ccf428956355ffd500a9ce2bc0,PodSandboxId:220337aaf496c29271e7e054b3cdfea66b7c252c48cb49a49e7654fb61d21a91,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5,State:CONTAINER_RUNNING,CreatedAt:173071758708362
2058,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2n2ws,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f43095ed-404a-4c99-a271-a8c7fb6a3559,},Annotations:map[string]string{io.kubernetes.container.hash: e48ee0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e592fe17c5f71c11cf25a871efff88360e0232b8ea66e460109b68f40581ba8,PodSandboxId:88e06a89dd6f22e1089e72d0e95bb740d4472413789aed6751e5201c34bce07d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730717583914338539,Labels:map[string]string{io.kub
ernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bvk6r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f293726-a3a3-4398-9b70-ca8f83c66d7c,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e50ab0290e7c2f22e6df511793236d28a6e0f3e1d5dbdcdd2e3447e4c26c2e6c,PodSandboxId:b36f0d25b985ad35c72d61e5d419af4761c0ed5584860b2c0eda0017653cfaa5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730717572302806843,Labels:map[string]string{io.kubernetes.container.name: kube-
scheduler,io.kubernetes.pod.name: kube-scheduler-ha-931571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04abf0ed929591b9a922eba9b45e06b4,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4572c8bcb28cdf71917ee1df07e150610c3e183aaa1243eb84ab3c083f31f7bc,PodSandboxId:9659e6073c7aea4a2bc7bbd2bc5081cfaf29c86595120748fa2b6d637cfd0405,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730717572280739492,Labels:map[string]string{io.kubernetes.container.name: kube-controller-m
anager,io.kubernetes.pod.name: kube-controller-manager-ha-931571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4685ec45b7a2365863fd185bc1066ff5,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82e4be064be10644428d59bf1bc4467a8666cf78ec7b830a51e614de7c4b3150,PodSandboxId:d779a632ccdcabf2a834569e1b03676bb2cb2ecac031cdb417048bfd227afd27,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730717572221533934,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.ku
bernetes.pod.name: kube-apiserver-ha-931571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 488ad91ee064d442db18849afe83c778,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2d32daf142ba5b73e2f0491ea4ad911e8456739f191faf1cc001827eb91790c,PodSandboxId:76529e2f353a6384d08c629e08edb56d628147ffb7c9b12a3b4fd7f6b94b2b61,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730717572176692911,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-931571,io.kube
rnetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bdade1472bd07799de85a7bf300c651f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4c9455a0-2980-4447-b9d6-5f38d8575d1e name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 10:59:14 ha-931571 crio[659]: time="2024-11-04 10:59:14.374814319Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=664d25b3-28a7-4a32-8a8f-edf88aed9576 name=/runtime.v1.RuntimeService/Version
	Nov 04 10:59:14 ha-931571 crio[659]: time="2024-11-04 10:59:14.374909062Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=664d25b3-28a7-4a32-8a8f-edf88aed9576 name=/runtime.v1.RuntimeService/Version
	Nov 04 10:59:14 ha-931571 crio[659]: time="2024-11-04 10:59:14.376271576Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6ab503f1-bd6a-4182-a747-91946add1c88 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 04 10:59:14 ha-931571 crio[659]: time="2024-11-04 10:59:14.376763479Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730717954376738859,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156098,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6ab503f1-bd6a-4182-a747-91946add1c88 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 04 10:59:14 ha-931571 crio[659]: time="2024-11-04 10:59:14.377367413Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e4215ece-52ad-4403-96b0-91c4b07610d6 name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 10:59:14 ha-931571 crio[659]: time="2024-11-04 10:59:14.377440842Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e4215ece-52ad-4403-96b0-91c4b07610d6 name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 10:59:14 ha-931571 crio[659]: time="2024-11-04 10:59:14.377701812Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:801830521b8c68ec780508f94b0f3d8c52a6c3d5458328e719c2ce3178c47cc3,PodSandboxId:c376c65bb2b6ba1d92a006e61c82e1ca033b12c8a5bfc737dbac753ed4190360,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:7,},Image:&ImageSpec{Image:77fa55e9c991e50f69a8af41fbbbe0cd8a6fa6fd87327b07ed933c1c02a4f488,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:77fa55e9c991e50f69a8af41fbbbe0cd8a6fa6fd87327b07ed933c1c02a4f488,State:CONTAINER_EXITED,CreatedAt:1730717933792975882,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-931571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7bfae2f58ae7de463dba4b274c633ef,},Annotations:map[string]string{io.kubernetes.container.hash: 633bdfb,io.kubernetes.container.restartCount: 7,io.kubernetes.container.terminationMessagePath: /dev/te
rmination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ecc02a44b9547818a8aaa2b603bb97e4465acb589e9938089cc84862bb537651,PodSandboxId:ca422d1f835b462e7c44e7832053f6b8843511d5eeba3ced31c8b0b6f51661ee,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1730717733201575265,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-nslmz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 68017266-8187-488d-ab36-2a5af294fa2e,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/ter
mination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:400aa38b5335627cc08143a9b2a5627b7fee85d555c2cc40a536ce98a76dc457,PodSandboxId:c6e22705ccc1865b8bc5effb151c1f9d726558ad88b6a3bcf86428c0e051f88a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730717598667544377,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-s9wb4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd497087-82a1-4173-a1ca-87f47225cd80,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerP
ort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49e75724c5eada14bded615bd1ba93f602ae8bcdf4c5ed95b646991a79dc403c,PodSandboxId:bcbca8745afa774e9251a00635a6a08e6f86c862db07fa69ac81ee2c0b157967,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730717598624298430,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-5ss4v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1994bcf-ce9e-4a5e-90e0-5f3e284218f4,},A
nnotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8efbd7a72ea51074ffa14c6c164b0072c5d57e24d1bd5b6d1a123aa8216069c,PodSandboxId:b15baa796a09ec04b514d2061ed59422516c1f7e4439ba3fcbebb73cbd3afa05,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730717598609872957,Labels:ma
p[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3eb09a1d-0033-428a-a305-aa2901b20566,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4401315f385bf2a6e5d875c655d4d61ff68d76ccf428956355ffd500a9ce2bc0,PodSandboxId:220337aaf496c29271e7e054b3cdfea66b7c252c48cb49a49e7654fb61d21a91,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5,State:CONTAINER_RUNNING,CreatedAt:173071758708362
2058,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2n2ws,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f43095ed-404a-4c99-a271-a8c7fb6a3559,},Annotations:map[string]string{io.kubernetes.container.hash: e48ee0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e592fe17c5f71c11cf25a871efff88360e0232b8ea66e460109b68f40581ba8,PodSandboxId:88e06a89dd6f22e1089e72d0e95bb740d4472413789aed6751e5201c34bce07d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730717583914338539,Labels:map[string]string{io.kub
ernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bvk6r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f293726-a3a3-4398-9b70-ca8f83c66d7c,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e50ab0290e7c2f22e6df511793236d28a6e0f3e1d5dbdcdd2e3447e4c26c2e6c,PodSandboxId:b36f0d25b985ad35c72d61e5d419af4761c0ed5584860b2c0eda0017653cfaa5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730717572302806843,Labels:map[string]string{io.kubernetes.container.name: kube-
scheduler,io.kubernetes.pod.name: kube-scheduler-ha-931571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04abf0ed929591b9a922eba9b45e06b4,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4572c8bcb28cdf71917ee1df07e150610c3e183aaa1243eb84ab3c083f31f7bc,PodSandboxId:9659e6073c7aea4a2bc7bbd2bc5081cfaf29c86595120748fa2b6d637cfd0405,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730717572280739492,Labels:map[string]string{io.kubernetes.container.name: kube-controller-m
anager,io.kubernetes.pod.name: kube-controller-manager-ha-931571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4685ec45b7a2365863fd185bc1066ff5,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82e4be064be10644428d59bf1bc4467a8666cf78ec7b830a51e614de7c4b3150,PodSandboxId:d779a632ccdcabf2a834569e1b03676bb2cb2ecac031cdb417048bfd227afd27,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730717572221533934,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.ku
bernetes.pod.name: kube-apiserver-ha-931571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 488ad91ee064d442db18849afe83c778,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2d32daf142ba5b73e2f0491ea4ad911e8456739f191faf1cc001827eb91790c,PodSandboxId:76529e2f353a6384d08c629e08edb56d628147ffb7c9b12a3b4fd7f6b94b2b61,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730717572176692911,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-931571,io.kube
rnetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bdade1472bd07799de85a7bf300c651f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e4215ece-52ad-4403-96b0-91c4b07610d6 name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 10:59:14 ha-931571 crio[659]: time="2024-11-04 10:59:14.414215749Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=72107638-e068-4c53-b5f3-db6ac9748d52 name=/runtime.v1.RuntimeService/Version
	Nov 04 10:59:14 ha-931571 crio[659]: time="2024-11-04 10:59:14.414307452Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=72107638-e068-4c53-b5f3-db6ac9748d52 name=/runtime.v1.RuntimeService/Version
	Nov 04 10:59:14 ha-931571 crio[659]: time="2024-11-04 10:59:14.415316756Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4b904739-6cb6-4a99-bf71-34d87b226004 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 04 10:59:14 ha-931571 crio[659]: time="2024-11-04 10:59:14.415865305Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730717954415841488,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156098,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4b904739-6cb6-4a99-bf71-34d87b226004 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 04 10:59:14 ha-931571 crio[659]: time="2024-11-04 10:59:14.416376152Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9c0b9f89-994c-4745-95d7-34a542c7c978 name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 10:59:14 ha-931571 crio[659]: time="2024-11-04 10:59:14.416434841Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9c0b9f89-994c-4745-95d7-34a542c7c978 name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 10:59:14 ha-931571 crio[659]: time="2024-11-04 10:59:14.416647582Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:801830521b8c68ec780508f94b0f3d8c52a6c3d5458328e719c2ce3178c47cc3,PodSandboxId:c376c65bb2b6ba1d92a006e61c82e1ca033b12c8a5bfc737dbac753ed4190360,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:7,},Image:&ImageSpec{Image:77fa55e9c991e50f69a8af41fbbbe0cd8a6fa6fd87327b07ed933c1c02a4f488,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:77fa55e9c991e50f69a8af41fbbbe0cd8a6fa6fd87327b07ed933c1c02a4f488,State:CONTAINER_EXITED,CreatedAt:1730717933792975882,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-931571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7bfae2f58ae7de463dba4b274c633ef,},Annotations:map[string]string{io.kubernetes.container.hash: 633bdfb,io.kubernetes.container.restartCount: 7,io.kubernetes.container.terminationMessagePath: /dev/te
rmination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ecc02a44b9547818a8aaa2b603bb97e4465acb589e9938089cc84862bb537651,PodSandboxId:ca422d1f835b462e7c44e7832053f6b8843511d5eeba3ced31c8b0b6f51661ee,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1730717733201575265,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-nslmz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 68017266-8187-488d-ab36-2a5af294fa2e,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/ter
mination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:400aa38b5335627cc08143a9b2a5627b7fee85d555c2cc40a536ce98a76dc457,PodSandboxId:c6e22705ccc1865b8bc5effb151c1f9d726558ad88b6a3bcf86428c0e051f88a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730717598667544377,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-s9wb4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd497087-82a1-4173-a1ca-87f47225cd80,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerP
ort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49e75724c5eada14bded615bd1ba93f602ae8bcdf4c5ed95b646991a79dc403c,PodSandboxId:bcbca8745afa774e9251a00635a6a08e6f86c862db07fa69ac81ee2c0b157967,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730717598624298430,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-5ss4v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1994bcf-ce9e-4a5e-90e0-5f3e284218f4,},A
nnotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8efbd7a72ea51074ffa14c6c164b0072c5d57e24d1bd5b6d1a123aa8216069c,PodSandboxId:b15baa796a09ec04b514d2061ed59422516c1f7e4439ba3fcbebb73cbd3afa05,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730717598609872957,Labels:ma
p[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3eb09a1d-0033-428a-a305-aa2901b20566,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4401315f385bf2a6e5d875c655d4d61ff68d76ccf428956355ffd500a9ce2bc0,PodSandboxId:220337aaf496c29271e7e054b3cdfea66b7c252c48cb49a49e7654fb61d21a91,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5,State:CONTAINER_RUNNING,CreatedAt:173071758708362
2058,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2n2ws,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f43095ed-404a-4c99-a271-a8c7fb6a3559,},Annotations:map[string]string{io.kubernetes.container.hash: e48ee0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e592fe17c5f71c11cf25a871efff88360e0232b8ea66e460109b68f40581ba8,PodSandboxId:88e06a89dd6f22e1089e72d0e95bb740d4472413789aed6751e5201c34bce07d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730717583914338539,Labels:map[string]string{io.kub
ernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bvk6r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f293726-a3a3-4398-9b70-ca8f83c66d7c,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e50ab0290e7c2f22e6df511793236d28a6e0f3e1d5dbdcdd2e3447e4c26c2e6c,PodSandboxId:b36f0d25b985ad35c72d61e5d419af4761c0ed5584860b2c0eda0017653cfaa5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730717572302806843,Labels:map[string]string{io.kubernetes.container.name: kube-
scheduler,io.kubernetes.pod.name: kube-scheduler-ha-931571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04abf0ed929591b9a922eba9b45e06b4,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4572c8bcb28cdf71917ee1df07e150610c3e183aaa1243eb84ab3c083f31f7bc,PodSandboxId:9659e6073c7aea4a2bc7bbd2bc5081cfaf29c86595120748fa2b6d637cfd0405,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730717572280739492,Labels:map[string]string{io.kubernetes.container.name: kube-controller-m
anager,io.kubernetes.pod.name: kube-controller-manager-ha-931571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4685ec45b7a2365863fd185bc1066ff5,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82e4be064be10644428d59bf1bc4467a8666cf78ec7b830a51e614de7c4b3150,PodSandboxId:d779a632ccdcabf2a834569e1b03676bb2cb2ecac031cdb417048bfd227afd27,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730717572221533934,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.ku
bernetes.pod.name: kube-apiserver-ha-931571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 488ad91ee064d442db18849afe83c778,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2d32daf142ba5b73e2f0491ea4ad911e8456739f191faf1cc001827eb91790c,PodSandboxId:76529e2f353a6384d08c629e08edb56d628147ffb7c9b12a3b4fd7f6b94b2b61,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730717572176692911,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-931571,io.kube
rnetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bdade1472bd07799de85a7bf300c651f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9c0b9f89-994c-4745-95d7-34a542c7c978 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	801830521b8c6       77fa55e9c991e50f69a8af41fbbbe0cd8a6fa6fd87327b07ed933c1c02a4f488                                      20 seconds ago      Exited              kube-vip                  7                   c376c65bb2b6b       kube-vip-ha-931571
	ecc02a44b9547       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   ca422d1f835b4       busybox-7dff88458-nslmz
	400aa38b53356       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      5 minutes ago       Running             coredns                   0                   c6e22705ccc18       coredns-7c65d6cfc9-s9wb4
	49e75724c5ead       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      5 minutes ago       Running             coredns                   0                   bcbca8745afa7       coredns-7c65d6cfc9-5ss4v
	f8efbd7a72ea5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Running             storage-provisioner       0                   b15baa796a09e       storage-provisioner
	4401315f385bf       docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16    6 minutes ago       Running             kindnet-cni               0                   220337aaf496c       kindnet-2n2ws
	6e592fe17c5f7       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                      6 minutes ago       Running             kube-proxy                0                   88e06a89dd6f2       kube-proxy-bvk6r
	e50ab0290e7c2       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                      6 minutes ago       Running             kube-scheduler            0                   b36f0d25b985a       kube-scheduler-ha-931571
	4572c8bcb28cd       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                      6 minutes ago       Running             kube-controller-manager   0                   9659e6073c7ae       kube-controller-manager-ha-931571
	82e4be064be10       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                      6 minutes ago       Running             kube-apiserver            0                   d779a632ccdca       kube-apiserver-ha-931571
	f2d32daf142ba       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   76529e2f353a6       etcd-ha-931571
	
	
	==> coredns [400aa38b5335627cc08143a9b2a5627b7fee85d555c2cc40a536ce98a76dc457] <==
	[INFO] 10.244.0.4:50237 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000150549s
	[INFO] 10.244.0.4:46253 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001843568s
	[INFO] 10.244.0.4:55713 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000184256s
	[INFO] 10.244.0.4:40615 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001215052s
	[INFO] 10.244.0.4:48280 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000078576s
	[INFO] 10.244.0.4:54787 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000130955s
	[INFO] 10.244.1.2:58741 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002139116s
	[INFO] 10.244.1.2:37960 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000110836s
	[INFO] 10.244.1.2:58623 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000109212s
	[INFO] 10.244.1.2:51618 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00158249s
	[INFO] 10.244.1.2:43015 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000087484s
	[INFO] 10.244.1.2:39492 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000171988s
	[INFO] 10.244.2.2:48038 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000132123s
	[INFO] 10.244.0.4:35814 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000180509s
	[INFO] 10.244.0.4:60410 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000089999s
	[INFO] 10.244.0.4:47053 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000039998s
	[INFO] 10.244.1.2:58250 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000164547s
	[INFO] 10.244.1.2:52533 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000169574s
	[INFO] 10.244.2.2:44494 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000181065s
	[INFO] 10.244.2.2:58013 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00023451s
	[INFO] 10.244.2.2:52479 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000131262s
	[INFO] 10.244.0.4:40569 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000209971s
	[INFO] 10.244.0.4:39524 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000112991s
	[INFO] 10.244.0.4:47233 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000143713s
	[INFO] 10.244.1.2:40992 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000169174s
	
	
	==> coredns [49e75724c5eada14bded615bd1ba93f602ae8bcdf4c5ed95b646991a79dc403c] <==
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:48964 - 23647 "HINFO IN 8987446281611230695.8255749056578627230. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.085188681s
	[INFO] 10.244.2.2:34961 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.003596703s
	[INFO] 10.244.0.4:37004 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.00010865s
	[INFO] 10.244.0.4:53184 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001905017s
	[INFO] 10.244.1.2:58428 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000083838s
	[INFO] 10.244.1.2:60855 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001943834s
	[INFO] 10.244.2.2:42530 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000210297s
	[INFO] 10.244.2.2:45691 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000254098s
	[INFO] 10.244.2.2:54453 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000116752s
	[INFO] 10.244.0.4:49389 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000239128s
	[INFO] 10.244.0.4:50445 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000078508s
	[INFO] 10.244.1.2:33136 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000123784s
	[INFO] 10.244.1.2:60974 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000079916s
	[INFO] 10.244.2.2:49080 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000171041s
	[INFO] 10.244.2.2:43340 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000142924s
	[INFO] 10.244.2.2:43789 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000094712s
	[INFO] 10.244.0.4:32943 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000072704s
	[INFO] 10.244.1.2:50464 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000118885s
	[INFO] 10.244.1.2:36951 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000148048s
	[INFO] 10.244.2.2:50644 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000135678s
	[INFO] 10.244.0.4:38496 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0001483s
	[INFO] 10.244.1.2:59424 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000211313s
	[INFO] 10.244.1.2:33660 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000134208s
	[INFO] 10.244.1.2:34489 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000138513s
	
	
	==> describe nodes <==
	Name:               ha-931571
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-931571
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c03dd974d73f8853a1a57928c124797a5ae24dc4
	                    minikube.k8s.io/name=ha-931571
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_11_04T10_52_59_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 04 Nov 2024 10:52:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-931571
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 04 Nov 2024 10:59:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 04 Nov 2024 10:56:02 +0000   Mon, 04 Nov 2024 10:52:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 04 Nov 2024 10:56:02 +0000   Mon, 04 Nov 2024 10:52:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 04 Nov 2024 10:56:02 +0000   Mon, 04 Nov 2024 10:52:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 04 Nov 2024 10:56:02 +0000   Mon, 04 Nov 2024 10:53:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.67
	  Hostname:    ha-931571
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 5397aa0c862f4705b75b9757490651ea
	  System UUID:                5397aa0c-862f-4705-b75b-9757490651ea
	  Boot ID:                    17751c92-c71f-4e82-afb4-12da82035155
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-nslmz              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m44s
	  kube-system                 coredns-7c65d6cfc9-5ss4v             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m11s
	  kube-system                 coredns-7c65d6cfc9-s9wb4             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m11s
	  kube-system                 etcd-ha-931571                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m16s
	  kube-system                 kindnet-2n2ws                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m11s
	  kube-system                 kube-apiserver-ha-931571             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m16s
	  kube-system                 kube-controller-manager-ha-931571    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m16s
	  kube-system                 kube-proxy-bvk6r                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m11s
	  kube-system                 kube-scheduler-ha-931571             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m16s
	  kube-system                 kube-vip-ha-931571                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m16s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m10s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m10s  kube-proxy       
	  Normal  Starting                 6m16s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m16s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m16s  kubelet          Node ha-931571 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m16s  kubelet          Node ha-931571 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m16s  kubelet          Node ha-931571 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m12s  node-controller  Node ha-931571 event: Registered Node ha-931571 in Controller
	  Normal  NodeReady                5m56s  kubelet          Node ha-931571 status is now: NodeReady
	  Normal  RegisteredNode           5m17s  node-controller  Node ha-931571 event: Registered Node ha-931571 in Controller
	  Normal  RegisteredNode           4m3s   node-controller  Node ha-931571 event: Registered Node ha-931571 in Controller
	
	
	Name:               ha-931571-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-931571-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c03dd974d73f8853a1a57928c124797a5ae24dc4
	                    minikube.k8s.io/name=ha-931571
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_11_04T10_53_52_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 04 Nov 2024 10:53:49 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-931571-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 04 Nov 2024 10:56:38 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 04 Nov 2024 10:55:52 +0000   Mon, 04 Nov 2024 10:57:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 04 Nov 2024 10:55:52 +0000   Mon, 04 Nov 2024 10:57:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 04 Nov 2024 10:55:52 +0000   Mon, 04 Nov 2024 10:57:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 04 Nov 2024 10:55:52 +0000   Mon, 04 Nov 2024 10:57:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.245
	  Hostname:    ha-931571-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 06772ff96588423e9dc77ed49845e534
	  System UUID:                06772ff9-6588-423e-9dc7-7ed49845e534
	  Boot ID:                    74d940a3-5941-40ed-b058-45da0bd2f171
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-w9wmp                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m44s
	  kube-system                 etcd-ha-931571-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m23s
	  kube-system                 kindnet-bg4z6                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m25s
	  kube-system                 kube-apiserver-ha-931571-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m24s
	  kube-system                 kube-controller-manager-ha-931571-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m24s
	  kube-system                 kube-proxy-wz92s                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m25s
	  kube-system                 kube-scheduler-ha-931571-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m24s
	  kube-system                 kube-vip-ha-931571-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m21s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m21s                  kube-proxy       
	  Normal  Starting                 5m25s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m25s (x8 over 5m25s)  kubelet          Node ha-931571-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m25s (x8 over 5m25s)  kubelet          Node ha-931571-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m25s (x7 over 5m25s)  kubelet          Node ha-931571-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m25s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m22s                  node-controller  Node ha-931571-m02 event: Registered Node ha-931571-m02 in Controller
	  Normal  RegisteredNode           5m17s                  node-controller  Node ha-931571-m02 event: Registered Node ha-931571-m02 in Controller
	  Normal  RegisteredNode           4m3s                   node-controller  Node ha-931571-m02 event: Registered Node ha-931571-m02 in Controller
	  Normal  NodeNotReady             113s                   node-controller  Node ha-931571-m02 status is now: NodeNotReady
	
	
	Name:               ha-931571-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-931571-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c03dd974d73f8853a1a57928c124797a5ae24dc4
	                    minikube.k8s.io/name=ha-931571
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_11_04T10_55_05_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 04 Nov 2024 10:55:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-931571-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 04 Nov 2024 10:59:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 04 Nov 2024 10:56:04 +0000   Mon, 04 Nov 2024 10:55:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 04 Nov 2024 10:56:04 +0000   Mon, 04 Nov 2024 10:55:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 04 Nov 2024 10:56:04 +0000   Mon, 04 Nov 2024 10:55:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 04 Nov 2024 10:56:04 +0000   Mon, 04 Nov 2024 10:55:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.57
	  Hostname:    ha-931571-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 b21e133cd17b4b699323cc6d9f47f565
	  System UUID:                b21e133c-d17b-4b69-9323-cc6d9f47f565
	  Boot ID:                    50ec73f3-3253-4df5-83ed-277786faa385
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-lqgb9                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m44s
	  kube-system                 etcd-ha-931571-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m9s
	  kube-system                 kindnet-w2jwt                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m11s
	  kube-system                 kube-apiserver-ha-931571-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m11s
	  kube-system                 kube-controller-manager-ha-931571-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m11s
	  kube-system                 kube-proxy-ttq4z                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m11s
	  kube-system                 kube-scheduler-ha-931571-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m11s
	  kube-system                 kube-vip-ha-931571-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m7s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m7s                   kube-proxy       
	  Normal  NodeAllocatableEnforced  4m12s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  CIDRAssignmentFailed     4m11s                  cidrAllocator    Node ha-931571-m03 status is now: CIDRAssignmentFailed
	  Normal  NodeHasSufficientMemory  4m11s (x8 over 4m12s)  kubelet          Node ha-931571-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m11s (x8 over 4m12s)  kubelet          Node ha-931571-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m11s (x7 over 4m12s)  kubelet          Node ha-931571-m03 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m7s                   node-controller  Node ha-931571-m03 event: Registered Node ha-931571-m03 in Controller
	  Normal  RegisteredNode           4m7s                   node-controller  Node ha-931571-m03 event: Registered Node ha-931571-m03 in Controller
	  Normal  RegisteredNode           4m3s                   node-controller  Node ha-931571-m03 event: Registered Node ha-931571-m03 in Controller
	
	
	Name:               ha-931571-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-931571-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c03dd974d73f8853a1a57928c124797a5ae24dc4
	                    minikube.k8s.io/name=ha-931571
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_11_04T10_56_07_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 04 Nov 2024 10:56:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-931571-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 04 Nov 2024 10:59:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 04 Nov 2024 10:56:36 +0000   Mon, 04 Nov 2024 10:56:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 04 Nov 2024 10:56:36 +0000   Mon, 04 Nov 2024 10:56:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 04 Nov 2024 10:56:36 +0000   Mon, 04 Nov 2024 10:56:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 04 Nov 2024 10:56:36 +0000   Mon, 04 Nov 2024 10:56:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.237
	  Hostname:    ha-931571-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 851b57db90dc4e65909090eed2536ea8
	  System UUID:                851b57db-90dc-4e65-9090-90eed2536ea8
	  Boot ID:                    be99e848-d7b5-4c3a-990d-5dd7890c841c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-x8ptv       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m8s
	  kube-system                 kube-proxy-s8gg7    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m8s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 3m3s                 kube-proxy       
	  Normal  CIDRAssignmentFailed     3m8s                 cidrAllocator    Node ha-931571-m04 status is now: CIDRAssignmentFailed
	  Normal  NodeHasSufficientMemory  3m8s (x2 over 3m8s)  kubelet          Node ha-931571-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m8s (x2 over 3m8s)  kubelet          Node ha-931571-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m8s (x2 over 3m8s)  kubelet          Node ha-931571-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m8s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m7s                 node-controller  Node ha-931571-m04 event: Registered Node ha-931571-m04 in Controller
	  Normal  RegisteredNode           3m7s                 node-controller  Node ha-931571-m04 event: Registered Node ha-931571-m04 in Controller
	  Normal  RegisteredNode           3m3s                 node-controller  Node ha-931571-m04 event: Registered Node ha-931571-m04 in Controller
	  Normal  NodeReady                2m48s                kubelet          Node ha-931571-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov 4 10:52] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.047726] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.036586] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.779631] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.763191] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.537421] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.904587] systemd-fstab-generator[579]: Ignoring "noauto" option for root device
	[  +0.060497] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.062176] systemd-fstab-generator[591]: Ignoring "noauto" option for root device
	[  +0.155966] systemd-fstab-generator[605]: Ignoring "noauto" option for root device
	[  +0.126824] systemd-fstab-generator[617]: Ignoring "noauto" option for root device
	[  +0.243725] systemd-fstab-generator[648]: Ignoring "noauto" option for root device
	[  +3.719760] systemd-fstab-generator[744]: Ignoring "noauto" option for root device
	[  +3.831679] systemd-fstab-generator[874]: Ignoring "noauto" option for root device
	[  +0.057052] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.249250] kauditd_printk_skb: 79 callbacks suppressed
	[  +0.693317] systemd-fstab-generator[1353]: Ignoring "noauto" option for root device
	[Nov 4 10:53] kauditd_printk_skb: 30 callbacks suppressed
	[  +9.046787] kauditd_printk_skb: 41 callbacks suppressed
	[ +27.005860] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [f2d32daf142ba5b73e2f0491ea4ad911e8456739f191faf1cc001827eb91790c] <==
	{"level":"warn","ts":"2024-11-04T10:59:14.667175Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce564ad586a3115","from":"ce564ad586a3115","remote-peer-id":"df641d035a901564","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-04T10:59:14.674005Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce564ad586a3115","from":"ce564ad586a3115","remote-peer-id":"df641d035a901564","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-04T10:59:14.678819Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce564ad586a3115","from":"ce564ad586a3115","remote-peer-id":"df641d035a901564","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-04T10:59:14.692649Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce564ad586a3115","from":"ce564ad586a3115","remote-peer-id":"df641d035a901564","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-04T10:59:14.699972Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce564ad586a3115","from":"ce564ad586a3115","remote-peer-id":"df641d035a901564","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-04T10:59:14.705636Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce564ad586a3115","from":"ce564ad586a3115","remote-peer-id":"df641d035a901564","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-04T10:59:14.708698Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce564ad586a3115","from":"ce564ad586a3115","remote-peer-id":"df641d035a901564","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-04T10:59:14.711907Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce564ad586a3115","from":"ce564ad586a3115","remote-peer-id":"df641d035a901564","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-04T10:59:14.718818Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce564ad586a3115","from":"ce564ad586a3115","remote-peer-id":"df641d035a901564","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-04T10:59:14.723012Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce564ad586a3115","from":"ce564ad586a3115","remote-peer-id":"df641d035a901564","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-04T10:59:14.725151Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce564ad586a3115","from":"ce564ad586a3115","remote-peer-id":"df641d035a901564","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-04T10:59:14.730406Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce564ad586a3115","from":"ce564ad586a3115","remote-peer-id":"df641d035a901564","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-04T10:59:14.734270Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce564ad586a3115","from":"ce564ad586a3115","remote-peer-id":"df641d035a901564","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-04T10:59:14.737835Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce564ad586a3115","from":"ce564ad586a3115","remote-peer-id":"df641d035a901564","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-04T10:59:14.742693Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce564ad586a3115","from":"ce564ad586a3115","remote-peer-id":"df641d035a901564","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-04T10:59:14.748007Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce564ad586a3115","from":"ce564ad586a3115","remote-peer-id":"df641d035a901564","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-04T10:59:14.753591Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce564ad586a3115","from":"ce564ad586a3115","remote-peer-id":"df641d035a901564","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-04T10:59:14.757239Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce564ad586a3115","from":"ce564ad586a3115","remote-peer-id":"df641d035a901564","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-04T10:59:14.759921Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce564ad586a3115","from":"ce564ad586a3115","remote-peer-id":"df641d035a901564","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-04T10:59:14.763521Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce564ad586a3115","from":"ce564ad586a3115","remote-peer-id":"df641d035a901564","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-04T10:59:14.768049Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce564ad586a3115","from":"ce564ad586a3115","remote-peer-id":"df641d035a901564","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-04T10:59:14.771429Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce564ad586a3115","from":"ce564ad586a3115","remote-peer-id":"df641d035a901564","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-04T10:59:14.772129Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce564ad586a3115","from":"ce564ad586a3115","remote-peer-id":"df641d035a901564","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-04T10:59:14.781847Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce564ad586a3115","from":"ce564ad586a3115","remote-peer-id":"df641d035a901564","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-04T10:59:14.792614Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce564ad586a3115","from":"ce564ad586a3115","remote-peer-id":"df641d035a901564","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 10:59:14 up 6 min,  0 users,  load average: 0.26, 0.32, 0.15
	Linux ha-931571 5.10.207 #1 SMP Wed Oct 30 13:38:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [4401315f385bf2a6e5d875c655d4d61ff68d76ccf428956355ffd500a9ce2bc0] <==
	I1104 10:58:37.932891       1 main.go:324] Node ha-931571-m04 has CIDR [10.244.3.0/24] 
	I1104 10:58:47.933057       1 main.go:297] Handling node with IPs: map[192.168.39.237:{}]
	I1104 10:58:47.933141       1 main.go:324] Node ha-931571-m04 has CIDR [10.244.3.0/24] 
	I1104 10:58:47.933340       1 main.go:297] Handling node with IPs: map[192.168.39.67:{}]
	I1104 10:58:47.933365       1 main.go:301] handling current node
	I1104 10:58:47.933390       1 main.go:297] Handling node with IPs: map[192.168.39.245:{}]
	I1104 10:58:47.933406       1 main.go:324] Node ha-931571-m02 has CIDR [10.244.1.0/24] 
	I1104 10:58:47.933512       1 main.go:297] Handling node with IPs: map[192.168.39.57:{}]
	I1104 10:58:47.933532       1 main.go:324] Node ha-931571-m03 has CIDR [10.244.2.0/24] 
	I1104 10:58:57.931888       1 main.go:297] Handling node with IPs: map[192.168.39.67:{}]
	I1104 10:58:57.931969       1 main.go:301] handling current node
	I1104 10:58:57.931997       1 main.go:297] Handling node with IPs: map[192.168.39.245:{}]
	I1104 10:58:57.932015       1 main.go:324] Node ha-931571-m02 has CIDR [10.244.1.0/24] 
	I1104 10:58:57.932703       1 main.go:297] Handling node with IPs: map[192.168.39.57:{}]
	I1104 10:58:57.932784       1 main.go:324] Node ha-931571-m03 has CIDR [10.244.2.0/24] 
	I1104 10:58:57.933003       1 main.go:297] Handling node with IPs: map[192.168.39.237:{}]
	I1104 10:58:57.933029       1 main.go:324] Node ha-931571-m04 has CIDR [10.244.3.0/24] 
	I1104 10:59:07.925895       1 main.go:297] Handling node with IPs: map[192.168.39.57:{}]
	I1104 10:59:07.925959       1 main.go:324] Node ha-931571-m03 has CIDR [10.244.2.0/24] 
	I1104 10:59:07.926150       1 main.go:297] Handling node with IPs: map[192.168.39.237:{}]
	I1104 10:59:07.926172       1 main.go:324] Node ha-931571-m04 has CIDR [10.244.3.0/24] 
	I1104 10:59:07.926258       1 main.go:297] Handling node with IPs: map[192.168.39.67:{}]
	I1104 10:59:07.926276       1 main.go:301] handling current node
	I1104 10:59:07.926287       1 main.go:297] Handling node with IPs: map[192.168.39.245:{}]
	I1104 10:59:07.926292       1 main.go:324] Node ha-931571-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [82e4be064be10644428d59bf1bc4467a8666cf78ec7b830a51e614de7c4b3150] <==
	I1104 10:52:57.529011       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1104 10:52:57.636067       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1104 10:52:58.624832       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1104 10:52:58.639937       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1104 10:52:58.805171       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1104 10:53:03.087294       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I1104 10:53:03.287753       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E1104 10:53:50.685836       1 wrap.go:53] "Timeout or abort while handling" logger="UnhandledError" method="POST" URI="/api/v1/namespaces/kube-system/events" auditID="2a13690c-2b7c-4af7-94a1-2fcd1065da04"
	E1104 10:53:50.685933       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="5.903µs" method="POST" path="/api/v1/namespaces/kube-system/events" result=null
	E1104 10:55:34.753652       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:57932: use of closed network connection
	E1104 10:55:34.925834       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:57948: use of closed network connection
	E1104 10:55:35.093653       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:57972: use of closed network connection
	E1104 10:55:35.274875       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:57992: use of closed network connection
	E1104 10:55:35.447438       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58008: use of closed network connection
	E1104 10:55:35.612882       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58018: use of closed network connection
	E1104 10:55:35.778454       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58044: use of closed network connection
	E1104 10:55:35.949313       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58070: use of closed network connection
	E1104 10:55:36.116046       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58086: use of closed network connection
	E1104 10:55:36.394559       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58120: use of closed network connection
	E1104 10:55:36.560067       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58130: use of closed network connection
	E1104 10:55:36.741903       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58146: use of closed network connection
	E1104 10:55:36.920290       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58160: use of closed network connection
	E1104 10:55:37.097281       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58172: use of closed network connection
	E1104 10:55:37.276505       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58204: use of closed network connection
	W1104 10:57:07.528371       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.57 192.168.39.67]
	
	
	==> kube-controller-manager [4572c8bcb28cdf71917ee1df07e150610c3e183aaa1243eb84ab3c083f31f7bc] <==
	I1104 10:56:02.327738       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-931571"
	I1104 10:56:04.592818       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-931571-m03"
	I1104 10:56:06.541409       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-931571-m04\" does not exist"
	I1104 10:56:06.575948       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-931571-m04" podCIDRs=["10.244.3.0/24"]
	I1104 10:56:06.576008       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-931571-m04"
	I1104 10:56:06.576040       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-931571-m04"
	I1104 10:56:06.730053       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-931571-m04"
	I1104 10:56:07.090693       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-931571-m04"
	I1104 10:56:07.683331       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-931571-m04"
	I1104 10:56:07.724925       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-931571-m04"
	I1104 10:56:11.198433       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-931571-m04"
	I1104 10:56:11.234463       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-931571-m04"
	I1104 10:56:16.862581       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-931571-m04"
	I1104 10:56:26.184815       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-931571-m04"
	I1104 10:56:26.184900       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-931571-m04"
	I1104 10:56:26.200074       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-931571-m04"
	I1104 10:56:26.386370       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-931571-m04"
	I1104 10:56:36.943150       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-931571-m04"
	I1104 10:57:21.411213       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-931571-m04"
	I1104 10:57:21.411471       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-931571-m02"
	I1104 10:57:21.433152       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-931571-m02"
	I1104 10:57:21.545878       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="11.838445ms"
	I1104 10:57:21.546123       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="64.292µs"
	I1104 10:57:22.718407       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-931571-m02"
	I1104 10:57:26.623482       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-931571-m02"
	
	
	==> kube-proxy [6e592fe17c5f71c11cf25a871efff88360e0232b8ea66e460109b68f40581ba8] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1104 10:53:04.203851       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1104 10:53:04.229581       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.67"]
	E1104 10:53:04.229781       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1104 10:53:04.282192       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1104 10:53:04.282221       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1104 10:53:04.282244       1 server_linux.go:169] "Using iptables Proxier"
	I1104 10:53:04.285593       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1104 10:53:04.285958       1 server.go:483] "Version info" version="v1.31.2"
	I1104 10:53:04.285985       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1104 10:53:04.288139       1 config.go:199] "Starting service config controller"
	I1104 10:53:04.288173       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1104 10:53:04.290392       1 config.go:105] "Starting endpoint slice config controller"
	I1104 10:53:04.290557       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1104 10:53:04.291547       1 config.go:328] "Starting node config controller"
	I1104 10:53:04.292932       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1104 10:53:04.389214       1 shared_informer.go:320] Caches are synced for service config
	I1104 10:53:04.391802       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1104 10:53:04.393273       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [e50ab0290e7c2f22e6df511793236d28a6e0f3e1d5dbdcdd2e3447e4c26c2e6c] <==
	W1104 10:52:57.001881       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1104 10:52:57.001927       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1104 10:52:57.141748       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1104 10:52:57.141796       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1104 10:52:57.201248       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1104 10:52:57.201310       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1104 10:52:58.585064       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1104 10:55:30.513828       1 cache.go:503] "Pod was added to a different node than it was assumed" podKey="641f6861-b035-49a8-832b-70b7a069afb3" pod="default/busybox-7dff88458-lqgb9" assumedNode="ha-931571-m03" currentNode="ha-931571-m02"
	E1104 10:55:30.530615       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-lqgb9\": pod busybox-7dff88458-lqgb9 is already assigned to node \"ha-931571-m03\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-lqgb9" node="ha-931571-m02"
	E1104 10:55:30.530773       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 641f6861-b035-49a8-832b-70b7a069afb3(default/busybox-7dff88458-lqgb9) was assumed on ha-931571-m02 but assigned to ha-931571-m03" pod="default/busybox-7dff88458-lqgb9"
	E1104 10:55:30.530821       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-lqgb9\": pod busybox-7dff88458-lqgb9 is already assigned to node \"ha-931571-m03\"" pod="default/busybox-7dff88458-lqgb9"
	I1104 10:55:30.530854       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-lqgb9" node="ha-931571-m03"
	E1104 10:55:30.571464       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-nslmz\": pod busybox-7dff88458-nslmz is already assigned to node \"ha-931571\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-nslmz" node="ha-931571"
	E1104 10:55:30.572521       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 68017266-8187-488d-ab36-2a5af294fa2e(default/busybox-7dff88458-nslmz) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-nslmz"
	E1104 10:55:30.572641       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-nslmz\": pod busybox-7dff88458-nslmz is already assigned to node \"ha-931571\"" pod="default/busybox-7dff88458-nslmz"
	I1104 10:55:30.572740       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-nslmz" node="ha-931571"
	E1104 10:55:30.572411       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-w9wmp\": pod busybox-7dff88458-w9wmp is already assigned to node \"ha-931571-m02\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-w9wmp" node="ha-931571-m02"
	E1104 10:55:30.573133       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 84b6e653-b685-4c00-ac2f-d650738a613b(default/busybox-7dff88458-w9wmp) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-w9wmp"
	E1104 10:55:30.573206       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-w9wmp\": pod busybox-7dff88458-w9wmp is already assigned to node \"ha-931571-m02\"" pod="default/busybox-7dff88458-w9wmp"
	I1104 10:55:30.573228       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-w9wmp" node="ha-931571-m02"
	E1104 10:55:30.792999       1 schedule_one.go:1106] "Error updating pod" err="pods \"busybox-7dff88458-5nt9m\" not found" pod="default/busybox-7dff88458-5nt9m"
	E1104 10:56:06.602004       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-s8gg7\": pod kube-proxy-s8gg7 is already assigned to node \"ha-931571-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-s8gg7" node="ha-931571-m04"
	E1104 10:56:06.602261       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod c786786d-b4b5-4479-b5df-24cc8f346e86(kube-system/kube-proxy-s8gg7) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-s8gg7"
	E1104 10:56:06.602358       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-s8gg7\": pod kube-proxy-s8gg7 is already assigned to node \"ha-931571-m04\"" pod="kube-system/kube-proxy-s8gg7"
	I1104 10:56:06.602540       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-s8gg7" node="ha-931571-m04"
	
	
	==> kubelet <==
	Nov 04 10:58:28 ha-931571 kubelet[1360]: E1104 10:58:28.869616    1360 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730717908868890680,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156098,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 04 10:58:28 ha-931571 kubelet[1360]: E1104 10:58:28.869994    1360 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730717908868890680,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156098,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 04 10:58:30 ha-931571 kubelet[1360]: I1104 10:58:30.785581    1360 scope.go:117] "RemoveContainer" containerID="9b0c4137e04d5572b1e0277210028adf86df482f6a6a6a6a724bf176e285ca2f"
	Nov 04 10:58:30 ha-931571 kubelet[1360]: E1104 10:58:30.785757    1360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-vip\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-vip pod=kube-vip-ha-931571_kube-system(d7bfae2f58ae7de463dba4b274c633ef)\"" pod="kube-system/kube-vip-ha-931571" podUID="d7bfae2f58ae7de463dba4b274c633ef"
	Nov 04 10:58:38 ha-931571 kubelet[1360]: E1104 10:58:38.871501    1360 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730717918871014143,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156098,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 04 10:58:38 ha-931571 kubelet[1360]: E1104 10:58:38.871524    1360 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730717918871014143,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156098,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 04 10:58:42 ha-931571 kubelet[1360]: I1104 10:58:42.786581    1360 scope.go:117] "RemoveContainer" containerID="9b0c4137e04d5572b1e0277210028adf86df482f6a6a6a6a724bf176e285ca2f"
	Nov 04 10:58:42 ha-931571 kubelet[1360]: E1104 10:58:42.791316    1360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-vip\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-vip pod=kube-vip-ha-931571_kube-system(d7bfae2f58ae7de463dba4b274c633ef)\"" pod="kube-system/kube-vip-ha-931571" podUID="d7bfae2f58ae7de463dba4b274c633ef"
	Nov 04 10:58:48 ha-931571 kubelet[1360]: E1104 10:58:48.872774    1360 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730717928872476228,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156098,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 04 10:58:48 ha-931571 kubelet[1360]: E1104 10:58:48.872859    1360 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730717928872476228,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156098,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 04 10:58:53 ha-931571 kubelet[1360]: I1104 10:58:53.785072    1360 scope.go:117] "RemoveContainer" containerID="9b0c4137e04d5572b1e0277210028adf86df482f6a6a6a6a724bf176e285ca2f"
	Nov 04 10:58:58 ha-931571 kubelet[1360]: E1104 10:58:58.819237    1360 iptables.go:577] "Could not set up iptables canary" err=<
	Nov 04 10:58:58 ha-931571 kubelet[1360]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Nov 04 10:58:58 ha-931571 kubelet[1360]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 04 10:58:58 ha-931571 kubelet[1360]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 04 10:58:58 ha-931571 kubelet[1360]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Nov 04 10:58:58 ha-931571 kubelet[1360]: E1104 10:58:58.874071    1360 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730717938873867782,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156098,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 04 10:58:58 ha-931571 kubelet[1360]: E1104 10:58:58.874093    1360 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730717938873867782,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156098,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 04 10:59:00 ha-931571 kubelet[1360]: I1104 10:59:00.144622    1360 scope.go:117] "RemoveContainer" containerID="9b0c4137e04d5572b1e0277210028adf86df482f6a6a6a6a724bf176e285ca2f"
	Nov 04 10:59:00 ha-931571 kubelet[1360]: I1104 10:59:00.145089    1360 scope.go:117] "RemoveContainer" containerID="801830521b8c68ec780508f94b0f3d8c52a6c3d5458328e719c2ce3178c47cc3"
	Nov 04 10:59:00 ha-931571 kubelet[1360]: E1104 10:59:00.145270    1360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-vip\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-vip pod=kube-vip-ha-931571_kube-system(d7bfae2f58ae7de463dba4b274c633ef)\"" pod="kube-system/kube-vip-ha-931571" podUID="d7bfae2f58ae7de463dba4b274c633ef"
	Nov 04 10:59:08 ha-931571 kubelet[1360]: E1104 10:59:08.878363    1360 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730717948875635760,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156098,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 04 10:59:08 ha-931571 kubelet[1360]: E1104 10:59:08.878627    1360 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730717948875635760,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156098,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 04 10:59:14 ha-931571 kubelet[1360]: I1104 10:59:14.786026    1360 scope.go:117] "RemoveContainer" containerID="801830521b8c68ec780508f94b0f3d8c52a6c3d5458328e719c2ce3178c47cc3"
	Nov 04 10:59:14 ha-931571 kubelet[1360]: E1104 10:59:14.786168    1360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-vip\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-vip pod=kube-vip-ha-931571_kube-system(d7bfae2f58ae7de463dba4b274c633ef)\"" pod="kube-system/kube-vip-ha-931571" podUID="d7bfae2f58ae7de463dba4b274c633ef"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-931571 -n ha-931571
helpers_test.go:261: (dbg) Run:  kubectl --context ha-931571 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (149.56s)
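
The kubelet log above shows two recurring problems on the primary node: kube-vip-ha-931571 cycling through CrashLoopBackOff, and the iptables canary failing because the guest kernel has no ip6tables "nat" table. A minimal diagnostic sketch for a follow-up run, assuming the profile, context and pod names from this report (ip6table_nat is the module that normally provides that table):

    # inspect the crashing kube-vip static pod and its previous container's logs
    kubectl --context ha-931571 -n kube-system get pod kube-vip-ha-931571 -o wide
    kubectl --context ha-931571 -n kube-system logs kube-vip-ha-931571 --previous

    # check for the IPv6 nat table inside the primary node VM
    out/minikube-linux-amd64 -p ha-931571 ssh "sudo lsmod | grep ip6table"
    out/minikube-linux-amd64 -p ha-931571 ssh "sudo modprobe ip6table_nat && sudo ip6tables -t nat -L -n"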

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (5.54s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
E1104 10:59:17.025331   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/functional-762465/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:392: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.395899135s)
ha_test.go:415: expected profile "ha-931571" in json of 'profile list' to have "Degraded" status but have "Unknown" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-931571\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"ha-931571\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"kvm2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":
1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.2\",\"ClusterName\":\"ha-931571\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.39.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.39.67\",\"Port\":8443,\"Kuber
netesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.39.245\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.39.57\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.39.237\",\"Port\":0,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"amd-gpu-device-plugin\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevir
t\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/home/jenkins:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\"
,\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-931571 -n ha-931571
helpers_test.go:244: <<< TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-931571 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-931571 logs -n 25: (1.295040656s)
helpers_test.go:252: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-931571 cp ha-931571-m03:/home/docker/cp-test.txt                              | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile2369318263/001/cp-test_ha-931571-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-931571 ssh -n                                                                 | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | ha-931571-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-931571 cp ha-931571-m03:/home/docker/cp-test.txt                              | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | ha-931571:/home/docker/cp-test_ha-931571-m03_ha-931571.txt                       |           |         |         |                     |                     |
	| ssh     | ha-931571 ssh -n                                                                 | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | ha-931571-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-931571 ssh -n ha-931571 sudo cat                                              | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | /home/docker/cp-test_ha-931571-m03_ha-931571.txt                                 |           |         |         |                     |                     |
	| cp      | ha-931571 cp ha-931571-m03:/home/docker/cp-test.txt                              | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | ha-931571-m02:/home/docker/cp-test_ha-931571-m03_ha-931571-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-931571 ssh -n                                                                 | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | ha-931571-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-931571 ssh -n ha-931571-m02 sudo cat                                          | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | /home/docker/cp-test_ha-931571-m03_ha-931571-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-931571 cp ha-931571-m03:/home/docker/cp-test.txt                              | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | ha-931571-m04:/home/docker/cp-test_ha-931571-m03_ha-931571-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-931571 ssh -n                                                                 | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | ha-931571-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-931571 ssh -n ha-931571-m04 sudo cat                                          | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | /home/docker/cp-test_ha-931571-m03_ha-931571-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-931571 cp testdata/cp-test.txt                                                | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | ha-931571-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-931571 ssh -n                                                                 | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | ha-931571-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-931571 cp ha-931571-m04:/home/docker/cp-test.txt                              | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile2369318263/001/cp-test_ha-931571-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-931571 ssh -n                                                                 | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | ha-931571-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-931571 cp ha-931571-m04:/home/docker/cp-test.txt                              | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | ha-931571:/home/docker/cp-test_ha-931571-m04_ha-931571.txt                       |           |         |         |                     |                     |
	| ssh     | ha-931571 ssh -n                                                                 | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | ha-931571-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-931571 ssh -n ha-931571 sudo cat                                              | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | /home/docker/cp-test_ha-931571-m04_ha-931571.txt                                 |           |         |         |                     |                     |
	| cp      | ha-931571 cp ha-931571-m04:/home/docker/cp-test.txt                              | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | ha-931571-m02:/home/docker/cp-test_ha-931571-m04_ha-931571-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-931571 ssh -n                                                                 | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | ha-931571-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-931571 ssh -n ha-931571-m02 sudo cat                                          | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | /home/docker/cp-test_ha-931571-m04_ha-931571-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-931571 cp ha-931571-m04:/home/docker/cp-test.txt                              | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | ha-931571-m03:/home/docker/cp-test_ha-931571-m04_ha-931571-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-931571 ssh -n                                                                 | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | ha-931571-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-931571 ssh -n ha-931571-m03 sudo cat                                          | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | /home/docker/cp-test_ha-931571-m04_ha-931571-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-931571 node stop m02 -v=7                                                     | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/11/04 10:52:21
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1104 10:52:21.364935   37715 out.go:345] Setting OutFile to fd 1 ...
	I1104 10:52:21.365025   37715 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1104 10:52:21.365032   37715 out.go:358] Setting ErrFile to fd 2...
	I1104 10:52:21.365036   37715 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1104 10:52:21.365213   37715 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19906-19898/.minikube/bin
	I1104 10:52:21.365784   37715 out.go:352] Setting JSON to false
	I1104 10:52:21.366601   37715 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":5692,"bootTime":1730711849,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1104 10:52:21.366686   37715 start.go:139] virtualization: kvm guest
	I1104 10:52:21.368805   37715 out.go:177] * [ha-931571] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1104 10:52:21.370048   37715 out.go:177]   - MINIKUBE_LOCATION=19906
	I1104 10:52:21.370105   37715 notify.go:220] Checking for updates...
	I1104 10:52:21.372521   37715 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1104 10:52:21.373968   37715 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19906-19898/kubeconfig
	I1104 10:52:21.375378   37715 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19906-19898/.minikube
	I1104 10:52:21.376837   37715 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1104 10:52:21.378230   37715 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1104 10:52:21.379614   37715 driver.go:394] Setting default libvirt URI to qemu:///system
	I1104 10:52:21.414672   37715 out.go:177] * Using the kvm2 driver based on user configuration
	I1104 10:52:21.416078   37715 start.go:297] selected driver: kvm2
	I1104 10:52:21.416092   37715 start.go:901] validating driver "kvm2" against <nil>
	I1104 10:52:21.416103   37715 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1104 10:52:21.416883   37715 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1104 10:52:21.416970   37715 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19906-19898/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1104 10:52:21.432886   37715 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1104 10:52:21.432946   37715 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1104 10:52:21.433171   37715 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1104 10:52:21.433208   37715 cni.go:84] Creating CNI manager for ""
	I1104 10:52:21.433267   37715 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1104 10:52:21.433278   37715 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1104 10:52:21.433324   37715 start.go:340] cluster config:
	{Name:ha-931571 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-931571 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRIS
ocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0
GPUs: AutoPauseInterval:1m0s}
	I1104 10:52:21.433412   37715 iso.go:125] acquiring lock: {Name:mk00c7dd6e02d348844f079f7574057e15cae010 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1104 10:52:21.435216   37715 out.go:177] * Starting "ha-931571" primary control-plane node in "ha-931571" cluster
	I1104 10:52:21.436574   37715 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1104 10:52:21.436609   37715 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1104 10:52:21.436618   37715 cache.go:56] Caching tarball of preloaded images
	I1104 10:52:21.436693   37715 preload.go:172] Found /home/jenkins/minikube-integration/19906-19898/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1104 10:52:21.436705   37715 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1104 10:52:21.436992   37715 profile.go:143] Saving config to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/config.json ...
	I1104 10:52:21.437018   37715 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/config.json: {Name:mke118782614f4d89fa0f6507dfdc64c536a0e87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 10:52:21.437163   37715 start.go:360] acquireMachinesLock for ha-931571: {Name:mka4ed965095cfb58c9b0f5916edff39b4236a2f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1104 10:52:21.437221   37715 start.go:364] duration metric: took 42.218µs to acquireMachinesLock for "ha-931571"
	I1104 10:52:21.437267   37715 start.go:93] Provisioning new machine with config: &{Name:ha-931571 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.31.2 ClusterName:ha-931571 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1104 10:52:21.437337   37715 start.go:125] createHost starting for "" (driver="kvm2")
	I1104 10:52:21.438936   37715 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1104 10:52:21.439063   37715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 10:52:21.439107   37715 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 10:52:21.453699   37715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36911
	I1104 10:52:21.454132   37715 main.go:141] libmachine: () Calling .GetVersion
	I1104 10:52:21.454653   37715 main.go:141] libmachine: Using API Version  1
	I1104 10:52:21.454675   37715 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 10:52:21.455002   37715 main.go:141] libmachine: () Calling .GetMachineName
	I1104 10:52:21.455150   37715 main.go:141] libmachine: (ha-931571) Calling .GetMachineName
	I1104 10:52:21.455275   37715 main.go:141] libmachine: (ha-931571) Calling .DriverName
	I1104 10:52:21.455438   37715 start.go:159] libmachine.API.Create for "ha-931571" (driver="kvm2")
	I1104 10:52:21.455470   37715 client.go:168] LocalClient.Create starting
	I1104 10:52:21.455500   37715 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem
	I1104 10:52:21.455528   37715 main.go:141] libmachine: Decoding PEM data...
	I1104 10:52:21.455541   37715 main.go:141] libmachine: Parsing certificate...
	I1104 10:52:21.455581   37715 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem
	I1104 10:52:21.455599   37715 main.go:141] libmachine: Decoding PEM data...
	I1104 10:52:21.455610   37715 main.go:141] libmachine: Parsing certificate...
	I1104 10:52:21.455624   37715 main.go:141] libmachine: Running pre-create checks...
	I1104 10:52:21.455633   37715 main.go:141] libmachine: (ha-931571) Calling .PreCreateCheck
	I1104 10:52:21.455911   37715 main.go:141] libmachine: (ha-931571) Calling .GetConfigRaw
	I1104 10:52:21.456291   37715 main.go:141] libmachine: Creating machine...
	I1104 10:52:21.456304   37715 main.go:141] libmachine: (ha-931571) Calling .Create
	I1104 10:52:21.456440   37715 main.go:141] libmachine: (ha-931571) Creating KVM machine...
	I1104 10:52:21.457741   37715 main.go:141] libmachine: (ha-931571) DBG | found existing default KVM network
	I1104 10:52:21.458392   37715 main.go:141] libmachine: (ha-931571) DBG | I1104 10:52:21.458262   37738 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0001231e0}
	I1104 10:52:21.458442   37715 main.go:141] libmachine: (ha-931571) DBG | created network xml: 
	I1104 10:52:21.458465   37715 main.go:141] libmachine: (ha-931571) DBG | <network>
	I1104 10:52:21.458474   37715 main.go:141] libmachine: (ha-931571) DBG |   <name>mk-ha-931571</name>
	I1104 10:52:21.458487   37715 main.go:141] libmachine: (ha-931571) DBG |   <dns enable='no'/>
	I1104 10:52:21.458498   37715 main.go:141] libmachine: (ha-931571) DBG |   
	I1104 10:52:21.458510   37715 main.go:141] libmachine: (ha-931571) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1104 10:52:21.458517   37715 main.go:141] libmachine: (ha-931571) DBG |     <dhcp>
	I1104 10:52:21.458526   37715 main.go:141] libmachine: (ha-931571) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1104 10:52:21.458536   37715 main.go:141] libmachine: (ha-931571) DBG |     </dhcp>
	I1104 10:52:21.458547   37715 main.go:141] libmachine: (ha-931571) DBG |   </ip>
	I1104 10:52:21.458556   37715 main.go:141] libmachine: (ha-931571) DBG |   
	I1104 10:52:21.458566   37715 main.go:141] libmachine: (ha-931571) DBG | </network>
	I1104 10:52:21.458577   37715 main.go:141] libmachine: (ha-931571) DBG | 
	I1104 10:52:21.463306   37715 main.go:141] libmachine: (ha-931571) DBG | trying to create private KVM network mk-ha-931571 192.168.39.0/24...
	I1104 10:52:21.529269   37715 main.go:141] libmachine: (ha-931571) DBG | private KVM network mk-ha-931571 192.168.39.0/24 created
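If the "waiting for machine to come up" retries further down in this log take longer than expected, the freshly created network and the domain's DHCP lease can be checked directly with virsh — a minimal sketch, assuming access to the same qemu:///system URI the kvm2 driver uses:

    virsh --connect qemu:///system net-list --all
    virsh --connect qemu:///system net-dumpxml mk-ha-931571
    virsh --connect qemu:///system net-dhcp-leases mk-ha-931571
    virsh --connect qemu:///system domifaddr ha-931571 --source lease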
	I1104 10:52:21.529311   37715 main.go:141] libmachine: (ha-931571) DBG | I1104 10:52:21.529188   37738 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19906-19898/.minikube
	I1104 10:52:21.529329   37715 main.go:141] libmachine: (ha-931571) Setting up store path in /home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571 ...
	I1104 10:52:21.529347   37715 main.go:141] libmachine: (ha-931571) Building disk image from file:///home/jenkins/minikube-integration/19906-19898/.minikube/cache/iso/amd64/minikube-v1.34.0-1730282777-19883-amd64.iso
	I1104 10:52:21.529364   37715 main.go:141] libmachine: (ha-931571) Downloading /home/jenkins/minikube-integration/19906-19898/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19906-19898/.minikube/cache/iso/amd64/minikube-v1.34.0-1730282777-19883-amd64.iso...
	I1104 10:52:21.775859   37715 main.go:141] libmachine: (ha-931571) DBG | I1104 10:52:21.775727   37738 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571/id_rsa...
	I1104 10:52:21.860057   37715 main.go:141] libmachine: (ha-931571) DBG | I1104 10:52:21.859924   37738 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571/ha-931571.rawdisk...
	I1104 10:52:21.860086   37715 main.go:141] libmachine: (ha-931571) DBG | Writing magic tar header
	I1104 10:52:21.860102   37715 main.go:141] libmachine: (ha-931571) DBG | Writing SSH key tar header
	I1104 10:52:21.860115   37715 main.go:141] libmachine: (ha-931571) DBG | I1104 10:52:21.860035   37738 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571 ...
	I1104 10:52:21.860131   37715 main.go:141] libmachine: (ha-931571) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571
	I1104 10:52:21.860191   37715 main.go:141] libmachine: (ha-931571) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19906-19898/.minikube/machines
	I1104 10:52:21.860213   37715 main.go:141] libmachine: (ha-931571) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19906-19898/.minikube
	I1104 10:52:21.860225   37715 main.go:141] libmachine: (ha-931571) Setting executable bit set on /home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571 (perms=drwx------)
	I1104 10:52:21.860235   37715 main.go:141] libmachine: (ha-931571) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19906-19898
	I1104 10:52:21.860254   37715 main.go:141] libmachine: (ha-931571) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1104 10:52:21.860267   37715 main.go:141] libmachine: (ha-931571) Setting executable bit set on /home/jenkins/minikube-integration/19906-19898/.minikube/machines (perms=drwxr-xr-x)
	I1104 10:52:21.860276   37715 main.go:141] libmachine: (ha-931571) DBG | Checking permissions on dir: /home/jenkins
	I1104 10:52:21.860287   37715 main.go:141] libmachine: (ha-931571) DBG | Checking permissions on dir: /home
	I1104 10:52:21.860298   37715 main.go:141] libmachine: (ha-931571) DBG | Skipping /home - not owner
	I1104 10:52:21.860370   37715 main.go:141] libmachine: (ha-931571) Setting executable bit set on /home/jenkins/minikube-integration/19906-19898/.minikube (perms=drwxr-xr-x)
	I1104 10:52:21.860424   37715 main.go:141] libmachine: (ha-931571) Setting executable bit set on /home/jenkins/minikube-integration/19906-19898 (perms=drwxrwxr-x)
	I1104 10:52:21.860440   37715 main.go:141] libmachine: (ha-931571) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1104 10:52:21.860450   37715 main.go:141] libmachine: (ha-931571) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1104 10:52:21.860468   37715 main.go:141] libmachine: (ha-931571) Creating domain...
	I1104 10:52:21.861289   37715 main.go:141] libmachine: (ha-931571) define libvirt domain using xml: 
	I1104 10:52:21.861306   37715 main.go:141] libmachine: (ha-931571) <domain type='kvm'>
	I1104 10:52:21.861313   37715 main.go:141] libmachine: (ha-931571)   <name>ha-931571</name>
	I1104 10:52:21.861320   37715 main.go:141] libmachine: (ha-931571)   <memory unit='MiB'>2200</memory>
	I1104 10:52:21.861328   37715 main.go:141] libmachine: (ha-931571)   <vcpu>2</vcpu>
	I1104 10:52:21.861340   37715 main.go:141] libmachine: (ha-931571)   <features>
	I1104 10:52:21.861356   37715 main.go:141] libmachine: (ha-931571)     <acpi/>
	I1104 10:52:21.861372   37715 main.go:141] libmachine: (ha-931571)     <apic/>
	I1104 10:52:21.861380   37715 main.go:141] libmachine: (ha-931571)     <pae/>
	I1104 10:52:21.861396   37715 main.go:141] libmachine: (ha-931571)     
	I1104 10:52:21.861404   37715 main.go:141] libmachine: (ha-931571)   </features>
	I1104 10:52:21.861416   37715 main.go:141] libmachine: (ha-931571)   <cpu mode='host-passthrough'>
	I1104 10:52:21.861423   37715 main.go:141] libmachine: (ha-931571)   
	I1104 10:52:21.861426   37715 main.go:141] libmachine: (ha-931571)   </cpu>
	I1104 10:52:21.861433   37715 main.go:141] libmachine: (ha-931571)   <os>
	I1104 10:52:21.861437   37715 main.go:141] libmachine: (ha-931571)     <type>hvm</type>
	I1104 10:52:21.861444   37715 main.go:141] libmachine: (ha-931571)     <boot dev='cdrom'/>
	I1104 10:52:21.861448   37715 main.go:141] libmachine: (ha-931571)     <boot dev='hd'/>
	I1104 10:52:21.861452   37715 main.go:141] libmachine: (ha-931571)     <bootmenu enable='no'/>
	I1104 10:52:21.861458   37715 main.go:141] libmachine: (ha-931571)   </os>
	I1104 10:52:21.861462   37715 main.go:141] libmachine: (ha-931571)   <devices>
	I1104 10:52:21.861469   37715 main.go:141] libmachine: (ha-931571)     <disk type='file' device='cdrom'>
	I1104 10:52:21.861476   37715 main.go:141] libmachine: (ha-931571)       <source file='/home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571/boot2docker.iso'/>
	I1104 10:52:21.861488   37715 main.go:141] libmachine: (ha-931571)       <target dev='hdc' bus='scsi'/>
	I1104 10:52:21.861492   37715 main.go:141] libmachine: (ha-931571)       <readonly/>
	I1104 10:52:21.861495   37715 main.go:141] libmachine: (ha-931571)     </disk>
	I1104 10:52:21.861500   37715 main.go:141] libmachine: (ha-931571)     <disk type='file' device='disk'>
	I1104 10:52:21.861506   37715 main.go:141] libmachine: (ha-931571)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1104 10:52:21.861513   37715 main.go:141] libmachine: (ha-931571)       <source file='/home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571/ha-931571.rawdisk'/>
	I1104 10:52:21.861520   37715 main.go:141] libmachine: (ha-931571)       <target dev='hda' bus='virtio'/>
	I1104 10:52:21.861524   37715 main.go:141] libmachine: (ha-931571)     </disk>
	I1104 10:52:21.861533   37715 main.go:141] libmachine: (ha-931571)     <interface type='network'>
	I1104 10:52:21.861538   37715 main.go:141] libmachine: (ha-931571)       <source network='mk-ha-931571'/>
	I1104 10:52:21.861547   37715 main.go:141] libmachine: (ha-931571)       <model type='virtio'/>
	I1104 10:52:21.861557   37715 main.go:141] libmachine: (ha-931571)     </interface>
	I1104 10:52:21.861566   37715 main.go:141] libmachine: (ha-931571)     <interface type='network'>
	I1104 10:52:21.861571   37715 main.go:141] libmachine: (ha-931571)       <source network='default'/>
	I1104 10:52:21.861580   37715 main.go:141] libmachine: (ha-931571)       <model type='virtio'/>
	I1104 10:52:21.861584   37715 main.go:141] libmachine: (ha-931571)     </interface>
	I1104 10:52:21.861591   37715 main.go:141] libmachine: (ha-931571)     <serial type='pty'>
	I1104 10:52:21.861645   37715 main.go:141] libmachine: (ha-931571)       <target port='0'/>
	I1104 10:52:21.861685   37715 main.go:141] libmachine: (ha-931571)     </serial>
	I1104 10:52:21.861703   37715 main.go:141] libmachine: (ha-931571)     <console type='pty'>
	I1104 10:52:21.861714   37715 main.go:141] libmachine: (ha-931571)       <target type='serial' port='0'/>
	I1104 10:52:21.861735   37715 main.go:141] libmachine: (ha-931571)     </console>
	I1104 10:52:21.861744   37715 main.go:141] libmachine: (ha-931571)     <rng model='virtio'>
	I1104 10:52:21.861753   37715 main.go:141] libmachine: (ha-931571)       <backend model='random'>/dev/random</backend>
	I1104 10:52:21.861765   37715 main.go:141] libmachine: (ha-931571)     </rng>
	I1104 10:52:21.861773   37715 main.go:141] libmachine: (ha-931571)     
	I1104 10:52:21.861783   37715 main.go:141] libmachine: (ha-931571)     
	I1104 10:52:21.861791   37715 main.go:141] libmachine: (ha-931571)   </devices>
	I1104 10:52:21.861799   37715 main.go:141] libmachine: (ha-931571) </domain>
	I1104 10:52:21.861809   37715 main.go:141] libmachine: (ha-931571) 
	I1104 10:52:21.865935   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:cf:c5:1d in network default
	I1104 10:52:21.866504   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:21.866522   37715 main.go:141] libmachine: (ha-931571) Ensuring networks are active...
	I1104 10:52:21.866948   37715 main.go:141] libmachine: (ha-931571) Ensuring network default is active
	I1104 10:52:21.867232   37715 main.go:141] libmachine: (ha-931571) Ensuring network mk-ha-931571 is active
	I1104 10:52:21.867627   37715 main.go:141] libmachine: (ha-931571) Getting domain xml...
	I1104 10:52:21.868256   37715 main.go:141] libmachine: (ha-931571) Creating domain...
	I1104 10:52:23.049161   37715 main.go:141] libmachine: (ha-931571) Waiting to get IP...
	I1104 10:52:23.050233   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:23.050623   37715 main.go:141] libmachine: (ha-931571) DBG | unable to find current IP address of domain ha-931571 in network mk-ha-931571
	I1104 10:52:23.050643   37715 main.go:141] libmachine: (ha-931571) DBG | I1104 10:52:23.050602   37738 retry.go:31] will retry after 245.530574ms: waiting for machine to come up
	I1104 10:52:23.298185   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:23.298678   37715 main.go:141] libmachine: (ha-931571) DBG | unable to find current IP address of domain ha-931571 in network mk-ha-931571
	I1104 10:52:23.298704   37715 main.go:141] libmachine: (ha-931571) DBG | I1104 10:52:23.298589   37738 retry.go:31] will retry after 317.376406ms: waiting for machine to come up
	I1104 10:52:23.617020   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:23.617577   37715 main.go:141] libmachine: (ha-931571) DBG | unable to find current IP address of domain ha-931571 in network mk-ha-931571
	I1104 10:52:23.617605   37715 main.go:141] libmachine: (ha-931571) DBG | I1104 10:52:23.617514   37738 retry.go:31] will retry after 370.038267ms: waiting for machine to come up
	I1104 10:52:23.988831   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:23.989190   37715 main.go:141] libmachine: (ha-931571) DBG | unable to find current IP address of domain ha-931571 in network mk-ha-931571
	I1104 10:52:23.989220   37715 main.go:141] libmachine: (ha-931571) DBG | I1104 10:52:23.989148   37738 retry.go:31] will retry after 538.152632ms: waiting for machine to come up
	I1104 10:52:24.528804   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:24.529210   37715 main.go:141] libmachine: (ha-931571) DBG | unable to find current IP address of domain ha-931571 in network mk-ha-931571
	I1104 10:52:24.529252   37715 main.go:141] libmachine: (ha-931571) DBG | I1104 10:52:24.529162   37738 retry.go:31] will retry after 731.07349ms: waiting for machine to come up
	I1104 10:52:25.262048   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:25.262502   37715 main.go:141] libmachine: (ha-931571) DBG | unable to find current IP address of domain ha-931571 in network mk-ha-931571
	I1104 10:52:25.262519   37715 main.go:141] libmachine: (ha-931571) DBG | I1104 10:52:25.262462   37738 retry.go:31] will retry after 741.011273ms: waiting for machine to come up
	I1104 10:52:26.005553   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:26.005942   37715 main.go:141] libmachine: (ha-931571) DBG | unable to find current IP address of domain ha-931571 in network mk-ha-931571
	I1104 10:52:26.005976   37715 main.go:141] libmachine: (ha-931571) DBG | I1104 10:52:26.005909   37738 retry.go:31] will retry after 743.777795ms: waiting for machine to come up
	I1104 10:52:26.751254   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:26.751560   37715 main.go:141] libmachine: (ha-931571) DBG | unable to find current IP address of domain ha-931571 in network mk-ha-931571
	I1104 10:52:26.751581   37715 main.go:141] libmachine: (ha-931571) DBG | I1104 10:52:26.751519   37738 retry.go:31] will retry after 895.955115ms: waiting for machine to come up
	I1104 10:52:27.648705   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:27.649070   37715 main.go:141] libmachine: (ha-931571) DBG | unable to find current IP address of domain ha-931571 in network mk-ha-931571
	I1104 10:52:27.649096   37715 main.go:141] libmachine: (ha-931571) DBG | I1104 10:52:27.649040   37738 retry.go:31] will retry after 1.225419017s: waiting for machine to come up
	I1104 10:52:28.876413   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:28.876806   37715 main.go:141] libmachine: (ha-931571) DBG | unable to find current IP address of domain ha-931571 in network mk-ha-931571
	I1104 10:52:28.876829   37715 main.go:141] libmachine: (ha-931571) DBG | I1104 10:52:28.876782   37738 retry.go:31] will retry after 1.631823926s: waiting for machine to come up
	I1104 10:52:30.510636   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:30.511147   37715 main.go:141] libmachine: (ha-931571) DBG | unable to find current IP address of domain ha-931571 in network mk-ha-931571
	I1104 10:52:30.511177   37715 main.go:141] libmachine: (ha-931571) DBG | I1104 10:52:30.511093   37738 retry.go:31] will retry after 1.798258408s: waiting for machine to come up
	I1104 10:52:32.311067   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:32.311528   37715 main.go:141] libmachine: (ha-931571) DBG | unable to find current IP address of domain ha-931571 in network mk-ha-931571
	I1104 10:52:32.311574   37715 main.go:141] libmachine: (ha-931571) DBG | I1104 10:52:32.311491   37738 retry.go:31] will retry after 3.573429436s: waiting for machine to come up
	I1104 10:52:35.889088   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:35.889552   37715 main.go:141] libmachine: (ha-931571) DBG | unable to find current IP address of domain ha-931571 in network mk-ha-931571
	I1104 10:52:35.889578   37715 main.go:141] libmachine: (ha-931571) DBG | I1104 10:52:35.889516   37738 retry.go:31] will retry after 4.488251667s: waiting for machine to come up
	I1104 10:52:40.382173   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:40.382599   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has current primary IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:40.382621   37715 main.go:141] libmachine: (ha-931571) Found IP for machine: 192.168.39.67
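The retry intervals logged above (245ms, 317ms, 370ms, 538ms, ... up to several seconds) come from a poll-with-growing-backoff loop that waits for the libvirt DHCP lease to appear for the domain's MAC address. Below is a minimal Go sketch of that pattern; lookupIP, the growth factor, and the jitter are illustrative stand-ins, not minikube's actual retry.go implementation.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP stands in for querying libvirt's DHCP leases for the domain's
// MAC address; it fails until a lease exists (always fails in this sketch).
func lookupIP(mac string) (string, error) {
	return "", errors.New("unable to find current IP address of domain")
}

// waitForIP polls lookupIP with a randomized, growing delay until the
// machine gets an address or the overall timeout expires.
func waitForIP(mac string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	base := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(mac); err == nil {
			return ip, nil
		}
		wait := base + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		base = base * 3 / 2 // grow the base delay on each attempt
	}
	return "", fmt.Errorf("timed out waiting for an IP on MAC %s", mac)
}

func main() {
	if _, err := waitForIP("52:54:00:2c:cb:16", 2*time.Second); err != nil {
		fmt.Println(err)
	}
}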
	I1104 10:52:40.382633   37715 main.go:141] libmachine: (ha-931571) Reserving static IP address...
	I1104 10:52:40.383033   37715 main.go:141] libmachine: (ha-931571) DBG | unable to find host DHCP lease matching {name: "ha-931571", mac: "52:54:00:2c:cb:16", ip: "192.168.39.67"} in network mk-ha-931571
	I1104 10:52:40.452346   37715 main.go:141] libmachine: (ha-931571) DBG | Getting to WaitForSSH function...
	I1104 10:52:40.452379   37715 main.go:141] libmachine: (ha-931571) Reserved static IP address: 192.168.39.67
	I1104 10:52:40.452392   37715 main.go:141] libmachine: (ha-931571) Waiting for SSH to be available...
	I1104 10:52:40.456018   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:40.456490   37715 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:minikube Clientid:01:52:54:00:2c:cb:16}
	I1104 10:52:40.456515   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:40.456627   37715 main.go:141] libmachine: (ha-931571) DBG | Using SSH client type: external
	I1104 10:52:40.456650   37715 main.go:141] libmachine: (ha-931571) DBG | Using SSH private key: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571/id_rsa (-rw-------)
	I1104 10:52:40.456681   37715 main.go:141] libmachine: (ha-931571) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.67 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1104 10:52:40.456700   37715 main.go:141] libmachine: (ha-931571) DBG | About to run SSH command:
	I1104 10:52:40.456715   37715 main.go:141] libmachine: (ha-931571) DBG | exit 0
	I1104 10:52:40.580862   37715 main.go:141] libmachine: (ha-931571) DBG | SSH cmd err, output: <nil>: 
	I1104 10:52:40.581146   37715 main.go:141] libmachine: (ha-931571) KVM machine creation complete!
	I1104 10:52:40.581410   37715 main.go:141] libmachine: (ha-931571) Calling .GetConfigRaw
	I1104 10:52:40.581936   37715 main.go:141] libmachine: (ha-931571) Calling .DriverName
	I1104 10:52:40.582130   37715 main.go:141] libmachine: (ha-931571) Calling .DriverName
	I1104 10:52:40.582294   37715 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1104 10:52:40.582307   37715 main.go:141] libmachine: (ha-931571) Calling .GetState
	I1104 10:52:40.583398   37715 main.go:141] libmachine: Detecting operating system of created instance...
	I1104 10:52:40.583412   37715 main.go:141] libmachine: Waiting for SSH to be available...
	I1104 10:52:40.583418   37715 main.go:141] libmachine: Getting to WaitForSSH function...
	I1104 10:52:40.583425   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHHostname
	I1104 10:52:40.585558   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:40.585865   37715 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 10:52:40.585891   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:40.585991   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHPort
	I1104 10:52:40.586130   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 10:52:40.586272   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 10:52:40.586383   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHUsername
	I1104 10:52:40.586519   37715 main.go:141] libmachine: Using SSH client type: native
	I1104 10:52:40.586723   37715 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I1104 10:52:40.586734   37715 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1104 10:52:40.692229   37715 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1104 10:52:40.692248   37715 main.go:141] libmachine: Detecting the provisioner...
	I1104 10:52:40.692257   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHHostname
	I1104 10:52:40.695010   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:40.695388   37715 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 10:52:40.695411   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:40.695556   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHPort
	I1104 10:52:40.695751   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 10:52:40.695899   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 10:52:40.696052   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHUsername
	I1104 10:52:40.696188   37715 main.go:141] libmachine: Using SSH client type: native
	I1104 10:52:40.696868   37715 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I1104 10:52:40.696890   37715 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1104 10:52:40.801468   37715 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1104 10:52:40.801552   37715 main.go:141] libmachine: found compatible host: buildroot
	I1104 10:52:40.801563   37715 main.go:141] libmachine: Provisioning with buildroot...
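The provisioner detection above runs cat /etc/os-release over SSH and matches the ID field ("buildroot") against known provisioners. A small Go sketch of parsing that output follows; the helper name and the idea of keying only on ID are assumptions for illustration, not libmachine's exact logic.

package main

import (
	"bufio"
	"fmt"
	"strings"
)

// detectProvisioner extracts the ID value from /etc/os-release contents.
func detectProvisioner(osRelease string) string {
	vals := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(osRelease))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if !strings.Contains(line, "=") {
			continue
		}
		kv := strings.SplitN(line, "=", 2)
		vals[kv[0]] = strings.Trim(kv[1], `"`)
	}
	return vals["ID"]
}

func main() {
	sample := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\nPRETTY_NAME=\"Buildroot 2023.02.9\"\n"
	fmt.Println(detectProvisioner(sample)) // prints: buildroot
}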
	I1104 10:52:40.801571   37715 main.go:141] libmachine: (ha-931571) Calling .GetMachineName
	I1104 10:52:40.801814   37715 buildroot.go:166] provisioning hostname "ha-931571"
	I1104 10:52:40.801836   37715 main.go:141] libmachine: (ha-931571) Calling .GetMachineName
	I1104 10:52:40.801992   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHHostname
	I1104 10:52:40.804318   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:40.804694   37715 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 10:52:40.804723   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:40.804889   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHPort
	I1104 10:52:40.805051   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 10:52:40.805262   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 10:52:40.805439   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHUsername
	I1104 10:52:40.805644   37715 main.go:141] libmachine: Using SSH client type: native
	I1104 10:52:40.805826   37715 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I1104 10:52:40.805838   37715 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-931571 && echo "ha-931571" | sudo tee /etc/hostname
	I1104 10:52:40.921516   37715 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-931571
	
	I1104 10:52:40.921540   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHHostname
	I1104 10:52:40.924174   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:40.924514   37715 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 10:52:40.924541   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:40.924675   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHPort
	I1104 10:52:40.924825   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 10:52:40.924941   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 10:52:40.925052   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHUsername
	I1104 10:52:40.925210   37715 main.go:141] libmachine: Using SSH client type: native
	I1104 10:52:40.925423   37715 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I1104 10:52:40.925448   37715 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-931571' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-931571/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-931571' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1104 10:52:41.036770   37715 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1104 10:52:41.036799   37715 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19906-19898/.minikube CaCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19906-19898/.minikube}
	I1104 10:52:41.036830   37715 buildroot.go:174] setting up certificates
	I1104 10:52:41.036839   37715 provision.go:84] configureAuth start
	I1104 10:52:41.036848   37715 main.go:141] libmachine: (ha-931571) Calling .GetMachineName
	I1104 10:52:41.037164   37715 main.go:141] libmachine: (ha-931571) Calling .GetIP
	I1104 10:52:41.039662   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:41.040007   37715 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 10:52:41.040032   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:41.040164   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHHostname
	I1104 10:52:41.042288   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:41.042624   37715 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 10:52:41.042652   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:41.042756   37715 provision.go:143] copyHostCerts
	I1104 10:52:41.042779   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem
	I1104 10:52:41.042808   37715 exec_runner.go:144] found /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem, removing ...
	I1104 10:52:41.042823   37715 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem
	I1104 10:52:41.042880   37715 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem (1078 bytes)
	I1104 10:52:41.042955   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem
	I1104 10:52:41.042972   37715 exec_runner.go:144] found /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem, removing ...
	I1104 10:52:41.042979   37715 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem
	I1104 10:52:41.043001   37715 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem (1123 bytes)
	I1104 10:52:41.043042   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem
	I1104 10:52:41.043058   37715 exec_runner.go:144] found /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem, removing ...
	I1104 10:52:41.043064   37715 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem
	I1104 10:52:41.043084   37715 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem (1679 bytes)
	I1104 10:52:41.043133   37715 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem org=jenkins.ha-931571 san=[127.0.0.1 192.168.39.67 ha-931571 localhost minikube]
	I1104 10:52:41.275942   37715 provision.go:177] copyRemoteCerts
	I1104 10:52:41.275998   37715 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1104 10:52:41.276018   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHHostname
	I1104 10:52:41.278984   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:41.279300   37715 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 10:52:41.279324   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:41.279438   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHPort
	I1104 10:52:41.279611   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 10:52:41.279754   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHUsername
	I1104 10:52:41.279862   37715 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571/id_rsa Username:docker}
	I1104 10:52:41.362606   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1104 10:52:41.362673   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1104 10:52:41.384103   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1104 10:52:41.384170   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1104 10:52:41.405170   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1104 10:52:41.405259   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1104 10:52:41.426285   37715 provision.go:87] duration metric: took 389.43394ms to configureAuth
	I1104 10:52:41.426311   37715 buildroot.go:189] setting minikube options for container-runtime
	I1104 10:52:41.426499   37715 config.go:182] Loaded profile config "ha-931571": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1104 10:52:41.426580   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHHostname
	I1104 10:52:41.429219   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:41.429514   37715 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 10:52:41.429539   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:41.429751   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHPort
	I1104 10:52:41.429959   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 10:52:41.430107   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 10:52:41.430247   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHUsername
	I1104 10:52:41.430417   37715 main.go:141] libmachine: Using SSH client type: native
	I1104 10:52:41.430644   37715 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I1104 10:52:41.430666   37715 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1104 10:52:41.649262   37715 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1104 10:52:41.649291   37715 main.go:141] libmachine: Checking connection to Docker...
	I1104 10:52:41.649300   37715 main.go:141] libmachine: (ha-931571) Calling .GetURL
	I1104 10:52:41.650723   37715 main.go:141] libmachine: (ha-931571) DBG | Using libvirt version 6000000
	I1104 10:52:41.653499   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:41.653913   37715 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 10:52:41.653943   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:41.654070   37715 main.go:141] libmachine: Docker is up and running!
	I1104 10:52:41.654084   37715 main.go:141] libmachine: Reticulating splines...
	I1104 10:52:41.654091   37715 client.go:171] duration metric: took 20.198612513s to LocalClient.Create
	I1104 10:52:41.654124   37715 start.go:167] duration metric: took 20.198697894s to libmachine.API.Create "ha-931571"
	I1104 10:52:41.654168   37715 start.go:293] postStartSetup for "ha-931571" (driver="kvm2")
	I1104 10:52:41.654182   37715 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1104 10:52:41.654199   37715 main.go:141] libmachine: (ha-931571) Calling .DriverName
	I1104 10:52:41.654448   37715 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1104 10:52:41.654477   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHHostname
	I1104 10:52:41.656689   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:41.657007   37715 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 10:52:41.657028   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:41.657279   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHPort
	I1104 10:52:41.657484   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 10:52:41.657648   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHUsername
	I1104 10:52:41.657776   37715 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571/id_rsa Username:docker}
	I1104 10:52:41.738934   37715 ssh_runner.go:195] Run: cat /etc/os-release
	I1104 10:52:41.742902   37715 info.go:137] Remote host: Buildroot 2023.02.9
	I1104 10:52:41.742925   37715 filesync.go:126] Scanning /home/jenkins/minikube-integration/19906-19898/.minikube/addons for local assets ...
	I1104 10:52:41.742997   37715 filesync.go:126] Scanning /home/jenkins/minikube-integration/19906-19898/.minikube/files for local assets ...
	I1104 10:52:41.743084   37715 filesync.go:149] local asset: /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem -> 272182.pem in /etc/ssl/certs
	I1104 10:52:41.743095   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem -> /etc/ssl/certs/272182.pem
	I1104 10:52:41.743212   37715 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1104 10:52:41.752124   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem --> /etc/ssl/certs/272182.pem (1708 bytes)
	I1104 10:52:41.774335   37715 start.go:296] duration metric: took 120.149038ms for postStartSetup
	I1104 10:52:41.774411   37715 main.go:141] libmachine: (ha-931571) Calling .GetConfigRaw
	I1104 10:52:41.775008   37715 main.go:141] libmachine: (ha-931571) Calling .GetIP
	I1104 10:52:41.777422   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:41.777754   37715 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 10:52:41.777776   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:41.778012   37715 profile.go:143] Saving config to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/config.json ...
	I1104 10:52:41.778186   37715 start.go:128] duration metric: took 20.340838176s to createHost
	I1104 10:52:41.778221   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHHostname
	I1104 10:52:41.780525   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:41.780784   37715 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 10:52:41.780805   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:41.780933   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHPort
	I1104 10:52:41.781101   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 10:52:41.781264   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 10:52:41.781386   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHUsername
	I1104 10:52:41.781512   37715 main.go:141] libmachine: Using SSH client type: native
	I1104 10:52:41.781672   37715 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I1104 10:52:41.781683   37715 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1104 10:52:41.885593   37715 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730717561.859087710
	
	I1104 10:52:41.885616   37715 fix.go:216] guest clock: 1730717561.859087710
	I1104 10:52:41.885624   37715 fix.go:229] Guest: 2024-11-04 10:52:41.85908771 +0000 UTC Remote: 2024-11-04 10:52:41.778208592 +0000 UTC m=+20.449726833 (delta=80.879118ms)
	I1104 10:52:41.885647   37715 fix.go:200] guest clock delta is within tolerance: 80.879118ms
	I1104 10:52:41.885653   37715 start.go:83] releasing machines lock for "ha-931571", held for 20.448400301s
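The fix.go lines above read the guest clock over SSH (date +%s.%N), compare it to the host clock, and accept the ~80ms delta because it falls inside the allowed drift. A rough Go illustration of that comparison; the 2s tolerance here is an example value only, not necessarily the threshold minikube applies.

package main

import (
	"fmt"
	"math"
	"time"
)

// clockDeltaOK returns the guest-host clock delta and whether its absolute
// value is within the given tolerance.
func clockDeltaOK(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	return delta, math.Abs(float64(delta)) <= float64(tolerance)
}

func main() {
	host := time.Now()
	guest := host.Add(80 * time.Millisecond) // roughly the delta seen in the log
	delta, ok := clockDeltaOK(guest, host, 2*time.Second)
	fmt.Printf("guest clock delta %v within tolerance: %v\n", delta, ok)
}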
	I1104 10:52:41.885675   37715 main.go:141] libmachine: (ha-931571) Calling .DriverName
	I1104 10:52:41.885953   37715 main.go:141] libmachine: (ha-931571) Calling .GetIP
	I1104 10:52:41.888489   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:41.888887   37715 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 10:52:41.888909   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:41.889131   37715 main.go:141] libmachine: (ha-931571) Calling .DriverName
	I1104 10:52:41.889647   37715 main.go:141] libmachine: (ha-931571) Calling .DriverName
	I1104 10:52:41.889819   37715 main.go:141] libmachine: (ha-931571) Calling .DriverName
	I1104 10:52:41.889899   37715 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1104 10:52:41.889945   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHHostname
	I1104 10:52:41.890032   37715 ssh_runner.go:195] Run: cat /version.json
	I1104 10:52:41.890047   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHHostname
	I1104 10:52:41.892621   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:41.893038   37715 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 10:52:41.893065   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:41.893082   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:41.893208   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHPort
	I1104 10:52:41.893350   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 10:52:41.893498   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHUsername
	I1104 10:52:41.893582   37715 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 10:52:41.893589   37715 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571/id_rsa Username:docker}
	I1104 10:52:41.893613   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:41.893793   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHPort
	I1104 10:52:41.893936   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 10:52:41.894105   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHUsername
	I1104 10:52:41.894263   37715 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571/id_rsa Username:docker}
	I1104 10:52:41.988130   37715 ssh_runner.go:195] Run: systemctl --version
	I1104 10:52:41.993656   37715 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1104 10:52:42.142615   37715 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1104 10:52:42.148950   37715 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1104 10:52:42.149023   37715 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1104 10:52:42.163368   37715 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1104 10:52:42.163399   37715 start.go:495] detecting cgroup driver to use...
	I1104 10:52:42.163459   37715 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1104 10:52:42.178011   37715 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1104 10:52:42.190311   37715 docker.go:217] disabling cri-docker service (if available) ...
	I1104 10:52:42.190363   37715 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1104 10:52:42.202494   37715 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1104 10:52:42.215234   37715 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1104 10:52:42.322933   37715 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1104 10:52:42.465367   37715 docker.go:233] disabling docker service ...
	I1104 10:52:42.465435   37715 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1104 10:52:42.478799   37715 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1104 10:52:42.490748   37715 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1104 10:52:42.621810   37715 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1104 10:52:42.721588   37715 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1104 10:52:42.734181   37715 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1104 10:52:42.750278   37715 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1104 10:52:42.750346   37715 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 10:52:42.759509   37715 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1104 10:52:42.759569   37715 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 10:52:42.768912   37715 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 10:52:42.778275   37715 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 10:52:42.791011   37715 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1104 10:52:42.801155   37715 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 10:52:42.810365   37715 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 10:52:42.825204   37715 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 10:52:42.834333   37715 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1104 10:52:42.842438   37715 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1104 10:52:42.842479   37715 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1104 10:52:42.853336   37715 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1104 10:52:42.861893   37715 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1104 10:52:42.966759   37715 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1104 10:52:43.051148   37715 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1104 10:52:43.051245   37715 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1104 10:52:43.055605   37715 start.go:563] Will wait 60s for crictl version
	I1104 10:52:43.055660   37715 ssh_runner.go:195] Run: which crictl
	I1104 10:52:43.058970   37715 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1104 10:52:43.092206   37715 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1104 10:52:43.092300   37715 ssh_runner.go:195] Run: crio --version
	I1104 10:52:43.119216   37715 ssh_runner.go:195] Run: crio --version
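After crio is restarted, the log waits up to 60s for /var/run/crio/crio.sock to exist and for crictl to answer before continuing. A compact Go sketch of that kind of bounded socket wait; the poll interval is an arbitrary choice for the example.

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls for a path until it exists or the timeout elapses,
// mirroring the "Will wait 60s for socket path" step in spirit.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
		os.Exit(1)
	}
	fmt.Println("crio socket is ready")
}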
	I1104 10:52:43.149822   37715 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1104 10:52:43.150920   37715 main.go:141] libmachine: (ha-931571) Calling .GetIP
	I1104 10:52:43.153539   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:43.153876   37715 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 10:52:43.153903   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:43.154148   37715 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1104 10:52:43.157775   37715 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1104 10:52:43.169819   37715 kubeadm.go:883] updating cluster {Name:ha-931571 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-931571 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.67 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1104 10:52:43.169924   37715 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1104 10:52:43.169983   37715 ssh_runner.go:195] Run: sudo crictl images --output json
	I1104 10:52:43.198885   37715 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1104 10:52:43.198949   37715 ssh_runner.go:195] Run: which lz4
	I1104 10:52:43.202346   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1104 10:52:43.202439   37715 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1104 10:52:43.206081   37715 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1104 10:52:43.206107   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1104 10:52:44.348916   37715 crio.go:462] duration metric: took 1.146501805s to copy over tarball
	I1104 10:52:44.348982   37715 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1104 10:52:46.326500   37715 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.97746722s)
	I1104 10:52:46.326527   37715 crio.go:469] duration metric: took 1.977583171s to extract the tarball
	I1104 10:52:46.326535   37715 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1104 10:52:46.361867   37715 ssh_runner.go:195] Run: sudo crictl images --output json
	I1104 10:52:46.402887   37715 crio.go:514] all images are preloaded for cri-o runtime.
	I1104 10:52:46.402909   37715 cache_images.go:84] Images are preloaded, skipping loading
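The preload step just above first checks whether /preloaded.tar.lz4 already exists on the guest, copies the ~392MB tarball over SSH when it does not, extracts it under /var, and then re-lists the container images. The sketch below shows one way to express that check-then-copy-then-extract flow with golang.org/x/crypto/ssh; it is illustrative only (minikube's ssh_runner uses its own helpers), and the key path, user, address, and file names are simply taken from the log.

package main

import (
	"fmt"
	"io"
	"os"

	"golang.org/x/crypto/ssh"
)

// runRemote runs a single command on the guest and returns its error.
func runRemote(client *ssh.Client, cmd string) error {
	sess, err := client.NewSession()
	if err != nil {
		return err
	}
	defer sess.Close()
	return sess.Run(cmd)
}

// copyRemote streams a local file to a remote path by piping it into cat.
func copyRemote(client *ssh.Client, local, remote string) error {
	sess, err := client.NewSession()
	if err != nil {
		return err
	}
	defer sess.Close()
	src, err := os.Open(local)
	if err != nil {
		return err
	}
	defer src.Close()
	stdin, err := sess.StdinPipe()
	if err != nil {
		return err
	}
	if err := sess.Start(fmt.Sprintf("cat > %s", remote)); err != nil {
		return err
	}
	if _, err := io.Copy(stdin, src); err != nil {
		return err
	}
	stdin.Close()
	return sess.Wait()
}

func main() {
	keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches the StrictHostKeyChecking=no flags in the log
	}
	client, err := ssh.Dial("tcp", "192.168.39.67:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	// Only copy and extract the preload if it is not already on the guest.
	if runRemote(client, `stat -c "%s %y" /preloaded.tar.lz4`) != nil {
		if err := copyRemote(client, "preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4", "/preloaded.tar.lz4"); err != nil {
			panic(err)
		}
		if err := runRemote(client, "sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4"); err != nil {
			panic(err)
		}
	}
}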
	I1104 10:52:46.402919   37715 kubeadm.go:934] updating node { 192.168.39.67 8443 v1.31.2 crio true true} ...
	I1104 10:52:46.403024   37715 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-931571 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.67
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-931571 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1104 10:52:46.403102   37715 ssh_runner.go:195] Run: crio config
	I1104 10:52:46.448114   37715 cni.go:84] Creating CNI manager for ""
	I1104 10:52:46.448134   37715 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1104 10:52:46.448143   37715 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1104 10:52:46.448161   37715 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.67 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-931571 NodeName:ha-931571 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.67"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.67 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1104 10:52:46.448276   37715 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.67
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-931571"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.67"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.67"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
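The kubeadm config printed above is produced by substituting the values from the kubeadm.go:189 options (advertise address, port, Kubernetes version, pod and service CIDRs, and so on) into a YAML template. A compact text/template illustration of that generation step follows; the template fragment and the parameter struct are simplified for the example and are not minikube's real template.

package main

import (
	"os"
	"text/template"
)

// kubeadmParams carries the handful of values substituted into the
// fragment below; field names are illustrative.
type kubeadmParams struct {
	AdvertiseAddress  string
	BindPort          int
	KubernetesVersion string
	PodSubnet         string
	ServiceSubnet     string
}

const fragment = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
kubernetesVersion: {{.KubernetesVersion}}
networking:
  dnsDomain: cluster.local
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(fragment))
	p := kubeadmParams{
		AdvertiseAddress:  "192.168.39.67",
		BindPort:          8443,
		KubernetesVersion: "v1.31.2",
		PodSubnet:         "10.244.0.0/16",
		ServiceSubnet:     "10.96.0.0/12",
	}
	if err := t.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}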
	
	I1104 10:52:46.448297   37715 kube-vip.go:115] generating kube-vip config ...
	I1104 10:52:46.448333   37715 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1104 10:52:46.464928   37715 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1104 10:52:46.465022   37715 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.5
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I1104 10:52:46.465069   37715 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1104 10:52:46.473864   37715 binaries.go:44] Found k8s binaries, skipping transfer
	I1104 10:52:46.473931   37715 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1104 10:52:46.482366   37715 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I1104 10:52:46.497386   37715 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1104 10:52:46.512146   37715 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2286 bytes)
	I1104 10:52:46.528415   37715 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I1104 10:52:46.544798   37715 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1104 10:52:46.548212   37715 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1104 10:52:46.559488   37715 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1104 10:52:46.692494   37715 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1104 10:52:46.708806   37715 certs.go:68] Setting up /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571 for IP: 192.168.39.67
	I1104 10:52:46.708830   37715 certs.go:194] generating shared ca certs ...
	I1104 10:52:46.708849   37715 certs.go:226] acquiring lock for ca certs: {Name:mk4fb0469da6697654d0290fd25edddd463f47c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 10:52:46.709027   37715 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.key
	I1104 10:52:46.709089   37715 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.key
	I1104 10:52:46.709102   37715 certs.go:256] generating profile certs ...
	I1104 10:52:46.709156   37715 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/client.key
	I1104 10:52:46.709175   37715 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/client.crt with IP's: []
	I1104 10:52:46.835505   37715 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/client.crt ...
	I1104 10:52:46.835534   37715 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/client.crt: {Name:mk61f73d1cdbaea56c4e3a41bf4d8a8e998c4601 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 10:52:46.835713   37715 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/client.key ...
	I1104 10:52:46.835728   37715 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/client.key: {Name:mk3a1e70b98b06ffcf80cad3978790ca4b634404 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 10:52:46.835832   37715 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.key.db135e66
	I1104 10:52:46.835851   37715 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.crt.db135e66 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.67 192.168.39.254]
	I1104 10:52:46.955700   37715 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.crt.db135e66 ...
	I1104 10:52:46.955730   37715 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.crt.db135e66: {Name:mk7e52761b5f3a6915e1cf90cd8ace0ff40a1698 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 10:52:46.955903   37715 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.key.db135e66 ...
	I1104 10:52:46.955919   37715 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.key.db135e66: {Name:mk473e5ea437641c8d6be7c8c672068a3ffc879a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 10:52:46.956011   37715 certs.go:381] copying /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.crt.db135e66 -> /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.crt
	I1104 10:52:46.956221   37715 certs.go:385] copying /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.key.db135e66 -> /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.key
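The apiserver certificate generated and copied into place above is signed for the in-cluster Service VIP (10.96.0.1), the node IP (192.168.39.67) and the kube-vip HA address (192.168.39.254). A hedged way to confirm those SANs, using the profile path from the log:

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.crt \
      | grep -A1 'Subject Alternative Name'
    # should list 10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.39.67 and 192.168.39.254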
	I1104 10:52:46.956356   37715 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/proxy-client.key
	I1104 10:52:46.956379   37715 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/proxy-client.crt with IP's: []
	I1104 10:52:47.101236   37715 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/proxy-client.crt ...
	I1104 10:52:47.101269   37715 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/proxy-client.crt: {Name:mk407ac3d668cf899822db436da4d41618f60b31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 10:52:47.101451   37715 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/proxy-client.key ...
	I1104 10:52:47.101466   37715 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/proxy-client.key: {Name:mk67291900fae9d34a6dbb5f9ac6f9eff95090cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 10:52:47.101560   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1104 10:52:47.101583   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1104 10:52:47.101600   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1104 10:52:47.101617   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1104 10:52:47.101636   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1104 10:52:47.101656   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1104 10:52:47.101675   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1104 10:52:47.101692   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1104 10:52:47.101753   37715 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218.pem (1338 bytes)
	W1104 10:52:47.101799   37715 certs.go:480] ignoring /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218_empty.pem, impossibly tiny 0 bytes
	I1104 10:52:47.101812   37715 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem (1675 bytes)
	I1104 10:52:47.101846   37715 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem (1078 bytes)
	I1104 10:52:47.101884   37715 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem (1123 bytes)
	I1104 10:52:47.101916   37715 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem (1679 bytes)
	I1104 10:52:47.101975   37715 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem (1708 bytes)
	I1104 10:52:47.102014   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218.pem -> /usr/share/ca-certificates/27218.pem
	I1104 10:52:47.102035   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem -> /usr/share/ca-certificates/272182.pem
	I1104 10:52:47.102054   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1104 10:52:47.102621   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1104 10:52:47.126053   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1104 10:52:47.148030   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1104 10:52:47.169097   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1104 10:52:47.190790   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1104 10:52:47.211485   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1104 10:52:47.233064   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1104 10:52:47.254438   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1104 10:52:47.275584   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218.pem --> /usr/share/ca-certificates/27218.pem (1338 bytes)
	I1104 10:52:47.296496   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem --> /usr/share/ca-certificates/272182.pem (1708 bytes)
	I1104 10:52:47.316993   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1104 10:52:47.338085   37715 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1104 10:52:47.352830   37715 ssh_runner.go:195] Run: openssl version
	I1104 10:52:47.357992   37715 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/27218.pem && ln -fs /usr/share/ca-certificates/27218.pem /etc/ssl/certs/27218.pem"
	I1104 10:52:47.367171   37715 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/27218.pem
	I1104 10:52:47.371139   37715 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  4 10:49 /usr/share/ca-certificates/27218.pem
	I1104 10:52:47.371175   37715 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/27218.pem
	I1104 10:52:47.376056   37715 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/27218.pem /etc/ssl/certs/51391683.0"
	I1104 10:52:47.385217   37715 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/272182.pem && ln -fs /usr/share/ca-certificates/272182.pem /etc/ssl/certs/272182.pem"
	I1104 10:52:47.394305   37715 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/272182.pem
	I1104 10:52:47.398184   37715 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  4 10:49 /usr/share/ca-certificates/272182.pem
	I1104 10:52:47.398229   37715 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/272182.pem
	I1104 10:52:47.403221   37715 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/272182.pem /etc/ssl/certs/3ec20f2e.0"
	I1104 10:52:47.412407   37715 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1104 10:52:47.421725   37715 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1104 10:52:47.425673   37715 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  4 10:38 /usr/share/ca-certificates/minikubeCA.pem
	I1104 10:52:47.425724   37715 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1104 10:52:47.430774   37715 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
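The openssl/ln pairs above follow the c_rehash convention: TLS clients look up CAs in /etc/ssl/certs by subject-name hash, so each installed PEM gets a "<hash>.0" symlink (b5213941.0 for minikubeCA.pem in this run). An equivalent sketch for one certificate:

    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"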
	I1104 10:52:47.442891   37715 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1104 10:52:47.448916   37715 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1104 10:52:47.448963   37715 kubeadm.go:392] StartCluster: {Name:ha-931571 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-931571 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.67 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1104 10:52:47.449026   37715 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1104 10:52:47.449081   37715 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1104 10:52:47.493313   37715 cri.go:89] found id: ""
	I1104 10:52:47.493388   37715 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1104 10:52:47.505853   37715 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1104 10:52:47.514358   37715 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1104 10:52:47.522614   37715 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1104 10:52:47.522633   37715 kubeadm.go:157] found existing configuration files:
	
	I1104 10:52:47.522685   37715 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1104 10:52:47.530458   37715 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1104 10:52:47.530497   37715 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1104 10:52:47.538766   37715 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1104 10:52:47.546614   37715 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1104 10:52:47.546656   37715 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1104 10:52:47.554873   37715 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1104 10:52:47.562800   37715 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1104 10:52:47.562860   37715 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1104 10:52:47.571095   37715 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1104 10:52:47.578946   37715 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1104 10:52:47.578986   37715 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1104 10:52:47.587002   37715 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1104 10:52:47.774250   37715 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1104 10:52:59.162857   37715 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1104 10:52:59.162909   37715 kubeadm.go:310] [preflight] Running pre-flight checks
	I1104 10:52:59.162992   37715 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1104 10:52:59.163126   37715 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1104 10:52:59.163235   37715 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1104 10:52:59.163321   37715 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1104 10:52:59.164884   37715 out.go:235]   - Generating certificates and keys ...
	I1104 10:52:59.164965   37715 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1104 10:52:59.165051   37715 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1104 10:52:59.165154   37715 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1104 10:52:59.165262   37715 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1104 10:52:59.165355   37715 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1104 10:52:59.165433   37715 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1104 10:52:59.165512   37715 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1104 10:52:59.165644   37715 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-931571 localhost] and IPs [192.168.39.67 127.0.0.1 ::1]
	I1104 10:52:59.165719   37715 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1104 10:52:59.165854   37715 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-931571 localhost] and IPs [192.168.39.67 127.0.0.1 ::1]
	I1104 10:52:59.165939   37715 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1104 10:52:59.166039   37715 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1104 10:52:59.166120   37715 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1104 10:52:59.166198   37715 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1104 10:52:59.166277   37715 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1104 10:52:59.166352   37715 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1104 10:52:59.166437   37715 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1104 10:52:59.166524   37715 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1104 10:52:59.166602   37715 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1104 10:52:59.166715   37715 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1104 10:52:59.166813   37715 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1104 10:52:59.168314   37715 out.go:235]   - Booting up control plane ...
	I1104 10:52:59.168430   37715 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1104 10:52:59.168528   37715 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1104 10:52:59.168619   37715 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1104 10:52:59.168745   37715 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1104 10:52:59.168864   37715 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1104 10:52:59.168907   37715 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1104 10:52:59.169020   37715 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1104 10:52:59.169142   37715 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1104 10:52:59.169244   37715 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.501850183s
	I1104 10:52:59.169346   37715 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1104 10:52:59.169435   37715 kubeadm.go:310] [api-check] The API server is healthy after 5.721436597s
	I1104 10:52:59.169568   37715 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1104 10:52:59.169699   37715 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1104 10:52:59.169786   37715 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1104 10:52:59.169979   37715 kubeadm.go:310] [mark-control-plane] Marking the node ha-931571 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1104 10:52:59.170060   37715 kubeadm.go:310] [bootstrap-token] Using token: x3krps.xtycqe6w7psx61o7
	I1104 10:52:59.171278   37715 out.go:235]   - Configuring RBAC rules ...
	I1104 10:52:59.171366   37715 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1104 10:52:59.171442   37715 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1104 10:52:59.171566   37715 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1104 10:52:59.171689   37715 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1104 10:52:59.171828   37715 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1104 10:52:59.171935   37715 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1104 10:52:59.172086   37715 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1104 10:52:59.172158   37715 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1104 10:52:59.172220   37715 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1104 10:52:59.172232   37715 kubeadm.go:310] 
	I1104 10:52:59.172322   37715 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1104 10:52:59.172332   37715 kubeadm.go:310] 
	I1104 10:52:59.172461   37715 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1104 10:52:59.172471   37715 kubeadm.go:310] 
	I1104 10:52:59.172512   37715 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1104 10:52:59.172591   37715 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1104 10:52:59.172657   37715 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1104 10:52:59.172671   37715 kubeadm.go:310] 
	I1104 10:52:59.172727   37715 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1104 10:52:59.172733   37715 kubeadm.go:310] 
	I1104 10:52:59.172772   37715 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1104 10:52:59.172780   37715 kubeadm.go:310] 
	I1104 10:52:59.172823   37715 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1104 10:52:59.172919   37715 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1104 10:52:59.173013   37715 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1104 10:52:59.173027   37715 kubeadm.go:310] 
	I1104 10:52:59.173126   37715 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1104 10:52:59.173242   37715 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1104 10:52:59.173250   37715 kubeadm.go:310] 
	I1104 10:52:59.173349   37715 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token x3krps.xtycqe6w7psx61o7 \
	I1104 10:52:59.173475   37715 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:95833ff41306ac800b975e28f2da3aedc13f83db6865dfb0ce3f10d781d75fd9 \
	I1104 10:52:59.173512   37715 kubeadm.go:310] 	--control-plane 
	I1104 10:52:59.173521   37715 kubeadm.go:310] 
	I1104 10:52:59.173615   37715 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1104 10:52:59.173622   37715 kubeadm.go:310] 
	I1104 10:52:59.173728   37715 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token x3krps.xtycqe6w7psx61o7 \
	I1104 10:52:59.173851   37715 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:95833ff41306ac800b975e28f2da3aedc13f83db6865dfb0ce3f10d781d75fd9 
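The --discovery-token-ca-cert-hash printed above is the SHA-256 of the cluster CA's public key. A sketch of recomputing it on the control-plane node, assuming the certificateDir used earlier in this run and an RSA CA key (this is the form given in the kubeadm documentation):

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'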
	I1104 10:52:59.173864   37715 cni.go:84] Creating CNI manager for ""
	I1104 10:52:59.173870   37715 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1104 10:52:59.175270   37715 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1104 10:52:59.176515   37715 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1104 10:52:59.181311   37715 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.2/kubectl ...
	I1104 10:52:59.181330   37715 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1104 10:52:59.199374   37715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
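With a single node detected, minikube applies its kindnet CNI manifest through the bundled kubectl. A hedged follow-up check from inside the VM, assuming the DaemonSet keeps the upstream app=kindnet label:

    sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system get pods -l app=kindnet -o wide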
	I1104 10:52:59.595605   37715 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1104 10:52:59.595735   37715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1104 10:52:59.595746   37715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-931571 minikube.k8s.io/updated_at=2024_11_04T10_52_59_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=c03dd974d73f8853a1a57928c124797a5ae24dc4 minikube.k8s.io/name=ha-931571 minikube.k8s.io/primary=true
	I1104 10:52:59.607016   37715 ops.go:34] apiserver oom_adj: -16
	I1104 10:52:59.726325   37715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1104 10:53:00.227237   37715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1104 10:53:00.727360   37715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1104 10:53:01.226637   37715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1104 10:53:01.727035   37715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1104 10:53:02.226405   37715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1104 10:53:02.727470   37715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1104 10:53:03.227029   37715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1104 10:53:03.337760   37715 kubeadm.go:1113] duration metric: took 3.742086638s to wait for elevateKubeSystemPrivileges
	I1104 10:53:03.337799   37715 kubeadm.go:394] duration metric: took 15.888837987s to StartCluster
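The repeated "get sa default" calls above are a poll: minikube retries roughly every half second until the default ServiceAccount exists before treating the kube-system privilege elevation as complete. An equivalent shell loop, for illustration only:

    until sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
        get sa default >/dev/null 2>&1; do
      sleep 0.5
    done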
	I1104 10:53:03.337821   37715 settings.go:142] acquiring lock: {Name:mk5550833c1f1a4ab4fbb2bf42ff8bd7f6341220 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 10:53:03.337905   37715 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19906-19898/kubeconfig
	I1104 10:53:03.338737   37715 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19906-19898/kubeconfig: {Name:mk164657c178e6abe086acb5c1cadb969cb2b0cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 10:53:03.338982   37715 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1104 10:53:03.338988   37715 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.67 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1104 10:53:03.339014   37715 start.go:241] waiting for startup goroutines ...
	I1104 10:53:03.339062   37715 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1104 10:53:03.339167   37715 addons.go:69] Setting default-storageclass=true in profile "ha-931571"
	I1104 10:53:03.339173   37715 addons.go:69] Setting storage-provisioner=true in profile "ha-931571"
	I1104 10:53:03.339185   37715 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-931571"
	I1104 10:53:03.339200   37715 addons.go:234] Setting addon storage-provisioner=true in "ha-931571"
	I1104 10:53:03.339229   37715 config.go:182] Loaded profile config "ha-931571": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1104 10:53:03.339239   37715 host.go:66] Checking if "ha-931571" exists ...
	I1104 10:53:03.339632   37715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 10:53:03.339672   37715 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 10:53:03.339677   37715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 10:53:03.339713   37715 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 10:53:03.360893   37715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40211
	I1104 10:53:03.360926   37715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37163
	I1104 10:53:03.361436   37715 main.go:141] libmachine: () Calling .GetVersion
	I1104 10:53:03.361473   37715 main.go:141] libmachine: () Calling .GetVersion
	I1104 10:53:03.361990   37715 main.go:141] libmachine: Using API Version  1
	I1104 10:53:03.362007   37715 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 10:53:03.362132   37715 main.go:141] libmachine: Using API Version  1
	I1104 10:53:03.362158   37715 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 10:53:03.362362   37715 main.go:141] libmachine: () Calling .GetMachineName
	I1104 10:53:03.362495   37715 main.go:141] libmachine: () Calling .GetMachineName
	I1104 10:53:03.362668   37715 main.go:141] libmachine: (ha-931571) Calling .GetState
	I1104 10:53:03.362891   37715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 10:53:03.362932   37715 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 10:53:03.365045   37715 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19906-19898/kubeconfig
	I1104 10:53:03.365435   37715 kapi.go:59] client config for ha-931571: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/client.crt", KeyFile:"/home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/client.key", CAFile:"/home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2437da0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1104 10:53:03.365987   37715 cert_rotation.go:140] Starting client certificate rotation controller
	I1104 10:53:03.366272   37715 addons.go:234] Setting addon default-storageclass=true in "ha-931571"
	I1104 10:53:03.366318   37715 host.go:66] Checking if "ha-931571" exists ...
	I1104 10:53:03.366699   37715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 10:53:03.366738   37715 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 10:53:03.381218   37715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35443
	I1104 10:53:03.381322   37715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38027
	I1104 10:53:03.381713   37715 main.go:141] libmachine: () Calling .GetVersion
	I1104 10:53:03.381719   37715 main.go:141] libmachine: () Calling .GetVersion
	I1104 10:53:03.382205   37715 main.go:141] libmachine: Using API Version  1
	I1104 10:53:03.382227   37715 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 10:53:03.382357   37715 main.go:141] libmachine: Using API Version  1
	I1104 10:53:03.382372   37715 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 10:53:03.382534   37715 main.go:141] libmachine: () Calling .GetMachineName
	I1104 10:53:03.383016   37715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 10:53:03.383048   37715 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 10:53:03.383535   37715 main.go:141] libmachine: () Calling .GetMachineName
	I1104 10:53:03.383708   37715 main.go:141] libmachine: (ha-931571) Calling .GetState
	I1104 10:53:03.385592   37715 main.go:141] libmachine: (ha-931571) Calling .DriverName
	I1104 10:53:03.387622   37715 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1104 10:53:03.388963   37715 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1104 10:53:03.388985   37715 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1104 10:53:03.389004   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHHostname
	I1104 10:53:03.392017   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:53:03.392435   37715 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 10:53:03.392480   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:53:03.392570   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHPort
	I1104 10:53:03.392752   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 10:53:03.392874   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHUsername
	I1104 10:53:03.393020   37715 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571/id_rsa Username:docker}
	I1104 10:53:03.398269   37715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34587
	I1104 10:53:03.398748   37715 main.go:141] libmachine: () Calling .GetVersion
	I1104 10:53:03.399262   37715 main.go:141] libmachine: Using API Version  1
	I1104 10:53:03.399294   37715 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 10:53:03.399614   37715 main.go:141] libmachine: () Calling .GetMachineName
	I1104 10:53:03.399786   37715 main.go:141] libmachine: (ha-931571) Calling .GetState
	I1104 10:53:03.401287   37715 main.go:141] libmachine: (ha-931571) Calling .DriverName
	I1104 10:53:03.401486   37715 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1104 10:53:03.401502   37715 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1104 10:53:03.401529   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHHostname
	I1104 10:53:03.404218   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:53:03.404573   37715 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 10:53:03.404595   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:53:03.404677   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHPort
	I1104 10:53:03.404848   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 10:53:03.404981   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHUsername
	I1104 10:53:03.405135   37715 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571/id_rsa Username:docker}
	I1104 10:53:03.489842   37715 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1104 10:53:03.554612   37715 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1104 10:53:03.583845   37715 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1104 10:53:03.952361   37715 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
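The sed pipeline above injects a hosts block into the coredns ConfigMap so that host.minikube.internal resolves to the host-only gateway (192.168.39.1 in this run), and adds the log plugin ahead of errors. A sketch of reading the rewritten Corefile back, assuming the kubectl context matches the profile name:

    # The injected stanza looks like:
    #   hosts {
    #      192.168.39.1 host.minikube.internal
    #      fallthrough
    #   }
    kubectl --context ha-931571 -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'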
	I1104 10:53:03.952436   37715 main.go:141] libmachine: Making call to close driver server
	I1104 10:53:03.952460   37715 main.go:141] libmachine: (ha-931571) Calling .Close
	I1104 10:53:03.952742   37715 main.go:141] libmachine: Successfully made call to close driver server
	I1104 10:53:03.952762   37715 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 10:53:03.952762   37715 main.go:141] libmachine: (ha-931571) DBG | Closing plugin on server side
	I1104 10:53:03.952772   37715 main.go:141] libmachine: Making call to close driver server
	I1104 10:53:03.952781   37715 main.go:141] libmachine: (ha-931571) Calling .Close
	I1104 10:53:03.952966   37715 main.go:141] libmachine: Successfully made call to close driver server
	I1104 10:53:03.952981   37715 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 10:53:03.953045   37715 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1104 10:53:03.953065   37715 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1104 10:53:03.953164   37715 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I1104 10:53:03.953175   37715 round_trippers.go:469] Request Headers:
	I1104 10:53:03.953187   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:53:03.953195   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:53:03.960797   37715 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1104 10:53:03.961342   37715 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1104 10:53:03.961355   37715 round_trippers.go:469] Request Headers:
	I1104 10:53:03.961363   37715 round_trippers.go:473]     Content-Type: application/json
	I1104 10:53:03.961367   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:53:03.961369   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:53:03.963493   37715 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1104 10:53:03.963694   37715 main.go:141] libmachine: Making call to close driver server
	I1104 10:53:03.963715   37715 main.go:141] libmachine: (ha-931571) Calling .Close
	I1104 10:53:03.964004   37715 main.go:141] libmachine: Successfully made call to close driver server
	I1104 10:53:03.964021   37715 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 10:53:03.964021   37715 main.go:141] libmachine: (ha-931571) DBG | Closing plugin on server side
	I1104 10:53:04.222705   37715 main.go:141] libmachine: Making call to close driver server
	I1104 10:53:04.222735   37715 main.go:141] libmachine: (ha-931571) Calling .Close
	I1104 10:53:04.223063   37715 main.go:141] libmachine: (ha-931571) DBG | Closing plugin on server side
	I1104 10:53:04.223090   37715 main.go:141] libmachine: Successfully made call to close driver server
	I1104 10:53:04.223120   37715 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 10:53:04.223137   37715 main.go:141] libmachine: Making call to close driver server
	I1104 10:53:04.223149   37715 main.go:141] libmachine: (ha-931571) Calling .Close
	I1104 10:53:04.223361   37715 main.go:141] libmachine: Successfully made call to close driver server
	I1104 10:53:04.223375   37715 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 10:53:04.225261   37715 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I1104 10:53:04.226730   37715 addons.go:510] duration metric: took 887.697522ms for enable addons: enabled=[default-storageclass storage-provisioner]
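Two addons are enabled on the new control plane at this point. Quick sanity checks, assuming the standard minikube object names (a storage-provisioner pod in kube-system, a StorageClass named standard, and a kubectl context equal to the profile name):

    kubectl --context ha-931571 -n kube-system get pod storage-provisioner
    kubectl --context ha-931571 get storageclass standard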
	I1104 10:53:04.226762   37715 start.go:246] waiting for cluster config update ...
	I1104 10:53:04.226778   37715 start.go:255] writing updated cluster config ...
	I1104 10:53:04.228532   37715 out.go:201] 
	I1104 10:53:04.229911   37715 config.go:182] Loaded profile config "ha-931571": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1104 10:53:04.229982   37715 profile.go:143] Saving config to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/config.json ...
	I1104 10:53:04.231623   37715 out.go:177] * Starting "ha-931571-m02" control-plane node in "ha-931571" cluster
	I1104 10:53:04.233345   37715 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1104 10:53:04.233368   37715 cache.go:56] Caching tarball of preloaded images
	I1104 10:53:04.233465   37715 preload.go:172] Found /home/jenkins/minikube-integration/19906-19898/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1104 10:53:04.233476   37715 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1104 10:53:04.233547   37715 profile.go:143] Saving config to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/config.json ...
	I1104 10:53:04.233880   37715 start.go:360] acquireMachinesLock for ha-931571-m02: {Name:mka4ed965095cfb58c9b0f5916edff39b4236a2f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1104 10:53:04.233922   37715 start.go:364] duration metric: took 22.549µs to acquireMachinesLock for "ha-931571-m02"
	I1104 10:53:04.233935   37715 start.go:93] Provisioning new machine with config: &{Name:ha-931571 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-931571 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.67 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1104 10:53:04.234001   37715 start.go:125] createHost starting for "m02" (driver="kvm2")
	I1104 10:53:04.235719   37715 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1104 10:53:04.235815   37715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 10:53:04.235858   37715 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 10:53:04.250864   37715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34407
	I1104 10:53:04.251327   37715 main.go:141] libmachine: () Calling .GetVersion
	I1104 10:53:04.251891   37715 main.go:141] libmachine: Using API Version  1
	I1104 10:53:04.251920   37715 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 10:53:04.252265   37715 main.go:141] libmachine: () Calling .GetMachineName
	I1104 10:53:04.252475   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetMachineName
	I1104 10:53:04.252609   37715 main.go:141] libmachine: (ha-931571-m02) Calling .DriverName
	I1104 10:53:04.252797   37715 start.go:159] libmachine.API.Create for "ha-931571" (driver="kvm2")
	I1104 10:53:04.252829   37715 client.go:168] LocalClient.Create starting
	I1104 10:53:04.252866   37715 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem
	I1104 10:53:04.252907   37715 main.go:141] libmachine: Decoding PEM data...
	I1104 10:53:04.252928   37715 main.go:141] libmachine: Parsing certificate...
	I1104 10:53:04.252995   37715 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem
	I1104 10:53:04.253023   37715 main.go:141] libmachine: Decoding PEM data...
	I1104 10:53:04.253038   37715 main.go:141] libmachine: Parsing certificate...
	I1104 10:53:04.253066   37715 main.go:141] libmachine: Running pre-create checks...
	I1104 10:53:04.253077   37715 main.go:141] libmachine: (ha-931571-m02) Calling .PreCreateCheck
	I1104 10:53:04.253220   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetConfigRaw
	I1104 10:53:04.253654   37715 main.go:141] libmachine: Creating machine...
	I1104 10:53:04.253672   37715 main.go:141] libmachine: (ha-931571-m02) Calling .Create
	I1104 10:53:04.253800   37715 main.go:141] libmachine: (ha-931571-m02) Creating KVM machine...
	I1104 10:53:04.254992   37715 main.go:141] libmachine: (ha-931571-m02) DBG | found existing default KVM network
	I1104 10:53:04.255150   37715 main.go:141] libmachine: (ha-931571-m02) DBG | found existing private KVM network mk-ha-931571
	I1104 10:53:04.255299   37715 main.go:141] libmachine: (ha-931571-m02) Setting up store path in /home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571-m02 ...
	I1104 10:53:04.255322   37715 main.go:141] libmachine: (ha-931571-m02) Building disk image from file:///home/jenkins/minikube-integration/19906-19898/.minikube/cache/iso/amd64/minikube-v1.34.0-1730282777-19883-amd64.iso
	I1104 10:53:04.255385   37715 main.go:141] libmachine: (ha-931571-m02) DBG | I1104 10:53:04.255280   38069 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19906-19898/.minikube
	I1104 10:53:04.255479   37715 main.go:141] libmachine: (ha-931571-m02) Downloading /home/jenkins/minikube-integration/19906-19898/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19906-19898/.minikube/cache/iso/amd64/minikube-v1.34.0-1730282777-19883-amd64.iso...
	I1104 10:53:04.500647   37715 main.go:141] libmachine: (ha-931571-m02) DBG | I1104 10:53:04.500534   38069 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571-m02/id_rsa...
	I1104 10:53:04.797066   37715 main.go:141] libmachine: (ha-931571-m02) DBG | I1104 10:53:04.796939   38069 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571-m02/ha-931571-m02.rawdisk...
	I1104 10:53:04.797094   37715 main.go:141] libmachine: (ha-931571-m02) DBG | Writing magic tar header
	I1104 10:53:04.797104   37715 main.go:141] libmachine: (ha-931571-m02) DBG | Writing SSH key tar header
	I1104 10:53:04.797111   37715 main.go:141] libmachine: (ha-931571-m02) DBG | I1104 10:53:04.797059   38069 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571-m02 ...
	I1104 10:53:04.797220   37715 main.go:141] libmachine: (ha-931571-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571-m02
	I1104 10:53:04.797261   37715 main.go:141] libmachine: (ha-931571-m02) Setting executable bit set on /home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571-m02 (perms=drwx------)
	I1104 10:53:04.797271   37715 main.go:141] libmachine: (ha-931571-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19906-19898/.minikube/machines
	I1104 10:53:04.797289   37715 main.go:141] libmachine: (ha-931571-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19906-19898/.minikube
	I1104 10:53:04.797298   37715 main.go:141] libmachine: (ha-931571-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19906-19898
	I1104 10:53:04.797310   37715 main.go:141] libmachine: (ha-931571-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1104 10:53:04.797318   37715 main.go:141] libmachine: (ha-931571-m02) DBG | Checking permissions on dir: /home/jenkins
	I1104 10:53:04.797331   37715 main.go:141] libmachine: (ha-931571-m02) DBG | Checking permissions on dir: /home
	I1104 10:53:04.797349   37715 main.go:141] libmachine: (ha-931571-m02) Setting executable bit set on /home/jenkins/minikube-integration/19906-19898/.minikube/machines (perms=drwxr-xr-x)
	I1104 10:53:04.797357   37715 main.go:141] libmachine: (ha-931571-m02) DBG | Skipping /home - not owner
	I1104 10:53:04.797376   37715 main.go:141] libmachine: (ha-931571-m02) Setting executable bit set on /home/jenkins/minikube-integration/19906-19898/.minikube (perms=drwxr-xr-x)
	I1104 10:53:04.797389   37715 main.go:141] libmachine: (ha-931571-m02) Setting executable bit set on /home/jenkins/minikube-integration/19906-19898 (perms=drwxrwxr-x)
	I1104 10:53:04.797401   37715 main.go:141] libmachine: (ha-931571-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1104 10:53:04.797412   37715 main.go:141] libmachine: (ha-931571-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1104 10:53:04.797440   37715 main.go:141] libmachine: (ha-931571-m02) Creating domain...
	I1104 10:53:04.798407   37715 main.go:141] libmachine: (ha-931571-m02) define libvirt domain using xml: 
	I1104 10:53:04.798425   37715 main.go:141] libmachine: (ha-931571-m02) <domain type='kvm'>
	I1104 10:53:04.798436   37715 main.go:141] libmachine: (ha-931571-m02)   <name>ha-931571-m02</name>
	I1104 10:53:04.798449   37715 main.go:141] libmachine: (ha-931571-m02)   <memory unit='MiB'>2200</memory>
	I1104 10:53:04.798465   37715 main.go:141] libmachine: (ha-931571-m02)   <vcpu>2</vcpu>
	I1104 10:53:04.798472   37715 main.go:141] libmachine: (ha-931571-m02)   <features>
	I1104 10:53:04.798477   37715 main.go:141] libmachine: (ha-931571-m02)     <acpi/>
	I1104 10:53:04.798481   37715 main.go:141] libmachine: (ha-931571-m02)     <apic/>
	I1104 10:53:04.798486   37715 main.go:141] libmachine: (ha-931571-m02)     <pae/>
	I1104 10:53:04.798492   37715 main.go:141] libmachine: (ha-931571-m02)     
	I1104 10:53:04.798498   37715 main.go:141] libmachine: (ha-931571-m02)   </features>
	I1104 10:53:04.798502   37715 main.go:141] libmachine: (ha-931571-m02)   <cpu mode='host-passthrough'>
	I1104 10:53:04.798507   37715 main.go:141] libmachine: (ha-931571-m02)   
	I1104 10:53:04.798512   37715 main.go:141] libmachine: (ha-931571-m02)   </cpu>
	I1104 10:53:04.798522   37715 main.go:141] libmachine: (ha-931571-m02)   <os>
	I1104 10:53:04.798534   37715 main.go:141] libmachine: (ha-931571-m02)     <type>hvm</type>
	I1104 10:53:04.798546   37715 main.go:141] libmachine: (ha-931571-m02)     <boot dev='cdrom'/>
	I1104 10:53:04.798552   37715 main.go:141] libmachine: (ha-931571-m02)     <boot dev='hd'/>
	I1104 10:53:04.798564   37715 main.go:141] libmachine: (ha-931571-m02)     <bootmenu enable='no'/>
	I1104 10:53:04.798571   37715 main.go:141] libmachine: (ha-931571-m02)   </os>
	I1104 10:53:04.798580   37715 main.go:141] libmachine: (ha-931571-m02)   <devices>
	I1104 10:53:04.798585   37715 main.go:141] libmachine: (ha-931571-m02)     <disk type='file' device='cdrom'>
	I1104 10:53:04.798596   37715 main.go:141] libmachine: (ha-931571-m02)       <source file='/home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571-m02/boot2docker.iso'/>
	I1104 10:53:04.798601   37715 main.go:141] libmachine: (ha-931571-m02)       <target dev='hdc' bus='scsi'/>
	I1104 10:53:04.798630   37715 main.go:141] libmachine: (ha-931571-m02)       <readonly/>
	I1104 10:53:04.798653   37715 main.go:141] libmachine: (ha-931571-m02)     </disk>
	I1104 10:53:04.798678   37715 main.go:141] libmachine: (ha-931571-m02)     <disk type='file' device='disk'>
	I1104 10:53:04.798702   37715 main.go:141] libmachine: (ha-931571-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1104 10:53:04.798718   37715 main.go:141] libmachine: (ha-931571-m02)       <source file='/home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571-m02/ha-931571-m02.rawdisk'/>
	I1104 10:53:04.798732   37715 main.go:141] libmachine: (ha-931571-m02)       <target dev='hda' bus='virtio'/>
	I1104 10:53:04.798747   37715 main.go:141] libmachine: (ha-931571-m02)     </disk>
	I1104 10:53:04.798763   37715 main.go:141] libmachine: (ha-931571-m02)     <interface type='network'>
	I1104 10:53:04.798783   37715 main.go:141] libmachine: (ha-931571-m02)       <source network='mk-ha-931571'/>
	I1104 10:53:04.798799   37715 main.go:141] libmachine: (ha-931571-m02)       <model type='virtio'/>
	I1104 10:53:04.798811   37715 main.go:141] libmachine: (ha-931571-m02)     </interface>
	I1104 10:53:04.798822   37715 main.go:141] libmachine: (ha-931571-m02)     <interface type='network'>
	I1104 10:53:04.798835   37715 main.go:141] libmachine: (ha-931571-m02)       <source network='default'/>
	I1104 10:53:04.798846   37715 main.go:141] libmachine: (ha-931571-m02)       <model type='virtio'/>
	I1104 10:53:04.798858   37715 main.go:141] libmachine: (ha-931571-m02)     </interface>
	I1104 10:53:04.798868   37715 main.go:141] libmachine: (ha-931571-m02)     <serial type='pty'>
	I1104 10:53:04.798881   37715 main.go:141] libmachine: (ha-931571-m02)       <target port='0'/>
	I1104 10:53:04.798892   37715 main.go:141] libmachine: (ha-931571-m02)     </serial>
	I1104 10:53:04.798901   37715 main.go:141] libmachine: (ha-931571-m02)     <console type='pty'>
	I1104 10:53:04.798910   37715 main.go:141] libmachine: (ha-931571-m02)       <target type='serial' port='0'/>
	I1104 10:53:04.798916   37715 main.go:141] libmachine: (ha-931571-m02)     </console>
	I1104 10:53:04.798925   37715 main.go:141] libmachine: (ha-931571-m02)     <rng model='virtio'>
	I1104 10:53:04.798938   37715 main.go:141] libmachine: (ha-931571-m02)       <backend model='random'>/dev/random</backend>
	I1104 10:53:04.798948   37715 main.go:141] libmachine: (ha-931571-m02)     </rng>
	I1104 10:53:04.798958   37715 main.go:141] libmachine: (ha-931571-m02)     
	I1104 10:53:04.798967   37715 main.go:141] libmachine: (ha-931571-m02)     
	I1104 10:53:04.798977   37715 main.go:141] libmachine: (ha-931571-m02)   </devices>
	I1104 10:53:04.798990   37715 main.go:141] libmachine: (ha-931571-m02) </domain>
	I1104 10:53:04.799001   37715 main.go:141] libmachine: (ha-931571-m02) 
	I1104 10:53:04.805977   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined MAC address 52:54:00:5e:b4:47 in network default
	I1104 10:53:04.806519   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:04.806536   37715 main.go:141] libmachine: (ha-931571-m02) Ensuring networks are active...
	I1104 10:53:04.807291   37715 main.go:141] libmachine: (ha-931571-m02) Ensuring network default is active
	I1104 10:53:04.807614   37715 main.go:141] libmachine: (ha-931571-m02) Ensuring network mk-ha-931571 is active
	I1104 10:53:04.807998   37715 main.go:141] libmachine: (ha-931571-m02) Getting domain xml...
	I1104 10:53:04.808751   37715 main.go:141] libmachine: (ha-931571-m02) Creating domain...
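
The <domain> XML printed above is first defined with libvirt and then started ("Creating domain..."); only after the domain is running does the driver begin polling for an IP address. Below is a minimal, illustrative sketch of those two libvirt calls — the libvirt.org/go/libvirt bindings, the qemu:///system URI and the file name are assumptions for illustration, not details taken from this log.

    // define_domain_sketch.go - illustrative only: define a KVM domain from an
    // XML document and boot it. Assumes the libvirt.org/go/libvirt bindings and
    // a local qemu:///system connection; neither is confirmed by the log.
    package main

    import (
        "log"
        "os"

        "libvirt.org/go/libvirt"
    )

    func main() {
        // The <domain> document printed above, saved to a file (hypothetical name).
        xml, err := os.ReadFile("ha-931571-m02.xml")
        if err != nil {
            log.Fatal(err)
        }

        conn, err := libvirt.NewConnect("qemu:///system")
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        // "define libvirt domain using xml" - register the domain with libvirt.
        dom, err := conn.DomainDefineXML(string(xml))
        if err != nil {
            log.Fatal(err)
        }
        defer dom.Free()

        // "Creating domain..." - actually start the defined domain.
        if err := dom.Create(); err != nil {
            log.Fatal(err)
        }
        log.Println("domain started; next step is waiting for a DHCP lease / IP")
    }
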
	I1104 10:53:06.037689   37715 main.go:141] libmachine: (ha-931571-m02) Waiting to get IP...
	I1104 10:53:06.038416   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:06.038827   37715 main.go:141] libmachine: (ha-931571-m02) DBG | unable to find current IP address of domain ha-931571-m02 in network mk-ha-931571
	I1104 10:53:06.038856   37715 main.go:141] libmachine: (ha-931571-m02) DBG | I1104 10:53:06.038804   38069 retry.go:31] will retry after 244.727015ms: waiting for machine to come up
	I1104 10:53:06.285395   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:06.285853   37715 main.go:141] libmachine: (ha-931571-m02) DBG | unable to find current IP address of domain ha-931571-m02 in network mk-ha-931571
	I1104 10:53:06.285879   37715 main.go:141] libmachine: (ha-931571-m02) DBG | I1104 10:53:06.285815   38069 retry.go:31] will retry after 291.944786ms: waiting for machine to come up
	I1104 10:53:06.579413   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:06.579939   37715 main.go:141] libmachine: (ha-931571-m02) DBG | unable to find current IP address of domain ha-931571-m02 in network mk-ha-931571
	I1104 10:53:06.579964   37715 main.go:141] libmachine: (ha-931571-m02) DBG | I1104 10:53:06.579896   38069 retry.go:31] will retry after 446.911163ms: waiting for machine to come up
	I1104 10:53:07.028452   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:07.028838   37715 main.go:141] libmachine: (ha-931571-m02) DBG | unable to find current IP address of domain ha-931571-m02 in network mk-ha-931571
	I1104 10:53:07.028870   37715 main.go:141] libmachine: (ha-931571-m02) DBG | I1104 10:53:07.028792   38069 retry.go:31] will retry after 472.390697ms: waiting for machine to come up
	I1104 10:53:07.502204   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:07.502568   37715 main.go:141] libmachine: (ha-931571-m02) DBG | unable to find current IP address of domain ha-931571-m02 in network mk-ha-931571
	I1104 10:53:07.502592   37715 main.go:141] libmachine: (ha-931571-m02) DBG | I1104 10:53:07.502526   38069 retry.go:31] will retry after 662.15145ms: waiting for machine to come up
	I1104 10:53:08.166152   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:08.166583   37715 main.go:141] libmachine: (ha-931571-m02) DBG | unable to find current IP address of domain ha-931571-m02 in network mk-ha-931571
	I1104 10:53:08.166609   37715 main.go:141] libmachine: (ha-931571-m02) DBG | I1104 10:53:08.166538   38069 retry.go:31] will retry after 886.374206ms: waiting for machine to come up
	I1104 10:53:09.054240   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:09.054689   37715 main.go:141] libmachine: (ha-931571-m02) DBG | unable to find current IP address of domain ha-931571-m02 in network mk-ha-931571
	I1104 10:53:09.054715   37715 main.go:141] libmachine: (ha-931571-m02) DBG | I1104 10:53:09.054670   38069 retry.go:31] will retry after 963.475989ms: waiting for machine to come up
	I1104 10:53:10.020142   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:10.020587   37715 main.go:141] libmachine: (ha-931571-m02) DBG | unable to find current IP address of domain ha-931571-m02 in network mk-ha-931571
	I1104 10:53:10.020630   37715 main.go:141] libmachine: (ha-931571-m02) DBG | I1104 10:53:10.020571   38069 retry.go:31] will retry after 1.332433034s: waiting for machine to come up
	I1104 10:53:11.354908   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:11.355309   37715 main.go:141] libmachine: (ha-931571-m02) DBG | unable to find current IP address of domain ha-931571-m02 in network mk-ha-931571
	I1104 10:53:11.355331   37715 main.go:141] libmachine: (ha-931571-m02) DBG | I1104 10:53:11.355273   38069 retry.go:31] will retry after 1.652203867s: waiting for machine to come up
	I1104 10:53:13.009876   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:13.010297   37715 main.go:141] libmachine: (ha-931571-m02) DBG | unable to find current IP address of domain ha-931571-m02 in network mk-ha-931571
	I1104 10:53:13.010319   37715 main.go:141] libmachine: (ha-931571-m02) DBG | I1104 10:53:13.010254   38069 retry.go:31] will retry after 2.320402176s: waiting for machine to come up
	I1104 10:53:15.332045   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:15.332414   37715 main.go:141] libmachine: (ha-931571-m02) DBG | unable to find current IP address of domain ha-931571-m02 in network mk-ha-931571
	I1104 10:53:15.332441   37715 main.go:141] libmachine: (ha-931571-m02) DBG | I1104 10:53:15.332356   38069 retry.go:31] will retry after 2.652871808s: waiting for machine to come up
	I1104 10:53:17.987774   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:17.988211   37715 main.go:141] libmachine: (ha-931571-m02) DBG | unable to find current IP address of domain ha-931571-m02 in network mk-ha-931571
	I1104 10:53:17.988231   37715 main.go:141] libmachine: (ha-931571-m02) DBG | I1104 10:53:17.988174   38069 retry.go:31] will retry after 3.518414185s: waiting for machine to come up
	I1104 10:53:21.508515   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:21.508901   37715 main.go:141] libmachine: (ha-931571-m02) DBG | unable to find current IP address of domain ha-931571-m02 in network mk-ha-931571
	I1104 10:53:21.508926   37715 main.go:141] libmachine: (ha-931571-m02) DBG | I1104 10:53:21.508866   38069 retry.go:31] will retry after 4.345855832s: waiting for machine to come up
	I1104 10:53:25.856753   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:25.857143   37715 main.go:141] libmachine: (ha-931571-m02) Found IP for machine: 192.168.39.245
	I1104 10:53:25.857167   37715 main.go:141] libmachine: (ha-931571-m02) Reserving static IP address...
	I1104 10:53:25.857181   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has current primary IP address 192.168.39.245 and MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:25.857621   37715 main.go:141] libmachine: (ha-931571-m02) DBG | unable to find host DHCP lease matching {name: "ha-931571-m02", mac: "52:54:00:5c:86:6b", ip: "192.168.39.245"} in network mk-ha-931571
	I1104 10:53:25.931250   37715 main.go:141] libmachine: (ha-931571-m02) DBG | Getting to WaitForSSH function...
	I1104 10:53:25.931278   37715 main.go:141] libmachine: (ha-931571-m02) Reserved static IP address: 192.168.39.245
	I1104 10:53:25.931296   37715 main.go:141] libmachine: (ha-931571-m02) Waiting for SSH to be available...
	I1104 10:53:25.933968   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:25.934431   37715 main.go:141] libmachine: (ha-931571-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:86:6b", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:53:18 +0000 UTC Type:0 Mac:52:54:00:5c:86:6b Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:minikube Clientid:01:52:54:00:5c:86:6b}
	I1104 10:53:25.934489   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:25.934562   37715 main.go:141] libmachine: (ha-931571-m02) DBG | Using SSH client type: external
	I1104 10:53:25.934591   37715 main.go:141] libmachine: (ha-931571-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571-m02/id_rsa (-rw-------)
	I1104 10:53:25.934652   37715 main.go:141] libmachine: (ha-931571-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.245 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1104 10:53:25.934674   37715 main.go:141] libmachine: (ha-931571-m02) DBG | About to run SSH command:
	I1104 10:53:25.934692   37715 main.go:141] libmachine: (ha-931571-m02) DBG | exit 0
	I1104 10:53:26.068913   37715 main.go:141] libmachine: (ha-931571-m02) DBG | SSH cmd err, output: <nil>: 
	I1104 10:53:26.069182   37715 main.go:141] libmachine: (ha-931571-m02) KVM machine creation complete!
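
Each "will retry after …: waiting for machine to come up" line above is one iteration of a poll-with-growing-backoff loop around the DHCP-lease lookup. A minimal sketch of that pattern follows; the starting delay and growth factor are illustrative, not minikube's exact retry.go values.

    // retry_sketch.go - a minimal, illustrative sketch (not minikube's actual
    // retry.go) of the polling loop logged above: check a condition, sleep a
    // little longer after each failed attempt, give up after an overall deadline.
    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // waitFor polls check() until it reports success, the check errors out, or
    // the overall deadline expires, sleeping a growing delay between attempts.
    func waitFor(check func() (bool, error), deadline time.Duration) error {
        delay := 250 * time.Millisecond // starting delay; illustrative value
        stop := time.Now().Add(deadline)
        for attempt := 1; ; attempt++ {
            ok, err := check()
            if err != nil {
                return err
            }
            if ok {
                return nil
            }
            if time.Now().After(stop) {
                return errors.New("timed out waiting for machine to come up")
            }
            fmt.Printf("attempt %d: will retry after %v: waiting for machine to come up\n", attempt, delay)
            time.Sleep(delay)
            delay = delay * 3 / 2 // grow the delay between polls
        }
    }

    func main() {
        start := time.Now()
        // Toy condition standing in for "the domain has a DHCP lease".
        err := waitFor(func() (bool, error) {
            return time.Since(start) > 2*time.Second, nil
        }, 30*time.Second)
        fmt.Println("result:", err)
    }
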
	I1104 10:53:26.069569   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetConfigRaw
	I1104 10:53:26.070061   37715 main.go:141] libmachine: (ha-931571-m02) Calling .DriverName
	I1104 10:53:26.070245   37715 main.go:141] libmachine: (ha-931571-m02) Calling .DriverName
	I1104 10:53:26.070421   37715 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1104 10:53:26.070438   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetState
	I1104 10:53:26.071961   37715 main.go:141] libmachine: Detecting operating system of created instance...
	I1104 10:53:26.071975   37715 main.go:141] libmachine: Waiting for SSH to be available...
	I1104 10:53:26.071980   37715 main.go:141] libmachine: Getting to WaitForSSH function...
	I1104 10:53:26.071985   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHHostname
	I1104 10:53:26.074060   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:26.074383   37715 main.go:141] libmachine: (ha-931571-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:86:6b", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:53:18 +0000 UTC Type:0 Mac:52:54:00:5c:86:6b Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-931571-m02 Clientid:01:52:54:00:5c:86:6b}
	I1104 10:53:26.074403   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:26.074574   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHPort
	I1104 10:53:26.074737   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHKeyPath
	I1104 10:53:26.074878   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHKeyPath
	I1104 10:53:26.074976   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHUsername
	I1104 10:53:26.075126   37715 main.go:141] libmachine: Using SSH client type: native
	I1104 10:53:26.075361   37715 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.245 22 <nil> <nil>}
	I1104 10:53:26.075377   37715 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1104 10:53:26.184350   37715 main.go:141] libmachine: SSH cmd err, output: <nil>: 
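
The "native" SSH client referenced above runs `exit 0` on the guest purely as a reachability probe: if the command succeeds, SSH and a usable shell are available. A hedged sketch of that check is below; the golang.org/x/crypto/ssh package is an assumption (the log does not name the library), while the user, address and key path mirror the SSH options visible earlier in the log.

    // ssh_check_sketch.go - illustrative "is SSH up yet?" probe: dial the guest
    // with key auth and run `exit 0`, treating success as "SSH is available".
    package main

    import (
        "fmt"
        "log"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile("/home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571-m02/id_rsa")
        if err != nil {
            log.Fatal(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            log.Fatal(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // mirrors StrictHostKeyChecking=no above
        }
        client, err := ssh.Dial("tcp", "192.168.39.245:22", cfg)
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        sess, err := client.NewSession()
        if err != nil {
            log.Fatal(err)
        }
        defer sess.Close()

        // `exit 0` succeeds iff the connection and shell are usable.
        if err := sess.Run("exit 0"); err != nil {
            log.Fatal(err)
        }
        fmt.Println("SSH is available")
    }
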
	I1104 10:53:26.184379   37715 main.go:141] libmachine: Detecting the provisioner...
	I1104 10:53:26.184395   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHHostname
	I1104 10:53:26.186866   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:26.187176   37715 main.go:141] libmachine: (ha-931571-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:86:6b", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:53:18 +0000 UTC Type:0 Mac:52:54:00:5c:86:6b Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-931571-m02 Clientid:01:52:54:00:5c:86:6b}
	I1104 10:53:26.187196   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:26.187362   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHPort
	I1104 10:53:26.187546   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHKeyPath
	I1104 10:53:26.187699   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHKeyPath
	I1104 10:53:26.187825   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHUsername
	I1104 10:53:26.187985   37715 main.go:141] libmachine: Using SSH client type: native
	I1104 10:53:26.188193   37715 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.245 22 <nil> <nil>}
	I1104 10:53:26.188204   37715 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1104 10:53:26.301614   37715 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1104 10:53:26.301685   37715 main.go:141] libmachine: found compatible host: buildroot
	I1104 10:53:26.301699   37715 main.go:141] libmachine: Provisioning with buildroot...
	I1104 10:53:26.301711   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetMachineName
	I1104 10:53:26.301942   37715 buildroot.go:166] provisioning hostname "ha-931571-m02"
	I1104 10:53:26.301964   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetMachineName
	I1104 10:53:26.302139   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHHostname
	I1104 10:53:26.304767   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:26.305309   37715 main.go:141] libmachine: (ha-931571-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:86:6b", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:53:18 +0000 UTC Type:0 Mac:52:54:00:5c:86:6b Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-931571-m02 Clientid:01:52:54:00:5c:86:6b}
	I1104 10:53:26.305334   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:26.305470   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHPort
	I1104 10:53:26.305626   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHKeyPath
	I1104 10:53:26.305790   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHKeyPath
	I1104 10:53:26.305931   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHUsername
	I1104 10:53:26.306093   37715 main.go:141] libmachine: Using SSH client type: native
	I1104 10:53:26.306297   37715 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.245 22 <nil> <nil>}
	I1104 10:53:26.306310   37715 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-931571-m02 && echo "ha-931571-m02" | sudo tee /etc/hostname
	I1104 10:53:26.430814   37715 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-931571-m02
	
	I1104 10:53:26.430842   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHHostname
	I1104 10:53:26.433622   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:26.433925   37715 main.go:141] libmachine: (ha-931571-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:86:6b", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:53:18 +0000 UTC Type:0 Mac:52:54:00:5c:86:6b Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-931571-m02 Clientid:01:52:54:00:5c:86:6b}
	I1104 10:53:26.433953   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:26.434109   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHPort
	I1104 10:53:26.434330   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHKeyPath
	I1104 10:53:26.434473   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHKeyPath
	I1104 10:53:26.434584   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHUsername
	I1104 10:53:26.434716   37715 main.go:141] libmachine: Using SSH client type: native
	I1104 10:53:26.434907   37715 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.245 22 <nil> <nil>}
	I1104 10:53:26.434931   37715 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-931571-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-931571-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-931571-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1104 10:53:26.553495   37715 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1104 10:53:26.553519   37715 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19906-19898/.minikube CaCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19906-19898/.minikube}
	I1104 10:53:26.553534   37715 buildroot.go:174] setting up certificates
	I1104 10:53:26.553543   37715 provision.go:84] configureAuth start
	I1104 10:53:26.553551   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetMachineName
	I1104 10:53:26.553773   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetIP
	I1104 10:53:26.556203   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:26.556500   37715 main.go:141] libmachine: (ha-931571-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:86:6b", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:53:18 +0000 UTC Type:0 Mac:52:54:00:5c:86:6b Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-931571-m02 Clientid:01:52:54:00:5c:86:6b}
	I1104 10:53:26.556519   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:26.556610   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHHostname
	I1104 10:53:26.558806   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:26.559168   37715 main.go:141] libmachine: (ha-931571-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:86:6b", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:53:18 +0000 UTC Type:0 Mac:52:54:00:5c:86:6b Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-931571-m02 Clientid:01:52:54:00:5c:86:6b}
	I1104 10:53:26.559194   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:26.559467   37715 provision.go:143] copyHostCerts
	I1104 10:53:26.559496   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem
	I1104 10:53:26.559535   37715 exec_runner.go:144] found /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem, removing ...
	I1104 10:53:26.559546   37715 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem
	I1104 10:53:26.559623   37715 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem (1078 bytes)
	I1104 10:53:26.559707   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem
	I1104 10:53:26.559732   37715 exec_runner.go:144] found /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem, removing ...
	I1104 10:53:26.559741   37715 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem
	I1104 10:53:26.559778   37715 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem (1123 bytes)
	I1104 10:53:26.559830   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem
	I1104 10:53:26.559853   37715 exec_runner.go:144] found /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem, removing ...
	I1104 10:53:26.559865   37715 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem
	I1104 10:53:26.559899   37715 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem (1679 bytes)
	I1104 10:53:26.559968   37715 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem org=jenkins.ha-931571-m02 san=[127.0.0.1 192.168.39.245 ha-931571-m02 localhost minikube]
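
The server certificate generated above is signed by the cluster CA and carries the SANs listed in that line (127.0.0.1, 192.168.39.245, ha-931571-m02, localhost, minikube), so the same cert is valid whichever name or address a client dials. A minimal sketch of producing such a cert with Go's crypto/x509 follows; a throwaway self-signed CA stands in for minikube's ca.pem/ca-key.pem, and error handling is elided for brevity.

    // servercert_sketch.go - illustrative generation of a CA-signed server cert
    // with the IP and DNS SANs shown in the log line above.
    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Illustrative CA (the real flow loads an existing CA key pair instead).
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().AddDate(10, 0, 0),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Server cert for the new node, carrying the SANs listed in the log.
        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-931571-m02"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(1, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.245")},
            DNSNames:     []string{"ha-931571-m02", "localhost", "minikube"},
        }
        srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)

        // Emit the server cert in PEM form (server.pem in the paths shown above).
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
    }
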
	I1104 10:53:26.827173   37715 provision.go:177] copyRemoteCerts
	I1104 10:53:26.827226   37715 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1104 10:53:26.827248   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHHostname
	I1104 10:53:26.829975   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:26.830343   37715 main.go:141] libmachine: (ha-931571-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:86:6b", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:53:18 +0000 UTC Type:0 Mac:52:54:00:5c:86:6b Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-931571-m02 Clientid:01:52:54:00:5c:86:6b}
	I1104 10:53:26.830372   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:26.830576   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHPort
	I1104 10:53:26.830763   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHKeyPath
	I1104 10:53:26.830912   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHUsername
	I1104 10:53:26.831022   37715 sshutil.go:53] new ssh client: &{IP:192.168.39.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571-m02/id_rsa Username:docker}
	I1104 10:53:26.923318   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1104 10:53:26.923390   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1104 10:53:26.950708   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1104 10:53:26.950773   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1104 10:53:26.976975   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1104 10:53:26.977045   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1104 10:53:27.002230   37715 provision.go:87] duration metric: took 448.676469ms to configureAuth
	I1104 10:53:27.002252   37715 buildroot.go:189] setting minikube options for container-runtime
	I1104 10:53:27.002404   37715 config.go:182] Loaded profile config "ha-931571": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1104 10:53:27.002475   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHHostname
	I1104 10:53:27.005273   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:27.005618   37715 main.go:141] libmachine: (ha-931571-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:86:6b", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:53:18 +0000 UTC Type:0 Mac:52:54:00:5c:86:6b Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-931571-m02 Clientid:01:52:54:00:5c:86:6b}
	I1104 10:53:27.005646   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:27.005772   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHPort
	I1104 10:53:27.005978   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHKeyPath
	I1104 10:53:27.006123   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHKeyPath
	I1104 10:53:27.006279   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHUsername
	I1104 10:53:27.006465   37715 main.go:141] libmachine: Using SSH client type: native
	I1104 10:53:27.006627   37715 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.245 22 <nil> <nil>}
	I1104 10:53:27.006641   37715 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1104 10:53:27.235271   37715 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1104 10:53:27.235297   37715 main.go:141] libmachine: Checking connection to Docker...
	I1104 10:53:27.235305   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetURL
	I1104 10:53:27.236550   37715 main.go:141] libmachine: (ha-931571-m02) DBG | Using libvirt version 6000000
	I1104 10:53:27.238826   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:27.239189   37715 main.go:141] libmachine: (ha-931571-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:86:6b", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:53:18 +0000 UTC Type:0 Mac:52:54:00:5c:86:6b Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-931571-m02 Clientid:01:52:54:00:5c:86:6b}
	I1104 10:53:27.239220   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:27.239401   37715 main.go:141] libmachine: Docker is up and running!
	I1104 10:53:27.239418   37715 main.go:141] libmachine: Reticulating splines...
	I1104 10:53:27.239426   37715 client.go:171] duration metric: took 22.986586779s to LocalClient.Create
	I1104 10:53:27.239451   37715 start.go:167] duration metric: took 22.986656312s to libmachine.API.Create "ha-931571"
	I1104 10:53:27.239472   37715 start.go:293] postStartSetup for "ha-931571-m02" (driver="kvm2")
	I1104 10:53:27.239488   37715 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1104 10:53:27.239510   37715 main.go:141] libmachine: (ha-931571-m02) Calling .DriverName
	I1104 10:53:27.239721   37715 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1104 10:53:27.239747   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHHostname
	I1104 10:53:27.241968   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:27.242332   37715 main.go:141] libmachine: (ha-931571-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:86:6b", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:53:18 +0000 UTC Type:0 Mac:52:54:00:5c:86:6b Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-931571-m02 Clientid:01:52:54:00:5c:86:6b}
	I1104 10:53:27.242352   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:27.242491   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHPort
	I1104 10:53:27.242658   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHKeyPath
	I1104 10:53:27.242769   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHUsername
	I1104 10:53:27.242872   37715 sshutil.go:53] new ssh client: &{IP:192.168.39.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571-m02/id_rsa Username:docker}
	I1104 10:53:27.327061   37715 ssh_runner.go:195] Run: cat /etc/os-release
	I1104 10:53:27.331021   37715 info.go:137] Remote host: Buildroot 2023.02.9
	I1104 10:53:27.331050   37715 filesync.go:126] Scanning /home/jenkins/minikube-integration/19906-19898/.minikube/addons for local assets ...
	I1104 10:53:27.331133   37715 filesync.go:126] Scanning /home/jenkins/minikube-integration/19906-19898/.minikube/files for local assets ...
	I1104 10:53:27.331207   37715 filesync.go:149] local asset: /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem -> 272182.pem in /etc/ssl/certs
	I1104 10:53:27.331218   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem -> /etc/ssl/certs/272182.pem
	I1104 10:53:27.331300   37715 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1104 10:53:27.341280   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem --> /etc/ssl/certs/272182.pem (1708 bytes)
	I1104 10:53:27.363737   37715 start.go:296] duration metric: took 124.248011ms for postStartSetup
	I1104 10:53:27.363783   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetConfigRaw
	I1104 10:53:27.364431   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetIP
	I1104 10:53:27.367195   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:27.367660   37715 main.go:141] libmachine: (ha-931571-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:86:6b", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:53:18 +0000 UTC Type:0 Mac:52:54:00:5c:86:6b Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-931571-m02 Clientid:01:52:54:00:5c:86:6b}
	I1104 10:53:27.367698   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:27.367926   37715 profile.go:143] Saving config to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/config.json ...
	I1104 10:53:27.368121   37715 start.go:128] duration metric: took 23.134111471s to createHost
	I1104 10:53:27.368147   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHHostname
	I1104 10:53:27.370510   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:27.370846   37715 main.go:141] libmachine: (ha-931571-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:86:6b", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:53:18 +0000 UTC Type:0 Mac:52:54:00:5c:86:6b Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-931571-m02 Clientid:01:52:54:00:5c:86:6b}
	I1104 10:53:27.370881   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:27.371043   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHPort
	I1104 10:53:27.371226   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHKeyPath
	I1104 10:53:27.371432   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHKeyPath
	I1104 10:53:27.371573   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHUsername
	I1104 10:53:27.371728   37715 main.go:141] libmachine: Using SSH client type: native
	I1104 10:53:27.371899   37715 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.245 22 <nil> <nil>}
	I1104 10:53:27.371912   37715 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1104 10:53:27.485557   37715 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730717607.449108710
	
	I1104 10:53:27.485578   37715 fix.go:216] guest clock: 1730717607.449108710
	I1104 10:53:27.485585   37715 fix.go:229] Guest: 2024-11-04 10:53:27.44910871 +0000 UTC Remote: 2024-11-04 10:53:27.368133628 +0000 UTC m=+66.039651871 (delta=80.975082ms)
	I1104 10:53:27.485600   37715 fix.go:200] guest clock delta is within tolerance: 80.975082ms
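
The guest-clock check above parses the output of `date +%s.%N` on the guest, compares it with the host-side timestamp of the probe, and accepts the machine when the difference is within tolerance (here 80.975082ms). A small sketch of that arithmetic using the values from the log; the one-second tolerance below is illustrative, not necessarily minikube's threshold.

    // clockdelta_sketch.go - parse the guest's `date +%s.%N` output and compare
    // it with the host-side timestamp of the check.
    package main

    import (
        "fmt"
        "math"
        "strconv"
        "strings"
        "time"
    )

    func main() {
        // Guest output and host timestamp as captured in the log above.
        guestOut := "1730717607.449108710"
        host := time.Date(2024, time.November, 4, 10, 53, 27, 368133628, time.UTC)

        parts := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)
        sec, _ := strconv.ParseInt(parts[0], 10, 64)
        nsec, _ := strconv.ParseInt(parts[1], 10, 64)
        guest := time.Unix(sec, nsec)

        delta := guest.Sub(host) // ~80.975ms for the values above

        const tolerance = time.Second // illustrative threshold
        if math.Abs(float64(delta)) <= float64(tolerance) {
            fmt.Printf("guest clock delta %v is within tolerance\n", delta)
        } else {
            fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
        }
    }
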
	I1104 10:53:27.485605   37715 start.go:83] releasing machines lock for "ha-931571-m02", held for 23.251676872s
	I1104 10:53:27.485620   37715 main.go:141] libmachine: (ha-931571-m02) Calling .DriverName
	I1104 10:53:27.485857   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetIP
	I1104 10:53:27.488648   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:27.489014   37715 main.go:141] libmachine: (ha-931571-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:86:6b", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:53:18 +0000 UTC Type:0 Mac:52:54:00:5c:86:6b Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-931571-m02 Clientid:01:52:54:00:5c:86:6b}
	I1104 10:53:27.489041   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:27.491305   37715 out.go:177] * Found network options:
	I1104 10:53:27.492602   37715 out.go:177]   - NO_PROXY=192.168.39.67
	W1104 10:53:27.493715   37715 proxy.go:119] fail to check proxy env: Error ip not in block
	I1104 10:53:27.493752   37715 main.go:141] libmachine: (ha-931571-m02) Calling .DriverName
	I1104 10:53:27.494253   37715 main.go:141] libmachine: (ha-931571-m02) Calling .DriverName
	I1104 10:53:27.494447   37715 main.go:141] libmachine: (ha-931571-m02) Calling .DriverName
	I1104 10:53:27.494556   37715 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1104 10:53:27.494595   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHHostname
	W1104 10:53:27.494597   37715 proxy.go:119] fail to check proxy env: Error ip not in block
	I1104 10:53:27.494657   37715 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1104 10:53:27.494679   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHHostname
	I1104 10:53:27.497460   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:27.497637   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:27.497850   37715 main.go:141] libmachine: (ha-931571-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:86:6b", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:53:18 +0000 UTC Type:0 Mac:52:54:00:5c:86:6b Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-931571-m02 Clientid:01:52:54:00:5c:86:6b}
	I1104 10:53:27.497871   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:27.497991   37715 main.go:141] libmachine: (ha-931571-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:86:6b", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:53:18 +0000 UTC Type:0 Mac:52:54:00:5c:86:6b Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-931571-m02 Clientid:01:52:54:00:5c:86:6b}
	I1104 10:53:27.498003   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:27.498025   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHPort
	I1104 10:53:27.498232   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHKeyPath
	I1104 10:53:27.498254   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHPort
	I1104 10:53:27.498403   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHKeyPath
	I1104 10:53:27.498437   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHUsername
	I1104 10:53:27.498538   37715 sshutil.go:53] new ssh client: &{IP:192.168.39.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571-m02/id_rsa Username:docker}
	I1104 10:53:27.498550   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHUsername
	I1104 10:53:27.498773   37715 sshutil.go:53] new ssh client: &{IP:192.168.39.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571-m02/id_rsa Username:docker}
	I1104 10:53:27.735755   37715 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1104 10:53:27.742047   37715 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1104 10:53:27.742118   37715 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1104 10:53:27.757546   37715 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1104 10:53:27.757568   37715 start.go:495] detecting cgroup driver to use...
	I1104 10:53:27.757654   37715 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1104 10:53:27.775341   37715 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1104 10:53:27.789267   37715 docker.go:217] disabling cri-docker service (if available) ...
	I1104 10:53:27.789322   37715 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1104 10:53:27.802395   37715 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1104 10:53:27.815846   37715 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1104 10:53:27.932464   37715 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1104 10:53:28.072054   37715 docker.go:233] disabling docker service ...
	I1104 10:53:28.072113   37715 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1104 10:53:28.085955   37715 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1104 10:53:28.098515   37715 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1104 10:53:28.231393   37715 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1104 10:53:28.348075   37715 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1104 10:53:28.360668   37715 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1104 10:53:28.377621   37715 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1104 10:53:28.377680   37715 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 10:53:28.387614   37715 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1104 10:53:28.387678   37715 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 10:53:28.397527   37715 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 10:53:28.406950   37715 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 10:53:28.416691   37715 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1104 10:53:28.426696   37715 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 10:53:28.436536   37715 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 10:53:28.452706   37715 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 10:53:28.462377   37715 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1104 10:53:28.471479   37715 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1104 10:53:28.471541   37715 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1104 10:53:28.484536   37715 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1104 10:53:28.493914   37715 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1104 10:53:28.602971   37715 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1104 10:53:28.692433   37715 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1104 10:53:28.692522   37715 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1104 10:53:28.696783   37715 start.go:563] Will wait 60s for crictl version
	I1104 10:53:28.696822   37715 ssh_runner.go:195] Run: which crictl
	I1104 10:53:28.700013   37715 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1104 10:53:28.734056   37715 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1104 10:53:28.734128   37715 ssh_runner.go:195] Run: crio --version
	I1104 10:53:28.760475   37715 ssh_runner.go:195] Run: crio --version
	I1104 10:53:28.789783   37715 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1104 10:53:28.791233   37715 out.go:177]   - env NO_PROXY=192.168.39.67
	I1104 10:53:28.792582   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetIP
	I1104 10:53:28.795120   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:28.795494   37715 main.go:141] libmachine: (ha-931571-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:86:6b", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:53:18 +0000 UTC Type:0 Mac:52:54:00:5c:86:6b Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-931571-m02 Clientid:01:52:54:00:5c:86:6b}
	I1104 10:53:28.795520   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:28.795759   37715 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1104 10:53:28.799797   37715 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1104 10:53:28.811896   37715 mustload.go:65] Loading cluster: ha-931571
	I1104 10:53:28.812115   37715 config.go:182] Loaded profile config "ha-931571": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1104 10:53:28.812360   37715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 10:53:28.812401   37715 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 10:53:28.826717   37715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34275
	I1104 10:53:28.827181   37715 main.go:141] libmachine: () Calling .GetVersion
	I1104 10:53:28.827674   37715 main.go:141] libmachine: Using API Version  1
	I1104 10:53:28.827693   37715 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 10:53:28.828004   37715 main.go:141] libmachine: () Calling .GetMachineName
	I1104 10:53:28.828173   37715 main.go:141] libmachine: (ha-931571) Calling .GetState
	I1104 10:53:28.829698   37715 host.go:66] Checking if "ha-931571" exists ...
	I1104 10:53:28.829978   37715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 10:53:28.830013   37715 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 10:53:28.844302   37715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41319
	I1104 10:53:28.844715   37715 main.go:141] libmachine: () Calling .GetVersion
	I1104 10:53:28.845157   37715 main.go:141] libmachine: Using API Version  1
	I1104 10:53:28.845180   37715 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 10:53:28.845561   37715 main.go:141] libmachine: () Calling .GetMachineName
	I1104 10:53:28.845729   37715 main.go:141] libmachine: (ha-931571) Calling .DriverName
	I1104 10:53:28.845886   37715 certs.go:68] Setting up /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571 for IP: 192.168.39.245
	I1104 10:53:28.845896   37715 certs.go:194] generating shared ca certs ...
	I1104 10:53:28.845908   37715 certs.go:226] acquiring lock for ca certs: {Name:mk4fb0469da6697654d0290fd25edddd463f47c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 10:53:28.846013   37715 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.key
	I1104 10:53:28.846050   37715 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.key
	I1104 10:53:28.846056   37715 certs.go:256] generating profile certs ...
	I1104 10:53:28.846117   37715 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/client.key
	I1104 10:53:28.846138   37715 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.key.44df713a
	I1104 10:53:28.846149   37715 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.crt.44df713a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.67 192.168.39.245 192.168.39.254]
	I1104 10:53:28.973533   37715 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.crt.44df713a ...
	I1104 10:53:28.973558   37715 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.crt.44df713a: {Name:mk251fe01c9791f2c1df00673ac1979d7532e3b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 10:53:28.973716   37715 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.key.44df713a ...
	I1104 10:53:28.973729   37715 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.key.44df713a: {Name:mkef3dc2affbfe3d37549d8d043a12581b7267b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 10:53:28.973806   37715 certs.go:381] copying /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.crt.44df713a -> /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.crt
	I1104 10:53:28.973935   37715 certs.go:385] copying /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.key.44df713a -> /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.key
	I1104 10:53:28.974053   37715 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/proxy-client.key
	I1104 10:53:28.974067   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1104 10:53:28.974079   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1104 10:53:28.974092   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1104 10:53:28.974103   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1104 10:53:28.974114   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1104 10:53:28.974127   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1104 10:53:28.974139   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1104 10:53:28.974151   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1104 10:53:28.974191   37715 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218.pem (1338 bytes)
	W1104 10:53:28.974219   37715 certs.go:480] ignoring /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218_empty.pem, impossibly tiny 0 bytes
	I1104 10:53:28.974228   37715 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem (1675 bytes)
	I1104 10:53:28.974249   37715 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem (1078 bytes)
	I1104 10:53:28.974273   37715 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem (1123 bytes)
	I1104 10:53:28.974294   37715 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem (1679 bytes)
	I1104 10:53:28.974329   37715 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem (1708 bytes)
	I1104 10:53:28.974353   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218.pem -> /usr/share/ca-certificates/27218.pem
	I1104 10:53:28.974366   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem -> /usr/share/ca-certificates/272182.pem
	I1104 10:53:28.974379   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1104 10:53:28.974408   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHHostname
	I1104 10:53:28.977338   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:53:28.977742   37715 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 10:53:28.977776   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:53:28.977945   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHPort
	I1104 10:53:28.978138   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 10:53:28.978269   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHUsername
	I1104 10:53:28.978403   37715 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571/id_rsa Username:docker}
	I1104 10:53:29.049594   37715 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1104 10:53:29.054655   37715 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1104 10:53:29.065445   37715 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1104 10:53:29.070822   37715 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1104 10:53:29.082304   37715 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1104 10:53:29.086563   37715 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1104 10:53:29.098922   37715 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1104 10:53:29.103085   37715 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1104 10:53:29.113035   37715 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1104 10:53:29.117456   37715 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1104 10:53:29.127764   37715 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1104 10:53:29.131629   37715 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1104 10:53:29.143522   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1104 10:53:29.167376   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1104 10:53:29.189625   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1104 10:53:29.212768   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1104 10:53:29.235967   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1104 10:53:29.263247   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1104 10:53:29.285302   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1104 10:53:29.306703   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1104 10:53:29.328748   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218.pem --> /usr/share/ca-certificates/27218.pem (1338 bytes)
	I1104 10:53:29.350648   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem --> /usr/share/ca-certificates/272182.pem (1708 bytes)
	I1104 10:53:29.372264   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1104 10:53:29.395406   37715 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1104 10:53:29.410777   37715 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1104 10:53:29.427042   37715 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1104 10:53:29.443978   37715 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1104 10:53:29.460125   37715 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1104 10:53:29.475628   37715 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1104 10:53:29.491185   37715 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1104 10:53:29.507040   37715 ssh_runner.go:195] Run: openssl version
	I1104 10:53:29.512376   37715 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1104 10:53:29.522746   37715 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1104 10:53:29.526894   37715 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  4 10:38 /usr/share/ca-certificates/minikubeCA.pem
	I1104 10:53:29.526950   37715 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1104 10:53:29.532557   37715 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1104 10:53:29.543248   37715 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/27218.pem && ln -fs /usr/share/ca-certificates/27218.pem /etc/ssl/certs/27218.pem"
	I1104 10:53:29.553302   37715 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/27218.pem
	I1104 10:53:29.557429   37715 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  4 10:49 /usr/share/ca-certificates/27218.pem
	I1104 10:53:29.557475   37715 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/27218.pem
	I1104 10:53:29.562752   37715 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/27218.pem /etc/ssl/certs/51391683.0"
	I1104 10:53:29.573585   37715 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/272182.pem && ln -fs /usr/share/ca-certificates/272182.pem /etc/ssl/certs/272182.pem"
	I1104 10:53:29.583479   37715 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/272182.pem
	I1104 10:53:29.587879   37715 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  4 10:49 /usr/share/ca-certificates/272182.pem
	I1104 10:53:29.587928   37715 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/272182.pem
	I1104 10:53:29.594267   37715 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/272182.pem /etc/ssl/certs/3ec20f2e.0"
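The three blocks above install minikubeCA.pem, 27218.pem and 272182.pem under /usr/share/ca-certificates and then symlink each one into /etc/ssl/certs under its OpenSSL subject-hash name (b5213941.0, 51391683.0, 3ec20f2e.0) so TLS clients on the node trust them. Below is a minimal Go sketch of that hash-and-symlink step; the helper name is made up, and it simply shells out to the same openssl and ln commands the log shows.

// installCACert mirrors the shell steps in the log above: hash a CA cert with
// openssl and symlink it under /etc/ssl/certs as <subject-hash>.0 so that
// OpenSSL-based clients pick it up. Paths are the ones from the log; the
// helper itself is only an illustration.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func installCACert(pemPath string) error {
	// Same check as the "openssl x509 -hash -noout -in ..." runs above.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	// Same effect as the "sudo ln -fs ..." commands in the log.
	return exec.Command("sudo", "ln", "-fs", pemPath, link).Run()
}

func main() {
	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println(err)
	}
}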
	I1104 10:53:29.605746   37715 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1104 10:53:29.609628   37715 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1104 10:53:29.609689   37715 kubeadm.go:934] updating node {m02 192.168.39.245 8443 v1.31.2 crio true true} ...
	I1104 10:53:29.609774   37715 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-931571-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.245
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-931571 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1104 10:53:29.609799   37715 kube-vip.go:115] generating kube-vip config ...
	I1104 10:53:29.609830   37715 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1104 10:53:29.626833   37715 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1104 10:53:29.626905   37715 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.5
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
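The manifest above is written a few lines later to /etc/kubernetes/manifests/kube-vip.yaml, so the kubelet runs kube-vip as a static pod that advertises the control-plane VIP 192.168.39.254 and load-balances port 8443 across the control-plane nodes. minikube renders it from a template in kube-vip.go; the snippet below is only a heavily reduced sketch of that idea, not the real template.

// Reduced sketch of rendering a kube-vip manifest from a template with the VIP
// address and API server port filled in. Not minikube's actual kube-vip.go
// template; the VIP and port values are taken from the generated config above.
package main

import (
	"os"
	"text/template"
)

const manifest = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - name: kube-vip
    image: ghcr.io/kube-vip/kube-vip:v0.8.5
    args: ["manager"]
    env:
    - name: address
      value: "{{ .VIP }}"
    - name: port
      value: "{{ .Port }}"
  hostNetwork: true
`

func main() {
	t := template.Must(template.New("kube-vip").Parse(manifest))
	if err := t.Execute(os.Stdout, struct{ VIP, Port string }{VIP: "192.168.39.254", Port: "8443"}); err != nil {
		panic(err)
	}
}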
	I1104 10:53:29.626952   37715 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1104 10:53:29.636985   37715 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.2': No such file or directory
	
	Initiating transfer...
	I1104 10:53:29.637050   37715 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.2
	I1104 10:53:29.646235   37715 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
	I1104 10:53:29.646266   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/linux/amd64/v1.31.2/kubectl -> /var/lib/minikube/binaries/v1.31.2/kubectl
	I1104 10:53:29.646297   37715 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19906-19898/.minikube/cache/linux/amd64/v1.31.2/kubelet
	I1104 10:53:29.646318   37715 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19906-19898/.minikube/cache/linux/amd64/v1.31.2/kubeadm
	I1104 10:53:29.646321   37715 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl
	I1104 10:53:29.650548   37715 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubectl': No such file or directory
	I1104 10:53:29.650575   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/cache/linux/amd64/v1.31.2/kubectl --> /var/lib/minikube/binaries/v1.31.2/kubectl (56381592 bytes)
	I1104 10:53:30.395926   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/linux/amd64/v1.31.2/kubeadm -> /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1104 10:53:30.396007   37715 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1104 10:53:30.400715   37715 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubeadm': No such file or directory
	I1104 10:53:30.400746   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/cache/linux/amd64/v1.31.2/kubeadm --> /var/lib/minikube/binaries/v1.31.2/kubeadm (58290328 bytes)
	I1104 10:53:30.426541   37715 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1104 10:53:30.447212   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/linux/amd64/v1.31.2/kubelet -> /var/lib/minikube/binaries/v1.31.2/kubelet
	I1104 10:53:30.447328   37715 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet
	I1104 10:53:30.458650   37715 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubelet': No such file or directory
	I1104 10:53:30.458689   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/cache/linux/amd64/v1.31.2/kubelet --> /var/lib/minikube/binaries/v1.31.2/kubelet (76902744 bytes)
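The binaries step above downloads kubectl, kubeadm and kubelet for v1.31.2 from dl.k8s.io, validating each file against the checksum published at the matching .sha256 URL before copying it to /var/lib/minikube/binaries. The following is a self-contained sketch of that download-and-verify pattern, assuming the .sha256 file holds just the hex digest; it is not minikube's actual download.go.

// downloadAndVerify fetches a Kubernetes release binary and checks it against
// the published <url>.sha256 file, mirroring the checksum= URLs in the log
// above. Stand-alone illustration only.
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

func downloadAndVerify(url, dest string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	f, err := os.Create(dest)
	if err != nil {
		return err
	}
	defer f.Close()

	// Hash the binary while writing it to disk.
	h := sha256.New()
	if _, err := io.Copy(io.MultiWriter(f, h), resp.Body); err != nil {
		return err
	}

	// The published checksum lives next to the binary as <url>.sha256.
	sumResp, err := http.Get(url + ".sha256")
	if err != nil {
		return err
	}
	defer sumResp.Body.Close()
	want, err := io.ReadAll(sumResp.Body)
	if err != nil {
		return err
	}

	if got := hex.EncodeToString(h.Sum(nil)); got != strings.TrimSpace(string(want)) {
		return fmt.Errorf("checksum mismatch for %s: got %s", url, got)
	}
	return nil
}

func main() {
	err := downloadAndVerify("https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet", "/tmp/kubelet")
	fmt.Println(err)
}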
	I1104 10:53:30.919365   37715 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1104 10:53:30.928897   37715 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1104 10:53:30.946677   37715 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1104 10:53:30.963726   37715 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1104 10:53:30.981653   37715 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1104 10:53:30.985571   37715 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1104 10:53:30.998898   37715 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1104 10:53:31.132385   37715 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1104 10:53:31.149804   37715 host.go:66] Checking if "ha-931571" exists ...
	I1104 10:53:31.150291   37715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 10:53:31.150345   37715 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 10:53:31.165094   37715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39235
	I1104 10:53:31.165587   37715 main.go:141] libmachine: () Calling .GetVersion
	I1104 10:53:31.166163   37715 main.go:141] libmachine: Using API Version  1
	I1104 10:53:31.166186   37715 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 10:53:31.166555   37715 main.go:141] libmachine: () Calling .GetMachineName
	I1104 10:53:31.166779   37715 main.go:141] libmachine: (ha-931571) Calling .DriverName
	I1104 10:53:31.166958   37715 start.go:317] joinCluster: &{Name:ha-931571 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-931571 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.67 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.245 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1104 10:53:31.167051   37715 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1104 10:53:31.167067   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHHostname
	I1104 10:53:31.169771   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:53:31.170152   37715 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 10:53:31.170182   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:53:31.170376   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHPort
	I1104 10:53:31.170562   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 10:53:31.170687   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHUsername
	I1104 10:53:31.170781   37715 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571/id_rsa Username:docker}
	I1104 10:53:31.306325   37715 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.245 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1104 10:53:31.306377   37715 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token kmocbz.ds2v3q10rcir1aso --discovery-token-ca-cert-hash sha256:95833ff41306ac800b975e28f2da3aedc13f83db6865dfb0ce3f10d781d75fd9 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-931571-m02 --control-plane --apiserver-advertise-address=192.168.39.245 --apiserver-bind-port=8443"
	I1104 10:53:52.004440   37715 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token kmocbz.ds2v3q10rcir1aso --discovery-token-ca-cert-hash sha256:95833ff41306ac800b975e28f2da3aedc13f83db6865dfb0ce3f10d781d75fd9 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-931571-m02 --control-plane --apiserver-advertise-address=192.168.39.245 --apiserver-bind-port=8443": (20.698039868s)
	I1104 10:53:52.004481   37715 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1104 10:53:52.565954   37715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-931571-m02 minikube.k8s.io/updated_at=2024_11_04T10_53_52_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=c03dd974d73f8853a1a57928c124797a5ae24dc4 minikube.k8s.io/name=ha-931571 minikube.k8s.io/primary=false
	I1104 10:53:52.722802   37715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-931571-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I1104 10:53:52.847701   37715 start.go:319] duration metric: took 21.680738209s to joinCluster
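After kubeadm join succeeds, the two kubectl runs above label ha-931571-m02 with minikube's metadata and remove the node-role.kubernetes.io/control-plane:NoSchedule taint so regular workloads can schedule onto the new control-plane node. The sketch below performs the equivalent label and taint edits with client-go; minikube itself shells out to kubectl as the log shows, so this is only an illustration.

// Label a node and remove the control-plane NoSchedule taint via client-go,
// equivalent in effect to the two kubectl commands in the log above.
// Assumes a kubeconfig at the default ~/.kube/config location.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	node, err := client.CoreV1().Nodes().Get(context.TODO(), "ha-931571-m02", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// kubectl label --overwrite nodes ... minikube.k8s.io/primary=false
	if node.Labels == nil {
		node.Labels = map[string]string{}
	}
	node.Labels["minikube.k8s.io/primary"] = "false"

	// kubectl taint nodes ... node-role.kubernetes.io/control-plane:NoSchedule-
	kept := node.Spec.Taints[:0]
	for _, t := range node.Spec.Taints {
		if !(t.Key == "node-role.kubernetes.io/control-plane" && t.Effect == corev1.TaintEffectNoSchedule) {
			kept = append(kept, t)
		}
	}
	node.Spec.Taints = kept

	if _, err := client.CoreV1().Nodes().Update(context.TODO(), node, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("node labeled and untainted")
}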
	I1104 10:53:52.847788   37715 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.245 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1104 10:53:52.848131   37715 config.go:182] Loaded profile config "ha-931571": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1104 10:53:52.849508   37715 out.go:177] * Verifying Kubernetes components...
	I1104 10:53:52.850857   37715 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1104 10:53:53.114403   37715 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1104 10:53:53.138620   37715 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19906-19898/kubeconfig
	I1104 10:53:53.138881   37715 kapi.go:59] client config for ha-931571: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/client.crt", KeyFile:"/home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/client.key", CAFile:"/home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2437da0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1104 10:53:53.138942   37715 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.67:8443
	I1104 10:53:53.139141   37715 node_ready.go:35] waiting up to 6m0s for node "ha-931571-m02" to be "Ready" ...
	I1104 10:53:53.139247   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:53:53.139257   37715 round_trippers.go:469] Request Headers:
	I1104 10:53:53.139269   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:53:53.139278   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:53:53.152136   37715 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I1104 10:53:53.639369   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:53:53.639392   37715 round_trippers.go:469] Request Headers:
	I1104 10:53:53.639401   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:53:53.639405   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:53:53.643203   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:53:54.140047   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:53:54.140070   37715 round_trippers.go:469] Request Headers:
	I1104 10:53:54.140084   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:53:54.140089   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:53:54.147092   37715 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1104 10:53:54.639335   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:53:54.639355   37715 round_trippers.go:469] Request Headers:
	I1104 10:53:54.639363   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:53:54.639367   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:53:54.642506   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:53:55.140245   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:53:55.140265   37715 round_trippers.go:469] Request Headers:
	I1104 10:53:55.140273   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:53:55.140277   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:53:55.143824   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:53:55.144458   37715 node_ready.go:53] node "ha-931571-m02" has status "Ready":"False"
	I1104 10:53:55.639804   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:53:55.639830   37715 round_trippers.go:469] Request Headers:
	I1104 10:53:55.639841   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:53:55.639846   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:53:55.643096   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:53:56.140054   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:53:56.140078   37715 round_trippers.go:469] Request Headers:
	I1104 10:53:56.140089   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:53:56.140095   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:53:56.142960   37715 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1104 10:53:56.639891   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:53:56.639912   37715 round_trippers.go:469] Request Headers:
	I1104 10:53:56.639923   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:53:56.639928   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:53:56.642755   37715 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1104 10:53:57.139690   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:53:57.139713   37715 round_trippers.go:469] Request Headers:
	I1104 10:53:57.139725   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:53:57.139730   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:53:57.143324   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:53:57.639441   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:53:57.639460   37715 round_trippers.go:469] Request Headers:
	I1104 10:53:57.639469   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:53:57.639473   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:53:57.642433   37715 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1104 10:53:57.642947   37715 node_ready.go:53] node "ha-931571-m02" has status "Ready":"False"
	I1104 10:53:58.140368   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:53:58.140388   37715 round_trippers.go:469] Request Headers:
	I1104 10:53:58.140399   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:53:58.140404   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:53:58.144117   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:53:58.640193   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:53:58.640215   37715 round_trippers.go:469] Request Headers:
	I1104 10:53:58.640223   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:53:58.640227   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:53:58.643667   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:53:59.139304   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:53:59.139323   37715 round_trippers.go:469] Request Headers:
	I1104 10:53:59.139331   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:53:59.139335   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:53:59.142878   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:53:59.639323   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:53:59.639344   37715 round_trippers.go:469] Request Headers:
	I1104 10:53:59.639353   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:53:59.639357   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:53:59.642391   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:54:00.140288   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:54:00.140314   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:00.140323   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:00.140328   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:00.143357   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:54:00.143948   37715 node_ready.go:53] node "ha-931571-m02" has status "Ready":"False"
	I1104 10:54:00.639324   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:54:00.639348   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:00.639358   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:00.639365   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:00.643179   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:54:01.140315   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:54:01.140337   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:01.140345   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:01.140349   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:01.143491   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:54:01.639485   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:54:01.639510   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:01.639517   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:01.639522   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:01.642450   37715 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1104 10:54:02.140259   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:54:02.140291   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:02.140299   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:02.140304   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:02.143695   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:54:02.144128   37715 node_ready.go:53] node "ha-931571-m02" has status "Ready":"False"
	I1104 10:54:02.639414   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:54:02.639433   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:02.639442   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:02.639447   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:02.642409   37715 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1104 10:54:03.140294   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:54:03.140314   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:03.140327   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:03.140333   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:03.143301   37715 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1104 10:54:03.639404   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:54:03.639426   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:03.639437   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:03.639445   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:03.642367   37715 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1104 10:54:04.139716   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:54:04.139740   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:04.139750   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:04.139754   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:04.143000   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:54:04.640219   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:54:04.640245   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:04.640256   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:04.640262   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:04.643232   37715 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1104 10:54:04.643667   37715 node_ready.go:53] node "ha-931571-m02" has status "Ready":"False"
	I1104 10:54:05.140138   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:54:05.140162   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:05.140173   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:05.140178   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:05.142993   37715 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1104 10:54:05.639755   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:54:05.639775   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:05.639783   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:05.639802   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:05.643475   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:54:06.139372   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:54:06.139394   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:06.139402   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:06.139405   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:06.142509   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:54:06.639413   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:54:06.639442   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:06.639451   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:06.639456   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:06.642592   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:54:07.139655   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:54:07.139684   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:07.139694   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:07.139699   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:07.143170   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:54:07.143728   37715 node_ready.go:53] node "ha-931571-m02" has status "Ready":"False"
	I1104 10:54:07.640208   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:54:07.640228   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:07.640235   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:07.640240   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:07.643154   37715 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1104 10:54:08.140228   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:54:08.140261   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:08.140273   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:08.140278   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:08.142997   37715 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1104 10:54:08.639828   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:54:08.639854   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:08.639862   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:08.639866   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:08.643244   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:54:09.140126   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:54:09.140153   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:09.140166   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:09.140172   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:09.143278   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:54:09.143950   37715 node_ready.go:53] node "ha-931571-m02" has status "Ready":"False"
	I1104 10:54:09.639588   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:54:09.639610   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:09.639618   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:09.639623   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:09.642343   37715 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1104 10:54:10.139875   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:54:10.139898   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:10.139905   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:10.139909   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:10.143037   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:54:10.640013   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:54:10.640033   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:10.640042   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:10.640045   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:10.643833   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:54:10.644423   37715 node_ready.go:49] node "ha-931571-m02" has status "Ready":"True"
	I1104 10:54:10.644446   37715 node_ready.go:38] duration metric: took 17.505281339s for node "ha-931571-m02" to be "Ready" ...
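The repeated GETs of /api/v1/nodes/ha-931571-m02 above are the node_ready wait polling roughly every 500ms until the node reports the Ready condition. A minimal client-go loop with the same intent is sketched below; the node name and 6-minute budget come from the log, everything else is illustrative.

// Poll a node until its Ready condition is True, the same check the
// node_ready wait above performs.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func nodeReady(node *corev1.Node) bool {
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		node, err := client.CoreV1().Nodes().Get(context.TODO(), "ha-931571-m02", metav1.GetOptions{})
		if err == nil && nodeReady(node) {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond) // the log polls at roughly this interval
	}
	fmt.Println("timed out waiting for node to become Ready")
}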
	I1104 10:54:10.644459   37715 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1104 10:54:10.644564   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods
	I1104 10:54:10.644577   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:10.644587   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:10.644591   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:10.649476   37715 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1104 10:54:10.656031   37715 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-5ss4v" in "kube-system" namespace to be "Ready" ...
	I1104 10:54:10.656110   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ss4v
	I1104 10:54:10.656129   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:10.656138   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:10.656144   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:10.659282   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:54:10.659928   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571
	I1104 10:54:10.659944   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:10.659953   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:10.659958   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:10.662844   37715 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1104 10:54:10.663378   37715 pod_ready.go:93] pod "coredns-7c65d6cfc9-5ss4v" in "kube-system" namespace has status "Ready":"True"
	I1104 10:54:10.663402   37715 pod_ready.go:82] duration metric: took 7.344091ms for pod "coredns-7c65d6cfc9-5ss4v" in "kube-system" namespace to be "Ready" ...
	I1104 10:54:10.663423   37715 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-s9wb4" in "kube-system" namespace to be "Ready" ...
	I1104 10:54:10.663492   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s9wb4
	I1104 10:54:10.663502   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:10.663512   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:10.663521   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:10.666287   37715 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1104 10:54:10.666934   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571
	I1104 10:54:10.666950   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:10.666957   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:10.666960   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:10.669169   37715 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1104 10:54:10.669739   37715 pod_ready.go:93] pod "coredns-7c65d6cfc9-s9wb4" in "kube-system" namespace has status "Ready":"True"
	I1104 10:54:10.669760   37715 pod_ready.go:82] duration metric: took 6.3295ms for pod "coredns-7c65d6cfc9-s9wb4" in "kube-system" namespace to be "Ready" ...
	I1104 10:54:10.669770   37715 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-931571" in "kube-system" namespace to be "Ready" ...
	I1104 10:54:10.669830   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/etcd-ha-931571
	I1104 10:54:10.669842   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:10.669852   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:10.669859   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:10.672042   37715 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1104 10:54:10.672626   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571
	I1104 10:54:10.672642   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:10.672650   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:10.672653   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:10.674766   37715 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1104 10:54:10.675295   37715 pod_ready.go:93] pod "etcd-ha-931571" in "kube-system" namespace has status "Ready":"True"
	I1104 10:54:10.675317   37715 pod_ready.go:82] duration metric: took 5.539368ms for pod "etcd-ha-931571" in "kube-system" namespace to be "Ready" ...
	I1104 10:54:10.675329   37715 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-931571-m02" in "kube-system" namespace to be "Ready" ...
	I1104 10:54:10.675390   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/etcd-ha-931571-m02
	I1104 10:54:10.675398   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:10.675405   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:10.675410   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:10.677591   37715 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1104 10:54:10.678184   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:54:10.678197   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:10.678204   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:10.678208   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:10.680155   37715 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1104 10:54:10.680700   37715 pod_ready.go:93] pod "etcd-ha-931571-m02" in "kube-system" namespace has status "Ready":"True"
	I1104 10:54:10.680721   37715 pod_ready.go:82] duration metric: took 5.381074ms for pod "etcd-ha-931571-m02" in "kube-system" namespace to be "Ready" ...
	I1104 10:54:10.680737   37715 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-931571" in "kube-system" namespace to be "Ready" ...
	I1104 10:54:10.840055   37715 request.go:632] Waited for 159.25235ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-931571
	I1104 10:54:10.840140   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-931571
	I1104 10:54:10.840150   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:10.840160   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:10.840171   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:10.843356   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:54:11.040534   37715 request.go:632] Waited for 196.430173ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/nodes/ha-931571
	I1104 10:54:11.040604   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571
	I1104 10:54:11.040615   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:11.040623   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:11.040630   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:11.043768   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:54:11.044382   37715 pod_ready.go:93] pod "kube-apiserver-ha-931571" in "kube-system" namespace has status "Ready":"True"
	I1104 10:54:11.044403   37715 pod_ready.go:82] duration metric: took 363.65714ms for pod "kube-apiserver-ha-931571" in "kube-system" namespace to be "Ready" ...
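The request.go:632 "Waited for ... due to client-side throttling" lines above come from client-go's built-in client-side rate limiter; the kapi.go line earlier shows QPS:0, Burst:0, i.e. the defaults (roughly 5 requests per second with a small burst), which is why rapid sequential status polling gets delayed. If higher limits were wanted, they would be set on the rest.Config as in the sketch below; the values are only examples, not what minikube uses.

// Raise client-go's client-side rate limits so bursty status polling is not
// artificially delayed. QPS/Burst values here are illustrative.
package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	// Leaving QPS/Burst at zero (as printed in the kapi.go line above) means
	// "use the defaults", which is what triggers the throttling messages.
	cfg.QPS = 50
	cfg.Burst = 100

	client := kubernetes.NewForConfigOrDie(cfg)
	_ = client
	fmt.Println("client configured with higher client-side rate limits")
}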
	I1104 10:54:11.044412   37715 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-931571-m02" in "kube-system" namespace to be "Ready" ...
	I1104 10:54:11.240746   37715 request.go:632] Waited for 196.265081ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-931571-m02
	I1104 10:54:11.240800   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-931571-m02
	I1104 10:54:11.240805   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:11.240812   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:11.240823   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:11.244055   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:54:11.441020   37715 request.go:632] Waited for 196.31895ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:54:11.441076   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:54:11.441082   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:11.441089   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:11.441092   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:11.443940   37715 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1104 10:54:11.444396   37715 pod_ready.go:93] pod "kube-apiserver-ha-931571-m02" in "kube-system" namespace has status "Ready":"True"
	I1104 10:54:11.444417   37715 pod_ready.go:82] duration metric: took 399.997294ms for pod "kube-apiserver-ha-931571-m02" in "kube-system" namespace to be "Ready" ...
	I1104 10:54:11.444431   37715 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-931571" in "kube-system" namespace to be "Ready" ...
	I1104 10:54:11.640978   37715 request.go:632] Waited for 196.455451ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-931571
	I1104 10:54:11.641045   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-931571
	I1104 10:54:11.641052   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:11.641063   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:11.641068   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:11.644104   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:54:11.840124   37715 request.go:632] Waited for 195.279381ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/nodes/ha-931571
	I1104 10:54:11.840175   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571
	I1104 10:54:11.840180   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:11.840189   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:11.840204   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:11.843139   37715 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1104 10:54:11.843784   37715 pod_ready.go:93] pod "kube-controller-manager-ha-931571" in "kube-system" namespace has status "Ready":"True"
	I1104 10:54:11.843806   37715 pod_ready.go:82] duration metric: took 399.367004ms for pod "kube-controller-manager-ha-931571" in "kube-system" namespace to be "Ready" ...
	I1104 10:54:11.843816   37715 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-931571-m02" in "kube-system" namespace to be "Ready" ...
	I1104 10:54:12.040826   37715 request.go:632] Waited for 196.934959ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-931571-m02
	I1104 10:54:12.040888   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-931571-m02
	I1104 10:54:12.040896   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:12.040905   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:12.040912   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:12.044321   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:54:12.240220   37715 request.go:632] Waited for 195.323321ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:54:12.240295   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:54:12.240302   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:12.240311   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:12.240340   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:12.243972   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:54:12.244423   37715 pod_ready.go:93] pod "kube-controller-manager-ha-931571-m02" in "kube-system" namespace has status "Ready":"True"
	I1104 10:54:12.244441   37715 pod_ready.go:82] duration metric: took 400.61624ms for pod "kube-controller-manager-ha-931571-m02" in "kube-system" namespace to be "Ready" ...
	I1104 10:54:12.244452   37715 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-bvk6r" in "kube-system" namespace to be "Ready" ...
	I1104 10:54:12.440627   37715 request.go:632] Waited for 196.096769ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bvk6r
	I1104 10:54:12.440687   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bvk6r
	I1104 10:54:12.440692   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:12.440700   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:12.440704   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:12.443759   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:54:12.640675   37715 request.go:632] Waited for 196.368451ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/nodes/ha-931571
	I1104 10:54:12.640746   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571
	I1104 10:54:12.640753   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:12.640764   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:12.640771   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:12.645533   37715 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1104 10:54:12.646078   37715 pod_ready.go:93] pod "kube-proxy-bvk6r" in "kube-system" namespace has status "Ready":"True"
	I1104 10:54:12.646098   37715 pod_ready.go:82] duration metric: took 401.639494ms for pod "kube-proxy-bvk6r" in "kube-system" namespace to be "Ready" ...
	I1104 10:54:12.646111   37715 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-wz92s" in "kube-system" namespace to be "Ready" ...
	I1104 10:54:12.840342   37715 request.go:632] Waited for 194.16235ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wz92s
	I1104 10:54:12.840395   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wz92s
	I1104 10:54:12.840400   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:12.840407   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:12.840413   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:12.844505   37715 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1104 10:54:13.040627   37715 request.go:632] Waited for 195.405277ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:54:13.040697   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:54:13.040706   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:13.040713   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:13.040717   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:13.043654   37715 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1104 10:54:13.044440   37715 pod_ready.go:93] pod "kube-proxy-wz92s" in "kube-system" namespace has status "Ready":"True"
	I1104 10:54:13.044461   37715 pod_ready.go:82] duration metric: took 398.343689ms for pod "kube-proxy-wz92s" in "kube-system" namespace to be "Ready" ...
	I1104 10:54:13.044472   37715 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-931571" in "kube-system" namespace to be "Ready" ...
	I1104 10:54:13.240500   37715 request.go:632] Waited for 195.966375ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-931571
	I1104 10:54:13.240580   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-931571
	I1104 10:54:13.240589   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:13.240599   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:13.240606   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:13.243607   37715 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1104 10:54:13.440419   37715 request.go:632] Waited for 196.059783ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/nodes/ha-931571
	I1104 10:54:13.440489   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571
	I1104 10:54:13.440495   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:13.440502   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:13.440507   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:13.443953   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:54:13.444535   37715 pod_ready.go:93] pod "kube-scheduler-ha-931571" in "kube-system" namespace has status "Ready":"True"
	I1104 10:54:13.444560   37715 pod_ready.go:82] duration metric: took 400.080635ms for pod "kube-scheduler-ha-931571" in "kube-system" namespace to be "Ready" ...
	I1104 10:54:13.444575   37715 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-931571-m02" in "kube-system" namespace to be "Ready" ...
	I1104 10:54:13.640646   37715 request.go:632] Waited for 195.95641ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-931571-m02
	I1104 10:54:13.640702   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-931571-m02
	I1104 10:54:13.640707   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:13.640716   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:13.640720   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:13.644170   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:54:13.840111   37715 request.go:632] Waited for 195.309512ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:54:13.840184   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:54:13.840189   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:13.840197   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:13.840205   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:13.843622   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:54:13.844295   37715 pod_ready.go:93] pod "kube-scheduler-ha-931571-m02" in "kube-system" namespace has status "Ready":"True"
	I1104 10:54:13.844319   37715 pod_ready.go:82] duration metric: took 399.734957ms for pod "kube-scheduler-ha-931571-m02" in "kube-system" namespace to be "Ready" ...
	I1104 10:54:13.844333   37715 pod_ready.go:39] duration metric: took 3.199846594s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
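The loop above issues paired GETs (pod, then node) until every system-critical pod reports Ready, throttling itself client-side between requests. A minimal client-go sketch of the per-pod check (the helper name and poll intervals are illustrative, not minikube's actual code):

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitPodReady polls a single pod until its Ready condition is True or the timeout expires.
func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	return wait.PollUntilContextTimeout(ctx, 2*time.Second, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // transient API errors: keep polling
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}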
	I1104 10:54:13.844350   37715 api_server.go:52] waiting for apiserver process to appear ...
	I1104 10:54:13.844417   37715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 10:54:13.858847   37715 api_server.go:72] duration metric: took 21.011018077s to wait for apiserver process to appear ...
	I1104 10:54:13.858869   37715 api_server.go:88] waiting for apiserver healthz status ...
	I1104 10:54:13.858890   37715 api_server.go:253] Checking apiserver healthz at https://192.168.39.67:8443/healthz ...
	I1104 10:54:13.863051   37715 api_server.go:279] https://192.168.39.67:8443/healthz returned 200:
	ok
	I1104 10:54:13.863110   37715 round_trippers.go:463] GET https://192.168.39.67:8443/version
	I1104 10:54:13.863115   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:13.863122   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:13.863126   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:13.864098   37715 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1104 10:54:13.864181   37715 api_server.go:141] control plane version: v1.31.2
	I1104 10:54:13.864195   37715 api_server.go:131] duration metric: took 5.319439ms to wait for apiserver health ...
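The health wait above is a plain HTTPS GET against /healthz followed by /version. A rough sketch with net/http; skipping TLS verification here is purely for illustration, the real client authenticates with the cluster's CA and certs:

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// checkHealthz returns nil once GET <base>/healthz answers 200 with body "ok".
func checkHealthz(base string) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}, // sketch only
	}
	resp, err := client.Get(base + "/healthz")
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK || string(body) != "ok" {
		return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
	}
	return nil
}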
	I1104 10:54:13.864202   37715 system_pods.go:43] waiting for kube-system pods to appear ...
	I1104 10:54:14.040623   37715 request.go:632] Waited for 176.353381ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods
	I1104 10:54:14.040696   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods
	I1104 10:54:14.040702   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:14.040709   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:14.040714   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:14.045262   37715 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1104 10:54:14.050254   37715 system_pods.go:59] 17 kube-system pods found
	I1104 10:54:14.050280   37715 system_pods.go:61] "coredns-7c65d6cfc9-5ss4v" [b1994bcf-ce9e-4a5e-90e0-5f3e284218f4] Running
	I1104 10:54:14.050285   37715 system_pods.go:61] "coredns-7c65d6cfc9-s9wb4" [fd497087-82a1-4173-a1ca-87f47225cd80] Running
	I1104 10:54:14.050289   37715 system_pods.go:61] "etcd-ha-931571" [fdadf64d-457c-4f54-8824-770c47938a4d] Running
	I1104 10:54:14.050292   37715 system_pods.go:61] "etcd-ha-931571-m02" [b40b2a26-19b6-47f9-af25-dcbffbe55156] Running
	I1104 10:54:14.050296   37715 system_pods.go:61] "kindnet-2n2ws" [f43095ed-404a-4c99-a271-a8c7fb6a3559] Running
	I1104 10:54:14.050301   37715 system_pods.go:61] "kindnet-bg4z6" [43eed78a-1357-4607-bff5-a1c896da4af2] Running
	I1104 10:54:14.050305   37715 system_pods.go:61] "kube-apiserver-ha-931571" [2ba59318-d54d-4948-8133-2ff2afa001e5] Running
	I1104 10:54:14.050310   37715 system_pods.go:61] "kube-apiserver-ha-931571-m02" [6a6bfd7d-cec1-4e07-90bf-c933f871eef1] Running
	I1104 10:54:14.050315   37715 system_pods.go:61] "kube-controller-manager-ha-931571" [62d03af1-aa91-4ebf-af21-19f760956cf5] Running
	I1104 10:54:14.050320   37715 system_pods.go:61] "kube-controller-manager-ha-931571-m02" [96d65b2a-66c8-411a-bb4b-5ff222b7832d] Running
	I1104 10:54:14.050327   37715 system_pods.go:61] "kube-proxy-bvk6r" [5f293726-a3a3-4398-9b70-ca8f83c66d7c] Running
	I1104 10:54:14.050332   37715 system_pods.go:61] "kube-proxy-wz92s" [a2e065c2-9645-44e4-b4e8-dc787b0c6662] Running
	I1104 10:54:14.050340   37715 system_pods.go:61] "kube-scheduler-ha-931571" [8bc3d9c3-2b41-4f54-a511-34939218fa5b] Running
	I1104 10:54:14.050345   37715 system_pods.go:61] "kube-scheduler-ha-931571-m02" [4329adba-71fa-425a-b379-6e52af90b458] Running
	I1104 10:54:14.050354   37715 system_pods.go:61] "kube-vip-ha-931571" [f9948426-2770-47cf-b610-ecfea5b17be9] Running / Ready:ContainersNotReady (containers with unready status: [kube-vip]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-vip])
	I1104 10:54:14.050364   37715 system_pods.go:61] "kube-vip-ha-931571-m02" [860a8a9e-b839-4c23-80b5-415a62fca083] Running / Ready:ContainersNotReady (containers with unready status: [kube-vip]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-vip])
	I1104 10:54:14.050370   37715 system_pods.go:61] "storage-provisioner" [3eb09a1d-0033-428a-a305-aa2901b20566] Running
	I1104 10:54:14.050377   37715 system_pods.go:74] duration metric: took 186.169669ms to wait for pod list to return data ...
	I1104 10:54:14.050387   37715 default_sa.go:34] waiting for default service account to be created ...
	I1104 10:54:14.240854   37715 request.go:632] Waited for 190.370277ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/default/serviceaccounts
	I1104 10:54:14.240922   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/default/serviceaccounts
	I1104 10:54:14.240929   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:14.240940   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:14.240963   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:14.244687   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:54:14.244932   37715 default_sa.go:45] found service account: "default"
	I1104 10:54:14.244952   37715 default_sa.go:55] duration metric: took 194.560071ms for default service account to be created ...
	I1104 10:54:14.244961   37715 system_pods.go:116] waiting for k8s-apps to be running ...
	I1104 10:54:14.440692   37715 request.go:632] Waited for 195.67345ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods
	I1104 10:54:14.440751   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods
	I1104 10:54:14.440757   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:14.440772   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:14.440780   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:14.444830   37715 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1104 10:54:14.449745   37715 system_pods.go:86] 17 kube-system pods found
	I1104 10:54:14.449772   37715 system_pods.go:89] "coredns-7c65d6cfc9-5ss4v" [b1994bcf-ce9e-4a5e-90e0-5f3e284218f4] Running
	I1104 10:54:14.449778   37715 system_pods.go:89] "coredns-7c65d6cfc9-s9wb4" [fd497087-82a1-4173-a1ca-87f47225cd80] Running
	I1104 10:54:14.449783   37715 system_pods.go:89] "etcd-ha-931571" [fdadf64d-457c-4f54-8824-770c47938a4d] Running
	I1104 10:54:14.449789   37715 system_pods.go:89] "etcd-ha-931571-m02" [b40b2a26-19b6-47f9-af25-dcbffbe55156] Running
	I1104 10:54:14.449795   37715 system_pods.go:89] "kindnet-2n2ws" [f43095ed-404a-4c99-a271-a8c7fb6a3559] Running
	I1104 10:54:14.449800   37715 system_pods.go:89] "kindnet-bg4z6" [43eed78a-1357-4607-bff5-a1c896da4af2] Running
	I1104 10:54:14.449807   37715 system_pods.go:89] "kube-apiserver-ha-931571" [2ba59318-d54d-4948-8133-2ff2afa001e5] Running
	I1104 10:54:14.449812   37715 system_pods.go:89] "kube-apiserver-ha-931571-m02" [6a6bfd7d-cec1-4e07-90bf-c933f871eef1] Running
	I1104 10:54:14.449816   37715 system_pods.go:89] "kube-controller-manager-ha-931571" [62d03af1-aa91-4ebf-af21-19f760956cf5] Running
	I1104 10:54:14.449821   37715 system_pods.go:89] "kube-controller-manager-ha-931571-m02" [96d65b2a-66c8-411a-bb4b-5ff222b7832d] Running
	I1104 10:54:14.449826   37715 system_pods.go:89] "kube-proxy-bvk6r" [5f293726-a3a3-4398-9b70-ca8f83c66d7c] Running
	I1104 10:54:14.449834   37715 system_pods.go:89] "kube-proxy-wz92s" [a2e065c2-9645-44e4-b4e8-dc787b0c6662] Running
	I1104 10:54:14.449839   37715 system_pods.go:89] "kube-scheduler-ha-931571" [8bc3d9c3-2b41-4f54-a511-34939218fa5b] Running
	I1104 10:54:14.449848   37715 system_pods.go:89] "kube-scheduler-ha-931571-m02" [4329adba-71fa-425a-b379-6e52af90b458] Running
	I1104 10:54:14.449857   37715 system_pods.go:89] "kube-vip-ha-931571" [f9948426-2770-47cf-b610-ecfea5b17be9] Running / Ready:ContainersNotReady (containers with unready status: [kube-vip]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-vip])
	I1104 10:54:14.449870   37715 system_pods.go:89] "kube-vip-ha-931571-m02" [860a8a9e-b839-4c23-80b5-415a62fca083] Running / Ready:ContainersNotReady (containers with unready status: [kube-vip]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-vip])
	I1104 10:54:14.449878   37715 system_pods.go:89] "storage-provisioner" [3eb09a1d-0033-428a-a305-aa2901b20566] Running
	I1104 10:54:14.449891   37715 system_pods.go:126] duration metric: took 204.923702ms to wait for k8s-apps to be running ...
	I1104 10:54:14.449903   37715 system_svc.go:44] waiting for kubelet service to be running ....
	I1104 10:54:14.449956   37715 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1104 10:54:14.464950   37715 system_svc.go:56] duration metric: took 15.038755ms WaitForService to wait for kubelet
	I1104 10:54:14.464983   37715 kubeadm.go:582] duration metric: took 21.617159665s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1104 10:54:14.465005   37715 node_conditions.go:102] verifying NodePressure condition ...
	I1104 10:54:14.640444   37715 request.go:632] Waited for 175.359531ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/nodes
	I1104 10:54:14.640495   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes
	I1104 10:54:14.640507   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:14.640514   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:14.640531   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:14.644308   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:54:14.645138   37715 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1104 10:54:14.645162   37715 node_conditions.go:123] node cpu capacity is 2
	I1104 10:54:14.645172   37715 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1104 10:54:14.645175   37715 node_conditions.go:123] node cpu capacity is 2
	I1104 10:54:14.645180   37715 node_conditions.go:105] duration metric: took 180.169842ms to run NodePressure ...
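The NodePressure step lists the nodes and reads their capacity fields (ephemeral-storage, cpu), which is where the two capacity lines above come from. An approximate client-go equivalent, assuming an already-built clientset:

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// printNodeCapacity lists every node and reports its ephemeral-storage and CPU capacity.
func printNodeCapacity(ctx context.Context, cs kubernetes.Interface) error {
	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
	}
	return nil
}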
	I1104 10:54:14.645191   37715 start.go:241] waiting for startup goroutines ...
	I1104 10:54:14.645220   37715 start.go:255] writing updated cluster config ...
	I1104 10:54:14.647434   37715 out.go:201] 
	I1104 10:54:14.649030   37715 config.go:182] Loaded profile config "ha-931571": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1104 10:54:14.649124   37715 profile.go:143] Saving config to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/config.json ...
	I1104 10:54:14.650881   37715 out.go:177] * Starting "ha-931571-m03" control-plane node in "ha-931571" cluster
	I1104 10:54:14.652021   37715 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1104 10:54:14.652041   37715 cache.go:56] Caching tarball of preloaded images
	I1104 10:54:14.652128   37715 preload.go:172] Found /home/jenkins/minikube-integration/19906-19898/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1104 10:54:14.652138   37715 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1104 10:54:14.652229   37715 profile.go:143] Saving config to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/config.json ...
	I1104 10:54:14.652384   37715 start.go:360] acquireMachinesLock for ha-931571-m03: {Name:mka4ed965095cfb58c9b0f5916edff39b4236a2f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1104 10:54:14.652421   37715 start.go:364] duration metric: took 20.345µs to acquireMachinesLock for "ha-931571-m03"
	I1104 10:54:14.652439   37715 start.go:93] Provisioning new machine with config: &{Name:ha-931571 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-931571 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.67 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.245 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1104 10:54:14.652552   37715 start.go:125] createHost starting for "m03" (driver="kvm2")
	I1104 10:54:14.653932   37715 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1104 10:54:14.654009   37715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 10:54:14.654042   37715 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 10:54:14.669012   37715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35959
	I1104 10:54:14.669516   37715 main.go:141] libmachine: () Calling .GetVersion
	I1104 10:54:14.669968   37715 main.go:141] libmachine: Using API Version  1
	I1104 10:54:14.669986   37715 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 10:54:14.670370   37715 main.go:141] libmachine: () Calling .GetMachineName
	I1104 10:54:14.670550   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetMachineName
	I1104 10:54:14.670697   37715 main.go:141] libmachine: (ha-931571-m03) Calling .DriverName
	I1104 10:54:14.670887   37715 start.go:159] libmachine.API.Create for "ha-931571" (driver="kvm2")
	I1104 10:54:14.670919   37715 client.go:168] LocalClient.Create starting
	I1104 10:54:14.670952   37715 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem
	I1104 10:54:14.670990   37715 main.go:141] libmachine: Decoding PEM data...
	I1104 10:54:14.671004   37715 main.go:141] libmachine: Parsing certificate...
	I1104 10:54:14.671047   37715 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem
	I1104 10:54:14.671066   37715 main.go:141] libmachine: Decoding PEM data...
	I1104 10:54:14.671074   37715 main.go:141] libmachine: Parsing certificate...
	I1104 10:54:14.671092   37715 main.go:141] libmachine: Running pre-create checks...
	I1104 10:54:14.671100   37715 main.go:141] libmachine: (ha-931571-m03) Calling .PreCreateCheck
	I1104 10:54:14.671295   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetConfigRaw
	I1104 10:54:14.671735   37715 main.go:141] libmachine: Creating machine...
	I1104 10:54:14.671748   37715 main.go:141] libmachine: (ha-931571-m03) Calling .Create
	I1104 10:54:14.671896   37715 main.go:141] libmachine: (ha-931571-m03) Creating KVM machine...
	I1104 10:54:14.673127   37715 main.go:141] libmachine: (ha-931571-m03) DBG | found existing default KVM network
	I1104 10:54:14.673275   37715 main.go:141] libmachine: (ha-931571-m03) DBG | found existing private KVM network mk-ha-931571
	I1104 10:54:14.673433   37715 main.go:141] libmachine: (ha-931571-m03) Setting up store path in /home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571-m03 ...
	I1104 10:54:14.673458   37715 main.go:141] libmachine: (ha-931571-m03) Building disk image from file:///home/jenkins/minikube-integration/19906-19898/.minikube/cache/iso/amd64/minikube-v1.34.0-1730282777-19883-amd64.iso
	I1104 10:54:14.673532   37715 main.go:141] libmachine: (ha-931571-m03) DBG | I1104 10:54:14.673413   38465 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19906-19898/.minikube
	I1104 10:54:14.673618   37715 main.go:141] libmachine: (ha-931571-m03) Downloading /home/jenkins/minikube-integration/19906-19898/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19906-19898/.minikube/cache/iso/amd64/minikube-v1.34.0-1730282777-19883-amd64.iso...
	I1104 10:54:14.913416   37715 main.go:141] libmachine: (ha-931571-m03) DBG | I1104 10:54:14.913288   38465 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571-m03/id_rsa...
	I1104 10:54:15.078787   37715 main.go:141] libmachine: (ha-931571-m03) DBG | I1104 10:54:15.078642   38465 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571-m03/ha-931571-m03.rawdisk...
	I1104 10:54:15.078832   37715 main.go:141] libmachine: (ha-931571-m03) DBG | Writing magic tar header
	I1104 10:54:15.078845   37715 main.go:141] libmachine: (ha-931571-m03) DBG | Writing SSH key tar header
	I1104 10:54:15.078858   37715 main.go:141] libmachine: (ha-931571-m03) DBG | I1104 10:54:15.078756   38465 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571-m03 ...
	I1104 10:54:15.078874   37715 main.go:141] libmachine: (ha-931571-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571-m03
	I1104 10:54:15.078881   37715 main.go:141] libmachine: (ha-931571-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19906-19898/.minikube/machines
	I1104 10:54:15.078888   37715 main.go:141] libmachine: (ha-931571-m03) Setting executable bit set on /home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571-m03 (perms=drwx------)
	I1104 10:54:15.078896   37715 main.go:141] libmachine: (ha-931571-m03) Setting executable bit set on /home/jenkins/minikube-integration/19906-19898/.minikube/machines (perms=drwxr-xr-x)
	I1104 10:54:15.078902   37715 main.go:141] libmachine: (ha-931571-m03) Setting executable bit set on /home/jenkins/minikube-integration/19906-19898/.minikube (perms=drwxr-xr-x)
	I1104 10:54:15.078911   37715 main.go:141] libmachine: (ha-931571-m03) Setting executable bit set on /home/jenkins/minikube-integration/19906-19898 (perms=drwxrwxr-x)
	I1104 10:54:15.078919   37715 main.go:141] libmachine: (ha-931571-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1104 10:54:15.078931   37715 main.go:141] libmachine: (ha-931571-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1104 10:54:15.078951   37715 main.go:141] libmachine: (ha-931571-m03) Creating domain...
	I1104 10:54:15.078968   37715 main.go:141] libmachine: (ha-931571-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19906-19898/.minikube
	I1104 10:54:15.078978   37715 main.go:141] libmachine: (ha-931571-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19906-19898
	I1104 10:54:15.078985   37715 main.go:141] libmachine: (ha-931571-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1104 10:54:15.078991   37715 main.go:141] libmachine: (ha-931571-m03) DBG | Checking permissions on dir: /home/jenkins
	I1104 10:54:15.078997   37715 main.go:141] libmachine: (ha-931571-m03) DBG | Checking permissions on dir: /home
	I1104 10:54:15.079003   37715 main.go:141] libmachine: (ha-931571-m03) DBG | Skipping /home - not owner
	I1104 10:54:15.079942   37715 main.go:141] libmachine: (ha-931571-m03) define libvirt domain using xml: 
	I1104 10:54:15.079975   37715 main.go:141] libmachine: (ha-931571-m03) <domain type='kvm'>
	I1104 10:54:15.079986   37715 main.go:141] libmachine: (ha-931571-m03)   <name>ha-931571-m03</name>
	I1104 10:54:15.079997   37715 main.go:141] libmachine: (ha-931571-m03)   <memory unit='MiB'>2200</memory>
	I1104 10:54:15.080003   37715 main.go:141] libmachine: (ha-931571-m03)   <vcpu>2</vcpu>
	I1104 10:54:15.080007   37715 main.go:141] libmachine: (ha-931571-m03)   <features>
	I1104 10:54:15.080011   37715 main.go:141] libmachine: (ha-931571-m03)     <acpi/>
	I1104 10:54:15.080015   37715 main.go:141] libmachine: (ha-931571-m03)     <apic/>
	I1104 10:54:15.080020   37715 main.go:141] libmachine: (ha-931571-m03)     <pae/>
	I1104 10:54:15.080024   37715 main.go:141] libmachine: (ha-931571-m03)     
	I1104 10:54:15.080028   37715 main.go:141] libmachine: (ha-931571-m03)   </features>
	I1104 10:54:15.080032   37715 main.go:141] libmachine: (ha-931571-m03)   <cpu mode='host-passthrough'>
	I1104 10:54:15.080037   37715 main.go:141] libmachine: (ha-931571-m03)   
	I1104 10:54:15.080040   37715 main.go:141] libmachine: (ha-931571-m03)   </cpu>
	I1104 10:54:15.080045   37715 main.go:141] libmachine: (ha-931571-m03)   <os>
	I1104 10:54:15.080049   37715 main.go:141] libmachine: (ha-931571-m03)     <type>hvm</type>
	I1104 10:54:15.080054   37715 main.go:141] libmachine: (ha-931571-m03)     <boot dev='cdrom'/>
	I1104 10:54:15.080061   37715 main.go:141] libmachine: (ha-931571-m03)     <boot dev='hd'/>
	I1104 10:54:15.080066   37715 main.go:141] libmachine: (ha-931571-m03)     <bootmenu enable='no'/>
	I1104 10:54:15.080070   37715 main.go:141] libmachine: (ha-931571-m03)   </os>
	I1104 10:54:15.080075   37715 main.go:141] libmachine: (ha-931571-m03)   <devices>
	I1104 10:54:15.080079   37715 main.go:141] libmachine: (ha-931571-m03)     <disk type='file' device='cdrom'>
	I1104 10:54:15.080088   37715 main.go:141] libmachine: (ha-931571-m03)       <source file='/home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571-m03/boot2docker.iso'/>
	I1104 10:54:15.080096   37715 main.go:141] libmachine: (ha-931571-m03)       <target dev='hdc' bus='scsi'/>
	I1104 10:54:15.080101   37715 main.go:141] libmachine: (ha-931571-m03)       <readonly/>
	I1104 10:54:15.080106   37715 main.go:141] libmachine: (ha-931571-m03)     </disk>
	I1104 10:54:15.080111   37715 main.go:141] libmachine: (ha-931571-m03)     <disk type='file' device='disk'>
	I1104 10:54:15.080119   37715 main.go:141] libmachine: (ha-931571-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1104 10:54:15.080127   37715 main.go:141] libmachine: (ha-931571-m03)       <source file='/home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571-m03/ha-931571-m03.rawdisk'/>
	I1104 10:54:15.080134   37715 main.go:141] libmachine: (ha-931571-m03)       <target dev='hda' bus='virtio'/>
	I1104 10:54:15.080145   37715 main.go:141] libmachine: (ha-931571-m03)     </disk>
	I1104 10:54:15.080149   37715 main.go:141] libmachine: (ha-931571-m03)     <interface type='network'>
	I1104 10:54:15.080154   37715 main.go:141] libmachine: (ha-931571-m03)       <source network='mk-ha-931571'/>
	I1104 10:54:15.080163   37715 main.go:141] libmachine: (ha-931571-m03)       <model type='virtio'/>
	I1104 10:54:15.080168   37715 main.go:141] libmachine: (ha-931571-m03)     </interface>
	I1104 10:54:15.080172   37715 main.go:141] libmachine: (ha-931571-m03)     <interface type='network'>
	I1104 10:54:15.080177   37715 main.go:141] libmachine: (ha-931571-m03)       <source network='default'/>
	I1104 10:54:15.080181   37715 main.go:141] libmachine: (ha-931571-m03)       <model type='virtio'/>
	I1104 10:54:15.080186   37715 main.go:141] libmachine: (ha-931571-m03)     </interface>
	I1104 10:54:15.080191   37715 main.go:141] libmachine: (ha-931571-m03)     <serial type='pty'>
	I1104 10:54:15.080196   37715 main.go:141] libmachine: (ha-931571-m03)       <target port='0'/>
	I1104 10:54:15.080200   37715 main.go:141] libmachine: (ha-931571-m03)     </serial>
	I1104 10:54:15.080205   37715 main.go:141] libmachine: (ha-931571-m03)     <console type='pty'>
	I1104 10:54:15.080209   37715 main.go:141] libmachine: (ha-931571-m03)       <target type='serial' port='0'/>
	I1104 10:54:15.080214   37715 main.go:141] libmachine: (ha-931571-m03)     </console>
	I1104 10:54:15.080218   37715 main.go:141] libmachine: (ha-931571-m03)     <rng model='virtio'>
	I1104 10:54:15.080224   37715 main.go:141] libmachine: (ha-931571-m03)       <backend model='random'>/dev/random</backend>
	I1104 10:54:15.080230   37715 main.go:141] libmachine: (ha-931571-m03)     </rng>
	I1104 10:54:15.080236   37715 main.go:141] libmachine: (ha-931571-m03)     
	I1104 10:54:15.080243   37715 main.go:141] libmachine: (ha-931571-m03)     
	I1104 10:54:15.080248   37715 main.go:141] libmachine: (ha-931571-m03)   </devices>
	I1104 10:54:15.080254   37715 main.go:141] libmachine: (ha-931571-m03) </domain>
	I1104 10:54:15.080261   37715 main.go:141] libmachine: (ha-931571-m03) 
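The XML above is the libvirt domain definition generated for the new node. The kvm2 driver applies it through libvirt's API; for illustration only, an equivalent definition could be fed to libvirt from Go by shelling out to virsh (paths and names below are placeholders, not what the driver actually runs):

import (
	"fmt"
	"os/exec"
)

// defineAndStartDomain registers a domain XML file with libvirt and boots it via virsh.
func defineAndStartDomain(xmlPath, name string) error {
	if out, err := exec.Command("virsh", "--connect", "qemu:///system", "define", xmlPath).CombinedOutput(); err != nil {
		return fmt.Errorf("virsh define: %v: %s", err, out)
	}
	if out, err := exec.Command("virsh", "--connect", "qemu:///system", "start", name).CombinedOutput(); err != nil {
		return fmt.Errorf("virsh start: %v: %s", err, out)
	}
	return nil
}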
	I1104 10:54:15.087034   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined MAC address 52:54:00:1d:68:f5 in network default
	I1104 10:54:15.087544   37715 main.go:141] libmachine: (ha-931571-m03) Ensuring networks are active...
	I1104 10:54:15.087568   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:15.088354   37715 main.go:141] libmachine: (ha-931571-m03) Ensuring network default is active
	I1104 10:54:15.088653   37715 main.go:141] libmachine: (ha-931571-m03) Ensuring network mk-ha-931571 is active
	I1104 10:54:15.089053   37715 main.go:141] libmachine: (ha-931571-m03) Getting domain xml...
	I1104 10:54:15.089835   37715 main.go:141] libmachine: (ha-931571-m03) Creating domain...
	I1104 10:54:16.314267   37715 main.go:141] libmachine: (ha-931571-m03) Waiting to get IP...
	I1104 10:54:16.315295   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:16.315802   37715 main.go:141] libmachine: (ha-931571-m03) DBG | unable to find current IP address of domain ha-931571-m03 in network mk-ha-931571
	I1104 10:54:16.315837   37715 main.go:141] libmachine: (ha-931571-m03) DBG | I1104 10:54:16.315784   38465 retry.go:31] will retry after 211.49676ms: waiting for machine to come up
	I1104 10:54:16.528417   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:16.528897   37715 main.go:141] libmachine: (ha-931571-m03) DBG | unable to find current IP address of domain ha-931571-m03 in network mk-ha-931571
	I1104 10:54:16.528927   37715 main.go:141] libmachine: (ha-931571-m03) DBG | I1104 10:54:16.528846   38465 retry.go:31] will retry after 340.441068ms: waiting for machine to come up
	I1104 10:54:16.871525   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:16.871971   37715 main.go:141] libmachine: (ha-931571-m03) DBG | unable to find current IP address of domain ha-931571-m03 in network mk-ha-931571
	I1104 10:54:16.871997   37715 main.go:141] libmachine: (ha-931571-m03) DBG | I1104 10:54:16.871910   38465 retry.go:31] will retry after 446.439393ms: waiting for machine to come up
	I1104 10:54:17.319543   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:17.320106   37715 main.go:141] libmachine: (ha-931571-m03) DBG | unable to find current IP address of domain ha-931571-m03 in network mk-ha-931571
	I1104 10:54:17.320137   37715 main.go:141] libmachine: (ha-931571-m03) DBG | I1104 10:54:17.320042   38465 retry.go:31] will retry after 381.839641ms: waiting for machine to come up
	I1104 10:54:17.703288   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:17.703811   37715 main.go:141] libmachine: (ha-931571-m03) DBG | unable to find current IP address of domain ha-931571-m03 in network mk-ha-931571
	I1104 10:54:17.703840   37715 main.go:141] libmachine: (ha-931571-m03) DBG | I1104 10:54:17.703750   38465 retry.go:31] will retry after 593.813893ms: waiting for machine to come up
	I1104 10:54:18.299510   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:18.300023   37715 main.go:141] libmachine: (ha-931571-m03) DBG | unable to find current IP address of domain ha-931571-m03 in network mk-ha-931571
	I1104 10:54:18.300055   37715 main.go:141] libmachine: (ha-931571-m03) DBG | I1104 10:54:18.299939   38465 retry.go:31] will retry after 849.789348ms: waiting for machine to come up
	I1104 10:54:19.151490   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:19.151964   37715 main.go:141] libmachine: (ha-931571-m03) DBG | unable to find current IP address of domain ha-931571-m03 in network mk-ha-931571
	I1104 10:54:19.151988   37715 main.go:141] libmachine: (ha-931571-m03) DBG | I1104 10:54:19.151922   38465 retry.go:31] will retry after 1.150337712s: waiting for machine to come up
	I1104 10:54:20.303915   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:20.304325   37715 main.go:141] libmachine: (ha-931571-m03) DBG | unable to find current IP address of domain ha-931571-m03 in network mk-ha-931571
	I1104 10:54:20.304357   37715 main.go:141] libmachine: (ha-931571-m03) DBG | I1104 10:54:20.304278   38465 retry.go:31] will retry after 1.472559033s: waiting for machine to come up
	I1104 10:54:21.778305   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:21.778784   37715 main.go:141] libmachine: (ha-931571-m03) DBG | unable to find current IP address of domain ha-931571-m03 in network mk-ha-931571
	I1104 10:54:21.778810   37715 main.go:141] libmachine: (ha-931571-m03) DBG | I1104 10:54:21.778723   38465 retry.go:31] will retry after 1.37004444s: waiting for machine to come up
	I1104 10:54:23.150404   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:23.150868   37715 main.go:141] libmachine: (ha-931571-m03) DBG | unable to find current IP address of domain ha-931571-m03 in network mk-ha-931571
	I1104 10:54:23.150895   37715 main.go:141] libmachine: (ha-931571-m03) DBG | I1104 10:54:23.150820   38465 retry.go:31] will retry after 1.893583796s: waiting for machine to come up
	I1104 10:54:25.045832   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:25.046288   37715 main.go:141] libmachine: (ha-931571-m03) DBG | unable to find current IP address of domain ha-931571-m03 in network mk-ha-931571
	I1104 10:54:25.046327   37715 main.go:141] libmachine: (ha-931571-m03) DBG | I1104 10:54:25.046279   38465 retry.go:31] will retry after 2.056345872s: waiting for machine to come up
	I1104 10:54:27.105382   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:27.105822   37715 main.go:141] libmachine: (ha-931571-m03) DBG | unable to find current IP address of domain ha-931571-m03 in network mk-ha-931571
	I1104 10:54:27.105853   37715 main.go:141] libmachine: (ha-931571-m03) DBG | I1104 10:54:27.105789   38465 retry.go:31] will retry after 3.414780128s: waiting for machine to come up
	I1104 10:54:30.521832   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:30.522159   37715 main.go:141] libmachine: (ha-931571-m03) DBG | unable to find current IP address of domain ha-931571-m03 in network mk-ha-931571
	I1104 10:54:30.522181   37715 main.go:141] libmachine: (ha-931571-m03) DBG | I1104 10:54:30.522080   38465 retry.go:31] will retry after 3.340201347s: waiting for machine to come up
	I1104 10:54:33.865562   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:33.865973   37715 main.go:141] libmachine: (ha-931571-m03) DBG | unable to find current IP address of domain ha-931571-m03 in network mk-ha-931571
	I1104 10:54:33.866003   37715 main.go:141] libmachine: (ha-931571-m03) DBG | I1104 10:54:33.865938   38465 retry.go:31] will retry after 5.278208954s: waiting for machine to come up
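Each "will retry after ..." line above comes from re-querying the libvirt DHCP leases with a growing, jittered delay until the domain obtains an address. A minimal sketch of that poll-with-backoff pattern (lookupIP is a hypothetical stand-in for the lease lookup, and the exact delays differ from minikube's jittered ones):

import (
	"fmt"
	"time"
)

// waitForIP repeatedly calls lookupIP with a roughly doubling, capped delay until it succeeds.
func waitForIP(lookupIP func() (string, bool), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, ok := lookupIP(); ok {
			return ip, nil
		}
		time.Sleep(delay)
		if delay *= 2; delay > 5*time.Second {
			delay = 5 * time.Second // cap the backoff
		}
	}
	return "", fmt.Errorf("timed out waiting for machine IP")
}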
	I1104 10:54:39.149712   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:39.150250   37715 main.go:141] libmachine: (ha-931571-m03) Found IP for machine: 192.168.39.57
	I1104 10:54:39.150283   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has current primary IP address 192.168.39.57 and MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:39.150292   37715 main.go:141] libmachine: (ha-931571-m03) Reserving static IP address...
	I1104 10:54:39.150676   37715 main.go:141] libmachine: (ha-931571-m03) DBG | unable to find host DHCP lease matching {name: "ha-931571-m03", mac: "52:54:00:30:f5:de", ip: "192.168.39.57"} in network mk-ha-931571
	I1104 10:54:39.223412   37715 main.go:141] libmachine: (ha-931571-m03) DBG | Getting to WaitForSSH function...
	I1104 10:54:39.223438   37715 main.go:141] libmachine: (ha-931571-m03) Reserved static IP address: 192.168.39.57
	I1104 10:54:39.223450   37715 main.go:141] libmachine: (ha-931571-m03) Waiting for SSH to be available...
	I1104 10:54:39.226810   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:39.227204   37715 main.go:141] libmachine: (ha-931571-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f5:de", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:54:28 +0000 UTC Type:0 Mac:52:54:00:30:f5:de Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:minikube Clientid:01:52:54:00:30:f5:de}
	I1104 10:54:39.227229   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined IP address 192.168.39.57 and MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:39.227416   37715 main.go:141] libmachine: (ha-931571-m03) DBG | Using SSH client type: external
	I1104 10:54:39.227440   37715 main.go:141] libmachine: (ha-931571-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571-m03/id_rsa (-rw-------)
	I1104 10:54:39.227467   37715 main.go:141] libmachine: (ha-931571-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.57 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1104 10:54:39.227480   37715 main.go:141] libmachine: (ha-931571-m03) DBG | About to run SSH command:
	I1104 10:54:39.227493   37715 main.go:141] libmachine: (ha-931571-m03) DBG | exit 0
	I1104 10:54:39.348849   37715 main.go:141] libmachine: (ha-931571-m03) DBG | SSH cmd err, output: <nil>: 
	I1104 10:54:39.349130   37715 main.go:141] libmachine: (ha-931571-m03) KVM machine creation complete!
	I1104 10:54:39.349458   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetConfigRaw
	I1104 10:54:39.350011   37715 main.go:141] libmachine: (ha-931571-m03) Calling .DriverName
	I1104 10:54:39.350175   37715 main.go:141] libmachine: (ha-931571-m03) Calling .DriverName
	I1104 10:54:39.350318   37715 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1104 10:54:39.350330   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetState
	I1104 10:54:39.351463   37715 main.go:141] libmachine: Detecting operating system of created instance...
	I1104 10:54:39.351478   37715 main.go:141] libmachine: Waiting for SSH to be available...
	I1104 10:54:39.351482   37715 main.go:141] libmachine: Getting to WaitForSSH function...
	I1104 10:54:39.351487   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHHostname
	I1104 10:54:39.353807   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:39.354106   37715 main.go:141] libmachine: (ha-931571-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f5:de", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:54:28 +0000 UTC Type:0 Mac:52:54:00:30:f5:de Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-931571-m03 Clientid:01:52:54:00:30:f5:de}
	I1104 10:54:39.354143   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined IP address 192.168.39.57 and MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:39.354349   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHPort
	I1104 10:54:39.354557   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHKeyPath
	I1104 10:54:39.354742   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHKeyPath
	I1104 10:54:39.354871   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHUsername
	I1104 10:54:39.355021   37715 main.go:141] libmachine: Using SSH client type: native
	I1104 10:54:39.355223   37715 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.57 22 <nil> <nil>}
	I1104 10:54:39.355234   37715 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1104 10:54:39.452207   37715 main.go:141] libmachine: SSH cmd err, output: <nil>: 
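The empty-output "exit 0" run above is how the driver confirms SSH is reachable before provisioning. A sketch of the same probe with golang.org/x/crypto/ssh (the docker user, an address such as 192.168.39.57:22, and the key path mirror the log but are placeholders here):

import (
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

// sshReachable dials the VM and runs "exit 0" to confirm SSH is accepting connections.
func sshReachable(addr, keyPath string) error {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return err
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // illustration only; verify host keys in real code
		Timeout:         10 * time.Second,
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return err
	}
	defer sess.Close()
	return sess.Run("exit 0")
}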
	I1104 10:54:39.452228   37715 main.go:141] libmachine: Detecting the provisioner...
	I1104 10:54:39.452237   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHHostname
	I1104 10:54:39.455314   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:39.455778   37715 main.go:141] libmachine: (ha-931571-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f5:de", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:54:28 +0000 UTC Type:0 Mac:52:54:00:30:f5:de Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-931571-m03 Clientid:01:52:54:00:30:f5:de}
	I1104 10:54:39.455805   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined IP address 192.168.39.57 and MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:39.456043   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHPort
	I1104 10:54:39.456250   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHKeyPath
	I1104 10:54:39.456440   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHKeyPath
	I1104 10:54:39.456603   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHUsername
	I1104 10:54:39.456750   37715 main.go:141] libmachine: Using SSH client type: native
	I1104 10:54:39.456931   37715 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.57 22 <nil> <nil>}
	I1104 10:54:39.456953   37715 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1104 10:54:39.553854   37715 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1104 10:54:39.553946   37715 main.go:141] libmachine: found compatible host: buildroot
	I1104 10:54:39.553963   37715 main.go:141] libmachine: Provisioning with buildroot...
	I1104 10:54:39.553975   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetMachineName
	I1104 10:54:39.554231   37715 buildroot.go:166] provisioning hostname "ha-931571-m03"
	I1104 10:54:39.554253   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetMachineName
	I1104 10:54:39.554456   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHHostname
	I1104 10:54:39.556992   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:39.557348   37715 main.go:141] libmachine: (ha-931571-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f5:de", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:54:28 +0000 UTC Type:0 Mac:52:54:00:30:f5:de Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-931571-m03 Clientid:01:52:54:00:30:f5:de}
	I1104 10:54:39.557377   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined IP address 192.168.39.57 and MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:39.557532   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHPort
	I1104 10:54:39.557736   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHKeyPath
	I1104 10:54:39.557887   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHKeyPath
	I1104 10:54:39.558007   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHUsername
	I1104 10:54:39.558172   37715 main.go:141] libmachine: Using SSH client type: native
	I1104 10:54:39.558399   37715 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.57 22 <nil> <nil>}
	I1104 10:54:39.558418   37715 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-931571-m03 && echo "ha-931571-m03" | sudo tee /etc/hostname
	I1104 10:54:39.670668   37715 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-931571-m03
	
	I1104 10:54:39.670701   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHHostname
	I1104 10:54:39.674148   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:39.674467   37715 main.go:141] libmachine: (ha-931571-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f5:de", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:54:28 +0000 UTC Type:0 Mac:52:54:00:30:f5:de Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-931571-m03 Clientid:01:52:54:00:30:f5:de}
	I1104 10:54:39.674492   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined IP address 192.168.39.57 and MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:39.674738   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHPort
	I1104 10:54:39.674887   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHKeyPath
	I1104 10:54:39.675053   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHKeyPath
	I1104 10:54:39.675250   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHUsername
	I1104 10:54:39.675459   37715 main.go:141] libmachine: Using SSH client type: native
	I1104 10:54:39.675678   37715 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.57 22 <nil> <nil>}
	I1104 10:54:39.675703   37715 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-931571-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-931571-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-931571-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1104 10:54:39.782022   37715 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1104 10:54:39.782049   37715 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19906-19898/.minikube CaCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19906-19898/.minikube}
	I1104 10:54:39.782068   37715 buildroot.go:174] setting up certificates
	I1104 10:54:39.782080   37715 provision.go:84] configureAuth start
	I1104 10:54:39.782091   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetMachineName
	I1104 10:54:39.782349   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetIP
	I1104 10:54:39.785051   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:39.785459   37715 main.go:141] libmachine: (ha-931571-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f5:de", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:54:28 +0000 UTC Type:0 Mac:52:54:00:30:f5:de Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-931571-m03 Clientid:01:52:54:00:30:f5:de}
	I1104 10:54:39.785488   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined IP address 192.168.39.57 and MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:39.785656   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHHostname
	I1104 10:54:39.787833   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:39.788124   37715 main.go:141] libmachine: (ha-931571-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f5:de", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:54:28 +0000 UTC Type:0 Mac:52:54:00:30:f5:de Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-931571-m03 Clientid:01:52:54:00:30:f5:de}
	I1104 10:54:39.788141   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined IP address 192.168.39.57 and MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:39.788305   37715 provision.go:143] copyHostCerts
	I1104 10:54:39.788334   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem
	I1104 10:54:39.788369   37715 exec_runner.go:144] found /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem, removing ...
	I1104 10:54:39.788378   37715 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem
	I1104 10:54:39.788442   37715 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem (1078 bytes)
	I1104 10:54:39.788557   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem
	I1104 10:54:39.788577   37715 exec_runner.go:144] found /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem, removing ...
	I1104 10:54:39.788584   37715 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem
	I1104 10:54:39.788610   37715 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem (1123 bytes)
	I1104 10:54:39.788656   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem
	I1104 10:54:39.788673   37715 exec_runner.go:144] found /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem, removing ...
	I1104 10:54:39.788679   37715 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem
	I1104 10:54:39.788700   37715 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem (1679 bytes)
	I1104 10:54:39.788771   37715 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem org=jenkins.ha-931571-m03 san=[127.0.0.1 192.168.39.57 ha-931571-m03 localhost minikube]
	I1104 10:54:39.906066   37715 provision.go:177] copyRemoteCerts
	I1104 10:54:39.906121   37715 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1104 10:54:39.906156   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHHostname
	I1104 10:54:39.909171   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:39.909602   37715 main.go:141] libmachine: (ha-931571-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f5:de", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:54:28 +0000 UTC Type:0 Mac:52:54:00:30:f5:de Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-931571-m03 Clientid:01:52:54:00:30:f5:de}
	I1104 10:54:39.909633   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined IP address 192.168.39.57 and MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:39.909904   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHPort
	I1104 10:54:39.910114   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHKeyPath
	I1104 10:54:39.910451   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHUsername
	I1104 10:54:39.910562   37715 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571-m03/id_rsa Username:docker}
	I1104 10:54:39.986932   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1104 10:54:39.986995   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1104 10:54:40.011798   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1104 10:54:40.011899   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1104 10:54:40.035728   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1104 10:54:40.035811   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1104 10:54:40.058737   37715 provision.go:87] duration metric: took 276.643486ms to configureAuth
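The configureAuth step above generates a machine server certificate with SANs [127.0.0.1 192.168.39.57 ha-931571-m03 localhost minikube] and copies it to /etc/docker on the node. A minimal sketch for inspecting the pushed cert on the guest (the path comes from the scp lines above; running this by hand is an assumption, not part of the test):

    # Print the SAN list of the server cert minikube copied to /etc/docker/server.pem.
    sudo openssl x509 -noout -text -in /etc/docker/server.pem | grep -A1 'Subject Alternative Name'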
	I1104 10:54:40.058767   37715 buildroot.go:189] setting minikube options for container-runtime
	I1104 10:54:40.058982   37715 config.go:182] Loaded profile config "ha-931571": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1104 10:54:40.059060   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHHostname
	I1104 10:54:40.061592   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:40.061918   37715 main.go:141] libmachine: (ha-931571-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f5:de", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:54:28 +0000 UTC Type:0 Mac:52:54:00:30:f5:de Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-931571-m03 Clientid:01:52:54:00:30:f5:de}
	I1104 10:54:40.061947   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined IP address 192.168.39.57 and MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:40.062136   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHPort
	I1104 10:54:40.062313   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHKeyPath
	I1104 10:54:40.062493   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHKeyPath
	I1104 10:54:40.062627   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHUsername
	I1104 10:54:40.062779   37715 main.go:141] libmachine: Using SSH client type: native
	I1104 10:54:40.062931   37715 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.57 22 <nil> <nil>}
	I1104 10:54:40.062946   37715 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1104 10:54:40.285341   37715 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1104 10:54:40.285362   37715 main.go:141] libmachine: Checking connection to Docker...
	I1104 10:54:40.285369   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetURL
	I1104 10:54:40.286607   37715 main.go:141] libmachine: (ha-931571-m03) DBG | Using libvirt version 6000000
	I1104 10:54:40.288784   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:40.289099   37715 main.go:141] libmachine: (ha-931571-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f5:de", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:54:28 +0000 UTC Type:0 Mac:52:54:00:30:f5:de Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-931571-m03 Clientid:01:52:54:00:30:f5:de}
	I1104 10:54:40.289130   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined IP address 192.168.39.57 and MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:40.289303   37715 main.go:141] libmachine: Docker is up and running!
	I1104 10:54:40.289319   37715 main.go:141] libmachine: Reticulating splines...
	I1104 10:54:40.289326   37715 client.go:171] duration metric: took 25.618399312s to LocalClient.Create
	I1104 10:54:40.289350   37715 start.go:167] duration metric: took 25.618478892s to libmachine.API.Create "ha-931571"
	I1104 10:54:40.289362   37715 start.go:293] postStartSetup for "ha-931571-m03" (driver="kvm2")
	I1104 10:54:40.289391   37715 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1104 10:54:40.289407   37715 main.go:141] libmachine: (ha-931571-m03) Calling .DriverName
	I1104 10:54:40.289628   37715 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1104 10:54:40.289653   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHHostname
	I1104 10:54:40.291922   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:40.292338   37715 main.go:141] libmachine: (ha-931571-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f5:de", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:54:28 +0000 UTC Type:0 Mac:52:54:00:30:f5:de Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-931571-m03 Clientid:01:52:54:00:30:f5:de}
	I1104 10:54:40.292358   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined IP address 192.168.39.57 and MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:40.292590   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHPort
	I1104 10:54:40.292774   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHKeyPath
	I1104 10:54:40.292922   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHUsername
	I1104 10:54:40.293081   37715 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571-m03/id_rsa Username:docker}
	I1104 10:54:40.371198   37715 ssh_runner.go:195] Run: cat /etc/os-release
	I1104 10:54:40.375533   37715 info.go:137] Remote host: Buildroot 2023.02.9
	I1104 10:54:40.375563   37715 filesync.go:126] Scanning /home/jenkins/minikube-integration/19906-19898/.minikube/addons for local assets ...
	I1104 10:54:40.375682   37715 filesync.go:126] Scanning /home/jenkins/minikube-integration/19906-19898/.minikube/files for local assets ...
	I1104 10:54:40.375780   37715 filesync.go:149] local asset: /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem -> 272182.pem in /etc/ssl/certs
	I1104 10:54:40.375790   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem -> /etc/ssl/certs/272182.pem
	I1104 10:54:40.375871   37715 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1104 10:54:40.385684   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem --> /etc/ssl/certs/272182.pem (1708 bytes)
	I1104 10:54:40.408674   37715 start.go:296] duration metric: took 119.284792ms for postStartSetup
	I1104 10:54:40.408723   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetConfigRaw
	I1104 10:54:40.409449   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetIP
	I1104 10:54:40.412211   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:40.412561   37715 main.go:141] libmachine: (ha-931571-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f5:de", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:54:28 +0000 UTC Type:0 Mac:52:54:00:30:f5:de Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-931571-m03 Clientid:01:52:54:00:30:f5:de}
	I1104 10:54:40.412589   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined IP address 192.168.39.57 and MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:40.412888   37715 profile.go:143] Saving config to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/config.json ...
	I1104 10:54:40.413122   37715 start.go:128] duration metric: took 25.760559258s to createHost
	I1104 10:54:40.413150   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHHostname
	I1104 10:54:40.415473   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:40.415825   37715 main.go:141] libmachine: (ha-931571-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f5:de", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:54:28 +0000 UTC Type:0 Mac:52:54:00:30:f5:de Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-931571-m03 Clientid:01:52:54:00:30:f5:de}
	I1104 10:54:40.415846   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined IP address 192.168.39.57 and MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:40.415970   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHPort
	I1104 10:54:40.416207   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHKeyPath
	I1104 10:54:40.416371   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHKeyPath
	I1104 10:54:40.416538   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHUsername
	I1104 10:54:40.416702   37715 main.go:141] libmachine: Using SSH client type: native
	I1104 10:54:40.416875   37715 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.57 22 <nil> <nil>}
	I1104 10:54:40.416888   37715 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1104 10:54:40.513907   37715 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730717680.493900775
	
	I1104 10:54:40.513930   37715 fix.go:216] guest clock: 1730717680.493900775
	I1104 10:54:40.513937   37715 fix.go:229] Guest: 2024-11-04 10:54:40.493900775 +0000 UTC Remote: 2024-11-04 10:54:40.413138421 +0000 UTC m=+139.084656658 (delta=80.762354ms)
	I1104 10:54:40.513952   37715 fix.go:200] guest clock delta is within tolerance: 80.762354ms
	I1104 10:54:40.513957   37715 start.go:83] releasing machines lock for "ha-931571-m03", held for 25.861527752s
	I1104 10:54:40.513977   37715 main.go:141] libmachine: (ha-931571-m03) Calling .DriverName
	I1104 10:54:40.514219   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetIP
	I1104 10:54:40.516861   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:40.517293   37715 main.go:141] libmachine: (ha-931571-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f5:de", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:54:28 +0000 UTC Type:0 Mac:52:54:00:30:f5:de Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-931571-m03 Clientid:01:52:54:00:30:f5:de}
	I1104 10:54:40.517318   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined IP address 192.168.39.57 and MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:40.519824   37715 out.go:177] * Found network options:
	I1104 10:54:40.521282   37715 out.go:177]   - NO_PROXY=192.168.39.67,192.168.39.245
	W1104 10:54:40.522546   37715 proxy.go:119] fail to check proxy env: Error ip not in block
	W1104 10:54:40.522569   37715 proxy.go:119] fail to check proxy env: Error ip not in block
	I1104 10:54:40.522586   37715 main.go:141] libmachine: (ha-931571-m03) Calling .DriverName
	I1104 10:54:40.523178   37715 main.go:141] libmachine: (ha-931571-m03) Calling .DriverName
	I1104 10:54:40.523386   37715 main.go:141] libmachine: (ha-931571-m03) Calling .DriverName
	I1104 10:54:40.523502   37715 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1104 10:54:40.523543   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHHostname
	W1104 10:54:40.523621   37715 proxy.go:119] fail to check proxy env: Error ip not in block
	W1104 10:54:40.523648   37715 proxy.go:119] fail to check proxy env: Error ip not in block
	I1104 10:54:40.523705   37715 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1104 10:54:40.523726   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHHostname
	I1104 10:54:40.526526   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:40.526600   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:40.526878   37715 main.go:141] libmachine: (ha-931571-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f5:de", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:54:28 +0000 UTC Type:0 Mac:52:54:00:30:f5:de Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-931571-m03 Clientid:01:52:54:00:30:f5:de}
	I1104 10:54:40.526907   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined IP address 192.168.39.57 and MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:40.526933   37715 main.go:141] libmachine: (ha-931571-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f5:de", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:54:28 +0000 UTC Type:0 Mac:52:54:00:30:f5:de Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-931571-m03 Clientid:01:52:54:00:30:f5:de}
	I1104 10:54:40.526947   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined IP address 192.168.39.57 and MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:40.527005   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHPort
	I1104 10:54:40.527178   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHKeyPath
	I1104 10:54:40.527307   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHPort
	I1104 10:54:40.527380   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHUsername
	I1104 10:54:40.527467   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHKeyPath
	I1104 10:54:40.527533   37715 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571-m03/id_rsa Username:docker}
	I1104 10:54:40.527573   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHUsername
	I1104 10:54:40.527722   37715 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571-m03/id_rsa Username:docker}
	I1104 10:54:40.761284   37715 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1104 10:54:40.766951   37715 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1104 10:54:40.767028   37715 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1104 10:54:40.784061   37715 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1104 10:54:40.784083   37715 start.go:495] detecting cgroup driver to use...
	I1104 10:54:40.784139   37715 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1104 10:54:40.799767   37715 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1104 10:54:40.814033   37715 docker.go:217] disabling cri-docker service (if available) ...
	I1104 10:54:40.814100   37715 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1104 10:54:40.828095   37715 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1104 10:54:40.843053   37715 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1104 10:54:40.959422   37715 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1104 10:54:41.119792   37715 docker.go:233] disabling docker service ...
	I1104 10:54:41.119859   37715 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1104 10:54:41.134123   37715 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1104 10:54:41.147262   37715 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1104 10:54:41.281486   37715 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1104 10:54:41.401330   37715 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1104 10:54:41.415018   37715 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1104 10:54:41.433640   37715 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1104 10:54:41.433713   37715 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 10:54:41.444506   37715 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1104 10:54:41.444582   37715 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 10:54:41.456767   37715 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 10:54:41.467306   37715 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 10:54:41.477809   37715 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1104 10:54:41.488160   37715 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 10:54:41.498689   37715 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 10:54:41.515679   37715 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 10:54:41.526763   37715 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1104 10:54:41.536412   37715 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1104 10:54:41.536469   37715 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1104 10:54:41.549448   37715 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1104 10:54:41.559807   37715 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1104 10:54:41.665655   37715 ssh_runner.go:195] Run: sudo systemctl restart crio
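The sed edits above only take effect once crio is restarted. A minimal sketch for spot-checking the resulting configuration on the node (file paths are taken from the commands in the log; the checks themselves are illustrative, not part of the run):

    # crictl endpoint written to /etc/crictl.yaml earlier in this step.
    cat /etc/crictl.yaml
    # Values set by the sed commands: pause image, cgroupfs driver, conmon cgroup, unprivileged port sysctl.
    grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
    # Runtime should answer on the crio socket after the restart.
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version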
	I1104 10:54:41.758091   37715 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1104 10:54:41.758187   37715 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1104 10:54:41.762517   37715 start.go:563] Will wait 60s for crictl version
	I1104 10:54:41.762572   37715 ssh_runner.go:195] Run: which crictl
	I1104 10:54:41.766429   37715 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1104 10:54:41.804303   37715 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1104 10:54:41.804420   37715 ssh_runner.go:195] Run: crio --version
	I1104 10:54:41.830473   37715 ssh_runner.go:195] Run: crio --version
	I1104 10:54:41.860302   37715 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1104 10:54:41.861621   37715 out.go:177]   - env NO_PROXY=192.168.39.67
	I1104 10:54:41.863004   37715 out.go:177]   - env NO_PROXY=192.168.39.67,192.168.39.245
	I1104 10:54:41.864263   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetIP
	I1104 10:54:41.867052   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:41.867423   37715 main.go:141] libmachine: (ha-931571-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f5:de", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:54:28 +0000 UTC Type:0 Mac:52:54:00:30:f5:de Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-931571-m03 Clientid:01:52:54:00:30:f5:de}
	I1104 10:54:41.867446   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined IP address 192.168.39.57 and MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:41.867651   37715 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1104 10:54:41.871716   37715 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1104 10:54:41.884015   37715 mustload.go:65] Loading cluster: ha-931571
	I1104 10:54:41.884230   37715 config.go:182] Loaded profile config "ha-931571": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1104 10:54:41.884480   37715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 10:54:41.884518   37715 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 10:54:41.900117   37715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41207
	I1104 10:54:41.900610   37715 main.go:141] libmachine: () Calling .GetVersion
	I1104 10:54:41.901163   37715 main.go:141] libmachine: Using API Version  1
	I1104 10:54:41.901184   37715 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 10:54:41.901516   37715 main.go:141] libmachine: () Calling .GetMachineName
	I1104 10:54:41.901701   37715 main.go:141] libmachine: (ha-931571) Calling .GetState
	I1104 10:54:41.903124   37715 host.go:66] Checking if "ha-931571" exists ...
	I1104 10:54:41.903396   37715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 10:54:41.903433   37715 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 10:54:41.918029   37715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40437
	I1104 10:54:41.918566   37715 main.go:141] libmachine: () Calling .GetVersion
	I1104 10:54:41.919028   37715 main.go:141] libmachine: Using API Version  1
	I1104 10:54:41.919050   37715 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 10:54:41.919333   37715 main.go:141] libmachine: () Calling .GetMachineName
	I1104 10:54:41.919520   37715 main.go:141] libmachine: (ha-931571) Calling .DriverName
	I1104 10:54:41.919673   37715 certs.go:68] Setting up /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571 for IP: 192.168.39.57
	I1104 10:54:41.919684   37715 certs.go:194] generating shared ca certs ...
	I1104 10:54:41.919697   37715 certs.go:226] acquiring lock for ca certs: {Name:mk4fb0469da6697654d0290fd25edddd463f47c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 10:54:41.919810   37715 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.key
	I1104 10:54:41.919845   37715 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.key
	I1104 10:54:41.919854   37715 certs.go:256] generating profile certs ...
	I1104 10:54:41.919922   37715 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/client.key
	I1104 10:54:41.919946   37715 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.key.a50c38dd
	I1104 10:54:41.919960   37715 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.crt.a50c38dd with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.67 192.168.39.245 192.168.39.57 192.168.39.254]
	I1104 10:54:42.049039   37715 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.crt.a50c38dd ...
	I1104 10:54:42.049068   37715 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.crt.a50c38dd: {Name:mk425b204dd51c6129591dbbf4cda0b66e34eb56 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 10:54:42.049239   37715 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.key.a50c38dd ...
	I1104 10:54:42.049250   37715 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.key.a50c38dd: {Name:mk1230635dbd65cb8c7d025a3549f17dc35e060e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 10:54:42.049322   37715 certs.go:381] copying /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.crt.a50c38dd -> /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.crt
	I1104 10:54:42.049449   37715 certs.go:385] copying /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.key.a50c38dd -> /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.key
	I1104 10:54:42.049564   37715 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/proxy-client.key
	I1104 10:54:42.049580   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1104 10:54:42.049595   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1104 10:54:42.049608   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1104 10:54:42.049621   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1104 10:54:42.049634   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1104 10:54:42.049647   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1104 10:54:42.049657   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1104 10:54:42.049669   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1104 10:54:42.049713   37715 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218.pem (1338 bytes)
	W1104 10:54:42.049741   37715 certs.go:480] ignoring /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218_empty.pem, impossibly tiny 0 bytes
	I1104 10:54:42.049750   37715 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem (1675 bytes)
	I1104 10:54:42.049771   37715 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem (1078 bytes)
	I1104 10:54:42.049799   37715 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem (1123 bytes)
	I1104 10:54:42.049819   37715 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem (1679 bytes)
	I1104 10:54:42.049855   37715 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem (1708 bytes)
	I1104 10:54:42.049880   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem -> /usr/share/ca-certificates/272182.pem
	I1104 10:54:42.049893   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1104 10:54:42.049905   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218.pem -> /usr/share/ca-certificates/27218.pem
	I1104 10:54:42.049934   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHHostname
	I1104 10:54:42.052637   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:54:42.053074   37715 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 10:54:42.053102   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:54:42.053289   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHPort
	I1104 10:54:42.053475   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 10:54:42.053607   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHUsername
	I1104 10:54:42.053769   37715 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571/id_rsa Username:docker}
	I1104 10:54:42.125617   37715 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1104 10:54:42.129901   37715 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1104 10:54:42.141111   37715 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1104 10:54:42.145054   37715 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1104 10:54:42.154954   37715 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1104 10:54:42.158822   37715 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1104 10:54:42.168976   37715 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1104 10:54:42.172887   37715 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1104 10:54:42.182649   37715 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1104 10:54:42.186455   37715 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1104 10:54:42.196466   37715 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1104 10:54:42.200376   37715 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1104 10:54:42.211239   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1104 10:54:42.236618   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1104 10:54:42.260726   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1104 10:54:42.283147   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1104 10:54:42.305271   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I1104 10:54:42.327703   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1104 10:54:42.350340   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1104 10:54:42.372114   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1104 10:54:42.394125   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem --> /usr/share/ca-certificates/272182.pem (1708 bytes)
	I1104 10:54:42.415761   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1104 10:54:42.437284   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218.pem --> /usr/share/ca-certificates/27218.pem (1338 bytes)
	I1104 10:54:42.458545   37715 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1104 10:54:42.474091   37715 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1104 10:54:42.489871   37715 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1104 10:54:42.505378   37715 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1104 10:54:42.521116   37715 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1104 10:54:42.537323   37715 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1104 10:54:42.553306   37715 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1104 10:54:42.569157   37715 ssh_runner.go:195] Run: openssl version
	I1104 10:54:42.574422   37715 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/27218.pem && ln -fs /usr/share/ca-certificates/27218.pem /etc/ssl/certs/27218.pem"
	I1104 10:54:42.584560   37715 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/27218.pem
	I1104 10:54:42.588538   37715 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  4 10:49 /usr/share/ca-certificates/27218.pem
	I1104 10:54:42.588592   37715 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/27218.pem
	I1104 10:54:42.594056   37715 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/27218.pem /etc/ssl/certs/51391683.0"
	I1104 10:54:42.604559   37715 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/272182.pem && ln -fs /usr/share/ca-certificates/272182.pem /etc/ssl/certs/272182.pem"
	I1104 10:54:42.615717   37715 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/272182.pem
	I1104 10:54:42.619821   37715 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  4 10:49 /usr/share/ca-certificates/272182.pem
	I1104 10:54:42.619868   37715 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/272182.pem
	I1104 10:54:42.625153   37715 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/272182.pem /etc/ssl/certs/3ec20f2e.0"
	I1104 10:54:42.638993   37715 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1104 10:54:42.649427   37715 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1104 10:54:42.653431   37715 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  4 10:38 /usr/share/ca-certificates/minikubeCA.pem
	I1104 10:54:42.653483   37715 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1104 10:54:42.658834   37715 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
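The openssl x509 -hash calls above compute the subject hash that OpenSSL's hashed CA-directory lookup expects as the link name (e.g. b5213941.0 for minikubeCA.pem), and the ln commands create those links under /etc/ssl/certs. A condensed sketch of the same idea, shown only for clarity:

    # Compute the subject hash and create the hash-named symlink OpenSSL resolves in /etc/ssl/certs.
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"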
	I1104 10:54:42.670960   37715 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1104 10:54:42.675173   37715 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1104 10:54:42.675237   37715 kubeadm.go:934] updating node {m03 192.168.39.57 8443 v1.31.2 crio true true} ...
	I1104 10:54:42.675332   37715 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-931571-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.57
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-931571 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1104 10:54:42.675370   37715 kube-vip.go:115] generating kube-vip config ...
	I1104 10:54:42.675419   37715 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1104 10:54:42.692549   37715 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1104 10:54:42.692627   37715 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.5
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
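The manifest generated above is copied to /etc/kubernetes/manifests/kube-vip.yaml further down in the log, so kubelet runs kube-vip as a static pod that advertises the control-plane VIP 192.168.39.254:8443 via ARP leader election. A minimal sketch for checking it on the node once kubelet is up (these commands are an assumption for illustration, not part of the test output):

    # Static pod manifest written by minikube (see the scp to /etc/kubernetes/manifests below).
    sudo cat /etc/kubernetes/manifests/kube-vip.yaml
    # The kube-vip container should appear under CRI-O after kubelet picks the manifest up.
    sudo crictl ps --name kube-vip
    # The VIP should answer once a leader election winner holds the address.
    ping -c 1 192.168.39.254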
	I1104 10:54:42.692680   37715 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1104 10:54:42.702705   37715 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.2': No such file or directory
	
	Initiating transfer...
	I1104 10:54:42.702768   37715 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.2
	I1104 10:54:42.712640   37715 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
	I1104 10:54:42.712662   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/linux/amd64/v1.31.2/kubectl -> /var/lib/minikube/binaries/v1.31.2/kubectl
	I1104 10:54:42.712660   37715 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm.sha256
	I1104 10:54:42.712682   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/linux/amd64/v1.31.2/kubeadm -> /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1104 10:54:42.712648   37715 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256
	I1104 10:54:42.712715   37715 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl
	I1104 10:54:42.712727   37715 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1104 10:54:42.712752   37715 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1104 10:54:42.718694   37715 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubectl': No such file or directory
	I1104 10:54:42.718732   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/cache/linux/amd64/v1.31.2/kubectl --> /var/lib/minikube/binaries/v1.31.2/kubectl (56381592 bytes)
	I1104 10:54:42.746213   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/linux/amd64/v1.31.2/kubelet -> /var/lib/minikube/binaries/v1.31.2/kubelet
	I1104 10:54:42.746221   37715 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubeadm': No such file or directory
	I1104 10:54:42.746258   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/cache/linux/amd64/v1.31.2/kubeadm --> /var/lib/minikube/binaries/v1.31.2/kubeadm (58290328 bytes)
	I1104 10:54:42.746334   37715 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet
	I1104 10:54:42.789088   37715 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubelet': No such file or directory
	I1104 10:54:42.789130   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/cache/linux/amd64/v1.31.2/kubelet --> /var/lib/minikube/binaries/v1.31.2/kubelet (76902744 bytes)
	I1104 10:54:43.556894   37715 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1104 10:54:43.566649   37715 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1104 10:54:43.583297   37715 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1104 10:54:43.599783   37715 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1104 10:54:43.615935   37715 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1104 10:54:43.619736   37715 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1104 10:54:43.632102   37715 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1104 10:54:43.769468   37715 ssh_runner.go:195] Run: sudo systemctl start kubelet
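At this point the kubectl, kubeadm and kubelet binaries have been copied into /var/lib/minikube/binaries/v1.31.2 from the local cache and the kubelet drop-in unit installed. A quick sanity check might look like the sketch below (illustrative only; kubelet may still be activating until the kubeadm join below completes):

    # Binaries transferred by the scp lines above; sizes should match the logged byte counts.
    ls -l /var/lib/minikube/binaries/v1.31.2/
    /var/lib/minikube/binaries/v1.31.2/kubelet --version
    # Unit started by the daemon-reload/start sequence above.
    systemctl is-active kubelet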
	I1104 10:54:43.787176   37715 host.go:66] Checking if "ha-931571" exists ...
	I1104 10:54:43.787522   37715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 10:54:43.787559   37715 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 10:54:43.803438   37715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37129
	I1104 10:54:43.803811   37715 main.go:141] libmachine: () Calling .GetVersion
	I1104 10:54:43.804247   37715 main.go:141] libmachine: Using API Version  1
	I1104 10:54:43.804266   37715 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 10:54:43.804582   37715 main.go:141] libmachine: () Calling .GetMachineName
	I1104 10:54:43.804752   37715 main.go:141] libmachine: (ha-931571) Calling .DriverName
	I1104 10:54:43.804873   37715 start.go:317] joinCluster: &{Name:ha-931571 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-931571 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.67 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.245 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.57 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1104 10:54:43.805017   37715 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1104 10:54:43.805035   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHHostname
	I1104 10:54:43.808407   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:54:43.808840   37715 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 10:54:43.808868   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:54:43.808996   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHPort
	I1104 10:54:43.809168   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 10:54:43.809326   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHUsername
	I1104 10:54:43.809457   37715 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571/id_rsa Username:docker}
	I1104 10:54:43.953404   37715 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.57 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1104 10:54:43.953450   37715 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token cjywwd.x031qjjoquz98pue --discovery-token-ca-cert-hash sha256:95833ff41306ac800b975e28f2da3aedc13f83db6865dfb0ce3f10d781d75fd9 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-931571-m03 --control-plane --apiserver-advertise-address=192.168.39.57 --apiserver-bind-port=8443"
	I1104 10:55:05.442467   37715 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token cjywwd.x031qjjoquz98pue --discovery-token-ca-cert-hash sha256:95833ff41306ac800b975e28f2da3aedc13f83db6865dfb0ce3f10d781d75fd9 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-931571-m03 --control-plane --apiserver-advertise-address=192.168.39.57 --apiserver-bind-port=8443": (21.488974658s)
	I1104 10:55:05.442503   37715 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1104 10:55:05.990844   37715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-931571-m03 minikube.k8s.io/updated_at=2024_11_04T10_55_05_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=c03dd974d73f8853a1a57928c124797a5ae24dc4 minikube.k8s.io/name=ha-931571 minikube.k8s.io/primary=false
	I1104 10:55:06.139537   37715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-931571-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I1104 10:55:06.285616   37715 start.go:319] duration metric: took 22.480737326s to joinCluster
	I1104 10:55:06.285694   37715 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.57 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1104 10:55:06.286003   37715 config.go:182] Loaded profile config "ha-931571": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1104 10:55:06.288554   37715 out.go:177] * Verifying Kubernetes components...
	I1104 10:55:06.289975   37715 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1104 10:55:06.546650   37715 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1104 10:55:06.605631   37715 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19906-19898/kubeconfig
	I1104 10:55:06.605981   37715 kapi.go:59] client config for ha-931571: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/client.crt", KeyFile:"/home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/client.key", CAFile:"/home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2437da0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1104 10:55:06.606063   37715 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.67:8443
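For readers tracing the client setup logged above: the rest.Config dump and the host override correspond to the kind of client-go construction sketched below. This is a minimal illustrative sketch, not minikube's actual kapi.go code; the host and certificate paths are simply the values visible in this log, and leaving QPS/Burst at zero makes client-go fall back to its default rate limits, which is what later surfaces as the "client-side throttling" waits.

    package main

    import (
    	"context"
    	"fmt"
    	"log"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/rest"
    )

    func main() {
    	// Client config equivalent to the rest.Config dumped above; the cert/key/CA
    	// paths are the ones from this log and are illustrative only.
    	cfg := &rest.Config{
    		Host: "https://192.168.39.67:8443",
    		TLSClientConfig: rest.TLSClientConfig{
    			CertFile: "/home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/client.crt",
    			KeyFile:  "/home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/client.key",
    			CAFile:   "/home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt",
    		},
    		// QPS and Burst are left at 0, so client-go uses its defaults (5 QPS,
    		// burst 10); that is why the log later shows "Waited ... due to
    		// client-side throttling" during the rapid readiness polls.
    	}

    	clientset, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		log.Fatal(err)
    	}

    	nodes, err := clientset.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Printf("cluster has %d nodes\n", len(nodes.Items))
    }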
	I1104 10:55:06.606329   37715 node_ready.go:35] waiting up to 6m0s for node "ha-931571-m03" to be "Ready" ...
	I1104 10:55:06.606418   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:06.606434   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:06.606445   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:06.606456   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:06.609914   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:07.107514   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:07.107534   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:07.107542   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:07.107546   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:07.111083   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:07.606560   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:07.606587   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:07.606600   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:07.606605   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:07.613411   37715 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1104 10:55:08.107538   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:08.107560   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:08.107567   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:08.107570   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:08.110694   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:08.606539   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:08.606559   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:08.606567   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:08.606571   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:08.609675   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:08.610356   37715 node_ready.go:53] node "ha-931571-m03" has status "Ready":"False"
	I1104 10:55:09.106606   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:09.106630   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:09.106639   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:09.106644   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:09.109657   37715 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1104 10:55:09.607102   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:09.607123   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:09.607131   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:09.607135   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:09.610601   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:10.106839   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:10.106861   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:10.106872   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:10.106887   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:10.110421   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:10.607151   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:10.607178   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:10.607190   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:10.607195   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:10.610313   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:10.611052   37715 node_ready.go:53] node "ha-931571-m03" has status "Ready":"False"
	I1104 10:55:11.107465   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:11.107489   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:11.107500   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:11.107505   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:11.134933   37715 round_trippers.go:574] Response Status: 200 OK in 27 milliseconds
	I1104 10:55:11.607114   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:11.607137   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:11.607145   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:11.607149   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:11.610404   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:12.107512   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:12.107532   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:12.107542   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:12.107546   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:12.110694   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:12.606667   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:12.606689   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:12.606701   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:12.606705   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:12.609952   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:13.106734   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:13.106769   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:13.106780   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:13.106786   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:13.110063   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:13.110550   37715 node_ready.go:53] node "ha-931571-m03" has status "Ready":"False"
	I1104 10:55:13.607192   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:13.607222   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:13.607237   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:13.607241   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:13.610250   37715 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1104 10:55:14.106526   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:14.106548   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:14.106556   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:14.106560   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:14.110076   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:14.606584   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:14.606604   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:14.606612   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:14.606622   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:14.609643   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:15.106797   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:15.106819   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:15.106826   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:15.106830   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:15.110526   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:15.111303   37715 node_ready.go:53] node "ha-931571-m03" has status "Ready":"False"
	I1104 10:55:15.606581   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:15.606631   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:15.606643   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:15.606648   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:15.609879   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:16.107000   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:16.107025   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:16.107036   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:16.107042   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:16.110279   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:16.607359   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:16.607381   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:16.607391   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:16.607398   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:16.610655   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:17.106684   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:17.106706   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:17.106716   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:17.106722   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:17.109976   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:17.607162   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:17.607182   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:17.607190   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:17.607194   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:17.610739   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:17.611443   37715 node_ready.go:53] node "ha-931571-m03" has status "Ready":"False"
	I1104 10:55:18.106827   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:18.106850   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:18.106858   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:18.106862   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:18.110271   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:18.607389   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:18.607411   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:18.607419   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:18.607422   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:18.612587   37715 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1104 10:55:19.106763   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:19.106784   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:19.106791   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:19.106795   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:19.110156   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:19.607506   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:19.607532   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:19.607540   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:19.607545   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:19.611651   37715 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1104 10:55:19.612446   37715 node_ready.go:53] node "ha-931571-m03" has status "Ready":"False"
	I1104 10:55:20.107336   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:20.107356   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:20.107364   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:20.107368   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:20.110541   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:20.607455   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:20.607477   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:20.607485   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:20.607488   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:20.610742   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:21.106794   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:21.106815   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:21.106823   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:21.106827   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:21.109773   37715 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1104 10:55:21.607002   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:21.607022   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:21.607030   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:21.607033   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:21.609863   37715 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1104 10:55:22.106940   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:22.106962   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:22.106970   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:22.106981   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:22.110219   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:22.110873   37715 node_ready.go:53] node "ha-931571-m03" has status "Ready":"False"
	I1104 10:55:22.607233   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:22.607256   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:22.607267   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:22.607272   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:22.610320   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:23.107234   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:23.107261   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:23.107272   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:23.107278   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:23.110559   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:23.607522   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:23.607544   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:23.607552   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:23.607557   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:23.610843   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:23.611437   37715 node_ready.go:49] node "ha-931571-m03" has status "Ready":"True"
	I1104 10:55:23.611454   37715 node_ready.go:38] duration metric: took 17.005106707s for node "ha-931571-m03" to be "Ready" ...
	I1104 10:55:23.611469   37715 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1104 10:55:23.611529   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods
	I1104 10:55:23.611538   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:23.611545   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:23.611550   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:23.616487   37715 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1104 10:55:23.623329   37715 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-5ss4v" in "kube-system" namespace to be "Ready" ...
	I1104 10:55:23.623422   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ss4v
	I1104 10:55:23.623428   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:23.623436   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:23.623440   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:23.626812   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:23.627478   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571
	I1104 10:55:23.627500   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:23.627509   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:23.627513   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:23.630024   37715 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1104 10:55:23.630705   37715 pod_ready.go:93] pod "coredns-7c65d6cfc9-5ss4v" in "kube-system" namespace has status "Ready":"True"
	I1104 10:55:23.630725   37715 pod_ready.go:82] duration metric: took 7.365313ms for pod "coredns-7c65d6cfc9-5ss4v" in "kube-system" namespace to be "Ready" ...
	I1104 10:55:23.630737   37715 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-s9wb4" in "kube-system" namespace to be "Ready" ...
	I1104 10:55:23.630804   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s9wb4
	I1104 10:55:23.630815   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:23.630826   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:23.630835   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:23.633089   37715 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1104 10:55:23.633668   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571
	I1104 10:55:23.633688   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:23.633703   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:23.633714   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:23.635922   37715 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1104 10:55:23.636490   37715 pod_ready.go:93] pod "coredns-7c65d6cfc9-s9wb4" in "kube-system" namespace has status "Ready":"True"
	I1104 10:55:23.636510   37715 pod_ready.go:82] duration metric: took 5.760939ms for pod "coredns-7c65d6cfc9-s9wb4" in "kube-system" namespace to be "Ready" ...
	I1104 10:55:23.636522   37715 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-931571" in "kube-system" namespace to be "Ready" ...
	I1104 10:55:23.636583   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/etcd-ha-931571
	I1104 10:55:23.636592   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:23.636602   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:23.636610   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:23.639359   37715 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1104 10:55:23.639900   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571
	I1104 10:55:23.639915   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:23.639922   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:23.639925   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:23.642474   37715 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1104 10:55:23.642946   37715 pod_ready.go:93] pod "etcd-ha-931571" in "kube-system" namespace has status "Ready":"True"
	I1104 10:55:23.642963   37715 pod_ready.go:82] duration metric: took 6.432226ms for pod "etcd-ha-931571" in "kube-system" namespace to be "Ready" ...
	I1104 10:55:23.642971   37715 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-931571-m02" in "kube-system" namespace to be "Ready" ...
	I1104 10:55:23.643028   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/etcd-ha-931571-m02
	I1104 10:55:23.643036   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:23.643043   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:23.643047   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:23.645331   37715 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1104 10:55:23.646060   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:55:23.646073   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:23.646080   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:23.646084   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:23.648315   37715 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1104 10:55:23.648847   37715 pod_ready.go:93] pod "etcd-ha-931571-m02" in "kube-system" namespace has status "Ready":"True"
	I1104 10:55:23.648862   37715 pod_ready.go:82] duration metric: took 5.88444ms for pod "etcd-ha-931571-m02" in "kube-system" namespace to be "Ready" ...
	I1104 10:55:23.648869   37715 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-931571-m03" in "kube-system" namespace to be "Ready" ...
	I1104 10:55:23.808246   37715 request.go:632] Waited for 159.312664ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/etcd-ha-931571-m03
	I1104 10:55:23.808304   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/etcd-ha-931571-m03
	I1104 10:55:23.808309   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:23.808316   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:23.808320   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:23.811540   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:24.007952   37715 request.go:632] Waited for 195.768208ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:24.008033   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:24.008045   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:24.008056   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:24.008066   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:24.011083   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:24.011703   37715 pod_ready.go:93] pod "etcd-ha-931571-m03" in "kube-system" namespace has status "Ready":"True"
	I1104 10:55:24.011724   37715 pod_ready.go:82] duration metric: took 362.848542ms for pod "etcd-ha-931571-m03" in "kube-system" namespace to be "Ready" ...
	I1104 10:55:24.011739   37715 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-931571" in "kube-system" namespace to be "Ready" ...
	I1104 10:55:24.207843   37715 request.go:632] Waited for 196.043868ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-931571
	I1104 10:55:24.207918   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-931571
	I1104 10:55:24.207925   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:24.207937   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:24.207947   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:24.211127   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:24.408352   37715 request.go:632] Waited for 196.308065ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/nodes/ha-931571
	I1104 10:55:24.408442   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571
	I1104 10:55:24.408450   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:24.408460   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:24.408469   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:24.411644   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:24.412279   37715 pod_ready.go:93] pod "kube-apiserver-ha-931571" in "kube-system" namespace has status "Ready":"True"
	I1104 10:55:24.412297   37715 pod_ready.go:82] duration metric: took 400.550124ms for pod "kube-apiserver-ha-931571" in "kube-system" namespace to be "Ready" ...
	I1104 10:55:24.412310   37715 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-931571-m02" in "kube-system" namespace to be "Ready" ...
	I1104 10:55:24.608501   37715 request.go:632] Waited for 196.123497ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-931571-m02
	I1104 10:55:24.608572   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-931571-m02
	I1104 10:55:24.608580   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:24.608590   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:24.608596   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:24.612062   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:24.808253   37715 request.go:632] Waited for 195.326237ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:55:24.808332   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:55:24.808343   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:24.808352   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:24.808358   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:24.811435   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:24.811848   37715 pod_ready.go:93] pod "kube-apiserver-ha-931571-m02" in "kube-system" namespace has status "Ready":"True"
	I1104 10:55:24.811868   37715 pod_ready.go:82] duration metric: took 399.549963ms for pod "kube-apiserver-ha-931571-m02" in "kube-system" namespace to be "Ready" ...
	I1104 10:55:24.811877   37715 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-931571-m03" in "kube-system" namespace to be "Ready" ...
	I1104 10:55:25.008126   37715 request.go:632] Waited for 196.158524ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-931571-m03
	I1104 10:55:25.008216   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-931571-m03
	I1104 10:55:25.008224   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:25.008232   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:25.008237   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:25.011898   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:25.207886   37715 request.go:632] Waited for 195.224715ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:25.207967   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:25.207975   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:25.207983   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:25.207987   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:25.211174   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:25.211794   37715 pod_ready.go:93] pod "kube-apiserver-ha-931571-m03" in "kube-system" namespace has status "Ready":"True"
	I1104 10:55:25.211815   37715 pod_ready.go:82] duration metric: took 399.930178ms for pod "kube-apiserver-ha-931571-m03" in "kube-system" namespace to be "Ready" ...
	I1104 10:55:25.211828   37715 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-931571" in "kube-system" namespace to be "Ready" ...
	I1104 10:55:25.407990   37715 request.go:632] Waited for 196.084804ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-931571
	I1104 10:55:25.408049   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-931571
	I1104 10:55:25.408054   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:25.408062   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:25.408065   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:25.411212   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:25.608267   37715 request.go:632] Waited for 196.399136ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/nodes/ha-931571
	I1104 10:55:25.608341   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571
	I1104 10:55:25.608348   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:25.608358   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:25.608363   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:25.611599   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:25.612277   37715 pod_ready.go:93] pod "kube-controller-manager-ha-931571" in "kube-system" namespace has status "Ready":"True"
	I1104 10:55:25.612297   37715 pod_ready.go:82] duration metric: took 400.459599ms for pod "kube-controller-manager-ha-931571" in "kube-system" namespace to be "Ready" ...
	I1104 10:55:25.612307   37715 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-931571-m02" in "kube-system" namespace to be "Ready" ...
	I1104 10:55:25.808295   37715 request.go:632] Waited for 195.907201ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-931571-m02
	I1104 10:55:25.808358   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-931571-m02
	I1104 10:55:25.808364   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:25.808371   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:25.808379   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:25.811856   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:26.007942   37715 request.go:632] Waited for 195.386929ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:55:26.008009   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:55:26.008020   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:26.008034   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:26.008043   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:26.010794   37715 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1104 10:55:26.011251   37715 pod_ready.go:93] pod "kube-controller-manager-ha-931571-m02" in "kube-system" namespace has status "Ready":"True"
	I1104 10:55:26.011269   37715 pod_ready.go:82] duration metric: took 398.955793ms for pod "kube-controller-manager-ha-931571-m02" in "kube-system" namespace to be "Ready" ...
	I1104 10:55:26.011279   37715 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-931571-m03" in "kube-system" namespace to be "Ready" ...
	I1104 10:55:26.207834   37715 request.go:632] Waited for 196.482261ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-931571-m03
	I1104 10:55:26.207909   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-931571-m03
	I1104 10:55:26.207922   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:26.207934   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:26.207939   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:26.211083   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:26.407914   37715 request.go:632] Waited for 196.093119ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:26.407994   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:26.407999   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:26.408006   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:26.408012   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:26.411522   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:26.412011   37715 pod_ready.go:93] pod "kube-controller-manager-ha-931571-m03" in "kube-system" namespace has status "Ready":"True"
	I1104 10:55:26.412034   37715 pod_ready.go:82] duration metric: took 400.747328ms for pod "kube-controller-manager-ha-931571-m03" in "kube-system" namespace to be "Ready" ...
	I1104 10:55:26.412048   37715 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-bvk6r" in "kube-system" namespace to be "Ready" ...
	I1104 10:55:26.608324   37715 request.go:632] Waited for 196.200888ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bvk6r
	I1104 10:55:26.608407   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bvk6r
	I1104 10:55:26.608414   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:26.608430   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:26.608437   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:26.611990   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:26.808246   37715 request.go:632] Waited for 195.355588ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/nodes/ha-931571
	I1104 10:55:26.808295   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571
	I1104 10:55:26.808300   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:26.808308   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:26.808311   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:26.811118   37715 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1104 10:55:26.811682   37715 pod_ready.go:93] pod "kube-proxy-bvk6r" in "kube-system" namespace has status "Ready":"True"
	I1104 10:55:26.811705   37715 pod_ready.go:82] duration metric: took 399.648214ms for pod "kube-proxy-bvk6r" in "kube-system" namespace to be "Ready" ...
	I1104 10:55:26.811718   37715 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-ttq4z" in "kube-system" namespace to be "Ready" ...
	I1104 10:55:27.008596   37715 request.go:632] Waited for 196.775543ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ttq4z
	I1104 10:55:27.008670   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ttq4z
	I1104 10:55:27.008677   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:27.008685   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:27.008691   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:27.012209   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:27.208175   37715 request.go:632] Waited for 195.363562ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:27.208234   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:27.208240   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:27.208247   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:27.208250   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:27.211552   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:27.212061   37715 pod_ready.go:93] pod "kube-proxy-ttq4z" in "kube-system" namespace has status "Ready":"True"
	I1104 10:55:27.212084   37715 pod_ready.go:82] duration metric: took 400.357853ms for pod "kube-proxy-ttq4z" in "kube-system" namespace to be "Ready" ...
	I1104 10:55:27.212098   37715 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-wz92s" in "kube-system" namespace to be "Ready" ...
	I1104 10:55:27.408120   37715 request.go:632] Waited for 195.934645ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wz92s
	I1104 10:55:27.408175   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wz92s
	I1104 10:55:27.408180   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:27.408188   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:27.408194   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:27.411594   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:27.607502   37715 request.go:632] Waited for 195.309631ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:55:27.607589   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:55:27.607599   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:27.607611   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:27.607621   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:27.610707   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:27.611551   37715 pod_ready.go:93] pod "kube-proxy-wz92s" in "kube-system" namespace has status "Ready":"True"
	I1104 10:55:27.611571   37715 pod_ready.go:82] duration metric: took 399.465223ms for pod "kube-proxy-wz92s" in "kube-system" namespace to be "Ready" ...
	I1104 10:55:27.611584   37715 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-931571" in "kube-system" namespace to be "Ready" ...
	I1104 10:55:27.807587   37715 request.go:632] Waited for 195.935372ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-931571
	I1104 10:55:27.807677   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-931571
	I1104 10:55:27.807686   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:27.807694   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:27.807697   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:27.810852   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:28.007894   37715 request.go:632] Waited for 196.377136ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/nodes/ha-931571
	I1104 10:55:28.007943   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571
	I1104 10:55:28.007948   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:28.007955   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:28.007959   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:28.010780   37715 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1104 10:55:28.011225   37715 pod_ready.go:93] pod "kube-scheduler-ha-931571" in "kube-system" namespace has status "Ready":"True"
	I1104 10:55:28.011242   37715 pod_ready.go:82] duration metric: took 399.65101ms for pod "kube-scheduler-ha-931571" in "kube-system" namespace to be "Ready" ...
	I1104 10:55:28.011252   37715 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-931571-m02" in "kube-system" namespace to be "Ready" ...
	I1104 10:55:28.208327   37715 request.go:632] Waited for 197.007106ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-931571-m02
	I1104 10:55:28.208398   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-931571-m02
	I1104 10:55:28.208406   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:28.208412   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:28.208417   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:28.211868   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:28.407823   37715 request.go:632] Waited for 195.386338ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:55:28.407915   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:55:28.407922   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:28.407929   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:28.407936   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:28.411100   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:28.411750   37715 pod_ready.go:93] pod "kube-scheduler-ha-931571-m02" in "kube-system" namespace has status "Ready":"True"
	I1104 10:55:28.411766   37715 pod_ready.go:82] duration metric: took 400.505326ms for pod "kube-scheduler-ha-931571-m02" in "kube-system" namespace to be "Ready" ...
	I1104 10:55:28.411776   37715 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-931571-m03" in "kube-system" namespace to be "Ready" ...
	I1104 10:55:28.607873   37715 request.go:632] Waited for 196.030747ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-931571-m03
	I1104 10:55:28.607978   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-931571-m03
	I1104 10:55:28.607989   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:28.607996   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:28.607999   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:28.611695   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:28.807696   37715 request.go:632] Waited for 195.284295ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:28.807770   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:28.807776   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:28.807783   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:28.807788   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:28.811278   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:28.812008   37715 pod_ready.go:93] pod "kube-scheduler-ha-931571-m03" in "kube-system" namespace has status "Ready":"True"
	I1104 10:55:28.812025   37715 pod_ready.go:82] duration metric: took 400.242831ms for pod "kube-scheduler-ha-931571-m03" in "kube-system" namespace to be "Ready" ...
	I1104 10:55:28.812037   37715 pod_ready.go:39] duration metric: took 5.200555034s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
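The per-pod waits above check that every pod matching the listed system-critical labels reports a Ready=True condition. A hedged sketch of that check follows; the helper name is hypothetical and the clientset is assumed as before.

    package readiness

    import (
    	"context"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // systemPodsReady reports whether every kube-system pod matching the given
    // label selector (e.g. "k8s-app=kube-dns" or "component=etcd") has a
    // Ready=True condition, which is what the per-pod waits above verify.
    func systemPodsReady(ctx context.Context, cs *kubernetes.Clientset, selector string) (bool, error) {
    	pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{LabelSelector: selector})
    	if err != nil {
    		return false, err
    	}
    	for _, pod := range pods.Items {
    		ready := false
    		for _, cond := range pod.Status.Conditions {
    			if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
    				ready = true
    				break
    			}
    		}
    		if !ready {
    			return false, nil
    		}
    	}
    	return true, nil
    }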
	I1104 10:55:28.812050   37715 api_server.go:52] waiting for apiserver process to appear ...
	I1104 10:55:28.812101   37715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 10:55:28.825529   37715 api_server.go:72] duration metric: took 22.539799278s to wait for apiserver process to appear ...
	I1104 10:55:28.825558   37715 api_server.go:88] waiting for apiserver healthz status ...
	I1104 10:55:28.825578   37715 api_server.go:253] Checking apiserver healthz at https://192.168.39.67:8443/healthz ...
	I1104 10:55:28.829724   37715 api_server.go:279] https://192.168.39.67:8443/healthz returned 200:
	ok
	I1104 10:55:28.829787   37715 round_trippers.go:463] GET https://192.168.39.67:8443/version
	I1104 10:55:28.829795   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:28.829803   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:28.829807   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:28.830888   37715 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1104 10:55:28.830964   37715 api_server.go:141] control plane version: v1.31.2
	I1104 10:55:28.830984   37715 api_server.go:131] duration metric: took 5.41894ms to wait for apiserver health ...
	I1104 10:55:28.830996   37715 system_pods.go:43] waiting for kube-system pods to appear ...
	I1104 10:55:29.008134   37715 request.go:632] Waited for 177.060621ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods
	I1104 10:55:29.008207   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods
	I1104 10:55:29.008237   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:29.008252   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:29.008298   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:29.014200   37715 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1104 10:55:29.021556   37715 system_pods.go:59] 24 kube-system pods found
	I1104 10:55:29.021592   37715 system_pods.go:61] "coredns-7c65d6cfc9-5ss4v" [b1994bcf-ce9e-4a5e-90e0-5f3e284218f4] Running
	I1104 10:55:29.021600   37715 system_pods.go:61] "coredns-7c65d6cfc9-s9wb4" [fd497087-82a1-4173-a1ca-87f47225cd80] Running
	I1104 10:55:29.021611   37715 system_pods.go:61] "etcd-ha-931571" [fdadf64d-457c-4f54-8824-770c47938a4d] Running
	I1104 10:55:29.021616   37715 system_pods.go:61] "etcd-ha-931571-m02" [b40b2a26-19b6-47f9-af25-dcbffbe55156] Running
	I1104 10:55:29.021627   37715 system_pods.go:61] "etcd-ha-931571-m03" [8bda5677-cbd9-4c5c-9a71-4d7d4ca3796b] Running
	I1104 10:55:29.021633   37715 system_pods.go:61] "kindnet-2n2ws" [f43095ed-404a-4c99-a271-a8c7fb6a3559] Running
	I1104 10:55:29.021643   37715 system_pods.go:61] "kindnet-bg4z6" [43eed78a-1357-4607-bff5-a1c896da4af2] Running
	I1104 10:55:29.021649   37715 system_pods.go:61] "kindnet-w2jwt" [be594a41-9200-4e2b-a8df-057c381bc0f7] Running
	I1104 10:55:29.021653   37715 system_pods.go:61] "kube-apiserver-ha-931571" [2ba59318-d54d-4948-8133-2ff2afa001e5] Running
	I1104 10:55:29.021658   37715 system_pods.go:61] "kube-apiserver-ha-931571-m02" [6a6bfd7d-cec1-4e07-90bf-c933f871eef1] Running
	I1104 10:55:29.021673   37715 system_pods.go:61] "kube-apiserver-ha-931571-m03" [cc3a9082-873f-4426-98a3-5fcafd0ecc49] Running
	I1104 10:55:29.021679   37715 system_pods.go:61] "kube-controller-manager-ha-931571" [62d03af1-aa91-4ebf-af21-19f760956cf5] Running
	I1104 10:55:29.021684   37715 system_pods.go:61] "kube-controller-manager-ha-931571-m02" [96d65b2a-66c8-411a-bb4b-5ff222b7832d] Running
	I1104 10:55:29.021689   37715 system_pods.go:61] "kube-controller-manager-ha-931571-m03" [a52ddcf8-6212-4701-823d-5d88f1291d38] Running
	I1104 10:55:29.021694   37715 system_pods.go:61] "kube-proxy-bvk6r" [5f293726-a3a3-4398-9b70-ca8f83c66d7c] Running
	I1104 10:55:29.021703   37715 system_pods.go:61] "kube-proxy-ttq4z" [115ca0e9-7fd8-4cbc-8f2a-ec4edfea2b2b] Running
	I1104 10:55:29.021708   37715 system_pods.go:61] "kube-proxy-wz92s" [a2e065c2-9645-44e4-b4e8-dc787b0c6662] Running
	I1104 10:55:29.021714   37715 system_pods.go:61] "kube-scheduler-ha-931571" [8bc3d9c3-2b41-4f54-a511-34939218fa5b] Running
	I1104 10:55:29.021718   37715 system_pods.go:61] "kube-scheduler-ha-931571-m02" [4329adba-71fa-425a-b379-6e52af90b458] Running
	I1104 10:55:29.021723   37715 system_pods.go:61] "kube-scheduler-ha-931571-m03" [db854b86-c89b-43a8-b3c4-e1cca5033fca] Running
	I1104 10:55:29.021739   37715 system_pods.go:61] "kube-vip-ha-931571" [f9948426-2770-47cf-b610-ecfea5b17be9] Running / Ready:ContainersNotReady (containers with unready status: [kube-vip]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-vip])
	I1104 10:55:29.021748   37715 system_pods.go:61] "kube-vip-ha-931571-m02" [860a8a9e-b839-4c23-80b5-415a62fca083] Running / Ready:ContainersNotReady (containers with unready status: [kube-vip]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-vip])
	I1104 10:55:29.021757   37715 system_pods.go:61] "kube-vip-ha-931571-m03" [cca6009a-1a2e-418c-8507-ced1c3c73333] Running / Ready:ContainersNotReady (containers with unready status: [kube-vip]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-vip])
	I1104 10:55:29.021768   37715 system_pods.go:61] "storage-provisioner" [3eb09a1d-0033-428a-a305-aa2901b20566] Running
	I1104 10:55:29.021776   37715 system_pods.go:74] duration metric: took 190.77233ms to wait for pod list to return data ...
	I1104 10:55:29.021785   37715 default_sa.go:34] waiting for default service account to be created ...
	I1104 10:55:29.207606   37715 request.go:632] Waited for 185.728415ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/default/serviceaccounts
	I1104 10:55:29.207670   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/default/serviceaccounts
	I1104 10:55:29.207676   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:29.207686   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:29.207695   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:29.218692   37715 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I1104 10:55:29.218828   37715 default_sa.go:45] found service account: "default"
	I1104 10:55:29.218847   37715 default_sa.go:55] duration metric: took 197.054864ms for default service account to be created ...
	I1104 10:55:29.218857   37715 system_pods.go:116] waiting for k8s-apps to be running ...
	I1104 10:55:29.408474   37715 request.go:632] Waited for 189.535523ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods
	I1104 10:55:29.408534   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods
	I1104 10:55:29.408539   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:29.408546   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:29.408550   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:29.414296   37715 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1104 10:55:29.422499   37715 system_pods.go:86] 24 kube-system pods found
	I1104 10:55:29.422532   37715 system_pods.go:89] "coredns-7c65d6cfc9-5ss4v" [b1994bcf-ce9e-4a5e-90e0-5f3e284218f4] Running
	I1104 10:55:29.422537   37715 system_pods.go:89] "coredns-7c65d6cfc9-s9wb4" [fd497087-82a1-4173-a1ca-87f47225cd80] Running
	I1104 10:55:29.422541   37715 system_pods.go:89] "etcd-ha-931571" [fdadf64d-457c-4f54-8824-770c47938a4d] Running
	I1104 10:55:29.422545   37715 system_pods.go:89] "etcd-ha-931571-m02" [b40b2a26-19b6-47f9-af25-dcbffbe55156] Running
	I1104 10:55:29.422549   37715 system_pods.go:89] "etcd-ha-931571-m03" [8bda5677-cbd9-4c5c-9a71-4d7d4ca3796b] Running
	I1104 10:55:29.422553   37715 system_pods.go:89] "kindnet-2n2ws" [f43095ed-404a-4c99-a271-a8c7fb6a3559] Running
	I1104 10:55:29.422557   37715 system_pods.go:89] "kindnet-bg4z6" [43eed78a-1357-4607-bff5-a1c896da4af2] Running
	I1104 10:55:29.422560   37715 system_pods.go:89] "kindnet-w2jwt" [be594a41-9200-4e2b-a8df-057c381bc0f7] Running
	I1104 10:55:29.422563   37715 system_pods.go:89] "kube-apiserver-ha-931571" [2ba59318-d54d-4948-8133-2ff2afa001e5] Running
	I1104 10:55:29.422567   37715 system_pods.go:89] "kube-apiserver-ha-931571-m02" [6a6bfd7d-cec1-4e07-90bf-c933f871eef1] Running
	I1104 10:55:29.422571   37715 system_pods.go:89] "kube-apiserver-ha-931571-m03" [cc3a9082-873f-4426-98a3-5fcafd0ecc49] Running
	I1104 10:55:29.422576   37715 system_pods.go:89] "kube-controller-manager-ha-931571" [62d03af1-aa91-4ebf-af21-19f760956cf5] Running
	I1104 10:55:29.422582   37715 system_pods.go:89] "kube-controller-manager-ha-931571-m02" [96d65b2a-66c8-411a-bb4b-5ff222b7832d] Running
	I1104 10:55:29.422588   37715 system_pods.go:89] "kube-controller-manager-ha-931571-m03" [a52ddcf8-6212-4701-823d-5d88f1291d38] Running
	I1104 10:55:29.422593   37715 system_pods.go:89] "kube-proxy-bvk6r" [5f293726-a3a3-4398-9b70-ca8f83c66d7c] Running
	I1104 10:55:29.422598   37715 system_pods.go:89] "kube-proxy-ttq4z" [115ca0e9-7fd8-4cbc-8f2a-ec4edfea2b2b] Running
	I1104 10:55:29.422604   37715 system_pods.go:89] "kube-proxy-wz92s" [a2e065c2-9645-44e4-b4e8-dc787b0c6662] Running
	I1104 10:55:29.422614   37715 system_pods.go:89] "kube-scheduler-ha-931571" [8bc3d9c3-2b41-4f54-a511-34939218fa5b] Running
	I1104 10:55:29.422621   37715 system_pods.go:89] "kube-scheduler-ha-931571-m02" [4329adba-71fa-425a-b379-6e52af90b458] Running
	I1104 10:55:29.422624   37715 system_pods.go:89] "kube-scheduler-ha-931571-m03" [db854b86-c89b-43a8-b3c4-e1cca5033fca] Running
	I1104 10:55:29.422633   37715 system_pods.go:89] "kube-vip-ha-931571" [f9948426-2770-47cf-b610-ecfea5b17be9] Running / Ready:ContainersNotReady (containers with unready status: [kube-vip]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-vip])
	I1104 10:55:29.422642   37715 system_pods.go:89] "kube-vip-ha-931571-m02" [860a8a9e-b839-4c23-80b5-415a62fca083] Running / Ready:ContainersNotReady (containers with unready status: [kube-vip]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-vip])
	I1104 10:55:29.422650   37715 system_pods.go:89] "kube-vip-ha-931571-m03" [cca6009a-1a2e-418c-8507-ced1c3c73333] Running / Ready:ContainersNotReady (containers with unready status: [kube-vip]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-vip])
	I1104 10:55:29.422656   37715 system_pods.go:89] "storage-provisioner" [3eb09a1d-0033-428a-a305-aa2901b20566] Running
	I1104 10:55:29.422665   37715 system_pods.go:126] duration metric: took 203.801845ms to wait for k8s-apps to be running ...
	I1104 10:55:29.422676   37715 system_svc.go:44] waiting for kubelet service to be running ....
	I1104 10:55:29.422727   37715 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1104 10:55:29.439259   37715 system_svc.go:56] duration metric: took 16.56809ms WaitForService to wait for kubelet
	I1104 10:55:29.439296   37715 kubeadm.go:582] duration metric: took 23.153569026s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1104 10:55:29.439318   37715 node_conditions.go:102] verifying NodePressure condition ...
	I1104 10:55:29.607660   37715 request.go:632] Waited for 168.244277ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/nodes
	I1104 10:55:29.607713   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes
	I1104 10:55:29.607718   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:29.607726   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:29.607732   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:29.611371   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:29.612755   37715 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1104 10:55:29.612781   37715 node_conditions.go:123] node cpu capacity is 2
	I1104 10:55:29.612794   37715 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1104 10:55:29.612800   37715 node_conditions.go:123] node cpu capacity is 2
	I1104 10:55:29.612807   37715 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1104 10:55:29.612811   37715 node_conditions.go:123] node cpu capacity is 2
	I1104 10:55:29.612817   37715 node_conditions.go:105] duration metric: took 173.492197ms to run NodePressure ...
	I1104 10:55:29.612832   37715 start.go:241] waiting for startup goroutines ...
	I1104 10:55:29.612860   37715 start.go:255] writing updated cluster config ...
	I1104 10:55:29.613201   37715 ssh_runner.go:195] Run: rm -f paused
	I1104 10:55:29.662232   37715 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1104 10:55:29.664453   37715 out.go:177] * Done! kubectl is now configured to use "ha-931571" cluster and "default" namespace by default
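	(Note: the run above ends with the usual readiness checks — the log polls https://192.168.39.67:8443/healthz until the apiserver answers 200 "ok" before reporting "Done!". The snippet below is a minimal, illustrative Go sketch of that kind of healthz poll; it is not minikube's implementation, and the URL, timeout, and function name are placeholders chosen for the example.)

	// healthz_poll.go — illustrative sketch only, assuming a self-signed apiserver cert.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz polls url until it returns HTTP 200 with body "ok",
	// or the overall timeout expires.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// The test cluster serves a self-signed certificate, so verification
			// is skipped in this sketch; a real client should trust the cluster CA.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK && string(body) == "ok" {
					return nil // apiserver reports healthy
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver at %s not healthy within %s", url, timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.39.67:8443/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}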
	
	
	==> CRI-O <==
	Nov 04 10:59:19 ha-931571 crio[659]: time="2024-11-04 10:59:19.817905024Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730717959817883809,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156098,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=027aabd4-a1b3-402b-a7b4-11aa48a2e122 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 04 10:59:19 ha-931571 crio[659]: time="2024-11-04 10:59:19.818332289Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c4efc6a4-269f-4ed7-a2e5-dd05d82225c3 name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 10:59:19 ha-931571 crio[659]: time="2024-11-04 10:59:19.818396372Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c4efc6a4-269f-4ed7-a2e5-dd05d82225c3 name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 10:59:19 ha-931571 crio[659]: time="2024-11-04 10:59:19.818616468Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:801830521b8c68ec780508f94b0f3d8c52a6c3d5458328e719c2ce3178c47cc3,PodSandboxId:c376c65bb2b6ba1d92a006e61c82e1ca033b12c8a5bfc737dbac753ed4190360,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:7,},Image:&ImageSpec{Image:77fa55e9c991e50f69a8af41fbbbe0cd8a6fa6fd87327b07ed933c1c02a4f488,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:77fa55e9c991e50f69a8af41fbbbe0cd8a6fa6fd87327b07ed933c1c02a4f488,State:CONTAINER_EXITED,CreatedAt:1730717933792975882,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-931571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7bfae2f58ae7de463dba4b274c633ef,},Annotations:map[string]string{io.kubernetes.container.hash: 633bdfb,io.kubernetes.container.restartCount: 7,io.kubernetes.container.terminationMessagePath: /dev/te
rmination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ecc02a44b9547818a8aaa2b603bb97e4465acb589e9938089cc84862bb537651,PodSandboxId:ca422d1f835b462e7c44e7832053f6b8843511d5eeba3ced31c8b0b6f51661ee,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1730717733201575265,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-nslmz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 68017266-8187-488d-ab36-2a5af294fa2e,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/ter
mination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:400aa38b5335627cc08143a9b2a5627b7fee85d555c2cc40a536ce98a76dc457,PodSandboxId:c6e22705ccc1865b8bc5effb151c1f9d726558ad88b6a3bcf86428c0e051f88a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730717598667544377,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-s9wb4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd497087-82a1-4173-a1ca-87f47225cd80,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerP
ort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49e75724c5eada14bded615bd1ba93f602ae8bcdf4c5ed95b646991a79dc403c,PodSandboxId:bcbca8745afa774e9251a00635a6a08e6f86c862db07fa69ac81ee2c0b157967,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730717598624298430,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-5ss4v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1994bcf-ce9e-4a5e-90e0-5f3e284218f4,},A
nnotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8efbd7a72ea51074ffa14c6c164b0072c5d57e24d1bd5b6d1a123aa8216069c,PodSandboxId:b15baa796a09ec04b514d2061ed59422516c1f7e4439ba3fcbebb73cbd3afa05,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730717598609872957,Labels:ma
p[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3eb09a1d-0033-428a-a305-aa2901b20566,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4401315f385bf2a6e5d875c655d4d61ff68d76ccf428956355ffd500a9ce2bc0,PodSandboxId:220337aaf496c29271e7e054b3cdfea66b7c252c48cb49a49e7654fb61d21a91,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5,State:CONTAINER_RUNNING,CreatedAt:173071758708362
2058,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2n2ws,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f43095ed-404a-4c99-a271-a8c7fb6a3559,},Annotations:map[string]string{io.kubernetes.container.hash: e48ee0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e592fe17c5f71c11cf25a871efff88360e0232b8ea66e460109b68f40581ba8,PodSandboxId:88e06a89dd6f22e1089e72d0e95bb740d4472413789aed6751e5201c34bce07d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730717583914338539,Labels:map[string]string{io.kub
ernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bvk6r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f293726-a3a3-4398-9b70-ca8f83c66d7c,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e50ab0290e7c2f22e6df511793236d28a6e0f3e1d5dbdcdd2e3447e4c26c2e6c,PodSandboxId:b36f0d25b985ad35c72d61e5d419af4761c0ed5584860b2c0eda0017653cfaa5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730717572302806843,Labels:map[string]string{io.kubernetes.container.name: kube-
scheduler,io.kubernetes.pod.name: kube-scheduler-ha-931571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04abf0ed929591b9a922eba9b45e06b4,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4572c8bcb28cdf71917ee1df07e150610c3e183aaa1243eb84ab3c083f31f7bc,PodSandboxId:9659e6073c7aea4a2bc7bbd2bc5081cfaf29c86595120748fa2b6d637cfd0405,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730717572280739492,Labels:map[string]string{io.kubernetes.container.name: kube-controller-m
anager,io.kubernetes.pod.name: kube-controller-manager-ha-931571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4685ec45b7a2365863fd185bc1066ff5,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82e4be064be10644428d59bf1bc4467a8666cf78ec7b830a51e614de7c4b3150,PodSandboxId:d779a632ccdcabf2a834569e1b03676bb2cb2ecac031cdb417048bfd227afd27,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730717572221533934,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.ku
bernetes.pod.name: kube-apiserver-ha-931571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 488ad91ee064d442db18849afe83c778,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2d32daf142ba5b73e2f0491ea4ad911e8456739f191faf1cc001827eb91790c,PodSandboxId:76529e2f353a6384d08c629e08edb56d628147ffb7c9b12a3b4fd7f6b94b2b61,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730717572176692911,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-931571,io.kube
rnetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bdade1472bd07799de85a7bf300c651f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c4efc6a4-269f-4ed7-a2e5-dd05d82225c3 name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 10:59:19 ha-931571 crio[659]: time="2024-11-04 10:59:19.854371339Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a52b12a2-efc7-484f-900e-0b141775b69c name=/runtime.v1.RuntimeService/Version
	Nov 04 10:59:19 ha-931571 crio[659]: time="2024-11-04 10:59:19.854460105Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a52b12a2-efc7-484f-900e-0b141775b69c name=/runtime.v1.RuntimeService/Version
	Nov 04 10:59:19 ha-931571 crio[659]: time="2024-11-04 10:59:19.855492132Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=23fb01dc-fc23-4a10-871b-6f8d11babc92 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 04 10:59:19 ha-931571 crio[659]: time="2024-11-04 10:59:19.856061469Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730717959856040849,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156098,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=23fb01dc-fc23-4a10-871b-6f8d11babc92 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 04 10:59:19 ha-931571 crio[659]: time="2024-11-04 10:59:19.856546611Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7410c8d6-ff8b-42e1-b192-debea4fa7de2 name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 10:59:19 ha-931571 crio[659]: time="2024-11-04 10:59:19.856627661Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7410c8d6-ff8b-42e1-b192-debea4fa7de2 name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 10:59:19 ha-931571 crio[659]: time="2024-11-04 10:59:19.856881344Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:801830521b8c68ec780508f94b0f3d8c52a6c3d5458328e719c2ce3178c47cc3,PodSandboxId:c376c65bb2b6ba1d92a006e61c82e1ca033b12c8a5bfc737dbac753ed4190360,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:7,},Image:&ImageSpec{Image:77fa55e9c991e50f69a8af41fbbbe0cd8a6fa6fd87327b07ed933c1c02a4f488,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:77fa55e9c991e50f69a8af41fbbbe0cd8a6fa6fd87327b07ed933c1c02a4f488,State:CONTAINER_EXITED,CreatedAt:1730717933792975882,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-931571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7bfae2f58ae7de463dba4b274c633ef,},Annotations:map[string]string{io.kubernetes.container.hash: 633bdfb,io.kubernetes.container.restartCount: 7,io.kubernetes.container.terminationMessagePath: /dev/te
rmination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ecc02a44b9547818a8aaa2b603bb97e4465acb589e9938089cc84862bb537651,PodSandboxId:ca422d1f835b462e7c44e7832053f6b8843511d5eeba3ced31c8b0b6f51661ee,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1730717733201575265,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-nslmz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 68017266-8187-488d-ab36-2a5af294fa2e,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/ter
mination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:400aa38b5335627cc08143a9b2a5627b7fee85d555c2cc40a536ce98a76dc457,PodSandboxId:c6e22705ccc1865b8bc5effb151c1f9d726558ad88b6a3bcf86428c0e051f88a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730717598667544377,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-s9wb4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd497087-82a1-4173-a1ca-87f47225cd80,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerP
ort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49e75724c5eada14bded615bd1ba93f602ae8bcdf4c5ed95b646991a79dc403c,PodSandboxId:bcbca8745afa774e9251a00635a6a08e6f86c862db07fa69ac81ee2c0b157967,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730717598624298430,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-5ss4v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1994bcf-ce9e-4a5e-90e0-5f3e284218f4,},A
nnotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8efbd7a72ea51074ffa14c6c164b0072c5d57e24d1bd5b6d1a123aa8216069c,PodSandboxId:b15baa796a09ec04b514d2061ed59422516c1f7e4439ba3fcbebb73cbd3afa05,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730717598609872957,Labels:ma
p[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3eb09a1d-0033-428a-a305-aa2901b20566,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4401315f385bf2a6e5d875c655d4d61ff68d76ccf428956355ffd500a9ce2bc0,PodSandboxId:220337aaf496c29271e7e054b3cdfea66b7c252c48cb49a49e7654fb61d21a91,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5,State:CONTAINER_RUNNING,CreatedAt:173071758708362
2058,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2n2ws,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f43095ed-404a-4c99-a271-a8c7fb6a3559,},Annotations:map[string]string{io.kubernetes.container.hash: e48ee0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e592fe17c5f71c11cf25a871efff88360e0232b8ea66e460109b68f40581ba8,PodSandboxId:88e06a89dd6f22e1089e72d0e95bb740d4472413789aed6751e5201c34bce07d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730717583914338539,Labels:map[string]string{io.kub
ernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bvk6r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f293726-a3a3-4398-9b70-ca8f83c66d7c,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e50ab0290e7c2f22e6df511793236d28a6e0f3e1d5dbdcdd2e3447e4c26c2e6c,PodSandboxId:b36f0d25b985ad35c72d61e5d419af4761c0ed5584860b2c0eda0017653cfaa5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730717572302806843,Labels:map[string]string{io.kubernetes.container.name: kube-
scheduler,io.kubernetes.pod.name: kube-scheduler-ha-931571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04abf0ed929591b9a922eba9b45e06b4,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4572c8bcb28cdf71917ee1df07e150610c3e183aaa1243eb84ab3c083f31f7bc,PodSandboxId:9659e6073c7aea4a2bc7bbd2bc5081cfaf29c86595120748fa2b6d637cfd0405,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730717572280739492,Labels:map[string]string{io.kubernetes.container.name: kube-controller-m
anager,io.kubernetes.pod.name: kube-controller-manager-ha-931571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4685ec45b7a2365863fd185bc1066ff5,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82e4be064be10644428d59bf1bc4467a8666cf78ec7b830a51e614de7c4b3150,PodSandboxId:d779a632ccdcabf2a834569e1b03676bb2cb2ecac031cdb417048bfd227afd27,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730717572221533934,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.ku
bernetes.pod.name: kube-apiserver-ha-931571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 488ad91ee064d442db18849afe83c778,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2d32daf142ba5b73e2f0491ea4ad911e8456739f191faf1cc001827eb91790c,PodSandboxId:76529e2f353a6384d08c629e08edb56d628147ffb7c9b12a3b4fd7f6b94b2b61,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730717572176692911,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-931571,io.kube
rnetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bdade1472bd07799de85a7bf300c651f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7410c8d6-ff8b-42e1-b192-debea4fa7de2 name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 10:59:19 ha-931571 crio[659]: time="2024-11-04 10:59:19.891998479Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8688f8a2-8997-4005-ac22-96d6ae0de985 name=/runtime.v1.RuntimeService/Version
	Nov 04 10:59:19 ha-931571 crio[659]: time="2024-11-04 10:59:19.892082841Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8688f8a2-8997-4005-ac22-96d6ae0de985 name=/runtime.v1.RuntimeService/Version
	Nov 04 10:59:19 ha-931571 crio[659]: time="2024-11-04 10:59:19.893180688Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2adce106-8b84-4af8-b9e4-6b66f5ca82b1 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 04 10:59:19 ha-931571 crio[659]: time="2024-11-04 10:59:19.893604239Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730717959893583179,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156098,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2adce106-8b84-4af8-b9e4-6b66f5ca82b1 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 04 10:59:19 ha-931571 crio[659]: time="2024-11-04 10:59:19.894117867Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a0a19604-594c-4a7a-b64e-01b18497e26e name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 10:59:19 ha-931571 crio[659]: time="2024-11-04 10:59:19.894186412Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a0a19604-594c-4a7a-b64e-01b18497e26e name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 10:59:19 ha-931571 crio[659]: time="2024-11-04 10:59:19.894405277Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:801830521b8c68ec780508f94b0f3d8c52a6c3d5458328e719c2ce3178c47cc3,PodSandboxId:c376c65bb2b6ba1d92a006e61c82e1ca033b12c8a5bfc737dbac753ed4190360,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:7,},Image:&ImageSpec{Image:77fa55e9c991e50f69a8af41fbbbe0cd8a6fa6fd87327b07ed933c1c02a4f488,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:77fa55e9c991e50f69a8af41fbbbe0cd8a6fa6fd87327b07ed933c1c02a4f488,State:CONTAINER_EXITED,CreatedAt:1730717933792975882,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-931571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7bfae2f58ae7de463dba4b274c633ef,},Annotations:map[string]string{io.kubernetes.container.hash: 633bdfb,io.kubernetes.container.restartCount: 7,io.kubernetes.container.terminationMessagePath: /dev/te
rmination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ecc02a44b9547818a8aaa2b603bb97e4465acb589e9938089cc84862bb537651,PodSandboxId:ca422d1f835b462e7c44e7832053f6b8843511d5eeba3ced31c8b0b6f51661ee,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1730717733201575265,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-nslmz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 68017266-8187-488d-ab36-2a5af294fa2e,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/ter
mination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:400aa38b5335627cc08143a9b2a5627b7fee85d555c2cc40a536ce98a76dc457,PodSandboxId:c6e22705ccc1865b8bc5effb151c1f9d726558ad88b6a3bcf86428c0e051f88a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730717598667544377,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-s9wb4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd497087-82a1-4173-a1ca-87f47225cd80,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerP
ort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49e75724c5eada14bded615bd1ba93f602ae8bcdf4c5ed95b646991a79dc403c,PodSandboxId:bcbca8745afa774e9251a00635a6a08e6f86c862db07fa69ac81ee2c0b157967,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730717598624298430,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-5ss4v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1994bcf-ce9e-4a5e-90e0-5f3e284218f4,},A
nnotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8efbd7a72ea51074ffa14c6c164b0072c5d57e24d1bd5b6d1a123aa8216069c,PodSandboxId:b15baa796a09ec04b514d2061ed59422516c1f7e4439ba3fcbebb73cbd3afa05,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730717598609872957,Labels:ma
p[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3eb09a1d-0033-428a-a305-aa2901b20566,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4401315f385bf2a6e5d875c655d4d61ff68d76ccf428956355ffd500a9ce2bc0,PodSandboxId:220337aaf496c29271e7e054b3cdfea66b7c252c48cb49a49e7654fb61d21a91,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5,State:CONTAINER_RUNNING,CreatedAt:173071758708362
2058,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2n2ws,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f43095ed-404a-4c99-a271-a8c7fb6a3559,},Annotations:map[string]string{io.kubernetes.container.hash: e48ee0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e592fe17c5f71c11cf25a871efff88360e0232b8ea66e460109b68f40581ba8,PodSandboxId:88e06a89dd6f22e1089e72d0e95bb740d4472413789aed6751e5201c34bce07d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730717583914338539,Labels:map[string]string{io.kub
ernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bvk6r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f293726-a3a3-4398-9b70-ca8f83c66d7c,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e50ab0290e7c2f22e6df511793236d28a6e0f3e1d5dbdcdd2e3447e4c26c2e6c,PodSandboxId:b36f0d25b985ad35c72d61e5d419af4761c0ed5584860b2c0eda0017653cfaa5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730717572302806843,Labels:map[string]string{io.kubernetes.container.name: kube-
scheduler,io.kubernetes.pod.name: kube-scheduler-ha-931571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04abf0ed929591b9a922eba9b45e06b4,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4572c8bcb28cdf71917ee1df07e150610c3e183aaa1243eb84ab3c083f31f7bc,PodSandboxId:9659e6073c7aea4a2bc7bbd2bc5081cfaf29c86595120748fa2b6d637cfd0405,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730717572280739492,Labels:map[string]string{io.kubernetes.container.name: kube-controller-m
anager,io.kubernetes.pod.name: kube-controller-manager-ha-931571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4685ec45b7a2365863fd185bc1066ff5,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82e4be064be10644428d59bf1bc4467a8666cf78ec7b830a51e614de7c4b3150,PodSandboxId:d779a632ccdcabf2a834569e1b03676bb2cb2ecac031cdb417048bfd227afd27,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730717572221533934,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.ku
bernetes.pod.name: kube-apiserver-ha-931571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 488ad91ee064d442db18849afe83c778,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2d32daf142ba5b73e2f0491ea4ad911e8456739f191faf1cc001827eb91790c,PodSandboxId:76529e2f353a6384d08c629e08edb56d628147ffb7c9b12a3b4fd7f6b94b2b61,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730717572176692911,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-931571,io.kube
rnetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bdade1472bd07799de85a7bf300c651f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a0a19604-594c-4a7a-b64e-01b18497e26e name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 10:59:19 ha-931571 crio[659]: time="2024-11-04 10:59:19.930630215Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0975a17f-8d62-4428-a9cc-552a8c250149 name=/runtime.v1.RuntimeService/Version
	Nov 04 10:59:19 ha-931571 crio[659]: time="2024-11-04 10:59:19.930754626Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0975a17f-8d62-4428-a9cc-552a8c250149 name=/runtime.v1.RuntimeService/Version
	Nov 04 10:59:19 ha-931571 crio[659]: time="2024-11-04 10:59:19.932066538Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5e994bb8-2936-4cee-8d8f-36197db09ffb name=/runtime.v1.ImageService/ImageFsInfo
	Nov 04 10:59:19 ha-931571 crio[659]: time="2024-11-04 10:59:19.932750039Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730717959932719982,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156098,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5e994bb8-2936-4cee-8d8f-36197db09ffb name=/runtime.v1.ImageService/ImageFsInfo
	Nov 04 10:59:19 ha-931571 crio[659]: time="2024-11-04 10:59:19.933343752Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4ace913c-bfe8-4ab0-8d04-b8db5802d7c1 name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 10:59:19 ha-931571 crio[659]: time="2024-11-04 10:59:19.933408524Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4ace913c-bfe8-4ab0-8d04-b8db5802d7c1 name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 10:59:19 ha-931571 crio[659]: time="2024-11-04 10:59:19.933630333Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:801830521b8c68ec780508f94b0f3d8c52a6c3d5458328e719c2ce3178c47cc3,PodSandboxId:c376c65bb2b6ba1d92a006e61c82e1ca033b12c8a5bfc737dbac753ed4190360,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:7,},Image:&ImageSpec{Image:77fa55e9c991e50f69a8af41fbbbe0cd8a6fa6fd87327b07ed933c1c02a4f488,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:77fa55e9c991e50f69a8af41fbbbe0cd8a6fa6fd87327b07ed933c1c02a4f488,State:CONTAINER_EXITED,CreatedAt:1730717933792975882,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-931571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7bfae2f58ae7de463dba4b274c633ef,},Annotations:map[string]string{io.kubernetes.container.hash: 633bdfb,io.kubernetes.container.restartCount: 7,io.kubernetes.container.terminationMessagePath: /dev/te
rmination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ecc02a44b9547818a8aaa2b603bb97e4465acb589e9938089cc84862bb537651,PodSandboxId:ca422d1f835b462e7c44e7832053f6b8843511d5eeba3ced31c8b0b6f51661ee,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1730717733201575265,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-nslmz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 68017266-8187-488d-ab36-2a5af294fa2e,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/ter
mination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:400aa38b5335627cc08143a9b2a5627b7fee85d555c2cc40a536ce98a76dc457,PodSandboxId:c6e22705ccc1865b8bc5effb151c1f9d726558ad88b6a3bcf86428c0e051f88a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730717598667544377,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-s9wb4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd497087-82a1-4173-a1ca-87f47225cd80,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerP
ort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49e75724c5eada14bded615bd1ba93f602ae8bcdf4c5ed95b646991a79dc403c,PodSandboxId:bcbca8745afa774e9251a00635a6a08e6f86c862db07fa69ac81ee2c0b157967,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730717598624298430,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-5ss4v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1994bcf-ce9e-4a5e-90e0-5f3e284218f4,},A
nnotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8efbd7a72ea51074ffa14c6c164b0072c5d57e24d1bd5b6d1a123aa8216069c,PodSandboxId:b15baa796a09ec04b514d2061ed59422516c1f7e4439ba3fcbebb73cbd3afa05,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730717598609872957,Labels:ma
p[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3eb09a1d-0033-428a-a305-aa2901b20566,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4401315f385bf2a6e5d875c655d4d61ff68d76ccf428956355ffd500a9ce2bc0,PodSandboxId:220337aaf496c29271e7e054b3cdfea66b7c252c48cb49a49e7654fb61d21a91,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5,State:CONTAINER_RUNNING,CreatedAt:173071758708362
2058,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2n2ws,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f43095ed-404a-4c99-a271-a8c7fb6a3559,},Annotations:map[string]string{io.kubernetes.container.hash: e48ee0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e592fe17c5f71c11cf25a871efff88360e0232b8ea66e460109b68f40581ba8,PodSandboxId:88e06a89dd6f22e1089e72d0e95bb740d4472413789aed6751e5201c34bce07d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730717583914338539,Labels:map[string]string{io.kub
ernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bvk6r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f293726-a3a3-4398-9b70-ca8f83c66d7c,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e50ab0290e7c2f22e6df511793236d28a6e0f3e1d5dbdcdd2e3447e4c26c2e6c,PodSandboxId:b36f0d25b985ad35c72d61e5d419af4761c0ed5584860b2c0eda0017653cfaa5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730717572302806843,Labels:map[string]string{io.kubernetes.container.name: kube-
scheduler,io.kubernetes.pod.name: kube-scheduler-ha-931571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04abf0ed929591b9a922eba9b45e06b4,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4572c8bcb28cdf71917ee1df07e150610c3e183aaa1243eb84ab3c083f31f7bc,PodSandboxId:9659e6073c7aea4a2bc7bbd2bc5081cfaf29c86595120748fa2b6d637cfd0405,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730717572280739492,Labels:map[string]string{io.kubernetes.container.name: kube-controller-m
anager,io.kubernetes.pod.name: kube-controller-manager-ha-931571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4685ec45b7a2365863fd185bc1066ff5,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82e4be064be10644428d59bf1bc4467a8666cf78ec7b830a51e614de7c4b3150,PodSandboxId:d779a632ccdcabf2a834569e1b03676bb2cb2ecac031cdb417048bfd227afd27,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730717572221533934,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.ku
bernetes.pod.name: kube-apiserver-ha-931571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 488ad91ee064d442db18849afe83c778,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2d32daf142ba5b73e2f0491ea4ad911e8456739f191faf1cc001827eb91790c,PodSandboxId:76529e2f353a6384d08c629e08edb56d628147ffb7c9b12a3b4fd7f6b94b2b61,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730717572176692911,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-931571,io.kube
rnetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bdade1472bd07799de85a7bf300c651f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4ace913c-bfe8-4ab0-8d04-b8db5802d7c1 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	801830521b8c6       77fa55e9c991e50f69a8af41fbbbe0cd8a6fa6fd87327b07ed933c1c02a4f488                                      26 seconds ago      Exited              kube-vip                  7                   c376c65bb2b6b       kube-vip-ha-931571
	ecc02a44b9547       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   ca422d1f835b4       busybox-7dff88458-nslmz
	400aa38b53356       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   c6e22705ccc18       coredns-7c65d6cfc9-s9wb4
	49e75724c5ead       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   bcbca8745afa7       coredns-7c65d6cfc9-5ss4v
	f8efbd7a72ea5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   b15baa796a09e       storage-provisioner
	4401315f385bf       docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16    6 minutes ago       Running             kindnet-cni               0                   220337aaf496c       kindnet-2n2ws
	6e592fe17c5f7       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                      6 minutes ago       Running             kube-proxy                0                   88e06a89dd6f2       kube-proxy-bvk6r
	e50ab0290e7c2       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                      6 minutes ago       Running             kube-scheduler            0                   b36f0d25b985a       kube-scheduler-ha-931571
	4572c8bcb28cd       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                      6 minutes ago       Running             kube-controller-manager   0                   9659e6073c7ae       kube-controller-manager-ha-931571
	82e4be064be10       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                      6 minutes ago       Running             kube-apiserver            0                   d779a632ccdca       kube-apiserver-ha-931571
	f2d32daf142ba       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   76529e2f353a6       etcd-ha-931571
	
	
	==> coredns [400aa38b5335627cc08143a9b2a5627b7fee85d555c2cc40a536ce98a76dc457] <==
	[INFO] 10.244.0.4:50237 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000150549s
	[INFO] 10.244.0.4:46253 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001843568s
	[INFO] 10.244.0.4:55713 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000184256s
	[INFO] 10.244.0.4:40615 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001215052s
	[INFO] 10.244.0.4:48280 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000078576s
	[INFO] 10.244.0.4:54787 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000130955s
	[INFO] 10.244.1.2:58741 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002139116s
	[INFO] 10.244.1.2:37960 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000110836s
	[INFO] 10.244.1.2:58623 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000109212s
	[INFO] 10.244.1.2:51618 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00158249s
	[INFO] 10.244.1.2:43015 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000087484s
	[INFO] 10.244.1.2:39492 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000171988s
	[INFO] 10.244.2.2:48038 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000132123s
	[INFO] 10.244.0.4:35814 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000180509s
	[INFO] 10.244.0.4:60410 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000089999s
	[INFO] 10.244.0.4:47053 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000039998s
	[INFO] 10.244.1.2:58250 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000164547s
	[INFO] 10.244.1.2:52533 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000169574s
	[INFO] 10.244.2.2:44494 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000181065s
	[INFO] 10.244.2.2:58013 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00023451s
	[INFO] 10.244.2.2:52479 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000131262s
	[INFO] 10.244.0.4:40569 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000209971s
	[INFO] 10.244.0.4:39524 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000112991s
	[INFO] 10.244.0.4:47233 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000143713s
	[INFO] 10.244.1.2:40992 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000169174s
	
	
	==> coredns [49e75724c5eada14bded615bd1ba93f602ae8bcdf4c5ed95b646991a79dc403c] <==
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:48964 - 23647 "HINFO IN 8987446281611230695.8255749056578627230. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.085188681s
	[INFO] 10.244.2.2:34961 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.003596703s
	[INFO] 10.244.0.4:37004 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.00010865s
	[INFO] 10.244.0.4:53184 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001905017s
	[INFO] 10.244.1.2:58428 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000083838s
	[INFO] 10.244.1.2:60855 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001943834s
	[INFO] 10.244.2.2:42530 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000210297s
	[INFO] 10.244.2.2:45691 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000254098s
	[INFO] 10.244.2.2:54453 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000116752s
	[INFO] 10.244.0.4:49389 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000239128s
	[INFO] 10.244.0.4:50445 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000078508s
	[INFO] 10.244.1.2:33136 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000123784s
	[INFO] 10.244.1.2:60974 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000079916s
	[INFO] 10.244.2.2:49080 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000171041s
	[INFO] 10.244.2.2:43340 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000142924s
	[INFO] 10.244.2.2:43789 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000094712s
	[INFO] 10.244.0.4:32943 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000072704s
	[INFO] 10.244.1.2:50464 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000118885s
	[INFO] 10.244.1.2:36951 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000148048s
	[INFO] 10.244.2.2:50644 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000135678s
	[INFO] 10.244.0.4:38496 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0001483s
	[INFO] 10.244.1.2:59424 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000211313s
	[INFO] 10.244.1.2:33660 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000134208s
	[INFO] 10.244.1.2:34489 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000138513s
	
	
	==> describe nodes <==
	Name:               ha-931571
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-931571
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c03dd974d73f8853a1a57928c124797a5ae24dc4
	                    minikube.k8s.io/name=ha-931571
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_11_04T10_52_59_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 04 Nov 2024 10:52:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-931571
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 04 Nov 2024 10:59:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 04 Nov 2024 10:56:02 +0000   Mon, 04 Nov 2024 10:52:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 04 Nov 2024 10:56:02 +0000   Mon, 04 Nov 2024 10:52:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 04 Nov 2024 10:56:02 +0000   Mon, 04 Nov 2024 10:52:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 04 Nov 2024 10:56:02 +0000   Mon, 04 Nov 2024 10:53:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.67
	  Hostname:    ha-931571
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 5397aa0c862f4705b75b9757490651ea
	  System UUID:                5397aa0c-862f-4705-b75b-9757490651ea
	  Boot ID:                    17751c92-c71f-4e82-afb4-12da82035155
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-nslmz              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m50s
	  kube-system                 coredns-7c65d6cfc9-5ss4v             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m17s
	  kube-system                 coredns-7c65d6cfc9-s9wb4             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m17s
	  kube-system                 etcd-ha-931571                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m22s
	  kube-system                 kindnet-2n2ws                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m17s
	  kube-system                 kube-apiserver-ha-931571             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m22s
	  kube-system                 kube-controller-manager-ha-931571    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m22s
	  kube-system                 kube-proxy-bvk6r                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m17s
	  kube-system                 kube-scheduler-ha-931571             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m22s
	  kube-system                 kube-vip-ha-931571                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m22s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m16s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m15s  kube-proxy       
	  Normal  Starting                 6m22s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m22s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m22s  kubelet          Node ha-931571 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m22s  kubelet          Node ha-931571 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m22s  kubelet          Node ha-931571 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m18s  node-controller  Node ha-931571 event: Registered Node ha-931571 in Controller
	  Normal  NodeReady                6m2s   kubelet          Node ha-931571 status is now: NodeReady
	  Normal  RegisteredNode           5m23s  node-controller  Node ha-931571 event: Registered Node ha-931571 in Controller
	  Normal  RegisteredNode           4m9s   node-controller  Node ha-931571 event: Registered Node ha-931571 in Controller
	
	
	Name:               ha-931571-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-931571-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c03dd974d73f8853a1a57928c124797a5ae24dc4
	                    minikube.k8s.io/name=ha-931571
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_11_04T10_53_52_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 04 Nov 2024 10:53:49 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-931571-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 04 Nov 2024 10:56:38 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 04 Nov 2024 10:55:52 +0000   Mon, 04 Nov 2024 10:57:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 04 Nov 2024 10:55:52 +0000   Mon, 04 Nov 2024 10:57:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 04 Nov 2024 10:55:52 +0000   Mon, 04 Nov 2024 10:57:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 04 Nov 2024 10:55:52 +0000   Mon, 04 Nov 2024 10:57:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.245
	  Hostname:    ha-931571-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 06772ff96588423e9dc77ed49845e534
	  System UUID:                06772ff9-6588-423e-9dc7-7ed49845e534
	  Boot ID:                    74d940a3-5941-40ed-b058-45da0bd2f171
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-w9wmp                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m50s
	  kube-system                 etcd-ha-931571-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m29s
	  kube-system                 kindnet-bg4z6                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m31s
	  kube-system                 kube-apiserver-ha-931571-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m30s
	  kube-system                 kube-controller-manager-ha-931571-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m30s
	  kube-system                 kube-proxy-wz92s                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m31s
	  kube-system                 kube-scheduler-ha-931571-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m30s
	  kube-system                 kube-vip-ha-931571-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m26s                  kube-proxy       
	  Normal  Starting                 5m31s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m31s (x8 over 5m31s)  kubelet          Node ha-931571-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m31s (x8 over 5m31s)  kubelet          Node ha-931571-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m31s (x7 over 5m31s)  kubelet          Node ha-931571-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m31s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m28s                  node-controller  Node ha-931571-m02 event: Registered Node ha-931571-m02 in Controller
	  Normal  RegisteredNode           5m23s                  node-controller  Node ha-931571-m02 event: Registered Node ha-931571-m02 in Controller
	  Normal  RegisteredNode           4m9s                   node-controller  Node ha-931571-m02 event: Registered Node ha-931571-m02 in Controller
	  Normal  NodeNotReady             119s                   node-controller  Node ha-931571-m02 status is now: NodeNotReady
	
	
	Name:               ha-931571-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-931571-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c03dd974d73f8853a1a57928c124797a5ae24dc4
	                    minikube.k8s.io/name=ha-931571
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_11_04T10_55_05_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 04 Nov 2024 10:55:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-931571-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 04 Nov 2024 10:59:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 04 Nov 2024 10:56:04 +0000   Mon, 04 Nov 2024 10:55:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 04 Nov 2024 10:56:04 +0000   Mon, 04 Nov 2024 10:55:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 04 Nov 2024 10:56:04 +0000   Mon, 04 Nov 2024 10:55:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 04 Nov 2024 10:56:04 +0000   Mon, 04 Nov 2024 10:55:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.57
	  Hostname:    ha-931571-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 b21e133cd17b4b699323cc6d9f47f565
	  System UUID:                b21e133c-d17b-4b69-9323-cc6d9f47f565
	  Boot ID:                    50ec73f3-3253-4df5-83ed-277786faa385
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-lqgb9                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m50s
	  kube-system                 etcd-ha-931571-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m15s
	  kube-system                 kindnet-w2jwt                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m17s
	  kube-system                 kube-apiserver-ha-931571-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 kube-controller-manager-ha-931571-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 kube-proxy-ttq4z                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 kube-scheduler-ha-931571-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 kube-vip-ha-931571-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m13s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m13s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  4m18s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  CIDRAssignmentFailed     4m17s                  cidrAllocator    Node ha-931571-m03 status is now: CIDRAssignmentFailed
	  Normal  NodeHasSufficientMemory  4m17s (x8 over 4m18s)  kubelet          Node ha-931571-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m17s (x8 over 4m18s)  kubelet          Node ha-931571-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m17s (x7 over 4m18s)  kubelet          Node ha-931571-m03 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m13s                  node-controller  Node ha-931571-m03 event: Registered Node ha-931571-m03 in Controller
	  Normal  RegisteredNode           4m13s                  node-controller  Node ha-931571-m03 event: Registered Node ha-931571-m03 in Controller
	  Normal  RegisteredNode           4m9s                   node-controller  Node ha-931571-m03 event: Registered Node ha-931571-m03 in Controller
	
	
	Name:               ha-931571-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-931571-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c03dd974d73f8853a1a57928c124797a5ae24dc4
	                    minikube.k8s.io/name=ha-931571
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_11_04T10_56_07_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 04 Nov 2024 10:56:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-931571-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 04 Nov 2024 10:59:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 04 Nov 2024 10:56:36 +0000   Mon, 04 Nov 2024 10:56:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 04 Nov 2024 10:56:36 +0000   Mon, 04 Nov 2024 10:56:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 04 Nov 2024 10:56:36 +0000   Mon, 04 Nov 2024 10:56:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 04 Nov 2024 10:56:36 +0000   Mon, 04 Nov 2024 10:56:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.237
	  Hostname:    ha-931571-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 851b57db90dc4e65909090eed2536ea8
	  System UUID:                851b57db-90dc-4e65-9090-90eed2536ea8
	  Boot ID:                    be99e848-d7b5-4c3a-990d-5dd7890c841c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-x8ptv       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m14s
	  kube-system                 kube-proxy-s8gg7    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m14s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m8s                   kube-proxy       
	  Normal  CIDRAssignmentFailed     3m14s                  cidrAllocator    Node ha-931571-m04 status is now: CIDRAssignmentFailed
	  Normal  NodeHasSufficientMemory  3m14s (x2 over 3m14s)  kubelet          Node ha-931571-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m14s (x2 over 3m14s)  kubelet          Node ha-931571-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m14s (x2 over 3m14s)  kubelet          Node ha-931571-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m14s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m13s                  node-controller  Node ha-931571-m04 event: Registered Node ha-931571-m04 in Controller
	  Normal  RegisteredNode           3m13s                  node-controller  Node ha-931571-m04 event: Registered Node ha-931571-m04 in Controller
	  Normal  RegisteredNode           3m9s                   node-controller  Node ha-931571-m04 event: Registered Node ha-931571-m04 in Controller
	  Normal  NodeReady                2m54s                  kubelet          Node ha-931571-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov 4 10:52] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.047726] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.036586] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.779631] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.763191] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.537421] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.904587] systemd-fstab-generator[579]: Ignoring "noauto" option for root device
	[  +0.060497] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.062176] systemd-fstab-generator[591]: Ignoring "noauto" option for root device
	[  +0.155966] systemd-fstab-generator[605]: Ignoring "noauto" option for root device
	[  +0.126824] systemd-fstab-generator[617]: Ignoring "noauto" option for root device
	[  +0.243725] systemd-fstab-generator[648]: Ignoring "noauto" option for root device
	[  +3.719760] systemd-fstab-generator[744]: Ignoring "noauto" option for root device
	[  +3.831679] systemd-fstab-generator[874]: Ignoring "noauto" option for root device
	[  +0.057052] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.249250] kauditd_printk_skb: 79 callbacks suppressed
	[  +0.693317] systemd-fstab-generator[1353]: Ignoring "noauto" option for root device
	[Nov 4 10:53] kauditd_printk_skb: 30 callbacks suppressed
	[  +9.046787] kauditd_printk_skb: 41 callbacks suppressed
	[ +27.005860] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [f2d32daf142ba5b73e2f0491ea4ad911e8456739f191faf1cc001827eb91790c] <==
	{"level":"warn","ts":"2024-11-04T10:59:20.186043Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce564ad586a3115","from":"ce564ad586a3115","remote-peer-id":"df641d035a901564","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-04T10:59:20.189464Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce564ad586a3115","from":"ce564ad586a3115","remote-peer-id":"df641d035a901564","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-04T10:59:20.198097Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce564ad586a3115","from":"ce564ad586a3115","remote-peer-id":"df641d035a901564","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-04T10:59:20.204314Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce564ad586a3115","from":"ce564ad586a3115","remote-peer-id":"df641d035a901564","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-04T10:59:20.210317Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce564ad586a3115","from":"ce564ad586a3115","remote-peer-id":"df641d035a901564","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-04T10:59:20.214487Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce564ad586a3115","from":"ce564ad586a3115","remote-peer-id":"df641d035a901564","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-04T10:59:20.222348Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce564ad586a3115","from":"ce564ad586a3115","remote-peer-id":"df641d035a901564","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-04T10:59:20.222430Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce564ad586a3115","from":"ce564ad586a3115","remote-peer-id":"df641d035a901564","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-04T10:59:20.231801Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce564ad586a3115","from":"ce564ad586a3115","remote-peer-id":"df641d035a901564","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-04T10:59:20.238622Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce564ad586a3115","from":"ce564ad586a3115","remote-peer-id":"df641d035a901564","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-04T10:59:20.245941Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce564ad586a3115","from":"ce564ad586a3115","remote-peer-id":"df641d035a901564","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-04T10:59:20.252225Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce564ad586a3115","from":"ce564ad586a3115","remote-peer-id":"df641d035a901564","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-04T10:59:20.255840Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce564ad586a3115","from":"ce564ad586a3115","remote-peer-id":"df641d035a901564","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-04T10:59:20.266029Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"df641d035a901564","rtt":"753.128µs","error":"dial tcp 192.168.39.245:2380: i/o timeout"}
	{"level":"warn","ts":"2024-11-04T10:59:20.266085Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"df641d035a901564","rtt":"8.421984ms","error":"dial tcp 192.168.39.245:2380: i/o timeout"}
	{"level":"warn","ts":"2024-11-04T10:59:20.297258Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce564ad586a3115","from":"ce564ad586a3115","remote-peer-id":"df641d035a901564","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-04T10:59:20.302986Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce564ad586a3115","from":"ce564ad586a3115","remote-peer-id":"df641d035a901564","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-04T10:59:20.308790Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce564ad586a3115","from":"ce564ad586a3115","remote-peer-id":"df641d035a901564","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-04T10:59:20.312511Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce564ad586a3115","from":"ce564ad586a3115","remote-peer-id":"df641d035a901564","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-04T10:59:20.315280Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce564ad586a3115","from":"ce564ad586a3115","remote-peer-id":"df641d035a901564","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-04T10:59:20.318968Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce564ad586a3115","from":"ce564ad586a3115","remote-peer-id":"df641d035a901564","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-04T10:59:20.322574Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce564ad586a3115","from":"ce564ad586a3115","remote-peer-id":"df641d035a901564","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-04T10:59:20.324362Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce564ad586a3115","from":"ce564ad586a3115","remote-peer-id":"df641d035a901564","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-04T10:59:20.329620Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce564ad586a3115","from":"ce564ad586a3115","remote-peer-id":"df641d035a901564","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-04T10:59:20.370448Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce564ad586a3115","from":"ce564ad586a3115","remote-peer-id":"df641d035a901564","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 10:59:20 up 6 min,  0 users,  load average: 0.22, 0.31, 0.15
	Linux ha-931571 5.10.207 #1 SMP Wed Oct 30 13:38:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [4401315f385bf2a6e5d875c655d4d61ff68d76ccf428956355ffd500a9ce2bc0] <==
	I1104 10:58:47.933532       1 main.go:324] Node ha-931571-m03 has CIDR [10.244.2.0/24] 
	I1104 10:58:57.931888       1 main.go:297] Handling node with IPs: map[192.168.39.67:{}]
	I1104 10:58:57.931969       1 main.go:301] handling current node
	I1104 10:58:57.931997       1 main.go:297] Handling node with IPs: map[192.168.39.245:{}]
	I1104 10:58:57.932015       1 main.go:324] Node ha-931571-m02 has CIDR [10.244.1.0/24] 
	I1104 10:58:57.932703       1 main.go:297] Handling node with IPs: map[192.168.39.57:{}]
	I1104 10:58:57.932784       1 main.go:324] Node ha-931571-m03 has CIDR [10.244.2.0/24] 
	I1104 10:58:57.933003       1 main.go:297] Handling node with IPs: map[192.168.39.237:{}]
	I1104 10:58:57.933029       1 main.go:324] Node ha-931571-m04 has CIDR [10.244.3.0/24] 
	I1104 10:59:07.925895       1 main.go:297] Handling node with IPs: map[192.168.39.57:{}]
	I1104 10:59:07.925959       1 main.go:324] Node ha-931571-m03 has CIDR [10.244.2.0/24] 
	I1104 10:59:07.926150       1 main.go:297] Handling node with IPs: map[192.168.39.237:{}]
	I1104 10:59:07.926172       1 main.go:324] Node ha-931571-m04 has CIDR [10.244.3.0/24] 
	I1104 10:59:07.926258       1 main.go:297] Handling node with IPs: map[192.168.39.67:{}]
	I1104 10:59:07.926276       1 main.go:301] handling current node
	I1104 10:59:07.926287       1 main.go:297] Handling node with IPs: map[192.168.39.245:{}]
	I1104 10:59:07.926292       1 main.go:324] Node ha-931571-m02 has CIDR [10.244.1.0/24] 
	I1104 10:59:17.932116       1 main.go:297] Handling node with IPs: map[192.168.39.67:{}]
	I1104 10:59:17.932223       1 main.go:301] handling current node
	I1104 10:59:17.932253       1 main.go:297] Handling node with IPs: map[192.168.39.245:{}]
	I1104 10:59:17.932271       1 main.go:324] Node ha-931571-m02 has CIDR [10.244.1.0/24] 
	I1104 10:59:17.932486       1 main.go:297] Handling node with IPs: map[192.168.39.57:{}]
	I1104 10:59:17.932519       1 main.go:324] Node ha-931571-m03 has CIDR [10.244.2.0/24] 
	I1104 10:59:17.932614       1 main.go:297] Handling node with IPs: map[192.168.39.237:{}]
	I1104 10:59:17.932635       1 main.go:324] Node ha-931571-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [82e4be064be10644428d59bf1bc4467a8666cf78ec7b830a51e614de7c4b3150] <==
	I1104 10:52:57.529011       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1104 10:52:57.636067       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1104 10:52:58.624832       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1104 10:52:58.639937       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1104 10:52:58.805171       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1104 10:53:03.087294       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I1104 10:53:03.287753       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E1104 10:53:50.685836       1 wrap.go:53] "Timeout or abort while handling" logger="UnhandledError" method="POST" URI="/api/v1/namespaces/kube-system/events" auditID="2a13690c-2b7c-4af7-94a1-2fcd1065da04"
	E1104 10:53:50.685933       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="5.903µs" method="POST" path="/api/v1/namespaces/kube-system/events" result=null
	E1104 10:55:34.753652       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:57932: use of closed network connection
	E1104 10:55:34.925834       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:57948: use of closed network connection
	E1104 10:55:35.093653       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:57972: use of closed network connection
	E1104 10:55:35.274875       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:57992: use of closed network connection
	E1104 10:55:35.447438       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58008: use of closed network connection
	E1104 10:55:35.612882       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58018: use of closed network connection
	E1104 10:55:35.778454       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58044: use of closed network connection
	E1104 10:55:35.949313       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58070: use of closed network connection
	E1104 10:55:36.116046       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58086: use of closed network connection
	E1104 10:55:36.394559       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58120: use of closed network connection
	E1104 10:55:36.560067       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58130: use of closed network connection
	E1104 10:55:36.741903       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58146: use of closed network connection
	E1104 10:55:36.920290       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58160: use of closed network connection
	E1104 10:55:37.097281       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58172: use of closed network connection
	E1104 10:55:37.276505       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58204: use of closed network connection
	W1104 10:57:07.528371       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.57 192.168.39.67]
	
	
	==> kube-controller-manager [4572c8bcb28cdf71917ee1df07e150610c3e183aaa1243eb84ab3c083f31f7bc] <==
	I1104 10:56:02.327738       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-931571"
	I1104 10:56:04.592818       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-931571-m03"
	I1104 10:56:06.541409       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-931571-m04\" does not exist"
	I1104 10:56:06.575948       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-931571-m04" podCIDRs=["10.244.3.0/24"]
	I1104 10:56:06.576008       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-931571-m04"
	I1104 10:56:06.576040       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-931571-m04"
	I1104 10:56:06.730053       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-931571-m04"
	I1104 10:56:07.090693       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-931571-m04"
	I1104 10:56:07.683331       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-931571-m04"
	I1104 10:56:07.724925       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-931571-m04"
	I1104 10:56:11.198433       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-931571-m04"
	I1104 10:56:11.234463       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-931571-m04"
	I1104 10:56:16.862581       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-931571-m04"
	I1104 10:56:26.184815       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-931571-m04"
	I1104 10:56:26.184900       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-931571-m04"
	I1104 10:56:26.200074       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-931571-m04"
	I1104 10:56:26.386370       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-931571-m04"
	I1104 10:56:36.943150       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-931571-m04"
	I1104 10:57:21.411213       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-931571-m04"
	I1104 10:57:21.411471       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-931571-m02"
	I1104 10:57:21.433152       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-931571-m02"
	I1104 10:57:21.545878       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="11.838445ms"
	I1104 10:57:21.546123       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="64.292µs"
	I1104 10:57:22.718407       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-931571-m02"
	I1104 10:57:26.623482       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-931571-m02"
	
	
	==> kube-proxy [6e592fe17c5f71c11cf25a871efff88360e0232b8ea66e460109b68f40581ba8] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1104 10:53:04.203851       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1104 10:53:04.229581       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.67"]
	E1104 10:53:04.229781       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1104 10:53:04.282192       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1104 10:53:04.282221       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1104 10:53:04.282244       1 server_linux.go:169] "Using iptables Proxier"
	I1104 10:53:04.285593       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1104 10:53:04.285958       1 server.go:483] "Version info" version="v1.31.2"
	I1104 10:53:04.285985       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1104 10:53:04.288139       1 config.go:199] "Starting service config controller"
	I1104 10:53:04.288173       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1104 10:53:04.290392       1 config.go:105] "Starting endpoint slice config controller"
	I1104 10:53:04.290557       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1104 10:53:04.291547       1 config.go:328] "Starting node config controller"
	I1104 10:53:04.292932       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1104 10:53:04.389214       1 shared_informer.go:320] Caches are synced for service config
	I1104 10:53:04.391802       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1104 10:53:04.393273       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [e50ab0290e7c2f22e6df511793236d28a6e0f3e1d5dbdcdd2e3447e4c26c2e6c] <==
	W1104 10:52:57.001881       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1104 10:52:57.001927       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1104 10:52:57.141748       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1104 10:52:57.141796       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1104 10:52:57.201248       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1104 10:52:57.201310       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1104 10:52:58.585064       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1104 10:55:30.513828       1 cache.go:503] "Pod was added to a different node than it was assumed" podKey="641f6861-b035-49a8-832b-70b7a069afb3" pod="default/busybox-7dff88458-lqgb9" assumedNode="ha-931571-m03" currentNode="ha-931571-m02"
	E1104 10:55:30.530615       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-lqgb9\": pod busybox-7dff88458-lqgb9 is already assigned to node \"ha-931571-m03\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-lqgb9" node="ha-931571-m02"
	E1104 10:55:30.530773       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 641f6861-b035-49a8-832b-70b7a069afb3(default/busybox-7dff88458-lqgb9) was assumed on ha-931571-m02 but assigned to ha-931571-m03" pod="default/busybox-7dff88458-lqgb9"
	E1104 10:55:30.530821       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-lqgb9\": pod busybox-7dff88458-lqgb9 is already assigned to node \"ha-931571-m03\"" pod="default/busybox-7dff88458-lqgb9"
	I1104 10:55:30.530854       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-lqgb9" node="ha-931571-m03"
	E1104 10:55:30.571464       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-nslmz\": pod busybox-7dff88458-nslmz is already assigned to node \"ha-931571\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-nslmz" node="ha-931571"
	E1104 10:55:30.572521       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 68017266-8187-488d-ab36-2a5af294fa2e(default/busybox-7dff88458-nslmz) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-nslmz"
	E1104 10:55:30.572641       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-nslmz\": pod busybox-7dff88458-nslmz is already assigned to node \"ha-931571\"" pod="default/busybox-7dff88458-nslmz"
	I1104 10:55:30.572740       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-nslmz" node="ha-931571"
	E1104 10:55:30.572411       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-w9wmp\": pod busybox-7dff88458-w9wmp is already assigned to node \"ha-931571-m02\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-w9wmp" node="ha-931571-m02"
	E1104 10:55:30.573133       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 84b6e653-b685-4c00-ac2f-d650738a613b(default/busybox-7dff88458-w9wmp) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-w9wmp"
	E1104 10:55:30.573206       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-w9wmp\": pod busybox-7dff88458-w9wmp is already assigned to node \"ha-931571-m02\"" pod="default/busybox-7dff88458-w9wmp"
	I1104 10:55:30.573228       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-w9wmp" node="ha-931571-m02"
	E1104 10:55:30.792999       1 schedule_one.go:1106] "Error updating pod" err="pods \"busybox-7dff88458-5nt9m\" not found" pod="default/busybox-7dff88458-5nt9m"
	E1104 10:56:06.602004       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-s8gg7\": pod kube-proxy-s8gg7 is already assigned to node \"ha-931571-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-s8gg7" node="ha-931571-m04"
	E1104 10:56:06.602261       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod c786786d-b4b5-4479-b5df-24cc8f346e86(kube-system/kube-proxy-s8gg7) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-s8gg7"
	E1104 10:56:06.602358       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-s8gg7\": pod kube-proxy-s8gg7 is already assigned to node \"ha-931571-m04\"" pod="kube-system/kube-proxy-s8gg7"
	I1104 10:56:06.602540       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-s8gg7" node="ha-931571-m04"
	
	
	==> kubelet <==
	Nov 04 10:58:30 ha-931571 kubelet[1360]: I1104 10:58:30.785581    1360 scope.go:117] "RemoveContainer" containerID="9b0c4137e04d5572b1e0277210028adf86df482f6a6a6a6a724bf176e285ca2f"
	Nov 04 10:58:30 ha-931571 kubelet[1360]: E1104 10:58:30.785757    1360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-vip\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-vip pod=kube-vip-ha-931571_kube-system(d7bfae2f58ae7de463dba4b274c633ef)\"" pod="kube-system/kube-vip-ha-931571" podUID="d7bfae2f58ae7de463dba4b274c633ef"
	Nov 04 10:58:38 ha-931571 kubelet[1360]: E1104 10:58:38.871501    1360 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730717918871014143,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156098,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 04 10:58:38 ha-931571 kubelet[1360]: E1104 10:58:38.871524    1360 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730717918871014143,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156098,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 04 10:58:42 ha-931571 kubelet[1360]: I1104 10:58:42.786581    1360 scope.go:117] "RemoveContainer" containerID="9b0c4137e04d5572b1e0277210028adf86df482f6a6a6a6a724bf176e285ca2f"
	Nov 04 10:58:42 ha-931571 kubelet[1360]: E1104 10:58:42.791316    1360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-vip\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-vip pod=kube-vip-ha-931571_kube-system(d7bfae2f58ae7de463dba4b274c633ef)\"" pod="kube-system/kube-vip-ha-931571" podUID="d7bfae2f58ae7de463dba4b274c633ef"
	Nov 04 10:58:48 ha-931571 kubelet[1360]: E1104 10:58:48.872774    1360 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730717928872476228,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156098,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 04 10:58:48 ha-931571 kubelet[1360]: E1104 10:58:48.872859    1360 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730717928872476228,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156098,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 04 10:58:53 ha-931571 kubelet[1360]: I1104 10:58:53.785072    1360 scope.go:117] "RemoveContainer" containerID="9b0c4137e04d5572b1e0277210028adf86df482f6a6a6a6a724bf176e285ca2f"
	Nov 04 10:58:58 ha-931571 kubelet[1360]: E1104 10:58:58.819237    1360 iptables.go:577] "Could not set up iptables canary" err=<
	Nov 04 10:58:58 ha-931571 kubelet[1360]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Nov 04 10:58:58 ha-931571 kubelet[1360]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 04 10:58:58 ha-931571 kubelet[1360]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 04 10:58:58 ha-931571 kubelet[1360]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Nov 04 10:58:58 ha-931571 kubelet[1360]: E1104 10:58:58.874071    1360 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730717938873867782,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156098,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 04 10:58:58 ha-931571 kubelet[1360]: E1104 10:58:58.874093    1360 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730717938873867782,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156098,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 04 10:59:00 ha-931571 kubelet[1360]: I1104 10:59:00.144622    1360 scope.go:117] "RemoveContainer" containerID="9b0c4137e04d5572b1e0277210028adf86df482f6a6a6a6a724bf176e285ca2f"
	Nov 04 10:59:00 ha-931571 kubelet[1360]: I1104 10:59:00.145089    1360 scope.go:117] "RemoveContainer" containerID="801830521b8c68ec780508f94b0f3d8c52a6c3d5458328e719c2ce3178c47cc3"
	Nov 04 10:59:00 ha-931571 kubelet[1360]: E1104 10:59:00.145270    1360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-vip\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-vip pod=kube-vip-ha-931571_kube-system(d7bfae2f58ae7de463dba4b274c633ef)\"" pod="kube-system/kube-vip-ha-931571" podUID="d7bfae2f58ae7de463dba4b274c633ef"
	Nov 04 10:59:08 ha-931571 kubelet[1360]: E1104 10:59:08.878363    1360 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730717948875635760,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156098,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 04 10:59:08 ha-931571 kubelet[1360]: E1104 10:59:08.878627    1360 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730717948875635760,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156098,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 04 10:59:14 ha-931571 kubelet[1360]: I1104 10:59:14.786026    1360 scope.go:117] "RemoveContainer" containerID="801830521b8c68ec780508f94b0f3d8c52a6c3d5458328e719c2ce3178c47cc3"
	Nov 04 10:59:14 ha-931571 kubelet[1360]: E1104 10:59:14.786168    1360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-vip\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-vip pod=kube-vip-ha-931571_kube-system(d7bfae2f58ae7de463dba4b274c633ef)\"" pod="kube-system/kube-vip-ha-931571" podUID="d7bfae2f58ae7de463dba4b274c633ef"
	Nov 04 10:59:18 ha-931571 kubelet[1360]: E1104 10:59:18.881691    1360 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730717958881254516,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156098,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 04 10:59:18 ha-931571 kubelet[1360]: E1104 10:59:18.881729    1360 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730717958881254516,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156098,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-931571 -n ha-931571
helpers_test.go:261: (dbg) Run:  kubectl --context ha-931571 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (5.54s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (6.49s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-931571 node start m02 -v=7 --alsologtostderr
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-931571 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Done: out/minikube-linux-amd64 -p ha-931571 status -v=7 --alsologtostderr: (4.254014166s)
ha_test.go:437: status says not all three control-plane nodes are present: args "out/minikube-linux-amd64 -p ha-931571 status -v=7 --alsologtostderr": 
ha_test.go:440: status says not all four hosts are running: args "out/minikube-linux-amd64 -p ha-931571 status -v=7 --alsologtostderr": 
ha_test.go:443: status says not all four kubelets are running: args "out/minikube-linux-amd64 -p ha-931571 status -v=7 --alsologtostderr": 
ha_test.go:446: status says not all three apiservers are running: args "out/minikube-linux-amd64 -p ha-931571 status -v=7 --alsologtostderr": 
ha_test.go:450: (dbg) Run:  kubectl get nodes
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-931571 -n ha-931571
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-931571 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-931571 logs -n 25: (1.270136579s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-931571 ssh -n                                                                 | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | ha-931571-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-931571 cp ha-931571-m03:/home/docker/cp-test.txt                              | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | ha-931571:/home/docker/cp-test_ha-931571-m03_ha-931571.txt                       |           |         |         |                     |                     |
	| ssh     | ha-931571 ssh -n                                                                 | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | ha-931571-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-931571 ssh -n ha-931571 sudo cat                                              | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | /home/docker/cp-test_ha-931571-m03_ha-931571.txt                                 |           |         |         |                     |                     |
	| cp      | ha-931571 cp ha-931571-m03:/home/docker/cp-test.txt                              | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | ha-931571-m02:/home/docker/cp-test_ha-931571-m03_ha-931571-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-931571 ssh -n                                                                 | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | ha-931571-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-931571 ssh -n ha-931571-m02 sudo cat                                          | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | /home/docker/cp-test_ha-931571-m03_ha-931571-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-931571 cp ha-931571-m03:/home/docker/cp-test.txt                              | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | ha-931571-m04:/home/docker/cp-test_ha-931571-m03_ha-931571-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-931571 ssh -n                                                                 | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | ha-931571-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-931571 ssh -n ha-931571-m04 sudo cat                                          | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | /home/docker/cp-test_ha-931571-m03_ha-931571-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-931571 cp testdata/cp-test.txt                                                | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | ha-931571-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-931571 ssh -n                                                                 | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | ha-931571-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-931571 cp ha-931571-m04:/home/docker/cp-test.txt                              | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile2369318263/001/cp-test_ha-931571-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-931571 ssh -n                                                                 | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | ha-931571-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-931571 cp ha-931571-m04:/home/docker/cp-test.txt                              | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | ha-931571:/home/docker/cp-test_ha-931571-m04_ha-931571.txt                       |           |         |         |                     |                     |
	| ssh     | ha-931571 ssh -n                                                                 | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | ha-931571-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-931571 ssh -n ha-931571 sudo cat                                              | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | /home/docker/cp-test_ha-931571-m04_ha-931571.txt                                 |           |         |         |                     |                     |
	| cp      | ha-931571 cp ha-931571-m04:/home/docker/cp-test.txt                              | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | ha-931571-m02:/home/docker/cp-test_ha-931571-m04_ha-931571-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-931571 ssh -n                                                                 | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | ha-931571-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-931571 ssh -n ha-931571-m02 sudo cat                                          | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | /home/docker/cp-test_ha-931571-m04_ha-931571-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-931571 cp ha-931571-m04:/home/docker/cp-test.txt                              | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | ha-931571-m03:/home/docker/cp-test_ha-931571-m04_ha-931571-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-931571 ssh -n                                                                 | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | ha-931571-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-931571 ssh -n ha-931571-m03 sudo cat                                          | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | /home/docker/cp-test_ha-931571-m04_ha-931571-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-931571 node stop m02 -v=7                                                     | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-931571 node start m02 -v=7                                                    | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:59 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/11/04 10:52:21
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1104 10:52:21.364935   37715 out.go:345] Setting OutFile to fd 1 ...
	I1104 10:52:21.365025   37715 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1104 10:52:21.365032   37715 out.go:358] Setting ErrFile to fd 2...
	I1104 10:52:21.365036   37715 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1104 10:52:21.365213   37715 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19906-19898/.minikube/bin
	I1104 10:52:21.365784   37715 out.go:352] Setting JSON to false
	I1104 10:52:21.366601   37715 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":5692,"bootTime":1730711849,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1104 10:52:21.366686   37715 start.go:139] virtualization: kvm guest
	I1104 10:52:21.368805   37715 out.go:177] * [ha-931571] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1104 10:52:21.370048   37715 out.go:177]   - MINIKUBE_LOCATION=19906
	I1104 10:52:21.370105   37715 notify.go:220] Checking for updates...
	I1104 10:52:21.372521   37715 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1104 10:52:21.373968   37715 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19906-19898/kubeconfig
	I1104 10:52:21.375378   37715 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19906-19898/.minikube
	I1104 10:52:21.376837   37715 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1104 10:52:21.378230   37715 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1104 10:52:21.379614   37715 driver.go:394] Setting default libvirt URI to qemu:///system
	I1104 10:52:21.414672   37715 out.go:177] * Using the kvm2 driver based on user configuration
	I1104 10:52:21.416078   37715 start.go:297] selected driver: kvm2
	I1104 10:52:21.416092   37715 start.go:901] validating driver "kvm2" against <nil>
	I1104 10:52:21.416103   37715 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1104 10:52:21.416883   37715 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1104 10:52:21.416970   37715 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19906-19898/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1104 10:52:21.432886   37715 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1104 10:52:21.432946   37715 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1104 10:52:21.433171   37715 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1104 10:52:21.433208   37715 cni.go:84] Creating CNI manager for ""
	I1104 10:52:21.433267   37715 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1104 10:52:21.433278   37715 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1104 10:52:21.433324   37715 start.go:340] cluster config:
	{Name:ha-931571 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-931571 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1104 10:52:21.433412   37715 iso.go:125] acquiring lock: {Name:mk00c7dd6e02d348844f079f7574057e15cae010 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1104 10:52:21.435216   37715 out.go:177] * Starting "ha-931571" primary control-plane node in "ha-931571" cluster
	I1104 10:52:21.436574   37715 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1104 10:52:21.436609   37715 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1104 10:52:21.436618   37715 cache.go:56] Caching tarball of preloaded images
	I1104 10:52:21.436693   37715 preload.go:172] Found /home/jenkins/minikube-integration/19906-19898/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1104 10:52:21.436705   37715 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1104 10:52:21.436992   37715 profile.go:143] Saving config to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/config.json ...
	I1104 10:52:21.437018   37715 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/config.json: {Name:mke118782614f4d89fa0f6507dfdc64c536a0e87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 10:52:21.437163   37715 start.go:360] acquireMachinesLock for ha-931571: {Name:mka4ed965095cfb58c9b0f5916edff39b4236a2f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1104 10:52:21.437221   37715 start.go:364] duration metric: took 42.218µs to acquireMachinesLock for "ha-931571"
	I1104 10:52:21.437267   37715 start.go:93] Provisioning new machine with config: &{Name:ha-931571 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-931571 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1104 10:52:21.437337   37715 start.go:125] createHost starting for "" (driver="kvm2")
	I1104 10:52:21.438936   37715 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1104 10:52:21.439063   37715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 10:52:21.439107   37715 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 10:52:21.453699   37715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36911
	I1104 10:52:21.454132   37715 main.go:141] libmachine: () Calling .GetVersion
	I1104 10:52:21.454653   37715 main.go:141] libmachine: Using API Version  1
	I1104 10:52:21.454675   37715 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 10:52:21.455002   37715 main.go:141] libmachine: () Calling .GetMachineName
	I1104 10:52:21.455150   37715 main.go:141] libmachine: (ha-931571) Calling .GetMachineName
	I1104 10:52:21.455275   37715 main.go:141] libmachine: (ha-931571) Calling .DriverName
	I1104 10:52:21.455438   37715 start.go:159] libmachine.API.Create for "ha-931571" (driver="kvm2")
	I1104 10:52:21.455470   37715 client.go:168] LocalClient.Create starting
	I1104 10:52:21.455500   37715 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem
	I1104 10:52:21.455528   37715 main.go:141] libmachine: Decoding PEM data...
	I1104 10:52:21.455541   37715 main.go:141] libmachine: Parsing certificate...
	I1104 10:52:21.455581   37715 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem
	I1104 10:52:21.455599   37715 main.go:141] libmachine: Decoding PEM data...
	I1104 10:52:21.455610   37715 main.go:141] libmachine: Parsing certificate...
	I1104 10:52:21.455624   37715 main.go:141] libmachine: Running pre-create checks...
	I1104 10:52:21.455633   37715 main.go:141] libmachine: (ha-931571) Calling .PreCreateCheck
	I1104 10:52:21.455911   37715 main.go:141] libmachine: (ha-931571) Calling .GetConfigRaw
	I1104 10:52:21.456291   37715 main.go:141] libmachine: Creating machine...
	I1104 10:52:21.456304   37715 main.go:141] libmachine: (ha-931571) Calling .Create
	I1104 10:52:21.456440   37715 main.go:141] libmachine: (ha-931571) Creating KVM machine...
	I1104 10:52:21.457741   37715 main.go:141] libmachine: (ha-931571) DBG | found existing default KVM network
	I1104 10:52:21.458392   37715 main.go:141] libmachine: (ha-931571) DBG | I1104 10:52:21.458262   37738 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0001231e0}
	I1104 10:52:21.458442   37715 main.go:141] libmachine: (ha-931571) DBG | created network xml: 
	I1104 10:52:21.458465   37715 main.go:141] libmachine: (ha-931571) DBG | <network>
	I1104 10:52:21.458474   37715 main.go:141] libmachine: (ha-931571) DBG |   <name>mk-ha-931571</name>
	I1104 10:52:21.458487   37715 main.go:141] libmachine: (ha-931571) DBG |   <dns enable='no'/>
	I1104 10:52:21.458498   37715 main.go:141] libmachine: (ha-931571) DBG |   
	I1104 10:52:21.458510   37715 main.go:141] libmachine: (ha-931571) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1104 10:52:21.458517   37715 main.go:141] libmachine: (ha-931571) DBG |     <dhcp>
	I1104 10:52:21.458526   37715 main.go:141] libmachine: (ha-931571) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1104 10:52:21.458536   37715 main.go:141] libmachine: (ha-931571) DBG |     </dhcp>
	I1104 10:52:21.458547   37715 main.go:141] libmachine: (ha-931571) DBG |   </ip>
	I1104 10:52:21.458556   37715 main.go:141] libmachine: (ha-931571) DBG |   
	I1104 10:52:21.458566   37715 main.go:141] libmachine: (ha-931571) DBG | </network>
	I1104 10:52:21.458577   37715 main.go:141] libmachine: (ha-931571) DBG | 
	I1104 10:52:21.463306   37715 main.go:141] libmachine: (ha-931571) DBG | trying to create private KVM network mk-ha-931571 192.168.39.0/24...
	I1104 10:52:21.529269   37715 main.go:141] libmachine: (ha-931571) DBG | private KVM network mk-ha-931571 192.168.39.0/24 created
	I1104 10:52:21.529311   37715 main.go:141] libmachine: (ha-931571) DBG | I1104 10:52:21.529188   37738 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19906-19898/.minikube
	I1104 10:52:21.529329   37715 main.go:141] libmachine: (ha-931571) Setting up store path in /home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571 ...
	I1104 10:52:21.529347   37715 main.go:141] libmachine: (ha-931571) Building disk image from file:///home/jenkins/minikube-integration/19906-19898/.minikube/cache/iso/amd64/minikube-v1.34.0-1730282777-19883-amd64.iso
	I1104 10:52:21.529364   37715 main.go:141] libmachine: (ha-931571) Downloading /home/jenkins/minikube-integration/19906-19898/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19906-19898/.minikube/cache/iso/amd64/minikube-v1.34.0-1730282777-19883-amd64.iso...
	I1104 10:52:21.775859   37715 main.go:141] libmachine: (ha-931571) DBG | I1104 10:52:21.775727   37738 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571/id_rsa...
	I1104 10:52:21.860057   37715 main.go:141] libmachine: (ha-931571) DBG | I1104 10:52:21.859924   37738 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571/ha-931571.rawdisk...
	I1104 10:52:21.860086   37715 main.go:141] libmachine: (ha-931571) DBG | Writing magic tar header
	I1104 10:52:21.860102   37715 main.go:141] libmachine: (ha-931571) DBG | Writing SSH key tar header
	I1104 10:52:21.860115   37715 main.go:141] libmachine: (ha-931571) DBG | I1104 10:52:21.860035   37738 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571 ...
	I1104 10:52:21.860131   37715 main.go:141] libmachine: (ha-931571) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571
	I1104 10:52:21.860191   37715 main.go:141] libmachine: (ha-931571) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19906-19898/.minikube/machines
	I1104 10:52:21.860213   37715 main.go:141] libmachine: (ha-931571) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19906-19898/.minikube
	I1104 10:52:21.860225   37715 main.go:141] libmachine: (ha-931571) Setting executable bit set on /home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571 (perms=drwx------)
	I1104 10:52:21.860235   37715 main.go:141] libmachine: (ha-931571) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19906-19898
	I1104 10:52:21.860254   37715 main.go:141] libmachine: (ha-931571) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1104 10:52:21.860267   37715 main.go:141] libmachine: (ha-931571) Setting executable bit set on /home/jenkins/minikube-integration/19906-19898/.minikube/machines (perms=drwxr-xr-x)
	I1104 10:52:21.860276   37715 main.go:141] libmachine: (ha-931571) DBG | Checking permissions on dir: /home/jenkins
	I1104 10:52:21.860287   37715 main.go:141] libmachine: (ha-931571) DBG | Checking permissions on dir: /home
	I1104 10:52:21.860298   37715 main.go:141] libmachine: (ha-931571) DBG | Skipping /home - not owner
	I1104 10:52:21.860370   37715 main.go:141] libmachine: (ha-931571) Setting executable bit set on /home/jenkins/minikube-integration/19906-19898/.minikube (perms=drwxr-xr-x)
	I1104 10:52:21.860424   37715 main.go:141] libmachine: (ha-931571) Setting executable bit set on /home/jenkins/minikube-integration/19906-19898 (perms=drwxrwxr-x)
	I1104 10:52:21.860440   37715 main.go:141] libmachine: (ha-931571) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1104 10:52:21.860450   37715 main.go:141] libmachine: (ha-931571) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1104 10:52:21.860468   37715 main.go:141] libmachine: (ha-931571) Creating domain...
	I1104 10:52:21.861289   37715 main.go:141] libmachine: (ha-931571) define libvirt domain using xml: 
	I1104 10:52:21.861306   37715 main.go:141] libmachine: (ha-931571) <domain type='kvm'>
	I1104 10:52:21.861313   37715 main.go:141] libmachine: (ha-931571)   <name>ha-931571</name>
	I1104 10:52:21.861320   37715 main.go:141] libmachine: (ha-931571)   <memory unit='MiB'>2200</memory>
	I1104 10:52:21.861328   37715 main.go:141] libmachine: (ha-931571)   <vcpu>2</vcpu>
	I1104 10:52:21.861340   37715 main.go:141] libmachine: (ha-931571)   <features>
	I1104 10:52:21.861356   37715 main.go:141] libmachine: (ha-931571)     <acpi/>
	I1104 10:52:21.861372   37715 main.go:141] libmachine: (ha-931571)     <apic/>
	I1104 10:52:21.861380   37715 main.go:141] libmachine: (ha-931571)     <pae/>
	I1104 10:52:21.861396   37715 main.go:141] libmachine: (ha-931571)     
	I1104 10:52:21.861404   37715 main.go:141] libmachine: (ha-931571)   </features>
	I1104 10:52:21.861416   37715 main.go:141] libmachine: (ha-931571)   <cpu mode='host-passthrough'>
	I1104 10:52:21.861423   37715 main.go:141] libmachine: (ha-931571)   
	I1104 10:52:21.861426   37715 main.go:141] libmachine: (ha-931571)   </cpu>
	I1104 10:52:21.861433   37715 main.go:141] libmachine: (ha-931571)   <os>
	I1104 10:52:21.861437   37715 main.go:141] libmachine: (ha-931571)     <type>hvm</type>
	I1104 10:52:21.861444   37715 main.go:141] libmachine: (ha-931571)     <boot dev='cdrom'/>
	I1104 10:52:21.861448   37715 main.go:141] libmachine: (ha-931571)     <boot dev='hd'/>
	I1104 10:52:21.861452   37715 main.go:141] libmachine: (ha-931571)     <bootmenu enable='no'/>
	I1104 10:52:21.861458   37715 main.go:141] libmachine: (ha-931571)   </os>
	I1104 10:52:21.861462   37715 main.go:141] libmachine: (ha-931571)   <devices>
	I1104 10:52:21.861469   37715 main.go:141] libmachine: (ha-931571)     <disk type='file' device='cdrom'>
	I1104 10:52:21.861476   37715 main.go:141] libmachine: (ha-931571)       <source file='/home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571/boot2docker.iso'/>
	I1104 10:52:21.861488   37715 main.go:141] libmachine: (ha-931571)       <target dev='hdc' bus='scsi'/>
	I1104 10:52:21.861492   37715 main.go:141] libmachine: (ha-931571)       <readonly/>
	I1104 10:52:21.861495   37715 main.go:141] libmachine: (ha-931571)     </disk>
	I1104 10:52:21.861500   37715 main.go:141] libmachine: (ha-931571)     <disk type='file' device='disk'>
	I1104 10:52:21.861506   37715 main.go:141] libmachine: (ha-931571)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1104 10:52:21.861513   37715 main.go:141] libmachine: (ha-931571)       <source file='/home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571/ha-931571.rawdisk'/>
	I1104 10:52:21.861520   37715 main.go:141] libmachine: (ha-931571)       <target dev='hda' bus='virtio'/>
	I1104 10:52:21.861524   37715 main.go:141] libmachine: (ha-931571)     </disk>
	I1104 10:52:21.861533   37715 main.go:141] libmachine: (ha-931571)     <interface type='network'>
	I1104 10:52:21.861538   37715 main.go:141] libmachine: (ha-931571)       <source network='mk-ha-931571'/>
	I1104 10:52:21.861547   37715 main.go:141] libmachine: (ha-931571)       <model type='virtio'/>
	I1104 10:52:21.861557   37715 main.go:141] libmachine: (ha-931571)     </interface>
	I1104 10:52:21.861566   37715 main.go:141] libmachine: (ha-931571)     <interface type='network'>
	I1104 10:52:21.861571   37715 main.go:141] libmachine: (ha-931571)       <source network='default'/>
	I1104 10:52:21.861580   37715 main.go:141] libmachine: (ha-931571)       <model type='virtio'/>
	I1104 10:52:21.861584   37715 main.go:141] libmachine: (ha-931571)     </interface>
	I1104 10:52:21.861591   37715 main.go:141] libmachine: (ha-931571)     <serial type='pty'>
	I1104 10:52:21.861645   37715 main.go:141] libmachine: (ha-931571)       <target port='0'/>
	I1104 10:52:21.861685   37715 main.go:141] libmachine: (ha-931571)     </serial>
	I1104 10:52:21.861703   37715 main.go:141] libmachine: (ha-931571)     <console type='pty'>
	I1104 10:52:21.861714   37715 main.go:141] libmachine: (ha-931571)       <target type='serial' port='0'/>
	I1104 10:52:21.861735   37715 main.go:141] libmachine: (ha-931571)     </console>
	I1104 10:52:21.861744   37715 main.go:141] libmachine: (ha-931571)     <rng model='virtio'>
	I1104 10:52:21.861753   37715 main.go:141] libmachine: (ha-931571)       <backend model='random'>/dev/random</backend>
	I1104 10:52:21.861765   37715 main.go:141] libmachine: (ha-931571)     </rng>
	I1104 10:52:21.861773   37715 main.go:141] libmachine: (ha-931571)     
	I1104 10:52:21.861783   37715 main.go:141] libmachine: (ha-931571)     
	I1104 10:52:21.861791   37715 main.go:141] libmachine: (ha-931571)   </devices>
	I1104 10:52:21.861799   37715 main.go:141] libmachine: (ha-931571) </domain>
	I1104 10:52:21.861809   37715 main.go:141] libmachine: (ha-931571) 
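Note on the domain definition above: this is the complete libvirt XML the kvm2 driver submits for the control-plane VM, a 2-vCPU, 2200 MiB guest that boots the boot2docker ISO from a SCSI CD-ROM, uses ha-931571.rawdisk as a virtio disk, and attaches two virtio NICs (the private mk-ha-931571 network plus libvirt's default network). The driver talks to the libvirt API directly; the virsh commands below are only a hand-run equivalent for reproducing or debugging this step, with domain.xml standing in for the XML printed above (a hypothetical file, not something the test writes).

    # illustrative only; domain.xml is a copy of the XML logged above
    virsh net-list --all          # confirm the mk-ha-931571 and default networks exist
    virsh define domain.xml       # register the domain with libvirt
    virsh start ha-931571         # boot the guest
    virsh dumpxml ha-931571       # inspect the XML libvirt actually stored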
	I1104 10:52:21.865935   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:cf:c5:1d in network default
	I1104 10:52:21.866504   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:21.866522   37715 main.go:141] libmachine: (ha-931571) Ensuring networks are active...
	I1104 10:52:21.866948   37715 main.go:141] libmachine: (ha-931571) Ensuring network default is active
	I1104 10:52:21.867232   37715 main.go:141] libmachine: (ha-931571) Ensuring network mk-ha-931571 is active
	I1104 10:52:21.867627   37715 main.go:141] libmachine: (ha-931571) Getting domain xml...
	I1104 10:52:21.868256   37715 main.go:141] libmachine: (ha-931571) Creating domain...
	I1104 10:52:23.049161   37715 main.go:141] libmachine: (ha-931571) Waiting to get IP...
	I1104 10:52:23.050233   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:23.050623   37715 main.go:141] libmachine: (ha-931571) DBG | unable to find current IP address of domain ha-931571 in network mk-ha-931571
	I1104 10:52:23.050643   37715 main.go:141] libmachine: (ha-931571) DBG | I1104 10:52:23.050602   37738 retry.go:31] will retry after 245.530574ms: waiting for machine to come up
	I1104 10:52:23.298185   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:23.298678   37715 main.go:141] libmachine: (ha-931571) DBG | unable to find current IP address of domain ha-931571 in network mk-ha-931571
	I1104 10:52:23.298704   37715 main.go:141] libmachine: (ha-931571) DBG | I1104 10:52:23.298589   37738 retry.go:31] will retry after 317.376406ms: waiting for machine to come up
	I1104 10:52:23.617020   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:23.617577   37715 main.go:141] libmachine: (ha-931571) DBG | unable to find current IP address of domain ha-931571 in network mk-ha-931571
	I1104 10:52:23.617605   37715 main.go:141] libmachine: (ha-931571) DBG | I1104 10:52:23.617514   37738 retry.go:31] will retry after 370.038267ms: waiting for machine to come up
	I1104 10:52:23.988831   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:23.989190   37715 main.go:141] libmachine: (ha-931571) DBG | unable to find current IP address of domain ha-931571 in network mk-ha-931571
	I1104 10:52:23.989220   37715 main.go:141] libmachine: (ha-931571) DBG | I1104 10:52:23.989148   37738 retry.go:31] will retry after 538.152632ms: waiting for machine to come up
	I1104 10:52:24.528804   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:24.529210   37715 main.go:141] libmachine: (ha-931571) DBG | unable to find current IP address of domain ha-931571 in network mk-ha-931571
	I1104 10:52:24.529252   37715 main.go:141] libmachine: (ha-931571) DBG | I1104 10:52:24.529162   37738 retry.go:31] will retry after 731.07349ms: waiting for machine to come up
	I1104 10:52:25.262048   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:25.262502   37715 main.go:141] libmachine: (ha-931571) DBG | unable to find current IP address of domain ha-931571 in network mk-ha-931571
	I1104 10:52:25.262519   37715 main.go:141] libmachine: (ha-931571) DBG | I1104 10:52:25.262462   37738 retry.go:31] will retry after 741.011273ms: waiting for machine to come up
	I1104 10:52:26.005553   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:26.005942   37715 main.go:141] libmachine: (ha-931571) DBG | unable to find current IP address of domain ha-931571 in network mk-ha-931571
	I1104 10:52:26.005976   37715 main.go:141] libmachine: (ha-931571) DBG | I1104 10:52:26.005909   37738 retry.go:31] will retry after 743.777795ms: waiting for machine to come up
	I1104 10:52:26.751254   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:26.751560   37715 main.go:141] libmachine: (ha-931571) DBG | unable to find current IP address of domain ha-931571 in network mk-ha-931571
	I1104 10:52:26.751581   37715 main.go:141] libmachine: (ha-931571) DBG | I1104 10:52:26.751519   37738 retry.go:31] will retry after 895.955115ms: waiting for machine to come up
	I1104 10:52:27.648705   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:27.649070   37715 main.go:141] libmachine: (ha-931571) DBG | unable to find current IP address of domain ha-931571 in network mk-ha-931571
	I1104 10:52:27.649096   37715 main.go:141] libmachine: (ha-931571) DBG | I1104 10:52:27.649040   37738 retry.go:31] will retry after 1.225419017s: waiting for machine to come up
	I1104 10:52:28.876413   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:28.876806   37715 main.go:141] libmachine: (ha-931571) DBG | unable to find current IP address of domain ha-931571 in network mk-ha-931571
	I1104 10:52:28.876829   37715 main.go:141] libmachine: (ha-931571) DBG | I1104 10:52:28.876782   37738 retry.go:31] will retry after 1.631823926s: waiting for machine to come up
	I1104 10:52:30.510636   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:30.511147   37715 main.go:141] libmachine: (ha-931571) DBG | unable to find current IP address of domain ha-931571 in network mk-ha-931571
	I1104 10:52:30.511177   37715 main.go:141] libmachine: (ha-931571) DBG | I1104 10:52:30.511093   37738 retry.go:31] will retry after 1.798258408s: waiting for machine to come up
	I1104 10:52:32.311067   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:32.311528   37715 main.go:141] libmachine: (ha-931571) DBG | unable to find current IP address of domain ha-931571 in network mk-ha-931571
	I1104 10:52:32.311574   37715 main.go:141] libmachine: (ha-931571) DBG | I1104 10:52:32.311491   37738 retry.go:31] will retry after 3.573429436s: waiting for machine to come up
	I1104 10:52:35.889088   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:35.889552   37715 main.go:141] libmachine: (ha-931571) DBG | unable to find current IP address of domain ha-931571 in network mk-ha-931571
	I1104 10:52:35.889578   37715 main.go:141] libmachine: (ha-931571) DBG | I1104 10:52:35.889516   37738 retry.go:31] will retry after 4.488251667s: waiting for machine to come up
	I1104 10:52:40.382173   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:40.382599   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has current primary IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:40.382621   37715 main.go:141] libmachine: (ha-931571) Found IP for machine: 192.168.39.67
	I1104 10:52:40.382633   37715 main.go:141] libmachine: (ha-931571) Reserving static IP address...
	I1104 10:52:40.383033   37715 main.go:141] libmachine: (ha-931571) DBG | unable to find host DHCP lease matching {name: "ha-931571", mac: "52:54:00:2c:cb:16", ip: "192.168.39.67"} in network mk-ha-931571
	I1104 10:52:40.452346   37715 main.go:141] libmachine: (ha-931571) DBG | Getting to WaitForSSH function...
	I1104 10:52:40.452379   37715 main.go:141] libmachine: (ha-931571) Reserved static IP address: 192.168.39.67
	I1104 10:52:40.452392   37715 main.go:141] libmachine: (ha-931571) Waiting for SSH to be available...
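The "waiting for machine to come up" retries above are the driver polling libvirt for a DHCP lease on the guest's MAC address (52:54:00:2c:cb:16) in the mk-ha-931571 network, backing off between attempts until the lease for 192.168.39.67 appears, here roughly 17 seconds after the domain was started. The same lease table can be read by hand with the standard libvirt client tools (a sketch, assuming virsh is available on the host):

    virsh net-dhcp-leases mk-ha-931571          # leases handed out on the private network
    virsh domifaddr ha-931571 --source lease    # addresses libvirt records for the domain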
	I1104 10:52:40.456018   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:40.456490   37715 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:minikube Clientid:01:52:54:00:2c:cb:16}
	I1104 10:52:40.456515   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:40.456627   37715 main.go:141] libmachine: (ha-931571) DBG | Using SSH client type: external
	I1104 10:52:40.456650   37715 main.go:141] libmachine: (ha-931571) DBG | Using SSH private key: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571/id_rsa (-rw-------)
	I1104 10:52:40.456681   37715 main.go:141] libmachine: (ha-931571) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.67 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1104 10:52:40.456700   37715 main.go:141] libmachine: (ha-931571) DBG | About to run SSH command:
	I1104 10:52:40.456715   37715 main.go:141] libmachine: (ha-931571) DBG | exit 0
	I1104 10:52:40.580862   37715 main.go:141] libmachine: (ha-931571) DBG | SSH cmd err, output: <nil>: 
	I1104 10:52:40.581146   37715 main.go:141] libmachine: (ha-931571) KVM machine creation complete!
	I1104 10:52:40.581410   37715 main.go:141] libmachine: (ha-931571) Calling .GetConfigRaw
	I1104 10:52:40.581936   37715 main.go:141] libmachine: (ha-931571) Calling .DriverName
	I1104 10:52:40.582130   37715 main.go:141] libmachine: (ha-931571) Calling .DriverName
	I1104 10:52:40.582294   37715 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1104 10:52:40.582307   37715 main.go:141] libmachine: (ha-931571) Calling .GetState
	I1104 10:52:40.583398   37715 main.go:141] libmachine: Detecting operating system of created instance...
	I1104 10:52:40.583412   37715 main.go:141] libmachine: Waiting for SSH to be available...
	I1104 10:52:40.583418   37715 main.go:141] libmachine: Getting to WaitForSSH function...
	I1104 10:52:40.583425   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHHostname
	I1104 10:52:40.585558   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:40.585865   37715 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 10:52:40.585891   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:40.585991   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHPort
	I1104 10:52:40.586130   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 10:52:40.586272   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 10:52:40.586383   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHUsername
	I1104 10:52:40.586519   37715 main.go:141] libmachine: Using SSH client type: native
	I1104 10:52:40.586723   37715 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I1104 10:52:40.586734   37715 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1104 10:52:40.692229   37715 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1104 10:52:40.692248   37715 main.go:141] libmachine: Detecting the provisioner...
	I1104 10:52:40.692257   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHHostname
	I1104 10:52:40.695010   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:40.695388   37715 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 10:52:40.695411   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:40.695556   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHPort
	I1104 10:52:40.695751   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 10:52:40.695899   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 10:52:40.696052   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHUsername
	I1104 10:52:40.696188   37715 main.go:141] libmachine: Using SSH client type: native
	I1104 10:52:40.696868   37715 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I1104 10:52:40.696890   37715 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1104 10:52:40.801468   37715 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1104 10:52:40.801552   37715 main.go:141] libmachine: found compatible host: buildroot
	I1104 10:52:40.801563   37715 main.go:141] libmachine: Provisioning with buildroot...
	I1104 10:52:40.801571   37715 main.go:141] libmachine: (ha-931571) Calling .GetMachineName
	I1104 10:52:40.801814   37715 buildroot.go:166] provisioning hostname "ha-931571"
	I1104 10:52:40.801836   37715 main.go:141] libmachine: (ha-931571) Calling .GetMachineName
	I1104 10:52:40.801992   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHHostname
	I1104 10:52:40.804318   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:40.804694   37715 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 10:52:40.804723   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:40.804889   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHPort
	I1104 10:52:40.805051   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 10:52:40.805262   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 10:52:40.805439   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHUsername
	I1104 10:52:40.805644   37715 main.go:141] libmachine: Using SSH client type: native
	I1104 10:52:40.805826   37715 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I1104 10:52:40.805838   37715 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-931571 && echo "ha-931571" | sudo tee /etc/hostname
	I1104 10:52:40.921516   37715 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-931571
	
	I1104 10:52:40.921540   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHHostname
	I1104 10:52:40.924174   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:40.924514   37715 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 10:52:40.924541   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:40.924675   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHPort
	I1104 10:52:40.924825   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 10:52:40.924941   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 10:52:40.925052   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHUsername
	I1104 10:52:40.925210   37715 main.go:141] libmachine: Using SSH client type: native
	I1104 10:52:40.925423   37715 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I1104 10:52:40.925448   37715 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-931571' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-931571/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-931571' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1104 10:52:41.036770   37715 main.go:141] libmachine: SSH cmd err, output: <nil>: 
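The SSH snippet above is how provisioning pins the node name: it sets /etc/hostname, then either rewrites an existing 127.0.1.1 entry in /etc/hosts or appends one, so the node resolves its own name without DNS. A quick way to confirm the result on the running guest (sketch, assuming the ha-931571 profile is still up):

    minikube ssh -p ha-931571 -- cat /etc/hostname
    minikube ssh -p ha-931571 -- grep ha-931571 /etc/hosts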
	I1104 10:52:41.036799   37715 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19906-19898/.minikube CaCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19906-19898/.minikube}
	I1104 10:52:41.036830   37715 buildroot.go:174] setting up certificates
	I1104 10:52:41.036839   37715 provision.go:84] configureAuth start
	I1104 10:52:41.036848   37715 main.go:141] libmachine: (ha-931571) Calling .GetMachineName
	I1104 10:52:41.037164   37715 main.go:141] libmachine: (ha-931571) Calling .GetIP
	I1104 10:52:41.039662   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:41.040007   37715 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 10:52:41.040032   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:41.040164   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHHostname
	I1104 10:52:41.042288   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:41.042624   37715 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 10:52:41.042652   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:41.042756   37715 provision.go:143] copyHostCerts
	I1104 10:52:41.042779   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem
	I1104 10:52:41.042808   37715 exec_runner.go:144] found /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem, removing ...
	I1104 10:52:41.042823   37715 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem
	I1104 10:52:41.042880   37715 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem (1078 bytes)
	I1104 10:52:41.042955   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem
	I1104 10:52:41.042972   37715 exec_runner.go:144] found /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem, removing ...
	I1104 10:52:41.042979   37715 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem
	I1104 10:52:41.043001   37715 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem (1123 bytes)
	I1104 10:52:41.043042   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem
	I1104 10:52:41.043058   37715 exec_runner.go:144] found /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem, removing ...
	I1104 10:52:41.043064   37715 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem
	I1104 10:52:41.043084   37715 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem (1679 bytes)
	I1104 10:52:41.043133   37715 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem org=jenkins.ha-931571 san=[127.0.0.1 192.168.39.67 ha-931571 localhost minikube]
	I1104 10:52:41.275942   37715 provision.go:177] copyRemoteCerts
	I1104 10:52:41.275998   37715 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1104 10:52:41.276018   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHHostname
	I1104 10:52:41.278984   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:41.279300   37715 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 10:52:41.279324   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:41.279438   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHPort
	I1104 10:52:41.279611   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 10:52:41.279754   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHUsername
	I1104 10:52:41.279862   37715 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571/id_rsa Username:docker}
	I1104 10:52:41.362606   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1104 10:52:41.362673   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1104 10:52:41.384103   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1104 10:52:41.384170   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1104 10:52:41.405170   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1104 10:52:41.405259   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1104 10:52:41.426285   37715 provision.go:87] duration metric: took 389.43394ms to configureAuth
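configureAuth above refreshes the host-side CA and client material under .minikube and mints a per-machine server certificate (SANs 127.0.0.1, 192.168.39.67, ha-931571, localhost, minikube); the three scp lines then push ca.pem, server.pem and server-key.pem into the guest. They land under /etc/docker even though the runtime is CRI-O, a libmachine naming convention. A sketch for double-checking them on the node, assuming the profile is reachable and openssl is present in the guest image:

    minikube ssh -p ha-931571 -- sudo ls -l /etc/docker
    minikube ssh -p ha-931571 -- sudo openssl x509 -in /etc/docker/server.pem -noout -subject -ext subjectAltName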
	I1104 10:52:41.426311   37715 buildroot.go:189] setting minikube options for container-runtime
	I1104 10:52:41.426499   37715 config.go:182] Loaded profile config "ha-931571": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1104 10:52:41.426580   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHHostname
	I1104 10:52:41.429219   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:41.429514   37715 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 10:52:41.429539   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:41.429751   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHPort
	I1104 10:52:41.429959   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 10:52:41.430107   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 10:52:41.430247   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHUsername
	I1104 10:52:41.430417   37715 main.go:141] libmachine: Using SSH client type: native
	I1104 10:52:41.430644   37715 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I1104 10:52:41.430666   37715 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1104 10:52:41.649262   37715 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1104 10:52:41.649291   37715 main.go:141] libmachine: Checking connection to Docker...
	I1104 10:52:41.649300   37715 main.go:141] libmachine: (ha-931571) Calling .GetURL
	I1104 10:52:41.650723   37715 main.go:141] libmachine: (ha-931571) DBG | Using libvirt version 6000000
	I1104 10:52:41.653499   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:41.653913   37715 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 10:52:41.653943   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:41.654070   37715 main.go:141] libmachine: Docker is up and running!
	I1104 10:52:41.654084   37715 main.go:141] libmachine: Reticulating splines...
	I1104 10:52:41.654091   37715 client.go:171] duration metric: took 20.198612513s to LocalClient.Create
	I1104 10:52:41.654124   37715 start.go:167] duration metric: took 20.198697894s to libmachine.API.Create "ha-931571"
	I1104 10:52:41.654168   37715 start.go:293] postStartSetup for "ha-931571" (driver="kvm2")
	I1104 10:52:41.654182   37715 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1104 10:52:41.654199   37715 main.go:141] libmachine: (ha-931571) Calling .DriverName
	I1104 10:52:41.654448   37715 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1104 10:52:41.654477   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHHostname
	I1104 10:52:41.656689   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:41.657007   37715 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 10:52:41.657028   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:41.657279   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHPort
	I1104 10:52:41.657484   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 10:52:41.657648   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHUsername
	I1104 10:52:41.657776   37715 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571/id_rsa Username:docker}
	I1104 10:52:41.738934   37715 ssh_runner.go:195] Run: cat /etc/os-release
	I1104 10:52:41.742902   37715 info.go:137] Remote host: Buildroot 2023.02.9
	I1104 10:52:41.742925   37715 filesync.go:126] Scanning /home/jenkins/minikube-integration/19906-19898/.minikube/addons for local assets ...
	I1104 10:52:41.742997   37715 filesync.go:126] Scanning /home/jenkins/minikube-integration/19906-19898/.minikube/files for local assets ...
	I1104 10:52:41.743084   37715 filesync.go:149] local asset: /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem -> 272182.pem in /etc/ssl/certs
	I1104 10:52:41.743095   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem -> /etc/ssl/certs/272182.pem
	I1104 10:52:41.743212   37715 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1104 10:52:41.752124   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem --> /etc/ssl/certs/272182.pem (1708 bytes)
	I1104 10:52:41.774335   37715 start.go:296] duration metric: took 120.149038ms for postStartSetup
	I1104 10:52:41.774411   37715 main.go:141] libmachine: (ha-931571) Calling .GetConfigRaw
	I1104 10:52:41.775008   37715 main.go:141] libmachine: (ha-931571) Calling .GetIP
	I1104 10:52:41.777422   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:41.777754   37715 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 10:52:41.777776   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:41.778012   37715 profile.go:143] Saving config to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/config.json ...
	I1104 10:52:41.778186   37715 start.go:128] duration metric: took 20.340838176s to createHost
	I1104 10:52:41.778221   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHHostname
	I1104 10:52:41.780525   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:41.780784   37715 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 10:52:41.780805   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:41.780933   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHPort
	I1104 10:52:41.781101   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 10:52:41.781264   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 10:52:41.781386   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHUsername
	I1104 10:52:41.781512   37715 main.go:141] libmachine: Using SSH client type: native
	I1104 10:52:41.781672   37715 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I1104 10:52:41.781683   37715 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1104 10:52:41.885593   37715 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730717561.859087710
	
	I1104 10:52:41.885616   37715 fix.go:216] guest clock: 1730717561.859087710
	I1104 10:52:41.885624   37715 fix.go:229] Guest: 2024-11-04 10:52:41.85908771 +0000 UTC Remote: 2024-11-04 10:52:41.778208592 +0000 UTC m=+20.449726833 (delta=80.879118ms)
	I1104 10:52:41.885647   37715 fix.go:200] guest clock delta is within tolerance: 80.879118ms
	I1104 10:52:41.885653   37715 start.go:83] releasing machines lock for "ha-931571", held for 20.448400301s
	I1104 10:52:41.885675   37715 main.go:141] libmachine: (ha-931571) Calling .DriverName
	I1104 10:52:41.885953   37715 main.go:141] libmachine: (ha-931571) Calling .GetIP
	I1104 10:52:41.888489   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:41.888887   37715 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 10:52:41.888909   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:41.889131   37715 main.go:141] libmachine: (ha-931571) Calling .DriverName
	I1104 10:52:41.889647   37715 main.go:141] libmachine: (ha-931571) Calling .DriverName
	I1104 10:52:41.889819   37715 main.go:141] libmachine: (ha-931571) Calling .DriverName
	I1104 10:52:41.889899   37715 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1104 10:52:41.889945   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHHostname
	I1104 10:52:41.890032   37715 ssh_runner.go:195] Run: cat /version.json
	I1104 10:52:41.890047   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHHostname
	I1104 10:52:41.892621   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:41.893038   37715 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 10:52:41.893065   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:41.893082   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:41.893208   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHPort
	I1104 10:52:41.893350   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 10:52:41.893498   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHUsername
	I1104 10:52:41.893582   37715 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 10:52:41.893589   37715 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571/id_rsa Username:docker}
	I1104 10:52:41.893613   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:41.893793   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHPort
	I1104 10:52:41.893936   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 10:52:41.894105   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHUsername
	I1104 10:52:41.894263   37715 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571/id_rsa Username:docker}
	I1104 10:52:41.988130   37715 ssh_runner.go:195] Run: systemctl --version
	I1104 10:52:41.993656   37715 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1104 10:52:42.142615   37715 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1104 10:52:42.148950   37715 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1104 10:52:42.149023   37715 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1104 10:52:42.163368   37715 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1104 10:52:42.163399   37715 start.go:495] detecting cgroup driver to use...
	I1104 10:52:42.163459   37715 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1104 10:52:42.178011   37715 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1104 10:52:42.190311   37715 docker.go:217] disabling cri-docker service (if available) ...
	I1104 10:52:42.190363   37715 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1104 10:52:42.202494   37715 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1104 10:52:42.215234   37715 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1104 10:52:42.322933   37715 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1104 10:52:42.465367   37715 docker.go:233] disabling docker service ...
	I1104 10:52:42.465435   37715 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1104 10:52:42.478799   37715 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1104 10:52:42.490748   37715 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1104 10:52:42.621810   37715 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1104 10:52:42.721588   37715 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1104 10:52:42.734181   37715 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1104 10:52:42.750278   37715 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1104 10:52:42.750346   37715 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 10:52:42.759509   37715 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1104 10:52:42.759569   37715 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 10:52:42.768912   37715 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 10:52:42.778275   37715 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 10:52:42.791011   37715 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1104 10:52:42.801155   37715 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 10:52:42.810365   37715 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 10:52:42.825204   37715 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 10:52:42.834333   37715 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1104 10:52:42.842438   37715 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1104 10:52:42.842479   37715 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1104 10:52:42.853336   37715 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1104 10:52:42.861893   37715 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1104 10:52:42.966759   37715 ssh_runner.go:195] Run: sudo systemctl restart crio
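The sed one-liners above edit /etc/crio/crio.conf.d/02-crio.conf in place before CRI-O is restarted: they pin the pause image, force the cgroupfs cgroup manager, move conmon into the pod cgroup, and open unprivileged ports via default_sysctls. Reconstructed from those commands (not captured from the node), the touched keys should end up roughly like the fragment below:

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]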
	I1104 10:52:43.051148   37715 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1104 10:52:43.051245   37715 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1104 10:52:43.055605   37715 start.go:563] Will wait 60s for crictl version
	I1104 10:52:43.055660   37715 ssh_runner.go:195] Run: which crictl
	I1104 10:52:43.058970   37715 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1104 10:52:43.092206   37715 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1104 10:52:43.092300   37715 ssh_runner.go:195] Run: crio --version
	I1104 10:52:43.119216   37715 ssh_runner.go:195] Run: crio --version
	I1104 10:52:43.149822   37715 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1104 10:52:43.150920   37715 main.go:141] libmachine: (ha-931571) Calling .GetIP
	I1104 10:52:43.153539   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:43.153876   37715 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 10:52:43.153903   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:43.154148   37715 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1104 10:52:43.157775   37715 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1104 10:52:43.169819   37715 kubeadm.go:883] updating cluster {Name:ha-931571 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 Cl
usterName:ha-931571 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.67 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mo
untType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1104 10:52:43.169924   37715 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1104 10:52:43.169983   37715 ssh_runner.go:195] Run: sudo crictl images --output json
	I1104 10:52:43.198885   37715 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1104 10:52:43.198949   37715 ssh_runner.go:195] Run: which lz4
	I1104 10:52:43.202346   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1104 10:52:43.202439   37715 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1104 10:52:43.206081   37715 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1104 10:52:43.206107   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1104 10:52:44.348916   37715 crio.go:462] duration metric: took 1.146501805s to copy over tarball
	I1104 10:52:44.348982   37715 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1104 10:52:46.326500   37715 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.97746722s)
	I1104 10:52:46.326527   37715 crio.go:469] duration metric: took 1.977583171s to extract the tarball
	I1104 10:52:46.326535   37715 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1104 10:52:46.361867   37715 ssh_runner.go:195] Run: sudo crictl images --output json
	I1104 10:52:46.402887   37715 crio.go:514] all images are preloaded for cri-o runtime.
	I1104 10:52:46.402909   37715 cache_images.go:84] Images are preloaded, skipping loading
	I1104 10:52:46.402919   37715 kubeadm.go:934] updating node { 192.168.39.67 8443 v1.31.2 crio true true} ...
	I1104 10:52:46.403024   37715 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-931571 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.67
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-931571 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
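The kubelet drop-in above uses the standard systemd pattern of clearing ExecStart and re-setting it, relaunching the versioned kubelet binary with this node's --node-ip, --hostname-override and bootstrap kubeconfig. Once the node is provisioned, the effective unit can be inspected in place (sketch, relying only on systemd's standard tooling inside the guest):

    minikube ssh -p ha-931571 -- systemctl cat kubelet
    minikube ssh -p ha-931571 -- systemctl show kubelet -p ExecStart --no-pager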
	I1104 10:52:46.403102   37715 ssh_runner.go:195] Run: crio config
	I1104 10:52:46.448114   37715 cni.go:84] Creating CNI manager for ""
	I1104 10:52:46.448134   37715 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1104 10:52:46.448143   37715 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1104 10:52:46.448161   37715 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.67 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-931571 NodeName:ha-931571 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.67"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.67 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1104 10:52:46.448276   37715 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.67
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-931571"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.67"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.67"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
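	The kubeadm config rendered above is written to /var/tmp/minikube/kubeadm.yaml before kubeadm init runs (see the scp and cp steps further down). If a start failure points at this file, it can be sanity-checked on the node with kubeadm's own validator (present in recent releases), e.g.:
	  sudo /var/lib/minikube/binaries/v1.31.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml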
	
	I1104 10:52:46.448297   37715 kube-vip.go:115] generating kube-vip config ...
	I1104 10:52:46.448333   37715 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1104 10:52:46.464928   37715 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1104 10:52:46.465022   37715 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.5
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
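	As the manifest above shows, kube-vip runs as a static pod with Lease-based leader election (lease plndr-cp-lock in kube-system) and announces the HA virtual IP 192.168.39.254 on eth0. Once the control plane is up, both can be checked, for example:
	  kubectl -n kube-system get lease plndr-cp-lock -o yaml
	  minikube ssh -p ha-931571 "ip -4 addr show dev eth0"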
	I1104 10:52:46.465069   37715 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1104 10:52:46.473864   37715 binaries.go:44] Found k8s binaries, skipping transfer
	I1104 10:52:46.473931   37715 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1104 10:52:46.482366   37715 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I1104 10:52:46.497386   37715 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1104 10:52:46.512146   37715 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2286 bytes)
	I1104 10:52:46.528415   37715 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I1104 10:52:46.544798   37715 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1104 10:52:46.548212   37715 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1104 10:52:46.559488   37715 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1104 10:52:46.692494   37715 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1104 10:52:46.708806   37715 certs.go:68] Setting up /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571 for IP: 192.168.39.67
	I1104 10:52:46.708830   37715 certs.go:194] generating shared ca certs ...
	I1104 10:52:46.708849   37715 certs.go:226] acquiring lock for ca certs: {Name:mk4fb0469da6697654d0290fd25edddd463f47c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 10:52:46.709027   37715 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.key
	I1104 10:52:46.709089   37715 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.key
	I1104 10:52:46.709102   37715 certs.go:256] generating profile certs ...
	I1104 10:52:46.709156   37715 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/client.key
	I1104 10:52:46.709175   37715 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/client.crt with IP's: []
	I1104 10:52:46.835505   37715 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/client.crt ...
	I1104 10:52:46.835534   37715 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/client.crt: {Name:mk61f73d1cdbaea56c4e3a41bf4d8a8e998c4601 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 10:52:46.835713   37715 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/client.key ...
	I1104 10:52:46.835728   37715 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/client.key: {Name:mk3a1e70b98b06ffcf80cad3978790ca4b634404 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 10:52:46.835832   37715 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.key.db135e66
	I1104 10:52:46.835851   37715 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.crt.db135e66 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.67 192.168.39.254]
	I1104 10:52:46.955700   37715 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.crt.db135e66 ...
	I1104 10:52:46.955730   37715 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.crt.db135e66: {Name:mk7e52761b5f3a6915e1cf90cd8ace0ff40a1698 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 10:52:46.955903   37715 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.key.db135e66 ...
	I1104 10:52:46.955919   37715 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.key.db135e66: {Name:mk473e5ea437641c8d6be7c8c672068a3ffc879a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 10:52:46.956011   37715 certs.go:381] copying /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.crt.db135e66 -> /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.crt
	I1104 10:52:46.956221   37715 certs.go:385] copying /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.key.db135e66 -> /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.key
	I1104 10:52:46.956356   37715 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/proxy-client.key
	I1104 10:52:46.956379   37715 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/proxy-client.crt with IP's: []
	I1104 10:52:47.101236   37715 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/proxy-client.crt ...
	I1104 10:52:47.101269   37715 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/proxy-client.crt: {Name:mk407ac3d668cf899822db436da4d41618f60b31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 10:52:47.101451   37715 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/proxy-client.key ...
	I1104 10:52:47.101466   37715 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/proxy-client.key: {Name:mk67291900fae9d34a6dbb5f9ac6f9eff95090cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 10:52:47.101560   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1104 10:52:47.101583   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1104 10:52:47.101600   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1104 10:52:47.101617   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1104 10:52:47.101636   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1104 10:52:47.101656   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1104 10:52:47.101675   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1104 10:52:47.101692   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1104 10:52:47.101753   37715 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218.pem (1338 bytes)
	W1104 10:52:47.101799   37715 certs.go:480] ignoring /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218_empty.pem, impossibly tiny 0 bytes
	I1104 10:52:47.101812   37715 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem (1675 bytes)
	I1104 10:52:47.101846   37715 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem (1078 bytes)
	I1104 10:52:47.101884   37715 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem (1123 bytes)
	I1104 10:52:47.101916   37715 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem (1679 bytes)
	I1104 10:52:47.101975   37715 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem (1708 bytes)
	I1104 10:52:47.102014   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218.pem -> /usr/share/ca-certificates/27218.pem
	I1104 10:52:47.102035   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem -> /usr/share/ca-certificates/272182.pem
	I1104 10:52:47.102054   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1104 10:52:47.102621   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1104 10:52:47.126053   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1104 10:52:47.148030   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1104 10:52:47.169097   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1104 10:52:47.190790   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1104 10:52:47.211485   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1104 10:52:47.233064   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1104 10:52:47.254438   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1104 10:52:47.275584   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218.pem --> /usr/share/ca-certificates/27218.pem (1338 bytes)
	I1104 10:52:47.296496   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem --> /usr/share/ca-certificates/272182.pem (1708 bytes)
	I1104 10:52:47.316993   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1104 10:52:47.338085   37715 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1104 10:52:47.352830   37715 ssh_runner.go:195] Run: openssl version
	I1104 10:52:47.357992   37715 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/27218.pem && ln -fs /usr/share/ca-certificates/27218.pem /etc/ssl/certs/27218.pem"
	I1104 10:52:47.367171   37715 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/27218.pem
	I1104 10:52:47.371139   37715 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  4 10:49 /usr/share/ca-certificates/27218.pem
	I1104 10:52:47.371175   37715 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/27218.pem
	I1104 10:52:47.376056   37715 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/27218.pem /etc/ssl/certs/51391683.0"
	I1104 10:52:47.385217   37715 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/272182.pem && ln -fs /usr/share/ca-certificates/272182.pem /etc/ssl/certs/272182.pem"
	I1104 10:52:47.394305   37715 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/272182.pem
	I1104 10:52:47.398184   37715 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  4 10:49 /usr/share/ca-certificates/272182.pem
	I1104 10:52:47.398229   37715 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/272182.pem
	I1104 10:52:47.403221   37715 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/272182.pem /etc/ssl/certs/3ec20f2e.0"
	I1104 10:52:47.412407   37715 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1104 10:52:47.421725   37715 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1104 10:52:47.425673   37715 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  4 10:38 /usr/share/ca-certificates/minikubeCA.pem
	I1104 10:52:47.425724   37715 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1104 10:52:47.430774   37715 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
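	The three ln -fs steps above follow the standard OpenSSL CA-directory convention: each trusted certificate is linked under its subject-hash filename (here 51391683.0, 3ec20f2e.0 and b5213941.0), the value printed by the preceding openssl x509 -hash calls, e.g.:
	  openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	  ls -l /etc/ssl/certs/b5213941.0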
	I1104 10:52:47.442891   37715 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1104 10:52:47.448916   37715 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1104 10:52:47.448963   37715 kubeadm.go:392] StartCluster: {Name:ha-931571 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-931571 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.67 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1104 10:52:47.449026   37715 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1104 10:52:47.449081   37715 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1104 10:52:47.493313   37715 cri.go:89] found id: ""
	I1104 10:52:47.493388   37715 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1104 10:52:47.505853   37715 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1104 10:52:47.514358   37715 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1104 10:52:47.522614   37715 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1104 10:52:47.522633   37715 kubeadm.go:157] found existing configuration files:
	
	I1104 10:52:47.522685   37715 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1104 10:52:47.530458   37715 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1104 10:52:47.530497   37715 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1104 10:52:47.538766   37715 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1104 10:52:47.546614   37715 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1104 10:52:47.546656   37715 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1104 10:52:47.554873   37715 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1104 10:52:47.562800   37715 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1104 10:52:47.562860   37715 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1104 10:52:47.571095   37715 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1104 10:52:47.578946   37715 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1104 10:52:47.578986   37715 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1104 10:52:47.587002   37715 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1104 10:52:47.774250   37715 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1104 10:52:59.162857   37715 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1104 10:52:59.162909   37715 kubeadm.go:310] [preflight] Running pre-flight checks
	I1104 10:52:59.162992   37715 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1104 10:52:59.163126   37715 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1104 10:52:59.163235   37715 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1104 10:52:59.163321   37715 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1104 10:52:59.164884   37715 out.go:235]   - Generating certificates and keys ...
	I1104 10:52:59.164965   37715 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1104 10:52:59.165051   37715 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1104 10:52:59.165154   37715 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1104 10:52:59.165262   37715 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1104 10:52:59.165355   37715 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1104 10:52:59.165433   37715 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1104 10:52:59.165512   37715 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1104 10:52:59.165644   37715 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-931571 localhost] and IPs [192.168.39.67 127.0.0.1 ::1]
	I1104 10:52:59.165719   37715 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1104 10:52:59.165854   37715 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-931571 localhost] and IPs [192.168.39.67 127.0.0.1 ::1]
	I1104 10:52:59.165939   37715 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1104 10:52:59.166039   37715 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1104 10:52:59.166120   37715 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1104 10:52:59.166198   37715 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1104 10:52:59.166277   37715 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1104 10:52:59.166352   37715 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1104 10:52:59.166437   37715 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1104 10:52:59.166524   37715 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1104 10:52:59.166602   37715 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1104 10:52:59.166715   37715 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1104 10:52:59.166813   37715 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1104 10:52:59.168314   37715 out.go:235]   - Booting up control plane ...
	I1104 10:52:59.168430   37715 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1104 10:52:59.168528   37715 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1104 10:52:59.168619   37715 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1104 10:52:59.168745   37715 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1104 10:52:59.168864   37715 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1104 10:52:59.168907   37715 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1104 10:52:59.169020   37715 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1104 10:52:59.169142   37715 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1104 10:52:59.169244   37715 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.501850183s
	I1104 10:52:59.169346   37715 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1104 10:52:59.169435   37715 kubeadm.go:310] [api-check] The API server is healthy after 5.721436597s
	I1104 10:52:59.169568   37715 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1104 10:52:59.169699   37715 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1104 10:52:59.169786   37715 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1104 10:52:59.169979   37715 kubeadm.go:310] [mark-control-plane] Marking the node ha-931571 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1104 10:52:59.170060   37715 kubeadm.go:310] [bootstrap-token] Using token: x3krps.xtycqe6w7psx61o7
	I1104 10:52:59.171278   37715 out.go:235]   - Configuring RBAC rules ...
	I1104 10:52:59.171366   37715 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1104 10:52:59.171442   37715 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1104 10:52:59.171566   37715 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1104 10:52:59.171689   37715 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1104 10:52:59.171828   37715 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1104 10:52:59.171935   37715 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1104 10:52:59.172086   37715 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1104 10:52:59.172158   37715 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1104 10:52:59.172220   37715 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1104 10:52:59.172232   37715 kubeadm.go:310] 
	I1104 10:52:59.172322   37715 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1104 10:52:59.172332   37715 kubeadm.go:310] 
	I1104 10:52:59.172461   37715 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1104 10:52:59.172471   37715 kubeadm.go:310] 
	I1104 10:52:59.172512   37715 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1104 10:52:59.172591   37715 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1104 10:52:59.172657   37715 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1104 10:52:59.172671   37715 kubeadm.go:310] 
	I1104 10:52:59.172727   37715 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1104 10:52:59.172733   37715 kubeadm.go:310] 
	I1104 10:52:59.172772   37715 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1104 10:52:59.172780   37715 kubeadm.go:310] 
	I1104 10:52:59.172823   37715 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1104 10:52:59.172919   37715 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1104 10:52:59.173013   37715 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1104 10:52:59.173027   37715 kubeadm.go:310] 
	I1104 10:52:59.173126   37715 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1104 10:52:59.173242   37715 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1104 10:52:59.173250   37715 kubeadm.go:310] 
	I1104 10:52:59.173349   37715 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token x3krps.xtycqe6w7psx61o7 \
	I1104 10:52:59.173475   37715 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:95833ff41306ac800b975e28f2da3aedc13f83db6865dfb0ce3f10d781d75fd9 \
	I1104 10:52:59.173512   37715 kubeadm.go:310] 	--control-plane 
	I1104 10:52:59.173521   37715 kubeadm.go:310] 
	I1104 10:52:59.173615   37715 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1104 10:52:59.173622   37715 kubeadm.go:310] 
	I1104 10:52:59.173728   37715 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token x3krps.xtycqe6w7psx61o7 \
	I1104 10:52:59.173851   37715 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:95833ff41306ac800b975e28f2da3aedc13f83db6865dfb0ce3f10d781d75fd9 
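	For reference, the --discovery-token-ca-cert-hash value above is the SHA-256 of the cluster CA's public key. On this setup, where the certificates live under /var/lib/minikube/certs, it can be recomputed on the node with the usual openssl pipeline from the Kubernetes docs:
	  openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	    | openssl rsa -pubin -outform der 2>/dev/null \
	    | openssl dgst -sha256 -hex | sed 's/^.* //'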
	I1104 10:52:59.173864   37715 cni.go:84] Creating CNI manager for ""
	I1104 10:52:59.173870   37715 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1104 10:52:59.175270   37715 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1104 10:52:59.176515   37715 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1104 10:52:59.181311   37715 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.2/kubectl ...
	I1104 10:52:59.181330   37715 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1104 10:52:59.199374   37715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1104 10:52:59.595605   37715 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1104 10:52:59.595735   37715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1104 10:52:59.595746   37715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-931571 minikube.k8s.io/updated_at=2024_11_04T10_52_59_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=c03dd974d73f8853a1a57928c124797a5ae24dc4 minikube.k8s.io/name=ha-931571 minikube.k8s.io/primary=true
	I1104 10:52:59.607016   37715 ops.go:34] apiserver oom_adj: -16
	I1104 10:52:59.726325   37715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1104 10:53:00.227237   37715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1104 10:53:00.727360   37715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1104 10:53:01.226637   37715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1104 10:53:01.727035   37715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1104 10:53:02.226405   37715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1104 10:53:02.727470   37715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1104 10:53:03.227029   37715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1104 10:53:03.337760   37715 kubeadm.go:1113] duration metric: took 3.742086638s to wait for elevateKubeSystemPrivileges
	I1104 10:53:03.337799   37715 kubeadm.go:394] duration metric: took 15.888837987s to StartCluster
	I1104 10:53:03.337821   37715 settings.go:142] acquiring lock: {Name:mk5550833c1f1a4ab4fbb2bf42ff8bd7f6341220 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 10:53:03.337905   37715 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19906-19898/kubeconfig
	I1104 10:53:03.338737   37715 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19906-19898/kubeconfig: {Name:mk164657c178e6abe086acb5c1cadb969cb2b0cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 10:53:03.338982   37715 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1104 10:53:03.338988   37715 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.67 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1104 10:53:03.339014   37715 start.go:241] waiting for startup goroutines ...
	I1104 10:53:03.339062   37715 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1104 10:53:03.339167   37715 addons.go:69] Setting default-storageclass=true in profile "ha-931571"
	I1104 10:53:03.339173   37715 addons.go:69] Setting storage-provisioner=true in profile "ha-931571"
	I1104 10:53:03.339185   37715 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-931571"
	I1104 10:53:03.339200   37715 addons.go:234] Setting addon storage-provisioner=true in "ha-931571"
	I1104 10:53:03.339229   37715 config.go:182] Loaded profile config "ha-931571": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1104 10:53:03.339239   37715 host.go:66] Checking if "ha-931571" exists ...
	I1104 10:53:03.339632   37715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 10:53:03.339672   37715 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 10:53:03.339677   37715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 10:53:03.339713   37715 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 10:53:03.360893   37715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40211
	I1104 10:53:03.360926   37715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37163
	I1104 10:53:03.361436   37715 main.go:141] libmachine: () Calling .GetVersion
	I1104 10:53:03.361473   37715 main.go:141] libmachine: () Calling .GetVersion
	I1104 10:53:03.361990   37715 main.go:141] libmachine: Using API Version  1
	I1104 10:53:03.362007   37715 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 10:53:03.362132   37715 main.go:141] libmachine: Using API Version  1
	I1104 10:53:03.362158   37715 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 10:53:03.362362   37715 main.go:141] libmachine: () Calling .GetMachineName
	I1104 10:53:03.362495   37715 main.go:141] libmachine: () Calling .GetMachineName
	I1104 10:53:03.362668   37715 main.go:141] libmachine: (ha-931571) Calling .GetState
	I1104 10:53:03.362891   37715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 10:53:03.362932   37715 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 10:53:03.365045   37715 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19906-19898/kubeconfig
	I1104 10:53:03.365435   37715 kapi.go:59] client config for ha-931571: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/client.crt", KeyFile:"/home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/client.key", CAFile:"/home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2437da0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1104 10:53:03.365987   37715 cert_rotation.go:140] Starting client certificate rotation controller
	I1104 10:53:03.366272   37715 addons.go:234] Setting addon default-storageclass=true in "ha-931571"
	I1104 10:53:03.366318   37715 host.go:66] Checking if "ha-931571" exists ...
	I1104 10:53:03.366699   37715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 10:53:03.366738   37715 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 10:53:03.381218   37715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35443
	I1104 10:53:03.381322   37715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38027
	I1104 10:53:03.381713   37715 main.go:141] libmachine: () Calling .GetVersion
	I1104 10:53:03.381719   37715 main.go:141] libmachine: () Calling .GetVersion
	I1104 10:53:03.382205   37715 main.go:141] libmachine: Using API Version  1
	I1104 10:53:03.382227   37715 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 10:53:03.382357   37715 main.go:141] libmachine: Using API Version  1
	I1104 10:53:03.382372   37715 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 10:53:03.382534   37715 main.go:141] libmachine: () Calling .GetMachineName
	I1104 10:53:03.383016   37715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 10:53:03.383048   37715 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 10:53:03.383535   37715 main.go:141] libmachine: () Calling .GetMachineName
	I1104 10:53:03.383708   37715 main.go:141] libmachine: (ha-931571) Calling .GetState
	I1104 10:53:03.385592   37715 main.go:141] libmachine: (ha-931571) Calling .DriverName
	I1104 10:53:03.387622   37715 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1104 10:53:03.388963   37715 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1104 10:53:03.388985   37715 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1104 10:53:03.389004   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHHostname
	I1104 10:53:03.392017   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:53:03.392435   37715 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 10:53:03.392480   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:53:03.392570   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHPort
	I1104 10:53:03.392752   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 10:53:03.392874   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHUsername
	I1104 10:53:03.393020   37715 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571/id_rsa Username:docker}
	I1104 10:53:03.398269   37715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34587
	I1104 10:53:03.398748   37715 main.go:141] libmachine: () Calling .GetVersion
	I1104 10:53:03.399262   37715 main.go:141] libmachine: Using API Version  1
	I1104 10:53:03.399294   37715 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 10:53:03.399614   37715 main.go:141] libmachine: () Calling .GetMachineName
	I1104 10:53:03.399786   37715 main.go:141] libmachine: (ha-931571) Calling .GetState
	I1104 10:53:03.401287   37715 main.go:141] libmachine: (ha-931571) Calling .DriverName
	I1104 10:53:03.401486   37715 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1104 10:53:03.401502   37715 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1104 10:53:03.401529   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHHostname
	I1104 10:53:03.404218   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:53:03.404573   37715 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 10:53:03.404595   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:53:03.404677   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHPort
	I1104 10:53:03.404848   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 10:53:03.404981   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHUsername
	I1104 10:53:03.405135   37715 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571/id_rsa Username:docker}
	I1104 10:53:03.489842   37715 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
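	The sed pipeline above rewrites the coredns ConfigMap so that the Corefile gains a log directive and a hosts block (mapping 192.168.39.1 to host.minikube.internal, with fallthrough) just before the forward stanza. The resulting Corefile can be inspected with, for example:
	  kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'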
	I1104 10:53:03.554612   37715 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1104 10:53:03.583845   37715 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1104 10:53:03.952361   37715 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1104 10:53:03.952436   37715 main.go:141] libmachine: Making call to close driver server
	I1104 10:53:03.952460   37715 main.go:141] libmachine: (ha-931571) Calling .Close
	I1104 10:53:03.952742   37715 main.go:141] libmachine: Successfully made call to close driver server
	I1104 10:53:03.952762   37715 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 10:53:03.952762   37715 main.go:141] libmachine: (ha-931571) DBG | Closing plugin on server side
	I1104 10:53:03.952772   37715 main.go:141] libmachine: Making call to close driver server
	I1104 10:53:03.952781   37715 main.go:141] libmachine: (ha-931571) Calling .Close
	I1104 10:53:03.952966   37715 main.go:141] libmachine: Successfully made call to close driver server
	I1104 10:53:03.952981   37715 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 10:53:03.953045   37715 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1104 10:53:03.953065   37715 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1104 10:53:03.953164   37715 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I1104 10:53:03.953175   37715 round_trippers.go:469] Request Headers:
	I1104 10:53:03.953187   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:53:03.953195   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:53:03.960797   37715 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1104 10:53:03.961342   37715 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1104 10:53:03.961355   37715 round_trippers.go:469] Request Headers:
	I1104 10:53:03.961363   37715 round_trippers.go:473]     Content-Type: application/json
	I1104 10:53:03.961367   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:53:03.961369   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:53:03.963493   37715 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1104 10:53:03.963694   37715 main.go:141] libmachine: Making call to close driver server
	I1104 10:53:03.963715   37715 main.go:141] libmachine: (ha-931571) Calling .Close
	I1104 10:53:03.964004   37715 main.go:141] libmachine: Successfully made call to close driver server
	I1104 10:53:03.964021   37715 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 10:53:03.964021   37715 main.go:141] libmachine: (ha-931571) DBG | Closing plugin on server side
	I1104 10:53:04.222705   37715 main.go:141] libmachine: Making call to close driver server
	I1104 10:53:04.222735   37715 main.go:141] libmachine: (ha-931571) Calling .Close
	I1104 10:53:04.223063   37715 main.go:141] libmachine: (ha-931571) DBG | Closing plugin on server side
	I1104 10:53:04.223090   37715 main.go:141] libmachine: Successfully made call to close driver server
	I1104 10:53:04.223120   37715 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 10:53:04.223137   37715 main.go:141] libmachine: Making call to close driver server
	I1104 10:53:04.223149   37715 main.go:141] libmachine: (ha-931571) Calling .Close
	I1104 10:53:04.223361   37715 main.go:141] libmachine: Successfully made call to close driver server
	I1104 10:53:04.223375   37715 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 10:53:04.225261   37715 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I1104 10:53:04.226730   37715 addons.go:510] duration metric: took 887.697522ms for enable addons: enabled=[default-storageclass storage-provisioner]
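	With the two addons enabled here, the effect can be double-checked from the host; assuming the kubeconfig context created for this profile, something like:
	  kubectl --context ha-931571 get storageclass
	  kubectl --context ha-931571 -n kube-system get pod storage-provisioner
	should show the "standard" default StorageClass and the storage-provisioner pod.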
	I1104 10:53:04.226762   37715 start.go:246] waiting for cluster config update ...
	I1104 10:53:04.226778   37715 start.go:255] writing updated cluster config ...
	I1104 10:53:04.228532   37715 out.go:201] 
	I1104 10:53:04.229911   37715 config.go:182] Loaded profile config "ha-931571": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1104 10:53:04.229982   37715 profile.go:143] Saving config to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/config.json ...
	I1104 10:53:04.231623   37715 out.go:177] * Starting "ha-931571-m02" control-plane node in "ha-931571" cluster
	I1104 10:53:04.233345   37715 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1104 10:53:04.233368   37715 cache.go:56] Caching tarball of preloaded images
	I1104 10:53:04.233465   37715 preload.go:172] Found /home/jenkins/minikube-integration/19906-19898/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1104 10:53:04.233476   37715 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1104 10:53:04.233547   37715 profile.go:143] Saving config to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/config.json ...
	I1104 10:53:04.233880   37715 start.go:360] acquireMachinesLock for ha-931571-m02: {Name:mka4ed965095cfb58c9b0f5916edff39b4236a2f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1104 10:53:04.233922   37715 start.go:364] duration metric: took 22.549µs to acquireMachinesLock for "ha-931571-m02"
	I1104 10:53:04.233935   37715 start.go:93] Provisioning new machine with config: &{Name:ha-931571 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-931571 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.67 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1104 10:53:04.234001   37715 start.go:125] createHost starting for "m02" (driver="kvm2")
	I1104 10:53:04.235719   37715 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1104 10:53:04.235815   37715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 10:53:04.235858   37715 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 10:53:04.250864   37715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34407
	I1104 10:53:04.251327   37715 main.go:141] libmachine: () Calling .GetVersion
	I1104 10:53:04.251891   37715 main.go:141] libmachine: Using API Version  1
	I1104 10:53:04.251920   37715 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 10:53:04.252265   37715 main.go:141] libmachine: () Calling .GetMachineName
	I1104 10:53:04.252475   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetMachineName
	I1104 10:53:04.252609   37715 main.go:141] libmachine: (ha-931571-m02) Calling .DriverName
	I1104 10:53:04.252797   37715 start.go:159] libmachine.API.Create for "ha-931571" (driver="kvm2")
	I1104 10:53:04.252829   37715 client.go:168] LocalClient.Create starting
	I1104 10:53:04.252866   37715 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem
	I1104 10:53:04.252907   37715 main.go:141] libmachine: Decoding PEM data...
	I1104 10:53:04.252928   37715 main.go:141] libmachine: Parsing certificate...
	I1104 10:53:04.252995   37715 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem
	I1104 10:53:04.253023   37715 main.go:141] libmachine: Decoding PEM data...
	I1104 10:53:04.253038   37715 main.go:141] libmachine: Parsing certificate...
	I1104 10:53:04.253066   37715 main.go:141] libmachine: Running pre-create checks...
	I1104 10:53:04.253077   37715 main.go:141] libmachine: (ha-931571-m02) Calling .PreCreateCheck
	I1104 10:53:04.253220   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetConfigRaw
	I1104 10:53:04.253654   37715 main.go:141] libmachine: Creating machine...
	I1104 10:53:04.253672   37715 main.go:141] libmachine: (ha-931571-m02) Calling .Create
	I1104 10:53:04.253800   37715 main.go:141] libmachine: (ha-931571-m02) Creating KVM machine...
	I1104 10:53:04.254992   37715 main.go:141] libmachine: (ha-931571-m02) DBG | found existing default KVM network
	I1104 10:53:04.255150   37715 main.go:141] libmachine: (ha-931571-m02) DBG | found existing private KVM network mk-ha-931571
	I1104 10:53:04.255299   37715 main.go:141] libmachine: (ha-931571-m02) Setting up store path in /home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571-m02 ...
	I1104 10:53:04.255322   37715 main.go:141] libmachine: (ha-931571-m02) Building disk image from file:///home/jenkins/minikube-integration/19906-19898/.minikube/cache/iso/amd64/minikube-v1.34.0-1730282777-19883-amd64.iso
	I1104 10:53:04.255385   37715 main.go:141] libmachine: (ha-931571-m02) DBG | I1104 10:53:04.255280   38069 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19906-19898/.minikube
	I1104 10:53:04.255479   37715 main.go:141] libmachine: (ha-931571-m02) Downloading /home/jenkins/minikube-integration/19906-19898/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19906-19898/.minikube/cache/iso/amd64/minikube-v1.34.0-1730282777-19883-amd64.iso...
	I1104 10:53:04.500647   37715 main.go:141] libmachine: (ha-931571-m02) DBG | I1104 10:53:04.500534   38069 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571-m02/id_rsa...
	I1104 10:53:04.797066   37715 main.go:141] libmachine: (ha-931571-m02) DBG | I1104 10:53:04.796939   38069 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571-m02/ha-931571-m02.rawdisk...
	I1104 10:53:04.797094   37715 main.go:141] libmachine: (ha-931571-m02) DBG | Writing magic tar header
	I1104 10:53:04.797104   37715 main.go:141] libmachine: (ha-931571-m02) DBG | Writing SSH key tar header
	I1104 10:53:04.797111   37715 main.go:141] libmachine: (ha-931571-m02) DBG | I1104 10:53:04.797059   38069 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571-m02 ...
	I1104 10:53:04.797220   37715 main.go:141] libmachine: (ha-931571-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571-m02
	I1104 10:53:04.797261   37715 main.go:141] libmachine: (ha-931571-m02) Setting executable bit set on /home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571-m02 (perms=drwx------)
	I1104 10:53:04.797271   37715 main.go:141] libmachine: (ha-931571-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19906-19898/.minikube/machines
	I1104 10:53:04.797289   37715 main.go:141] libmachine: (ha-931571-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19906-19898/.minikube
	I1104 10:53:04.797298   37715 main.go:141] libmachine: (ha-931571-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19906-19898
	I1104 10:53:04.797310   37715 main.go:141] libmachine: (ha-931571-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1104 10:53:04.797318   37715 main.go:141] libmachine: (ha-931571-m02) DBG | Checking permissions on dir: /home/jenkins
	I1104 10:53:04.797331   37715 main.go:141] libmachine: (ha-931571-m02) DBG | Checking permissions on dir: /home
	I1104 10:53:04.797349   37715 main.go:141] libmachine: (ha-931571-m02) Setting executable bit set on /home/jenkins/minikube-integration/19906-19898/.minikube/machines (perms=drwxr-xr-x)
	I1104 10:53:04.797357   37715 main.go:141] libmachine: (ha-931571-m02) DBG | Skipping /home - not owner
	I1104 10:53:04.797376   37715 main.go:141] libmachine: (ha-931571-m02) Setting executable bit set on /home/jenkins/minikube-integration/19906-19898/.minikube (perms=drwxr-xr-x)
	I1104 10:53:04.797389   37715 main.go:141] libmachine: (ha-931571-m02) Setting executable bit set on /home/jenkins/minikube-integration/19906-19898 (perms=drwxrwxr-x)
	I1104 10:53:04.797401   37715 main.go:141] libmachine: (ha-931571-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1104 10:53:04.797412   37715 main.go:141] libmachine: (ha-931571-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1104 10:53:04.797440   37715 main.go:141] libmachine: (ha-931571-m02) Creating domain...
	I1104 10:53:04.798407   37715 main.go:141] libmachine: (ha-931571-m02) define libvirt domain using xml: 
	I1104 10:53:04.798425   37715 main.go:141] libmachine: (ha-931571-m02) <domain type='kvm'>
	I1104 10:53:04.798436   37715 main.go:141] libmachine: (ha-931571-m02)   <name>ha-931571-m02</name>
	I1104 10:53:04.798449   37715 main.go:141] libmachine: (ha-931571-m02)   <memory unit='MiB'>2200</memory>
	I1104 10:53:04.798465   37715 main.go:141] libmachine: (ha-931571-m02)   <vcpu>2</vcpu>
	I1104 10:53:04.798472   37715 main.go:141] libmachine: (ha-931571-m02)   <features>
	I1104 10:53:04.798477   37715 main.go:141] libmachine: (ha-931571-m02)     <acpi/>
	I1104 10:53:04.798481   37715 main.go:141] libmachine: (ha-931571-m02)     <apic/>
	I1104 10:53:04.798486   37715 main.go:141] libmachine: (ha-931571-m02)     <pae/>
	I1104 10:53:04.798492   37715 main.go:141] libmachine: (ha-931571-m02)     
	I1104 10:53:04.798498   37715 main.go:141] libmachine: (ha-931571-m02)   </features>
	I1104 10:53:04.798502   37715 main.go:141] libmachine: (ha-931571-m02)   <cpu mode='host-passthrough'>
	I1104 10:53:04.798507   37715 main.go:141] libmachine: (ha-931571-m02)   
	I1104 10:53:04.798512   37715 main.go:141] libmachine: (ha-931571-m02)   </cpu>
	I1104 10:53:04.798522   37715 main.go:141] libmachine: (ha-931571-m02)   <os>
	I1104 10:53:04.798534   37715 main.go:141] libmachine: (ha-931571-m02)     <type>hvm</type>
	I1104 10:53:04.798546   37715 main.go:141] libmachine: (ha-931571-m02)     <boot dev='cdrom'/>
	I1104 10:53:04.798552   37715 main.go:141] libmachine: (ha-931571-m02)     <boot dev='hd'/>
	I1104 10:53:04.798564   37715 main.go:141] libmachine: (ha-931571-m02)     <bootmenu enable='no'/>
	I1104 10:53:04.798571   37715 main.go:141] libmachine: (ha-931571-m02)   </os>
	I1104 10:53:04.798580   37715 main.go:141] libmachine: (ha-931571-m02)   <devices>
	I1104 10:53:04.798585   37715 main.go:141] libmachine: (ha-931571-m02)     <disk type='file' device='cdrom'>
	I1104 10:53:04.798596   37715 main.go:141] libmachine: (ha-931571-m02)       <source file='/home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571-m02/boot2docker.iso'/>
	I1104 10:53:04.798601   37715 main.go:141] libmachine: (ha-931571-m02)       <target dev='hdc' bus='scsi'/>
	I1104 10:53:04.798630   37715 main.go:141] libmachine: (ha-931571-m02)       <readonly/>
	I1104 10:53:04.798653   37715 main.go:141] libmachine: (ha-931571-m02)     </disk>
	I1104 10:53:04.798678   37715 main.go:141] libmachine: (ha-931571-m02)     <disk type='file' device='disk'>
	I1104 10:53:04.798702   37715 main.go:141] libmachine: (ha-931571-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1104 10:53:04.798718   37715 main.go:141] libmachine: (ha-931571-m02)       <source file='/home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571-m02/ha-931571-m02.rawdisk'/>
	I1104 10:53:04.798732   37715 main.go:141] libmachine: (ha-931571-m02)       <target dev='hda' bus='virtio'/>
	I1104 10:53:04.798747   37715 main.go:141] libmachine: (ha-931571-m02)     </disk>
	I1104 10:53:04.798763   37715 main.go:141] libmachine: (ha-931571-m02)     <interface type='network'>
	I1104 10:53:04.798783   37715 main.go:141] libmachine: (ha-931571-m02)       <source network='mk-ha-931571'/>
	I1104 10:53:04.798799   37715 main.go:141] libmachine: (ha-931571-m02)       <model type='virtio'/>
	I1104 10:53:04.798811   37715 main.go:141] libmachine: (ha-931571-m02)     </interface>
	I1104 10:53:04.798822   37715 main.go:141] libmachine: (ha-931571-m02)     <interface type='network'>
	I1104 10:53:04.798835   37715 main.go:141] libmachine: (ha-931571-m02)       <source network='default'/>
	I1104 10:53:04.798846   37715 main.go:141] libmachine: (ha-931571-m02)       <model type='virtio'/>
	I1104 10:53:04.798858   37715 main.go:141] libmachine: (ha-931571-m02)     </interface>
	I1104 10:53:04.798868   37715 main.go:141] libmachine: (ha-931571-m02)     <serial type='pty'>
	I1104 10:53:04.798881   37715 main.go:141] libmachine: (ha-931571-m02)       <target port='0'/>
	I1104 10:53:04.798892   37715 main.go:141] libmachine: (ha-931571-m02)     </serial>
	I1104 10:53:04.798901   37715 main.go:141] libmachine: (ha-931571-m02)     <console type='pty'>
	I1104 10:53:04.798910   37715 main.go:141] libmachine: (ha-931571-m02)       <target type='serial' port='0'/>
	I1104 10:53:04.798916   37715 main.go:141] libmachine: (ha-931571-m02)     </console>
	I1104 10:53:04.798925   37715 main.go:141] libmachine: (ha-931571-m02)     <rng model='virtio'>
	I1104 10:53:04.798938   37715 main.go:141] libmachine: (ha-931571-m02)       <backend model='random'>/dev/random</backend>
	I1104 10:53:04.798948   37715 main.go:141] libmachine: (ha-931571-m02)     </rng>
	I1104 10:53:04.798958   37715 main.go:141] libmachine: (ha-931571-m02)     
	I1104 10:53:04.798967   37715 main.go:141] libmachine: (ha-931571-m02)     
	I1104 10:53:04.798977   37715 main.go:141] libmachine: (ha-931571-m02)   </devices>
	I1104 10:53:04.798990   37715 main.go:141] libmachine: (ha-931571-m02) </domain>
	I1104 10:53:04.799001   37715 main.go:141] libmachine: (ha-931571-m02) 
	I1104 10:53:04.805977   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined MAC address 52:54:00:5e:b4:47 in network default
	I1104 10:53:04.806519   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:04.806536   37715 main.go:141] libmachine: (ha-931571-m02) Ensuring networks are active...
	I1104 10:53:04.807291   37715 main.go:141] libmachine: (ha-931571-m02) Ensuring network default is active
	I1104 10:53:04.807614   37715 main.go:141] libmachine: (ha-931571-m02) Ensuring network mk-ha-931571 is active
	I1104 10:53:04.807998   37715 main.go:141] libmachine: (ha-931571-m02) Getting domain xml...
	I1104 10:53:04.808751   37715 main.go:141] libmachine: (ha-931571-m02) Creating domain...
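
The XML printed above is the libvirt domain definition minikube generates for the m02 node before defining and booting it. A minimal sketch of that define-and-start step, assuming the libvirt.org/go/libvirt Go bindings (cgo and libvirt development headers required; the XML literal below is a placeholder for a complete definition such as the one logged above):

    package main

    import (
        libvirt "libvirt.org/go/libvirt"
    )

    func main() {
        // Connect to the same URI the config above uses (KVMQemuURI:qemu:///system).
        conn, err := libvirt.NewConnect("qemu:///system")
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        // Placeholder: a real call would pass a full <domain type='kvm'> document
        // like the one printed in the log above.
        domainXML := `<domain type='kvm'>...</domain>`

        // "define libvirt domain using xml" step.
        dom, err := conn.DomainDefineXML(domainXML)
        if err != nil {
            panic(err)
        }
        defer dom.Free()

        // "Creating domain..." step: boot the defined VM.
        if err := dom.Create(); err != nil {
            panic(err)
        }
    }
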
	I1104 10:53:06.037689   37715 main.go:141] libmachine: (ha-931571-m02) Waiting to get IP...
	I1104 10:53:06.038416   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:06.038827   37715 main.go:141] libmachine: (ha-931571-m02) DBG | unable to find current IP address of domain ha-931571-m02 in network mk-ha-931571
	I1104 10:53:06.038856   37715 main.go:141] libmachine: (ha-931571-m02) DBG | I1104 10:53:06.038804   38069 retry.go:31] will retry after 244.727015ms: waiting for machine to come up
	I1104 10:53:06.285395   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:06.285853   37715 main.go:141] libmachine: (ha-931571-m02) DBG | unable to find current IP address of domain ha-931571-m02 in network mk-ha-931571
	I1104 10:53:06.285879   37715 main.go:141] libmachine: (ha-931571-m02) DBG | I1104 10:53:06.285815   38069 retry.go:31] will retry after 291.944786ms: waiting for machine to come up
	I1104 10:53:06.579413   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:06.579939   37715 main.go:141] libmachine: (ha-931571-m02) DBG | unable to find current IP address of domain ha-931571-m02 in network mk-ha-931571
	I1104 10:53:06.579964   37715 main.go:141] libmachine: (ha-931571-m02) DBG | I1104 10:53:06.579896   38069 retry.go:31] will retry after 446.911163ms: waiting for machine to come up
	I1104 10:53:07.028452   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:07.028838   37715 main.go:141] libmachine: (ha-931571-m02) DBG | unable to find current IP address of domain ha-931571-m02 in network mk-ha-931571
	I1104 10:53:07.028870   37715 main.go:141] libmachine: (ha-931571-m02) DBG | I1104 10:53:07.028792   38069 retry.go:31] will retry after 472.390697ms: waiting for machine to come up
	I1104 10:53:07.502204   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:07.502568   37715 main.go:141] libmachine: (ha-931571-m02) DBG | unable to find current IP address of domain ha-931571-m02 in network mk-ha-931571
	I1104 10:53:07.502592   37715 main.go:141] libmachine: (ha-931571-m02) DBG | I1104 10:53:07.502526   38069 retry.go:31] will retry after 662.15145ms: waiting for machine to come up
	I1104 10:53:08.166152   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:08.166583   37715 main.go:141] libmachine: (ha-931571-m02) DBG | unable to find current IP address of domain ha-931571-m02 in network mk-ha-931571
	I1104 10:53:08.166609   37715 main.go:141] libmachine: (ha-931571-m02) DBG | I1104 10:53:08.166538   38069 retry.go:31] will retry after 886.374206ms: waiting for machine to come up
	I1104 10:53:09.054240   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:09.054689   37715 main.go:141] libmachine: (ha-931571-m02) DBG | unable to find current IP address of domain ha-931571-m02 in network mk-ha-931571
	I1104 10:53:09.054715   37715 main.go:141] libmachine: (ha-931571-m02) DBG | I1104 10:53:09.054670   38069 retry.go:31] will retry after 963.475989ms: waiting for machine to come up
	I1104 10:53:10.020142   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:10.020587   37715 main.go:141] libmachine: (ha-931571-m02) DBG | unable to find current IP address of domain ha-931571-m02 in network mk-ha-931571
	I1104 10:53:10.020630   37715 main.go:141] libmachine: (ha-931571-m02) DBG | I1104 10:53:10.020571   38069 retry.go:31] will retry after 1.332433034s: waiting for machine to come up
	I1104 10:53:11.354908   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:11.355309   37715 main.go:141] libmachine: (ha-931571-m02) DBG | unable to find current IP address of domain ha-931571-m02 in network mk-ha-931571
	I1104 10:53:11.355331   37715 main.go:141] libmachine: (ha-931571-m02) DBG | I1104 10:53:11.355273   38069 retry.go:31] will retry after 1.652203867s: waiting for machine to come up
	I1104 10:53:13.009876   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:13.010297   37715 main.go:141] libmachine: (ha-931571-m02) DBG | unable to find current IP address of domain ha-931571-m02 in network mk-ha-931571
	I1104 10:53:13.010319   37715 main.go:141] libmachine: (ha-931571-m02) DBG | I1104 10:53:13.010254   38069 retry.go:31] will retry after 2.320402176s: waiting for machine to come up
	I1104 10:53:15.332045   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:15.332414   37715 main.go:141] libmachine: (ha-931571-m02) DBG | unable to find current IP address of domain ha-931571-m02 in network mk-ha-931571
	I1104 10:53:15.332441   37715 main.go:141] libmachine: (ha-931571-m02) DBG | I1104 10:53:15.332356   38069 retry.go:31] will retry after 2.652871808s: waiting for machine to come up
	I1104 10:53:17.987774   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:17.988211   37715 main.go:141] libmachine: (ha-931571-m02) DBG | unable to find current IP address of domain ha-931571-m02 in network mk-ha-931571
	I1104 10:53:17.988231   37715 main.go:141] libmachine: (ha-931571-m02) DBG | I1104 10:53:17.988174   38069 retry.go:31] will retry after 3.518414185s: waiting for machine to come up
	I1104 10:53:21.508515   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:21.508901   37715 main.go:141] libmachine: (ha-931571-m02) DBG | unable to find current IP address of domain ha-931571-m02 in network mk-ha-931571
	I1104 10:53:21.508926   37715 main.go:141] libmachine: (ha-931571-m02) DBG | I1104 10:53:21.508866   38069 retry.go:31] will retry after 4.345855832s: waiting for machine to come up
	I1104 10:53:25.856753   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:25.857143   37715 main.go:141] libmachine: (ha-931571-m02) Found IP for machine: 192.168.39.245
	I1104 10:53:25.857167   37715 main.go:141] libmachine: (ha-931571-m02) Reserving static IP address...
	I1104 10:53:25.857181   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has current primary IP address 192.168.39.245 and MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:25.857621   37715 main.go:141] libmachine: (ha-931571-m02) DBG | unable to find host DHCP lease matching {name: "ha-931571-m02", mac: "52:54:00:5c:86:6b", ip: "192.168.39.245"} in network mk-ha-931571
	I1104 10:53:25.931250   37715 main.go:141] libmachine: (ha-931571-m02) DBG | Getting to WaitForSSH function...
	I1104 10:53:25.931278   37715 main.go:141] libmachine: (ha-931571-m02) Reserved static IP address: 192.168.39.245
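
The repeated "will retry after ..." lines above come from a poll loop: the driver checks whether the domain has picked up a DHCP lease in network mk-ha-931571 and sleeps a growing, jittered interval between attempts until an IP appears. A minimal sketch of that pattern (the helper name and delays are illustrative, not minikube's actual retry package):

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // waitFor polls cond until it succeeds, sleeping a growing, jittered delay
    // between attempts, much like the 244ms, 291ms, 446ms, ... intervals above.
    func waitFor(cond func() (bool, error), maxWait time.Duration) error {
        delay := 200 * time.Millisecond
        deadline := time.Now().Add(maxWait)
        for time.Now().Before(deadline) {
            ok, err := cond()
            if err != nil {
                return err
            }
            if ok {
                return nil
            }
            sleep := delay + time.Duration(rand.Int63n(int64(delay)))
            fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
            time.Sleep(sleep)
            delay = delay * 3 / 2
        }
        return errors.New("timed out waiting for machine IP")
    }

    func main() {
        attempts := 0
        // Stand-in for "does the domain have an IP address yet?".
        _ = waitFor(func() (bool, error) {
            attempts++
            return attempts >= 5, nil
        }, time.Minute)
    }
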
	I1104 10:53:25.931296   37715 main.go:141] libmachine: (ha-931571-m02) Waiting for SSH to be available...
	I1104 10:53:25.933968   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:25.934431   37715 main.go:141] libmachine: (ha-931571-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:86:6b", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:53:18 +0000 UTC Type:0 Mac:52:54:00:5c:86:6b Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:minikube Clientid:01:52:54:00:5c:86:6b}
	I1104 10:53:25.934489   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:25.934562   37715 main.go:141] libmachine: (ha-931571-m02) DBG | Using SSH client type: external
	I1104 10:53:25.934591   37715 main.go:141] libmachine: (ha-931571-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571-m02/id_rsa (-rw-------)
	I1104 10:53:25.934652   37715 main.go:141] libmachine: (ha-931571-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.245 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1104 10:53:25.934674   37715 main.go:141] libmachine: (ha-931571-m02) DBG | About to run SSH command:
	I1104 10:53:25.934692   37715 main.go:141] libmachine: (ha-931571-m02) DBG | exit 0
	I1104 10:53:26.068913   37715 main.go:141] libmachine: (ha-931571-m02) DBG | SSH cmd err, output: <nil>: 
	I1104 10:53:26.069182   37715 main.go:141] libmachine: (ha-931571-m02) KVM machine creation complete!
	I1104 10:53:26.069569   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetConfigRaw
	I1104 10:53:26.070061   37715 main.go:141] libmachine: (ha-931571-m02) Calling .DriverName
	I1104 10:53:26.070245   37715 main.go:141] libmachine: (ha-931571-m02) Calling .DriverName
	I1104 10:53:26.070421   37715 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1104 10:53:26.070438   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetState
	I1104 10:53:26.071961   37715 main.go:141] libmachine: Detecting operating system of created instance...
	I1104 10:53:26.071975   37715 main.go:141] libmachine: Waiting for SSH to be available...
	I1104 10:53:26.071980   37715 main.go:141] libmachine: Getting to WaitForSSH function...
	I1104 10:53:26.071985   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHHostname
	I1104 10:53:26.074060   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:26.074383   37715 main.go:141] libmachine: (ha-931571-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:86:6b", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:53:18 +0000 UTC Type:0 Mac:52:54:00:5c:86:6b Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-931571-m02 Clientid:01:52:54:00:5c:86:6b}
	I1104 10:53:26.074403   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:26.074574   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHPort
	I1104 10:53:26.074737   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHKeyPath
	I1104 10:53:26.074878   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHKeyPath
	I1104 10:53:26.074976   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHUsername
	I1104 10:53:26.075126   37715 main.go:141] libmachine: Using SSH client type: native
	I1104 10:53:26.075361   37715 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.245 22 <nil> <nil>}
	I1104 10:53:26.075377   37715 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1104 10:53:26.184350   37715 main.go:141] libmachine: SSH cmd err, output: <nil>: 
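
Both SSH probes above simply run "exit 0" against the new node's port 22 with the generated id_rsa key and treat a clean exit as "SSH is available". A minimal sketch of the native-client variant, assuming golang.org/x/crypto/ssh (address, user, and key path taken from this log):

    package main

    import (
        "fmt"
        "os"
        "time"

        "golang.org/x/crypto/ssh"
    )

    // sshReady dials the node and runs "exit 0"; a nil error means SSH is usable.
    func sshReady(addr, user, keyPath string) error {
        key, err := os.ReadFile(keyPath)
        if err != nil {
            return err
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            return err
        }
        cfg := &ssh.ClientConfig{
            User:            user,
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no above
            Timeout:         10 * time.Second,
        }
        client, err := ssh.Dial("tcp", addr, cfg)
        if err != nil {
            return err
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            return err
        }
        defer sess.Close()
        return sess.Run("exit 0") // succeeds once the guest accepts commands
    }

    func main() {
        err := sshReady("192.168.39.245:22", "docker",
            "/home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571-m02/id_rsa")
        fmt.Println("ssh ready:", err == nil)
    }
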
	I1104 10:53:26.184379   37715 main.go:141] libmachine: Detecting the provisioner...
	I1104 10:53:26.184395   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHHostname
	I1104 10:53:26.186866   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:26.187176   37715 main.go:141] libmachine: (ha-931571-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:86:6b", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:53:18 +0000 UTC Type:0 Mac:52:54:00:5c:86:6b Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-931571-m02 Clientid:01:52:54:00:5c:86:6b}
	I1104 10:53:26.187196   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:26.187362   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHPort
	I1104 10:53:26.187546   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHKeyPath
	I1104 10:53:26.187699   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHKeyPath
	I1104 10:53:26.187825   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHUsername
	I1104 10:53:26.187985   37715 main.go:141] libmachine: Using SSH client type: native
	I1104 10:53:26.188193   37715 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.245 22 <nil> <nil>}
	I1104 10:53:26.188204   37715 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1104 10:53:26.301614   37715 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1104 10:53:26.301685   37715 main.go:141] libmachine: found compatible host: buildroot
	I1104 10:53:26.301699   37715 main.go:141] libmachine: Provisioning with buildroot...
	I1104 10:53:26.301711   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetMachineName
	I1104 10:53:26.301942   37715 buildroot.go:166] provisioning hostname "ha-931571-m02"
	I1104 10:53:26.301964   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetMachineName
	I1104 10:53:26.302139   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHHostname
	I1104 10:53:26.304767   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:26.305309   37715 main.go:141] libmachine: (ha-931571-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:86:6b", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:53:18 +0000 UTC Type:0 Mac:52:54:00:5c:86:6b Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-931571-m02 Clientid:01:52:54:00:5c:86:6b}
	I1104 10:53:26.305334   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:26.305470   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHPort
	I1104 10:53:26.305626   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHKeyPath
	I1104 10:53:26.305790   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHKeyPath
	I1104 10:53:26.305931   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHUsername
	I1104 10:53:26.306093   37715 main.go:141] libmachine: Using SSH client type: native
	I1104 10:53:26.306297   37715 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.245 22 <nil> <nil>}
	I1104 10:53:26.306310   37715 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-931571-m02 && echo "ha-931571-m02" | sudo tee /etc/hostname
	I1104 10:53:26.430814   37715 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-931571-m02
	
	I1104 10:53:26.430842   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHHostname
	I1104 10:53:26.433622   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:26.433925   37715 main.go:141] libmachine: (ha-931571-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:86:6b", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:53:18 +0000 UTC Type:0 Mac:52:54:00:5c:86:6b Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-931571-m02 Clientid:01:52:54:00:5c:86:6b}
	I1104 10:53:26.433953   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:26.434109   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHPort
	I1104 10:53:26.434330   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHKeyPath
	I1104 10:53:26.434473   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHKeyPath
	I1104 10:53:26.434584   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHUsername
	I1104 10:53:26.434716   37715 main.go:141] libmachine: Using SSH client type: native
	I1104 10:53:26.434907   37715 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.245 22 <nil> <nil>}
	I1104 10:53:26.434931   37715 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-931571-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-931571-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-931571-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1104 10:53:26.553495   37715 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1104 10:53:26.553519   37715 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19906-19898/.minikube CaCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19906-19898/.minikube}
	I1104 10:53:26.553534   37715 buildroot.go:174] setting up certificates
	I1104 10:53:26.553543   37715 provision.go:84] configureAuth start
	I1104 10:53:26.553551   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetMachineName
	I1104 10:53:26.553773   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetIP
	I1104 10:53:26.556203   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:26.556500   37715 main.go:141] libmachine: (ha-931571-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:86:6b", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:53:18 +0000 UTC Type:0 Mac:52:54:00:5c:86:6b Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-931571-m02 Clientid:01:52:54:00:5c:86:6b}
	I1104 10:53:26.556519   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:26.556610   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHHostname
	I1104 10:53:26.558806   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:26.559168   37715 main.go:141] libmachine: (ha-931571-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:86:6b", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:53:18 +0000 UTC Type:0 Mac:52:54:00:5c:86:6b Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-931571-m02 Clientid:01:52:54:00:5c:86:6b}
	I1104 10:53:26.559194   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:26.559467   37715 provision.go:143] copyHostCerts
	I1104 10:53:26.559496   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem
	I1104 10:53:26.559535   37715 exec_runner.go:144] found /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem, removing ...
	I1104 10:53:26.559546   37715 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem
	I1104 10:53:26.559623   37715 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem (1078 bytes)
	I1104 10:53:26.559707   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem
	I1104 10:53:26.559732   37715 exec_runner.go:144] found /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem, removing ...
	I1104 10:53:26.559741   37715 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem
	I1104 10:53:26.559778   37715 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem (1123 bytes)
	I1104 10:53:26.559830   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem
	I1104 10:53:26.559853   37715 exec_runner.go:144] found /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem, removing ...
	I1104 10:53:26.559865   37715 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem
	I1104 10:53:26.559899   37715 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem (1679 bytes)
	I1104 10:53:26.559968   37715 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem org=jenkins.ha-931571-m02 san=[127.0.0.1 192.168.39.245 ha-931571-m02 localhost minikube]
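
The provisioning step above mints a server certificate signed by the local minikube CA with the SANs listed in the log (127.0.0.1, 192.168.39.245, ha-931571-m02, localhost, minikube). A minimal sketch of generating such a certificate with Go's crypto/x509 (file names are illustrative, the CA key is assumed to be an RSA PKCS#1 key, and error handling is trimmed to a helper):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    // must keeps the sketch short; a real implementation would propagate errors.
    func must[T any](v T, err error) T {
        if err != nil {
            panic(err)
        }
        return v
    }

    func main() {
        // Illustrative paths; minikube reads ca.pem / ca-key.pem from its certs dir.
        caBlock, _ := pem.Decode(must(os.ReadFile("ca.pem")))
        caCert := must(x509.ParseCertificate(caBlock.Bytes))
        keyBlock, _ := pem.Decode(must(os.ReadFile("ca-key.pem")))
        caKey := must(x509.ParsePKCS1PrivateKey(keyBlock.Bytes)) // assumes an RSA PKCS#1 CA key

        serverKey := must(rsa.GenerateKey(rand.Reader, 2048))
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-931571-m02"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration above
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SANs from the log line above.
            DNSNames:    []string{"ha-931571-m02", "localhost", "minikube"},
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.245")},
        }
        der := must(x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey))
        _ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
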
	I1104 10:53:26.827173   37715 provision.go:177] copyRemoteCerts
	I1104 10:53:26.827226   37715 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1104 10:53:26.827248   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHHostname
	I1104 10:53:26.829975   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:26.830343   37715 main.go:141] libmachine: (ha-931571-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:86:6b", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:53:18 +0000 UTC Type:0 Mac:52:54:00:5c:86:6b Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-931571-m02 Clientid:01:52:54:00:5c:86:6b}
	I1104 10:53:26.830372   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:26.830576   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHPort
	I1104 10:53:26.830763   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHKeyPath
	I1104 10:53:26.830912   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHUsername
	I1104 10:53:26.831022   37715 sshutil.go:53] new ssh client: &{IP:192.168.39.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571-m02/id_rsa Username:docker}
	I1104 10:53:26.923318   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1104 10:53:26.923390   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1104 10:53:26.950708   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1104 10:53:26.950773   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1104 10:53:26.976975   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1104 10:53:26.977045   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1104 10:53:27.002230   37715 provision.go:87] duration metric: took 448.676469ms to configureAuth
	I1104 10:53:27.002252   37715 buildroot.go:189] setting minikube options for container-runtime
	I1104 10:53:27.002404   37715 config.go:182] Loaded profile config "ha-931571": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1104 10:53:27.002475   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHHostname
	I1104 10:53:27.005273   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:27.005618   37715 main.go:141] libmachine: (ha-931571-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:86:6b", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:53:18 +0000 UTC Type:0 Mac:52:54:00:5c:86:6b Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-931571-m02 Clientid:01:52:54:00:5c:86:6b}
	I1104 10:53:27.005646   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:27.005772   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHPort
	I1104 10:53:27.005978   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHKeyPath
	I1104 10:53:27.006123   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHKeyPath
	I1104 10:53:27.006279   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHUsername
	I1104 10:53:27.006465   37715 main.go:141] libmachine: Using SSH client type: native
	I1104 10:53:27.006627   37715 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.245 22 <nil> <nil>}
	I1104 10:53:27.006641   37715 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1104 10:53:27.235271   37715 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1104 10:53:27.235297   37715 main.go:141] libmachine: Checking connection to Docker...
	I1104 10:53:27.235305   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetURL
	I1104 10:53:27.236550   37715 main.go:141] libmachine: (ha-931571-m02) DBG | Using libvirt version 6000000
	I1104 10:53:27.238826   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:27.239189   37715 main.go:141] libmachine: (ha-931571-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:86:6b", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:53:18 +0000 UTC Type:0 Mac:52:54:00:5c:86:6b Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-931571-m02 Clientid:01:52:54:00:5c:86:6b}
	I1104 10:53:27.239220   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:27.239401   37715 main.go:141] libmachine: Docker is up and running!
	I1104 10:53:27.239418   37715 main.go:141] libmachine: Reticulating splines...
	I1104 10:53:27.239426   37715 client.go:171] duration metric: took 22.986586779s to LocalClient.Create
	I1104 10:53:27.239451   37715 start.go:167] duration metric: took 22.986656312s to libmachine.API.Create "ha-931571"
	I1104 10:53:27.239472   37715 start.go:293] postStartSetup for "ha-931571-m02" (driver="kvm2")
	I1104 10:53:27.239488   37715 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1104 10:53:27.239510   37715 main.go:141] libmachine: (ha-931571-m02) Calling .DriverName
	I1104 10:53:27.239721   37715 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1104 10:53:27.239747   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHHostname
	I1104 10:53:27.241968   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:27.242332   37715 main.go:141] libmachine: (ha-931571-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:86:6b", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:53:18 +0000 UTC Type:0 Mac:52:54:00:5c:86:6b Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-931571-m02 Clientid:01:52:54:00:5c:86:6b}
	I1104 10:53:27.242352   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:27.242491   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHPort
	I1104 10:53:27.242658   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHKeyPath
	I1104 10:53:27.242769   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHUsername
	I1104 10:53:27.242872   37715 sshutil.go:53] new ssh client: &{IP:192.168.39.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571-m02/id_rsa Username:docker}
	I1104 10:53:27.327061   37715 ssh_runner.go:195] Run: cat /etc/os-release
	I1104 10:53:27.331021   37715 info.go:137] Remote host: Buildroot 2023.02.9
	I1104 10:53:27.331050   37715 filesync.go:126] Scanning /home/jenkins/minikube-integration/19906-19898/.minikube/addons for local assets ...
	I1104 10:53:27.331133   37715 filesync.go:126] Scanning /home/jenkins/minikube-integration/19906-19898/.minikube/files for local assets ...
	I1104 10:53:27.331207   37715 filesync.go:149] local asset: /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem -> 272182.pem in /etc/ssl/certs
	I1104 10:53:27.331218   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem -> /etc/ssl/certs/272182.pem
	I1104 10:53:27.331300   37715 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1104 10:53:27.341280   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem --> /etc/ssl/certs/272182.pem (1708 bytes)
	I1104 10:53:27.363737   37715 start.go:296] duration metric: took 124.248011ms for postStartSetup
	I1104 10:53:27.363783   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetConfigRaw
	I1104 10:53:27.364431   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetIP
	I1104 10:53:27.367195   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:27.367660   37715 main.go:141] libmachine: (ha-931571-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:86:6b", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:53:18 +0000 UTC Type:0 Mac:52:54:00:5c:86:6b Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-931571-m02 Clientid:01:52:54:00:5c:86:6b}
	I1104 10:53:27.367698   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:27.367926   37715 profile.go:143] Saving config to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/config.json ...
	I1104 10:53:27.368121   37715 start.go:128] duration metric: took 23.134111471s to createHost
	I1104 10:53:27.368147   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHHostname
	I1104 10:53:27.370510   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:27.370846   37715 main.go:141] libmachine: (ha-931571-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:86:6b", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:53:18 +0000 UTC Type:0 Mac:52:54:00:5c:86:6b Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-931571-m02 Clientid:01:52:54:00:5c:86:6b}
	I1104 10:53:27.370881   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:27.371043   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHPort
	I1104 10:53:27.371226   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHKeyPath
	I1104 10:53:27.371432   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHKeyPath
	I1104 10:53:27.371573   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHUsername
	I1104 10:53:27.371728   37715 main.go:141] libmachine: Using SSH client type: native
	I1104 10:53:27.371899   37715 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.245 22 <nil> <nil>}
	I1104 10:53:27.371912   37715 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1104 10:53:27.485557   37715 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730717607.449108710
	
	I1104 10:53:27.485578   37715 fix.go:216] guest clock: 1730717607.449108710
	I1104 10:53:27.485585   37715 fix.go:229] Guest: 2024-11-04 10:53:27.44910871 +0000 UTC Remote: 2024-11-04 10:53:27.368133628 +0000 UTC m=+66.039651871 (delta=80.975082ms)
	I1104 10:53:27.485600   37715 fix.go:200] guest clock delta is within tolerance: 80.975082ms
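
The clock check above runs "date +%s.%N" on the guest, parses the result, and compares it against the host's wall clock; here the ~81ms delta is accepted as within tolerance. A minimal sketch of that comparison (the 1-second tolerance is illustrative, not minikube's actual threshold):

    package main

    import (
        "fmt"
        "strconv"
        "time"
    )

    func main() {
        // Output of `date +%s.%N` on the guest, as captured in the log above.
        guestOut := "1730717607.449108710"
        secs, err := strconv.ParseFloat(guestOut, 64)
        if err != nil {
            panic(err)
        }
        guest := time.Unix(0, int64(secs*float64(time.Second)))

        delta := time.Since(guest)
        if delta < 0 {
            delta = -delta
        }
        // Tolerance is illustrative; the run above accepted a delta of ~81ms.
        if delta < time.Second {
            fmt.Printf("guest clock delta %v is within tolerance\n", delta)
        } else {
            fmt.Printf("guest clock delta %v exceeds tolerance\n", delta)
        }
    }
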
	I1104 10:53:27.485605   37715 start.go:83] releasing machines lock for "ha-931571-m02", held for 23.251676872s
	I1104 10:53:27.485620   37715 main.go:141] libmachine: (ha-931571-m02) Calling .DriverName
	I1104 10:53:27.485857   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetIP
	I1104 10:53:27.488648   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:27.489014   37715 main.go:141] libmachine: (ha-931571-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:86:6b", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:53:18 +0000 UTC Type:0 Mac:52:54:00:5c:86:6b Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-931571-m02 Clientid:01:52:54:00:5c:86:6b}
	I1104 10:53:27.489041   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:27.491305   37715 out.go:177] * Found network options:
	I1104 10:53:27.492602   37715 out.go:177]   - NO_PROXY=192.168.39.67
	W1104 10:53:27.493715   37715 proxy.go:119] fail to check proxy env: Error ip not in block
	I1104 10:53:27.493752   37715 main.go:141] libmachine: (ha-931571-m02) Calling .DriverName
	I1104 10:53:27.494253   37715 main.go:141] libmachine: (ha-931571-m02) Calling .DriverName
	I1104 10:53:27.494447   37715 main.go:141] libmachine: (ha-931571-m02) Calling .DriverName
	I1104 10:53:27.494556   37715 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1104 10:53:27.494595   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHHostname
	W1104 10:53:27.494597   37715 proxy.go:119] fail to check proxy env: Error ip not in block
	I1104 10:53:27.494657   37715 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1104 10:53:27.494679   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHHostname
	I1104 10:53:27.497460   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:27.497637   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:27.497850   37715 main.go:141] libmachine: (ha-931571-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:86:6b", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:53:18 +0000 UTC Type:0 Mac:52:54:00:5c:86:6b Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-931571-m02 Clientid:01:52:54:00:5c:86:6b}
	I1104 10:53:27.497871   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:27.497991   37715 main.go:141] libmachine: (ha-931571-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:86:6b", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:53:18 +0000 UTC Type:0 Mac:52:54:00:5c:86:6b Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-931571-m02 Clientid:01:52:54:00:5c:86:6b}
	I1104 10:53:27.498003   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:27.498025   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHPort
	I1104 10:53:27.498232   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHKeyPath
	I1104 10:53:27.498254   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHPort
	I1104 10:53:27.498403   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHKeyPath
	I1104 10:53:27.498437   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHUsername
	I1104 10:53:27.498538   37715 sshutil.go:53] new ssh client: &{IP:192.168.39.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571-m02/id_rsa Username:docker}
	I1104 10:53:27.498550   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHUsername
	I1104 10:53:27.498773   37715 sshutil.go:53] new ssh client: &{IP:192.168.39.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571-m02/id_rsa Username:docker}
	I1104 10:53:27.735755   37715 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1104 10:53:27.742047   37715 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1104 10:53:27.742118   37715 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1104 10:53:27.757546   37715 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1104 10:53:27.757568   37715 start.go:495] detecting cgroup driver to use...
	I1104 10:53:27.757654   37715 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1104 10:53:27.775341   37715 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1104 10:53:27.789267   37715 docker.go:217] disabling cri-docker service (if available) ...
	I1104 10:53:27.789322   37715 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1104 10:53:27.802395   37715 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1104 10:53:27.815846   37715 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1104 10:53:27.932464   37715 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1104 10:53:28.072054   37715 docker.go:233] disabling docker service ...
	I1104 10:53:28.072113   37715 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1104 10:53:28.085955   37715 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1104 10:53:28.098515   37715 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1104 10:53:28.231393   37715 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1104 10:53:28.348075   37715 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1104 10:53:28.360668   37715 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1104 10:53:28.377621   37715 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1104 10:53:28.377680   37715 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 10:53:28.387614   37715 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1104 10:53:28.387678   37715 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 10:53:28.397527   37715 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 10:53:28.406950   37715 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 10:53:28.416691   37715 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1104 10:53:28.426696   37715 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 10:53:28.436536   37715 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 10:53:28.452706   37715 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 10:53:28.462377   37715 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1104 10:53:28.471479   37715 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1104 10:53:28.471541   37715 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1104 10:53:28.484536   37715 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1104 10:53:28.493914   37715 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1104 10:53:28.602971   37715 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1104 10:53:28.692433   37715 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1104 10:53:28.692522   37715 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1104 10:53:28.696783   37715 start.go:563] Will wait 60s for crictl version
	I1104 10:53:28.696822   37715 ssh_runner.go:195] Run: which crictl
	I1104 10:53:28.700013   37715 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1104 10:53:28.734056   37715 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1104 10:53:28.734128   37715 ssh_runner.go:195] Run: crio --version
	I1104 10:53:28.760475   37715 ssh_runner.go:195] Run: crio --version
	I1104 10:53:28.789783   37715 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1104 10:53:28.791233   37715 out.go:177]   - env NO_PROXY=192.168.39.67
	I1104 10:53:28.792582   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetIP
	I1104 10:53:28.795120   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:28.795494   37715 main.go:141] libmachine: (ha-931571-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:86:6b", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:53:18 +0000 UTC Type:0 Mac:52:54:00:5c:86:6b Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-931571-m02 Clientid:01:52:54:00:5c:86:6b}
	I1104 10:53:28.795520   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:28.795759   37715 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1104 10:53:28.799797   37715 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
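
The grep / "{ grep -v ...; echo ...; } > /tmp/h.$$; sudo cp" pipeline above is how the /etc/hosts entry for host.minikube.internal (and, later in the run, control-plane.minikube.internal) is kept idempotent: drop any stale line for that name, then append the current IP. A rough Go equivalent of that rewrite, assuming write access to the file; names are illustrative:

package main

import (
	"fmt"
	"os"
	"strings"
)

// pinHost removes any existing entry for host from hostsPath and appends
// "ip<TAB>host", mirroring the grep -v / echo pipeline in the log above.
func pinHost(hostsPath, ip, host string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
	var kept []string
	for _, line := range lines {
		if strings.HasSuffix(line, "\t"+host) {
			continue // drop stale entries for this name
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := pinHost("/etc/hosts", "192.168.39.1", "host.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
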
	I1104 10:53:28.811896   37715 mustload.go:65] Loading cluster: ha-931571
	I1104 10:53:28.812115   37715 config.go:182] Loaded profile config "ha-931571": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1104 10:53:28.812360   37715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 10:53:28.812401   37715 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 10:53:28.826717   37715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34275
	I1104 10:53:28.827181   37715 main.go:141] libmachine: () Calling .GetVersion
	I1104 10:53:28.827674   37715 main.go:141] libmachine: Using API Version  1
	I1104 10:53:28.827693   37715 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 10:53:28.828004   37715 main.go:141] libmachine: () Calling .GetMachineName
	I1104 10:53:28.828173   37715 main.go:141] libmachine: (ha-931571) Calling .GetState
	I1104 10:53:28.829698   37715 host.go:66] Checking if "ha-931571" exists ...
	I1104 10:53:28.829978   37715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 10:53:28.830013   37715 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 10:53:28.844302   37715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41319
	I1104 10:53:28.844715   37715 main.go:141] libmachine: () Calling .GetVersion
	I1104 10:53:28.845157   37715 main.go:141] libmachine: Using API Version  1
	I1104 10:53:28.845180   37715 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 10:53:28.845561   37715 main.go:141] libmachine: () Calling .GetMachineName
	I1104 10:53:28.845729   37715 main.go:141] libmachine: (ha-931571) Calling .DriverName
	I1104 10:53:28.845886   37715 certs.go:68] Setting up /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571 for IP: 192.168.39.245
	I1104 10:53:28.845896   37715 certs.go:194] generating shared ca certs ...
	I1104 10:53:28.845908   37715 certs.go:226] acquiring lock for ca certs: {Name:mk4fb0469da6697654d0290fd25edddd463f47c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 10:53:28.846013   37715 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.key
	I1104 10:53:28.846050   37715 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.key
	I1104 10:53:28.846056   37715 certs.go:256] generating profile certs ...
	I1104 10:53:28.846117   37715 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/client.key
	I1104 10:53:28.846138   37715 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.key.44df713a
	I1104 10:53:28.846149   37715 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.crt.44df713a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.67 192.168.39.245 192.168.39.254]
	I1104 10:53:28.973533   37715 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.crt.44df713a ...
	I1104 10:53:28.973558   37715 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.crt.44df713a: {Name:mk251fe01c9791f2c1df00673ac1979d7532e3b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 10:53:28.973716   37715 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.key.44df713a ...
	I1104 10:53:28.973729   37715 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.key.44df713a: {Name:mkef3dc2affbfe3d37549d8d043a12581b7267b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 10:53:28.973806   37715 certs.go:381] copying /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.crt.44df713a -> /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.crt
	I1104 10:53:28.973935   37715 certs.go:385] copying /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.key.44df713a -> /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.key
	I1104 10:53:28.974053   37715 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/proxy-client.key
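
The "generating signed profile cert" step produces an apiserver serving certificate whose IP SANs cover the in-cluster service VIP (10.96.0.1), loopback, both control-plane node IPs and the kube-vip address 192.168.39.254, so the API server is reachable by any of those addresses. A compressed, self-signed sketch of building that SAN list with crypto/x509 (minikube signs with its cluster CA instead; everything here is illustrative):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// The IP SANs seen in the log: service VIP, loopback, node IPs, kube-vip HAVIP.
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.39.67"), net.ParseIP("192.168.39.245"), net.ParseIP("192.168.39.254"),
		},
	}
	// Self-signed for brevity; the real cert is signed by minikubeCA.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
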
	I1104 10:53:28.974067   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1104 10:53:28.974079   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1104 10:53:28.974092   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1104 10:53:28.974103   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1104 10:53:28.974114   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1104 10:53:28.974127   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1104 10:53:28.974139   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1104 10:53:28.974151   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1104 10:53:28.974191   37715 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218.pem (1338 bytes)
	W1104 10:53:28.974219   37715 certs.go:480] ignoring /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218_empty.pem, impossibly tiny 0 bytes
	I1104 10:53:28.974228   37715 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem (1675 bytes)
	I1104 10:53:28.974249   37715 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem (1078 bytes)
	I1104 10:53:28.974273   37715 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem (1123 bytes)
	I1104 10:53:28.974294   37715 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem (1679 bytes)
	I1104 10:53:28.974329   37715 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem (1708 bytes)
	I1104 10:53:28.974353   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218.pem -> /usr/share/ca-certificates/27218.pem
	I1104 10:53:28.974366   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem -> /usr/share/ca-certificates/272182.pem
	I1104 10:53:28.974379   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1104 10:53:28.974408   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHHostname
	I1104 10:53:28.977338   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:53:28.977742   37715 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 10:53:28.977776   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:53:28.977945   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHPort
	I1104 10:53:28.978138   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 10:53:28.978269   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHUsername
	I1104 10:53:28.978403   37715 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571/id_rsa Username:docker}
	I1104 10:53:29.049594   37715 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1104 10:53:29.054655   37715 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1104 10:53:29.065445   37715 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1104 10:53:29.070822   37715 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1104 10:53:29.082304   37715 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1104 10:53:29.086563   37715 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1104 10:53:29.098922   37715 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1104 10:53:29.103085   37715 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1104 10:53:29.113035   37715 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1104 10:53:29.117456   37715 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1104 10:53:29.127764   37715 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1104 10:53:29.131629   37715 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1104 10:53:29.143522   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1104 10:53:29.167376   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1104 10:53:29.189625   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1104 10:53:29.212768   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1104 10:53:29.235967   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1104 10:53:29.263247   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1104 10:53:29.285302   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1104 10:53:29.306703   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1104 10:53:29.328748   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218.pem --> /usr/share/ca-certificates/27218.pem (1338 bytes)
	I1104 10:53:29.350648   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem --> /usr/share/ca-certificates/272182.pem (1708 bytes)
	I1104 10:53:29.372264   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1104 10:53:29.395406   37715 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1104 10:53:29.410777   37715 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1104 10:53:29.427042   37715 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1104 10:53:29.443978   37715 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1104 10:53:29.460125   37715 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1104 10:53:29.475628   37715 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1104 10:53:29.491185   37715 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1104 10:53:29.507040   37715 ssh_runner.go:195] Run: openssl version
	I1104 10:53:29.512376   37715 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1104 10:53:29.522746   37715 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1104 10:53:29.526894   37715 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  4 10:38 /usr/share/ca-certificates/minikubeCA.pem
	I1104 10:53:29.526950   37715 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1104 10:53:29.532557   37715 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1104 10:53:29.543248   37715 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/27218.pem && ln -fs /usr/share/ca-certificates/27218.pem /etc/ssl/certs/27218.pem"
	I1104 10:53:29.553302   37715 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/27218.pem
	I1104 10:53:29.557429   37715 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  4 10:49 /usr/share/ca-certificates/27218.pem
	I1104 10:53:29.557475   37715 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/27218.pem
	I1104 10:53:29.562752   37715 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/27218.pem /etc/ssl/certs/51391683.0"
	I1104 10:53:29.573585   37715 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/272182.pem && ln -fs /usr/share/ca-certificates/272182.pem /etc/ssl/certs/272182.pem"
	I1104 10:53:29.583479   37715 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/272182.pem
	I1104 10:53:29.587879   37715 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  4 10:49 /usr/share/ca-certificates/272182.pem
	I1104 10:53:29.587928   37715 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/272182.pem
	I1104 10:53:29.594267   37715 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/272182.pem /etc/ssl/certs/3ec20f2e.0"
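
The openssl/ln sequence above installs each CA the way OpenSSL's trust store expects: place the PEM under /usr/share/ca-certificates, then link /etc/ssl/certs/<subject-hash>.0 to it, where the hash is what "openssl x509 -hash -noout" prints. A small sketch of that last step, shelling out to openssl (paths are examples):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// trustCert links certPath into /etc/ssl/certs under its OpenSSL subject hash,
// which is what the "openssl x509 -hash" + "ln -fs" pair in the log achieves.
func trustCert(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	os.Remove(link) // replace a stale link if present
	return os.Symlink(certPath, link)
}

func main() {
	if err := trustCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
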
	I1104 10:53:29.605746   37715 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1104 10:53:29.609628   37715 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1104 10:53:29.609689   37715 kubeadm.go:934] updating node {m02 192.168.39.245 8443 v1.31.2 crio true true} ...
	I1104 10:53:29.609774   37715 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-931571-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.245
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-931571 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
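
The kubelet unit dumped above is the piece minikube templates per machine: only --hostname-override and --node-ip change between nodes, so the m02 kubelet registers under its own name and address rather than the primary's. A toy renderer for that drop-in, with hypothetical function and file names (the real drop-in is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf):

package main

import (
	"fmt"
	"os"
)

// kubeletDropIn renders the per-node systemd override seen in the log;
// version, node name and node IP are the only moving parts.
func kubeletDropIn(version, nodeName, nodeIP string) string {
	return fmt.Sprintf(`[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/%[1]s/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=%[2]s --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=%[3]s

[Install]
`, version, nodeName, nodeIP)
}

func main() {
	unit := kubeletDropIn("v1.31.2", "ha-931571-m02", "192.168.39.245")
	// Written to the working directory here to avoid requiring root.
	if err := os.WriteFile("10-kubeadm.conf", []byte(unit), 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
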
	I1104 10:53:29.609799   37715 kube-vip.go:115] generating kube-vip config ...
	I1104 10:53:29.609830   37715 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1104 10:53:29.626833   37715 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1104 10:53:29.626905   37715 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.5
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
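
The generated kube-vip static pod is what backs the 192.168.39.254 HAVIP: cp_enable plus vip_leaderelection let the control-plane nodes elect a single holder of the address (announced via ARP on eth0), and lb_enable/lb_port spread API traffic on 8443 across the members. A trimmed text/template rendering of the same manifest shape, with illustrative field names; the full manifest also mounts /etc/kubernetes/admin.conf and adds the NET_ADMIN/NET_RAW capabilities shown above:

package main

import (
	"os"
	"text/template"
)

// A pared-down version of the manifest above: just the settings that differ
// per cluster (image, interface, VIP address, port).
const kubeVipTmpl = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - name: kube-vip
    image: {{ .Image }}
    args: ["manager"]
    env:
    - {name: vip_interface, value: {{ .Interface }}}
    - {name: address, value: "{{ .VIP }}"}
    - {name: port, value: "{{ .Port }}"}
    - {name: cp_enable, value: "true"}
    - {name: vip_leaderelection, value: "true"}
    - {name: lb_enable, value: "true"}
  hostNetwork: true
`

func main() {
	t := template.Must(template.New("kube-vip").Parse(kubeVipTmpl))
	_ = t.Execute(os.Stdout, map[string]string{
		"Image":     "ghcr.io/kube-vip/kube-vip:v0.8.5",
		"Interface": "eth0",
		"VIP":       "192.168.39.254",
		"Port":      "8443",
	})
}
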
	I1104 10:53:29.626952   37715 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1104 10:53:29.636985   37715 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.2': No such file or directory
	
	Initiating transfer...
	I1104 10:53:29.637050   37715 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.2
	I1104 10:53:29.646235   37715 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
	I1104 10:53:29.646266   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/linux/amd64/v1.31.2/kubectl -> /var/lib/minikube/binaries/v1.31.2/kubectl
	I1104 10:53:29.646297   37715 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19906-19898/.minikube/cache/linux/amd64/v1.31.2/kubelet
	I1104 10:53:29.646318   37715 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19906-19898/.minikube/cache/linux/amd64/v1.31.2/kubeadm
	I1104 10:53:29.646321   37715 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl
	I1104 10:53:29.650548   37715 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubectl': No such file or directory
	I1104 10:53:29.650575   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/cache/linux/amd64/v1.31.2/kubectl --> /var/lib/minikube/binaries/v1.31.2/kubectl (56381592 bytes)
	I1104 10:53:30.395926   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/linux/amd64/v1.31.2/kubeadm -> /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1104 10:53:30.396007   37715 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1104 10:53:30.400715   37715 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubeadm': No such file or directory
	I1104 10:53:30.400746   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/cache/linux/amd64/v1.31.2/kubeadm --> /var/lib/minikube/binaries/v1.31.2/kubeadm (58290328 bytes)
	I1104 10:53:30.426541   37715 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1104 10:53:30.447212   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/linux/amd64/v1.31.2/kubelet -> /var/lib/minikube/binaries/v1.31.2/kubelet
	I1104 10:53:30.447328   37715 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet
	I1104 10:53:30.458650   37715 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubelet': No such file or directory
	I1104 10:53:30.458689   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/cache/linux/amd64/v1.31.2/kubelet --> /var/lib/minikube/binaries/v1.31.2/kubelet (76902744 bytes)
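
The "Not caching binary" / "Downloading: ...?checksum=file:...sha256" lines describe the cache fill: each Kubernetes binary is fetched from dl.k8s.io, verified against the published .sha256 file, and then copied into /var/lib/minikube/binaries only when the preceding stat-based existence check failed. A bare-bones sketch of the download-and-verify half (the URL is from the log; helper names and the destination are illustrative):

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// fetchVerified downloads url to dest and checks it against the hex digest
// published at url+".sha256", roughly what the checksum=file:... notation means.
func fetchVerified(url, dest string) error {
	want, err := httpGetString(url + ".sha256")
	if err != nil {
		return err
	}
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	f, err := os.Create(dest)
	if err != nil {
		return err
	}
	defer f.Close()

	h := sha256.New()
	if _, err := io.Copy(io.MultiWriter(f, h), resp.Body); err != nil {
		return err
	}
	got := hex.EncodeToString(h.Sum(nil))
	fields := strings.Fields(want)
	if len(fields) == 0 || got != fields[0] {
		return fmt.Errorf("checksum mismatch for %s: got %s, want %s", url, got, want)
	}
	return nil
}

func httpGetString(url string) (string, error) {
	resp, err := http.Get(url)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	b, err := io.ReadAll(resp.Body)
	return strings.TrimSpace(string(b)), err
}

func main() {
	url := "https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet"
	if err := fetchVerified(url, "kubelet"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
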
	I1104 10:53:30.919365   37715 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1104 10:53:30.928897   37715 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1104 10:53:30.946677   37715 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1104 10:53:30.963726   37715 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1104 10:53:30.981653   37715 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1104 10:53:30.985571   37715 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1104 10:53:30.998898   37715 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1104 10:53:31.132385   37715 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1104 10:53:31.149804   37715 host.go:66] Checking if "ha-931571" exists ...
	I1104 10:53:31.150291   37715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 10:53:31.150345   37715 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 10:53:31.165094   37715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39235
	I1104 10:53:31.165587   37715 main.go:141] libmachine: () Calling .GetVersion
	I1104 10:53:31.166163   37715 main.go:141] libmachine: Using API Version  1
	I1104 10:53:31.166186   37715 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 10:53:31.166555   37715 main.go:141] libmachine: () Calling .GetMachineName
	I1104 10:53:31.166779   37715 main.go:141] libmachine: (ha-931571) Calling .DriverName
	I1104 10:53:31.166958   37715 start.go:317] joinCluster: &{Name:ha-931571 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 Cluster
Name:ha-931571 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.67 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.245 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1104 10:53:31.167051   37715 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1104 10:53:31.167067   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHHostname
	I1104 10:53:31.169771   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:53:31.170152   37715 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 10:53:31.170182   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:53:31.170376   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHPort
	I1104 10:53:31.170562   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 10:53:31.170687   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHUsername
	I1104 10:53:31.170781   37715 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571/id_rsa Username:docker}
	I1104 10:53:31.306325   37715 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.245 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1104 10:53:31.306377   37715 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token kmocbz.ds2v3q10rcir1aso --discovery-token-ca-cert-hash sha256:95833ff41306ac800b975e28f2da3aedc13f83db6865dfb0ce3f10d781d75fd9 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-931571-m02 --control-plane --apiserver-advertise-address=192.168.39.245 --apiserver-bind-port=8443"
	I1104 10:53:52.004440   37715 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token kmocbz.ds2v3q10rcir1aso --discovery-token-ca-cert-hash sha256:95833ff41306ac800b975e28f2da3aedc13f83db6865dfb0ce3f10d781d75fd9 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-931571-m02 --control-plane --apiserver-advertise-address=192.168.39.245 --apiserver-bind-port=8443": (20.698039868s)
	I1104 10:53:52.004481   37715 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1104 10:53:52.565954   37715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-931571-m02 minikube.k8s.io/updated_at=2024_11_04T10_53_52_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=c03dd974d73f8853a1a57928c124797a5ae24dc4 minikube.k8s.io/name=ha-931571 minikube.k8s.io/primary=false
	I1104 10:53:52.722802   37715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-931571-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I1104 10:53:52.847701   37715 start.go:319] duration metric: took 21.680738209s to joinCluster
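
The join sequence for m02 is: on the existing control plane, mint a join command with "kubeadm token create --print-join-command --ttl=0"; run that command on the new machine with --control-plane, --apiserver-advertise-address=<node IP>, --node-name and the CRI socket appended; enable and start kubelet; then label and un-taint the node from the primary. A minimal sketch of the first step, capturing the printed join command (running it remotely over SSH is left out):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// printJoinCommand asks kubeadm on an existing control-plane host for a
// ready-made "kubeadm join ..." line, mirroring the token create step above.
// It assumes kubeadm and a valid admin kubeconfig are present on this machine.
func printJoinCommand() (string, error) {
	out, err := exec.Command("kubeadm", "token", "create", "--print-join-command", "--ttl=0").Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	join, err := printJoinCommand()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// The node-specific flags (--control-plane, --apiserver-advertise-address,
	// --cri-socket, --node-name) are appended before the command is executed
	// on the joining machine.
	fmt.Println(join)
}
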
	I1104 10:53:52.847788   37715 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.245 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1104 10:53:52.848131   37715 config.go:182] Loaded profile config "ha-931571": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1104 10:53:52.849508   37715 out.go:177] * Verifying Kubernetes components...
	I1104 10:53:52.850857   37715 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1104 10:53:53.114403   37715 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1104 10:53:53.138620   37715 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19906-19898/kubeconfig
	I1104 10:53:53.138881   37715 kapi.go:59] client config for ha-931571: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/client.crt", KeyFile:"/home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/client.key", CAFile:"/home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)
}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2437da0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1104 10:53:53.138942   37715 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.67:8443
	I1104 10:53:53.139141   37715 node_ready.go:35] waiting up to 6m0s for node "ha-931571-m02" to be "Ready" ...
	I1104 10:53:53.139247   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:53:53.139257   37715 round_trippers.go:469] Request Headers:
	I1104 10:53:53.139269   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:53:53.139278   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:53:53.152136   37715 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I1104 10:53:53.639369   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:53:53.639392   37715 round_trippers.go:469] Request Headers:
	I1104 10:53:53.639401   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:53:53.639405   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:53:53.643203   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:53:54.140047   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:53:54.140070   37715 round_trippers.go:469] Request Headers:
	I1104 10:53:54.140084   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:53:54.140089   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:53:54.147092   37715 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1104 10:53:54.639335   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:53:54.639355   37715 round_trippers.go:469] Request Headers:
	I1104 10:53:54.639363   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:53:54.639367   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:53:54.642506   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:53:55.140245   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:53:55.140265   37715 round_trippers.go:469] Request Headers:
	I1104 10:53:55.140273   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:53:55.140277   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:53:55.143824   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:53:55.144458   37715 node_ready.go:53] node "ha-931571-m02" has status "Ready":"False"
	I1104 10:53:55.639804   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:53:55.639830   37715 round_trippers.go:469] Request Headers:
	I1104 10:53:55.639841   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:53:55.639846   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:53:55.643096   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:53:56.140054   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:53:56.140078   37715 round_trippers.go:469] Request Headers:
	I1104 10:53:56.140089   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:53:56.140095   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:53:56.142960   37715 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1104 10:53:56.639891   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:53:56.639912   37715 round_trippers.go:469] Request Headers:
	I1104 10:53:56.639923   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:53:56.639928   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:53:56.642755   37715 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1104 10:53:57.139690   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:53:57.139713   37715 round_trippers.go:469] Request Headers:
	I1104 10:53:57.139725   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:53:57.139730   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:53:57.143324   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:53:57.639441   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:53:57.639460   37715 round_trippers.go:469] Request Headers:
	I1104 10:53:57.639469   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:53:57.639473   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:53:57.642433   37715 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1104 10:53:57.642947   37715 node_ready.go:53] node "ha-931571-m02" has status "Ready":"False"
	I1104 10:53:58.140368   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:53:58.140388   37715 round_trippers.go:469] Request Headers:
	I1104 10:53:58.140399   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:53:58.140404   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:53:58.144117   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:53:58.640193   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:53:58.640215   37715 round_trippers.go:469] Request Headers:
	I1104 10:53:58.640223   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:53:58.640227   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:53:58.643667   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:53:59.139304   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:53:59.139323   37715 round_trippers.go:469] Request Headers:
	I1104 10:53:59.139331   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:53:59.139335   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:53:59.142878   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:53:59.639323   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:53:59.639344   37715 round_trippers.go:469] Request Headers:
	I1104 10:53:59.639353   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:53:59.639357   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:53:59.642391   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:54:00.140288   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:54:00.140314   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:00.140323   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:00.140328   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:00.143357   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:54:00.143948   37715 node_ready.go:53] node "ha-931571-m02" has status "Ready":"False"
	I1104 10:54:00.639324   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:54:00.639348   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:00.639358   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:00.639365   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:00.643179   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:54:01.140315   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:54:01.140337   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:01.140345   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:01.140349   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:01.143491   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:54:01.639485   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:54:01.639510   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:01.639517   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:01.639522   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:01.642450   37715 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1104 10:54:02.140259   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:54:02.140291   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:02.140299   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:02.140304   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:02.143695   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:54:02.144128   37715 node_ready.go:53] node "ha-931571-m02" has status "Ready":"False"
	I1104 10:54:02.639414   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:54:02.639433   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:02.639442   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:02.639447   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:02.642409   37715 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1104 10:54:03.140294   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:54:03.140314   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:03.140327   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:03.140333   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:03.143301   37715 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1104 10:54:03.639404   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:54:03.639426   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:03.639437   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:03.639445   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:03.642367   37715 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1104 10:54:04.139716   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:54:04.139740   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:04.139750   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:04.139754   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:04.143000   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:54:04.640219   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:54:04.640245   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:04.640256   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:04.640262   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:04.643232   37715 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1104 10:54:04.643667   37715 node_ready.go:53] node "ha-931571-m02" has status "Ready":"False"
	I1104 10:54:05.140138   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:54:05.140162   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:05.140173   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:05.140178   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:05.142993   37715 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1104 10:54:05.639755   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:54:05.639775   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:05.639783   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:05.639802   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:05.643475   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:54:06.139372   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:54:06.139394   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:06.139402   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:06.139405   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:06.142509   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:54:06.639413   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:54:06.639442   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:06.639451   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:06.639456   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:06.642592   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:54:07.139655   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:54:07.139684   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:07.139694   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:07.139699   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:07.143170   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:54:07.143728   37715 node_ready.go:53] node "ha-931571-m02" has status "Ready":"False"
	I1104 10:54:07.640208   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:54:07.640228   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:07.640235   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:07.640240   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:07.643154   37715 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1104 10:54:08.140228   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:54:08.140261   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:08.140273   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:08.140278   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:08.142997   37715 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1104 10:54:08.639828   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:54:08.639854   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:08.639862   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:08.639866   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:08.643244   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:54:09.140126   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:54:09.140153   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:09.140166   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:09.140172   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:09.143278   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:54:09.143950   37715 node_ready.go:53] node "ha-931571-m02" has status "Ready":"False"
	I1104 10:54:09.639588   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:54:09.639610   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:09.639618   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:09.639623   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:09.642343   37715 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1104 10:54:10.139875   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:54:10.139898   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:10.139905   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:10.139909   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:10.143037   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:54:10.640013   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:54:10.640033   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:10.640042   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:10.640045   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:10.643833   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:54:10.644423   37715 node_ready.go:49] node "ha-931571-m02" has status "Ready":"True"
	I1104 10:54:10.644446   37715 node_ready.go:38] duration metric: took 17.505281339s for node "ha-931571-m02" to be "Ready" ...
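
The node_ready loop above is a plain poll: GET the Node object roughly every 500ms and inspect its Ready condition until it reports True or the 6-minute budget runs out (here it took about 17.5s). Sketched with client-go, assuming an out-of-cluster kubeconfig; the helper name and timings are illustrative:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the API server until the named node's Ready condition is True.
func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("node %s not Ready within %s", name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19906-19898/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitNodeReady(cs, "ha-931571-m02", 6*time.Minute); err != nil {
		panic(err)
	}
}
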
	I1104 10:54:10.644459   37715 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1104 10:54:10.644564   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods
	I1104 10:54:10.644577   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:10.644587   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:10.644591   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:10.649476   37715 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1104 10:54:10.656031   37715 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-5ss4v" in "kube-system" namespace to be "Ready" ...
	I1104 10:54:10.656110   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ss4v
	I1104 10:54:10.656129   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:10.656138   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:10.656144   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:10.659282   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:54:10.659928   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571
	I1104 10:54:10.659944   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:10.659953   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:10.659958   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:10.662844   37715 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1104 10:54:10.663378   37715 pod_ready.go:93] pod "coredns-7c65d6cfc9-5ss4v" in "kube-system" namespace has status "Ready":"True"
	I1104 10:54:10.663402   37715 pod_ready.go:82] duration metric: took 7.344091ms for pod "coredns-7c65d6cfc9-5ss4v" in "kube-system" namespace to be "Ready" ...
	I1104 10:54:10.663423   37715 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-s9wb4" in "kube-system" namespace to be "Ready" ...
	I1104 10:54:10.663492   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s9wb4
	I1104 10:54:10.663502   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:10.663512   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:10.663521   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:10.666287   37715 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1104 10:54:10.666934   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571
	I1104 10:54:10.666950   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:10.666957   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:10.666960   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:10.669169   37715 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1104 10:54:10.669739   37715 pod_ready.go:93] pod "coredns-7c65d6cfc9-s9wb4" in "kube-system" namespace has status "Ready":"True"
	I1104 10:54:10.669760   37715 pod_ready.go:82] duration metric: took 6.3295ms for pod "coredns-7c65d6cfc9-s9wb4" in "kube-system" namespace to be "Ready" ...
	I1104 10:54:10.669770   37715 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-931571" in "kube-system" namespace to be "Ready" ...
	I1104 10:54:10.669830   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/etcd-ha-931571
	I1104 10:54:10.669842   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:10.669852   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:10.669859   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:10.672042   37715 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1104 10:54:10.672626   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571
	I1104 10:54:10.672642   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:10.672650   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:10.672653   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:10.674766   37715 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1104 10:54:10.675295   37715 pod_ready.go:93] pod "etcd-ha-931571" in "kube-system" namespace has status "Ready":"True"
	I1104 10:54:10.675317   37715 pod_ready.go:82] duration metric: took 5.539368ms for pod "etcd-ha-931571" in "kube-system" namespace to be "Ready" ...
	I1104 10:54:10.675329   37715 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-931571-m02" in "kube-system" namespace to be "Ready" ...
	I1104 10:54:10.675390   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/etcd-ha-931571-m02
	I1104 10:54:10.675398   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:10.675405   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:10.675410   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:10.677591   37715 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1104 10:54:10.678184   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:54:10.678197   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:10.678204   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:10.678208   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:10.680155   37715 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1104 10:54:10.680700   37715 pod_ready.go:93] pod "etcd-ha-931571-m02" in "kube-system" namespace has status "Ready":"True"
	I1104 10:54:10.680721   37715 pod_ready.go:82] duration metric: took 5.381074ms for pod "etcd-ha-931571-m02" in "kube-system" namespace to be "Ready" ...
	I1104 10:54:10.680737   37715 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-931571" in "kube-system" namespace to be "Ready" ...
	I1104 10:54:10.840055   37715 request.go:632] Waited for 159.25235ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-931571
	I1104 10:54:10.840140   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-931571
	I1104 10:54:10.840150   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:10.840160   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:10.840171   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:10.843356   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:54:11.040534   37715 request.go:632] Waited for 196.430173ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/nodes/ha-931571
	I1104 10:54:11.040604   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571
	I1104 10:54:11.040615   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:11.040623   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:11.040630   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:11.043768   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:54:11.044382   37715 pod_ready.go:93] pod "kube-apiserver-ha-931571" in "kube-system" namespace has status "Ready":"True"
	I1104 10:54:11.044403   37715 pod_ready.go:82] duration metric: took 363.65714ms for pod "kube-apiserver-ha-931571" in "kube-system" namespace to be "Ready" ...
	I1104 10:54:11.044412   37715 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-931571-m02" in "kube-system" namespace to be "Ready" ...
	I1104 10:54:11.240746   37715 request.go:632] Waited for 196.265081ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-931571-m02
	I1104 10:54:11.240800   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-931571-m02
	I1104 10:54:11.240805   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:11.240812   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:11.240823   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:11.244055   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:54:11.441020   37715 request.go:632] Waited for 196.31895ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:54:11.441076   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:54:11.441082   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:11.441089   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:11.441092   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:11.443940   37715 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1104 10:54:11.444396   37715 pod_ready.go:93] pod "kube-apiserver-ha-931571-m02" in "kube-system" namespace has status "Ready":"True"
	I1104 10:54:11.444417   37715 pod_ready.go:82] duration metric: took 399.997294ms for pod "kube-apiserver-ha-931571-m02" in "kube-system" namespace to be "Ready" ...
	I1104 10:54:11.444431   37715 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-931571" in "kube-system" namespace to be "Ready" ...
	I1104 10:54:11.640978   37715 request.go:632] Waited for 196.455451ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-931571
	I1104 10:54:11.641045   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-931571
	I1104 10:54:11.641052   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:11.641063   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:11.641068   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:11.644104   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:54:11.840124   37715 request.go:632] Waited for 195.279381ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/nodes/ha-931571
	I1104 10:54:11.840175   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571
	I1104 10:54:11.840180   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:11.840189   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:11.840204   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:11.843139   37715 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1104 10:54:11.843784   37715 pod_ready.go:93] pod "kube-controller-manager-ha-931571" in "kube-system" namespace has status "Ready":"True"
	I1104 10:54:11.843806   37715 pod_ready.go:82] duration metric: took 399.367004ms for pod "kube-controller-manager-ha-931571" in "kube-system" namespace to be "Ready" ...
	I1104 10:54:11.843816   37715 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-931571-m02" in "kube-system" namespace to be "Ready" ...
	I1104 10:54:12.040826   37715 request.go:632] Waited for 196.934959ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-931571-m02
	I1104 10:54:12.040888   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-931571-m02
	I1104 10:54:12.040896   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:12.040905   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:12.040912   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:12.044321   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:54:12.240220   37715 request.go:632] Waited for 195.323321ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:54:12.240295   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:54:12.240302   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:12.240311   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:12.240340   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:12.243972   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:54:12.244423   37715 pod_ready.go:93] pod "kube-controller-manager-ha-931571-m02" in "kube-system" namespace has status "Ready":"True"
	I1104 10:54:12.244441   37715 pod_ready.go:82] duration metric: took 400.61624ms for pod "kube-controller-manager-ha-931571-m02" in "kube-system" namespace to be "Ready" ...
	I1104 10:54:12.244452   37715 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-bvk6r" in "kube-system" namespace to be "Ready" ...
	I1104 10:54:12.440627   37715 request.go:632] Waited for 196.096769ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bvk6r
	I1104 10:54:12.440687   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bvk6r
	I1104 10:54:12.440692   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:12.440700   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:12.440704   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:12.443759   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:54:12.640675   37715 request.go:632] Waited for 196.368451ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/nodes/ha-931571
	I1104 10:54:12.640746   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571
	I1104 10:54:12.640753   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:12.640764   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:12.640771   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:12.645533   37715 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1104 10:54:12.646078   37715 pod_ready.go:93] pod "kube-proxy-bvk6r" in "kube-system" namespace has status "Ready":"True"
	I1104 10:54:12.646098   37715 pod_ready.go:82] duration metric: took 401.639494ms for pod "kube-proxy-bvk6r" in "kube-system" namespace to be "Ready" ...
	I1104 10:54:12.646111   37715 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-wz92s" in "kube-system" namespace to be "Ready" ...
	I1104 10:54:12.840342   37715 request.go:632] Waited for 194.16235ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wz92s
	I1104 10:54:12.840395   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wz92s
	I1104 10:54:12.840400   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:12.840407   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:12.840413   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:12.844505   37715 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1104 10:54:13.040627   37715 request.go:632] Waited for 195.405277ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:54:13.040697   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:54:13.040706   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:13.040713   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:13.040717   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:13.043654   37715 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1104 10:54:13.044440   37715 pod_ready.go:93] pod "kube-proxy-wz92s" in "kube-system" namespace has status "Ready":"True"
	I1104 10:54:13.044461   37715 pod_ready.go:82] duration metric: took 398.343689ms for pod "kube-proxy-wz92s" in "kube-system" namespace to be "Ready" ...
	I1104 10:54:13.044472   37715 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-931571" in "kube-system" namespace to be "Ready" ...
	I1104 10:54:13.240500   37715 request.go:632] Waited for 195.966375ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-931571
	I1104 10:54:13.240580   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-931571
	I1104 10:54:13.240589   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:13.240599   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:13.240606   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:13.243607   37715 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1104 10:54:13.440419   37715 request.go:632] Waited for 196.059783ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/nodes/ha-931571
	I1104 10:54:13.440489   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571
	I1104 10:54:13.440495   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:13.440502   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:13.440507   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:13.443953   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:54:13.444535   37715 pod_ready.go:93] pod "kube-scheduler-ha-931571" in "kube-system" namespace has status "Ready":"True"
	I1104 10:54:13.444560   37715 pod_ready.go:82] duration metric: took 400.080635ms for pod "kube-scheduler-ha-931571" in "kube-system" namespace to be "Ready" ...
	I1104 10:54:13.444575   37715 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-931571-m02" in "kube-system" namespace to be "Ready" ...
	I1104 10:54:13.640646   37715 request.go:632] Waited for 195.95641ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-931571-m02
	I1104 10:54:13.640702   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-931571-m02
	I1104 10:54:13.640707   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:13.640716   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:13.640720   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:13.644170   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:54:13.840111   37715 request.go:632] Waited for 195.309512ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:54:13.840184   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:54:13.840189   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:13.840197   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:13.840205   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:13.843622   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:54:13.844295   37715 pod_ready.go:93] pod "kube-scheduler-ha-931571-m02" in "kube-system" namespace has status "Ready":"True"
	I1104 10:54:13.844319   37715 pod_ready.go:82] duration metric: took 399.734957ms for pod "kube-scheduler-ha-931571-m02" in "kube-system" namespace to be "Ready" ...
	I1104 10:54:13.844333   37715 pod_ready.go:39] duration metric: took 3.199846594s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
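
The pod_ready waits above apply the same idea per pod: each system-critical pod must report a PodReady condition of True. A sketch of that per-pod check, reusing the imports and clientset from the node sketch earlier (again illustrative, not minikube's pod_ready.go):

    // podIsReady mirrors the per-pod checks logged above: "Ready" means the
    // pod's PodReady condition is True.
    func podIsReady(cs *kubernetes.Clientset, namespace, name string) (bool, error) {
        pod, err := cs.CoreV1().Pods(namespace).Get(context.TODO(), name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }
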
	I1104 10:54:13.844350   37715 api_server.go:52] waiting for apiserver process to appear ...
	I1104 10:54:13.844417   37715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 10:54:13.858847   37715 api_server.go:72] duration metric: took 21.011018077s to wait for apiserver process to appear ...
	I1104 10:54:13.858869   37715 api_server.go:88] waiting for apiserver healthz status ...
	I1104 10:54:13.858890   37715 api_server.go:253] Checking apiserver healthz at https://192.168.39.67:8443/healthz ...
	I1104 10:54:13.863051   37715 api_server.go:279] https://192.168.39.67:8443/healthz returned 200:
	ok
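
The healthz probe above is a plain HTTPS GET against /healthz that expects HTTP 200 with the body "ok". A short Go sketch of such a probe (imports assumed: crypto/tls, io, net/http, time; skipping TLS verification here is a shortcut for the sketch, not necessarily what minikube does):

    // apiserverHealthy probes <base>/healthz and treats a 200 response with
    // body "ok" as healthy, like the check logged above.
    func apiserverHealthy(base string) (bool, error) {
        client := &http.Client{
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
            Timeout:   5 * time.Second,
        }
        resp, err := client.Get(base + "/healthz")
        if err != nil {
            return false, err
        }
        defer resp.Body.Close()
        body, err := io.ReadAll(resp.Body)
        if err != nil {
            return false, err
        }
        return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
    }
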
	I1104 10:54:13.863110   37715 round_trippers.go:463] GET https://192.168.39.67:8443/version
	I1104 10:54:13.863115   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:13.863122   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:13.863126   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:13.864098   37715 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1104 10:54:13.864181   37715 api_server.go:141] control plane version: v1.31.2
	I1104 10:54:13.864195   37715 api_server.go:131] duration metric: took 5.319439ms to wait for apiserver health ...
	I1104 10:54:13.864202   37715 system_pods.go:43] waiting for kube-system pods to appear ...
	I1104 10:54:14.040623   37715 request.go:632] Waited for 176.353381ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods
	I1104 10:54:14.040696   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods
	I1104 10:54:14.040702   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:14.040709   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:14.040714   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:14.045262   37715 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1104 10:54:14.050254   37715 system_pods.go:59] 17 kube-system pods found
	I1104 10:54:14.050280   37715 system_pods.go:61] "coredns-7c65d6cfc9-5ss4v" [b1994bcf-ce9e-4a5e-90e0-5f3e284218f4] Running
	I1104 10:54:14.050285   37715 system_pods.go:61] "coredns-7c65d6cfc9-s9wb4" [fd497087-82a1-4173-a1ca-87f47225cd80] Running
	I1104 10:54:14.050289   37715 system_pods.go:61] "etcd-ha-931571" [fdadf64d-457c-4f54-8824-770c47938a4d] Running
	I1104 10:54:14.050292   37715 system_pods.go:61] "etcd-ha-931571-m02" [b40b2a26-19b6-47f9-af25-dcbffbe55156] Running
	I1104 10:54:14.050296   37715 system_pods.go:61] "kindnet-2n2ws" [f43095ed-404a-4c99-a271-a8c7fb6a3559] Running
	I1104 10:54:14.050301   37715 system_pods.go:61] "kindnet-bg4z6" [43eed78a-1357-4607-bff5-a1c896da4af2] Running
	I1104 10:54:14.050305   37715 system_pods.go:61] "kube-apiserver-ha-931571" [2ba59318-d54d-4948-8133-2ff2afa001e5] Running
	I1104 10:54:14.050310   37715 system_pods.go:61] "kube-apiserver-ha-931571-m02" [6a6bfd7d-cec1-4e07-90bf-c933f871eef1] Running
	I1104 10:54:14.050315   37715 system_pods.go:61] "kube-controller-manager-ha-931571" [62d03af1-aa91-4ebf-af21-19f760956cf5] Running
	I1104 10:54:14.050320   37715 system_pods.go:61] "kube-controller-manager-ha-931571-m02" [96d65b2a-66c8-411a-bb4b-5ff222b7832d] Running
	I1104 10:54:14.050327   37715 system_pods.go:61] "kube-proxy-bvk6r" [5f293726-a3a3-4398-9b70-ca8f83c66d7c] Running
	I1104 10:54:14.050332   37715 system_pods.go:61] "kube-proxy-wz92s" [a2e065c2-9645-44e4-b4e8-dc787b0c6662] Running
	I1104 10:54:14.050340   37715 system_pods.go:61] "kube-scheduler-ha-931571" [8bc3d9c3-2b41-4f54-a511-34939218fa5b] Running
	I1104 10:54:14.050345   37715 system_pods.go:61] "kube-scheduler-ha-931571-m02" [4329adba-71fa-425a-b379-6e52af90b458] Running
	I1104 10:54:14.050354   37715 system_pods.go:61] "kube-vip-ha-931571" [f9948426-2770-47cf-b610-ecfea5b17be9] Running / Ready:ContainersNotReady (containers with unready status: [kube-vip]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-vip])
	I1104 10:54:14.050364   37715 system_pods.go:61] "kube-vip-ha-931571-m02" [860a8a9e-b839-4c23-80b5-415a62fca083] Running / Ready:ContainersNotReady (containers with unready status: [kube-vip]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-vip])
	I1104 10:54:14.050370   37715 system_pods.go:61] "storage-provisioner" [3eb09a1d-0033-428a-a305-aa2901b20566] Running
	I1104 10:54:14.050377   37715 system_pods.go:74] duration metric: took 186.169669ms to wait for pod list to return data ...
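
The repeated "Waited for … due to client-side throttling, not priority and fairness" messages come from client-go's own rate limiter: requests are queued on the client once they exceed the configured QPS/Burst, before server-side priority and fairness is ever involved. A fragment showing where those knobs live on a rest.Config (values are illustrative, not minikube's settings; imports as in the first sketch):

    cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
    if err != nil {
        panic(err)
    }
    cfg.QPS = 5    // steady-state client-side requests per second
    cfg.Burst = 10 // short bursts allowed above QPS before throttling
    cs := kubernetes.NewForConfigOrDie(cfg)
    _ = cs
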
	I1104 10:54:14.050387   37715 default_sa.go:34] waiting for default service account to be created ...
	I1104 10:54:14.240854   37715 request.go:632] Waited for 190.370277ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/default/serviceaccounts
	I1104 10:54:14.240922   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/default/serviceaccounts
	I1104 10:54:14.240929   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:14.240940   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:14.240963   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:14.244687   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:54:14.244932   37715 default_sa.go:45] found service account: "default"
	I1104 10:54:14.244952   37715 default_sa.go:55] duration metric: took 194.560071ms for default service account to be created ...
	I1104 10:54:14.244961   37715 system_pods.go:116] waiting for k8s-apps to be running ...
	I1104 10:54:14.440692   37715 request.go:632] Waited for 195.67345ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods
	I1104 10:54:14.440751   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods
	I1104 10:54:14.440757   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:14.440772   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:14.440780   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:14.444830   37715 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1104 10:54:14.449745   37715 system_pods.go:86] 17 kube-system pods found
	I1104 10:54:14.449772   37715 system_pods.go:89] "coredns-7c65d6cfc9-5ss4v" [b1994bcf-ce9e-4a5e-90e0-5f3e284218f4] Running
	I1104 10:54:14.449778   37715 system_pods.go:89] "coredns-7c65d6cfc9-s9wb4" [fd497087-82a1-4173-a1ca-87f47225cd80] Running
	I1104 10:54:14.449783   37715 system_pods.go:89] "etcd-ha-931571" [fdadf64d-457c-4f54-8824-770c47938a4d] Running
	I1104 10:54:14.449789   37715 system_pods.go:89] "etcd-ha-931571-m02" [b40b2a26-19b6-47f9-af25-dcbffbe55156] Running
	I1104 10:54:14.449795   37715 system_pods.go:89] "kindnet-2n2ws" [f43095ed-404a-4c99-a271-a8c7fb6a3559] Running
	I1104 10:54:14.449800   37715 system_pods.go:89] "kindnet-bg4z6" [43eed78a-1357-4607-bff5-a1c896da4af2] Running
	I1104 10:54:14.449807   37715 system_pods.go:89] "kube-apiserver-ha-931571" [2ba59318-d54d-4948-8133-2ff2afa001e5] Running
	I1104 10:54:14.449812   37715 system_pods.go:89] "kube-apiserver-ha-931571-m02" [6a6bfd7d-cec1-4e07-90bf-c933f871eef1] Running
	I1104 10:54:14.449816   37715 system_pods.go:89] "kube-controller-manager-ha-931571" [62d03af1-aa91-4ebf-af21-19f760956cf5] Running
	I1104 10:54:14.449821   37715 system_pods.go:89] "kube-controller-manager-ha-931571-m02" [96d65b2a-66c8-411a-bb4b-5ff222b7832d] Running
	I1104 10:54:14.449826   37715 system_pods.go:89] "kube-proxy-bvk6r" [5f293726-a3a3-4398-9b70-ca8f83c66d7c] Running
	I1104 10:54:14.449834   37715 system_pods.go:89] "kube-proxy-wz92s" [a2e065c2-9645-44e4-b4e8-dc787b0c6662] Running
	I1104 10:54:14.449839   37715 system_pods.go:89] "kube-scheduler-ha-931571" [8bc3d9c3-2b41-4f54-a511-34939218fa5b] Running
	I1104 10:54:14.449848   37715 system_pods.go:89] "kube-scheduler-ha-931571-m02" [4329adba-71fa-425a-b379-6e52af90b458] Running
	I1104 10:54:14.449857   37715 system_pods.go:89] "kube-vip-ha-931571" [f9948426-2770-47cf-b610-ecfea5b17be9] Running / Ready:ContainersNotReady (containers with unready status: [kube-vip]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-vip])
	I1104 10:54:14.449870   37715 system_pods.go:89] "kube-vip-ha-931571-m02" [860a8a9e-b839-4c23-80b5-415a62fca083] Running / Ready:ContainersNotReady (containers with unready status: [kube-vip]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-vip])
	I1104 10:54:14.449878   37715 system_pods.go:89] "storage-provisioner" [3eb09a1d-0033-428a-a305-aa2901b20566] Running
	I1104 10:54:14.449891   37715 system_pods.go:126] duration metric: took 204.923702ms to wait for k8s-apps to be running ...
	I1104 10:54:14.449903   37715 system_svc.go:44] waiting for kubelet service to be running ....
	I1104 10:54:14.449956   37715 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1104 10:54:14.464950   37715 system_svc.go:56] duration metric: took 15.038755ms WaitForService to wait for kubelet
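
The kubelet check above only inspects the exit status of systemctl. A local sketch of the same test (minikube runs the command over SSH through its ssh_runner; imports assumed: fmt, os/exec):

    // A zero exit status from "systemctl is-active --quiet" means the unit is
    // active; any non-zero status is treated as "not running".
    cmd := exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet")
    active := cmd.Run() == nil
    fmt.Println("kubelet active:", active)
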
	I1104 10:54:14.464983   37715 kubeadm.go:582] duration metric: took 21.617159665s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1104 10:54:14.465005   37715 node_conditions.go:102] verifying NodePressure condition ...
	I1104 10:54:14.640444   37715 request.go:632] Waited for 175.359531ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/nodes
	I1104 10:54:14.640495   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes
	I1104 10:54:14.640507   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:14.640514   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:14.640531   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:14.644308   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:54:14.645138   37715 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1104 10:54:14.645162   37715 node_conditions.go:123] node cpu capacity is 2
	I1104 10:54:14.645172   37715 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1104 10:54:14.645175   37715 node_conditions.go:123] node cpu capacity is 2
	I1104 10:54:14.645180   37715 node_conditions.go:105] duration metric: took 180.169842ms to run NodePressure ...
	I1104 10:54:14.645191   37715 start.go:241] waiting for startup goroutines ...
	I1104 10:54:14.645220   37715 start.go:255] writing updated cluster config ...
	I1104 10:54:14.647434   37715 out.go:201] 
	I1104 10:54:14.649030   37715 config.go:182] Loaded profile config "ha-931571": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1104 10:54:14.649124   37715 profile.go:143] Saving config to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/config.json ...
	I1104 10:54:14.650881   37715 out.go:177] * Starting "ha-931571-m03" control-plane node in "ha-931571" cluster
	I1104 10:54:14.652021   37715 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1104 10:54:14.652041   37715 cache.go:56] Caching tarball of preloaded images
	I1104 10:54:14.652128   37715 preload.go:172] Found /home/jenkins/minikube-integration/19906-19898/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1104 10:54:14.652138   37715 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1104 10:54:14.652229   37715 profile.go:143] Saving config to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/config.json ...
	I1104 10:54:14.652384   37715 start.go:360] acquireMachinesLock for ha-931571-m03: {Name:mka4ed965095cfb58c9b0f5916edff39b4236a2f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1104 10:54:14.652421   37715 start.go:364] duration metric: took 20.345µs to acquireMachinesLock for "ha-931571-m03"
	I1104 10:54:14.652439   37715 start.go:93] Provisioning new machine with config: &{Name:ha-931571 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-931571 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.67 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.245 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1104 10:54:14.652552   37715 start.go:125] createHost starting for "m03" (driver="kvm2")
	I1104 10:54:14.653932   37715 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1104 10:54:14.654009   37715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 10:54:14.654042   37715 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 10:54:14.669012   37715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35959
	I1104 10:54:14.669516   37715 main.go:141] libmachine: () Calling .GetVersion
	I1104 10:54:14.669968   37715 main.go:141] libmachine: Using API Version  1
	I1104 10:54:14.669986   37715 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 10:54:14.670370   37715 main.go:141] libmachine: () Calling .GetMachineName
	I1104 10:54:14.670550   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetMachineName
	I1104 10:54:14.670697   37715 main.go:141] libmachine: (ha-931571-m03) Calling .DriverName
	I1104 10:54:14.670887   37715 start.go:159] libmachine.API.Create for "ha-931571" (driver="kvm2")
	I1104 10:54:14.670919   37715 client.go:168] LocalClient.Create starting
	I1104 10:54:14.670952   37715 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem
	I1104 10:54:14.670990   37715 main.go:141] libmachine: Decoding PEM data...
	I1104 10:54:14.671004   37715 main.go:141] libmachine: Parsing certificate...
	I1104 10:54:14.671047   37715 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem
	I1104 10:54:14.671066   37715 main.go:141] libmachine: Decoding PEM data...
	I1104 10:54:14.671074   37715 main.go:141] libmachine: Parsing certificate...
	I1104 10:54:14.671092   37715 main.go:141] libmachine: Running pre-create checks...
	I1104 10:54:14.671100   37715 main.go:141] libmachine: (ha-931571-m03) Calling .PreCreateCheck
	I1104 10:54:14.671295   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetConfigRaw
	I1104 10:54:14.671735   37715 main.go:141] libmachine: Creating machine...
	I1104 10:54:14.671748   37715 main.go:141] libmachine: (ha-931571-m03) Calling .Create
	I1104 10:54:14.671896   37715 main.go:141] libmachine: (ha-931571-m03) Creating KVM machine...
	I1104 10:54:14.673127   37715 main.go:141] libmachine: (ha-931571-m03) DBG | found existing default KVM network
	I1104 10:54:14.673275   37715 main.go:141] libmachine: (ha-931571-m03) DBG | found existing private KVM network mk-ha-931571
	I1104 10:54:14.673433   37715 main.go:141] libmachine: (ha-931571-m03) Setting up store path in /home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571-m03 ...
	I1104 10:54:14.673458   37715 main.go:141] libmachine: (ha-931571-m03) Building disk image from file:///home/jenkins/minikube-integration/19906-19898/.minikube/cache/iso/amd64/minikube-v1.34.0-1730282777-19883-amd64.iso
	I1104 10:54:14.673532   37715 main.go:141] libmachine: (ha-931571-m03) DBG | I1104 10:54:14.673413   38465 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19906-19898/.minikube
	I1104 10:54:14.673618   37715 main.go:141] libmachine: (ha-931571-m03) Downloading /home/jenkins/minikube-integration/19906-19898/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19906-19898/.minikube/cache/iso/amd64/minikube-v1.34.0-1730282777-19883-amd64.iso...
	I1104 10:54:14.913416   37715 main.go:141] libmachine: (ha-931571-m03) DBG | I1104 10:54:14.913288   38465 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571-m03/id_rsa...
	I1104 10:54:15.078787   37715 main.go:141] libmachine: (ha-931571-m03) DBG | I1104 10:54:15.078642   38465 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571-m03/ha-931571-m03.rawdisk...
	I1104 10:54:15.078832   37715 main.go:141] libmachine: (ha-931571-m03) DBG | Writing magic tar header
	I1104 10:54:15.078845   37715 main.go:141] libmachine: (ha-931571-m03) DBG | Writing SSH key tar header
	I1104 10:54:15.078858   37715 main.go:141] libmachine: (ha-931571-m03) DBG | I1104 10:54:15.078756   38465 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571-m03 ...
	I1104 10:54:15.078874   37715 main.go:141] libmachine: (ha-931571-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571-m03
	I1104 10:54:15.078881   37715 main.go:141] libmachine: (ha-931571-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19906-19898/.minikube/machines
	I1104 10:54:15.078888   37715 main.go:141] libmachine: (ha-931571-m03) Setting executable bit set on /home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571-m03 (perms=drwx------)
	I1104 10:54:15.078896   37715 main.go:141] libmachine: (ha-931571-m03) Setting executable bit set on /home/jenkins/minikube-integration/19906-19898/.minikube/machines (perms=drwxr-xr-x)
	I1104 10:54:15.078902   37715 main.go:141] libmachine: (ha-931571-m03) Setting executable bit set on /home/jenkins/minikube-integration/19906-19898/.minikube (perms=drwxr-xr-x)
	I1104 10:54:15.078911   37715 main.go:141] libmachine: (ha-931571-m03) Setting executable bit set on /home/jenkins/minikube-integration/19906-19898 (perms=drwxrwxr-x)
	I1104 10:54:15.078919   37715 main.go:141] libmachine: (ha-931571-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1104 10:54:15.078931   37715 main.go:141] libmachine: (ha-931571-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1104 10:54:15.078951   37715 main.go:141] libmachine: (ha-931571-m03) Creating domain...
	I1104 10:54:15.078968   37715 main.go:141] libmachine: (ha-931571-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19906-19898/.minikube
	I1104 10:54:15.078978   37715 main.go:141] libmachine: (ha-931571-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19906-19898
	I1104 10:54:15.078985   37715 main.go:141] libmachine: (ha-931571-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1104 10:54:15.078991   37715 main.go:141] libmachine: (ha-931571-m03) DBG | Checking permissions on dir: /home/jenkins
	I1104 10:54:15.078997   37715 main.go:141] libmachine: (ha-931571-m03) DBG | Checking permissions on dir: /home
	I1104 10:54:15.079003   37715 main.go:141] libmachine: (ha-931571-m03) DBG | Skipping /home - not owner
	I1104 10:54:15.079942   37715 main.go:141] libmachine: (ha-931571-m03) define libvirt domain using xml: 
	I1104 10:54:15.079975   37715 main.go:141] libmachine: (ha-931571-m03) <domain type='kvm'>
	I1104 10:54:15.079986   37715 main.go:141] libmachine: (ha-931571-m03)   <name>ha-931571-m03</name>
	I1104 10:54:15.079997   37715 main.go:141] libmachine: (ha-931571-m03)   <memory unit='MiB'>2200</memory>
	I1104 10:54:15.080003   37715 main.go:141] libmachine: (ha-931571-m03)   <vcpu>2</vcpu>
	I1104 10:54:15.080007   37715 main.go:141] libmachine: (ha-931571-m03)   <features>
	I1104 10:54:15.080011   37715 main.go:141] libmachine: (ha-931571-m03)     <acpi/>
	I1104 10:54:15.080015   37715 main.go:141] libmachine: (ha-931571-m03)     <apic/>
	I1104 10:54:15.080020   37715 main.go:141] libmachine: (ha-931571-m03)     <pae/>
	I1104 10:54:15.080024   37715 main.go:141] libmachine: (ha-931571-m03)     
	I1104 10:54:15.080028   37715 main.go:141] libmachine: (ha-931571-m03)   </features>
	I1104 10:54:15.080032   37715 main.go:141] libmachine: (ha-931571-m03)   <cpu mode='host-passthrough'>
	I1104 10:54:15.080037   37715 main.go:141] libmachine: (ha-931571-m03)   
	I1104 10:54:15.080040   37715 main.go:141] libmachine: (ha-931571-m03)   </cpu>
	I1104 10:54:15.080045   37715 main.go:141] libmachine: (ha-931571-m03)   <os>
	I1104 10:54:15.080049   37715 main.go:141] libmachine: (ha-931571-m03)     <type>hvm</type>
	I1104 10:54:15.080054   37715 main.go:141] libmachine: (ha-931571-m03)     <boot dev='cdrom'/>
	I1104 10:54:15.080061   37715 main.go:141] libmachine: (ha-931571-m03)     <boot dev='hd'/>
	I1104 10:54:15.080066   37715 main.go:141] libmachine: (ha-931571-m03)     <bootmenu enable='no'/>
	I1104 10:54:15.080070   37715 main.go:141] libmachine: (ha-931571-m03)   </os>
	I1104 10:54:15.080075   37715 main.go:141] libmachine: (ha-931571-m03)   <devices>
	I1104 10:54:15.080079   37715 main.go:141] libmachine: (ha-931571-m03)     <disk type='file' device='cdrom'>
	I1104 10:54:15.080088   37715 main.go:141] libmachine: (ha-931571-m03)       <source file='/home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571-m03/boot2docker.iso'/>
	I1104 10:54:15.080096   37715 main.go:141] libmachine: (ha-931571-m03)       <target dev='hdc' bus='scsi'/>
	I1104 10:54:15.080101   37715 main.go:141] libmachine: (ha-931571-m03)       <readonly/>
	I1104 10:54:15.080106   37715 main.go:141] libmachine: (ha-931571-m03)     </disk>
	I1104 10:54:15.080111   37715 main.go:141] libmachine: (ha-931571-m03)     <disk type='file' device='disk'>
	I1104 10:54:15.080119   37715 main.go:141] libmachine: (ha-931571-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1104 10:54:15.080127   37715 main.go:141] libmachine: (ha-931571-m03)       <source file='/home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571-m03/ha-931571-m03.rawdisk'/>
	I1104 10:54:15.080134   37715 main.go:141] libmachine: (ha-931571-m03)       <target dev='hda' bus='virtio'/>
	I1104 10:54:15.080145   37715 main.go:141] libmachine: (ha-931571-m03)     </disk>
	I1104 10:54:15.080149   37715 main.go:141] libmachine: (ha-931571-m03)     <interface type='network'>
	I1104 10:54:15.080154   37715 main.go:141] libmachine: (ha-931571-m03)       <source network='mk-ha-931571'/>
	I1104 10:54:15.080163   37715 main.go:141] libmachine: (ha-931571-m03)       <model type='virtio'/>
	I1104 10:54:15.080168   37715 main.go:141] libmachine: (ha-931571-m03)     </interface>
	I1104 10:54:15.080172   37715 main.go:141] libmachine: (ha-931571-m03)     <interface type='network'>
	I1104 10:54:15.080177   37715 main.go:141] libmachine: (ha-931571-m03)       <source network='default'/>
	I1104 10:54:15.080181   37715 main.go:141] libmachine: (ha-931571-m03)       <model type='virtio'/>
	I1104 10:54:15.080186   37715 main.go:141] libmachine: (ha-931571-m03)     </interface>
	I1104 10:54:15.080191   37715 main.go:141] libmachine: (ha-931571-m03)     <serial type='pty'>
	I1104 10:54:15.080196   37715 main.go:141] libmachine: (ha-931571-m03)       <target port='0'/>
	I1104 10:54:15.080200   37715 main.go:141] libmachine: (ha-931571-m03)     </serial>
	I1104 10:54:15.080205   37715 main.go:141] libmachine: (ha-931571-m03)     <console type='pty'>
	I1104 10:54:15.080209   37715 main.go:141] libmachine: (ha-931571-m03)       <target type='serial' port='0'/>
	I1104 10:54:15.080214   37715 main.go:141] libmachine: (ha-931571-m03)     </console>
	I1104 10:54:15.080218   37715 main.go:141] libmachine: (ha-931571-m03)     <rng model='virtio'>
	I1104 10:54:15.080224   37715 main.go:141] libmachine: (ha-931571-m03)       <backend model='random'>/dev/random</backend>
	I1104 10:54:15.080230   37715 main.go:141] libmachine: (ha-931571-m03)     </rng>
	I1104 10:54:15.080236   37715 main.go:141] libmachine: (ha-931571-m03)     
	I1104 10:54:15.080243   37715 main.go:141] libmachine: (ha-931571-m03)     
	I1104 10:54:15.080248   37715 main.go:141] libmachine: (ha-931571-m03)   </devices>
	I1104 10:54:15.080254   37715 main.go:141] libmachine: (ha-931571-m03) </domain>
	I1104 10:54:15.080261   37715 main.go:141] libmachine: (ha-931571-m03) 
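
The XML above is the libvirt domain definition generated for the new m03 machine. Defining and booting such a domain through the Go libvirt bindings looks roughly like the sketch below (libvirt.org/go/libvirt; an illustration, not the kvm2 driver's actual code):

    package main

    import (
        "log"

        libvirt "libvirt.org/go/libvirt"
    )

    func main() {
        // Same URI as the KVMQemuURI value in the machine config above.
        conn, err := libvirt.NewConnect("qemu:///system")
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        // The domain XML logged above, with the log prefixes stripped.
        domainXML := "<domain type='kvm'>...</domain>"

        dom, err := conn.DomainDefineXML(domainXML)
        if err != nil {
            log.Fatal(err)
        }
        defer dom.Free()

        if err := dom.Create(); err != nil { // boots the defined domain
            log.Fatal(err)
        }
    }
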
	I1104 10:54:15.087034   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined MAC address 52:54:00:1d:68:f5 in network default
	I1104 10:54:15.087544   37715 main.go:141] libmachine: (ha-931571-m03) Ensuring networks are active...
	I1104 10:54:15.087568   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:15.088354   37715 main.go:141] libmachine: (ha-931571-m03) Ensuring network default is active
	I1104 10:54:15.088653   37715 main.go:141] libmachine: (ha-931571-m03) Ensuring network mk-ha-931571 is active
	I1104 10:54:15.089053   37715 main.go:141] libmachine: (ha-931571-m03) Getting domain xml...
	I1104 10:54:15.089835   37715 main.go:141] libmachine: (ha-931571-m03) Creating domain...
	I1104 10:54:16.314267   37715 main.go:141] libmachine: (ha-931571-m03) Waiting to get IP...
	I1104 10:54:16.315295   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:16.315802   37715 main.go:141] libmachine: (ha-931571-m03) DBG | unable to find current IP address of domain ha-931571-m03 in network mk-ha-931571
	I1104 10:54:16.315837   37715 main.go:141] libmachine: (ha-931571-m03) DBG | I1104 10:54:16.315784   38465 retry.go:31] will retry after 211.49676ms: waiting for machine to come up
	I1104 10:54:16.528417   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:16.528897   37715 main.go:141] libmachine: (ha-931571-m03) DBG | unable to find current IP address of domain ha-931571-m03 in network mk-ha-931571
	I1104 10:54:16.528927   37715 main.go:141] libmachine: (ha-931571-m03) DBG | I1104 10:54:16.528846   38465 retry.go:31] will retry after 340.441068ms: waiting for machine to come up
	I1104 10:54:16.871525   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:16.871971   37715 main.go:141] libmachine: (ha-931571-m03) DBG | unable to find current IP address of domain ha-931571-m03 in network mk-ha-931571
	I1104 10:54:16.871997   37715 main.go:141] libmachine: (ha-931571-m03) DBG | I1104 10:54:16.871910   38465 retry.go:31] will retry after 446.439393ms: waiting for machine to come up
	I1104 10:54:17.319543   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:17.320106   37715 main.go:141] libmachine: (ha-931571-m03) DBG | unable to find current IP address of domain ha-931571-m03 in network mk-ha-931571
	I1104 10:54:17.320137   37715 main.go:141] libmachine: (ha-931571-m03) DBG | I1104 10:54:17.320042   38465 retry.go:31] will retry after 381.839641ms: waiting for machine to come up
	I1104 10:54:17.703288   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:17.703811   37715 main.go:141] libmachine: (ha-931571-m03) DBG | unable to find current IP address of domain ha-931571-m03 in network mk-ha-931571
	I1104 10:54:17.703840   37715 main.go:141] libmachine: (ha-931571-m03) DBG | I1104 10:54:17.703750   38465 retry.go:31] will retry after 593.813893ms: waiting for machine to come up
	I1104 10:54:18.299510   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:18.300023   37715 main.go:141] libmachine: (ha-931571-m03) DBG | unable to find current IP address of domain ha-931571-m03 in network mk-ha-931571
	I1104 10:54:18.300055   37715 main.go:141] libmachine: (ha-931571-m03) DBG | I1104 10:54:18.299939   38465 retry.go:31] will retry after 849.789348ms: waiting for machine to come up
	I1104 10:54:19.151490   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:19.151964   37715 main.go:141] libmachine: (ha-931571-m03) DBG | unable to find current IP address of domain ha-931571-m03 in network mk-ha-931571
	I1104 10:54:19.151988   37715 main.go:141] libmachine: (ha-931571-m03) DBG | I1104 10:54:19.151922   38465 retry.go:31] will retry after 1.150337712s: waiting for machine to come up
	I1104 10:54:20.303915   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:20.304325   37715 main.go:141] libmachine: (ha-931571-m03) DBG | unable to find current IP address of domain ha-931571-m03 in network mk-ha-931571
	I1104 10:54:20.304357   37715 main.go:141] libmachine: (ha-931571-m03) DBG | I1104 10:54:20.304278   38465 retry.go:31] will retry after 1.472559033s: waiting for machine to come up
	I1104 10:54:21.778305   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:21.778784   37715 main.go:141] libmachine: (ha-931571-m03) DBG | unable to find current IP address of domain ha-931571-m03 in network mk-ha-931571
	I1104 10:54:21.778810   37715 main.go:141] libmachine: (ha-931571-m03) DBG | I1104 10:54:21.778723   38465 retry.go:31] will retry after 1.37004444s: waiting for machine to come up
	I1104 10:54:23.150404   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:23.150868   37715 main.go:141] libmachine: (ha-931571-m03) DBG | unable to find current IP address of domain ha-931571-m03 in network mk-ha-931571
	I1104 10:54:23.150895   37715 main.go:141] libmachine: (ha-931571-m03) DBG | I1104 10:54:23.150820   38465 retry.go:31] will retry after 1.893583796s: waiting for machine to come up
	I1104 10:54:25.045832   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:25.046288   37715 main.go:141] libmachine: (ha-931571-m03) DBG | unable to find current IP address of domain ha-931571-m03 in network mk-ha-931571
	I1104 10:54:25.046327   37715 main.go:141] libmachine: (ha-931571-m03) DBG | I1104 10:54:25.046279   38465 retry.go:31] will retry after 2.056345872s: waiting for machine to come up
	I1104 10:54:27.105382   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:27.105822   37715 main.go:141] libmachine: (ha-931571-m03) DBG | unable to find current IP address of domain ha-931571-m03 in network mk-ha-931571
	I1104 10:54:27.105853   37715 main.go:141] libmachine: (ha-931571-m03) DBG | I1104 10:54:27.105789   38465 retry.go:31] will retry after 3.414780128s: waiting for machine to come up
	I1104 10:54:30.521832   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:30.522159   37715 main.go:141] libmachine: (ha-931571-m03) DBG | unable to find current IP address of domain ha-931571-m03 in network mk-ha-931571
	I1104 10:54:30.522181   37715 main.go:141] libmachine: (ha-931571-m03) DBG | I1104 10:54:30.522080   38465 retry.go:31] will retry after 3.340201347s: waiting for machine to come up
	I1104 10:54:33.865562   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:33.865973   37715 main.go:141] libmachine: (ha-931571-m03) DBG | unable to find current IP address of domain ha-931571-m03 in network mk-ha-931571
	I1104 10:54:33.866003   37715 main.go:141] libmachine: (ha-931571-m03) DBG | I1104 10:54:33.865938   38465 retry.go:31] will retry after 5.278208954s: waiting for machine to come up
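
The retry.go lines above show a polling loop with growing delays while the new VM waits for a DHCP lease. A generic sketch of that pattern (the helper below is hypothetical, not minikube's retry package; imports assumed: fmt, time):

    // waitForIP polls lookup() with a growing, capped delay until it succeeds
    // or the deadline passes, roughly like the retries logged above.
    func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        delay := 200 * time.Millisecond
        for {
            ip, err := lookup()
            if err == nil {
                return ip, nil
            }
            if time.Now().After(deadline) {
                return "", fmt.Errorf("timed out waiting for machine to come up: %w", err)
            }
            time.Sleep(delay)
            if delay < 5*time.Second { // cap the backoff
                delay = delay * 3 / 2
            }
        }
    }
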
	I1104 10:54:39.149712   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:39.150250   37715 main.go:141] libmachine: (ha-931571-m03) Found IP for machine: 192.168.39.57
	I1104 10:54:39.150283   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has current primary IP address 192.168.39.57 and MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:39.150292   37715 main.go:141] libmachine: (ha-931571-m03) Reserving static IP address...
	I1104 10:54:39.150676   37715 main.go:141] libmachine: (ha-931571-m03) DBG | unable to find host DHCP lease matching {name: "ha-931571-m03", mac: "52:54:00:30:f5:de", ip: "192.168.39.57"} in network mk-ha-931571
	I1104 10:54:39.223412   37715 main.go:141] libmachine: (ha-931571-m03) DBG | Getting to WaitForSSH function...
	I1104 10:54:39.223438   37715 main.go:141] libmachine: (ha-931571-m03) Reserved static IP address: 192.168.39.57
	I1104 10:54:39.223450   37715 main.go:141] libmachine: (ha-931571-m03) Waiting for SSH to be available...
	I1104 10:54:39.226810   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:39.227204   37715 main.go:141] libmachine: (ha-931571-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f5:de", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:54:28 +0000 UTC Type:0 Mac:52:54:00:30:f5:de Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:minikube Clientid:01:52:54:00:30:f5:de}
	I1104 10:54:39.227229   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined IP address 192.168.39.57 and MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:39.227416   37715 main.go:141] libmachine: (ha-931571-m03) DBG | Using SSH client type: external
	I1104 10:54:39.227440   37715 main.go:141] libmachine: (ha-931571-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571-m03/id_rsa (-rw-------)
	I1104 10:54:39.227467   37715 main.go:141] libmachine: (ha-931571-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.57 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1104 10:54:39.227480   37715 main.go:141] libmachine: (ha-931571-m03) DBG | About to run SSH command:
	I1104 10:54:39.227493   37715 main.go:141] libmachine: (ha-931571-m03) DBG | exit 0
	I1104 10:54:39.348849   37715 main.go:141] libmachine: (ha-931571-m03) DBG | SSH cmd err, output: <nil>: 
	I1104 10:54:39.349130   37715 main.go:141] libmachine: (ha-931571-m03) KVM machine creation complete!
	I1104 10:54:39.349458   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetConfigRaw
	I1104 10:54:39.350011   37715 main.go:141] libmachine: (ha-931571-m03) Calling .DriverName
	I1104 10:54:39.350175   37715 main.go:141] libmachine: (ha-931571-m03) Calling .DriverName
	I1104 10:54:39.350318   37715 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1104 10:54:39.350330   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetState
	I1104 10:54:39.351463   37715 main.go:141] libmachine: Detecting operating system of created instance...
	I1104 10:54:39.351478   37715 main.go:141] libmachine: Waiting for SSH to be available...
	I1104 10:54:39.351482   37715 main.go:141] libmachine: Getting to WaitForSSH function...
	I1104 10:54:39.351487   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHHostname
	I1104 10:54:39.353807   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:39.354106   37715 main.go:141] libmachine: (ha-931571-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f5:de", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:54:28 +0000 UTC Type:0 Mac:52:54:00:30:f5:de Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-931571-m03 Clientid:01:52:54:00:30:f5:de}
	I1104 10:54:39.354143   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined IP address 192.168.39.57 and MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:39.354349   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHPort
	I1104 10:54:39.354557   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHKeyPath
	I1104 10:54:39.354742   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHKeyPath
	I1104 10:54:39.354871   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHUsername
	I1104 10:54:39.355021   37715 main.go:141] libmachine: Using SSH client type: native
	I1104 10:54:39.355223   37715 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.57 22 <nil> <nil>}
	I1104 10:54:39.355234   37715 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1104 10:54:39.452207   37715 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1104 10:54:39.452228   37715 main.go:141] libmachine: Detecting the provisioner...
	I1104 10:54:39.452237   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHHostname
	I1104 10:54:39.455314   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:39.455778   37715 main.go:141] libmachine: (ha-931571-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f5:de", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:54:28 +0000 UTC Type:0 Mac:52:54:00:30:f5:de Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-931571-m03 Clientid:01:52:54:00:30:f5:de}
	I1104 10:54:39.455805   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined IP address 192.168.39.57 and MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:39.456043   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHPort
	I1104 10:54:39.456250   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHKeyPath
	I1104 10:54:39.456440   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHKeyPath
	I1104 10:54:39.456603   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHUsername
	I1104 10:54:39.456750   37715 main.go:141] libmachine: Using SSH client type: native
	I1104 10:54:39.456931   37715 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.57 22 <nil> <nil>}
	I1104 10:54:39.456953   37715 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1104 10:54:39.553854   37715 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1104 10:54:39.553946   37715 main.go:141] libmachine: found compatible host: buildroot
	I1104 10:54:39.553963   37715 main.go:141] libmachine: Provisioning with buildroot...
	I1104 10:54:39.553975   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetMachineName
	I1104 10:54:39.554231   37715 buildroot.go:166] provisioning hostname "ha-931571-m03"
	I1104 10:54:39.554253   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetMachineName
	I1104 10:54:39.554456   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHHostname
	I1104 10:54:39.556992   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:39.557348   37715 main.go:141] libmachine: (ha-931571-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f5:de", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:54:28 +0000 UTC Type:0 Mac:52:54:00:30:f5:de Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-931571-m03 Clientid:01:52:54:00:30:f5:de}
	I1104 10:54:39.557377   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined IP address 192.168.39.57 and MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:39.557532   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHPort
	I1104 10:54:39.557736   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHKeyPath
	I1104 10:54:39.557887   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHKeyPath
	I1104 10:54:39.558007   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHUsername
	I1104 10:54:39.558172   37715 main.go:141] libmachine: Using SSH client type: native
	I1104 10:54:39.558399   37715 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.57 22 <nil> <nil>}
	I1104 10:54:39.558418   37715 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-931571-m03 && echo "ha-931571-m03" | sudo tee /etc/hostname
	I1104 10:54:39.670668   37715 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-931571-m03
	
	I1104 10:54:39.670701   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHHostname
	I1104 10:54:39.674148   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:39.674467   37715 main.go:141] libmachine: (ha-931571-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f5:de", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:54:28 +0000 UTC Type:0 Mac:52:54:00:30:f5:de Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-931571-m03 Clientid:01:52:54:00:30:f5:de}
	I1104 10:54:39.674492   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined IP address 192.168.39.57 and MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:39.674738   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHPort
	I1104 10:54:39.674887   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHKeyPath
	I1104 10:54:39.675053   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHKeyPath
	I1104 10:54:39.675250   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHUsername
	I1104 10:54:39.675459   37715 main.go:141] libmachine: Using SSH client type: native
	I1104 10:54:39.675678   37715 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.57 22 <nil> <nil>}
	I1104 10:54:39.675703   37715 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-931571-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-931571-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-931571-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1104 10:54:39.782022   37715 main.go:141] libmachine: SSH cmd err, output: <nil>: 
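For readability, the hostname step performed over SSH just above can be restated as the following shell sketch (hostname and file paths taken from the log; the guest's existing /etc/hosts contents are assumed):

    # Set the transient and persistent hostname, then make sure /etc/hosts resolves it.
    HOSTNAME=ha-931571-m03
    sudo hostname "$HOSTNAME" && echo "$HOSTNAME" | sudo tee /etc/hostname
    if ! grep -xq ".*\s$HOSTNAME" /etc/hosts; then
      if grep -xq '127.0.1.1\s.*' /etc/hosts; then
        # a 127.0.1.1 alias already exists (the usual Debian/Buildroot convention): rewrite it
        sudo sed -i "s/^127.0.1.1\s.*/127.0.1.1 $HOSTNAME/g" /etc/hosts
      else
        # otherwise append a fresh alias so the node can resolve its own name
        echo "127.0.1.1 $HOSTNAME" | sudo tee -a /etc/hosts
      fi
    fi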
	I1104 10:54:39.782049   37715 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19906-19898/.minikube CaCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19906-19898/.minikube}
	I1104 10:54:39.782068   37715 buildroot.go:174] setting up certificates
	I1104 10:54:39.782080   37715 provision.go:84] configureAuth start
	I1104 10:54:39.782091   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetMachineName
	I1104 10:54:39.782349   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetIP
	I1104 10:54:39.785051   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:39.785459   37715 main.go:141] libmachine: (ha-931571-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f5:de", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:54:28 +0000 UTC Type:0 Mac:52:54:00:30:f5:de Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-931571-m03 Clientid:01:52:54:00:30:f5:de}
	I1104 10:54:39.785488   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined IP address 192.168.39.57 and MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:39.785656   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHHostname
	I1104 10:54:39.787833   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:39.788124   37715 main.go:141] libmachine: (ha-931571-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f5:de", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:54:28 +0000 UTC Type:0 Mac:52:54:00:30:f5:de Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-931571-m03 Clientid:01:52:54:00:30:f5:de}
	I1104 10:54:39.788141   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined IP address 192.168.39.57 and MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:39.788305   37715 provision.go:143] copyHostCerts
	I1104 10:54:39.788334   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem
	I1104 10:54:39.788369   37715 exec_runner.go:144] found /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem, removing ...
	I1104 10:54:39.788378   37715 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem
	I1104 10:54:39.788442   37715 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem (1078 bytes)
	I1104 10:54:39.788557   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem
	I1104 10:54:39.788577   37715 exec_runner.go:144] found /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem, removing ...
	I1104 10:54:39.788584   37715 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem
	I1104 10:54:39.788610   37715 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem (1123 bytes)
	I1104 10:54:39.788656   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem
	I1104 10:54:39.788673   37715 exec_runner.go:144] found /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem, removing ...
	I1104 10:54:39.788679   37715 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem
	I1104 10:54:39.788700   37715 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem (1679 bytes)
	I1104 10:54:39.788771   37715 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem org=jenkins.ha-931571-m03 san=[127.0.0.1 192.168.39.57 ha-931571-m03 localhost minikube]
	I1104 10:54:39.906066   37715 provision.go:177] copyRemoteCerts
	I1104 10:54:39.906121   37715 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1104 10:54:39.906156   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHHostname
	I1104 10:54:39.909171   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:39.909602   37715 main.go:141] libmachine: (ha-931571-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f5:de", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:54:28 +0000 UTC Type:0 Mac:52:54:00:30:f5:de Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-931571-m03 Clientid:01:52:54:00:30:f5:de}
	I1104 10:54:39.909633   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined IP address 192.168.39.57 and MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:39.909904   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHPort
	I1104 10:54:39.910114   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHKeyPath
	I1104 10:54:39.910451   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHUsername
	I1104 10:54:39.910562   37715 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571-m03/id_rsa Username:docker}
	I1104 10:54:39.986932   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1104 10:54:39.986995   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1104 10:54:40.011798   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1104 10:54:40.011899   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1104 10:54:40.035728   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1104 10:54:40.035811   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1104 10:54:40.058737   37715 provision.go:87] duration metric: took 276.643486ms to configureAuth
	I1104 10:54:40.058767   37715 buildroot.go:189] setting minikube options for container-runtime
	I1104 10:54:40.058982   37715 config.go:182] Loaded profile config "ha-931571": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1104 10:54:40.059060   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHHostname
	I1104 10:54:40.061592   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:40.061918   37715 main.go:141] libmachine: (ha-931571-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f5:de", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:54:28 +0000 UTC Type:0 Mac:52:54:00:30:f5:de Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-931571-m03 Clientid:01:52:54:00:30:f5:de}
	I1104 10:54:40.061947   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined IP address 192.168.39.57 and MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:40.062136   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHPort
	I1104 10:54:40.062313   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHKeyPath
	I1104 10:54:40.062493   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHKeyPath
	I1104 10:54:40.062627   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHUsername
	I1104 10:54:40.062779   37715 main.go:141] libmachine: Using SSH client type: native
	I1104 10:54:40.062931   37715 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.57 22 <nil> <nil>}
	I1104 10:54:40.062946   37715 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1104 10:54:40.285341   37715 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1104 10:54:40.285362   37715 main.go:141] libmachine: Checking connection to Docker...
	I1104 10:54:40.285369   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetURL
	I1104 10:54:40.286607   37715 main.go:141] libmachine: (ha-931571-m03) DBG | Using libvirt version 6000000
	I1104 10:54:40.288784   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:40.289099   37715 main.go:141] libmachine: (ha-931571-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f5:de", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:54:28 +0000 UTC Type:0 Mac:52:54:00:30:f5:de Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-931571-m03 Clientid:01:52:54:00:30:f5:de}
	I1104 10:54:40.289130   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined IP address 192.168.39.57 and MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:40.289303   37715 main.go:141] libmachine: Docker is up and running!
	I1104 10:54:40.289319   37715 main.go:141] libmachine: Reticulating splines...
	I1104 10:54:40.289326   37715 client.go:171] duration metric: took 25.618399312s to LocalClient.Create
	I1104 10:54:40.289350   37715 start.go:167] duration metric: took 25.618478892s to libmachine.API.Create "ha-931571"
	I1104 10:54:40.289362   37715 start.go:293] postStartSetup for "ha-931571-m03" (driver="kvm2")
	I1104 10:54:40.289391   37715 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1104 10:54:40.289407   37715 main.go:141] libmachine: (ha-931571-m03) Calling .DriverName
	I1104 10:54:40.289628   37715 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1104 10:54:40.289653   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHHostname
	I1104 10:54:40.291922   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:40.292338   37715 main.go:141] libmachine: (ha-931571-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f5:de", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:54:28 +0000 UTC Type:0 Mac:52:54:00:30:f5:de Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-931571-m03 Clientid:01:52:54:00:30:f5:de}
	I1104 10:54:40.292358   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined IP address 192.168.39.57 and MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:40.292590   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHPort
	I1104 10:54:40.292774   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHKeyPath
	I1104 10:54:40.292922   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHUsername
	I1104 10:54:40.293081   37715 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571-m03/id_rsa Username:docker}
	I1104 10:54:40.371198   37715 ssh_runner.go:195] Run: cat /etc/os-release
	I1104 10:54:40.375533   37715 info.go:137] Remote host: Buildroot 2023.02.9
	I1104 10:54:40.375563   37715 filesync.go:126] Scanning /home/jenkins/minikube-integration/19906-19898/.minikube/addons for local assets ...
	I1104 10:54:40.375682   37715 filesync.go:126] Scanning /home/jenkins/minikube-integration/19906-19898/.minikube/files for local assets ...
	I1104 10:54:40.375780   37715 filesync.go:149] local asset: /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem -> 272182.pem in /etc/ssl/certs
	I1104 10:54:40.375790   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem -> /etc/ssl/certs/272182.pem
	I1104 10:54:40.375871   37715 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1104 10:54:40.385684   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem --> /etc/ssl/certs/272182.pem (1708 bytes)
	I1104 10:54:40.408674   37715 start.go:296] duration metric: took 119.284792ms for postStartSetup
	I1104 10:54:40.408723   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetConfigRaw
	I1104 10:54:40.409449   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetIP
	I1104 10:54:40.412211   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:40.412561   37715 main.go:141] libmachine: (ha-931571-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f5:de", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:54:28 +0000 UTC Type:0 Mac:52:54:00:30:f5:de Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-931571-m03 Clientid:01:52:54:00:30:f5:de}
	I1104 10:54:40.412589   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined IP address 192.168.39.57 and MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:40.412888   37715 profile.go:143] Saving config to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/config.json ...
	I1104 10:54:40.413122   37715 start.go:128] duration metric: took 25.760559258s to createHost
	I1104 10:54:40.413150   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHHostname
	I1104 10:54:40.415473   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:40.415825   37715 main.go:141] libmachine: (ha-931571-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f5:de", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:54:28 +0000 UTC Type:0 Mac:52:54:00:30:f5:de Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-931571-m03 Clientid:01:52:54:00:30:f5:de}
	I1104 10:54:40.415846   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined IP address 192.168.39.57 and MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:40.415970   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHPort
	I1104 10:54:40.416207   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHKeyPath
	I1104 10:54:40.416371   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHKeyPath
	I1104 10:54:40.416538   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHUsername
	I1104 10:54:40.416702   37715 main.go:141] libmachine: Using SSH client type: native
	I1104 10:54:40.416875   37715 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.57 22 <nil> <nil>}
	I1104 10:54:40.416888   37715 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1104 10:54:40.513907   37715 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730717680.493900775
	
	I1104 10:54:40.513930   37715 fix.go:216] guest clock: 1730717680.493900775
	I1104 10:54:40.513937   37715 fix.go:229] Guest: 2024-11-04 10:54:40.493900775 +0000 UTC Remote: 2024-11-04 10:54:40.413138421 +0000 UTC m=+139.084656658 (delta=80.762354ms)
	I1104 10:54:40.513952   37715 fix.go:200] guest clock delta is within tolerance: 80.762354ms
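The three fix.go lines above compare the guest's clock against the host to catch VM clock skew before provisioning continues; in this run the delta was roughly 81ms and passed. A minimal sketch of the same comparison (tolerance and SSH details assumed, not taken from the run):

    # Read guest time over SSH (as 'date +%s.%N' above) and compare with host time.
    guest=$(ssh -i ~/.minikube/machines/ha-931571-m03/id_rsa docker@192.168.39.57 'date +%s.%N')
    host=$(date +%s.%N)
    awk -v g="$guest" -v h="$host" 'BEGIN { d = g - h; if (d < 0) d = -d; printf "delta=%.3fs\n", d }'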
	I1104 10:54:40.513957   37715 start.go:83] releasing machines lock for "ha-931571-m03", held for 25.861527752s
	I1104 10:54:40.513977   37715 main.go:141] libmachine: (ha-931571-m03) Calling .DriverName
	I1104 10:54:40.514219   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetIP
	I1104 10:54:40.516861   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:40.517293   37715 main.go:141] libmachine: (ha-931571-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f5:de", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:54:28 +0000 UTC Type:0 Mac:52:54:00:30:f5:de Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-931571-m03 Clientid:01:52:54:00:30:f5:de}
	I1104 10:54:40.517318   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined IP address 192.168.39.57 and MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:40.519824   37715 out.go:177] * Found network options:
	I1104 10:54:40.521282   37715 out.go:177]   - NO_PROXY=192.168.39.67,192.168.39.245
	W1104 10:54:40.522546   37715 proxy.go:119] fail to check proxy env: Error ip not in block
	W1104 10:54:40.522569   37715 proxy.go:119] fail to check proxy env: Error ip not in block
	I1104 10:54:40.522586   37715 main.go:141] libmachine: (ha-931571-m03) Calling .DriverName
	I1104 10:54:40.523178   37715 main.go:141] libmachine: (ha-931571-m03) Calling .DriverName
	I1104 10:54:40.523386   37715 main.go:141] libmachine: (ha-931571-m03) Calling .DriverName
	I1104 10:54:40.523502   37715 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1104 10:54:40.523543   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHHostname
	W1104 10:54:40.523621   37715 proxy.go:119] fail to check proxy env: Error ip not in block
	W1104 10:54:40.523648   37715 proxy.go:119] fail to check proxy env: Error ip not in block
	I1104 10:54:40.523705   37715 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1104 10:54:40.523726   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHHostname
	I1104 10:54:40.526526   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:40.526600   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:40.526878   37715 main.go:141] libmachine: (ha-931571-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f5:de", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:54:28 +0000 UTC Type:0 Mac:52:54:00:30:f5:de Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-931571-m03 Clientid:01:52:54:00:30:f5:de}
	I1104 10:54:40.526907   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined IP address 192.168.39.57 and MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:40.526933   37715 main.go:141] libmachine: (ha-931571-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f5:de", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:54:28 +0000 UTC Type:0 Mac:52:54:00:30:f5:de Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-931571-m03 Clientid:01:52:54:00:30:f5:de}
	I1104 10:54:40.526947   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined IP address 192.168.39.57 and MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:40.527005   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHPort
	I1104 10:54:40.527178   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHKeyPath
	I1104 10:54:40.527307   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHPort
	I1104 10:54:40.527380   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHUsername
	I1104 10:54:40.527467   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHKeyPath
	I1104 10:54:40.527533   37715 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571-m03/id_rsa Username:docker}
	I1104 10:54:40.527573   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHUsername
	I1104 10:54:40.527722   37715 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571-m03/id_rsa Username:docker}
	I1104 10:54:40.761284   37715 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1104 10:54:40.766951   37715 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1104 10:54:40.767028   37715 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1104 10:54:40.784061   37715 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1104 10:54:40.784083   37715 start.go:495] detecting cgroup driver to use...
	I1104 10:54:40.784139   37715 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1104 10:54:40.799767   37715 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1104 10:54:40.814033   37715 docker.go:217] disabling cri-docker service (if available) ...
	I1104 10:54:40.814100   37715 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1104 10:54:40.828095   37715 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1104 10:54:40.843053   37715 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1104 10:54:40.959422   37715 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1104 10:54:41.119792   37715 docker.go:233] disabling docker service ...
	I1104 10:54:41.119859   37715 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1104 10:54:41.134123   37715 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1104 10:54:41.147262   37715 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1104 10:54:41.281486   37715 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1104 10:54:41.401330   37715 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1104 10:54:41.415018   37715 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1104 10:54:41.433640   37715 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1104 10:54:41.433713   37715 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 10:54:41.444506   37715 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1104 10:54:41.444582   37715 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 10:54:41.456767   37715 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 10:54:41.467306   37715 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 10:54:41.477809   37715 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1104 10:54:41.488160   37715 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 10:54:41.498689   37715 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 10:54:41.515679   37715 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 10:54:41.526763   37715 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1104 10:54:41.536412   37715 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1104 10:54:41.536469   37715 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1104 10:54:41.549448   37715 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1104 10:54:41.559807   37715 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1104 10:54:41.665655   37715 ssh_runner.go:195] Run: sudo systemctl restart crio
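Taken together, the sed edits at 10:54:41 leave /etc/crio/crio.conf.d/02-crio.conf with the pause image, cgroup driver and unprivileged-port sysctl that kubeadm will expect, and the modprobe covers the bridge-netfilter module the earlier sysctl probe could not find. The net effect, reconstructed from the commands above (the rest of the file is assumed unchanged):

    # /etc/crio/crio.conf.d/02-crio.conf -- keys touched by the edits above
    pause_image = "registry.k8s.io/pause:3.10"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]

    # kernel prerequisites established on the guest before restarting crio
    sudo modprobe br_netfilter                        # makes net.bridge.bridge-nf-call-iptables exist
    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'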
	I1104 10:54:41.758091   37715 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1104 10:54:41.758187   37715 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1104 10:54:41.762517   37715 start.go:563] Will wait 60s for crictl version
	I1104 10:54:41.762572   37715 ssh_runner.go:195] Run: which crictl
	I1104 10:54:41.766429   37715 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1104 10:54:41.804303   37715 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1104 10:54:41.804420   37715 ssh_runner.go:195] Run: crio --version
	I1104 10:54:41.830473   37715 ssh_runner.go:195] Run: crio --version
	I1104 10:54:41.860302   37715 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1104 10:54:41.861621   37715 out.go:177]   - env NO_PROXY=192.168.39.67
	I1104 10:54:41.863004   37715 out.go:177]   - env NO_PROXY=192.168.39.67,192.168.39.245
	I1104 10:54:41.864263   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetIP
	I1104 10:54:41.867052   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:41.867423   37715 main.go:141] libmachine: (ha-931571-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f5:de", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:54:28 +0000 UTC Type:0 Mac:52:54:00:30:f5:de Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-931571-m03 Clientid:01:52:54:00:30:f5:de}
	I1104 10:54:41.867446   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined IP address 192.168.39.57 and MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:41.867651   37715 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1104 10:54:41.871716   37715 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1104 10:54:41.884015   37715 mustload.go:65] Loading cluster: ha-931571
	I1104 10:54:41.884230   37715 config.go:182] Loaded profile config "ha-931571": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1104 10:54:41.884480   37715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 10:54:41.884518   37715 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 10:54:41.900117   37715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41207
	I1104 10:54:41.900610   37715 main.go:141] libmachine: () Calling .GetVersion
	I1104 10:54:41.901163   37715 main.go:141] libmachine: Using API Version  1
	I1104 10:54:41.901184   37715 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 10:54:41.901516   37715 main.go:141] libmachine: () Calling .GetMachineName
	I1104 10:54:41.901701   37715 main.go:141] libmachine: (ha-931571) Calling .GetState
	I1104 10:54:41.903124   37715 host.go:66] Checking if "ha-931571" exists ...
	I1104 10:54:41.903396   37715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 10:54:41.903433   37715 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 10:54:41.918029   37715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40437
	I1104 10:54:41.918566   37715 main.go:141] libmachine: () Calling .GetVersion
	I1104 10:54:41.919028   37715 main.go:141] libmachine: Using API Version  1
	I1104 10:54:41.919050   37715 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 10:54:41.919333   37715 main.go:141] libmachine: () Calling .GetMachineName
	I1104 10:54:41.919520   37715 main.go:141] libmachine: (ha-931571) Calling .DriverName
	I1104 10:54:41.919673   37715 certs.go:68] Setting up /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571 for IP: 192.168.39.57
	I1104 10:54:41.919684   37715 certs.go:194] generating shared ca certs ...
	I1104 10:54:41.919697   37715 certs.go:226] acquiring lock for ca certs: {Name:mk4fb0469da6697654d0290fd25edddd463f47c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 10:54:41.919810   37715 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.key
	I1104 10:54:41.919845   37715 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.key
	I1104 10:54:41.919854   37715 certs.go:256] generating profile certs ...
	I1104 10:54:41.919922   37715 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/client.key
	I1104 10:54:41.919946   37715 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.key.a50c38dd
	I1104 10:54:41.919960   37715 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.crt.a50c38dd with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.67 192.168.39.245 192.168.39.57 192.168.39.254]
	I1104 10:54:42.049039   37715 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.crt.a50c38dd ...
	I1104 10:54:42.049068   37715 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.crt.a50c38dd: {Name:mk425b204dd51c6129591dbbf4cda0b66e34eb56 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 10:54:42.049239   37715 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.key.a50c38dd ...
	I1104 10:54:42.049250   37715 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.key.a50c38dd: {Name:mk1230635dbd65cb8c7d025a3549f17dc35e060e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 10:54:42.049322   37715 certs.go:381] copying /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.crt.a50c38dd -> /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.crt
	I1104 10:54:42.049449   37715 certs.go:385] copying /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.key.a50c38dd -> /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.key
	I1104 10:54:42.049564   37715 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/proxy-client.key
	I1104 10:54:42.049580   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1104 10:54:42.049595   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1104 10:54:42.049608   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1104 10:54:42.049621   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1104 10:54:42.049634   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1104 10:54:42.049647   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1104 10:54:42.049657   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1104 10:54:42.049669   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1104 10:54:42.049713   37715 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218.pem (1338 bytes)
	W1104 10:54:42.049741   37715 certs.go:480] ignoring /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218_empty.pem, impossibly tiny 0 bytes
	I1104 10:54:42.049750   37715 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem (1675 bytes)
	I1104 10:54:42.049771   37715 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem (1078 bytes)
	I1104 10:54:42.049799   37715 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem (1123 bytes)
	I1104 10:54:42.049819   37715 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem (1679 bytes)
	I1104 10:54:42.049855   37715 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem (1708 bytes)
	I1104 10:54:42.049880   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem -> /usr/share/ca-certificates/272182.pem
	I1104 10:54:42.049893   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1104 10:54:42.049905   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218.pem -> /usr/share/ca-certificates/27218.pem
	I1104 10:54:42.049934   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHHostname
	I1104 10:54:42.052637   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:54:42.053074   37715 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 10:54:42.053102   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:54:42.053289   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHPort
	I1104 10:54:42.053475   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 10:54:42.053607   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHUsername
	I1104 10:54:42.053769   37715 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571/id_rsa Username:docker}
	I1104 10:54:42.125617   37715 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1104 10:54:42.129901   37715 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1104 10:54:42.141111   37715 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1104 10:54:42.145054   37715 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1104 10:54:42.154954   37715 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1104 10:54:42.158822   37715 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1104 10:54:42.168976   37715 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1104 10:54:42.172887   37715 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1104 10:54:42.182649   37715 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1104 10:54:42.186455   37715 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1104 10:54:42.196466   37715 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1104 10:54:42.200376   37715 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1104 10:54:42.211239   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1104 10:54:42.236618   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1104 10:54:42.260726   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1104 10:54:42.283147   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1104 10:54:42.305271   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I1104 10:54:42.327703   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1104 10:54:42.350340   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1104 10:54:42.372114   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1104 10:54:42.394125   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem --> /usr/share/ca-certificates/272182.pem (1708 bytes)
	I1104 10:54:42.415761   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1104 10:54:42.437284   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218.pem --> /usr/share/ca-certificates/27218.pem (1338 bytes)
	I1104 10:54:42.458545   37715 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1104 10:54:42.474091   37715 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1104 10:54:42.489871   37715 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1104 10:54:42.505378   37715 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1104 10:54:42.521116   37715 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1104 10:54:42.537323   37715 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1104 10:54:42.553306   37715 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1104 10:54:42.569157   37715 ssh_runner.go:195] Run: openssl version
	I1104 10:54:42.574422   37715 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/27218.pem && ln -fs /usr/share/ca-certificates/27218.pem /etc/ssl/certs/27218.pem"
	I1104 10:54:42.584560   37715 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/27218.pem
	I1104 10:54:42.588538   37715 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  4 10:49 /usr/share/ca-certificates/27218.pem
	I1104 10:54:42.588592   37715 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/27218.pem
	I1104 10:54:42.594056   37715 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/27218.pem /etc/ssl/certs/51391683.0"
	I1104 10:54:42.604559   37715 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/272182.pem && ln -fs /usr/share/ca-certificates/272182.pem /etc/ssl/certs/272182.pem"
	I1104 10:54:42.615717   37715 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/272182.pem
	I1104 10:54:42.619821   37715 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  4 10:49 /usr/share/ca-certificates/272182.pem
	I1104 10:54:42.619868   37715 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/272182.pem
	I1104 10:54:42.625153   37715 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/272182.pem /etc/ssl/certs/3ec20f2e.0"
	I1104 10:54:42.638993   37715 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1104 10:54:42.649427   37715 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1104 10:54:42.653431   37715 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  4 10:38 /usr/share/ca-certificates/minikubeCA.pem
	I1104 10:54:42.653483   37715 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1104 10:54:42.658834   37715 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
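The numeric names created above (51391683.0, 3ec20f2e.0, b5213941.0) are OpenSSL subject-hash links: each certificate is first placed under /usr/share/ca-certificates, then linked into /etc/ssl/certs under its hash so the system trust store can find it. For one of them (paths from the log; commands illustrative):

    # the hash printed here is what names the symlink
    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # -> b5213941
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0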
	I1104 10:54:42.670960   37715 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1104 10:54:42.675173   37715 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1104 10:54:42.675237   37715 kubeadm.go:934] updating node {m03 192.168.39.57 8443 v1.31.2 crio true true} ...
	I1104 10:54:42.675332   37715 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-931571-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.57
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-931571 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
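The unit drop-in rendered above is presumably what later lands on the guest as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the 312-byte scp at 10:54:43); it pins the kubelet to this node's identity before kubeadm join runs. Reconstructed as it would appear on disk (content from the log above; any surrounding defaults assumed):

    # /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (reconstruction)
    [Unit]
    Wants=crio.service

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-931571-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.57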
	I1104 10:54:42.675370   37715 kube-vip.go:115] generating kube-vip config ...
	I1104 10:54:42.675419   37715 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1104 10:54:42.692549   37715 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1104 10:54:42.692627   37715 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.5
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
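This static-pod manifest (written a moment later to /etc/kubernetes/manifests/kube-vip.yaml) is what keeps the shared control-plane endpoint 192.168.39.254:8443 alive: whichever control-plane node holds the plndr-cp-lock lease answers for the VIP, and lb_enable spreads API-server traffic across members. Once the node has joined, a couple of illustrative checks (not part of this run):

    # the VIP should be bound on eth0 of the current lease holder
    ip addr show eth0 | grep 192.168.39.254
    # and the leader-election lease named in the manifest should exist
    kubectl -n kube-system get lease plndr-cp-lock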
	I1104 10:54:42.692680   37715 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1104 10:54:42.702705   37715 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.2': No such file or directory
	
	Initiating transfer...
	I1104 10:54:42.702768   37715 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.2
	I1104 10:54:42.712640   37715 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
	I1104 10:54:42.712662   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/linux/amd64/v1.31.2/kubectl -> /var/lib/minikube/binaries/v1.31.2/kubectl
	I1104 10:54:42.712660   37715 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm.sha256
	I1104 10:54:42.712682   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/linux/amd64/v1.31.2/kubeadm -> /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1104 10:54:42.712648   37715 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256
	I1104 10:54:42.712715   37715 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl
	I1104 10:54:42.712727   37715 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1104 10:54:42.712752   37715 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1104 10:54:42.718694   37715 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubectl': No such file or directory
	I1104 10:54:42.718732   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/cache/linux/amd64/v1.31.2/kubectl --> /var/lib/minikube/binaries/v1.31.2/kubectl (56381592 bytes)
	I1104 10:54:42.746213   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/linux/amd64/v1.31.2/kubelet -> /var/lib/minikube/binaries/v1.31.2/kubelet
	I1104 10:54:42.746221   37715 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubeadm': No such file or directory
	I1104 10:54:42.746258   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/cache/linux/amd64/v1.31.2/kubeadm --> /var/lib/minikube/binaries/v1.31.2/kubeadm (58290328 bytes)
	I1104 10:54:42.746334   37715 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet
	I1104 10:54:42.789088   37715 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubelet': No such file or directory
	I1104 10:54:42.789130   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/cache/linux/amd64/v1.31.2/kubelet --> /var/lib/minikube/binaries/v1.31.2/kubelet (76902744 bytes)
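	All three binaries come from dl.k8s.io with a published .sha256 next to each file; minikube keeps them in its local cache and only scps them onto the node because the stat checks above failed. A rough manual equivalent for a single binary, shown purely as a sketch of the download-and-verify step (version and target path taken from this log):

	    # Sketch only: fetch and verify one binary the way the cache/provision step does.
	    VER=v1.31.2
	    curl -LO "https://dl.k8s.io/release/${VER}/bin/linux/amd64/kubelet"
	    curl -LO "https://dl.k8s.io/release/${VER}/bin/linux/amd64/kubelet.sha256"
	    echo "$(cat kubelet.sha256)  kubelet" | sha256sum --check
	    sudo install -D -m 0755 kubelet "/var/lib/minikube/binaries/${VER}/kubelet"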
	I1104 10:54:43.556894   37715 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1104 10:54:43.566649   37715 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1104 10:54:43.583297   37715 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1104 10:54:43.599783   37715 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1104 10:54:43.615935   37715 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1104 10:54:43.619736   37715 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1104 10:54:43.632102   37715 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1104 10:54:43.769468   37715 ssh_runner.go:195] Run: sudo systemctl start kubelet
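	At this point the node has the kubelet unit, its 10-kubeadm.conf drop-in, the kube-vip manifest and the control-plane.minikube.internal /etc/hosts entry in place, and kubelet has been started. If the join later stalls, a quick sanity check on the new node looks like this (illustrative commands, not part of the test):

	    # Confirm kubelet actually came up on the freshly provisioned node.
	    sudo systemctl is-active kubelet
	    sudo journalctl -u kubelet --no-pager -n 20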
	I1104 10:54:43.787176   37715 host.go:66] Checking if "ha-931571" exists ...
	I1104 10:54:43.787522   37715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 10:54:43.787559   37715 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 10:54:43.803438   37715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37129
	I1104 10:54:43.803811   37715 main.go:141] libmachine: () Calling .GetVersion
	I1104 10:54:43.804247   37715 main.go:141] libmachine: Using API Version  1
	I1104 10:54:43.804266   37715 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 10:54:43.804582   37715 main.go:141] libmachine: () Calling .GetMachineName
	I1104 10:54:43.804752   37715 main.go:141] libmachine: (ha-931571) Calling .DriverName
	I1104 10:54:43.804873   37715 start.go:317] joinCluster: &{Name:ha-931571 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-931571 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.67 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.245 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.57 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1104 10:54:43.805017   37715 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1104 10:54:43.805035   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHHostname
	I1104 10:54:43.808407   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:54:43.808840   37715 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 10:54:43.808868   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:54:43.808996   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHPort
	I1104 10:54:43.809168   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 10:54:43.809326   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHUsername
	I1104 10:54:43.809457   37715 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571/id_rsa Username:docker}
	I1104 10:54:43.953404   37715 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.57 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1104 10:54:43.953450   37715 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token cjywwd.x031qjjoquz98pue --discovery-token-ca-cert-hash sha256:95833ff41306ac800b975e28f2da3aedc13f83db6865dfb0ce3f10d781d75fd9 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-931571-m03 --control-plane --apiserver-advertise-address=192.168.39.57 --apiserver-bind-port=8443"
	I1104 10:55:05.442467   37715 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token cjywwd.x031qjjoquz98pue --discovery-token-ca-cert-hash sha256:95833ff41306ac800b975e28f2da3aedc13f83db6865dfb0ce3f10d781d75fd9 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-931571-m03 --control-plane --apiserver-advertise-address=192.168.39.57 --apiserver-bind-port=8443": (21.488974658s)
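	The join command itself was generated on the primary control plane with `kubeadm token create --print-join-command --ttl=0` (see above); the --control-plane and --apiserver-advertise-address=192.168.39.57 flags make m03 a third API server/etcd member behind the VIP, and --node-name pins the Kubernetes node name to ha-931571-m03. The discovery-token-ca-cert-hash is the standard kubeadm SHA-256 of the cluster CA public key, which can be recomputed on any existing control-plane node for comparison (documented kubeadm procedure, shown here only for reference):

	    # Recompute the discovery-token-ca-cert-hash from the cluster CA.
	    openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
	      | openssl rsa -pubin -outform der 2>/dev/null \
	      | sha256sum | cut -d' ' -f1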
	I1104 10:55:05.442503   37715 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1104 10:55:05.990844   37715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-931571-m03 minikube.k8s.io/updated_at=2024_11_04T10_55_05_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=c03dd974d73f8853a1a57928c124797a5ae24dc4 minikube.k8s.io/name=ha-931571 minikube.k8s.io/primary=false
	I1104 10:55:06.139537   37715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-931571-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I1104 10:55:06.285616   37715 start.go:319] duration metric: took 22.480737326s to joinCluster
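	Labelling the node and removing the node-role.kubernetes.io/control-plane:NoSchedule taint (the trailing "-" deletes the taint) is what lets ordinary workloads schedule onto the new control-plane node, which the multi-control-plane tests rely on. A quick way to confirm both afterwards, assuming the kubeconfig context minikube creates for this profile is named ha-931571 (illustrative only):

	    # Check the minikube labels and the taint state on the joined node.
	    kubectl --context ha-931571 get node ha-931571-m03 --show-labels
	    kubectl --context ha-931571 describe node ha-931571-m03 | grep -A2 Taints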
	I1104 10:55:06.285694   37715 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.57 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1104 10:55:06.286003   37715 config.go:182] Loaded profile config "ha-931571": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1104 10:55:06.288554   37715 out.go:177] * Verifying Kubernetes components...
	I1104 10:55:06.289975   37715 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1104 10:55:06.546650   37715 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1104 10:55:06.605631   37715 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19906-19898/kubeconfig
	I1104 10:55:06.605981   37715 kapi.go:59] client config for ha-931571: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/client.crt", KeyFile:"/home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/client.key", CAFile:"/home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2437da0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1104 10:55:06.606063   37715 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.67:8443
	I1104 10:55:06.606329   37715 node_ready.go:35] waiting up to 6m0s for node "ha-931571-m03" to be "Ready" ...
	I1104 10:55:06.606418   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:06.606434   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:06.606445   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:06.606456   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:06.609914   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:07.107514   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:07.107534   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:07.107542   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:07.107546   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:07.111083   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:07.606560   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:07.606587   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:07.606600   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:07.606605   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:07.613411   37715 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1104 10:55:08.107538   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:08.107560   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:08.107567   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:08.107570   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:08.110694   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:08.606539   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:08.606559   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:08.606567   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:08.606571   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:08.609675   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:08.610356   37715 node_ready.go:53] node "ha-931571-m03" has status "Ready":"False"
	I1104 10:55:09.106606   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:09.106630   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:09.106639   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:09.106644   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:09.109657   37715 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1104 10:55:09.607102   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:09.607123   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:09.607131   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:09.607135   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:09.610601   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:10.106839   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:10.106861   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:10.106872   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:10.106887   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:10.110421   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:10.607151   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:10.607178   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:10.607190   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:10.607195   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:10.610313   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:10.611052   37715 node_ready.go:53] node "ha-931571-m03" has status "Ready":"False"
	I1104 10:55:11.107465   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:11.107489   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:11.107500   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:11.107505   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:11.134933   37715 round_trippers.go:574] Response Status: 200 OK in 27 milliseconds
	I1104 10:55:11.607114   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:11.607137   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:11.607145   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:11.607149   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:11.610404   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:12.107512   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:12.107532   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:12.107542   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:12.107546   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:12.110694   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:12.606667   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:12.606689   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:12.606701   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:12.606705   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:12.609952   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:13.106734   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:13.106769   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:13.106780   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:13.106786   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:13.110063   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:13.110550   37715 node_ready.go:53] node "ha-931571-m03" has status "Ready":"False"
	I1104 10:55:13.607192   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:13.607222   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:13.607237   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:13.607241   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:13.610250   37715 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1104 10:55:14.106526   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:14.106548   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:14.106556   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:14.106560   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:14.110076   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:14.606584   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:14.606604   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:14.606612   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:14.606622   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:14.609643   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:15.106797   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:15.106819   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:15.106826   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:15.106830   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:15.110526   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:15.111303   37715 node_ready.go:53] node "ha-931571-m03" has status "Ready":"False"
	I1104 10:55:15.606581   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:15.606631   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:15.606643   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:15.606648   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:15.609879   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:16.107000   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:16.107025   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:16.107036   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:16.107042   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:16.110279   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:16.607359   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:16.607381   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:16.607391   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:16.607398   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:16.610655   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:17.106684   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:17.106706   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:17.106716   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:17.106722   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:17.109976   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:17.607162   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:17.607182   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:17.607190   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:17.607194   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:17.610739   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:17.611443   37715 node_ready.go:53] node "ha-931571-m03" has status "Ready":"False"
	I1104 10:55:18.106827   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:18.106850   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:18.106858   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:18.106862   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:18.110271   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:18.607389   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:18.607411   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:18.607419   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:18.607422   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:18.612587   37715 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1104 10:55:19.106763   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:19.106784   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:19.106791   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:19.106795   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:19.110156   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:19.607506   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:19.607532   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:19.607540   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:19.607545   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:19.611651   37715 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1104 10:55:19.612446   37715 node_ready.go:53] node "ha-931571-m03" has status "Ready":"False"
	I1104 10:55:20.107336   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:20.107356   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:20.107364   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:20.107368   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:20.110541   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:20.607455   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:20.607477   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:20.607485   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:20.607488   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:20.610742   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:21.106794   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:21.106815   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:21.106823   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:21.106827   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:21.109773   37715 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1104 10:55:21.607002   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:21.607022   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:21.607030   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:21.607033   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:21.609863   37715 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1104 10:55:22.106940   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:22.106962   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:22.106970   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:22.106981   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:22.110219   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:22.110873   37715 node_ready.go:53] node "ha-931571-m03" has status "Ready":"False"
	I1104 10:55:22.607233   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:22.607256   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:22.607267   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:22.607272   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:22.610320   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:23.107234   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:23.107261   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:23.107272   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:23.107278   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:23.110559   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:23.607522   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:23.607544   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:23.607552   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:23.607557   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:23.610843   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:23.611437   37715 node_ready.go:49] node "ha-931571-m03" has status "Ready":"True"
	I1104 10:55:23.611454   37715 node_ready.go:38] duration metric: took 17.005106707s for node "ha-931571-m03" to be "Ready" ...
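	The loop above simply polls GET /api/v1/nodes/ha-931571-m03 about twice a second until the node's Ready condition turns True, which took roughly 17s here. Outside the test harness the same wait can be expressed declaratively; a sketch, under the same ha-931571 context assumption as above:

	    # Rough kubectl equivalent of the node_ready wait in the log above.
	    kubectl --context ha-931571 wait --for=condition=Ready node/ha-931571-m03 --timeout=6m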
	I1104 10:55:23.611469   37715 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
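	Those label selectors cover CoreDNS plus the etcd, kube-apiserver, kube-controller-manager, kube-proxy and kube-scheduler pods, and each pod is then checked individually against the first control plane below. A rough bulk equivalent of that readiness gate (sketch only; kubectl wait errors out if a selector matches nothing yet):

	    # Sketch: wait for the same system-critical pod sets that are polled below.
	    for sel in k8s-app=kube-dns component=etcd component=kube-apiserver \
	               component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler; do
	      kubectl --context ha-931571 -n kube-system wait --for=condition=Ready pod -l "$sel" --timeout=6m
	    done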
	I1104 10:55:23.611529   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods
	I1104 10:55:23.611538   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:23.611545   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:23.611550   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:23.616487   37715 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1104 10:55:23.623329   37715 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-5ss4v" in "kube-system" namespace to be "Ready" ...
	I1104 10:55:23.623422   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ss4v
	I1104 10:55:23.623428   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:23.623436   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:23.623440   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:23.626812   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:23.627478   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571
	I1104 10:55:23.627500   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:23.627509   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:23.627513   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:23.630024   37715 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1104 10:55:23.630705   37715 pod_ready.go:93] pod "coredns-7c65d6cfc9-5ss4v" in "kube-system" namespace has status "Ready":"True"
	I1104 10:55:23.630725   37715 pod_ready.go:82] duration metric: took 7.365313ms for pod "coredns-7c65d6cfc9-5ss4v" in "kube-system" namespace to be "Ready" ...
	I1104 10:55:23.630737   37715 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-s9wb4" in "kube-system" namespace to be "Ready" ...
	I1104 10:55:23.630804   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s9wb4
	I1104 10:55:23.630815   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:23.630826   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:23.630835   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:23.633089   37715 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1104 10:55:23.633668   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571
	I1104 10:55:23.633688   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:23.633703   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:23.633714   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:23.635922   37715 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1104 10:55:23.636490   37715 pod_ready.go:93] pod "coredns-7c65d6cfc9-s9wb4" in "kube-system" namespace has status "Ready":"True"
	I1104 10:55:23.636510   37715 pod_ready.go:82] duration metric: took 5.760939ms for pod "coredns-7c65d6cfc9-s9wb4" in "kube-system" namespace to be "Ready" ...
	I1104 10:55:23.636522   37715 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-931571" in "kube-system" namespace to be "Ready" ...
	I1104 10:55:23.636583   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/etcd-ha-931571
	I1104 10:55:23.636592   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:23.636602   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:23.636610   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:23.639359   37715 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1104 10:55:23.639900   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571
	I1104 10:55:23.639915   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:23.639922   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:23.639925   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:23.642474   37715 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1104 10:55:23.642946   37715 pod_ready.go:93] pod "etcd-ha-931571" in "kube-system" namespace has status "Ready":"True"
	I1104 10:55:23.642963   37715 pod_ready.go:82] duration metric: took 6.432226ms for pod "etcd-ha-931571" in "kube-system" namespace to be "Ready" ...
	I1104 10:55:23.642971   37715 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-931571-m02" in "kube-system" namespace to be "Ready" ...
	I1104 10:55:23.643028   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/etcd-ha-931571-m02
	I1104 10:55:23.643036   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:23.643043   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:23.643047   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:23.645331   37715 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1104 10:55:23.646060   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:55:23.646073   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:23.646080   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:23.646084   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:23.648315   37715 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1104 10:55:23.648847   37715 pod_ready.go:93] pod "etcd-ha-931571-m02" in "kube-system" namespace has status "Ready":"True"
	I1104 10:55:23.648862   37715 pod_ready.go:82] duration metric: took 5.88444ms for pod "etcd-ha-931571-m02" in "kube-system" namespace to be "Ready" ...
	I1104 10:55:23.648869   37715 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-931571-m03" in "kube-system" namespace to be "Ready" ...
	I1104 10:55:23.808246   37715 request.go:632] Waited for 159.312664ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/etcd-ha-931571-m03
	I1104 10:55:23.808304   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/etcd-ha-931571-m03
	I1104 10:55:23.808309   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:23.808316   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:23.808320   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:23.811540   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:24.007952   37715 request.go:632] Waited for 195.768208ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:24.008033   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:24.008045   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:24.008056   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:24.008066   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:24.011083   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:24.011703   37715 pod_ready.go:93] pod "etcd-ha-931571-m03" in "kube-system" namespace has status "Ready":"True"
	I1104 10:55:24.011724   37715 pod_ready.go:82] duration metric: took 362.848542ms for pod "etcd-ha-931571-m03" in "kube-system" namespace to be "Ready" ...
	I1104 10:55:24.011739   37715 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-931571" in "kube-system" namespace to be "Ready" ...
	I1104 10:55:24.207843   37715 request.go:632] Waited for 196.043868ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-931571
	I1104 10:55:24.207918   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-931571
	I1104 10:55:24.207925   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:24.207937   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:24.207947   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:24.211127   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:24.408352   37715 request.go:632] Waited for 196.308065ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/nodes/ha-931571
	I1104 10:55:24.408442   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571
	I1104 10:55:24.408450   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:24.408460   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:24.408469   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:24.411644   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:24.412279   37715 pod_ready.go:93] pod "kube-apiserver-ha-931571" in "kube-system" namespace has status "Ready":"True"
	I1104 10:55:24.412297   37715 pod_ready.go:82] duration metric: took 400.550124ms for pod "kube-apiserver-ha-931571" in "kube-system" namespace to be "Ready" ...
	I1104 10:55:24.412310   37715 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-931571-m02" in "kube-system" namespace to be "Ready" ...
	I1104 10:55:24.608501   37715 request.go:632] Waited for 196.123497ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-931571-m02
	I1104 10:55:24.608572   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-931571-m02
	I1104 10:55:24.608580   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:24.608590   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:24.608596   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:24.612062   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:24.808253   37715 request.go:632] Waited for 195.326237ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:55:24.808332   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:55:24.808343   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:24.808352   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:24.808358   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:24.811435   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:24.811848   37715 pod_ready.go:93] pod "kube-apiserver-ha-931571-m02" in "kube-system" namespace has status "Ready":"True"
	I1104 10:55:24.811868   37715 pod_ready.go:82] duration metric: took 399.549963ms for pod "kube-apiserver-ha-931571-m02" in "kube-system" namespace to be "Ready" ...
	I1104 10:55:24.811877   37715 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-931571-m03" in "kube-system" namespace to be "Ready" ...
	I1104 10:55:25.008126   37715 request.go:632] Waited for 196.158524ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-931571-m03
	I1104 10:55:25.008216   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-931571-m03
	I1104 10:55:25.008224   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:25.008232   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:25.008237   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:25.011898   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:25.207886   37715 request.go:632] Waited for 195.224715ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:25.207967   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:25.207975   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:25.207983   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:25.207987   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:25.211174   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:25.211794   37715 pod_ready.go:93] pod "kube-apiserver-ha-931571-m03" in "kube-system" namespace has status "Ready":"True"
	I1104 10:55:25.211815   37715 pod_ready.go:82] duration metric: took 399.930178ms for pod "kube-apiserver-ha-931571-m03" in "kube-system" namespace to be "Ready" ...
	I1104 10:55:25.211828   37715 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-931571" in "kube-system" namespace to be "Ready" ...
	I1104 10:55:25.407990   37715 request.go:632] Waited for 196.084804ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-931571
	I1104 10:55:25.408049   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-931571
	I1104 10:55:25.408054   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:25.408062   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:25.408065   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:25.411212   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:25.608267   37715 request.go:632] Waited for 196.399136ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/nodes/ha-931571
	I1104 10:55:25.608341   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571
	I1104 10:55:25.608348   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:25.608358   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:25.608363   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:25.611599   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:25.612277   37715 pod_ready.go:93] pod "kube-controller-manager-ha-931571" in "kube-system" namespace has status "Ready":"True"
	I1104 10:55:25.612297   37715 pod_ready.go:82] duration metric: took 400.459599ms for pod "kube-controller-manager-ha-931571" in "kube-system" namespace to be "Ready" ...
	I1104 10:55:25.612307   37715 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-931571-m02" in "kube-system" namespace to be "Ready" ...
	I1104 10:55:25.808295   37715 request.go:632] Waited for 195.907201ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-931571-m02
	I1104 10:55:25.808358   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-931571-m02
	I1104 10:55:25.808364   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:25.808371   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:25.808379   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:25.811856   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:26.007942   37715 request.go:632] Waited for 195.386929ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:55:26.008009   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:55:26.008020   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:26.008034   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:26.008043   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:26.010794   37715 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1104 10:55:26.011251   37715 pod_ready.go:93] pod "kube-controller-manager-ha-931571-m02" in "kube-system" namespace has status "Ready":"True"
	I1104 10:55:26.011269   37715 pod_ready.go:82] duration metric: took 398.955793ms for pod "kube-controller-manager-ha-931571-m02" in "kube-system" namespace to be "Ready" ...
	I1104 10:55:26.011279   37715 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-931571-m03" in "kube-system" namespace to be "Ready" ...
	I1104 10:55:26.207834   37715 request.go:632] Waited for 196.482261ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-931571-m03
	I1104 10:55:26.207909   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-931571-m03
	I1104 10:55:26.207922   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:26.207934   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:26.207939   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:26.211083   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:26.407914   37715 request.go:632] Waited for 196.093119ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:26.407994   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:26.407999   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:26.408006   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:26.408012   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:26.411522   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:26.412011   37715 pod_ready.go:93] pod "kube-controller-manager-ha-931571-m03" in "kube-system" namespace has status "Ready":"True"
	I1104 10:55:26.412034   37715 pod_ready.go:82] duration metric: took 400.747328ms for pod "kube-controller-manager-ha-931571-m03" in "kube-system" namespace to be "Ready" ...
	I1104 10:55:26.412048   37715 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-bvk6r" in "kube-system" namespace to be "Ready" ...
	I1104 10:55:26.608324   37715 request.go:632] Waited for 196.200888ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bvk6r
	I1104 10:55:26.608407   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bvk6r
	I1104 10:55:26.608414   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:26.608430   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:26.608437   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:26.611990   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:26.808246   37715 request.go:632] Waited for 195.355588ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/nodes/ha-931571
	I1104 10:55:26.808295   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571
	I1104 10:55:26.808300   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:26.808308   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:26.808311   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:26.811118   37715 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1104 10:55:26.811682   37715 pod_ready.go:93] pod "kube-proxy-bvk6r" in "kube-system" namespace has status "Ready":"True"
	I1104 10:55:26.811705   37715 pod_ready.go:82] duration metric: took 399.648214ms for pod "kube-proxy-bvk6r" in "kube-system" namespace to be "Ready" ...
	I1104 10:55:26.811718   37715 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-ttq4z" in "kube-system" namespace to be "Ready" ...
	I1104 10:55:27.008596   37715 request.go:632] Waited for 196.775543ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ttq4z
	I1104 10:55:27.008670   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ttq4z
	I1104 10:55:27.008677   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:27.008685   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:27.008691   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:27.012209   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:27.208175   37715 request.go:632] Waited for 195.363562ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:27.208234   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:27.208240   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:27.208247   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:27.208250   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:27.211552   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:27.212061   37715 pod_ready.go:93] pod "kube-proxy-ttq4z" in "kube-system" namespace has status "Ready":"True"
	I1104 10:55:27.212084   37715 pod_ready.go:82] duration metric: took 400.357853ms for pod "kube-proxy-ttq4z" in "kube-system" namespace to be "Ready" ...
	I1104 10:55:27.212098   37715 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-wz92s" in "kube-system" namespace to be "Ready" ...
	I1104 10:55:27.408120   37715 request.go:632] Waited for 195.934645ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wz92s
	I1104 10:55:27.408175   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wz92s
	I1104 10:55:27.408180   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:27.408188   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:27.408194   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:27.411594   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:27.607502   37715 request.go:632] Waited for 195.309631ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:55:27.607589   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:55:27.607599   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:27.607611   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:27.607621   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:27.610707   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:27.611551   37715 pod_ready.go:93] pod "kube-proxy-wz92s" in "kube-system" namespace has status "Ready":"True"
	I1104 10:55:27.611571   37715 pod_ready.go:82] duration metric: took 399.465223ms for pod "kube-proxy-wz92s" in "kube-system" namespace to be "Ready" ...
	I1104 10:55:27.611584   37715 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-931571" in "kube-system" namespace to be "Ready" ...
	I1104 10:55:27.807587   37715 request.go:632] Waited for 195.935372ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-931571
	I1104 10:55:27.807677   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-931571
	I1104 10:55:27.807686   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:27.807694   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:27.807697   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:27.810852   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:28.007894   37715 request.go:632] Waited for 196.377136ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/nodes/ha-931571
	I1104 10:55:28.007943   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571
	I1104 10:55:28.007948   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:28.007955   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:28.007959   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:28.010780   37715 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1104 10:55:28.011225   37715 pod_ready.go:93] pod "kube-scheduler-ha-931571" in "kube-system" namespace has status "Ready":"True"
	I1104 10:55:28.011242   37715 pod_ready.go:82] duration metric: took 399.65101ms for pod "kube-scheduler-ha-931571" in "kube-system" namespace to be "Ready" ...
	I1104 10:55:28.011252   37715 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-931571-m02" in "kube-system" namespace to be "Ready" ...
	I1104 10:55:28.208327   37715 request.go:632] Waited for 197.007106ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-931571-m02
	I1104 10:55:28.208398   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-931571-m02
	I1104 10:55:28.208406   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:28.208412   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:28.208417   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:28.211868   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:28.407823   37715 request.go:632] Waited for 195.386338ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:55:28.407915   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:55:28.407922   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:28.407929   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:28.407936   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:28.411100   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:28.411750   37715 pod_ready.go:93] pod "kube-scheduler-ha-931571-m02" in "kube-system" namespace has status "Ready":"True"
	I1104 10:55:28.411766   37715 pod_ready.go:82] duration metric: took 400.505326ms for pod "kube-scheduler-ha-931571-m02" in "kube-system" namespace to be "Ready" ...
	I1104 10:55:28.411776   37715 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-931571-m03" in "kube-system" namespace to be "Ready" ...
	I1104 10:55:28.607873   37715 request.go:632] Waited for 196.030747ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-931571-m03
	I1104 10:55:28.607978   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-931571-m03
	I1104 10:55:28.607989   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:28.607996   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:28.607999   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:28.611695   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:28.807696   37715 request.go:632] Waited for 195.284295ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:28.807770   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:28.807776   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:28.807783   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:28.807788   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:28.811278   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:28.812008   37715 pod_ready.go:93] pod "kube-scheduler-ha-931571-m03" in "kube-system" namespace has status "Ready":"True"
	I1104 10:55:28.812025   37715 pod_ready.go:82] duration metric: took 400.242831ms for pod "kube-scheduler-ha-931571-m03" in "kube-system" namespace to be "Ready" ...
	I1104 10:55:28.812037   37715 pod_ready.go:39] duration metric: took 5.200555034s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
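
The repeated "Waited for ... due to client-side throttling, not priority and fairness" lines above come from client-go's default client-side rate limiter, not from API Priority and Fairness on the server. A minimal sketch of where that limiter lives (the QPS/Burst values and kubeconfig path below are illustrative assumptions, not minikube's own settings):

package main

import (
	"context"
	"fmt"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
	config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}

	// client-go defaults to QPS=5, Burst=10; bursts of GETs beyond that are
	// delayed locally, which is what the "client-side throttling" waits report.
	// Raising the limits (illustrative values) shortens or removes those waits.
	config.QPS = 50
	config.Burst = 100

	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	pods, err := clientset.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("kube-system pods:", len(pods.Items))
}
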
	I1104 10:55:28.812050   37715 api_server.go:52] waiting for apiserver process to appear ...
	I1104 10:55:28.812101   37715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 10:55:28.825529   37715 api_server.go:72] duration metric: took 22.539799278s to wait for apiserver process to appear ...
	I1104 10:55:28.825558   37715 api_server.go:88] waiting for apiserver healthz status ...
	I1104 10:55:28.825578   37715 api_server.go:253] Checking apiserver healthz at https://192.168.39.67:8443/healthz ...
	I1104 10:55:28.829724   37715 api_server.go:279] https://192.168.39.67:8443/healthz returned 200:
	ok
	I1104 10:55:28.829787   37715 round_trippers.go:463] GET https://192.168.39.67:8443/version
	I1104 10:55:28.829795   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:28.829803   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:28.829807   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:28.830888   37715 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1104 10:55:28.830964   37715 api_server.go:141] control plane version: v1.31.2
	I1104 10:55:28.830984   37715 api_server.go:131] duration metric: took 5.41894ms to wait for apiserver health ...
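
At this point the log has confirmed the apiserver process (via pgrep over SSH) and then probed its /healthz and /version endpoints. A standalone sketch of that probe pattern, reusing the endpoint shown in the log and skipping TLS verification purely for brevity (minikube itself authenticates with the cluster's client certificates):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Sketch only: the real client presents cluster client certs instead.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	base := "https://192.168.39.67:8443" // apiserver endpoint from the log

	// Poll /healthz until the apiserver answers 200 "ok", as the log does.
	for i := 0; i < 30; i++ {
		resp, err := client.Get(base + "/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
				break
			}
		}
		time.Sleep(time.Second)
	}

	// Then read the control-plane version, mirroring the GET /version above.
	resp, err := client.Get(base + "/version")
	if err != nil {
		fmt.Println("version request failed:", err)
		return
	}
	defer resp.Body.Close()
	version, _ := io.ReadAll(resp.Body)
	fmt.Println("version:", string(version))
}
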
	I1104 10:55:28.830996   37715 system_pods.go:43] waiting for kube-system pods to appear ...
	I1104 10:55:29.008134   37715 request.go:632] Waited for 177.060621ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods
	I1104 10:55:29.008207   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods
	I1104 10:55:29.008237   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:29.008252   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:29.008298   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:29.014200   37715 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1104 10:55:29.021556   37715 system_pods.go:59] 24 kube-system pods found
	I1104 10:55:29.021592   37715 system_pods.go:61] "coredns-7c65d6cfc9-5ss4v" [b1994bcf-ce9e-4a5e-90e0-5f3e284218f4] Running
	I1104 10:55:29.021600   37715 system_pods.go:61] "coredns-7c65d6cfc9-s9wb4" [fd497087-82a1-4173-a1ca-87f47225cd80] Running
	I1104 10:55:29.021611   37715 system_pods.go:61] "etcd-ha-931571" [fdadf64d-457c-4f54-8824-770c47938a4d] Running
	I1104 10:55:29.021616   37715 system_pods.go:61] "etcd-ha-931571-m02" [b40b2a26-19b6-47f9-af25-dcbffbe55156] Running
	I1104 10:55:29.021627   37715 system_pods.go:61] "etcd-ha-931571-m03" [8bda5677-cbd9-4c5c-9a71-4d7d4ca3796b] Running
	I1104 10:55:29.021633   37715 system_pods.go:61] "kindnet-2n2ws" [f43095ed-404a-4c99-a271-a8c7fb6a3559] Running
	I1104 10:55:29.021643   37715 system_pods.go:61] "kindnet-bg4z6" [43eed78a-1357-4607-bff5-a1c896da4af2] Running
	I1104 10:55:29.021649   37715 system_pods.go:61] "kindnet-w2jwt" [be594a41-9200-4e2b-a8df-057c381bc0f7] Running
	I1104 10:55:29.021653   37715 system_pods.go:61] "kube-apiserver-ha-931571" [2ba59318-d54d-4948-8133-2ff2afa001e5] Running
	I1104 10:55:29.021658   37715 system_pods.go:61] "kube-apiserver-ha-931571-m02" [6a6bfd7d-cec1-4e07-90bf-c933f871eef1] Running
	I1104 10:55:29.021673   37715 system_pods.go:61] "kube-apiserver-ha-931571-m03" [cc3a9082-873f-4426-98a3-5fcafd0ecc49] Running
	I1104 10:55:29.021679   37715 system_pods.go:61] "kube-controller-manager-ha-931571" [62d03af1-aa91-4ebf-af21-19f760956cf5] Running
	I1104 10:55:29.021684   37715 system_pods.go:61] "kube-controller-manager-ha-931571-m02" [96d65b2a-66c8-411a-bb4b-5ff222b7832d] Running
	I1104 10:55:29.021689   37715 system_pods.go:61] "kube-controller-manager-ha-931571-m03" [a52ddcf8-6212-4701-823d-5d88f1291d38] Running
	I1104 10:55:29.021694   37715 system_pods.go:61] "kube-proxy-bvk6r" [5f293726-a3a3-4398-9b70-ca8f83c66d7c] Running
	I1104 10:55:29.021703   37715 system_pods.go:61] "kube-proxy-ttq4z" [115ca0e9-7fd8-4cbc-8f2a-ec4edfea2b2b] Running
	I1104 10:55:29.021708   37715 system_pods.go:61] "kube-proxy-wz92s" [a2e065c2-9645-44e4-b4e8-dc787b0c6662] Running
	I1104 10:55:29.021714   37715 system_pods.go:61] "kube-scheduler-ha-931571" [8bc3d9c3-2b41-4f54-a511-34939218fa5b] Running
	I1104 10:55:29.021718   37715 system_pods.go:61] "kube-scheduler-ha-931571-m02" [4329adba-71fa-425a-b379-6e52af90b458] Running
	I1104 10:55:29.021723   37715 system_pods.go:61] "kube-scheduler-ha-931571-m03" [db854b86-c89b-43a8-b3c4-e1cca5033fca] Running
	I1104 10:55:29.021739   37715 system_pods.go:61] "kube-vip-ha-931571" [f9948426-2770-47cf-b610-ecfea5b17be9] Running / Ready:ContainersNotReady (containers with unready status: [kube-vip]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-vip])
	I1104 10:55:29.021748   37715 system_pods.go:61] "kube-vip-ha-931571-m02" [860a8a9e-b839-4c23-80b5-415a62fca083] Running / Ready:ContainersNotReady (containers with unready status: [kube-vip]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-vip])
	I1104 10:55:29.021757   37715 system_pods.go:61] "kube-vip-ha-931571-m03" [cca6009a-1a2e-418c-8507-ced1c3c73333] Running / Ready:ContainersNotReady (containers with unready status: [kube-vip]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-vip])
	I1104 10:55:29.021768   37715 system_pods.go:61] "storage-provisioner" [3eb09a1d-0033-428a-a305-aa2901b20566] Running
	I1104 10:55:29.021776   37715 system_pods.go:74] duration metric: took 190.77233ms to wait for pod list to return data ...
	I1104 10:55:29.021785   37715 default_sa.go:34] waiting for default service account to be created ...
	I1104 10:55:29.207606   37715 request.go:632] Waited for 185.728415ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/default/serviceaccounts
	I1104 10:55:29.207670   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/default/serviceaccounts
	I1104 10:55:29.207676   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:29.207686   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:29.207695   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:29.218692   37715 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I1104 10:55:29.218828   37715 default_sa.go:45] found service account: "default"
	I1104 10:55:29.218847   37715 default_sa.go:55] duration metric: took 197.054864ms for default service account to be created ...
	I1104 10:55:29.218857   37715 system_pods.go:116] waiting for k8s-apps to be running ...
	I1104 10:55:29.408474   37715 request.go:632] Waited for 189.535523ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods
	I1104 10:55:29.408534   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods
	I1104 10:55:29.408539   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:29.408546   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:29.408550   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:29.414296   37715 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1104 10:55:29.422499   37715 system_pods.go:86] 24 kube-system pods found
	I1104 10:55:29.422532   37715 system_pods.go:89] "coredns-7c65d6cfc9-5ss4v" [b1994bcf-ce9e-4a5e-90e0-5f3e284218f4] Running
	I1104 10:55:29.422537   37715 system_pods.go:89] "coredns-7c65d6cfc9-s9wb4" [fd497087-82a1-4173-a1ca-87f47225cd80] Running
	I1104 10:55:29.422541   37715 system_pods.go:89] "etcd-ha-931571" [fdadf64d-457c-4f54-8824-770c47938a4d] Running
	I1104 10:55:29.422545   37715 system_pods.go:89] "etcd-ha-931571-m02" [b40b2a26-19b6-47f9-af25-dcbffbe55156] Running
	I1104 10:55:29.422549   37715 system_pods.go:89] "etcd-ha-931571-m03" [8bda5677-cbd9-4c5c-9a71-4d7d4ca3796b] Running
	I1104 10:55:29.422553   37715 system_pods.go:89] "kindnet-2n2ws" [f43095ed-404a-4c99-a271-a8c7fb6a3559] Running
	I1104 10:55:29.422557   37715 system_pods.go:89] "kindnet-bg4z6" [43eed78a-1357-4607-bff5-a1c896da4af2] Running
	I1104 10:55:29.422560   37715 system_pods.go:89] "kindnet-w2jwt" [be594a41-9200-4e2b-a8df-057c381bc0f7] Running
	I1104 10:55:29.422563   37715 system_pods.go:89] "kube-apiserver-ha-931571" [2ba59318-d54d-4948-8133-2ff2afa001e5] Running
	I1104 10:55:29.422567   37715 system_pods.go:89] "kube-apiserver-ha-931571-m02" [6a6bfd7d-cec1-4e07-90bf-c933f871eef1] Running
	I1104 10:55:29.422571   37715 system_pods.go:89] "kube-apiserver-ha-931571-m03" [cc3a9082-873f-4426-98a3-5fcafd0ecc49] Running
	I1104 10:55:29.422576   37715 system_pods.go:89] "kube-controller-manager-ha-931571" [62d03af1-aa91-4ebf-af21-19f760956cf5] Running
	I1104 10:55:29.422582   37715 system_pods.go:89] "kube-controller-manager-ha-931571-m02" [96d65b2a-66c8-411a-bb4b-5ff222b7832d] Running
	I1104 10:55:29.422588   37715 system_pods.go:89] "kube-controller-manager-ha-931571-m03" [a52ddcf8-6212-4701-823d-5d88f1291d38] Running
	I1104 10:55:29.422593   37715 system_pods.go:89] "kube-proxy-bvk6r" [5f293726-a3a3-4398-9b70-ca8f83c66d7c] Running
	I1104 10:55:29.422598   37715 system_pods.go:89] "kube-proxy-ttq4z" [115ca0e9-7fd8-4cbc-8f2a-ec4edfea2b2b] Running
	I1104 10:55:29.422604   37715 system_pods.go:89] "kube-proxy-wz92s" [a2e065c2-9645-44e4-b4e8-dc787b0c6662] Running
	I1104 10:55:29.422614   37715 system_pods.go:89] "kube-scheduler-ha-931571" [8bc3d9c3-2b41-4f54-a511-34939218fa5b] Running
	I1104 10:55:29.422621   37715 system_pods.go:89] "kube-scheduler-ha-931571-m02" [4329adba-71fa-425a-b379-6e52af90b458] Running
	I1104 10:55:29.422624   37715 system_pods.go:89] "kube-scheduler-ha-931571-m03" [db854b86-c89b-43a8-b3c4-e1cca5033fca] Running
	I1104 10:55:29.422633   37715 system_pods.go:89] "kube-vip-ha-931571" [f9948426-2770-47cf-b610-ecfea5b17be9] Running / Ready:ContainersNotReady (containers with unready status: [kube-vip]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-vip])
	I1104 10:55:29.422642   37715 system_pods.go:89] "kube-vip-ha-931571-m02" [860a8a9e-b839-4c23-80b5-415a62fca083] Running / Ready:ContainersNotReady (containers with unready status: [kube-vip]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-vip])
	I1104 10:55:29.422650   37715 system_pods.go:89] "kube-vip-ha-931571-m03" [cca6009a-1a2e-418c-8507-ced1c3c73333] Running / Ready:ContainersNotReady (containers with unready status: [kube-vip]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-vip])
	I1104 10:55:29.422656   37715 system_pods.go:89] "storage-provisioner" [3eb09a1d-0033-428a-a305-aa2901b20566] Running
	I1104 10:55:29.422665   37715 system_pods.go:126] duration metric: took 203.801845ms to wait for k8s-apps to be running ...
	I1104 10:55:29.422676   37715 system_svc.go:44] waiting for kubelet service to be running ....
	I1104 10:55:29.422727   37715 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1104 10:55:29.439259   37715 system_svc.go:56] duration metric: took 16.56809ms WaitForService to wait for kubelet
	I1104 10:55:29.439296   37715 kubeadm.go:582] duration metric: took 23.153569026s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1104 10:55:29.439318   37715 node_conditions.go:102] verifying NodePressure condition ...
	I1104 10:55:29.607660   37715 request.go:632] Waited for 168.244277ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/nodes
	I1104 10:55:29.607713   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes
	I1104 10:55:29.607718   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:29.607726   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:29.607732   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:29.611371   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:29.612755   37715 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1104 10:55:29.612781   37715 node_conditions.go:123] node cpu capacity is 2
	I1104 10:55:29.612794   37715 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1104 10:55:29.612800   37715 node_conditions.go:123] node cpu capacity is 2
	I1104 10:55:29.612807   37715 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1104 10:55:29.612811   37715 node_conditions.go:123] node cpu capacity is 2
	I1104 10:55:29.612817   37715 node_conditions.go:105] duration metric: took 173.492197ms to run NodePressure ...
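
The NodePressure step above reads each node's capacity (ephemeral storage and CPU) from the Nodes API. A short client-go sketch of the same read, assuming a kubeconfig at the default path; this is an illustration of the API call, not minikube's own code:

package main

import (
	"context"
	"fmt"
	"path/filepath"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
	config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// List all nodes and print the capacities the log reports per node.
	nodes, err := clientset.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, node := range nodes.Items {
		cpu := node.Status.Capacity[corev1.ResourceCPU]
		storage := node.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", node.Name, cpu.String(), storage.String())
	}
}
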
	I1104 10:55:29.612832   37715 start.go:241] waiting for startup goroutines ...
	I1104 10:55:29.612860   37715 start.go:255] writing updated cluster config ...
	I1104 10:55:29.613201   37715 ssh_runner.go:195] Run: rm -f paused
	I1104 10:55:29.662232   37715 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1104 10:55:29.664453   37715 out.go:177] * Done! kubectl is now configured to use "ha-931571" cluster and "default" namespace by default
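
The run finishes by writing the updated cluster config and pointing kubectl at the "ha-931571" context. A small sketch that loads the resulting kubeconfig with client-go's clientcmd package and prints the active context and namespace; the default kubeconfig path is an assumption:

package main

import (
	"fmt"
	"path/filepath"

	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	path := filepath.Join(homedir.HomeDir(), ".kube", "config")

	// Load the kubeconfig the run above wrote and report what kubectl will use.
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		panic(err)
	}
	fmt.Println("current-context:", cfg.CurrentContext)
	if ctx, ok := cfg.Contexts[cfg.CurrentContext]; ok {
		fmt.Println("cluster:", ctx.Cluster, "namespace:", ctx.Namespace)
	}
}
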
	
	
	==> CRI-O <==
	Nov 04 10:59:26 ha-931571 crio[659]: time="2024-11-04 10:59:26.336283971Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730717966336262161,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156098,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9299a203-cb68-48a6-922c-3c428e775451 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 04 10:59:26 ha-931571 crio[659]: time="2024-11-04 10:59:26.336953994Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b66a731f-9fec-4a75-8c4e-174d0f073b78 name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 10:59:26 ha-931571 crio[659]: time="2024-11-04 10:59:26.337004537Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b66a731f-9fec-4a75-8c4e-174d0f073b78 name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 10:59:26 ha-931571 crio[659]: time="2024-11-04 10:59:26.337241474Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:801830521b8c68ec780508f94b0f3d8c52a6c3d5458328e719c2ce3178c47cc3,PodSandboxId:c376c65bb2b6ba1d92a006e61c82e1ca033b12c8a5bfc737dbac753ed4190360,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:7,},Image:&ImageSpec{Image:77fa55e9c991e50f69a8af41fbbbe0cd8a6fa6fd87327b07ed933c1c02a4f488,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:77fa55e9c991e50f69a8af41fbbbe0cd8a6fa6fd87327b07ed933c1c02a4f488,State:CONTAINER_EXITED,CreatedAt:1730717933792975882,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-931571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7bfae2f58ae7de463dba4b274c633ef,},Annotations:map[string]string{io.kubernetes.container.hash: 633bdfb,io.kubernetes.container.restartCount: 7,io.kubernetes.container.terminationMessagePath: /dev/te
rmination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ecc02a44b9547818a8aaa2b603bb97e4465acb589e9938089cc84862bb537651,PodSandboxId:ca422d1f835b462e7c44e7832053f6b8843511d5eeba3ced31c8b0b6f51661ee,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1730717733201575265,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-nslmz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 68017266-8187-488d-ab36-2a5af294fa2e,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/ter
mination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:400aa38b5335627cc08143a9b2a5627b7fee85d555c2cc40a536ce98a76dc457,PodSandboxId:c6e22705ccc1865b8bc5effb151c1f9d726558ad88b6a3bcf86428c0e051f88a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730717598667544377,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-s9wb4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd497087-82a1-4173-a1ca-87f47225cd80,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerP
ort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49e75724c5eada14bded615bd1ba93f602ae8bcdf4c5ed95b646991a79dc403c,PodSandboxId:bcbca8745afa774e9251a00635a6a08e6f86c862db07fa69ac81ee2c0b157967,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730717598624298430,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-5ss4v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1994bcf-ce9e-4a5e-90e0-5f3e284218f4,},A
nnotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8efbd7a72ea51074ffa14c6c164b0072c5d57e24d1bd5b6d1a123aa8216069c,PodSandboxId:b15baa796a09ec04b514d2061ed59422516c1f7e4439ba3fcbebb73cbd3afa05,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730717598609872957,Labels:ma
p[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3eb09a1d-0033-428a-a305-aa2901b20566,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4401315f385bf2a6e5d875c655d4d61ff68d76ccf428956355ffd500a9ce2bc0,PodSandboxId:220337aaf496c29271e7e054b3cdfea66b7c252c48cb49a49e7654fb61d21a91,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5,State:CONTAINER_RUNNING,CreatedAt:173071758708362
2058,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2n2ws,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f43095ed-404a-4c99-a271-a8c7fb6a3559,},Annotations:map[string]string{io.kubernetes.container.hash: e48ee0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e592fe17c5f71c11cf25a871efff88360e0232b8ea66e460109b68f40581ba8,PodSandboxId:88e06a89dd6f22e1089e72d0e95bb740d4472413789aed6751e5201c34bce07d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730717583914338539,Labels:map[string]string{io.kub
ernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bvk6r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f293726-a3a3-4398-9b70-ca8f83c66d7c,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e50ab0290e7c2f22e6df511793236d28a6e0f3e1d5dbdcdd2e3447e4c26c2e6c,PodSandboxId:b36f0d25b985ad35c72d61e5d419af4761c0ed5584860b2c0eda0017653cfaa5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730717572302806843,Labels:map[string]string{io.kubernetes.container.name: kube-
scheduler,io.kubernetes.pod.name: kube-scheduler-ha-931571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04abf0ed929591b9a922eba9b45e06b4,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4572c8bcb28cdf71917ee1df07e150610c3e183aaa1243eb84ab3c083f31f7bc,PodSandboxId:9659e6073c7aea4a2bc7bbd2bc5081cfaf29c86595120748fa2b6d637cfd0405,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730717572280739492,Labels:map[string]string{io.kubernetes.container.name: kube-controller-m
anager,io.kubernetes.pod.name: kube-controller-manager-ha-931571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4685ec45b7a2365863fd185bc1066ff5,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82e4be064be10644428d59bf1bc4467a8666cf78ec7b830a51e614de7c4b3150,PodSandboxId:d779a632ccdcabf2a834569e1b03676bb2cb2ecac031cdb417048bfd227afd27,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730717572221533934,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.ku
bernetes.pod.name: kube-apiserver-ha-931571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 488ad91ee064d442db18849afe83c778,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2d32daf142ba5b73e2f0491ea4ad911e8456739f191faf1cc001827eb91790c,PodSandboxId:76529e2f353a6384d08c629e08edb56d628147ffb7c9b12a3b4fd7f6b94b2b61,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730717572176692911,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-931571,io.kube
rnetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bdade1472bd07799de85a7bf300c651f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b66a731f-9fec-4a75-8c4e-174d0f073b78 name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 10:59:26 ha-931571 crio[659]: time="2024-11-04 10:59:26.372597488Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7a3b3c67-023b-4159-b795-c2c938064248 name=/runtime.v1.RuntimeService/Version
	Nov 04 10:59:26 ha-931571 crio[659]: time="2024-11-04 10:59:26.372700563Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7a3b3c67-023b-4159-b795-c2c938064248 name=/runtime.v1.RuntimeService/Version
	Nov 04 10:59:26 ha-931571 crio[659]: time="2024-11-04 10:59:26.374067236Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4cc39e0c-cfb4-4ce9-bdbc-8b88e58887a7 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 04 10:59:26 ha-931571 crio[659]: time="2024-11-04 10:59:26.374531729Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730717966374508985,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156098,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4cc39e0c-cfb4-4ce9-bdbc-8b88e58887a7 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 04 10:59:26 ha-931571 crio[659]: time="2024-11-04 10:59:26.374995442Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=de76c830-3b3f-43c3-86aa-452974675ec0 name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 10:59:26 ha-931571 crio[659]: time="2024-11-04 10:59:26.375045451Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=de76c830-3b3f-43c3-86aa-452974675ec0 name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 10:59:26 ha-931571 crio[659]: time="2024-11-04 10:59:26.375282803Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:801830521b8c68ec780508f94b0f3d8c52a6c3d5458328e719c2ce3178c47cc3,PodSandboxId:c376c65bb2b6ba1d92a006e61c82e1ca033b12c8a5bfc737dbac753ed4190360,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:7,},Image:&ImageSpec{Image:77fa55e9c991e50f69a8af41fbbbe0cd8a6fa6fd87327b07ed933c1c02a4f488,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:77fa55e9c991e50f69a8af41fbbbe0cd8a6fa6fd87327b07ed933c1c02a4f488,State:CONTAINER_EXITED,CreatedAt:1730717933792975882,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-931571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7bfae2f58ae7de463dba4b274c633ef,},Annotations:map[string]string{io.kubernetes.container.hash: 633bdfb,io.kubernetes.container.restartCount: 7,io.kubernetes.container.terminationMessagePath: /dev/te
rmination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ecc02a44b9547818a8aaa2b603bb97e4465acb589e9938089cc84862bb537651,PodSandboxId:ca422d1f835b462e7c44e7832053f6b8843511d5eeba3ced31c8b0b6f51661ee,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1730717733201575265,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-nslmz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 68017266-8187-488d-ab36-2a5af294fa2e,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/ter
mination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:400aa38b5335627cc08143a9b2a5627b7fee85d555c2cc40a536ce98a76dc457,PodSandboxId:c6e22705ccc1865b8bc5effb151c1f9d726558ad88b6a3bcf86428c0e051f88a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730717598667544377,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-s9wb4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd497087-82a1-4173-a1ca-87f47225cd80,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerP
ort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49e75724c5eada14bded615bd1ba93f602ae8bcdf4c5ed95b646991a79dc403c,PodSandboxId:bcbca8745afa774e9251a00635a6a08e6f86c862db07fa69ac81ee2c0b157967,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730717598624298430,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-5ss4v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1994bcf-ce9e-4a5e-90e0-5f3e284218f4,},A
nnotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8efbd7a72ea51074ffa14c6c164b0072c5d57e24d1bd5b6d1a123aa8216069c,PodSandboxId:b15baa796a09ec04b514d2061ed59422516c1f7e4439ba3fcbebb73cbd3afa05,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730717598609872957,Labels:ma
p[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3eb09a1d-0033-428a-a305-aa2901b20566,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4401315f385bf2a6e5d875c655d4d61ff68d76ccf428956355ffd500a9ce2bc0,PodSandboxId:220337aaf496c29271e7e054b3cdfea66b7c252c48cb49a49e7654fb61d21a91,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5,State:CONTAINER_RUNNING,CreatedAt:173071758708362
2058,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2n2ws,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f43095ed-404a-4c99-a271-a8c7fb6a3559,},Annotations:map[string]string{io.kubernetes.container.hash: e48ee0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e592fe17c5f71c11cf25a871efff88360e0232b8ea66e460109b68f40581ba8,PodSandboxId:88e06a89dd6f22e1089e72d0e95bb740d4472413789aed6751e5201c34bce07d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730717583914338539,Labels:map[string]string{io.kub
ernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bvk6r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f293726-a3a3-4398-9b70-ca8f83c66d7c,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e50ab0290e7c2f22e6df511793236d28a6e0f3e1d5dbdcdd2e3447e4c26c2e6c,PodSandboxId:b36f0d25b985ad35c72d61e5d419af4761c0ed5584860b2c0eda0017653cfaa5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730717572302806843,Labels:map[string]string{io.kubernetes.container.name: kube-
scheduler,io.kubernetes.pod.name: kube-scheduler-ha-931571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04abf0ed929591b9a922eba9b45e06b4,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4572c8bcb28cdf71917ee1df07e150610c3e183aaa1243eb84ab3c083f31f7bc,PodSandboxId:9659e6073c7aea4a2bc7bbd2bc5081cfaf29c86595120748fa2b6d637cfd0405,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730717572280739492,Labels:map[string]string{io.kubernetes.container.name: kube-controller-m
anager,io.kubernetes.pod.name: kube-controller-manager-ha-931571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4685ec45b7a2365863fd185bc1066ff5,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82e4be064be10644428d59bf1bc4467a8666cf78ec7b830a51e614de7c4b3150,PodSandboxId:d779a632ccdcabf2a834569e1b03676bb2cb2ecac031cdb417048bfd227afd27,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730717572221533934,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.ku
bernetes.pod.name: kube-apiserver-ha-931571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 488ad91ee064d442db18849afe83c778,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2d32daf142ba5b73e2f0491ea4ad911e8456739f191faf1cc001827eb91790c,PodSandboxId:76529e2f353a6384d08c629e08edb56d628147ffb7c9b12a3b4fd7f6b94b2b61,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730717572176692911,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-931571,io.kube
rnetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bdade1472bd07799de85a7bf300c651f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=de76c830-3b3f-43c3-86aa-452974675ec0 name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 10:59:26 ha-931571 crio[659]: time="2024-11-04 10:59:26.409236637Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3fbc69fb-666d-4a8d-a4f8-92970f4c237b name=/runtime.v1.RuntimeService/Version
	Nov 04 10:59:26 ha-931571 crio[659]: time="2024-11-04 10:59:26.409312108Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3fbc69fb-666d-4a8d-a4f8-92970f4c237b name=/runtime.v1.RuntimeService/Version
	Nov 04 10:59:26 ha-931571 crio[659]: time="2024-11-04 10:59:26.410435782Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=073ead18-f63e-4237-b2cb-bf9082069f5c name=/runtime.v1.ImageService/ImageFsInfo
	Nov 04 10:59:26 ha-931571 crio[659]: time="2024-11-04 10:59:26.411060987Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730717966411037486,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156098,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=073ead18-f63e-4237-b2cb-bf9082069f5c name=/runtime.v1.ImageService/ImageFsInfo
	Nov 04 10:59:26 ha-931571 crio[659]: time="2024-11-04 10:59:26.411461336Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=58da5b01-0fc2-4bfe-9286-fbc934e502df name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 10:59:26 ha-931571 crio[659]: time="2024-11-04 10:59:26.411510332Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=58da5b01-0fc2-4bfe-9286-fbc934e502df name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 10:59:26 ha-931571 crio[659]: time="2024-11-04 10:59:26.411757154Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:801830521b8c68ec780508f94b0f3d8c52a6c3d5458328e719c2ce3178c47cc3,PodSandboxId:c376c65bb2b6ba1d92a006e61c82e1ca033b12c8a5bfc737dbac753ed4190360,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:7,},Image:&ImageSpec{Image:77fa55e9c991e50f69a8af41fbbbe0cd8a6fa6fd87327b07ed933c1c02a4f488,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:77fa55e9c991e50f69a8af41fbbbe0cd8a6fa6fd87327b07ed933c1c02a4f488,State:CONTAINER_EXITED,CreatedAt:1730717933792975882,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-931571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7bfae2f58ae7de463dba4b274c633ef,},Annotations:map[string]string{io.kubernetes.container.hash: 633bdfb,io.kubernetes.container.restartCount: 7,io.kubernetes.container.terminationMessagePath: /dev/te
rmination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ecc02a44b9547818a8aaa2b603bb97e4465acb589e9938089cc84862bb537651,PodSandboxId:ca422d1f835b462e7c44e7832053f6b8843511d5eeba3ced31c8b0b6f51661ee,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1730717733201575265,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-nslmz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 68017266-8187-488d-ab36-2a5af294fa2e,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/ter
mination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:400aa38b5335627cc08143a9b2a5627b7fee85d555c2cc40a536ce98a76dc457,PodSandboxId:c6e22705ccc1865b8bc5effb151c1f9d726558ad88b6a3bcf86428c0e051f88a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730717598667544377,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-s9wb4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd497087-82a1-4173-a1ca-87f47225cd80,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerP
ort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49e75724c5eada14bded615bd1ba93f602ae8bcdf4c5ed95b646991a79dc403c,PodSandboxId:bcbca8745afa774e9251a00635a6a08e6f86c862db07fa69ac81ee2c0b157967,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730717598624298430,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-5ss4v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1994bcf-ce9e-4a5e-90e0-5f3e284218f4,},A
nnotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8efbd7a72ea51074ffa14c6c164b0072c5d57e24d1bd5b6d1a123aa8216069c,PodSandboxId:b15baa796a09ec04b514d2061ed59422516c1f7e4439ba3fcbebb73cbd3afa05,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730717598609872957,Labels:ma
p[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3eb09a1d-0033-428a-a305-aa2901b20566,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4401315f385bf2a6e5d875c655d4d61ff68d76ccf428956355ffd500a9ce2bc0,PodSandboxId:220337aaf496c29271e7e054b3cdfea66b7c252c48cb49a49e7654fb61d21a91,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5,State:CONTAINER_RUNNING,CreatedAt:173071758708362
2058,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2n2ws,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f43095ed-404a-4c99-a271-a8c7fb6a3559,},Annotations:map[string]string{io.kubernetes.container.hash: e48ee0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e592fe17c5f71c11cf25a871efff88360e0232b8ea66e460109b68f40581ba8,PodSandboxId:88e06a89dd6f22e1089e72d0e95bb740d4472413789aed6751e5201c34bce07d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730717583914338539,Labels:map[string]string{io.kub
ernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bvk6r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f293726-a3a3-4398-9b70-ca8f83c66d7c,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e50ab0290e7c2f22e6df511793236d28a6e0f3e1d5dbdcdd2e3447e4c26c2e6c,PodSandboxId:b36f0d25b985ad35c72d61e5d419af4761c0ed5584860b2c0eda0017653cfaa5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730717572302806843,Labels:map[string]string{io.kubernetes.container.name: kube-
scheduler,io.kubernetes.pod.name: kube-scheduler-ha-931571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04abf0ed929591b9a922eba9b45e06b4,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4572c8bcb28cdf71917ee1df07e150610c3e183aaa1243eb84ab3c083f31f7bc,PodSandboxId:9659e6073c7aea4a2bc7bbd2bc5081cfaf29c86595120748fa2b6d637cfd0405,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730717572280739492,Labels:map[string]string{io.kubernetes.container.name: kube-controller-m
anager,io.kubernetes.pod.name: kube-controller-manager-ha-931571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4685ec45b7a2365863fd185bc1066ff5,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82e4be064be10644428d59bf1bc4467a8666cf78ec7b830a51e614de7c4b3150,PodSandboxId:d779a632ccdcabf2a834569e1b03676bb2cb2ecac031cdb417048bfd227afd27,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730717572221533934,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.ku
bernetes.pod.name: kube-apiserver-ha-931571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 488ad91ee064d442db18849afe83c778,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2d32daf142ba5b73e2f0491ea4ad911e8456739f191faf1cc001827eb91790c,PodSandboxId:76529e2f353a6384d08c629e08edb56d628147ffb7c9b12a3b4fd7f6b94b2b61,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730717572176692911,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-931571,io.kube
rnetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bdade1472bd07799de85a7bf300c651f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=58da5b01-0fc2-4bfe-9286-fbc934e502df name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 10:59:26 ha-931571 crio[659]: time="2024-11-04 10:59:26.452590758Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0b3ccf5a-d4a2-441c-90a5-95ebdc5846ed name=/runtime.v1.RuntimeService/Version
	Nov 04 10:59:26 ha-931571 crio[659]: time="2024-11-04 10:59:26.452692677Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0b3ccf5a-d4a2-441c-90a5-95ebdc5846ed name=/runtime.v1.RuntimeService/Version
	Nov 04 10:59:26 ha-931571 crio[659]: time="2024-11-04 10:59:26.453624917Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2cbcc330-055f-489b-b28b-a36202de385c name=/runtime.v1.ImageService/ImageFsInfo
	Nov 04 10:59:26 ha-931571 crio[659]: time="2024-11-04 10:59:26.454201513Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730717966454176247,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156098,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2cbcc330-055f-489b-b28b-a36202de385c name=/runtime.v1.ImageService/ImageFsInfo
	Nov 04 10:59:26 ha-931571 crio[659]: time="2024-11-04 10:59:26.454876630Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=37e689e3-f79d-43df-a1d1-cb7866fc4ecc name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 10:59:26 ha-931571 crio[659]: time="2024-11-04 10:59:26.454945555Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=37e689e3-f79d-43df-a1d1-cb7866fc4ecc name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 10:59:26 ha-931571 crio[659]: time="2024-11-04 10:59:26.455161874Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:801830521b8c68ec780508f94b0f3d8c52a6c3d5458328e719c2ce3178c47cc3,PodSandboxId:c376c65bb2b6ba1d92a006e61c82e1ca033b12c8a5bfc737dbac753ed4190360,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:7,},Image:&ImageSpec{Image:77fa55e9c991e50f69a8af41fbbbe0cd8a6fa6fd87327b07ed933c1c02a4f488,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:77fa55e9c991e50f69a8af41fbbbe0cd8a6fa6fd87327b07ed933c1c02a4f488,State:CONTAINER_EXITED,CreatedAt:1730717933792975882,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-931571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7bfae2f58ae7de463dba4b274c633ef,},Annotations:map[string]string{io.kubernetes.container.hash: 633bdfb,io.kubernetes.container.restartCount: 7,io.kubernetes.container.terminationMessagePath: /dev/te
rmination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ecc02a44b9547818a8aaa2b603bb97e4465acb589e9938089cc84862bb537651,PodSandboxId:ca422d1f835b462e7c44e7832053f6b8843511d5eeba3ced31c8b0b6f51661ee,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1730717733201575265,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-nslmz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 68017266-8187-488d-ab36-2a5af294fa2e,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/ter
mination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:400aa38b5335627cc08143a9b2a5627b7fee85d555c2cc40a536ce98a76dc457,PodSandboxId:c6e22705ccc1865b8bc5effb151c1f9d726558ad88b6a3bcf86428c0e051f88a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730717598667544377,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-s9wb4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd497087-82a1-4173-a1ca-87f47225cd80,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerP
ort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49e75724c5eada14bded615bd1ba93f602ae8bcdf4c5ed95b646991a79dc403c,PodSandboxId:bcbca8745afa774e9251a00635a6a08e6f86c862db07fa69ac81ee2c0b157967,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730717598624298430,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-5ss4v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1994bcf-ce9e-4a5e-90e0-5f3e284218f4,},A
nnotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8efbd7a72ea51074ffa14c6c164b0072c5d57e24d1bd5b6d1a123aa8216069c,PodSandboxId:b15baa796a09ec04b514d2061ed59422516c1f7e4439ba3fcbebb73cbd3afa05,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730717598609872957,Labels:ma
p[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3eb09a1d-0033-428a-a305-aa2901b20566,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4401315f385bf2a6e5d875c655d4d61ff68d76ccf428956355ffd500a9ce2bc0,PodSandboxId:220337aaf496c29271e7e054b3cdfea66b7c252c48cb49a49e7654fb61d21a91,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5,State:CONTAINER_RUNNING,CreatedAt:173071758708362
2058,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2n2ws,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f43095ed-404a-4c99-a271-a8c7fb6a3559,},Annotations:map[string]string{io.kubernetes.container.hash: e48ee0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e592fe17c5f71c11cf25a871efff88360e0232b8ea66e460109b68f40581ba8,PodSandboxId:88e06a89dd6f22e1089e72d0e95bb740d4472413789aed6751e5201c34bce07d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730717583914338539,Labels:map[string]string{io.kub
ernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bvk6r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f293726-a3a3-4398-9b70-ca8f83c66d7c,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e50ab0290e7c2f22e6df511793236d28a6e0f3e1d5dbdcdd2e3447e4c26c2e6c,PodSandboxId:b36f0d25b985ad35c72d61e5d419af4761c0ed5584860b2c0eda0017653cfaa5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730717572302806843,Labels:map[string]string{io.kubernetes.container.name: kube-
scheduler,io.kubernetes.pod.name: kube-scheduler-ha-931571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04abf0ed929591b9a922eba9b45e06b4,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4572c8bcb28cdf71917ee1df07e150610c3e183aaa1243eb84ab3c083f31f7bc,PodSandboxId:9659e6073c7aea4a2bc7bbd2bc5081cfaf29c86595120748fa2b6d637cfd0405,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730717572280739492,Labels:map[string]string{io.kubernetes.container.name: kube-controller-m
anager,io.kubernetes.pod.name: kube-controller-manager-ha-931571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4685ec45b7a2365863fd185bc1066ff5,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82e4be064be10644428d59bf1bc4467a8666cf78ec7b830a51e614de7c4b3150,PodSandboxId:d779a632ccdcabf2a834569e1b03676bb2cb2ecac031cdb417048bfd227afd27,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730717572221533934,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.ku
bernetes.pod.name: kube-apiserver-ha-931571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 488ad91ee064d442db18849afe83c778,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2d32daf142ba5b73e2f0491ea4ad911e8456739f191faf1cc001827eb91790c,PodSandboxId:76529e2f353a6384d08c629e08edb56d628147ffb7c9b12a3b4fd7f6b94b2b61,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730717572176692911,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-931571,io.kube
rnetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bdade1472bd07799de85a7bf300c651f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=37e689e3-f79d-43df-a1d1-cb7866fc4ecc name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	801830521b8c6       77fa55e9c991e50f69a8af41fbbbe0cd8a6fa6fd87327b07ed933c1c02a4f488                                      32 seconds ago      Exited              kube-vip                  7                   c376c65bb2b6b       kube-vip-ha-931571
	ecc02a44b9547       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   ca422d1f835b4       busybox-7dff88458-nslmz
	400aa38b53356       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   c6e22705ccc18       coredns-7c65d6cfc9-s9wb4
	49e75724c5ead       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   bcbca8745afa7       coredns-7c65d6cfc9-5ss4v
	f8efbd7a72ea5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   b15baa796a09e       storage-provisioner
	4401315f385bf       docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16    6 minutes ago       Running             kindnet-cni               0                   220337aaf496c       kindnet-2n2ws
	6e592fe17c5f7       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                      6 minutes ago       Running             kube-proxy                0                   88e06a89dd6f2       kube-proxy-bvk6r
	e50ab0290e7c2       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                      6 minutes ago       Running             kube-scheduler            0                   b36f0d25b985a       kube-scheduler-ha-931571
	4572c8bcb28cd       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                      6 minutes ago       Running             kube-controller-manager   0                   9659e6073c7ae       kube-controller-manager-ha-931571
	82e4be064be10       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                      6 minutes ago       Running             kube-apiserver            0                   d779a632ccdca       kube-apiserver-ha-931571
	f2d32daf142ba       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   76529e2f353a6       etcd-ha-931571
	
	
	==> coredns [400aa38b5335627cc08143a9b2a5627b7fee85d555c2cc40a536ce98a76dc457] <==
	[INFO] 10.244.0.4:50237 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000150549s
	[INFO] 10.244.0.4:46253 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001843568s
	[INFO] 10.244.0.4:55713 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000184256s
	[INFO] 10.244.0.4:40615 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001215052s
	[INFO] 10.244.0.4:48280 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000078576s
	[INFO] 10.244.0.4:54787 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000130955s
	[INFO] 10.244.1.2:58741 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002139116s
	[INFO] 10.244.1.2:37960 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000110836s
	[INFO] 10.244.1.2:58623 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000109212s
	[INFO] 10.244.1.2:51618 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00158249s
	[INFO] 10.244.1.2:43015 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000087484s
	[INFO] 10.244.1.2:39492 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000171988s
	[INFO] 10.244.2.2:48038 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000132123s
	[INFO] 10.244.0.4:35814 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000180509s
	[INFO] 10.244.0.4:60410 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000089999s
	[INFO] 10.244.0.4:47053 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000039998s
	[INFO] 10.244.1.2:58250 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000164547s
	[INFO] 10.244.1.2:52533 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000169574s
	[INFO] 10.244.2.2:44494 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000181065s
	[INFO] 10.244.2.2:58013 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00023451s
	[INFO] 10.244.2.2:52479 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000131262s
	[INFO] 10.244.0.4:40569 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000209971s
	[INFO] 10.244.0.4:39524 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000112991s
	[INFO] 10.244.0.4:47233 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000143713s
	[INFO] 10.244.1.2:40992 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000169174s
	
	
	==> coredns [49e75724c5eada14bded615bd1ba93f602ae8bcdf4c5ed95b646991a79dc403c] <==
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:48964 - 23647 "HINFO IN 8987446281611230695.8255749056578627230. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.085188681s
	[INFO] 10.244.2.2:34961 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.003596703s
	[INFO] 10.244.0.4:37004 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.00010865s
	[INFO] 10.244.0.4:53184 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001905017s
	[INFO] 10.244.1.2:58428 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000083838s
	[INFO] 10.244.1.2:60855 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001943834s
	[INFO] 10.244.2.2:42530 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000210297s
	[INFO] 10.244.2.2:45691 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000254098s
	[INFO] 10.244.2.2:54453 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000116752s
	[INFO] 10.244.0.4:49389 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000239128s
	[INFO] 10.244.0.4:50445 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000078508s
	[INFO] 10.244.1.2:33136 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000123784s
	[INFO] 10.244.1.2:60974 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000079916s
	[INFO] 10.244.2.2:49080 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000171041s
	[INFO] 10.244.2.2:43340 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000142924s
	[INFO] 10.244.2.2:43789 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000094712s
	[INFO] 10.244.0.4:32943 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000072704s
	[INFO] 10.244.1.2:50464 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000118885s
	[INFO] 10.244.1.2:36951 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000148048s
	[INFO] 10.244.2.2:50644 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000135678s
	[INFO] 10.244.0.4:38496 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0001483s
	[INFO] 10.244.1.2:59424 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000211313s
	[INFO] 10.244.1.2:33660 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000134208s
	[INFO] 10.244.1.2:34489 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000138513s
	
	
	==> describe nodes <==
	Name:               ha-931571
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-931571
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c03dd974d73f8853a1a57928c124797a5ae24dc4
	                    minikube.k8s.io/name=ha-931571
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_11_04T10_52_59_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 04 Nov 2024 10:52:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-931571
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 04 Nov 2024 10:59:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 04 Nov 2024 10:56:02 +0000   Mon, 04 Nov 2024 10:52:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 04 Nov 2024 10:56:02 +0000   Mon, 04 Nov 2024 10:52:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 04 Nov 2024 10:56:02 +0000   Mon, 04 Nov 2024 10:52:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 04 Nov 2024 10:56:02 +0000   Mon, 04 Nov 2024 10:53:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.67
	  Hostname:    ha-931571
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 5397aa0c862f4705b75b9757490651ea
	  System UUID:                5397aa0c-862f-4705-b75b-9757490651ea
	  Boot ID:                    17751c92-c71f-4e82-afb4-12da82035155
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-nslmz              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m56s
	  kube-system                 coredns-7c65d6cfc9-5ss4v             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m23s
	  kube-system                 coredns-7c65d6cfc9-s9wb4             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m23s
	  kube-system                 etcd-ha-931571                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m28s
	  kube-system                 kindnet-2n2ws                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m23s
	  kube-system                 kube-apiserver-ha-931571             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m28s
	  kube-system                 kube-controller-manager-ha-931571    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m28s
	  kube-system                 kube-proxy-bvk6r                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m23s
	  kube-system                 kube-scheduler-ha-931571             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m28s
	  kube-system                 kube-vip-ha-931571                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m28s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m22s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m22s  kube-proxy       
	  Normal  Starting                 6m28s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m28s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m28s  kubelet          Node ha-931571 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m28s  kubelet          Node ha-931571 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m28s  kubelet          Node ha-931571 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m24s  node-controller  Node ha-931571 event: Registered Node ha-931571 in Controller
	  Normal  NodeReady                6m8s   kubelet          Node ha-931571 status is now: NodeReady
	  Normal  RegisteredNode           5m29s  node-controller  Node ha-931571 event: Registered Node ha-931571 in Controller
	  Normal  RegisteredNode           4m15s  node-controller  Node ha-931571 event: Registered Node ha-931571 in Controller
	
	
	Name:               ha-931571-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-931571-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c03dd974d73f8853a1a57928c124797a5ae24dc4
	                    minikube.k8s.io/name=ha-931571
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_11_04T10_53_52_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 04 Nov 2024 10:53:49 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-931571-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 04 Nov 2024 10:56:38 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 04 Nov 2024 10:55:52 +0000   Mon, 04 Nov 2024 10:57:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 04 Nov 2024 10:55:52 +0000   Mon, 04 Nov 2024 10:57:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 04 Nov 2024 10:55:52 +0000   Mon, 04 Nov 2024 10:57:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 04 Nov 2024 10:55:52 +0000   Mon, 04 Nov 2024 10:57:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.245
	  Hostname:    ha-931571-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 06772ff96588423e9dc77ed49845e534
	  System UUID:                06772ff9-6588-423e-9dc7-7ed49845e534
	  Boot ID:                    74d940a3-5941-40ed-b058-45da0bd2f171
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-w9wmp                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m56s
	  kube-system                 etcd-ha-931571-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m35s
	  kube-system                 kindnet-bg4z6                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m37s
	  kube-system                 kube-apiserver-ha-931571-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m36s
	  kube-system                 kube-controller-manager-ha-931571-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m36s
	  kube-system                 kube-proxy-wz92s                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m37s
	  kube-system                 kube-scheduler-ha-931571-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m36s
	  kube-system                 kube-vip-ha-931571-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m33s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m33s                  kube-proxy       
	  Normal  Starting                 5m37s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m37s (x8 over 5m37s)  kubelet          Node ha-931571-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m37s (x8 over 5m37s)  kubelet          Node ha-931571-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m37s (x7 over 5m37s)  kubelet          Node ha-931571-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m37s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m34s                  node-controller  Node ha-931571-m02 event: Registered Node ha-931571-m02 in Controller
	  Normal  RegisteredNode           5m29s                  node-controller  Node ha-931571-m02 event: Registered Node ha-931571-m02 in Controller
	  Normal  RegisteredNode           4m15s                  node-controller  Node ha-931571-m02 event: Registered Node ha-931571-m02 in Controller
	  Normal  NodeNotReady             2m5s                   node-controller  Node ha-931571-m02 status is now: NodeNotReady
	
	
	Name:               ha-931571-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-931571-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c03dd974d73f8853a1a57928c124797a5ae24dc4
	                    minikube.k8s.io/name=ha-931571
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_11_04T10_55_05_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 04 Nov 2024 10:55:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-931571-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 04 Nov 2024 10:59:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 04 Nov 2024 10:56:04 +0000   Mon, 04 Nov 2024 10:55:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 04 Nov 2024 10:56:04 +0000   Mon, 04 Nov 2024 10:55:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 04 Nov 2024 10:56:04 +0000   Mon, 04 Nov 2024 10:55:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 04 Nov 2024 10:56:04 +0000   Mon, 04 Nov 2024 10:55:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.57
	  Hostname:    ha-931571-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 b21e133cd17b4b699323cc6d9f47f565
	  System UUID:                b21e133c-d17b-4b69-9323-cc6d9f47f565
	  Boot ID:                    50ec73f3-3253-4df5-83ed-277786faa385
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-lqgb9                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m56s
	  kube-system                 etcd-ha-931571-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m21s
	  kube-system                 kindnet-w2jwt                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m23s
	  kube-system                 kube-apiserver-ha-931571-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m23s
	  kube-system                 kube-controller-manager-ha-931571-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m23s
	  kube-system                 kube-proxy-ttq4z                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m23s
	  kube-system                 kube-scheduler-ha-931571-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m23s
	  kube-system                 kube-vip-ha-931571-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m19s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m19s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  4m24s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  CIDRAssignmentFailed     4m23s                  cidrAllocator    Node ha-931571-m03 status is now: CIDRAssignmentFailed
	  Normal  NodeHasSufficientMemory  4m23s (x8 over 4m24s)  kubelet          Node ha-931571-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m23s (x8 over 4m24s)  kubelet          Node ha-931571-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m23s (x7 over 4m24s)  kubelet          Node ha-931571-m03 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m19s                  node-controller  Node ha-931571-m03 event: Registered Node ha-931571-m03 in Controller
	  Normal  RegisteredNode           4m19s                  node-controller  Node ha-931571-m03 event: Registered Node ha-931571-m03 in Controller
	  Normal  RegisteredNode           4m15s                  node-controller  Node ha-931571-m03 event: Registered Node ha-931571-m03 in Controller
	
	
	Name:               ha-931571-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-931571-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c03dd974d73f8853a1a57928c124797a5ae24dc4
	                    minikube.k8s.io/name=ha-931571
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_11_04T10_56_07_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 04 Nov 2024 10:56:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-931571-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 04 Nov 2024 10:59:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 04 Nov 2024 10:56:36 +0000   Mon, 04 Nov 2024 10:56:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 04 Nov 2024 10:56:36 +0000   Mon, 04 Nov 2024 10:56:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 04 Nov 2024 10:56:36 +0000   Mon, 04 Nov 2024 10:56:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 04 Nov 2024 10:56:36 +0000   Mon, 04 Nov 2024 10:56:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.237
	  Hostname:    ha-931571-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 851b57db90dc4e65909090eed2536ea8
	  System UUID:                851b57db-90dc-4e65-9090-90eed2536ea8
	  Boot ID:                    be99e848-d7b5-4c3a-990d-5dd7890c841c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-x8ptv       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m20s
	  kube-system                 kube-proxy-s8gg7    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m20s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m15s                  kube-proxy       
	  Normal  CIDRAssignmentFailed     3m20s                  cidrAllocator    Node ha-931571-m04 status is now: CIDRAssignmentFailed
	  Normal  NodeHasSufficientMemory  3m20s (x2 over 3m20s)  kubelet          Node ha-931571-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m20s (x2 over 3m20s)  kubelet          Node ha-931571-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m20s (x2 over 3m20s)  kubelet          Node ha-931571-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m20s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m19s                  node-controller  Node ha-931571-m04 event: Registered Node ha-931571-m04 in Controller
	  Normal  RegisteredNode           3m19s                  node-controller  Node ha-931571-m04 event: Registered Node ha-931571-m04 in Controller
	  Normal  RegisteredNode           3m15s                  node-controller  Node ha-931571-m04 event: Registered Node ha-931571-m04 in Controller
	  Normal  NodeReady                3m                     kubelet          Node ha-931571-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov 4 10:52] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.047726] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.036586] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.779631] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.763191] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.537421] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.904587] systemd-fstab-generator[579]: Ignoring "noauto" option for root device
	[  +0.060497] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.062176] systemd-fstab-generator[591]: Ignoring "noauto" option for root device
	[  +0.155966] systemd-fstab-generator[605]: Ignoring "noauto" option for root device
	[  +0.126824] systemd-fstab-generator[617]: Ignoring "noauto" option for root device
	[  +0.243725] systemd-fstab-generator[648]: Ignoring "noauto" option for root device
	[  +3.719760] systemd-fstab-generator[744]: Ignoring "noauto" option for root device
	[  +3.831679] systemd-fstab-generator[874]: Ignoring "noauto" option for root device
	[  +0.057052] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.249250] kauditd_printk_skb: 79 callbacks suppressed
	[  +0.693317] systemd-fstab-generator[1353]: Ignoring "noauto" option for root device
	[Nov 4 10:53] kauditd_printk_skb: 30 callbacks suppressed
	[  +9.046787] kauditd_printk_skb: 41 callbacks suppressed
	[ +27.005860] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [f2d32daf142ba5b73e2f0491ea4ad911e8456739f191faf1cc001827eb91790c] <==
	{"level":"warn","ts":"2024-11-04T10:59:26.685981Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce564ad586a3115","from":"ce564ad586a3115","remote-peer-id":"df641d035a901564","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-04T10:59:26.694325Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce564ad586a3115","from":"ce564ad586a3115","remote-peer-id":"df641d035a901564","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-04T10:59:26.698218Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce564ad586a3115","from":"ce564ad586a3115","remote-peer-id":"df641d035a901564","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-04T10:59:26.707803Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce564ad586a3115","from":"ce564ad586a3115","remote-peer-id":"df641d035a901564","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-04T10:59:26.713556Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce564ad586a3115","from":"ce564ad586a3115","remote-peer-id":"df641d035a901564","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-04T10:59:26.718921Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce564ad586a3115","from":"ce564ad586a3115","remote-peer-id":"df641d035a901564","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-04T10:59:26.722979Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce564ad586a3115","from":"ce564ad586a3115","remote-peer-id":"df641d035a901564","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-04T10:59:26.723394Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce564ad586a3115","from":"ce564ad586a3115","remote-peer-id":"df641d035a901564","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-04T10:59:26.726349Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce564ad586a3115","from":"ce564ad586a3115","remote-peer-id":"df641d035a901564","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-04T10:59:26.734810Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce564ad586a3115","from":"ce564ad586a3115","remote-peer-id":"df641d035a901564","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-04T10:59:26.739860Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce564ad586a3115","from":"ce564ad586a3115","remote-peer-id":"df641d035a901564","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-04T10:59:26.745449Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce564ad586a3115","from":"ce564ad586a3115","remote-peer-id":"df641d035a901564","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-04T10:59:26.748622Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce564ad586a3115","from":"ce564ad586a3115","remote-peer-id":"df641d035a901564","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-04T10:59:26.751146Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce564ad586a3115","from":"ce564ad586a3115","remote-peer-id":"df641d035a901564","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-04T10:59:26.756036Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce564ad586a3115","from":"ce564ad586a3115","remote-peer-id":"df641d035a901564","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-04T10:59:26.762910Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce564ad586a3115","from":"ce564ad586a3115","remote-peer-id":"df641d035a901564","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-04T10:59:26.764841Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce564ad586a3115","from":"ce564ad586a3115","remote-peer-id":"df641d035a901564","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-04T10:59:26.766167Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce564ad586a3115","from":"ce564ad586a3115","remote-peer-id":"df641d035a901564","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-04T10:59:26.769649Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce564ad586a3115","from":"ce564ad586a3115","remote-peer-id":"df641d035a901564","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-04T10:59:26.774300Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce564ad586a3115","from":"ce564ad586a3115","remote-peer-id":"df641d035a901564","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-04T10:59:26.777792Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce564ad586a3115","from":"ce564ad586a3115","remote-peer-id":"df641d035a901564","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-04T10:59:26.781698Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce564ad586a3115","from":"ce564ad586a3115","remote-peer-id":"df641d035a901564","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-04T10:59:26.789825Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce564ad586a3115","from":"ce564ad586a3115","remote-peer-id":"df641d035a901564","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-04T10:59:26.794963Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce564ad586a3115","from":"ce564ad586a3115","remote-peer-id":"df641d035a901564","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-04T10:59:26.823246Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce564ad586a3115","from":"ce564ad586a3115","remote-peer-id":"df641d035a901564","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 10:59:26 up 7 min,  0 users,  load average: 0.20, 0.30, 0.15
	Linux ha-931571 5.10.207 #1 SMP Wed Oct 30 13:38:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [4401315f385bf2a6e5d875c655d4d61ff68d76ccf428956355ffd500a9ce2bc0] <==
	I1104 10:58:47.933532       1 main.go:324] Node ha-931571-m03 has CIDR [10.244.2.0/24] 
	I1104 10:58:57.931888       1 main.go:297] Handling node with IPs: map[192.168.39.67:{}]
	I1104 10:58:57.931969       1 main.go:301] handling current node
	I1104 10:58:57.931997       1 main.go:297] Handling node with IPs: map[192.168.39.245:{}]
	I1104 10:58:57.932015       1 main.go:324] Node ha-931571-m02 has CIDR [10.244.1.0/24] 
	I1104 10:58:57.932703       1 main.go:297] Handling node with IPs: map[192.168.39.57:{}]
	I1104 10:58:57.932784       1 main.go:324] Node ha-931571-m03 has CIDR [10.244.2.0/24] 
	I1104 10:58:57.933003       1 main.go:297] Handling node with IPs: map[192.168.39.237:{}]
	I1104 10:58:57.933029       1 main.go:324] Node ha-931571-m04 has CIDR [10.244.3.0/24] 
	I1104 10:59:07.925895       1 main.go:297] Handling node with IPs: map[192.168.39.57:{}]
	I1104 10:59:07.925959       1 main.go:324] Node ha-931571-m03 has CIDR [10.244.2.0/24] 
	I1104 10:59:07.926150       1 main.go:297] Handling node with IPs: map[192.168.39.237:{}]
	I1104 10:59:07.926172       1 main.go:324] Node ha-931571-m04 has CIDR [10.244.3.0/24] 
	I1104 10:59:07.926258       1 main.go:297] Handling node with IPs: map[192.168.39.67:{}]
	I1104 10:59:07.926276       1 main.go:301] handling current node
	I1104 10:59:07.926287       1 main.go:297] Handling node with IPs: map[192.168.39.245:{}]
	I1104 10:59:07.926292       1 main.go:324] Node ha-931571-m02 has CIDR [10.244.1.0/24] 
	I1104 10:59:17.932116       1 main.go:297] Handling node with IPs: map[192.168.39.67:{}]
	I1104 10:59:17.932223       1 main.go:301] handling current node
	I1104 10:59:17.932253       1 main.go:297] Handling node with IPs: map[192.168.39.245:{}]
	I1104 10:59:17.932271       1 main.go:324] Node ha-931571-m02 has CIDR [10.244.1.0/24] 
	I1104 10:59:17.932486       1 main.go:297] Handling node with IPs: map[192.168.39.57:{}]
	I1104 10:59:17.932519       1 main.go:324] Node ha-931571-m03 has CIDR [10.244.2.0/24] 
	I1104 10:59:17.932614       1 main.go:297] Handling node with IPs: map[192.168.39.237:{}]
	I1104 10:59:17.932635       1 main.go:324] Node ha-931571-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [82e4be064be10644428d59bf1bc4467a8666cf78ec7b830a51e614de7c4b3150] <==
	I1104 10:52:57.529011       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1104 10:52:57.636067       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1104 10:52:58.624832       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1104 10:52:58.639937       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1104 10:52:58.805171       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1104 10:53:03.087294       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I1104 10:53:03.287753       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E1104 10:53:50.685836       1 wrap.go:53] "Timeout or abort while handling" logger="UnhandledError" method="POST" URI="/api/v1/namespaces/kube-system/events" auditID="2a13690c-2b7c-4af7-94a1-2fcd1065da04"
	E1104 10:53:50.685933       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="5.903µs" method="POST" path="/api/v1/namespaces/kube-system/events" result=null
	E1104 10:55:34.753652       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:57932: use of closed network connection
	E1104 10:55:34.925834       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:57948: use of closed network connection
	E1104 10:55:35.093653       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:57972: use of closed network connection
	E1104 10:55:35.274875       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:57992: use of closed network connection
	E1104 10:55:35.447438       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58008: use of closed network connection
	E1104 10:55:35.612882       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58018: use of closed network connection
	E1104 10:55:35.778454       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58044: use of closed network connection
	E1104 10:55:35.949313       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58070: use of closed network connection
	E1104 10:55:36.116046       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58086: use of closed network connection
	E1104 10:55:36.394559       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58120: use of closed network connection
	E1104 10:55:36.560067       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58130: use of closed network connection
	E1104 10:55:36.741903       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58146: use of closed network connection
	E1104 10:55:36.920290       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58160: use of closed network connection
	E1104 10:55:37.097281       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58172: use of closed network connection
	E1104 10:55:37.276505       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58204: use of closed network connection
	W1104 10:57:07.528371       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.57 192.168.39.67]
	
	
	==> kube-controller-manager [4572c8bcb28cdf71917ee1df07e150610c3e183aaa1243eb84ab3c083f31f7bc] <==
	I1104 10:56:02.327738       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-931571"
	I1104 10:56:04.592818       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-931571-m03"
	I1104 10:56:06.541409       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-931571-m04\" does not exist"
	I1104 10:56:06.575948       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-931571-m04" podCIDRs=["10.244.3.0/24"]
	I1104 10:56:06.576008       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-931571-m04"
	I1104 10:56:06.576040       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-931571-m04"
	I1104 10:56:06.730053       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-931571-m04"
	I1104 10:56:07.090693       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-931571-m04"
	I1104 10:56:07.683331       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-931571-m04"
	I1104 10:56:07.724925       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-931571-m04"
	I1104 10:56:11.198433       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-931571-m04"
	I1104 10:56:11.234463       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-931571-m04"
	I1104 10:56:16.862581       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-931571-m04"
	I1104 10:56:26.184815       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-931571-m04"
	I1104 10:56:26.184900       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-931571-m04"
	I1104 10:56:26.200074       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-931571-m04"
	I1104 10:56:26.386370       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-931571-m04"
	I1104 10:56:36.943150       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-931571-m04"
	I1104 10:57:21.411213       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-931571-m04"
	I1104 10:57:21.411471       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-931571-m02"
	I1104 10:57:21.433152       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-931571-m02"
	I1104 10:57:21.545878       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="11.838445ms"
	I1104 10:57:21.546123       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="64.292µs"
	I1104 10:57:22.718407       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-931571-m02"
	I1104 10:57:26.623482       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-931571-m02"
	
	
	==> kube-proxy [6e592fe17c5f71c11cf25a871efff88360e0232b8ea66e460109b68f40581ba8] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1104 10:53:04.203851       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1104 10:53:04.229581       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.67"]
	E1104 10:53:04.229781       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1104 10:53:04.282192       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1104 10:53:04.282221       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1104 10:53:04.282244       1 server_linux.go:169] "Using iptables Proxier"
	I1104 10:53:04.285593       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1104 10:53:04.285958       1 server.go:483] "Version info" version="v1.31.2"
	I1104 10:53:04.285985       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1104 10:53:04.288139       1 config.go:199] "Starting service config controller"
	I1104 10:53:04.288173       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1104 10:53:04.290392       1 config.go:105] "Starting endpoint slice config controller"
	I1104 10:53:04.290557       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1104 10:53:04.291547       1 config.go:328] "Starting node config controller"
	I1104 10:53:04.292932       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1104 10:53:04.389214       1 shared_informer.go:320] Caches are synced for service config
	I1104 10:53:04.391802       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1104 10:53:04.393273       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [e50ab0290e7c2f22e6df511793236d28a6e0f3e1d5dbdcdd2e3447e4c26c2e6c] <==
	W1104 10:52:57.001881       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1104 10:52:57.001927       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1104 10:52:57.141748       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1104 10:52:57.141796       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1104 10:52:57.201248       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1104 10:52:57.201310       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1104 10:52:58.585064       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1104 10:55:30.513828       1 cache.go:503] "Pod was added to a different node than it was assumed" podKey="641f6861-b035-49a8-832b-70b7a069afb3" pod="default/busybox-7dff88458-lqgb9" assumedNode="ha-931571-m03" currentNode="ha-931571-m02"
	E1104 10:55:30.530615       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-lqgb9\": pod busybox-7dff88458-lqgb9 is already assigned to node \"ha-931571-m03\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-lqgb9" node="ha-931571-m02"
	E1104 10:55:30.530773       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 641f6861-b035-49a8-832b-70b7a069afb3(default/busybox-7dff88458-lqgb9) was assumed on ha-931571-m02 but assigned to ha-931571-m03" pod="default/busybox-7dff88458-lqgb9"
	E1104 10:55:30.530821       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-lqgb9\": pod busybox-7dff88458-lqgb9 is already assigned to node \"ha-931571-m03\"" pod="default/busybox-7dff88458-lqgb9"
	I1104 10:55:30.530854       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-lqgb9" node="ha-931571-m03"
	E1104 10:55:30.571464       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-nslmz\": pod busybox-7dff88458-nslmz is already assigned to node \"ha-931571\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-nslmz" node="ha-931571"
	E1104 10:55:30.572521       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 68017266-8187-488d-ab36-2a5af294fa2e(default/busybox-7dff88458-nslmz) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-nslmz"
	E1104 10:55:30.572641       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-nslmz\": pod busybox-7dff88458-nslmz is already assigned to node \"ha-931571\"" pod="default/busybox-7dff88458-nslmz"
	I1104 10:55:30.572740       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-nslmz" node="ha-931571"
	E1104 10:55:30.572411       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-w9wmp\": pod busybox-7dff88458-w9wmp is already assigned to node \"ha-931571-m02\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-w9wmp" node="ha-931571-m02"
	E1104 10:55:30.573133       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 84b6e653-b685-4c00-ac2f-d650738a613b(default/busybox-7dff88458-w9wmp) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-w9wmp"
	E1104 10:55:30.573206       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-w9wmp\": pod busybox-7dff88458-w9wmp is already assigned to node \"ha-931571-m02\"" pod="default/busybox-7dff88458-w9wmp"
	I1104 10:55:30.573228       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-w9wmp" node="ha-931571-m02"
	E1104 10:55:30.792999       1 schedule_one.go:1106] "Error updating pod" err="pods \"busybox-7dff88458-5nt9m\" not found" pod="default/busybox-7dff88458-5nt9m"
	E1104 10:56:06.602004       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-s8gg7\": pod kube-proxy-s8gg7 is already assigned to node \"ha-931571-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-s8gg7" node="ha-931571-m04"
	E1104 10:56:06.602261       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod c786786d-b4b5-4479-b5df-24cc8f346e86(kube-system/kube-proxy-s8gg7) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-s8gg7"
	E1104 10:56:06.602358       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-s8gg7\": pod kube-proxy-s8gg7 is already assigned to node \"ha-931571-m04\"" pod="kube-system/kube-proxy-s8gg7"
	I1104 10:56:06.602540       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-s8gg7" node="ha-931571-m04"
	
	
	==> kubelet <==
	Nov 04 10:58:30 ha-931571 kubelet[1360]: I1104 10:58:30.785581    1360 scope.go:117] "RemoveContainer" containerID="9b0c4137e04d5572b1e0277210028adf86df482f6a6a6a6a724bf176e285ca2f"
	Nov 04 10:58:30 ha-931571 kubelet[1360]: E1104 10:58:30.785757    1360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-vip\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-vip pod=kube-vip-ha-931571_kube-system(d7bfae2f58ae7de463dba4b274c633ef)\"" pod="kube-system/kube-vip-ha-931571" podUID="d7bfae2f58ae7de463dba4b274c633ef"
	Nov 04 10:58:38 ha-931571 kubelet[1360]: E1104 10:58:38.871501    1360 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730717918871014143,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156098,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 04 10:58:38 ha-931571 kubelet[1360]: E1104 10:58:38.871524    1360 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730717918871014143,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156098,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 04 10:58:42 ha-931571 kubelet[1360]: I1104 10:58:42.786581    1360 scope.go:117] "RemoveContainer" containerID="9b0c4137e04d5572b1e0277210028adf86df482f6a6a6a6a724bf176e285ca2f"
	Nov 04 10:58:42 ha-931571 kubelet[1360]: E1104 10:58:42.791316    1360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-vip\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-vip pod=kube-vip-ha-931571_kube-system(d7bfae2f58ae7de463dba4b274c633ef)\"" pod="kube-system/kube-vip-ha-931571" podUID="d7bfae2f58ae7de463dba4b274c633ef"
	Nov 04 10:58:48 ha-931571 kubelet[1360]: E1104 10:58:48.872774    1360 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730717928872476228,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156098,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 04 10:58:48 ha-931571 kubelet[1360]: E1104 10:58:48.872859    1360 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730717928872476228,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156098,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 04 10:58:53 ha-931571 kubelet[1360]: I1104 10:58:53.785072    1360 scope.go:117] "RemoveContainer" containerID="9b0c4137e04d5572b1e0277210028adf86df482f6a6a6a6a724bf176e285ca2f"
	Nov 04 10:58:58 ha-931571 kubelet[1360]: E1104 10:58:58.819237    1360 iptables.go:577] "Could not set up iptables canary" err=<
	Nov 04 10:58:58 ha-931571 kubelet[1360]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Nov 04 10:58:58 ha-931571 kubelet[1360]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 04 10:58:58 ha-931571 kubelet[1360]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 04 10:58:58 ha-931571 kubelet[1360]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Nov 04 10:58:58 ha-931571 kubelet[1360]: E1104 10:58:58.874071    1360 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730717938873867782,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156098,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 04 10:58:58 ha-931571 kubelet[1360]: E1104 10:58:58.874093    1360 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730717938873867782,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156098,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 04 10:59:00 ha-931571 kubelet[1360]: I1104 10:59:00.144622    1360 scope.go:117] "RemoveContainer" containerID="9b0c4137e04d5572b1e0277210028adf86df482f6a6a6a6a724bf176e285ca2f"
	Nov 04 10:59:00 ha-931571 kubelet[1360]: I1104 10:59:00.145089    1360 scope.go:117] "RemoveContainer" containerID="801830521b8c68ec780508f94b0f3d8c52a6c3d5458328e719c2ce3178c47cc3"
	Nov 04 10:59:00 ha-931571 kubelet[1360]: E1104 10:59:00.145270    1360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-vip\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-vip pod=kube-vip-ha-931571_kube-system(d7bfae2f58ae7de463dba4b274c633ef)\"" pod="kube-system/kube-vip-ha-931571" podUID="d7bfae2f58ae7de463dba4b274c633ef"
	Nov 04 10:59:08 ha-931571 kubelet[1360]: E1104 10:59:08.878363    1360 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730717948875635760,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156098,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 04 10:59:08 ha-931571 kubelet[1360]: E1104 10:59:08.878627    1360 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730717948875635760,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156098,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 04 10:59:14 ha-931571 kubelet[1360]: I1104 10:59:14.786026    1360 scope.go:117] "RemoveContainer" containerID="801830521b8c68ec780508f94b0f3d8c52a6c3d5458328e719c2ce3178c47cc3"
	Nov 04 10:59:14 ha-931571 kubelet[1360]: E1104 10:59:14.786168    1360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-vip\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-vip pod=kube-vip-ha-931571_kube-system(d7bfae2f58ae7de463dba4b274c633ef)\"" pod="kube-system/kube-vip-ha-931571" podUID="d7bfae2f58ae7de463dba4b274c633ef"
	Nov 04 10:59:18 ha-931571 kubelet[1360]: E1104 10:59:18.881691    1360 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730717958881254516,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156098,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 04 10:59:18 ha-931571 kubelet[1360]: E1104 10:59:18.881729    1360 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730717958881254516,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156098,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-931571 -n ha-931571
helpers_test.go:261: (dbg) Run:  kubectl --context ha-931571 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (6.49s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (6.56s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (4.294933632s)
ha_test.go:309: expected profile "ha-931571" in json of 'profile list' to have "HAppy" status but have "Unknown" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-931571\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"ha-931571\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"kvm2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.2\",\"ClusterName\":\"ha-931571\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.39.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.39.67\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.39.245\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.39.57\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.39.237\",\"Port\":0,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"amd-gpu-device-plugin\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/home/jenkins:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-931571 -n ha-931571
helpers_test.go:244: <<< TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-931571 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-931571 logs -n 25: (1.388151887s)
helpers_test.go:252: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-931571 ssh -n                                                                 | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | ha-931571-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-931571 cp ha-931571-m03:/home/docker/cp-test.txt                              | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | ha-931571:/home/docker/cp-test_ha-931571-m03_ha-931571.txt                       |           |         |         |                     |                     |
	| ssh     | ha-931571 ssh -n                                                                 | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | ha-931571-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-931571 ssh -n ha-931571 sudo cat                                              | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | /home/docker/cp-test_ha-931571-m03_ha-931571.txt                                 |           |         |         |                     |                     |
	| cp      | ha-931571 cp ha-931571-m03:/home/docker/cp-test.txt                              | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | ha-931571-m02:/home/docker/cp-test_ha-931571-m03_ha-931571-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-931571 ssh -n                                                                 | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | ha-931571-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-931571 ssh -n ha-931571-m02 sudo cat                                          | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | /home/docker/cp-test_ha-931571-m03_ha-931571-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-931571 cp ha-931571-m03:/home/docker/cp-test.txt                              | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | ha-931571-m04:/home/docker/cp-test_ha-931571-m03_ha-931571-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-931571 ssh -n                                                                 | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | ha-931571-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-931571 ssh -n ha-931571-m04 sudo cat                                          | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | /home/docker/cp-test_ha-931571-m03_ha-931571-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-931571 cp testdata/cp-test.txt                                                | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | ha-931571-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-931571 ssh -n                                                                 | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | ha-931571-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-931571 cp ha-931571-m04:/home/docker/cp-test.txt                              | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile2369318263/001/cp-test_ha-931571-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-931571 ssh -n                                                                 | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | ha-931571-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-931571 cp ha-931571-m04:/home/docker/cp-test.txt                              | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | ha-931571:/home/docker/cp-test_ha-931571-m04_ha-931571.txt                       |           |         |         |                     |                     |
	| ssh     | ha-931571 ssh -n                                                                 | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | ha-931571-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-931571 ssh -n ha-931571 sudo cat                                              | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | /home/docker/cp-test_ha-931571-m04_ha-931571.txt                                 |           |         |         |                     |                     |
	| cp      | ha-931571 cp ha-931571-m04:/home/docker/cp-test.txt                              | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | ha-931571-m02:/home/docker/cp-test_ha-931571-m04_ha-931571-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-931571 ssh -n                                                                 | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | ha-931571-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-931571 ssh -n ha-931571-m02 sudo cat                                          | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | /home/docker/cp-test_ha-931571-m04_ha-931571-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-931571 cp ha-931571-m04:/home/docker/cp-test.txt                              | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | ha-931571-m03:/home/docker/cp-test_ha-931571-m04_ha-931571-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-931571 ssh -n                                                                 | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | ha-931571-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-931571 ssh -n ha-931571-m03 sudo cat                                          | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | /home/docker/cp-test_ha-931571-m04_ha-931571-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-931571 node stop m02 -v=7                                                     | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-931571 node start m02 -v=7                                                    | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:59 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/11/04 10:52:21
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1104 10:52:21.364935   37715 out.go:345] Setting OutFile to fd 1 ...
	I1104 10:52:21.365025   37715 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1104 10:52:21.365032   37715 out.go:358] Setting ErrFile to fd 2...
	I1104 10:52:21.365036   37715 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1104 10:52:21.365213   37715 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19906-19898/.minikube/bin
	I1104 10:52:21.365784   37715 out.go:352] Setting JSON to false
	I1104 10:52:21.366601   37715 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":5692,"bootTime":1730711849,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1104 10:52:21.366686   37715 start.go:139] virtualization: kvm guest
	I1104 10:52:21.368805   37715 out.go:177] * [ha-931571] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1104 10:52:21.370048   37715 out.go:177]   - MINIKUBE_LOCATION=19906
	I1104 10:52:21.370105   37715 notify.go:220] Checking for updates...
	I1104 10:52:21.372521   37715 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1104 10:52:21.373968   37715 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19906-19898/kubeconfig
	I1104 10:52:21.375378   37715 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19906-19898/.minikube
	I1104 10:52:21.376837   37715 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1104 10:52:21.378230   37715 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1104 10:52:21.379614   37715 driver.go:394] Setting default libvirt URI to qemu:///system
	I1104 10:52:21.414672   37715 out.go:177] * Using the kvm2 driver based on user configuration
	I1104 10:52:21.416078   37715 start.go:297] selected driver: kvm2
	I1104 10:52:21.416092   37715 start.go:901] validating driver "kvm2" against <nil>
	I1104 10:52:21.416103   37715 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1104 10:52:21.416883   37715 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1104 10:52:21.416970   37715 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19906-19898/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1104 10:52:21.432886   37715 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1104 10:52:21.432946   37715 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1104 10:52:21.433171   37715 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1104 10:52:21.433208   37715 cni.go:84] Creating CNI manager for ""
	I1104 10:52:21.433267   37715 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1104 10:52:21.433278   37715 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1104 10:52:21.433324   37715 start.go:340] cluster config:
	{Name:ha-931571 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-931571 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1104 10:52:21.433412   37715 iso.go:125] acquiring lock: {Name:mk00c7dd6e02d348844f079f7574057e15cae010 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1104 10:52:21.435216   37715 out.go:177] * Starting "ha-931571" primary control-plane node in "ha-931571" cluster
	I1104 10:52:21.436574   37715 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1104 10:52:21.436609   37715 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1104 10:52:21.436618   37715 cache.go:56] Caching tarball of preloaded images
	I1104 10:52:21.436693   37715 preload.go:172] Found /home/jenkins/minikube-integration/19906-19898/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1104 10:52:21.436705   37715 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1104 10:52:21.436992   37715 profile.go:143] Saving config to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/config.json ...
	I1104 10:52:21.437018   37715 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/config.json: {Name:mke118782614f4d89fa0f6507dfdc64c536a0e87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 10:52:21.437163   37715 start.go:360] acquireMachinesLock for ha-931571: {Name:mka4ed965095cfb58c9b0f5916edff39b4236a2f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1104 10:52:21.437221   37715 start.go:364] duration metric: took 42.218µs to acquireMachinesLock for "ha-931571"
	I1104 10:52:21.437267   37715 start.go:93] Provisioning new machine with config: &{Name:ha-931571 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-931571 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1104 10:52:21.437337   37715 start.go:125] createHost starting for "" (driver="kvm2")
	I1104 10:52:21.438936   37715 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1104 10:52:21.439063   37715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 10:52:21.439107   37715 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 10:52:21.453699   37715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36911
	I1104 10:52:21.454132   37715 main.go:141] libmachine: () Calling .GetVersion
	I1104 10:52:21.454653   37715 main.go:141] libmachine: Using API Version  1
	I1104 10:52:21.454675   37715 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 10:52:21.455002   37715 main.go:141] libmachine: () Calling .GetMachineName
	I1104 10:52:21.455150   37715 main.go:141] libmachine: (ha-931571) Calling .GetMachineName
	I1104 10:52:21.455275   37715 main.go:141] libmachine: (ha-931571) Calling .DriverName
	I1104 10:52:21.455438   37715 start.go:159] libmachine.API.Create for "ha-931571" (driver="kvm2")
	I1104 10:52:21.455470   37715 client.go:168] LocalClient.Create starting
	I1104 10:52:21.455500   37715 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem
	I1104 10:52:21.455528   37715 main.go:141] libmachine: Decoding PEM data...
	I1104 10:52:21.455541   37715 main.go:141] libmachine: Parsing certificate...
	I1104 10:52:21.455581   37715 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem
	I1104 10:52:21.455599   37715 main.go:141] libmachine: Decoding PEM data...
	I1104 10:52:21.455610   37715 main.go:141] libmachine: Parsing certificate...
	I1104 10:52:21.455624   37715 main.go:141] libmachine: Running pre-create checks...
	I1104 10:52:21.455633   37715 main.go:141] libmachine: (ha-931571) Calling .PreCreateCheck
	I1104 10:52:21.455911   37715 main.go:141] libmachine: (ha-931571) Calling .GetConfigRaw
	I1104 10:52:21.456291   37715 main.go:141] libmachine: Creating machine...
	I1104 10:52:21.456304   37715 main.go:141] libmachine: (ha-931571) Calling .Create
	I1104 10:52:21.456440   37715 main.go:141] libmachine: (ha-931571) Creating KVM machine...
	I1104 10:52:21.457741   37715 main.go:141] libmachine: (ha-931571) DBG | found existing default KVM network
	I1104 10:52:21.458392   37715 main.go:141] libmachine: (ha-931571) DBG | I1104 10:52:21.458262   37738 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0001231e0}
	I1104 10:52:21.458442   37715 main.go:141] libmachine: (ha-931571) DBG | created network xml: 
	I1104 10:52:21.458465   37715 main.go:141] libmachine: (ha-931571) DBG | <network>
	I1104 10:52:21.458474   37715 main.go:141] libmachine: (ha-931571) DBG |   <name>mk-ha-931571</name>
	I1104 10:52:21.458487   37715 main.go:141] libmachine: (ha-931571) DBG |   <dns enable='no'/>
	I1104 10:52:21.458498   37715 main.go:141] libmachine: (ha-931571) DBG |   
	I1104 10:52:21.458510   37715 main.go:141] libmachine: (ha-931571) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1104 10:52:21.458517   37715 main.go:141] libmachine: (ha-931571) DBG |     <dhcp>
	I1104 10:52:21.458526   37715 main.go:141] libmachine: (ha-931571) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1104 10:52:21.458536   37715 main.go:141] libmachine: (ha-931571) DBG |     </dhcp>
	I1104 10:52:21.458547   37715 main.go:141] libmachine: (ha-931571) DBG |   </ip>
	I1104 10:52:21.458556   37715 main.go:141] libmachine: (ha-931571) DBG |   
	I1104 10:52:21.458566   37715 main.go:141] libmachine: (ha-931571) DBG | </network>
	I1104 10:52:21.458577   37715 main.go:141] libmachine: (ha-931571) DBG | 
	I1104 10:52:21.463306   37715 main.go:141] libmachine: (ha-931571) DBG | trying to create private KVM network mk-ha-931571 192.168.39.0/24...
	I1104 10:52:21.529269   37715 main.go:141] libmachine: (ha-931571) DBG | private KVM network mk-ha-931571 192.168.39.0/24 created
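	The private libvirt network the driver just created can be inspected out-of-band with virsh; a minimal sketch, assuming virsh is available on the host running the kvm2 driver and using the network name mk-ha-931571 from the log:

	    # list all libvirt networks and confirm mk-ha-931571 is active
	    virsh net-list --all
	    # dump the XML libmachine generated for the private network
	    virsh net-dumpxml mk-ha-931571
	    # show any DHCP leases handed out on the 192.168.39.0/24 range
	    virsh net-dhcp-leases mk-ha-931571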
	I1104 10:52:21.529311   37715 main.go:141] libmachine: (ha-931571) DBG | I1104 10:52:21.529188   37738 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19906-19898/.minikube
	I1104 10:52:21.529329   37715 main.go:141] libmachine: (ha-931571) Setting up store path in /home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571 ...
	I1104 10:52:21.529347   37715 main.go:141] libmachine: (ha-931571) Building disk image from file:///home/jenkins/minikube-integration/19906-19898/.minikube/cache/iso/amd64/minikube-v1.34.0-1730282777-19883-amd64.iso
	I1104 10:52:21.529364   37715 main.go:141] libmachine: (ha-931571) Downloading /home/jenkins/minikube-integration/19906-19898/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19906-19898/.minikube/cache/iso/amd64/minikube-v1.34.0-1730282777-19883-amd64.iso...
	I1104 10:52:21.775859   37715 main.go:141] libmachine: (ha-931571) DBG | I1104 10:52:21.775727   37738 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571/id_rsa...
	I1104 10:52:21.860057   37715 main.go:141] libmachine: (ha-931571) DBG | I1104 10:52:21.859924   37738 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571/ha-931571.rawdisk...
	I1104 10:52:21.860086   37715 main.go:141] libmachine: (ha-931571) DBG | Writing magic tar header
	I1104 10:52:21.860102   37715 main.go:141] libmachine: (ha-931571) DBG | Writing SSH key tar header
	I1104 10:52:21.860115   37715 main.go:141] libmachine: (ha-931571) DBG | I1104 10:52:21.860035   37738 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571 ...
	I1104 10:52:21.860131   37715 main.go:141] libmachine: (ha-931571) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571
	I1104 10:52:21.860191   37715 main.go:141] libmachine: (ha-931571) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19906-19898/.minikube/machines
	I1104 10:52:21.860213   37715 main.go:141] libmachine: (ha-931571) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19906-19898/.minikube
	I1104 10:52:21.860225   37715 main.go:141] libmachine: (ha-931571) Setting executable bit set on /home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571 (perms=drwx------)
	I1104 10:52:21.860235   37715 main.go:141] libmachine: (ha-931571) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19906-19898
	I1104 10:52:21.860254   37715 main.go:141] libmachine: (ha-931571) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1104 10:52:21.860267   37715 main.go:141] libmachine: (ha-931571) Setting executable bit set on /home/jenkins/minikube-integration/19906-19898/.minikube/machines (perms=drwxr-xr-x)
	I1104 10:52:21.860276   37715 main.go:141] libmachine: (ha-931571) DBG | Checking permissions on dir: /home/jenkins
	I1104 10:52:21.860287   37715 main.go:141] libmachine: (ha-931571) DBG | Checking permissions on dir: /home
	I1104 10:52:21.860298   37715 main.go:141] libmachine: (ha-931571) DBG | Skipping /home - not owner
	I1104 10:52:21.860370   37715 main.go:141] libmachine: (ha-931571) Setting executable bit set on /home/jenkins/minikube-integration/19906-19898/.minikube (perms=drwxr-xr-x)
	I1104 10:52:21.860424   37715 main.go:141] libmachine: (ha-931571) Setting executable bit set on /home/jenkins/minikube-integration/19906-19898 (perms=drwxrwxr-x)
	I1104 10:52:21.860440   37715 main.go:141] libmachine: (ha-931571) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1104 10:52:21.860450   37715 main.go:141] libmachine: (ha-931571) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1104 10:52:21.860468   37715 main.go:141] libmachine: (ha-931571) Creating domain...
	I1104 10:52:21.861289   37715 main.go:141] libmachine: (ha-931571) define libvirt domain using xml: 
	I1104 10:52:21.861306   37715 main.go:141] libmachine: (ha-931571) <domain type='kvm'>
	I1104 10:52:21.861313   37715 main.go:141] libmachine: (ha-931571)   <name>ha-931571</name>
	I1104 10:52:21.861320   37715 main.go:141] libmachine: (ha-931571)   <memory unit='MiB'>2200</memory>
	I1104 10:52:21.861328   37715 main.go:141] libmachine: (ha-931571)   <vcpu>2</vcpu>
	I1104 10:52:21.861340   37715 main.go:141] libmachine: (ha-931571)   <features>
	I1104 10:52:21.861356   37715 main.go:141] libmachine: (ha-931571)     <acpi/>
	I1104 10:52:21.861372   37715 main.go:141] libmachine: (ha-931571)     <apic/>
	I1104 10:52:21.861380   37715 main.go:141] libmachine: (ha-931571)     <pae/>
	I1104 10:52:21.861396   37715 main.go:141] libmachine: (ha-931571)     
	I1104 10:52:21.861404   37715 main.go:141] libmachine: (ha-931571)   </features>
	I1104 10:52:21.861416   37715 main.go:141] libmachine: (ha-931571)   <cpu mode='host-passthrough'>
	I1104 10:52:21.861423   37715 main.go:141] libmachine: (ha-931571)   
	I1104 10:52:21.861426   37715 main.go:141] libmachine: (ha-931571)   </cpu>
	I1104 10:52:21.861433   37715 main.go:141] libmachine: (ha-931571)   <os>
	I1104 10:52:21.861437   37715 main.go:141] libmachine: (ha-931571)     <type>hvm</type>
	I1104 10:52:21.861444   37715 main.go:141] libmachine: (ha-931571)     <boot dev='cdrom'/>
	I1104 10:52:21.861448   37715 main.go:141] libmachine: (ha-931571)     <boot dev='hd'/>
	I1104 10:52:21.861452   37715 main.go:141] libmachine: (ha-931571)     <bootmenu enable='no'/>
	I1104 10:52:21.861458   37715 main.go:141] libmachine: (ha-931571)   </os>
	I1104 10:52:21.861462   37715 main.go:141] libmachine: (ha-931571)   <devices>
	I1104 10:52:21.861469   37715 main.go:141] libmachine: (ha-931571)     <disk type='file' device='cdrom'>
	I1104 10:52:21.861476   37715 main.go:141] libmachine: (ha-931571)       <source file='/home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571/boot2docker.iso'/>
	I1104 10:52:21.861488   37715 main.go:141] libmachine: (ha-931571)       <target dev='hdc' bus='scsi'/>
	I1104 10:52:21.861492   37715 main.go:141] libmachine: (ha-931571)       <readonly/>
	I1104 10:52:21.861495   37715 main.go:141] libmachine: (ha-931571)     </disk>
	I1104 10:52:21.861500   37715 main.go:141] libmachine: (ha-931571)     <disk type='file' device='disk'>
	I1104 10:52:21.861506   37715 main.go:141] libmachine: (ha-931571)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1104 10:52:21.861513   37715 main.go:141] libmachine: (ha-931571)       <source file='/home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571/ha-931571.rawdisk'/>
	I1104 10:52:21.861520   37715 main.go:141] libmachine: (ha-931571)       <target dev='hda' bus='virtio'/>
	I1104 10:52:21.861524   37715 main.go:141] libmachine: (ha-931571)     </disk>
	I1104 10:52:21.861533   37715 main.go:141] libmachine: (ha-931571)     <interface type='network'>
	I1104 10:52:21.861538   37715 main.go:141] libmachine: (ha-931571)       <source network='mk-ha-931571'/>
	I1104 10:52:21.861547   37715 main.go:141] libmachine: (ha-931571)       <model type='virtio'/>
	I1104 10:52:21.861557   37715 main.go:141] libmachine: (ha-931571)     </interface>
	I1104 10:52:21.861566   37715 main.go:141] libmachine: (ha-931571)     <interface type='network'>
	I1104 10:52:21.861571   37715 main.go:141] libmachine: (ha-931571)       <source network='default'/>
	I1104 10:52:21.861580   37715 main.go:141] libmachine: (ha-931571)       <model type='virtio'/>
	I1104 10:52:21.861584   37715 main.go:141] libmachine: (ha-931571)     </interface>
	I1104 10:52:21.861591   37715 main.go:141] libmachine: (ha-931571)     <serial type='pty'>
	I1104 10:52:21.861645   37715 main.go:141] libmachine: (ha-931571)       <target port='0'/>
	I1104 10:52:21.861685   37715 main.go:141] libmachine: (ha-931571)     </serial>
	I1104 10:52:21.861703   37715 main.go:141] libmachine: (ha-931571)     <console type='pty'>
	I1104 10:52:21.861714   37715 main.go:141] libmachine: (ha-931571)       <target type='serial' port='0'/>
	I1104 10:52:21.861735   37715 main.go:141] libmachine: (ha-931571)     </console>
	I1104 10:52:21.861744   37715 main.go:141] libmachine: (ha-931571)     <rng model='virtio'>
	I1104 10:52:21.861753   37715 main.go:141] libmachine: (ha-931571)       <backend model='random'>/dev/random</backend>
	I1104 10:52:21.861765   37715 main.go:141] libmachine: (ha-931571)     </rng>
	I1104 10:52:21.861773   37715 main.go:141] libmachine: (ha-931571)     
	I1104 10:52:21.861783   37715 main.go:141] libmachine: (ha-931571)     
	I1104 10:52:21.861791   37715 main.go:141] libmachine: (ha-931571)   </devices>
	I1104 10:52:21.861799   37715 main.go:141] libmachine: (ha-931571) </domain>
	I1104 10:52:21.861809   37715 main.go:141] libmachine: (ha-931571) 
	I1104 10:52:21.865935   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:cf:c5:1d in network default
	I1104 10:52:21.866504   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:21.866522   37715 main.go:141] libmachine: (ha-931571) Ensuring networks are active...
	I1104 10:52:21.866948   37715 main.go:141] libmachine: (ha-931571) Ensuring network default is active
	I1104 10:52:21.867232   37715 main.go:141] libmachine: (ha-931571) Ensuring network mk-ha-931571 is active
	I1104 10:52:21.867627   37715 main.go:141] libmachine: (ha-931571) Getting domain xml...
	I1104 10:52:21.868256   37715 main.go:141] libmachine: (ha-931571) Creating domain...
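	Once the domain XML above has been defined and the VM started, the same state can be checked directly with virsh; an illustrative sketch only, using the domain name ha-931571 from the log:

	    # confirm the domain exists and is running
	    virsh list --all
	    # vCPU/memory summary for the new VM (should match CPUs=2, Memory=2200MB)
	    virsh dominfo ha-931571
	    # re-dump the live domain XML to compare against what libmachine generated
	    virsh dumpxml ha-931571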
	I1104 10:52:23.049161   37715 main.go:141] libmachine: (ha-931571) Waiting to get IP...
	I1104 10:52:23.050233   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:23.050623   37715 main.go:141] libmachine: (ha-931571) DBG | unable to find current IP address of domain ha-931571 in network mk-ha-931571
	I1104 10:52:23.050643   37715 main.go:141] libmachine: (ha-931571) DBG | I1104 10:52:23.050602   37738 retry.go:31] will retry after 245.530574ms: waiting for machine to come up
	I1104 10:52:23.298185   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:23.298678   37715 main.go:141] libmachine: (ha-931571) DBG | unable to find current IP address of domain ha-931571 in network mk-ha-931571
	I1104 10:52:23.298704   37715 main.go:141] libmachine: (ha-931571) DBG | I1104 10:52:23.298589   37738 retry.go:31] will retry after 317.376406ms: waiting for machine to come up
	I1104 10:52:23.617020   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:23.617577   37715 main.go:141] libmachine: (ha-931571) DBG | unable to find current IP address of domain ha-931571 in network mk-ha-931571
	I1104 10:52:23.617605   37715 main.go:141] libmachine: (ha-931571) DBG | I1104 10:52:23.617514   37738 retry.go:31] will retry after 370.038267ms: waiting for machine to come up
	I1104 10:52:23.988831   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:23.989190   37715 main.go:141] libmachine: (ha-931571) DBG | unable to find current IP address of domain ha-931571 in network mk-ha-931571
	I1104 10:52:23.989220   37715 main.go:141] libmachine: (ha-931571) DBG | I1104 10:52:23.989148   37738 retry.go:31] will retry after 538.152632ms: waiting for machine to come up
	I1104 10:52:24.528804   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:24.529210   37715 main.go:141] libmachine: (ha-931571) DBG | unable to find current IP address of domain ha-931571 in network mk-ha-931571
	I1104 10:52:24.529252   37715 main.go:141] libmachine: (ha-931571) DBG | I1104 10:52:24.529162   37738 retry.go:31] will retry after 731.07349ms: waiting for machine to come up
	I1104 10:52:25.262048   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:25.262502   37715 main.go:141] libmachine: (ha-931571) DBG | unable to find current IP address of domain ha-931571 in network mk-ha-931571
	I1104 10:52:25.262519   37715 main.go:141] libmachine: (ha-931571) DBG | I1104 10:52:25.262462   37738 retry.go:31] will retry after 741.011273ms: waiting for machine to come up
	I1104 10:52:26.005553   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:26.005942   37715 main.go:141] libmachine: (ha-931571) DBG | unable to find current IP address of domain ha-931571 in network mk-ha-931571
	I1104 10:52:26.005976   37715 main.go:141] libmachine: (ha-931571) DBG | I1104 10:52:26.005909   37738 retry.go:31] will retry after 743.777795ms: waiting for machine to come up
	I1104 10:52:26.751254   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:26.751560   37715 main.go:141] libmachine: (ha-931571) DBG | unable to find current IP address of domain ha-931571 in network mk-ha-931571
	I1104 10:52:26.751581   37715 main.go:141] libmachine: (ha-931571) DBG | I1104 10:52:26.751519   37738 retry.go:31] will retry after 895.955115ms: waiting for machine to come up
	I1104 10:52:27.648705   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:27.649070   37715 main.go:141] libmachine: (ha-931571) DBG | unable to find current IP address of domain ha-931571 in network mk-ha-931571
	I1104 10:52:27.649096   37715 main.go:141] libmachine: (ha-931571) DBG | I1104 10:52:27.649040   37738 retry.go:31] will retry after 1.225419017s: waiting for machine to come up
	I1104 10:52:28.876413   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:28.876806   37715 main.go:141] libmachine: (ha-931571) DBG | unable to find current IP address of domain ha-931571 in network mk-ha-931571
	I1104 10:52:28.876829   37715 main.go:141] libmachine: (ha-931571) DBG | I1104 10:52:28.876782   37738 retry.go:31] will retry after 1.631823926s: waiting for machine to come up
	I1104 10:52:30.510636   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:30.511147   37715 main.go:141] libmachine: (ha-931571) DBG | unable to find current IP address of domain ha-931571 in network mk-ha-931571
	I1104 10:52:30.511177   37715 main.go:141] libmachine: (ha-931571) DBG | I1104 10:52:30.511093   37738 retry.go:31] will retry after 1.798258408s: waiting for machine to come up
	I1104 10:52:32.311067   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:32.311528   37715 main.go:141] libmachine: (ha-931571) DBG | unable to find current IP address of domain ha-931571 in network mk-ha-931571
	I1104 10:52:32.311574   37715 main.go:141] libmachine: (ha-931571) DBG | I1104 10:52:32.311491   37738 retry.go:31] will retry after 3.573429436s: waiting for machine to come up
	I1104 10:52:35.889088   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:35.889552   37715 main.go:141] libmachine: (ha-931571) DBG | unable to find current IP address of domain ha-931571 in network mk-ha-931571
	I1104 10:52:35.889578   37715 main.go:141] libmachine: (ha-931571) DBG | I1104 10:52:35.889516   37738 retry.go:31] will retry after 4.488251667s: waiting for machine to come up
	I1104 10:52:40.382173   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:40.382599   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has current primary IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:40.382621   37715 main.go:141] libmachine: (ha-931571) Found IP for machine: 192.168.39.67
	I1104 10:52:40.382633   37715 main.go:141] libmachine: (ha-931571) Reserving static IP address...
	I1104 10:52:40.383033   37715 main.go:141] libmachine: (ha-931571) DBG | unable to find host DHCP lease matching {name: "ha-931571", mac: "52:54:00:2c:cb:16", ip: "192.168.39.67"} in network mk-ha-931571
	I1104 10:52:40.452346   37715 main.go:141] libmachine: (ha-931571) DBG | Getting to WaitForSSH function...
	I1104 10:52:40.452379   37715 main.go:141] libmachine: (ha-931571) Reserved static IP address: 192.168.39.67
	I1104 10:52:40.452392   37715 main.go:141] libmachine: (ha-931571) Waiting for SSH to be available...
	I1104 10:52:40.456018   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:40.456490   37715 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:minikube Clientid:01:52:54:00:2c:cb:16}
	I1104 10:52:40.456515   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:40.456627   37715 main.go:141] libmachine: (ha-931571) DBG | Using SSH client type: external
	I1104 10:52:40.456650   37715 main.go:141] libmachine: (ha-931571) DBG | Using SSH private key: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571/id_rsa (-rw-------)
	I1104 10:52:40.456681   37715 main.go:141] libmachine: (ha-931571) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.67 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1104 10:52:40.456700   37715 main.go:141] libmachine: (ha-931571) DBG | About to run SSH command:
	I1104 10:52:40.456715   37715 main.go:141] libmachine: (ha-931571) DBG | exit 0
	I1104 10:52:40.580862   37715 main.go:141] libmachine: (ha-931571) DBG | SSH cmd err, output: <nil>: 
	I1104 10:52:40.581146   37715 main.go:141] libmachine: (ha-931571) KVM machine creation complete!
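	The retry loop above is just polling until the guest obtains a DHCP lease on mk-ha-931571; an equivalent manual check, assuming access to the same qemu:///system instance the kvm2 driver uses:

	    # poll the lease table until the domain reports an IPv4 address
	    until virsh domifaddr ha-931571 --source lease | grep -q ipv4; do
	      sleep 2
	    done
	    # print the MAC/IP pairing the log reports as 52:54:00:2c:cb:16 -> 192.168.39.67
	    virsh domifaddr ha-931571 --source lease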
	I1104 10:52:40.581410   37715 main.go:141] libmachine: (ha-931571) Calling .GetConfigRaw
	I1104 10:52:40.581936   37715 main.go:141] libmachine: (ha-931571) Calling .DriverName
	I1104 10:52:40.582130   37715 main.go:141] libmachine: (ha-931571) Calling .DriverName
	I1104 10:52:40.582294   37715 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1104 10:52:40.582307   37715 main.go:141] libmachine: (ha-931571) Calling .GetState
	I1104 10:52:40.583398   37715 main.go:141] libmachine: Detecting operating system of created instance...
	I1104 10:52:40.583412   37715 main.go:141] libmachine: Waiting for SSH to be available...
	I1104 10:52:40.583418   37715 main.go:141] libmachine: Getting to WaitForSSH function...
	I1104 10:52:40.583425   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHHostname
	I1104 10:52:40.585558   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:40.585865   37715 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 10:52:40.585891   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:40.585991   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHPort
	I1104 10:52:40.586130   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 10:52:40.586272   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 10:52:40.586383   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHUsername
	I1104 10:52:40.586519   37715 main.go:141] libmachine: Using SSH client type: native
	I1104 10:52:40.586723   37715 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I1104 10:52:40.586734   37715 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1104 10:52:40.692229   37715 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1104 10:52:40.692248   37715 main.go:141] libmachine: Detecting the provisioner...
	I1104 10:52:40.692257   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHHostname
	I1104 10:52:40.695010   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:40.695388   37715 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 10:52:40.695411   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:40.695556   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHPort
	I1104 10:52:40.695751   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 10:52:40.695899   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 10:52:40.696052   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHUsername
	I1104 10:52:40.696188   37715 main.go:141] libmachine: Using SSH client type: native
	I1104 10:52:40.696868   37715 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I1104 10:52:40.696890   37715 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1104 10:52:40.801468   37715 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1104 10:52:40.801552   37715 main.go:141] libmachine: found compatible host: buildroot
	I1104 10:52:40.801563   37715 main.go:141] libmachine: Provisioning with buildroot...
	I1104 10:52:40.801571   37715 main.go:141] libmachine: (ha-931571) Calling .GetMachineName
	I1104 10:52:40.801814   37715 buildroot.go:166] provisioning hostname "ha-931571"
	I1104 10:52:40.801836   37715 main.go:141] libmachine: (ha-931571) Calling .GetMachineName
	I1104 10:52:40.801992   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHHostname
	I1104 10:52:40.804318   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:40.804694   37715 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 10:52:40.804723   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:40.804889   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHPort
	I1104 10:52:40.805051   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 10:52:40.805262   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 10:52:40.805439   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHUsername
	I1104 10:52:40.805644   37715 main.go:141] libmachine: Using SSH client type: native
	I1104 10:52:40.805826   37715 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I1104 10:52:40.805838   37715 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-931571 && echo "ha-931571" | sudo tee /etc/hostname
	I1104 10:52:40.921516   37715 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-931571
	
	I1104 10:52:40.921540   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHHostname
	I1104 10:52:40.924174   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:40.924514   37715 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 10:52:40.924541   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:40.924675   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHPort
	I1104 10:52:40.924825   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 10:52:40.924941   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 10:52:40.925052   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHUsername
	I1104 10:52:40.925210   37715 main.go:141] libmachine: Using SSH client type: native
	I1104 10:52:40.925423   37715 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I1104 10:52:40.925448   37715 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-931571' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-931571/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-931571' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1104 10:52:41.036770   37715 main.go:141] libmachine: SSH cmd err, output: <nil>: 
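	The hostname script above is idempotent: it only touches /etc/hosts when no entry for ha-931571 exists yet. A quick verification over the same SSH path used by the test, assuming the key location from the log:

	    ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
	      -i /home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571/id_rsa \
	      docker@192.168.39.67 'hostname && grep ha-931571 /etc/hosts'
	    # expected: "ha-931571" followed by a "127.0.1.1 ha-931571" hosts entry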
	I1104 10:52:41.036799   37715 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19906-19898/.minikube CaCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19906-19898/.minikube}
	I1104 10:52:41.036830   37715 buildroot.go:174] setting up certificates
	I1104 10:52:41.036839   37715 provision.go:84] configureAuth start
	I1104 10:52:41.036848   37715 main.go:141] libmachine: (ha-931571) Calling .GetMachineName
	I1104 10:52:41.037164   37715 main.go:141] libmachine: (ha-931571) Calling .GetIP
	I1104 10:52:41.039662   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:41.040007   37715 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 10:52:41.040032   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:41.040164   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHHostname
	I1104 10:52:41.042288   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:41.042624   37715 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 10:52:41.042652   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:41.042756   37715 provision.go:143] copyHostCerts
	I1104 10:52:41.042779   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem
	I1104 10:52:41.042808   37715 exec_runner.go:144] found /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem, removing ...
	I1104 10:52:41.042823   37715 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem
	I1104 10:52:41.042880   37715 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem (1078 bytes)
	I1104 10:52:41.042955   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem
	I1104 10:52:41.042972   37715 exec_runner.go:144] found /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem, removing ...
	I1104 10:52:41.042979   37715 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem
	I1104 10:52:41.043001   37715 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem (1123 bytes)
	I1104 10:52:41.043042   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem
	I1104 10:52:41.043058   37715 exec_runner.go:144] found /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem, removing ...
	I1104 10:52:41.043064   37715 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem
	I1104 10:52:41.043084   37715 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem (1679 bytes)
	I1104 10:52:41.043133   37715 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem org=jenkins.ha-931571 san=[127.0.0.1 192.168.39.67 ha-931571 localhost minikube]
	I1104 10:52:41.275942   37715 provision.go:177] copyRemoteCerts
	I1104 10:52:41.275998   37715 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1104 10:52:41.276018   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHHostname
	I1104 10:52:41.278984   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:41.279300   37715 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 10:52:41.279324   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:41.279438   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHPort
	I1104 10:52:41.279611   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 10:52:41.279754   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHUsername
	I1104 10:52:41.279862   37715 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571/id_rsa Username:docker}
	I1104 10:52:41.362606   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1104 10:52:41.362673   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1104 10:52:41.384103   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1104 10:52:41.384170   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1104 10:52:41.405170   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1104 10:52:41.405259   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1104 10:52:41.426285   37715 provision.go:87] duration metric: took 389.43394ms to configureAuth
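	At this point the CA and the freshly generated server certificate have been copied into /etc/docker on the guest; the SANs requested in the log (127.0.0.1, 192.168.39.67, ha-931571, localhost, minikube) can be double-checked with openssl, assuming openssl is present in the guest image:

	    # on the guest: confirm the server certificate carries the expected SANs
	    sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'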
	I1104 10:52:41.426311   37715 buildroot.go:189] setting minikube options for container-runtime
	I1104 10:52:41.426499   37715 config.go:182] Loaded profile config "ha-931571": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1104 10:52:41.426580   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHHostname
	I1104 10:52:41.429219   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:41.429514   37715 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 10:52:41.429539   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:41.429751   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHPort
	I1104 10:52:41.429959   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 10:52:41.430107   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 10:52:41.430247   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHUsername
	I1104 10:52:41.430417   37715 main.go:141] libmachine: Using SSH client type: native
	I1104 10:52:41.430644   37715 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I1104 10:52:41.430666   37715 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1104 10:52:41.649262   37715 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1104 10:52:41.649291   37715 main.go:141] libmachine: Checking connection to Docker...
	I1104 10:52:41.649300   37715 main.go:141] libmachine: (ha-931571) Calling .GetURL
	I1104 10:52:41.650723   37715 main.go:141] libmachine: (ha-931571) DBG | Using libvirt version 6000000
	I1104 10:52:41.653499   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:41.653913   37715 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 10:52:41.653943   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:41.654070   37715 main.go:141] libmachine: Docker is up and running!
	I1104 10:52:41.654084   37715 main.go:141] libmachine: Reticulating splines...
	I1104 10:52:41.654091   37715 client.go:171] duration metric: took 20.198612513s to LocalClient.Create
	I1104 10:52:41.654124   37715 start.go:167] duration metric: took 20.198697894s to libmachine.API.Create "ha-931571"
	I1104 10:52:41.654168   37715 start.go:293] postStartSetup for "ha-931571" (driver="kvm2")
	I1104 10:52:41.654182   37715 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1104 10:52:41.654199   37715 main.go:141] libmachine: (ha-931571) Calling .DriverName
	I1104 10:52:41.654448   37715 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1104 10:52:41.654477   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHHostname
	I1104 10:52:41.656689   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:41.657007   37715 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 10:52:41.657028   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:41.657279   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHPort
	I1104 10:52:41.657484   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 10:52:41.657648   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHUsername
	I1104 10:52:41.657776   37715 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571/id_rsa Username:docker}
	I1104 10:52:41.738934   37715 ssh_runner.go:195] Run: cat /etc/os-release
	I1104 10:52:41.742902   37715 info.go:137] Remote host: Buildroot 2023.02.9
	I1104 10:52:41.742925   37715 filesync.go:126] Scanning /home/jenkins/minikube-integration/19906-19898/.minikube/addons for local assets ...
	I1104 10:52:41.742997   37715 filesync.go:126] Scanning /home/jenkins/minikube-integration/19906-19898/.minikube/files for local assets ...
	I1104 10:52:41.743084   37715 filesync.go:149] local asset: /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem -> 272182.pem in /etc/ssl/certs
	I1104 10:52:41.743095   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem -> /etc/ssl/certs/272182.pem
	I1104 10:52:41.743212   37715 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1104 10:52:41.752124   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem --> /etc/ssl/certs/272182.pem (1708 bytes)
	I1104 10:52:41.774335   37715 start.go:296] duration metric: took 120.149038ms for postStartSetup
	I1104 10:52:41.774411   37715 main.go:141] libmachine: (ha-931571) Calling .GetConfigRaw
	I1104 10:52:41.775008   37715 main.go:141] libmachine: (ha-931571) Calling .GetIP
	I1104 10:52:41.777422   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:41.777754   37715 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 10:52:41.777776   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:41.778012   37715 profile.go:143] Saving config to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/config.json ...
	I1104 10:52:41.778186   37715 start.go:128] duration metric: took 20.340838176s to createHost
	I1104 10:52:41.778221   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHHostname
	I1104 10:52:41.780525   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:41.780784   37715 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 10:52:41.780805   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:41.780933   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHPort
	I1104 10:52:41.781101   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 10:52:41.781264   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 10:52:41.781386   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHUsername
	I1104 10:52:41.781512   37715 main.go:141] libmachine: Using SSH client type: native
	I1104 10:52:41.781672   37715 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I1104 10:52:41.781683   37715 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1104 10:52:41.885593   37715 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730717561.859087710
	
	I1104 10:52:41.885616   37715 fix.go:216] guest clock: 1730717561.859087710
	I1104 10:52:41.885624   37715 fix.go:229] Guest: 2024-11-04 10:52:41.85908771 +0000 UTC Remote: 2024-11-04 10:52:41.778208592 +0000 UTC m=+20.449726833 (delta=80.879118ms)
	I1104 10:52:41.885647   37715 fix.go:200] guest clock delta is within tolerance: 80.879118ms
	I1104 10:52:41.885653   37715 start.go:83] releasing machines lock for "ha-931571", held for 20.448400301s
	I1104 10:52:41.885675   37715 main.go:141] libmachine: (ha-931571) Calling .DriverName
	I1104 10:52:41.885953   37715 main.go:141] libmachine: (ha-931571) Calling .GetIP
	I1104 10:52:41.888489   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:41.888887   37715 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 10:52:41.888909   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:41.889131   37715 main.go:141] libmachine: (ha-931571) Calling .DriverName
	I1104 10:52:41.889647   37715 main.go:141] libmachine: (ha-931571) Calling .DriverName
	I1104 10:52:41.889819   37715 main.go:141] libmachine: (ha-931571) Calling .DriverName
	I1104 10:52:41.889899   37715 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1104 10:52:41.889945   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHHostname
	I1104 10:52:41.890032   37715 ssh_runner.go:195] Run: cat /version.json
	I1104 10:52:41.890047   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHHostname
	I1104 10:52:41.892621   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:41.893038   37715 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 10:52:41.893065   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:41.893082   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:41.893208   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHPort
	I1104 10:52:41.893350   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 10:52:41.893498   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHUsername
	I1104 10:52:41.893582   37715 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 10:52:41.893589   37715 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571/id_rsa Username:docker}
	I1104 10:52:41.893613   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:41.893793   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHPort
	I1104 10:52:41.893936   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 10:52:41.894105   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHUsername
	I1104 10:52:41.894263   37715 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571/id_rsa Username:docker}
	I1104 10:52:41.988130   37715 ssh_runner.go:195] Run: systemctl --version
	I1104 10:52:41.993656   37715 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1104 10:52:42.142615   37715 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1104 10:52:42.148950   37715 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1104 10:52:42.149023   37715 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1104 10:52:42.163368   37715 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1104 10:52:42.163399   37715 start.go:495] detecting cgroup driver to use...
	I1104 10:52:42.163459   37715 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1104 10:52:42.178011   37715 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1104 10:52:42.190311   37715 docker.go:217] disabling cri-docker service (if available) ...
	I1104 10:52:42.190363   37715 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1104 10:52:42.202494   37715 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1104 10:52:42.215234   37715 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1104 10:52:42.322933   37715 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1104 10:52:42.465367   37715 docker.go:233] disabling docker service ...
	I1104 10:52:42.465435   37715 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1104 10:52:42.478799   37715 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1104 10:52:42.490748   37715 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1104 10:52:42.621810   37715 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1104 10:52:42.721588   37715 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1104 10:52:42.734181   37715 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1104 10:52:42.750278   37715 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1104 10:52:42.750346   37715 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 10:52:42.759509   37715 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1104 10:52:42.759569   37715 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 10:52:42.768912   37715 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 10:52:42.778275   37715 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 10:52:42.791011   37715 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1104 10:52:42.801155   37715 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 10:52:42.810365   37715 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 10:52:42.825204   37715 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
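The sed one-liners above edit /etc/crio/crio.conf.d/02-crio.conf over SSH: set the pause image, switch the cgroup manager to cgroupfs, pin conmon to the pod cgroup, and open unprivileged ports via default_sysctls. The same edits expressed as a local Go sketch (the regexes mirror the logged sed expressions; applying the result back to the node is omitted):

package main

import (
	"fmt"
	"regexp"
)

func patchCrioConf(conf string) string {
	// Point CRI-O at the pause image kubeadm expects.
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)

	// Switch the cgroup manager to cgroupfs (the kubelet config below matches it).
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)

	// Drop any existing conmon_cgroup line, then pin conmon to the pod cgroup.
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n?`).ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^cgroup_manager = .*$`).
		ReplaceAllString(conf, "$0\nconmon_cgroup = \"pod\"")

	// Let pods bind ports below 1024 without extra capabilities.
	if !regexp.MustCompile(`(?m)^ *default_sysctls`).MatchString(conf) {
		conf += "default_sysctls = [\n  \"net.ipv4.ip_unprivileged_port_start=0\",\n]\n"
	}
	return conf
}

func main() {
	sample := "pause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"systemd\"\nconmon_cgroup = \"system.slice\"\n"
	fmt.Println(patchCrioConf(sample))
}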
	I1104 10:52:42.834333   37715 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1104 10:52:42.842438   37715 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1104 10:52:42.842479   37715 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1104 10:52:42.853336   37715 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1104 10:52:42.861893   37715 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1104 10:52:42.966759   37715 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1104 10:52:43.051148   37715 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1104 10:52:43.051245   37715 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1104 10:52:43.055605   37715 start.go:563] Will wait 60s for crictl version
	I1104 10:52:43.055660   37715 ssh_runner.go:195] Run: which crictl
	I1104 10:52:43.058970   37715 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1104 10:52:43.092206   37715 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1104 10:52:43.092300   37715 ssh_runner.go:195] Run: crio --version
	I1104 10:52:43.119216   37715 ssh_runner.go:195] Run: crio --version
	I1104 10:52:43.149822   37715 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1104 10:52:43.150920   37715 main.go:141] libmachine: (ha-931571) Calling .GetIP
	I1104 10:52:43.153539   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:43.153876   37715 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 10:52:43.153903   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:52:43.154148   37715 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1104 10:52:43.157775   37715 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1104 10:52:43.169819   37715 kubeadm.go:883] updating cluster {Name:ha-931571 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 Cl
usterName:ha-931571 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.67 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mo
untType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1104 10:52:43.169924   37715 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1104 10:52:43.169983   37715 ssh_runner.go:195] Run: sudo crictl images --output json
	I1104 10:52:43.198885   37715 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1104 10:52:43.198949   37715 ssh_runner.go:195] Run: which lz4
	I1104 10:52:43.202346   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1104 10:52:43.202439   37715 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1104 10:52:43.206081   37715 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1104 10:52:43.206107   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1104 10:52:44.348916   37715 crio.go:462] duration metric: took 1.146501805s to copy over tarball
	I1104 10:52:44.348982   37715 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1104 10:52:46.326500   37715 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.97746722s)
	I1104 10:52:46.326527   37715 crio.go:469] duration metric: took 1.977583171s to extract the tarball
	I1104 10:52:46.326535   37715 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1104 10:52:46.361867   37715 ssh_runner.go:195] Run: sudo crictl images --output json
	I1104 10:52:46.402887   37715 crio.go:514] all images are preloaded for cri-o runtime.
	I1104 10:52:46.402909   37715 cache_images.go:84] Images are preloaded, skipping loading
	I1104 10:52:46.402919   37715 kubeadm.go:934] updating node { 192.168.39.67 8443 v1.31.2 crio true true} ...
	I1104 10:52:46.403024   37715 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-931571 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.67
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-931571 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
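The kubelet systemd drop-in logged above is assembled from three inputs: the Kubernetes version (which selects the binary path), the node name, and the node IP. A purely illustrative helper that reproduces it (flags and paths are copied from the log; the function itself is an assumption):

package main

import "fmt"

// kubeletDropIn renders the [Service] override written to
// /etc/systemd/system/kubelet.service.d/10-kubeadm.conf.
func kubeletDropIn(version, nodeName, nodeIP string) string {
	return fmt.Sprintf(`[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/%s/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=%s --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=%s

[Install]
`, version, nodeName, nodeIP)
}

func main() {
	fmt.Print(kubeletDropIn("v1.31.2", "ha-931571", "192.168.39.67"))
}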
	I1104 10:52:46.403102   37715 ssh_runner.go:195] Run: crio config
	I1104 10:52:46.448114   37715 cni.go:84] Creating CNI manager for ""
	I1104 10:52:46.448134   37715 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1104 10:52:46.448143   37715 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1104 10:52:46.448161   37715 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.67 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-931571 NodeName:ha-931571 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.67"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.67 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1104 10:52:46.448276   37715 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.67
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-931571"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.67"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.67"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1104 10:52:46.448297   37715 kube-vip.go:115] generating kube-vip config ...
	I1104 10:52:46.448333   37715 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1104 10:52:46.464928   37715 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1104 10:52:46.465022   37715 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.5
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
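The "auto-enabling control-plane load-balancing in kube-vip" line follows the successful modprobe of the IPVS modules just above; only then do the lb_enable/lb_port env vars appear in the generated manifest. A sketch of that decision under those assumptions (env names and the VIP 192.168.39.254 come from the log; the helper and its exact logic are illustrative):

package main

import (
	"fmt"
	"os/exec"
)

// kubeVipEnv returns the environment for the kube-vip static pod; load
// balancing of the API server is only switched on if IPVS is available.
func kubeVipEnv(vip string, apiPort int) map[string]string {
	env := map[string]string{
		"vip_arp":   "true",
		"port":      fmt.Sprint(apiPort),
		"cp_enable": "true",
		"address":   vip,
	}
	// Control-plane load balancing needs IPVS; probe the modules first.
	if err := exec.Command("sudo", "modprobe", "--all",
		"ip_vs", "ip_vs_rr", "ip_vs_wrr", "ip_vs_sh", "nf_conntrack").Run(); err == nil {
		env["lb_enable"] = "true"
		env["lb_port"] = fmt.Sprint(apiPort)
	}
	return env
}

func main() {
	fmt.Println(kubeVipEnv("192.168.39.254", 8443))
}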
	I1104 10:52:46.465069   37715 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1104 10:52:46.473864   37715 binaries.go:44] Found k8s binaries, skipping transfer
	I1104 10:52:46.473931   37715 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1104 10:52:46.482366   37715 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I1104 10:52:46.497386   37715 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1104 10:52:46.512146   37715 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2286 bytes)
	I1104 10:52:46.528415   37715 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I1104 10:52:46.544798   37715 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1104 10:52:46.548212   37715 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
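The bash one-liner above (and the identical one earlier for host.minikube.internal) keeps an /etc/hosts record current: drop any stale line for the name, then append the fresh mapping. A minimal Go sketch of the same behaviour (separator handling is an assumption; the logged command matches on a tab):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostRecord rewrites hostsPath so that exactly one line maps name to ip.
func ensureHostRecord(hostsPath, ip, name string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		trimmed := strings.TrimSpace(line)
		if strings.HasSuffix(trimmed, "\t"+name) || strings.HasSuffix(trimmed, " "+name) {
			continue // stale entry for this name
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	err := ensureHostRecord("/etc/hosts", "192.168.39.254", "control-plane.minikube.internal")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}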
	I1104 10:52:46.559488   37715 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1104 10:52:46.692494   37715 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1104 10:52:46.708806   37715 certs.go:68] Setting up /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571 for IP: 192.168.39.67
	I1104 10:52:46.708830   37715 certs.go:194] generating shared ca certs ...
	I1104 10:52:46.708849   37715 certs.go:226] acquiring lock for ca certs: {Name:mk4fb0469da6697654d0290fd25edddd463f47c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 10:52:46.709027   37715 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.key
	I1104 10:52:46.709089   37715 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.key
	I1104 10:52:46.709102   37715 certs.go:256] generating profile certs ...
	I1104 10:52:46.709156   37715 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/client.key
	I1104 10:52:46.709175   37715 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/client.crt with IP's: []
	I1104 10:52:46.835505   37715 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/client.crt ...
	I1104 10:52:46.835534   37715 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/client.crt: {Name:mk61f73d1cdbaea56c4e3a41bf4d8a8e998c4601 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 10:52:46.835713   37715 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/client.key ...
	I1104 10:52:46.835728   37715 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/client.key: {Name:mk3a1e70b98b06ffcf80cad3978790ca4b634404 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 10:52:46.835832   37715 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.key.db135e66
	I1104 10:52:46.835851   37715 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.crt.db135e66 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.67 192.168.39.254]
	I1104 10:52:46.955700   37715 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.crt.db135e66 ...
	I1104 10:52:46.955730   37715 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.crt.db135e66: {Name:mk7e52761b5f3a6915e1cf90cd8ace0ff40a1698 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 10:52:46.955903   37715 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.key.db135e66 ...
	I1104 10:52:46.955919   37715 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.key.db135e66: {Name:mk473e5ea437641c8d6be7c8c672068a3ffc879a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 10:52:46.956011   37715 certs.go:381] copying /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.crt.db135e66 -> /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.crt
	I1104 10:52:46.956221   37715 certs.go:385] copying /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.key.db135e66 -> /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.key
	I1104 10:52:46.956356   37715 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/proxy-client.key
	I1104 10:52:46.956379   37715 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/proxy-client.crt with IP's: []
	I1104 10:52:47.101236   37715 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/proxy-client.crt ...
	I1104 10:52:47.101269   37715 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/proxy-client.crt: {Name:mk407ac3d668cf899822db436da4d41618f60b31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 10:52:47.101451   37715 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/proxy-client.key ...
	I1104 10:52:47.101466   37715 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/proxy-client.key: {Name:mk67291900fae9d34a6dbb5f9ac6f9eff95090cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
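The certs.go/crypto.go lines above generate the profile's client, apiserver, and aggregator certificates, each signed by the shared minikubeCA and, for the apiserver cert, carrying the IP SANs listed in the log (10.96.0.1, 127.0.0.1, 192.168.39.67, 192.168.39.254). A standard-library-only sketch of what such a step involves; this is not minikube's crypto.go, and the throwaway CA exists only to make the example self-contained:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

// signedCertForIPs issues a certificate signed by caCert/caKey with the given IP SANs.
func signedCertForIPs(caCert *x509.Certificate, caKey *rsa.PrivateKey, cn string, ips []net.IP) ([]byte, *rsa.PrivateKey, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{CommonName: cn},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth, x509.ExtKeyUsageClientAuth},
		IPAddresses:  ips,
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		return nil, nil, err
	}
	return pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), key, nil
}

func main() {
	// Throwaway CA, only so the example runs on its own.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(10 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	ips := []net.IP{net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.67"), net.ParseIP("192.168.39.254")}
	certPEM, _, err := signedCertForIPs(caCert, caKey, "minikube", ips)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(certPEM))
}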
	I1104 10:52:47.101560   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1104 10:52:47.101583   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1104 10:52:47.101600   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1104 10:52:47.101617   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1104 10:52:47.101636   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1104 10:52:47.101656   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1104 10:52:47.101675   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1104 10:52:47.101692   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1104 10:52:47.101753   37715 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218.pem (1338 bytes)
	W1104 10:52:47.101799   37715 certs.go:480] ignoring /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218_empty.pem, impossibly tiny 0 bytes
	I1104 10:52:47.101812   37715 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem (1675 bytes)
	I1104 10:52:47.101846   37715 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem (1078 bytes)
	I1104 10:52:47.101884   37715 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem (1123 bytes)
	I1104 10:52:47.101916   37715 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem (1679 bytes)
	I1104 10:52:47.101975   37715 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem (1708 bytes)
	I1104 10:52:47.102014   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218.pem -> /usr/share/ca-certificates/27218.pem
	I1104 10:52:47.102035   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem -> /usr/share/ca-certificates/272182.pem
	I1104 10:52:47.102054   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1104 10:52:47.102621   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1104 10:52:47.126053   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1104 10:52:47.148030   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1104 10:52:47.169097   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1104 10:52:47.190790   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1104 10:52:47.211485   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1104 10:52:47.233064   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1104 10:52:47.254438   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1104 10:52:47.275584   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218.pem --> /usr/share/ca-certificates/27218.pem (1338 bytes)
	I1104 10:52:47.296496   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem --> /usr/share/ca-certificates/272182.pem (1708 bytes)
	I1104 10:52:47.316993   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1104 10:52:47.338085   37715 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1104 10:52:47.352830   37715 ssh_runner.go:195] Run: openssl version
	I1104 10:52:47.357992   37715 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/27218.pem && ln -fs /usr/share/ca-certificates/27218.pem /etc/ssl/certs/27218.pem"
	I1104 10:52:47.367171   37715 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/27218.pem
	I1104 10:52:47.371139   37715 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  4 10:49 /usr/share/ca-certificates/27218.pem
	I1104 10:52:47.371175   37715 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/27218.pem
	I1104 10:52:47.376056   37715 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/27218.pem /etc/ssl/certs/51391683.0"
	I1104 10:52:47.385217   37715 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/272182.pem && ln -fs /usr/share/ca-certificates/272182.pem /etc/ssl/certs/272182.pem"
	I1104 10:52:47.394305   37715 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/272182.pem
	I1104 10:52:47.398184   37715 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  4 10:49 /usr/share/ca-certificates/272182.pem
	I1104 10:52:47.398229   37715 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/272182.pem
	I1104 10:52:47.403221   37715 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/272182.pem /etc/ssl/certs/3ec20f2e.0"
	I1104 10:52:47.412407   37715 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1104 10:52:47.421725   37715 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1104 10:52:47.425673   37715 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  4 10:38 /usr/share/ca-certificates/minikubeCA.pem
	I1104 10:52:47.425724   37715 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1104 10:52:47.430774   37715 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
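The openssl/ln pairs above publish each CA certificate under its OpenSSL subject-hash name (for example, minikubeCA.pem becomes /etc/ssl/certs/b5213941.0) so that OpenSSL-based clients can find it. A simplified sketch of that step, shelling out to the same openssl invocation seen in the log (the helper and the direct symlink source are assumptions):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCACert links certPath into /etc/ssl/certs under its subject hash.
func installCACert(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // replace a stale link if present
	return os.Symlink(certPath, link)
}

func main() {
	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}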
	I1104 10:52:47.442891   37715 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1104 10:52:47.448916   37715 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1104 10:52:47.448963   37715 kubeadm.go:392] StartCluster: {Name:ha-931571 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 Clust
erName:ha-931571 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.67 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mount
Type:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1104 10:52:47.449026   37715 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1104 10:52:47.449081   37715 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1104 10:52:47.493313   37715 cri.go:89] found id: ""
	I1104 10:52:47.493388   37715 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1104 10:52:47.505853   37715 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1104 10:52:47.514358   37715 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1104 10:52:47.522614   37715 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1104 10:52:47.522633   37715 kubeadm.go:157] found existing configuration files:
	
	I1104 10:52:47.522685   37715 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1104 10:52:47.530458   37715 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1104 10:52:47.530497   37715 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1104 10:52:47.538766   37715 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1104 10:52:47.546614   37715 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1104 10:52:47.546656   37715 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1104 10:52:47.554873   37715 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1104 10:52:47.562800   37715 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1104 10:52:47.562860   37715 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1104 10:52:47.571095   37715 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1104 10:52:47.578946   37715 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1104 10:52:47.578986   37715 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
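The grep/rm pairs above are the "stale config cleanup": each existing kubeconfig under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443, otherwise it is removed so kubeadm can regenerate it. The same idea as a sketch (the helper name and loop are assumptions; the paths and endpoint come from the log):

package main

import (
	"fmt"
	"os"
	"strings"
)

// pruneStaleKubeconfigs deletes any config that is missing the expected endpoint.
func pruneStaleKubeconfigs(endpoint string, paths []string) {
	for _, p := range paths {
		data, err := os.ReadFile(p)
		if err != nil || !strings.Contains(string(data), endpoint) {
			os.Remove(p) // missing or pointing elsewhere: let kubeadm rewrite it
			fmt.Printf("removed %s\n", p)
		}
	}
}

func main() {
	pruneStaleKubeconfigs("https://control-plane.minikube.internal:8443", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}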
	I1104 10:52:47.587002   37715 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1104 10:52:47.774250   37715 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1104 10:52:59.162857   37715 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1104 10:52:59.162909   37715 kubeadm.go:310] [preflight] Running pre-flight checks
	I1104 10:52:59.162992   37715 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1104 10:52:59.163126   37715 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1104 10:52:59.163235   37715 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1104 10:52:59.163321   37715 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1104 10:52:59.164884   37715 out.go:235]   - Generating certificates and keys ...
	I1104 10:52:59.164965   37715 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1104 10:52:59.165051   37715 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1104 10:52:59.165154   37715 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1104 10:52:59.165262   37715 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1104 10:52:59.165355   37715 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1104 10:52:59.165433   37715 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1104 10:52:59.165512   37715 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1104 10:52:59.165644   37715 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-931571 localhost] and IPs [192.168.39.67 127.0.0.1 ::1]
	I1104 10:52:59.165719   37715 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1104 10:52:59.165854   37715 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-931571 localhost] and IPs [192.168.39.67 127.0.0.1 ::1]
	I1104 10:52:59.165939   37715 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1104 10:52:59.166039   37715 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1104 10:52:59.166120   37715 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1104 10:52:59.166198   37715 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1104 10:52:59.166277   37715 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1104 10:52:59.166352   37715 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1104 10:52:59.166437   37715 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1104 10:52:59.166524   37715 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1104 10:52:59.166602   37715 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1104 10:52:59.166715   37715 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1104 10:52:59.166813   37715 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1104 10:52:59.168314   37715 out.go:235]   - Booting up control plane ...
	I1104 10:52:59.168430   37715 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1104 10:52:59.168528   37715 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1104 10:52:59.168619   37715 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1104 10:52:59.168745   37715 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1104 10:52:59.168864   37715 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1104 10:52:59.168907   37715 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1104 10:52:59.169020   37715 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1104 10:52:59.169142   37715 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1104 10:52:59.169244   37715 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.501850183s
	I1104 10:52:59.169346   37715 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1104 10:52:59.169435   37715 kubeadm.go:310] [api-check] The API server is healthy after 5.721436597s
	I1104 10:52:59.169568   37715 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1104 10:52:59.169699   37715 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1104 10:52:59.169786   37715 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1104 10:52:59.169979   37715 kubeadm.go:310] [mark-control-plane] Marking the node ha-931571 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1104 10:52:59.170060   37715 kubeadm.go:310] [bootstrap-token] Using token: x3krps.xtycqe6w7psx61o7
	I1104 10:52:59.171278   37715 out.go:235]   - Configuring RBAC rules ...
	I1104 10:52:59.171366   37715 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1104 10:52:59.171442   37715 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1104 10:52:59.171566   37715 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1104 10:52:59.171689   37715 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1104 10:52:59.171828   37715 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1104 10:52:59.171935   37715 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1104 10:52:59.172086   37715 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1104 10:52:59.172158   37715 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1104 10:52:59.172220   37715 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1104 10:52:59.172232   37715 kubeadm.go:310] 
	I1104 10:52:59.172322   37715 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1104 10:52:59.172332   37715 kubeadm.go:310] 
	I1104 10:52:59.172461   37715 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1104 10:52:59.172471   37715 kubeadm.go:310] 
	I1104 10:52:59.172512   37715 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1104 10:52:59.172591   37715 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1104 10:52:59.172657   37715 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1104 10:52:59.172671   37715 kubeadm.go:310] 
	I1104 10:52:59.172727   37715 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1104 10:52:59.172733   37715 kubeadm.go:310] 
	I1104 10:52:59.172772   37715 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1104 10:52:59.172780   37715 kubeadm.go:310] 
	I1104 10:52:59.172823   37715 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1104 10:52:59.172919   37715 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1104 10:52:59.173013   37715 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1104 10:52:59.173027   37715 kubeadm.go:310] 
	I1104 10:52:59.173126   37715 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1104 10:52:59.173242   37715 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1104 10:52:59.173250   37715 kubeadm.go:310] 
	I1104 10:52:59.173349   37715 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token x3krps.xtycqe6w7psx61o7 \
	I1104 10:52:59.173475   37715 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:95833ff41306ac800b975e28f2da3aedc13f83db6865dfb0ce3f10d781d75fd9 \
	I1104 10:52:59.173512   37715 kubeadm.go:310] 	--control-plane 
	I1104 10:52:59.173521   37715 kubeadm.go:310] 
	I1104 10:52:59.173615   37715 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1104 10:52:59.173622   37715 kubeadm.go:310] 
	I1104 10:52:59.173728   37715 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token x3krps.xtycqe6w7psx61o7 \
	I1104 10:52:59.173851   37715 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:95833ff41306ac800b975e28f2da3aedc13f83db6865dfb0ce3f10d781d75fd9 
	I1104 10:52:59.173864   37715 cni.go:84] Creating CNI manager for ""
	I1104 10:52:59.173870   37715 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1104 10:52:59.175270   37715 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1104 10:52:59.176515   37715 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1104 10:52:59.181311   37715 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.2/kubectl ...
	I1104 10:52:59.181330   37715 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1104 10:52:59.199374   37715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1104 10:52:59.595605   37715 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1104 10:52:59.595735   37715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1104 10:52:59.595746   37715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-931571 minikube.k8s.io/updated_at=2024_11_04T10_52_59_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=c03dd974d73f8853a1a57928c124797a5ae24dc4 minikube.k8s.io/name=ha-931571 minikube.k8s.io/primary=true
	I1104 10:52:59.607016   37715 ops.go:34] apiserver oom_adj: -16
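The "apiserver oom_adj: -16" value is obtained by the bash one-liner above, which reads the kube-apiserver process's oom_adj from /proc. The same read as a Go sketch (pgrep plus a file read; the helper is illustrative):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// apiserverOOMAdj returns the oom_adj of the first kube-apiserver process found.
func apiserverOOMAdj() (string, error) {
	out, err := exec.Command("pgrep", "kube-apiserver").Output()
	if err != nil {
		return "", err
	}
	pid := strings.Fields(string(out))[0] // first matching PID
	data, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	adj, err := apiserverOOMAdj()
	fmt.Println(adj, err)
}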
	I1104 10:52:59.726325   37715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1104 10:53:00.227237   37715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1104 10:53:00.727360   37715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1104 10:53:01.226637   37715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1104 10:53:01.727035   37715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1104 10:53:02.226405   37715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1104 10:53:02.727470   37715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1104 10:53:03.227029   37715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1104 10:53:03.337760   37715 kubeadm.go:1113] duration metric: took 3.742086638s to wait for elevateKubeSystemPrivileges
	I1104 10:53:03.337799   37715 kubeadm.go:394] duration metric: took 15.888837987s to StartCluster
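The repeated `kubectl get sa default` runs above, roughly every 500ms, are minikube waiting for the default ServiceAccount to exist before binding kube-system to cluster-admin (the "elevateKubeSystemPrivileges" metric). A sketch of such a poll; the loop and helper are assumptions, while the command strings come straight from the log:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultServiceAccount retries `kubectl get sa default` until it succeeds
// or the timeout expires.
func waitForDefaultServiceAccount(kubectl, kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		err := exec.Command("sudo", kubectl, "get", "sa", "default",
			"--kubeconfig="+kubeconfig).Run()
		if err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account not ready after %s", timeout)
}

func main() {
	err := waitForDefaultServiceAccount(
		"/var/lib/minikube/binaries/v1.31.2/kubectl",
		"/var/lib/minikube/kubeconfig",
		2*time.Minute,
	)
	fmt.Println(err)
}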
	I1104 10:53:03.337821   37715 settings.go:142] acquiring lock: {Name:mk5550833c1f1a4ab4fbb2bf42ff8bd7f6341220 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 10:53:03.337905   37715 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19906-19898/kubeconfig
	I1104 10:53:03.338737   37715 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19906-19898/kubeconfig: {Name:mk164657c178e6abe086acb5c1cadb969cb2b0cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 10:53:03.338982   37715 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1104 10:53:03.338988   37715 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.67 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1104 10:53:03.339014   37715 start.go:241] waiting for startup goroutines ...
	I1104 10:53:03.339062   37715 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1104 10:53:03.339167   37715 addons.go:69] Setting default-storageclass=true in profile "ha-931571"
	I1104 10:53:03.339173   37715 addons.go:69] Setting storage-provisioner=true in profile "ha-931571"
	I1104 10:53:03.339185   37715 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-931571"
	I1104 10:53:03.339200   37715 addons.go:234] Setting addon storage-provisioner=true in "ha-931571"
	I1104 10:53:03.339229   37715 config.go:182] Loaded profile config "ha-931571": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1104 10:53:03.339239   37715 host.go:66] Checking if "ha-931571" exists ...
	I1104 10:53:03.339632   37715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 10:53:03.339672   37715 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 10:53:03.339677   37715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 10:53:03.339713   37715 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 10:53:03.360893   37715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40211
	I1104 10:53:03.360926   37715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37163
	I1104 10:53:03.361436   37715 main.go:141] libmachine: () Calling .GetVersion
	I1104 10:53:03.361473   37715 main.go:141] libmachine: () Calling .GetVersion
	I1104 10:53:03.361990   37715 main.go:141] libmachine: Using API Version  1
	I1104 10:53:03.362007   37715 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 10:53:03.362132   37715 main.go:141] libmachine: Using API Version  1
	I1104 10:53:03.362158   37715 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 10:53:03.362362   37715 main.go:141] libmachine: () Calling .GetMachineName
	I1104 10:53:03.362495   37715 main.go:141] libmachine: () Calling .GetMachineName
	I1104 10:53:03.362668   37715 main.go:141] libmachine: (ha-931571) Calling .GetState
	I1104 10:53:03.362891   37715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 10:53:03.362932   37715 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 10:53:03.365045   37715 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19906-19898/kubeconfig
	I1104 10:53:03.365435   37715 kapi.go:59] client config for ha-931571: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/client.crt", KeyFile:"/home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/client.key", CAFile:"/home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)
}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2437da0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1104 10:53:03.365987   37715 cert_rotation.go:140] Starting client certificate rotation controller
	I1104 10:53:03.366272   37715 addons.go:234] Setting addon default-storageclass=true in "ha-931571"
	I1104 10:53:03.366318   37715 host.go:66] Checking if "ha-931571" exists ...
	I1104 10:53:03.366699   37715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 10:53:03.366738   37715 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 10:53:03.381218   37715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35443
	I1104 10:53:03.381322   37715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38027
	I1104 10:53:03.381713   37715 main.go:141] libmachine: () Calling .GetVersion
	I1104 10:53:03.381719   37715 main.go:141] libmachine: () Calling .GetVersion
	I1104 10:53:03.382205   37715 main.go:141] libmachine: Using API Version  1
	I1104 10:53:03.382227   37715 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 10:53:03.382357   37715 main.go:141] libmachine: Using API Version  1
	I1104 10:53:03.382372   37715 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 10:53:03.382534   37715 main.go:141] libmachine: () Calling .GetMachineName
	I1104 10:53:03.383016   37715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 10:53:03.383048   37715 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 10:53:03.383535   37715 main.go:141] libmachine: () Calling .GetMachineName
	I1104 10:53:03.383708   37715 main.go:141] libmachine: (ha-931571) Calling .GetState
	I1104 10:53:03.385592   37715 main.go:141] libmachine: (ha-931571) Calling .DriverName
	I1104 10:53:03.387622   37715 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1104 10:53:03.388963   37715 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1104 10:53:03.388985   37715 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1104 10:53:03.389004   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHHostname
	I1104 10:53:03.392017   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:53:03.392435   37715 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 10:53:03.392480   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:53:03.392570   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHPort
	I1104 10:53:03.392752   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 10:53:03.392874   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHUsername
	I1104 10:53:03.393020   37715 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571/id_rsa Username:docker}
	I1104 10:53:03.398269   37715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34587
	I1104 10:53:03.398748   37715 main.go:141] libmachine: () Calling .GetVersion
	I1104 10:53:03.399262   37715 main.go:141] libmachine: Using API Version  1
	I1104 10:53:03.399294   37715 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 10:53:03.399614   37715 main.go:141] libmachine: () Calling .GetMachineName
	I1104 10:53:03.399786   37715 main.go:141] libmachine: (ha-931571) Calling .GetState
	I1104 10:53:03.401287   37715 main.go:141] libmachine: (ha-931571) Calling .DriverName
	I1104 10:53:03.401486   37715 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1104 10:53:03.401502   37715 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1104 10:53:03.401529   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHHostname
	I1104 10:53:03.404218   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:53:03.404573   37715 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 10:53:03.404595   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:53:03.404677   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHPort
	I1104 10:53:03.404848   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 10:53:03.404981   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHUsername
	I1104 10:53:03.405135   37715 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571/id_rsa Username:docker}
	I1104 10:53:03.489842   37715 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1104 10:53:03.554612   37715 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1104 10:53:03.583845   37715 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1104 10:53:03.952361   37715 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
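
The ssh_runner command at 10:53:03.489 above pipes the coredns ConfigMap through sed so that host.minikube.internal resolves to the host IP, and the start.go line confirms the record was injected. A minimal Go sketch of the same Corefile edit follows; the function name and the sample Corefile are illustrative, not taken from the minikube sources.

package main

import (
	"fmt"
	"strings"
)

// injectHostRecord inserts a CoreDNS "hosts" block in front of the forward
// directive so that host.minikube.internal resolves to hostIP, approximating
// the sed edit run against the coredns ConfigMap in the log above.
func injectHostRecord(corefile, hostIP string) string {
	hostsBlock := fmt.Sprintf(
		"        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)

	var out strings.Builder
	for _, line := range strings.Split(corefile, "\n") {
		if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
			out.WriteString(hostsBlock)
		}
		out.WriteString(line)
		out.WriteString("\n")
	}
	return strings.TrimSuffix(out.String(), "\n")
}

func main() {
	// Hypothetical Corefile fragment for demonstration only.
	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf\n        cache 30\n}"
	fmt.Println(injectHostRecord(corefile, "192.168.39.1"))
}
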
	I1104 10:53:03.952436   37715 main.go:141] libmachine: Making call to close driver server
	I1104 10:53:03.952460   37715 main.go:141] libmachine: (ha-931571) Calling .Close
	I1104 10:53:03.952742   37715 main.go:141] libmachine: Successfully made call to close driver server
	I1104 10:53:03.952762   37715 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 10:53:03.952762   37715 main.go:141] libmachine: (ha-931571) DBG | Closing plugin on server side
	I1104 10:53:03.952772   37715 main.go:141] libmachine: Making call to close driver server
	I1104 10:53:03.952781   37715 main.go:141] libmachine: (ha-931571) Calling .Close
	I1104 10:53:03.952966   37715 main.go:141] libmachine: Successfully made call to close driver server
	I1104 10:53:03.952981   37715 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 10:53:03.953045   37715 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1104 10:53:03.953065   37715 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1104 10:53:03.953164   37715 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I1104 10:53:03.953175   37715 round_trippers.go:469] Request Headers:
	I1104 10:53:03.953187   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:53:03.953195   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:53:03.960797   37715 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1104 10:53:03.961342   37715 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1104 10:53:03.961355   37715 round_trippers.go:469] Request Headers:
	I1104 10:53:03.961363   37715 round_trippers.go:473]     Content-Type: application/json
	I1104 10:53:03.961367   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:53:03.961369   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:53:03.963493   37715 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1104 10:53:03.963694   37715 main.go:141] libmachine: Making call to close driver server
	I1104 10:53:03.963715   37715 main.go:141] libmachine: (ha-931571) Calling .Close
	I1104 10:53:03.964004   37715 main.go:141] libmachine: Successfully made call to close driver server
	I1104 10:53:03.964021   37715 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 10:53:03.964021   37715 main.go:141] libmachine: (ha-931571) DBG | Closing plugin on server side
	I1104 10:53:04.222705   37715 main.go:141] libmachine: Making call to close driver server
	I1104 10:53:04.222735   37715 main.go:141] libmachine: (ha-931571) Calling .Close
	I1104 10:53:04.223063   37715 main.go:141] libmachine: (ha-931571) DBG | Closing plugin on server side
	I1104 10:53:04.223090   37715 main.go:141] libmachine: Successfully made call to close driver server
	I1104 10:53:04.223120   37715 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 10:53:04.223137   37715 main.go:141] libmachine: Making call to close driver server
	I1104 10:53:04.223149   37715 main.go:141] libmachine: (ha-931571) Calling .Close
	I1104 10:53:04.223361   37715 main.go:141] libmachine: Successfully made call to close driver server
	I1104 10:53:04.223375   37715 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 10:53:04.225261   37715 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I1104 10:53:04.226730   37715 addons.go:510] duration metric: took 887.697522ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I1104 10:53:04.226762   37715 start.go:246] waiting for cluster config update ...
	I1104 10:53:04.226778   37715 start.go:255] writing updated cluster config ...
	I1104 10:53:04.228532   37715 out.go:201] 
	I1104 10:53:04.229911   37715 config.go:182] Loaded profile config "ha-931571": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1104 10:53:04.229982   37715 profile.go:143] Saving config to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/config.json ...
	I1104 10:53:04.231623   37715 out.go:177] * Starting "ha-931571-m02" control-plane node in "ha-931571" cluster
	I1104 10:53:04.233345   37715 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1104 10:53:04.233368   37715 cache.go:56] Caching tarball of preloaded images
	I1104 10:53:04.233465   37715 preload.go:172] Found /home/jenkins/minikube-integration/19906-19898/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1104 10:53:04.233476   37715 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1104 10:53:04.233547   37715 profile.go:143] Saving config to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/config.json ...
	I1104 10:53:04.233880   37715 start.go:360] acquireMachinesLock for ha-931571-m02: {Name:mka4ed965095cfb58c9b0f5916edff39b4236a2f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1104 10:53:04.233922   37715 start.go:364] duration metric: took 22.549µs to acquireMachinesLock for "ha-931571-m02"
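
acquireMachinesLock above serializes machine creation for the profile, polling with a 500ms delay and a 13m timeout before giving up. The sketch below shows the general shape of such a lock using bare O_EXCL file creation; it is an approximation under stated assumptions (the lock path is made up, and the real code uses a proper cross-process mutex rather than a lock file).

package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

// acquire tries to create lockPath exclusively, polling every delay until
// timeout expires. It returns a release function on success. This is only a
// rough stand-in for the machines lock taken in the log above.
func acquire(lockPath string, delay, timeout time.Duration) (func(), error) {
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(lockPath, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			f.Close()
			return func() { os.Remove(lockPath) }, nil
		}
		if time.Now().After(deadline) {
			return nil, errors.New("timed out acquiring " + lockPath)
		}
		time.Sleep(delay)
	}
}

func main() {
	release, err := acquire("/tmp/example-machines.lock", 500*time.Millisecond, 13*time.Minute)
	if err != nil {
		panic(err)
	}
	defer release()
	fmt.Println("lock held; machine provisioning could proceed here")
}
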
	I1104 10:53:04.233935   37715 start.go:93] Provisioning new machine with config: &{Name:ha-931571 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-931571 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.67 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1104 10:53:04.234001   37715 start.go:125] createHost starting for "m02" (driver="kvm2")
	I1104 10:53:04.235719   37715 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1104 10:53:04.235815   37715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 10:53:04.235858   37715 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 10:53:04.250864   37715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34407
	I1104 10:53:04.251327   37715 main.go:141] libmachine: () Calling .GetVersion
	I1104 10:53:04.251891   37715 main.go:141] libmachine: Using API Version  1
	I1104 10:53:04.251920   37715 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 10:53:04.252265   37715 main.go:141] libmachine: () Calling .GetMachineName
	I1104 10:53:04.252475   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetMachineName
	I1104 10:53:04.252609   37715 main.go:141] libmachine: (ha-931571-m02) Calling .DriverName
	I1104 10:53:04.252797   37715 start.go:159] libmachine.API.Create for "ha-931571" (driver="kvm2")
	I1104 10:53:04.252829   37715 client.go:168] LocalClient.Create starting
	I1104 10:53:04.252866   37715 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem
	I1104 10:53:04.252907   37715 main.go:141] libmachine: Decoding PEM data...
	I1104 10:53:04.252928   37715 main.go:141] libmachine: Parsing certificate...
	I1104 10:53:04.252995   37715 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem
	I1104 10:53:04.253023   37715 main.go:141] libmachine: Decoding PEM data...
	I1104 10:53:04.253038   37715 main.go:141] libmachine: Parsing certificate...
	I1104 10:53:04.253066   37715 main.go:141] libmachine: Running pre-create checks...
	I1104 10:53:04.253077   37715 main.go:141] libmachine: (ha-931571-m02) Calling .PreCreateCheck
	I1104 10:53:04.253220   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetConfigRaw
	I1104 10:53:04.253654   37715 main.go:141] libmachine: Creating machine...
	I1104 10:53:04.253672   37715 main.go:141] libmachine: (ha-931571-m02) Calling .Create
	I1104 10:53:04.253800   37715 main.go:141] libmachine: (ha-931571-m02) Creating KVM machine...
	I1104 10:53:04.254992   37715 main.go:141] libmachine: (ha-931571-m02) DBG | found existing default KVM network
	I1104 10:53:04.255150   37715 main.go:141] libmachine: (ha-931571-m02) DBG | found existing private KVM network mk-ha-931571
	I1104 10:53:04.255299   37715 main.go:141] libmachine: (ha-931571-m02) Setting up store path in /home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571-m02 ...
	I1104 10:53:04.255322   37715 main.go:141] libmachine: (ha-931571-m02) Building disk image from file:///home/jenkins/minikube-integration/19906-19898/.minikube/cache/iso/amd64/minikube-v1.34.0-1730282777-19883-amd64.iso
	I1104 10:53:04.255385   37715 main.go:141] libmachine: (ha-931571-m02) DBG | I1104 10:53:04.255280   38069 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19906-19898/.minikube
	I1104 10:53:04.255479   37715 main.go:141] libmachine: (ha-931571-m02) Downloading /home/jenkins/minikube-integration/19906-19898/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19906-19898/.minikube/cache/iso/amd64/minikube-v1.34.0-1730282777-19883-amd64.iso...
	I1104 10:53:04.500647   37715 main.go:141] libmachine: (ha-931571-m02) DBG | I1104 10:53:04.500534   38069 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571-m02/id_rsa...
	I1104 10:53:04.797066   37715 main.go:141] libmachine: (ha-931571-m02) DBG | I1104 10:53:04.796939   38069 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571-m02/ha-931571-m02.rawdisk...
	I1104 10:53:04.797094   37715 main.go:141] libmachine: (ha-931571-m02) DBG | Writing magic tar header
	I1104 10:53:04.797104   37715 main.go:141] libmachine: (ha-931571-m02) DBG | Writing SSH key tar header
	I1104 10:53:04.797111   37715 main.go:141] libmachine: (ha-931571-m02) DBG | I1104 10:53:04.797059   38069 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571-m02 ...
	I1104 10:53:04.797220   37715 main.go:141] libmachine: (ha-931571-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571-m02
	I1104 10:53:04.797261   37715 main.go:141] libmachine: (ha-931571-m02) Setting executable bit set on /home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571-m02 (perms=drwx------)
	I1104 10:53:04.797271   37715 main.go:141] libmachine: (ha-931571-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19906-19898/.minikube/machines
	I1104 10:53:04.797289   37715 main.go:141] libmachine: (ha-931571-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19906-19898/.minikube
	I1104 10:53:04.797298   37715 main.go:141] libmachine: (ha-931571-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19906-19898
	I1104 10:53:04.797310   37715 main.go:141] libmachine: (ha-931571-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1104 10:53:04.797318   37715 main.go:141] libmachine: (ha-931571-m02) DBG | Checking permissions on dir: /home/jenkins
	I1104 10:53:04.797331   37715 main.go:141] libmachine: (ha-931571-m02) DBG | Checking permissions on dir: /home
	I1104 10:53:04.797349   37715 main.go:141] libmachine: (ha-931571-m02) Setting executable bit set on /home/jenkins/minikube-integration/19906-19898/.minikube/machines (perms=drwxr-xr-x)
	I1104 10:53:04.797357   37715 main.go:141] libmachine: (ha-931571-m02) DBG | Skipping /home - not owner
	I1104 10:53:04.797376   37715 main.go:141] libmachine: (ha-931571-m02) Setting executable bit set on /home/jenkins/minikube-integration/19906-19898/.minikube (perms=drwxr-xr-x)
	I1104 10:53:04.797389   37715 main.go:141] libmachine: (ha-931571-m02) Setting executable bit set on /home/jenkins/minikube-integration/19906-19898 (perms=drwxrwxr-x)
	I1104 10:53:04.797401   37715 main.go:141] libmachine: (ha-931571-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1104 10:53:04.797412   37715 main.go:141] libmachine: (ha-931571-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1104 10:53:04.797440   37715 main.go:141] libmachine: (ha-931571-m02) Creating domain...
	I1104 10:53:04.798407   37715 main.go:141] libmachine: (ha-931571-m02) define libvirt domain using xml: 
	I1104 10:53:04.798425   37715 main.go:141] libmachine: (ha-931571-m02) <domain type='kvm'>
	I1104 10:53:04.798436   37715 main.go:141] libmachine: (ha-931571-m02)   <name>ha-931571-m02</name>
	I1104 10:53:04.798449   37715 main.go:141] libmachine: (ha-931571-m02)   <memory unit='MiB'>2200</memory>
	I1104 10:53:04.798465   37715 main.go:141] libmachine: (ha-931571-m02)   <vcpu>2</vcpu>
	I1104 10:53:04.798472   37715 main.go:141] libmachine: (ha-931571-m02)   <features>
	I1104 10:53:04.798477   37715 main.go:141] libmachine: (ha-931571-m02)     <acpi/>
	I1104 10:53:04.798481   37715 main.go:141] libmachine: (ha-931571-m02)     <apic/>
	I1104 10:53:04.798486   37715 main.go:141] libmachine: (ha-931571-m02)     <pae/>
	I1104 10:53:04.798492   37715 main.go:141] libmachine: (ha-931571-m02)     
	I1104 10:53:04.798498   37715 main.go:141] libmachine: (ha-931571-m02)   </features>
	I1104 10:53:04.798502   37715 main.go:141] libmachine: (ha-931571-m02)   <cpu mode='host-passthrough'>
	I1104 10:53:04.798507   37715 main.go:141] libmachine: (ha-931571-m02)   
	I1104 10:53:04.798512   37715 main.go:141] libmachine: (ha-931571-m02)   </cpu>
	I1104 10:53:04.798522   37715 main.go:141] libmachine: (ha-931571-m02)   <os>
	I1104 10:53:04.798534   37715 main.go:141] libmachine: (ha-931571-m02)     <type>hvm</type>
	I1104 10:53:04.798546   37715 main.go:141] libmachine: (ha-931571-m02)     <boot dev='cdrom'/>
	I1104 10:53:04.798552   37715 main.go:141] libmachine: (ha-931571-m02)     <boot dev='hd'/>
	I1104 10:53:04.798564   37715 main.go:141] libmachine: (ha-931571-m02)     <bootmenu enable='no'/>
	I1104 10:53:04.798571   37715 main.go:141] libmachine: (ha-931571-m02)   </os>
	I1104 10:53:04.798580   37715 main.go:141] libmachine: (ha-931571-m02)   <devices>
	I1104 10:53:04.798585   37715 main.go:141] libmachine: (ha-931571-m02)     <disk type='file' device='cdrom'>
	I1104 10:53:04.798596   37715 main.go:141] libmachine: (ha-931571-m02)       <source file='/home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571-m02/boot2docker.iso'/>
	I1104 10:53:04.798601   37715 main.go:141] libmachine: (ha-931571-m02)       <target dev='hdc' bus='scsi'/>
	I1104 10:53:04.798630   37715 main.go:141] libmachine: (ha-931571-m02)       <readonly/>
	I1104 10:53:04.798653   37715 main.go:141] libmachine: (ha-931571-m02)     </disk>
	I1104 10:53:04.798678   37715 main.go:141] libmachine: (ha-931571-m02)     <disk type='file' device='disk'>
	I1104 10:53:04.798702   37715 main.go:141] libmachine: (ha-931571-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1104 10:53:04.798718   37715 main.go:141] libmachine: (ha-931571-m02)       <source file='/home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571-m02/ha-931571-m02.rawdisk'/>
	I1104 10:53:04.798732   37715 main.go:141] libmachine: (ha-931571-m02)       <target dev='hda' bus='virtio'/>
	I1104 10:53:04.798747   37715 main.go:141] libmachine: (ha-931571-m02)     </disk>
	I1104 10:53:04.798763   37715 main.go:141] libmachine: (ha-931571-m02)     <interface type='network'>
	I1104 10:53:04.798783   37715 main.go:141] libmachine: (ha-931571-m02)       <source network='mk-ha-931571'/>
	I1104 10:53:04.798799   37715 main.go:141] libmachine: (ha-931571-m02)       <model type='virtio'/>
	I1104 10:53:04.798811   37715 main.go:141] libmachine: (ha-931571-m02)     </interface>
	I1104 10:53:04.798822   37715 main.go:141] libmachine: (ha-931571-m02)     <interface type='network'>
	I1104 10:53:04.798835   37715 main.go:141] libmachine: (ha-931571-m02)       <source network='default'/>
	I1104 10:53:04.798846   37715 main.go:141] libmachine: (ha-931571-m02)       <model type='virtio'/>
	I1104 10:53:04.798858   37715 main.go:141] libmachine: (ha-931571-m02)     </interface>
	I1104 10:53:04.798868   37715 main.go:141] libmachine: (ha-931571-m02)     <serial type='pty'>
	I1104 10:53:04.798881   37715 main.go:141] libmachine: (ha-931571-m02)       <target port='0'/>
	I1104 10:53:04.798892   37715 main.go:141] libmachine: (ha-931571-m02)     </serial>
	I1104 10:53:04.798901   37715 main.go:141] libmachine: (ha-931571-m02)     <console type='pty'>
	I1104 10:53:04.798910   37715 main.go:141] libmachine: (ha-931571-m02)       <target type='serial' port='0'/>
	I1104 10:53:04.798916   37715 main.go:141] libmachine: (ha-931571-m02)     </console>
	I1104 10:53:04.798925   37715 main.go:141] libmachine: (ha-931571-m02)     <rng model='virtio'>
	I1104 10:53:04.798938   37715 main.go:141] libmachine: (ha-931571-m02)       <backend model='random'>/dev/random</backend>
	I1104 10:53:04.798948   37715 main.go:141] libmachine: (ha-931571-m02)     </rng>
	I1104 10:53:04.798958   37715 main.go:141] libmachine: (ha-931571-m02)     
	I1104 10:53:04.798967   37715 main.go:141] libmachine: (ha-931571-m02)     
	I1104 10:53:04.798977   37715 main.go:141] libmachine: (ha-931571-m02)   </devices>
	I1104 10:53:04.798990   37715 main.go:141] libmachine: (ha-931571-m02) </domain>
	I1104 10:53:04.799001   37715 main.go:141] libmachine: (ha-931571-m02) 
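
The lines above dump the complete libvirt domain XML that the kvm2 driver defines for the new node. A hedged sketch of rendering a similar, trimmed definition with Go's text/template follows; the struct, template, and file paths are illustrative and omit devices that appear in the real definition (second network interface, serial console, RNG).

package main

import (
	"os"
	"text/template"
)

// domainConfig holds the handful of values that vary per machine in the
// libvirt definition logged above.
type domainConfig struct {
	Name     string
	MemoryMB int
	VCPUs    int
	DiskPath string
	ISOPath  string
	Network  string
}

const domainTmpl = `<domain type='kvm'>
  <name>{{.Name}}</name>
  <memory unit='MiB'>{{.MemoryMB}}</memory>
  <vcpu>{{.VCPUs}}</vcpu>
  <os>
    <type>hvm</type>
    <boot dev='cdrom'/>
    <boot dev='hd'/>
  </os>
  <devices>
    <disk type='file' device='cdrom'>
      <source file='{{.ISOPath}}'/>
      <target dev='hdc' bus='scsi'/>
      <readonly/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='default' io='threads'/>
      <source file='{{.DiskPath}}'/>
      <target dev='hda' bus='virtio'/>
    </disk>
    <interface type='network'>
      <source network='{{.Network}}'/>
      <model type='virtio'/>
    </interface>
  </devices>
</domain>
`

func main() {
	// Paths below are placeholders, not the paths used in this run.
	cfg := domainConfig{
		Name:     "ha-931571-m02",
		MemoryMB: 2200,
		VCPUs:    2,
		DiskPath: "/path/to/ha-931571-m02.rawdisk",
		ISOPath:  "/path/to/boot2docker.iso",
		Network:  "mk-ha-931571",
	}
	tmpl := template.Must(template.New("domain").Parse(domainTmpl))
	if err := tmpl.Execute(os.Stdout, cfg); err != nil {
		panic(err)
	}
}
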
	I1104 10:53:04.805977   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined MAC address 52:54:00:5e:b4:47 in network default
	I1104 10:53:04.806519   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:04.806536   37715 main.go:141] libmachine: (ha-931571-m02) Ensuring networks are active...
	I1104 10:53:04.807291   37715 main.go:141] libmachine: (ha-931571-m02) Ensuring network default is active
	I1104 10:53:04.807614   37715 main.go:141] libmachine: (ha-931571-m02) Ensuring network mk-ha-931571 is active
	I1104 10:53:04.807998   37715 main.go:141] libmachine: (ha-931571-m02) Getting domain xml...
	I1104 10:53:04.808751   37715 main.go:141] libmachine: (ha-931571-m02) Creating domain...
	I1104 10:53:06.037689   37715 main.go:141] libmachine: (ha-931571-m02) Waiting to get IP...
	I1104 10:53:06.038416   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:06.038827   37715 main.go:141] libmachine: (ha-931571-m02) DBG | unable to find current IP address of domain ha-931571-m02 in network mk-ha-931571
	I1104 10:53:06.038856   37715 main.go:141] libmachine: (ha-931571-m02) DBG | I1104 10:53:06.038804   38069 retry.go:31] will retry after 244.727015ms: waiting for machine to come up
	I1104 10:53:06.285395   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:06.285853   37715 main.go:141] libmachine: (ha-931571-m02) DBG | unable to find current IP address of domain ha-931571-m02 in network mk-ha-931571
	I1104 10:53:06.285879   37715 main.go:141] libmachine: (ha-931571-m02) DBG | I1104 10:53:06.285815   38069 retry.go:31] will retry after 291.944786ms: waiting for machine to come up
	I1104 10:53:06.579413   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:06.579939   37715 main.go:141] libmachine: (ha-931571-m02) DBG | unable to find current IP address of domain ha-931571-m02 in network mk-ha-931571
	I1104 10:53:06.579964   37715 main.go:141] libmachine: (ha-931571-m02) DBG | I1104 10:53:06.579896   38069 retry.go:31] will retry after 446.911163ms: waiting for machine to come up
	I1104 10:53:07.028452   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:07.028838   37715 main.go:141] libmachine: (ha-931571-m02) DBG | unable to find current IP address of domain ha-931571-m02 in network mk-ha-931571
	I1104 10:53:07.028870   37715 main.go:141] libmachine: (ha-931571-m02) DBG | I1104 10:53:07.028792   38069 retry.go:31] will retry after 472.390697ms: waiting for machine to come up
	I1104 10:53:07.502204   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:07.502568   37715 main.go:141] libmachine: (ha-931571-m02) DBG | unable to find current IP address of domain ha-931571-m02 in network mk-ha-931571
	I1104 10:53:07.502592   37715 main.go:141] libmachine: (ha-931571-m02) DBG | I1104 10:53:07.502526   38069 retry.go:31] will retry after 662.15145ms: waiting for machine to come up
	I1104 10:53:08.166152   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:08.166583   37715 main.go:141] libmachine: (ha-931571-m02) DBG | unable to find current IP address of domain ha-931571-m02 in network mk-ha-931571
	I1104 10:53:08.166609   37715 main.go:141] libmachine: (ha-931571-m02) DBG | I1104 10:53:08.166538   38069 retry.go:31] will retry after 886.374206ms: waiting for machine to come up
	I1104 10:53:09.054240   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:09.054689   37715 main.go:141] libmachine: (ha-931571-m02) DBG | unable to find current IP address of domain ha-931571-m02 in network mk-ha-931571
	I1104 10:53:09.054715   37715 main.go:141] libmachine: (ha-931571-m02) DBG | I1104 10:53:09.054670   38069 retry.go:31] will retry after 963.475989ms: waiting for machine to come up
	I1104 10:53:10.020142   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:10.020587   37715 main.go:141] libmachine: (ha-931571-m02) DBG | unable to find current IP address of domain ha-931571-m02 in network mk-ha-931571
	I1104 10:53:10.020630   37715 main.go:141] libmachine: (ha-931571-m02) DBG | I1104 10:53:10.020571   38069 retry.go:31] will retry after 1.332433034s: waiting for machine to come up
	I1104 10:53:11.354908   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:11.355309   37715 main.go:141] libmachine: (ha-931571-m02) DBG | unable to find current IP address of domain ha-931571-m02 in network mk-ha-931571
	I1104 10:53:11.355331   37715 main.go:141] libmachine: (ha-931571-m02) DBG | I1104 10:53:11.355273   38069 retry.go:31] will retry after 1.652203867s: waiting for machine to come up
	I1104 10:53:13.009876   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:13.010297   37715 main.go:141] libmachine: (ha-931571-m02) DBG | unable to find current IP address of domain ha-931571-m02 in network mk-ha-931571
	I1104 10:53:13.010319   37715 main.go:141] libmachine: (ha-931571-m02) DBG | I1104 10:53:13.010254   38069 retry.go:31] will retry after 2.320402176s: waiting for machine to come up
	I1104 10:53:15.332045   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:15.332414   37715 main.go:141] libmachine: (ha-931571-m02) DBG | unable to find current IP address of domain ha-931571-m02 in network mk-ha-931571
	I1104 10:53:15.332441   37715 main.go:141] libmachine: (ha-931571-m02) DBG | I1104 10:53:15.332356   38069 retry.go:31] will retry after 2.652871808s: waiting for machine to come up
	I1104 10:53:17.987774   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:17.988211   37715 main.go:141] libmachine: (ha-931571-m02) DBG | unable to find current IP address of domain ha-931571-m02 in network mk-ha-931571
	I1104 10:53:17.988231   37715 main.go:141] libmachine: (ha-931571-m02) DBG | I1104 10:53:17.988174   38069 retry.go:31] will retry after 3.518414185s: waiting for machine to come up
	I1104 10:53:21.508515   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:21.508901   37715 main.go:141] libmachine: (ha-931571-m02) DBG | unable to find current IP address of domain ha-931571-m02 in network mk-ha-931571
	I1104 10:53:21.508926   37715 main.go:141] libmachine: (ha-931571-m02) DBG | I1104 10:53:21.508866   38069 retry.go:31] will retry after 4.345855832s: waiting for machine to come up
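
The repeated "will retry after ..." messages above come from a polling loop that waits for the domain's DHCP lease, sleeping a little longer each time. A minimal sketch of that retry pattern follows; the doubling factor, jitter, and cap are assumptions rather than the exact schedule retry.go uses.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitFor polls check() with a growing, jittered delay until it succeeds or
// the deadline passes, similar in spirit to the wait-for-IP loop logged above.
func waitFor(check func() error, initial, max time.Duration, deadline time.Time) error {
	delay := initial
	for {
		if err := check(); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return errors.New("timed out waiting for condition")
		}
		// Add up to 50% jitter so concurrent waiters do not poll in lockstep.
		sleep := delay + time.Duration(rand.Int63n(int64(delay)/2+1))
		fmt.Printf("will retry after %v\n", sleep)
		time.Sleep(sleep)
		if delay *= 2; delay > max {
			delay = max
		}
	}
}

func main() {
	start := time.Now()
	err := waitFor(func() error {
		// Stand-in for "does the domain have an IP address yet?".
		if time.Since(start) > 2*time.Second {
			return nil
		}
		return errors.New("no IP yet")
	}, 250*time.Millisecond, 5*time.Second, time.Now().Add(30*time.Second))
	fmt.Println("done:", err)
}
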
	I1104 10:53:25.856753   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:25.857143   37715 main.go:141] libmachine: (ha-931571-m02) Found IP for machine: 192.168.39.245
	I1104 10:53:25.857167   37715 main.go:141] libmachine: (ha-931571-m02) Reserving static IP address...
	I1104 10:53:25.857181   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has current primary IP address 192.168.39.245 and MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:25.857621   37715 main.go:141] libmachine: (ha-931571-m02) DBG | unable to find host DHCP lease matching {name: "ha-931571-m02", mac: "52:54:00:5c:86:6b", ip: "192.168.39.245"} in network mk-ha-931571
	I1104 10:53:25.931250   37715 main.go:141] libmachine: (ha-931571-m02) DBG | Getting to WaitForSSH function...
	I1104 10:53:25.931278   37715 main.go:141] libmachine: (ha-931571-m02) Reserved static IP address: 192.168.39.245
	I1104 10:53:25.931296   37715 main.go:141] libmachine: (ha-931571-m02) Waiting for SSH to be available...
	I1104 10:53:25.933968   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:25.934431   37715 main.go:141] libmachine: (ha-931571-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:86:6b", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:53:18 +0000 UTC Type:0 Mac:52:54:00:5c:86:6b Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:minikube Clientid:01:52:54:00:5c:86:6b}
	I1104 10:53:25.934489   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:25.934562   37715 main.go:141] libmachine: (ha-931571-m02) DBG | Using SSH client type: external
	I1104 10:53:25.934591   37715 main.go:141] libmachine: (ha-931571-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571-m02/id_rsa (-rw-------)
	I1104 10:53:25.934652   37715 main.go:141] libmachine: (ha-931571-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.245 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1104 10:53:25.934674   37715 main.go:141] libmachine: (ha-931571-m02) DBG | About to run SSH command:
	I1104 10:53:25.934692   37715 main.go:141] libmachine: (ha-931571-m02) DBG | exit 0
	I1104 10:53:26.068913   37715 main.go:141] libmachine: (ha-931571-m02) DBG | SSH cmd err, output: <nil>: 
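
WaitForSSH above succeeds once `exit 0` can be run through the external ssh client with the options logged at 10:53:25.934. The sketch below reproduces that reachability probe with os/exec; the key path is a placeholder, and the user and address are copied from this run purely for illustration.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// sshReachable runs "exit 0" on the target with a non-interactive ssh
// invocation of the same kind shown in the log; a nil error means the guest
// is accepting SSH connections.
func sshReachable(user, host, keyPath string) error {
	cmd := exec.Command("ssh",
		"-F", "/dev/null",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		fmt.Sprintf("%s@%s", user, host),
		"exit 0")
	return cmd.Run()
}

func main() {
	for attempt := 1; attempt <= 5; attempt++ {
		if err := sshReachable("docker", "192.168.39.245", "/path/to/id_rsa"); err == nil {
			fmt.Println("SSH is available")
			return
		}
		fmt.Printf("attempt %d failed, retrying\n", attempt)
		time.Sleep(2 * time.Second)
	}
	fmt.Println("gave up waiting for SSH")
}
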
	I1104 10:53:26.069182   37715 main.go:141] libmachine: (ha-931571-m02) KVM machine creation complete!
	I1104 10:53:26.069569   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetConfigRaw
	I1104 10:53:26.070061   37715 main.go:141] libmachine: (ha-931571-m02) Calling .DriverName
	I1104 10:53:26.070245   37715 main.go:141] libmachine: (ha-931571-m02) Calling .DriverName
	I1104 10:53:26.070421   37715 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1104 10:53:26.070438   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetState
	I1104 10:53:26.071961   37715 main.go:141] libmachine: Detecting operating system of created instance...
	I1104 10:53:26.071975   37715 main.go:141] libmachine: Waiting for SSH to be available...
	I1104 10:53:26.071980   37715 main.go:141] libmachine: Getting to WaitForSSH function...
	I1104 10:53:26.071985   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHHostname
	I1104 10:53:26.074060   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:26.074383   37715 main.go:141] libmachine: (ha-931571-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:86:6b", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:53:18 +0000 UTC Type:0 Mac:52:54:00:5c:86:6b Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-931571-m02 Clientid:01:52:54:00:5c:86:6b}
	I1104 10:53:26.074403   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:26.074574   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHPort
	I1104 10:53:26.074737   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHKeyPath
	I1104 10:53:26.074878   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHKeyPath
	I1104 10:53:26.074976   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHUsername
	I1104 10:53:26.075126   37715 main.go:141] libmachine: Using SSH client type: native
	I1104 10:53:26.075361   37715 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.245 22 <nil> <nil>}
	I1104 10:53:26.075377   37715 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1104 10:53:26.184350   37715 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1104 10:53:26.184379   37715 main.go:141] libmachine: Detecting the provisioner...
	I1104 10:53:26.184395   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHHostname
	I1104 10:53:26.186866   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:26.187176   37715 main.go:141] libmachine: (ha-931571-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:86:6b", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:53:18 +0000 UTC Type:0 Mac:52:54:00:5c:86:6b Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-931571-m02 Clientid:01:52:54:00:5c:86:6b}
	I1104 10:53:26.187196   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:26.187362   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHPort
	I1104 10:53:26.187546   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHKeyPath
	I1104 10:53:26.187699   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHKeyPath
	I1104 10:53:26.187825   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHUsername
	I1104 10:53:26.187985   37715 main.go:141] libmachine: Using SSH client type: native
	I1104 10:53:26.188193   37715 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.245 22 <nil> <nil>}
	I1104 10:53:26.188204   37715 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1104 10:53:26.301614   37715 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1104 10:53:26.301685   37715 main.go:141] libmachine: found compatible host: buildroot
	I1104 10:53:26.301699   37715 main.go:141] libmachine: Provisioning with buildroot...
	I1104 10:53:26.301711   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetMachineName
	I1104 10:53:26.301942   37715 buildroot.go:166] provisioning hostname "ha-931571-m02"
	I1104 10:53:26.301964   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetMachineName
	I1104 10:53:26.302139   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHHostname
	I1104 10:53:26.304767   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:26.305309   37715 main.go:141] libmachine: (ha-931571-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:86:6b", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:53:18 +0000 UTC Type:0 Mac:52:54:00:5c:86:6b Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-931571-m02 Clientid:01:52:54:00:5c:86:6b}
	I1104 10:53:26.305334   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:26.305470   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHPort
	I1104 10:53:26.305626   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHKeyPath
	I1104 10:53:26.305790   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHKeyPath
	I1104 10:53:26.305931   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHUsername
	I1104 10:53:26.306093   37715 main.go:141] libmachine: Using SSH client type: native
	I1104 10:53:26.306297   37715 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.245 22 <nil> <nil>}
	I1104 10:53:26.306310   37715 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-931571-m02 && echo "ha-931571-m02" | sudo tee /etc/hostname
	I1104 10:53:26.430814   37715 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-931571-m02
	
	I1104 10:53:26.430842   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHHostname
	I1104 10:53:26.433622   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:26.433925   37715 main.go:141] libmachine: (ha-931571-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:86:6b", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:53:18 +0000 UTC Type:0 Mac:52:54:00:5c:86:6b Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-931571-m02 Clientid:01:52:54:00:5c:86:6b}
	I1104 10:53:26.433953   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:26.434109   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHPort
	I1104 10:53:26.434330   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHKeyPath
	I1104 10:53:26.434473   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHKeyPath
	I1104 10:53:26.434584   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHUsername
	I1104 10:53:26.434716   37715 main.go:141] libmachine: Using SSH client type: native
	I1104 10:53:26.434907   37715 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.245 22 <nil> <nil>}
	I1104 10:53:26.434931   37715 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-931571-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-931571-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-931571-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1104 10:53:26.553495   37715 main.go:141] libmachine: SSH cmd err, output: <nil>: 
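
The shell snippet above makes sure /etc/hosts on the guest maps 127.0.1.1 to the new hostname. For illustration only, the Go sketch below applies an equivalent edit to an in-memory hosts file; the real change is made with grep/sed/tee over SSH exactly as logged, and this version's matching is looser than the grep -x patterns.

package main

import (
	"fmt"
	"strings"
)

// ensureHostname rewrites or appends the 127.0.1.1 line so it points at
// hostname, matching the effect of the shell snippet run over SSH above.
func ensureHostname(hostsFile, hostname string) string {
	lines := strings.Split(hostsFile, "\n")
	for i, line := range lines {
		if strings.Contains(line, hostname) {
			return hostsFile // hostname already present
		}
		if strings.HasPrefix(line, "127.0.1.1") {
			lines[i] = "127.0.1.1 " + hostname
			return strings.Join(lines, "\n")
		}
	}
	return hostsFile + "\n127.0.1.1 " + hostname
}

func main() {
	hosts := "127.0.0.1 localhost\n127.0.1.1 minikube"
	fmt.Println(ensureHostname(hosts, "ha-931571-m02"))
}
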
	I1104 10:53:26.553519   37715 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19906-19898/.minikube CaCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19906-19898/.minikube}
	I1104 10:53:26.553534   37715 buildroot.go:174] setting up certificates
	I1104 10:53:26.553543   37715 provision.go:84] configureAuth start
	I1104 10:53:26.553551   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetMachineName
	I1104 10:53:26.553773   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetIP
	I1104 10:53:26.556203   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:26.556500   37715 main.go:141] libmachine: (ha-931571-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:86:6b", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:53:18 +0000 UTC Type:0 Mac:52:54:00:5c:86:6b Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-931571-m02 Clientid:01:52:54:00:5c:86:6b}
	I1104 10:53:26.556519   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:26.556610   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHHostname
	I1104 10:53:26.558806   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:26.559168   37715 main.go:141] libmachine: (ha-931571-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:86:6b", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:53:18 +0000 UTC Type:0 Mac:52:54:00:5c:86:6b Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-931571-m02 Clientid:01:52:54:00:5c:86:6b}
	I1104 10:53:26.559194   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:26.559467   37715 provision.go:143] copyHostCerts
	I1104 10:53:26.559496   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem
	I1104 10:53:26.559535   37715 exec_runner.go:144] found /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem, removing ...
	I1104 10:53:26.559546   37715 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem
	I1104 10:53:26.559623   37715 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem (1078 bytes)
	I1104 10:53:26.559707   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem
	I1104 10:53:26.559732   37715 exec_runner.go:144] found /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem, removing ...
	I1104 10:53:26.559741   37715 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem
	I1104 10:53:26.559778   37715 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem (1123 bytes)
	I1104 10:53:26.559830   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem
	I1104 10:53:26.559853   37715 exec_runner.go:144] found /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem, removing ...
	I1104 10:53:26.559865   37715 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem
	I1104 10:53:26.559899   37715 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem (1679 bytes)
	I1104 10:53:26.559968   37715 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem org=jenkins.ha-931571-m02 san=[127.0.0.1 192.168.39.245 ha-931571-m02 localhost minikube]
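
provision.go above generates a server certificate signed by the minikube CA, with the SANs listed in the log line and the 26280h lifetime from the cluster config. A self-contained sketch of that kind of issuance with crypto/x509 follows; the throwaway CA, serial numbers, and 2048-bit key size are assumptions for the example, not minikube's actual parameters, and error handling is abbreviated.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Throwaway CA standing in for minikube's ca.pem / ca-key.pem
	// (errors elided for brevity).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(26280 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with the SANs shown in the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-931571-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-931571-m02", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.245")},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	fmt.Printf("issued server cert, %d DER bytes\n", len(srvDER))
}
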
	I1104 10:53:26.827173   37715 provision.go:177] copyRemoteCerts
	I1104 10:53:26.827226   37715 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1104 10:53:26.827248   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHHostname
	I1104 10:53:26.829975   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:26.830343   37715 main.go:141] libmachine: (ha-931571-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:86:6b", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:53:18 +0000 UTC Type:0 Mac:52:54:00:5c:86:6b Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-931571-m02 Clientid:01:52:54:00:5c:86:6b}
	I1104 10:53:26.830372   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:26.830576   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHPort
	I1104 10:53:26.830763   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHKeyPath
	I1104 10:53:26.830912   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHUsername
	I1104 10:53:26.831022   37715 sshutil.go:53] new ssh client: &{IP:192.168.39.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571-m02/id_rsa Username:docker}
	I1104 10:53:26.923318   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1104 10:53:26.923390   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1104 10:53:26.950708   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1104 10:53:26.950773   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1104 10:53:26.976975   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1104 10:53:26.977045   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1104 10:53:27.002230   37715 provision.go:87] duration metric: took 448.676469ms to configureAuth
	I1104 10:53:27.002252   37715 buildroot.go:189] setting minikube options for container-runtime
	I1104 10:53:27.002404   37715 config.go:182] Loaded profile config "ha-931571": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1104 10:53:27.002475   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHHostname
	I1104 10:53:27.005273   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:27.005618   37715 main.go:141] libmachine: (ha-931571-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:86:6b", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:53:18 +0000 UTC Type:0 Mac:52:54:00:5c:86:6b Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-931571-m02 Clientid:01:52:54:00:5c:86:6b}
	I1104 10:53:27.005646   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:27.005772   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHPort
	I1104 10:53:27.005978   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHKeyPath
	I1104 10:53:27.006123   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHKeyPath
	I1104 10:53:27.006279   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHUsername
	I1104 10:53:27.006465   37715 main.go:141] libmachine: Using SSH client type: native
	I1104 10:53:27.006627   37715 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.245 22 <nil> <nil>}
	I1104 10:53:27.006641   37715 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1104 10:53:27.235271   37715 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1104 10:53:27.235297   37715 main.go:141] libmachine: Checking connection to Docker...
	I1104 10:53:27.235305   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetURL
	I1104 10:53:27.236550   37715 main.go:141] libmachine: (ha-931571-m02) DBG | Using libvirt version 6000000
	I1104 10:53:27.238826   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:27.239189   37715 main.go:141] libmachine: (ha-931571-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:86:6b", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:53:18 +0000 UTC Type:0 Mac:52:54:00:5c:86:6b Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-931571-m02 Clientid:01:52:54:00:5c:86:6b}
	I1104 10:53:27.239220   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:27.239401   37715 main.go:141] libmachine: Docker is up and running!
	I1104 10:53:27.239418   37715 main.go:141] libmachine: Reticulating splines...
	I1104 10:53:27.239426   37715 client.go:171] duration metric: took 22.986586779s to LocalClient.Create
	I1104 10:53:27.239451   37715 start.go:167] duration metric: took 22.986656312s to libmachine.API.Create "ha-931571"
	I1104 10:53:27.239472   37715 start.go:293] postStartSetup for "ha-931571-m02" (driver="kvm2")
	I1104 10:53:27.239488   37715 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1104 10:53:27.239510   37715 main.go:141] libmachine: (ha-931571-m02) Calling .DriverName
	I1104 10:53:27.239721   37715 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1104 10:53:27.239747   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHHostname
	I1104 10:53:27.241968   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:27.242332   37715 main.go:141] libmachine: (ha-931571-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:86:6b", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:53:18 +0000 UTC Type:0 Mac:52:54:00:5c:86:6b Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-931571-m02 Clientid:01:52:54:00:5c:86:6b}
	I1104 10:53:27.242352   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:27.242491   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHPort
	I1104 10:53:27.242658   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHKeyPath
	I1104 10:53:27.242769   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHUsername
	I1104 10:53:27.242872   37715 sshutil.go:53] new ssh client: &{IP:192.168.39.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571-m02/id_rsa Username:docker}
	I1104 10:53:27.327061   37715 ssh_runner.go:195] Run: cat /etc/os-release
	I1104 10:53:27.331021   37715 info.go:137] Remote host: Buildroot 2023.02.9
	I1104 10:53:27.331050   37715 filesync.go:126] Scanning /home/jenkins/minikube-integration/19906-19898/.minikube/addons for local assets ...
	I1104 10:53:27.331133   37715 filesync.go:126] Scanning /home/jenkins/minikube-integration/19906-19898/.minikube/files for local assets ...
	I1104 10:53:27.331207   37715 filesync.go:149] local asset: /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem -> 272182.pem in /etc/ssl/certs
	I1104 10:53:27.331218   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem -> /etc/ssl/certs/272182.pem
	I1104 10:53:27.331300   37715 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1104 10:53:27.341280   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem --> /etc/ssl/certs/272182.pem (1708 bytes)
	I1104 10:53:27.363737   37715 start.go:296] duration metric: took 124.248011ms for postStartSetup
	I1104 10:53:27.363783   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetConfigRaw
	I1104 10:53:27.364431   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetIP
	I1104 10:53:27.367195   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:27.367660   37715 main.go:141] libmachine: (ha-931571-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:86:6b", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:53:18 +0000 UTC Type:0 Mac:52:54:00:5c:86:6b Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-931571-m02 Clientid:01:52:54:00:5c:86:6b}
	I1104 10:53:27.367698   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:27.367926   37715 profile.go:143] Saving config to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/config.json ...
	I1104 10:53:27.368121   37715 start.go:128] duration metric: took 23.134111471s to createHost
	I1104 10:53:27.368147   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHHostname
	I1104 10:53:27.370510   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:27.370846   37715 main.go:141] libmachine: (ha-931571-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:86:6b", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:53:18 +0000 UTC Type:0 Mac:52:54:00:5c:86:6b Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-931571-m02 Clientid:01:52:54:00:5c:86:6b}
	I1104 10:53:27.370881   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:27.371043   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHPort
	I1104 10:53:27.371226   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHKeyPath
	I1104 10:53:27.371432   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHKeyPath
	I1104 10:53:27.371573   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHUsername
	I1104 10:53:27.371728   37715 main.go:141] libmachine: Using SSH client type: native
	I1104 10:53:27.371899   37715 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.245 22 <nil> <nil>}
	I1104 10:53:27.371912   37715 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1104 10:53:27.485557   37715 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730717607.449108710
	
	I1104 10:53:27.485578   37715 fix.go:216] guest clock: 1730717607.449108710
	I1104 10:53:27.485585   37715 fix.go:229] Guest: 2024-11-04 10:53:27.44910871 +0000 UTC Remote: 2024-11-04 10:53:27.368133628 +0000 UTC m=+66.039651871 (delta=80.975082ms)
	I1104 10:53:27.485600   37715 fix.go:200] guest clock delta is within tolerance: 80.975082ms
	I1104 10:53:27.485605   37715 start.go:83] releasing machines lock for "ha-931571-m02", held for 23.251676872s
	I1104 10:53:27.485620   37715 main.go:141] libmachine: (ha-931571-m02) Calling .DriverName
	I1104 10:53:27.485857   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetIP
	I1104 10:53:27.488648   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:27.489014   37715 main.go:141] libmachine: (ha-931571-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:86:6b", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:53:18 +0000 UTC Type:0 Mac:52:54:00:5c:86:6b Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-931571-m02 Clientid:01:52:54:00:5c:86:6b}
	I1104 10:53:27.489041   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:27.491305   37715 out.go:177] * Found network options:
	I1104 10:53:27.492602   37715 out.go:177]   - NO_PROXY=192.168.39.67
	W1104 10:53:27.493715   37715 proxy.go:119] fail to check proxy env: Error ip not in block
	I1104 10:53:27.493752   37715 main.go:141] libmachine: (ha-931571-m02) Calling .DriverName
	I1104 10:53:27.494253   37715 main.go:141] libmachine: (ha-931571-m02) Calling .DriverName
	I1104 10:53:27.494447   37715 main.go:141] libmachine: (ha-931571-m02) Calling .DriverName
	I1104 10:53:27.494556   37715 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1104 10:53:27.494595   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHHostname
	W1104 10:53:27.494597   37715 proxy.go:119] fail to check proxy env: Error ip not in block
	I1104 10:53:27.494657   37715 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1104 10:53:27.494679   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHHostname
	I1104 10:53:27.497460   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:27.497637   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:27.497850   37715 main.go:141] libmachine: (ha-931571-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:86:6b", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:53:18 +0000 UTC Type:0 Mac:52:54:00:5c:86:6b Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-931571-m02 Clientid:01:52:54:00:5c:86:6b}
	I1104 10:53:27.497871   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:27.497991   37715 main.go:141] libmachine: (ha-931571-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:86:6b", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:53:18 +0000 UTC Type:0 Mac:52:54:00:5c:86:6b Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-931571-m02 Clientid:01:52:54:00:5c:86:6b}
	I1104 10:53:27.498003   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:27.498025   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHPort
	I1104 10:53:27.498232   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHKeyPath
	I1104 10:53:27.498254   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHPort
	I1104 10:53:27.498403   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHKeyPath
	I1104 10:53:27.498437   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHUsername
	I1104 10:53:27.498538   37715 sshutil.go:53] new ssh client: &{IP:192.168.39.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571-m02/id_rsa Username:docker}
	I1104 10:53:27.498550   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHUsername
	I1104 10:53:27.498773   37715 sshutil.go:53] new ssh client: &{IP:192.168.39.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571-m02/id_rsa Username:docker}
	I1104 10:53:27.735755   37715 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1104 10:53:27.742047   37715 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1104 10:53:27.742118   37715 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1104 10:53:27.757546   37715 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1104 10:53:27.757568   37715 start.go:495] detecting cgroup driver to use...
	I1104 10:53:27.757654   37715 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1104 10:53:27.775341   37715 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1104 10:53:27.789267   37715 docker.go:217] disabling cri-docker service (if available) ...
	I1104 10:53:27.789322   37715 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1104 10:53:27.802395   37715 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1104 10:53:27.815846   37715 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1104 10:53:27.932464   37715 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1104 10:53:28.072054   37715 docker.go:233] disabling docker service ...
	I1104 10:53:28.072113   37715 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1104 10:53:28.085955   37715 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1104 10:53:28.098515   37715 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1104 10:53:28.231393   37715 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1104 10:53:28.348075   37715 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1104 10:53:28.360668   37715 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1104 10:53:28.377621   37715 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1104 10:53:28.377680   37715 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 10:53:28.387614   37715 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1104 10:53:28.387678   37715 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 10:53:28.397527   37715 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 10:53:28.406950   37715 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 10:53:28.416691   37715 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1104 10:53:28.426696   37715 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 10:53:28.436536   37715 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 10:53:28.452706   37715 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 10:53:28.462377   37715 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1104 10:53:28.471479   37715 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1104 10:53:28.471541   37715 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1104 10:53:28.484536   37715 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1104 10:53:28.493914   37715 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1104 10:53:28.602971   37715 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1104 10:53:28.692433   37715 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1104 10:53:28.692522   37715 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1104 10:53:28.696783   37715 start.go:563] Will wait 60s for crictl version
	I1104 10:53:28.696822   37715 ssh_runner.go:195] Run: which crictl
	I1104 10:53:28.700013   37715 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1104 10:53:28.734056   37715 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1104 10:53:28.734128   37715 ssh_runner.go:195] Run: crio --version
	I1104 10:53:28.760475   37715 ssh_runner.go:195] Run: crio --version
	I1104 10:53:28.789783   37715 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1104 10:53:28.791233   37715 out.go:177]   - env NO_PROXY=192.168.39.67
	I1104 10:53:28.792582   37715 main.go:141] libmachine: (ha-931571-m02) Calling .GetIP
	I1104 10:53:28.795120   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:28.795494   37715 main.go:141] libmachine: (ha-931571-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:86:6b", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:53:18 +0000 UTC Type:0 Mac:52:54:00:5c:86:6b Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-931571-m02 Clientid:01:52:54:00:5c:86:6b}
	I1104 10:53:28.795520   37715 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 10:53:28.795759   37715 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1104 10:53:28.799797   37715 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1104 10:53:28.811896   37715 mustload.go:65] Loading cluster: ha-931571
	I1104 10:53:28.812115   37715 config.go:182] Loaded profile config "ha-931571": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1104 10:53:28.812360   37715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 10:53:28.812401   37715 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 10:53:28.826717   37715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34275
	I1104 10:53:28.827181   37715 main.go:141] libmachine: () Calling .GetVersion
	I1104 10:53:28.827674   37715 main.go:141] libmachine: Using API Version  1
	I1104 10:53:28.827693   37715 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 10:53:28.828004   37715 main.go:141] libmachine: () Calling .GetMachineName
	I1104 10:53:28.828173   37715 main.go:141] libmachine: (ha-931571) Calling .GetState
	I1104 10:53:28.829698   37715 host.go:66] Checking if "ha-931571" exists ...
	I1104 10:53:28.829978   37715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 10:53:28.830013   37715 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 10:53:28.844302   37715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41319
	I1104 10:53:28.844715   37715 main.go:141] libmachine: () Calling .GetVersion
	I1104 10:53:28.845157   37715 main.go:141] libmachine: Using API Version  1
	I1104 10:53:28.845180   37715 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 10:53:28.845561   37715 main.go:141] libmachine: () Calling .GetMachineName
	I1104 10:53:28.845729   37715 main.go:141] libmachine: (ha-931571) Calling .DriverName
	I1104 10:53:28.845886   37715 certs.go:68] Setting up /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571 for IP: 192.168.39.245
	I1104 10:53:28.845896   37715 certs.go:194] generating shared ca certs ...
	I1104 10:53:28.845908   37715 certs.go:226] acquiring lock for ca certs: {Name:mk4fb0469da6697654d0290fd25edddd463f47c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 10:53:28.846013   37715 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.key
	I1104 10:53:28.846050   37715 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.key
	I1104 10:53:28.846056   37715 certs.go:256] generating profile certs ...
	I1104 10:53:28.846117   37715 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/client.key
	I1104 10:53:28.846138   37715 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.key.44df713a
	I1104 10:53:28.846149   37715 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.crt.44df713a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.67 192.168.39.245 192.168.39.254]
	I1104 10:53:28.973533   37715 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.crt.44df713a ...
	I1104 10:53:28.973558   37715 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.crt.44df713a: {Name:mk251fe01c9791f2c1df00673ac1979d7532e3b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 10:53:28.973716   37715 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.key.44df713a ...
	I1104 10:53:28.973729   37715 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.key.44df713a: {Name:mkef3dc2affbfe3d37549d8d043a12581b7267b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 10:53:28.973806   37715 certs.go:381] copying /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.crt.44df713a -> /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.crt
	I1104 10:53:28.973935   37715 certs.go:385] copying /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.key.44df713a -> /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.key
	I1104 10:53:28.974053   37715 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/proxy-client.key
	I1104 10:53:28.974067   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1104 10:53:28.974079   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1104 10:53:28.974092   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1104 10:53:28.974103   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1104 10:53:28.974114   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1104 10:53:28.974127   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1104 10:53:28.974139   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1104 10:53:28.974151   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1104 10:53:28.974191   37715 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218.pem (1338 bytes)
	W1104 10:53:28.974219   37715 certs.go:480] ignoring /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218_empty.pem, impossibly tiny 0 bytes
	I1104 10:53:28.974228   37715 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem (1675 bytes)
	I1104 10:53:28.974249   37715 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem (1078 bytes)
	I1104 10:53:28.974273   37715 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem (1123 bytes)
	I1104 10:53:28.974294   37715 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem (1679 bytes)
	I1104 10:53:28.974329   37715 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem (1708 bytes)
	I1104 10:53:28.974353   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218.pem -> /usr/share/ca-certificates/27218.pem
	I1104 10:53:28.974366   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem -> /usr/share/ca-certificates/272182.pem
	I1104 10:53:28.974379   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1104 10:53:28.974408   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHHostname
	I1104 10:53:28.977338   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:53:28.977742   37715 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 10:53:28.977776   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:53:28.977945   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHPort
	I1104 10:53:28.978138   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 10:53:28.978269   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHUsername
	I1104 10:53:28.978403   37715 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571/id_rsa Username:docker}
	I1104 10:53:29.049594   37715 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1104 10:53:29.054655   37715 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1104 10:53:29.065445   37715 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1104 10:53:29.070822   37715 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1104 10:53:29.082304   37715 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1104 10:53:29.086563   37715 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1104 10:53:29.098922   37715 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1104 10:53:29.103085   37715 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1104 10:53:29.113035   37715 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1104 10:53:29.117456   37715 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1104 10:53:29.127764   37715 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1104 10:53:29.131629   37715 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1104 10:53:29.143522   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1104 10:53:29.167376   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1104 10:53:29.189625   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1104 10:53:29.212768   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1104 10:53:29.235967   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1104 10:53:29.263247   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1104 10:53:29.285302   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1104 10:53:29.306703   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1104 10:53:29.328748   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218.pem --> /usr/share/ca-certificates/27218.pem (1338 bytes)
	I1104 10:53:29.350648   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem --> /usr/share/ca-certificates/272182.pem (1708 bytes)
	I1104 10:53:29.372264   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1104 10:53:29.395406   37715 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1104 10:53:29.410777   37715 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1104 10:53:29.427042   37715 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1104 10:53:29.443978   37715 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1104 10:53:29.460125   37715 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1104 10:53:29.475628   37715 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1104 10:53:29.491185   37715 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1104 10:53:29.507040   37715 ssh_runner.go:195] Run: openssl version
	I1104 10:53:29.512376   37715 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1104 10:53:29.522746   37715 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1104 10:53:29.526894   37715 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  4 10:38 /usr/share/ca-certificates/minikubeCA.pem
	I1104 10:53:29.526950   37715 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1104 10:53:29.532557   37715 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1104 10:53:29.543248   37715 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/27218.pem && ln -fs /usr/share/ca-certificates/27218.pem /etc/ssl/certs/27218.pem"
	I1104 10:53:29.553302   37715 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/27218.pem
	I1104 10:53:29.557429   37715 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  4 10:49 /usr/share/ca-certificates/27218.pem
	I1104 10:53:29.557475   37715 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/27218.pem
	I1104 10:53:29.562752   37715 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/27218.pem /etc/ssl/certs/51391683.0"
	I1104 10:53:29.573585   37715 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/272182.pem && ln -fs /usr/share/ca-certificates/272182.pem /etc/ssl/certs/272182.pem"
	I1104 10:53:29.583479   37715 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/272182.pem
	I1104 10:53:29.587879   37715 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  4 10:49 /usr/share/ca-certificates/272182.pem
	I1104 10:53:29.587928   37715 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/272182.pem
	I1104 10:53:29.594267   37715 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/272182.pem /etc/ssl/certs/3ec20f2e.0"
	I1104 10:53:29.605746   37715 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1104 10:53:29.609628   37715 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1104 10:53:29.609689   37715 kubeadm.go:934] updating node {m02 192.168.39.245 8443 v1.31.2 crio true true} ...
	I1104 10:53:29.609774   37715 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-931571-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.245
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-931571 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1104 10:53:29.609799   37715 kube-vip.go:115] generating kube-vip config ...
	I1104 10:53:29.609830   37715 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1104 10:53:29.626833   37715 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1104 10:53:29.626905   37715 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.5
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1104 10:53:29.626952   37715 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1104 10:53:29.636985   37715 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.2': No such file or directory
	
	Initiating transfer...
	I1104 10:53:29.637050   37715 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.2
	I1104 10:53:29.646235   37715 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
	I1104 10:53:29.646266   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/linux/amd64/v1.31.2/kubectl -> /var/lib/minikube/binaries/v1.31.2/kubectl
	I1104 10:53:29.646297   37715 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19906-19898/.minikube/cache/linux/amd64/v1.31.2/kubelet
	I1104 10:53:29.646318   37715 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19906-19898/.minikube/cache/linux/amd64/v1.31.2/kubeadm
	I1104 10:53:29.646321   37715 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl
	I1104 10:53:29.650548   37715 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubectl': No such file or directory
	I1104 10:53:29.650575   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/cache/linux/amd64/v1.31.2/kubectl --> /var/lib/minikube/binaries/v1.31.2/kubectl (56381592 bytes)
	I1104 10:53:30.395926   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/linux/amd64/v1.31.2/kubeadm -> /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1104 10:53:30.396007   37715 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1104 10:53:30.400715   37715 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubeadm': No such file or directory
	I1104 10:53:30.400746   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/cache/linux/amd64/v1.31.2/kubeadm --> /var/lib/minikube/binaries/v1.31.2/kubeadm (58290328 bytes)
	I1104 10:53:30.426541   37715 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1104 10:53:30.447212   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/linux/amd64/v1.31.2/kubelet -> /var/lib/minikube/binaries/v1.31.2/kubelet
	I1104 10:53:30.447328   37715 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet
	I1104 10:53:30.458650   37715 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubelet': No such file or directory
	I1104 10:53:30.458689   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/cache/linux/amd64/v1.31.2/kubelet --> /var/lib/minikube/binaries/v1.31.2/kubelet (76902744 bytes)
	I1104 10:53:30.919365   37715 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1104 10:53:30.928897   37715 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1104 10:53:30.946677   37715 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1104 10:53:30.963726   37715 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1104 10:53:30.981653   37715 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1104 10:53:30.985571   37715 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1104 10:53:30.998898   37715 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1104 10:53:31.132385   37715 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1104 10:53:31.149804   37715 host.go:66] Checking if "ha-931571" exists ...
	I1104 10:53:31.150291   37715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 10:53:31.150345   37715 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 10:53:31.165094   37715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39235
	I1104 10:53:31.165587   37715 main.go:141] libmachine: () Calling .GetVersion
	I1104 10:53:31.166163   37715 main.go:141] libmachine: Using API Version  1
	I1104 10:53:31.166186   37715 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 10:53:31.166555   37715 main.go:141] libmachine: () Calling .GetMachineName
	I1104 10:53:31.166779   37715 main.go:141] libmachine: (ha-931571) Calling .DriverName
	I1104 10:53:31.166958   37715 start.go:317] joinCluster: &{Name:ha-931571 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 Cluster
Name:ha-931571 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.67 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.245 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1104 10:53:31.167051   37715 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1104 10:53:31.167067   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHHostname
	I1104 10:53:31.169771   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:53:31.170152   37715 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 10:53:31.170182   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:53:31.170376   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHPort
	I1104 10:53:31.170562   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 10:53:31.170687   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHUsername
	I1104 10:53:31.170781   37715 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571/id_rsa Username:docker}
	I1104 10:53:31.306325   37715 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.245 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1104 10:53:31.306377   37715 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token kmocbz.ds2v3q10rcir1aso --discovery-token-ca-cert-hash sha256:95833ff41306ac800b975e28f2da3aedc13f83db6865dfb0ce3f10d781d75fd9 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-931571-m02 --control-plane --apiserver-advertise-address=192.168.39.245 --apiserver-bind-port=8443"
	I1104 10:53:52.004440   37715 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token kmocbz.ds2v3q10rcir1aso --discovery-token-ca-cert-hash sha256:95833ff41306ac800b975e28f2da3aedc13f83db6865dfb0ce3f10d781d75fd9 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-931571-m02 --control-plane --apiserver-advertise-address=192.168.39.245 --apiserver-bind-port=8443": (20.698039868s)
	I1104 10:53:52.004481   37715 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1104 10:53:52.565954   37715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-931571-m02 minikube.k8s.io/updated_at=2024_11_04T10_53_52_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=c03dd974d73f8853a1a57928c124797a5ae24dc4 minikube.k8s.io/name=ha-931571 minikube.k8s.io/primary=false
	I1104 10:53:52.722802   37715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-931571-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I1104 10:53:52.847701   37715 start.go:319] duration metric: took 21.680738209s to joinCluster
	I1104 10:53:52.847788   37715 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.245 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1104 10:53:52.848131   37715 config.go:182] Loaded profile config "ha-931571": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1104 10:53:52.849508   37715 out.go:177] * Verifying Kubernetes components...
	I1104 10:53:52.850857   37715 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1104 10:53:53.114403   37715 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1104 10:53:53.138620   37715 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19906-19898/kubeconfig
	I1104 10:53:53.138881   37715 kapi.go:59] client config for ha-931571: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/client.crt", KeyFile:"/home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/client.key", CAFile:"/home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)
}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2437da0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1104 10:53:53.138942   37715 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.67:8443
	I1104 10:53:53.139141   37715 node_ready.go:35] waiting up to 6m0s for node "ha-931571-m02" to be "Ready" ...
	I1104 10:53:53.139247   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:53:53.139257   37715 round_trippers.go:469] Request Headers:
	I1104 10:53:53.139269   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:53:53.139278   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:53:53.152136   37715 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I1104 10:53:53.639369   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:53:53.639392   37715 round_trippers.go:469] Request Headers:
	I1104 10:53:53.639401   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:53:53.639405   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:53:53.643203   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:53:54.140047   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:53:54.140070   37715 round_trippers.go:469] Request Headers:
	I1104 10:53:54.140084   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:53:54.140089   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:53:54.147092   37715 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1104 10:53:54.639335   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:53:54.639355   37715 round_trippers.go:469] Request Headers:
	I1104 10:53:54.639363   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:53:54.639367   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:53:54.642506   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:53:55.140245   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:53:55.140265   37715 round_trippers.go:469] Request Headers:
	I1104 10:53:55.140273   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:53:55.140277   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:53:55.143824   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:53:55.144458   37715 node_ready.go:53] node "ha-931571-m02" has status "Ready":"False"
	I1104 10:53:55.639804   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:53:55.639830   37715 round_trippers.go:469] Request Headers:
	I1104 10:53:55.639841   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:53:55.639846   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:53:55.643096   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:53:56.140054   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:53:56.140078   37715 round_trippers.go:469] Request Headers:
	I1104 10:53:56.140089   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:53:56.140095   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:53:56.142960   37715 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1104 10:53:56.639891   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:53:56.639912   37715 round_trippers.go:469] Request Headers:
	I1104 10:53:56.639923   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:53:56.639928   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:53:56.642755   37715 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1104 10:53:57.139690   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:53:57.139713   37715 round_trippers.go:469] Request Headers:
	I1104 10:53:57.139725   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:53:57.139730   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:53:57.143324   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:53:57.639441   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:53:57.639460   37715 round_trippers.go:469] Request Headers:
	I1104 10:53:57.639469   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:53:57.639473   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:53:57.642433   37715 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1104 10:53:57.642947   37715 node_ready.go:53] node "ha-931571-m02" has status "Ready":"False"
	I1104 10:53:58.140368   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:53:58.140388   37715 round_trippers.go:469] Request Headers:
	I1104 10:53:58.140399   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:53:58.140404   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:53:58.144117   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:53:58.640193   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:53:58.640215   37715 round_trippers.go:469] Request Headers:
	I1104 10:53:58.640223   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:53:58.640227   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:53:58.643667   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:53:59.139304   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:53:59.139323   37715 round_trippers.go:469] Request Headers:
	I1104 10:53:59.139331   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:53:59.139335   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:53:59.142878   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:53:59.639323   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:53:59.639344   37715 round_trippers.go:469] Request Headers:
	I1104 10:53:59.639353   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:53:59.639357   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:53:59.642391   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:54:00.140288   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:54:00.140314   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:00.140323   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:00.140328   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:00.143357   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:54:00.143948   37715 node_ready.go:53] node "ha-931571-m02" has status "Ready":"False"
	I1104 10:54:00.639324   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:54:00.639348   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:00.639358   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:00.639365   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:00.643179   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:54:01.140315   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:54:01.140337   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:01.140345   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:01.140349   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:01.143491   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:54:01.639485   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:54:01.639510   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:01.639517   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:01.639522   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:01.642450   37715 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1104 10:54:02.140259   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:54:02.140291   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:02.140299   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:02.140304   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:02.143695   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:54:02.144128   37715 node_ready.go:53] node "ha-931571-m02" has status "Ready":"False"
	I1104 10:54:02.639414   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:54:02.639433   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:02.639442   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:02.639447   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:02.642409   37715 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1104 10:54:03.140294   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:54:03.140314   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:03.140327   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:03.140333   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:03.143301   37715 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1104 10:54:03.639404   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:54:03.639426   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:03.639437   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:03.639445   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:03.642367   37715 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1104 10:54:04.139716   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:54:04.139740   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:04.139750   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:04.139754   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:04.143000   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:54:04.640219   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:54:04.640245   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:04.640256   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:04.640262   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:04.643232   37715 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1104 10:54:04.643667   37715 node_ready.go:53] node "ha-931571-m02" has status "Ready":"False"
	I1104 10:54:05.140138   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:54:05.140162   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:05.140173   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:05.140178   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:05.142993   37715 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1104 10:54:05.639755   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:54:05.639775   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:05.639783   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:05.639802   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:05.643475   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:54:06.139372   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:54:06.139394   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:06.139402   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:06.139405   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:06.142509   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:54:06.639413   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:54:06.639442   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:06.639451   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:06.639456   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:06.642592   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:54:07.139655   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:54:07.139684   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:07.139694   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:07.139699   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:07.143170   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:54:07.143728   37715 node_ready.go:53] node "ha-931571-m02" has status "Ready":"False"
	I1104 10:54:07.640208   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:54:07.640228   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:07.640235   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:07.640240   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:07.643154   37715 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1104 10:54:08.140228   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:54:08.140261   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:08.140273   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:08.140278   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:08.142997   37715 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1104 10:54:08.639828   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:54:08.639854   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:08.639862   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:08.639866   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:08.643244   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:54:09.140126   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:54:09.140153   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:09.140166   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:09.140172   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:09.143278   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:54:09.143950   37715 node_ready.go:53] node "ha-931571-m02" has status "Ready":"False"
	I1104 10:54:09.639588   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:54:09.639610   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:09.639618   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:09.639623   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:09.642343   37715 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1104 10:54:10.139875   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:54:10.139898   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:10.139905   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:10.139909   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:10.143037   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:54:10.640013   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:54:10.640033   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:10.640042   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:10.640045   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:10.643833   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:54:10.644423   37715 node_ready.go:49] node "ha-931571-m02" has status "Ready":"True"
	I1104 10:54:10.644446   37715 node_ready.go:38] duration metric: took 17.505281339s for node "ha-931571-m02" to be "Ready" ...
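The loop above is the node_ready check: the client re-GETs /api/v1/nodes/ha-931571-m02 roughly every 500ms until the node's Ready condition reports True. A minimal client-go sketch of the same polling pattern follows; it is a hypothetical illustration rather than minikube's actual implementation, and the kubeconfig path, timeout, and node name are assumptions.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the node object until its Ready condition is True,
// mirroring the GET loop in the log above.
func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) error {
	for {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
				return nil
			}
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(500 * time.Millisecond): // same ~500ms cadence as the log
		}
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile) // assumed kubeconfig location
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute) // assumed bound
	defer cancel()
	if err := waitNodeReady(ctx, cs, "ha-931571-m02"); err != nil {
		panic(err)
	}
	fmt.Println("node is Ready")
}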
	I1104 10:54:10.644459   37715 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1104 10:54:10.644564   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods
	I1104 10:54:10.644577   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:10.644587   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:10.644591   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:10.649476   37715 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1104 10:54:10.656031   37715 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-5ss4v" in "kube-system" namespace to be "Ready" ...
	I1104 10:54:10.656110   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ss4v
	I1104 10:54:10.656129   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:10.656138   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:10.656144   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:10.659282   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:54:10.659928   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571
	I1104 10:54:10.659944   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:10.659953   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:10.659958   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:10.662844   37715 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1104 10:54:10.663378   37715 pod_ready.go:93] pod "coredns-7c65d6cfc9-5ss4v" in "kube-system" namespace has status "Ready":"True"
	I1104 10:54:10.663402   37715 pod_ready.go:82] duration metric: took 7.344091ms for pod "coredns-7c65d6cfc9-5ss4v" in "kube-system" namespace to be "Ready" ...
	I1104 10:54:10.663423   37715 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-s9wb4" in "kube-system" namespace to be "Ready" ...
	I1104 10:54:10.663492   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s9wb4
	I1104 10:54:10.663502   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:10.663512   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:10.663521   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:10.666287   37715 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1104 10:54:10.666934   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571
	I1104 10:54:10.666950   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:10.666957   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:10.666960   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:10.669169   37715 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1104 10:54:10.669739   37715 pod_ready.go:93] pod "coredns-7c65d6cfc9-s9wb4" in "kube-system" namespace has status "Ready":"True"
	I1104 10:54:10.669760   37715 pod_ready.go:82] duration metric: took 6.3295ms for pod "coredns-7c65d6cfc9-s9wb4" in "kube-system" namespace to be "Ready" ...
	I1104 10:54:10.669770   37715 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-931571" in "kube-system" namespace to be "Ready" ...
	I1104 10:54:10.669830   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/etcd-ha-931571
	I1104 10:54:10.669842   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:10.669852   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:10.669859   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:10.672042   37715 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1104 10:54:10.672626   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571
	I1104 10:54:10.672642   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:10.672650   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:10.672653   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:10.674766   37715 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1104 10:54:10.675295   37715 pod_ready.go:93] pod "etcd-ha-931571" in "kube-system" namespace has status "Ready":"True"
	I1104 10:54:10.675317   37715 pod_ready.go:82] duration metric: took 5.539368ms for pod "etcd-ha-931571" in "kube-system" namespace to be "Ready" ...
	I1104 10:54:10.675329   37715 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-931571-m02" in "kube-system" namespace to be "Ready" ...
	I1104 10:54:10.675390   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/etcd-ha-931571-m02
	I1104 10:54:10.675398   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:10.675405   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:10.675410   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:10.677591   37715 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1104 10:54:10.678184   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:54:10.678197   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:10.678204   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:10.678208   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:10.680155   37715 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1104 10:54:10.680700   37715 pod_ready.go:93] pod "etcd-ha-931571-m02" in "kube-system" namespace has status "Ready":"True"
	I1104 10:54:10.680721   37715 pod_ready.go:82] duration metric: took 5.381074ms for pod "etcd-ha-931571-m02" in "kube-system" namespace to be "Ready" ...
	I1104 10:54:10.680737   37715 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-931571" in "kube-system" namespace to be "Ready" ...
	I1104 10:54:10.840055   37715 request.go:632] Waited for 159.25235ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-931571
	I1104 10:54:10.840140   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-931571
	I1104 10:54:10.840150   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:10.840160   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:10.840171   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:10.843356   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:54:11.040534   37715 request.go:632] Waited for 196.430173ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/nodes/ha-931571
	I1104 10:54:11.040604   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571
	I1104 10:54:11.040615   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:11.040623   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:11.040630   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:11.043768   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:54:11.044382   37715 pod_ready.go:93] pod "kube-apiserver-ha-931571" in "kube-system" namespace has status "Ready":"True"
	I1104 10:54:11.044403   37715 pod_ready.go:82] duration metric: took 363.65714ms for pod "kube-apiserver-ha-931571" in "kube-system" namespace to be "Ready" ...
	I1104 10:54:11.044412   37715 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-931571-m02" in "kube-system" namespace to be "Ready" ...
	I1104 10:54:11.240746   37715 request.go:632] Waited for 196.265081ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-931571-m02
	I1104 10:54:11.240800   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-931571-m02
	I1104 10:54:11.240805   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:11.240812   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:11.240823   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:11.244055   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:54:11.441020   37715 request.go:632] Waited for 196.31895ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:54:11.441076   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:54:11.441082   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:11.441089   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:11.441092   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:11.443940   37715 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1104 10:54:11.444396   37715 pod_ready.go:93] pod "kube-apiserver-ha-931571-m02" in "kube-system" namespace has status "Ready":"True"
	I1104 10:54:11.444417   37715 pod_ready.go:82] duration metric: took 399.997294ms for pod "kube-apiserver-ha-931571-m02" in "kube-system" namespace to be "Ready" ...
	I1104 10:54:11.444431   37715 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-931571" in "kube-system" namespace to be "Ready" ...
	I1104 10:54:11.640978   37715 request.go:632] Waited for 196.455451ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-931571
	I1104 10:54:11.641045   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-931571
	I1104 10:54:11.641052   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:11.641063   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:11.641068   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:11.644104   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:54:11.840124   37715 request.go:632] Waited for 195.279381ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/nodes/ha-931571
	I1104 10:54:11.840175   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571
	I1104 10:54:11.840180   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:11.840189   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:11.840204   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:11.843139   37715 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1104 10:54:11.843784   37715 pod_ready.go:93] pod "kube-controller-manager-ha-931571" in "kube-system" namespace has status "Ready":"True"
	I1104 10:54:11.843806   37715 pod_ready.go:82] duration metric: took 399.367004ms for pod "kube-controller-manager-ha-931571" in "kube-system" namespace to be "Ready" ...
	I1104 10:54:11.843816   37715 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-931571-m02" in "kube-system" namespace to be "Ready" ...
	I1104 10:54:12.040826   37715 request.go:632] Waited for 196.934959ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-931571-m02
	I1104 10:54:12.040888   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-931571-m02
	I1104 10:54:12.040896   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:12.040905   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:12.040912   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:12.044321   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:54:12.240220   37715 request.go:632] Waited for 195.323321ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:54:12.240295   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:54:12.240302   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:12.240311   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:12.240340   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:12.243972   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:54:12.244423   37715 pod_ready.go:93] pod "kube-controller-manager-ha-931571-m02" in "kube-system" namespace has status "Ready":"True"
	I1104 10:54:12.244441   37715 pod_ready.go:82] duration metric: took 400.61624ms for pod "kube-controller-manager-ha-931571-m02" in "kube-system" namespace to be "Ready" ...
	I1104 10:54:12.244452   37715 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-bvk6r" in "kube-system" namespace to be "Ready" ...
	I1104 10:54:12.440627   37715 request.go:632] Waited for 196.096769ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bvk6r
	I1104 10:54:12.440687   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bvk6r
	I1104 10:54:12.440692   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:12.440700   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:12.440704   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:12.443759   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:54:12.640675   37715 request.go:632] Waited for 196.368451ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/nodes/ha-931571
	I1104 10:54:12.640746   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571
	I1104 10:54:12.640753   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:12.640764   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:12.640771   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:12.645533   37715 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1104 10:54:12.646078   37715 pod_ready.go:93] pod "kube-proxy-bvk6r" in "kube-system" namespace has status "Ready":"True"
	I1104 10:54:12.646098   37715 pod_ready.go:82] duration metric: took 401.639494ms for pod "kube-proxy-bvk6r" in "kube-system" namespace to be "Ready" ...
	I1104 10:54:12.646111   37715 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-wz92s" in "kube-system" namespace to be "Ready" ...
	I1104 10:54:12.840342   37715 request.go:632] Waited for 194.16235ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wz92s
	I1104 10:54:12.840395   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wz92s
	I1104 10:54:12.840400   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:12.840407   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:12.840413   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:12.844505   37715 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1104 10:54:13.040627   37715 request.go:632] Waited for 195.405277ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:54:13.040697   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:54:13.040706   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:13.040713   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:13.040717   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:13.043654   37715 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1104 10:54:13.044440   37715 pod_ready.go:93] pod "kube-proxy-wz92s" in "kube-system" namespace has status "Ready":"True"
	I1104 10:54:13.044461   37715 pod_ready.go:82] duration metric: took 398.343689ms for pod "kube-proxy-wz92s" in "kube-system" namespace to be "Ready" ...
	I1104 10:54:13.044472   37715 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-931571" in "kube-system" namespace to be "Ready" ...
	I1104 10:54:13.240500   37715 request.go:632] Waited for 195.966375ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-931571
	I1104 10:54:13.240580   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-931571
	I1104 10:54:13.240589   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:13.240599   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:13.240606   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:13.243607   37715 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1104 10:54:13.440419   37715 request.go:632] Waited for 196.059783ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/nodes/ha-931571
	I1104 10:54:13.440489   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571
	I1104 10:54:13.440495   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:13.440502   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:13.440507   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:13.443953   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:54:13.444535   37715 pod_ready.go:93] pod "kube-scheduler-ha-931571" in "kube-system" namespace has status "Ready":"True"
	I1104 10:54:13.444560   37715 pod_ready.go:82] duration metric: took 400.080635ms for pod "kube-scheduler-ha-931571" in "kube-system" namespace to be "Ready" ...
	I1104 10:54:13.444575   37715 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-931571-m02" in "kube-system" namespace to be "Ready" ...
	I1104 10:54:13.640646   37715 request.go:632] Waited for 195.95641ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-931571-m02
	I1104 10:54:13.640702   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-931571-m02
	I1104 10:54:13.640707   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:13.640716   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:13.640720   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:13.644170   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:54:13.840111   37715 request.go:632] Waited for 195.309512ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:54:13.840184   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:54:13.840189   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:13.840197   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:13.840205   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:13.843622   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:54:13.844295   37715 pod_ready.go:93] pod "kube-scheduler-ha-931571-m02" in "kube-system" namespace has status "Ready":"True"
	I1104 10:54:13.844319   37715 pod_ready.go:82] duration metric: took 399.734957ms for pod "kube-scheduler-ha-931571-m02" in "kube-system" namespace to be "Ready" ...
	I1104 10:54:13.844333   37715 pod_ready.go:39] duration metric: took 3.199846594s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
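The repeated request.go:632 "Waited ... due to client-side throttling, not priority and fairness" lines come from client-go's client-side rate limiter: its defaults (roughly 5 QPS with a burst of 10) are lower than the bursts of pod and node GETs issued during this phase, so requests queue briefly. A minimal sketch of where those limits live is shown below; the helper name and the QPS/Burst values are illustrative assumptions, not what minikube configures.

package clusterclient

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// NewFastClient builds a clientset with a higher client-side rate limit.
// Raising QPS/Burst on rest.Config removes most of the "client-side
// throttling" waits seen in the log above.
func NewFastClient(kubeconfig string) (*kubernetes.Clientset, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return nil, err
	}
	cfg.QPS = 50    // client-go default is roughly 5 requests/second
	cfg.Burst = 100 // client-go default burst is roughly 10
	return kubernetes.NewForConfig(cfg)
}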
	I1104 10:54:13.844350   37715 api_server.go:52] waiting for apiserver process to appear ...
	I1104 10:54:13.844417   37715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 10:54:13.858847   37715 api_server.go:72] duration metric: took 21.011018077s to wait for apiserver process to appear ...
	I1104 10:54:13.858869   37715 api_server.go:88] waiting for apiserver healthz status ...
	I1104 10:54:13.858890   37715 api_server.go:253] Checking apiserver healthz at https://192.168.39.67:8443/healthz ...
	I1104 10:54:13.863051   37715 api_server.go:279] https://192.168.39.67:8443/healthz returned 200:
	ok
	I1104 10:54:13.863110   37715 round_trippers.go:463] GET https://192.168.39.67:8443/version
	I1104 10:54:13.863115   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:13.863122   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:13.863126   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:13.864098   37715 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1104 10:54:13.864181   37715 api_server.go:141] control plane version: v1.31.2
	I1104 10:54:13.864195   37715 api_server.go:131] duration metric: took 5.319439ms to wait for apiserver health ...
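The health check above is a plain authenticated GET against the apiserver's /healthz endpoint, and the wait finishes once it returns 200 with the body "ok". A short, hypothetical sketch of the same probe through client-go's REST client (kubeconfig discovery and error handling simplified, not minikube's code):

package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile) // assumed kubeconfig location
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Issue GET /healthz through the discovery REST client; the log above
	// shows the expected body, "ok".
	body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.Background())
	if err != nil {
		panic(err)
	}
	fmt.Printf("/healthz returned: %s\n", body)
}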
	I1104 10:54:13.864202   37715 system_pods.go:43] waiting for kube-system pods to appear ...
	I1104 10:54:14.040623   37715 request.go:632] Waited for 176.353381ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods
	I1104 10:54:14.040696   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods
	I1104 10:54:14.040702   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:14.040709   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:14.040714   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:14.045262   37715 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1104 10:54:14.050254   37715 system_pods.go:59] 17 kube-system pods found
	I1104 10:54:14.050280   37715 system_pods.go:61] "coredns-7c65d6cfc9-5ss4v" [b1994bcf-ce9e-4a5e-90e0-5f3e284218f4] Running
	I1104 10:54:14.050285   37715 system_pods.go:61] "coredns-7c65d6cfc9-s9wb4" [fd497087-82a1-4173-a1ca-87f47225cd80] Running
	I1104 10:54:14.050289   37715 system_pods.go:61] "etcd-ha-931571" [fdadf64d-457c-4f54-8824-770c47938a4d] Running
	I1104 10:54:14.050292   37715 system_pods.go:61] "etcd-ha-931571-m02" [b40b2a26-19b6-47f9-af25-dcbffbe55156] Running
	I1104 10:54:14.050296   37715 system_pods.go:61] "kindnet-2n2ws" [f43095ed-404a-4c99-a271-a8c7fb6a3559] Running
	I1104 10:54:14.050301   37715 system_pods.go:61] "kindnet-bg4z6" [43eed78a-1357-4607-bff5-a1c896da4af2] Running
	I1104 10:54:14.050305   37715 system_pods.go:61] "kube-apiserver-ha-931571" [2ba59318-d54d-4948-8133-2ff2afa001e5] Running
	I1104 10:54:14.050310   37715 system_pods.go:61] "kube-apiserver-ha-931571-m02" [6a6bfd7d-cec1-4e07-90bf-c933f871eef1] Running
	I1104 10:54:14.050315   37715 system_pods.go:61] "kube-controller-manager-ha-931571" [62d03af1-aa91-4ebf-af21-19f760956cf5] Running
	I1104 10:54:14.050320   37715 system_pods.go:61] "kube-controller-manager-ha-931571-m02" [96d65b2a-66c8-411a-bb4b-5ff222b7832d] Running
	I1104 10:54:14.050327   37715 system_pods.go:61] "kube-proxy-bvk6r" [5f293726-a3a3-4398-9b70-ca8f83c66d7c] Running
	I1104 10:54:14.050332   37715 system_pods.go:61] "kube-proxy-wz92s" [a2e065c2-9645-44e4-b4e8-dc787b0c6662] Running
	I1104 10:54:14.050340   37715 system_pods.go:61] "kube-scheduler-ha-931571" [8bc3d9c3-2b41-4f54-a511-34939218fa5b] Running
	I1104 10:54:14.050345   37715 system_pods.go:61] "kube-scheduler-ha-931571-m02" [4329adba-71fa-425a-b379-6e52af90b458] Running
	I1104 10:54:14.050354   37715 system_pods.go:61] "kube-vip-ha-931571" [f9948426-2770-47cf-b610-ecfea5b17be9] Running / Ready:ContainersNotReady (containers with unready status: [kube-vip]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-vip])
	I1104 10:54:14.050364   37715 system_pods.go:61] "kube-vip-ha-931571-m02" [860a8a9e-b839-4c23-80b5-415a62fca083] Running / Ready:ContainersNotReady (containers with unready status: [kube-vip]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-vip])
	I1104 10:54:14.050370   37715 system_pods.go:61] "storage-provisioner" [3eb09a1d-0033-428a-a305-aa2901b20566] Running
	I1104 10:54:14.050377   37715 system_pods.go:74] duration metric: took 186.169669ms to wait for pod list to return data ...
	I1104 10:54:14.050387   37715 default_sa.go:34] waiting for default service account to be created ...
	I1104 10:54:14.240854   37715 request.go:632] Waited for 190.370277ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/default/serviceaccounts
	I1104 10:54:14.240922   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/default/serviceaccounts
	I1104 10:54:14.240929   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:14.240940   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:14.240963   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:14.244687   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:54:14.244932   37715 default_sa.go:45] found service account: "default"
	I1104 10:54:14.244952   37715 default_sa.go:55] duration metric: took 194.560071ms for default service account to be created ...
	I1104 10:54:14.244961   37715 system_pods.go:116] waiting for k8s-apps to be running ...
	I1104 10:54:14.440692   37715 request.go:632] Waited for 195.67345ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods
	I1104 10:54:14.440751   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods
	I1104 10:54:14.440757   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:14.440772   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:14.440780   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:14.444830   37715 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1104 10:54:14.449745   37715 system_pods.go:86] 17 kube-system pods found
	I1104 10:54:14.449772   37715 system_pods.go:89] "coredns-7c65d6cfc9-5ss4v" [b1994bcf-ce9e-4a5e-90e0-5f3e284218f4] Running
	I1104 10:54:14.449778   37715 system_pods.go:89] "coredns-7c65d6cfc9-s9wb4" [fd497087-82a1-4173-a1ca-87f47225cd80] Running
	I1104 10:54:14.449783   37715 system_pods.go:89] "etcd-ha-931571" [fdadf64d-457c-4f54-8824-770c47938a4d] Running
	I1104 10:54:14.449789   37715 system_pods.go:89] "etcd-ha-931571-m02" [b40b2a26-19b6-47f9-af25-dcbffbe55156] Running
	I1104 10:54:14.449795   37715 system_pods.go:89] "kindnet-2n2ws" [f43095ed-404a-4c99-a271-a8c7fb6a3559] Running
	I1104 10:54:14.449800   37715 system_pods.go:89] "kindnet-bg4z6" [43eed78a-1357-4607-bff5-a1c896da4af2] Running
	I1104 10:54:14.449807   37715 system_pods.go:89] "kube-apiserver-ha-931571" [2ba59318-d54d-4948-8133-2ff2afa001e5] Running
	I1104 10:54:14.449812   37715 system_pods.go:89] "kube-apiserver-ha-931571-m02" [6a6bfd7d-cec1-4e07-90bf-c933f871eef1] Running
	I1104 10:54:14.449816   37715 system_pods.go:89] "kube-controller-manager-ha-931571" [62d03af1-aa91-4ebf-af21-19f760956cf5] Running
	I1104 10:54:14.449821   37715 system_pods.go:89] "kube-controller-manager-ha-931571-m02" [96d65b2a-66c8-411a-bb4b-5ff222b7832d] Running
	I1104 10:54:14.449826   37715 system_pods.go:89] "kube-proxy-bvk6r" [5f293726-a3a3-4398-9b70-ca8f83c66d7c] Running
	I1104 10:54:14.449834   37715 system_pods.go:89] "kube-proxy-wz92s" [a2e065c2-9645-44e4-b4e8-dc787b0c6662] Running
	I1104 10:54:14.449839   37715 system_pods.go:89] "kube-scheduler-ha-931571" [8bc3d9c3-2b41-4f54-a511-34939218fa5b] Running
	I1104 10:54:14.449848   37715 system_pods.go:89] "kube-scheduler-ha-931571-m02" [4329adba-71fa-425a-b379-6e52af90b458] Running
	I1104 10:54:14.449857   37715 system_pods.go:89] "kube-vip-ha-931571" [f9948426-2770-47cf-b610-ecfea5b17be9] Running / Ready:ContainersNotReady (containers with unready status: [kube-vip]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-vip])
	I1104 10:54:14.449870   37715 system_pods.go:89] "kube-vip-ha-931571-m02" [860a8a9e-b839-4c23-80b5-415a62fca083] Running / Ready:ContainersNotReady (containers with unready status: [kube-vip]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-vip])
	I1104 10:54:14.449878   37715 system_pods.go:89] "storage-provisioner" [3eb09a1d-0033-428a-a305-aa2901b20566] Running
	I1104 10:54:14.449891   37715 system_pods.go:126] duration metric: took 204.923702ms to wait for k8s-apps to be running ...
	I1104 10:54:14.449903   37715 system_svc.go:44] waiting for kubelet service to be running ....
	I1104 10:54:14.449956   37715 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1104 10:54:14.464950   37715 system_svc.go:56] duration metric: took 15.038755ms WaitForService to wait for kubelet
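The kubelet check above simply runs systemctl is-active --quiet service kubelet over SSH and treats a zero exit status as "running". A hypothetical local equivalent with os/exec (the unit name is given in its plain form here):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// systemctl exits 0 when the unit is active; --quiet suppresses output.
	if err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run(); err != nil {
		fmt.Println("kubelet is not active:", err)
		return
	}
	fmt.Println("kubelet is active")
}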
	I1104 10:54:14.464983   37715 kubeadm.go:582] duration metric: took 21.617159665s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1104 10:54:14.465005   37715 node_conditions.go:102] verifying NodePressure condition ...
	I1104 10:54:14.640444   37715 request.go:632] Waited for 175.359531ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/nodes
	I1104 10:54:14.640495   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes
	I1104 10:54:14.640507   37715 round_trippers.go:469] Request Headers:
	I1104 10:54:14.640514   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:54:14.640531   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:54:14.644308   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:54:14.645138   37715 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1104 10:54:14.645162   37715 node_conditions.go:123] node cpu capacity is 2
	I1104 10:54:14.645172   37715 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1104 10:54:14.645175   37715 node_conditions.go:123] node cpu capacity is 2
	I1104 10:54:14.645180   37715 node_conditions.go:105] duration metric: took 180.169842ms to run NodePressure ...
	I1104 10:54:14.645191   37715 start.go:241] waiting for startup goroutines ...
	I1104 10:54:14.645220   37715 start.go:255] writing updated cluster config ...
	I1104 10:54:14.647434   37715 out.go:201] 
	I1104 10:54:14.649030   37715 config.go:182] Loaded profile config "ha-931571": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1104 10:54:14.649124   37715 profile.go:143] Saving config to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/config.json ...
	I1104 10:54:14.650881   37715 out.go:177] * Starting "ha-931571-m03" control-plane node in "ha-931571" cluster
	I1104 10:54:14.652021   37715 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1104 10:54:14.652041   37715 cache.go:56] Caching tarball of preloaded images
	I1104 10:54:14.652128   37715 preload.go:172] Found /home/jenkins/minikube-integration/19906-19898/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1104 10:54:14.652138   37715 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1104 10:54:14.652229   37715 profile.go:143] Saving config to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/config.json ...
	I1104 10:54:14.652384   37715 start.go:360] acquireMachinesLock for ha-931571-m03: {Name:mka4ed965095cfb58c9b0f5916edff39b4236a2f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1104 10:54:14.652421   37715 start.go:364] duration metric: took 20.345µs to acquireMachinesLock for "ha-931571-m03"
	I1104 10:54:14.652439   37715 start.go:93] Provisioning new machine with config: &{Name:ha-931571 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-931571 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.67 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.245 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1104 10:54:14.652552   37715 start.go:125] createHost starting for "m03" (driver="kvm2")
	I1104 10:54:14.653932   37715 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1104 10:54:14.654009   37715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 10:54:14.654042   37715 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 10:54:14.669012   37715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35959
	I1104 10:54:14.669516   37715 main.go:141] libmachine: () Calling .GetVersion
	I1104 10:54:14.669968   37715 main.go:141] libmachine: Using API Version  1
	I1104 10:54:14.669986   37715 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 10:54:14.670370   37715 main.go:141] libmachine: () Calling .GetMachineName
	I1104 10:54:14.670550   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetMachineName
	I1104 10:54:14.670697   37715 main.go:141] libmachine: (ha-931571-m03) Calling .DriverName
	I1104 10:54:14.670887   37715 start.go:159] libmachine.API.Create for "ha-931571" (driver="kvm2")
	I1104 10:54:14.670919   37715 client.go:168] LocalClient.Create starting
	I1104 10:54:14.670952   37715 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem
	I1104 10:54:14.670990   37715 main.go:141] libmachine: Decoding PEM data...
	I1104 10:54:14.671004   37715 main.go:141] libmachine: Parsing certificate...
	I1104 10:54:14.671047   37715 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem
	I1104 10:54:14.671066   37715 main.go:141] libmachine: Decoding PEM data...
	I1104 10:54:14.671074   37715 main.go:141] libmachine: Parsing certificate...
	I1104 10:54:14.671092   37715 main.go:141] libmachine: Running pre-create checks...
	I1104 10:54:14.671100   37715 main.go:141] libmachine: (ha-931571-m03) Calling .PreCreateCheck
	I1104 10:54:14.671295   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetConfigRaw
	I1104 10:54:14.671735   37715 main.go:141] libmachine: Creating machine...
	I1104 10:54:14.671748   37715 main.go:141] libmachine: (ha-931571-m03) Calling .Create
	I1104 10:54:14.671896   37715 main.go:141] libmachine: (ha-931571-m03) Creating KVM machine...
	I1104 10:54:14.673127   37715 main.go:141] libmachine: (ha-931571-m03) DBG | found existing default KVM network
	I1104 10:54:14.673275   37715 main.go:141] libmachine: (ha-931571-m03) DBG | found existing private KVM network mk-ha-931571
	I1104 10:54:14.673433   37715 main.go:141] libmachine: (ha-931571-m03) Setting up store path in /home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571-m03 ...
	I1104 10:54:14.673458   37715 main.go:141] libmachine: (ha-931571-m03) Building disk image from file:///home/jenkins/minikube-integration/19906-19898/.minikube/cache/iso/amd64/minikube-v1.34.0-1730282777-19883-amd64.iso
	I1104 10:54:14.673532   37715 main.go:141] libmachine: (ha-931571-m03) DBG | I1104 10:54:14.673413   38465 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19906-19898/.minikube
	I1104 10:54:14.673618   37715 main.go:141] libmachine: (ha-931571-m03) Downloading /home/jenkins/minikube-integration/19906-19898/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19906-19898/.minikube/cache/iso/amd64/minikube-v1.34.0-1730282777-19883-amd64.iso...
	I1104 10:54:14.913416   37715 main.go:141] libmachine: (ha-931571-m03) DBG | I1104 10:54:14.913288   38465 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571-m03/id_rsa...
	I1104 10:54:15.078787   37715 main.go:141] libmachine: (ha-931571-m03) DBG | I1104 10:54:15.078642   38465 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571-m03/ha-931571-m03.rawdisk...
	I1104 10:54:15.078832   37715 main.go:141] libmachine: (ha-931571-m03) DBG | Writing magic tar header
	I1104 10:54:15.078845   37715 main.go:141] libmachine: (ha-931571-m03) DBG | Writing SSH key tar header
	I1104 10:54:15.078858   37715 main.go:141] libmachine: (ha-931571-m03) DBG | I1104 10:54:15.078756   38465 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571-m03 ...
	I1104 10:54:15.078874   37715 main.go:141] libmachine: (ha-931571-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571-m03
	I1104 10:54:15.078881   37715 main.go:141] libmachine: (ha-931571-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19906-19898/.minikube/machines
	I1104 10:54:15.078888   37715 main.go:141] libmachine: (ha-931571-m03) Setting executable bit set on /home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571-m03 (perms=drwx------)
	I1104 10:54:15.078896   37715 main.go:141] libmachine: (ha-931571-m03) Setting executable bit set on /home/jenkins/minikube-integration/19906-19898/.minikube/machines (perms=drwxr-xr-x)
	I1104 10:54:15.078902   37715 main.go:141] libmachine: (ha-931571-m03) Setting executable bit set on /home/jenkins/minikube-integration/19906-19898/.minikube (perms=drwxr-xr-x)
	I1104 10:54:15.078911   37715 main.go:141] libmachine: (ha-931571-m03) Setting executable bit set on /home/jenkins/minikube-integration/19906-19898 (perms=drwxrwxr-x)
	I1104 10:54:15.078919   37715 main.go:141] libmachine: (ha-931571-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1104 10:54:15.078931   37715 main.go:141] libmachine: (ha-931571-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1104 10:54:15.078951   37715 main.go:141] libmachine: (ha-931571-m03) Creating domain...
	I1104 10:54:15.078968   37715 main.go:141] libmachine: (ha-931571-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19906-19898/.minikube
	I1104 10:54:15.078978   37715 main.go:141] libmachine: (ha-931571-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19906-19898
	I1104 10:54:15.078985   37715 main.go:141] libmachine: (ha-931571-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1104 10:54:15.078991   37715 main.go:141] libmachine: (ha-931571-m03) DBG | Checking permissions on dir: /home/jenkins
	I1104 10:54:15.078997   37715 main.go:141] libmachine: (ha-931571-m03) DBG | Checking permissions on dir: /home
	I1104 10:54:15.079003   37715 main.go:141] libmachine: (ha-931571-m03) DBG | Skipping /home - not owner
	I1104 10:54:15.079942   37715 main.go:141] libmachine: (ha-931571-m03) define libvirt domain using xml: 
	I1104 10:54:15.079975   37715 main.go:141] libmachine: (ha-931571-m03) <domain type='kvm'>
	I1104 10:54:15.079986   37715 main.go:141] libmachine: (ha-931571-m03)   <name>ha-931571-m03</name>
	I1104 10:54:15.079997   37715 main.go:141] libmachine: (ha-931571-m03)   <memory unit='MiB'>2200</memory>
	I1104 10:54:15.080003   37715 main.go:141] libmachine: (ha-931571-m03)   <vcpu>2</vcpu>
	I1104 10:54:15.080007   37715 main.go:141] libmachine: (ha-931571-m03)   <features>
	I1104 10:54:15.080011   37715 main.go:141] libmachine: (ha-931571-m03)     <acpi/>
	I1104 10:54:15.080015   37715 main.go:141] libmachine: (ha-931571-m03)     <apic/>
	I1104 10:54:15.080020   37715 main.go:141] libmachine: (ha-931571-m03)     <pae/>
	I1104 10:54:15.080024   37715 main.go:141] libmachine: (ha-931571-m03)     
	I1104 10:54:15.080028   37715 main.go:141] libmachine: (ha-931571-m03)   </features>
	I1104 10:54:15.080032   37715 main.go:141] libmachine: (ha-931571-m03)   <cpu mode='host-passthrough'>
	I1104 10:54:15.080037   37715 main.go:141] libmachine: (ha-931571-m03)   
	I1104 10:54:15.080040   37715 main.go:141] libmachine: (ha-931571-m03)   </cpu>
	I1104 10:54:15.080045   37715 main.go:141] libmachine: (ha-931571-m03)   <os>
	I1104 10:54:15.080049   37715 main.go:141] libmachine: (ha-931571-m03)     <type>hvm</type>
	I1104 10:54:15.080054   37715 main.go:141] libmachine: (ha-931571-m03)     <boot dev='cdrom'/>
	I1104 10:54:15.080061   37715 main.go:141] libmachine: (ha-931571-m03)     <boot dev='hd'/>
	I1104 10:54:15.080066   37715 main.go:141] libmachine: (ha-931571-m03)     <bootmenu enable='no'/>
	I1104 10:54:15.080070   37715 main.go:141] libmachine: (ha-931571-m03)   </os>
	I1104 10:54:15.080075   37715 main.go:141] libmachine: (ha-931571-m03)   <devices>
	I1104 10:54:15.080079   37715 main.go:141] libmachine: (ha-931571-m03)     <disk type='file' device='cdrom'>
	I1104 10:54:15.080088   37715 main.go:141] libmachine: (ha-931571-m03)       <source file='/home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571-m03/boot2docker.iso'/>
	I1104 10:54:15.080096   37715 main.go:141] libmachine: (ha-931571-m03)       <target dev='hdc' bus='scsi'/>
	I1104 10:54:15.080101   37715 main.go:141] libmachine: (ha-931571-m03)       <readonly/>
	I1104 10:54:15.080106   37715 main.go:141] libmachine: (ha-931571-m03)     </disk>
	I1104 10:54:15.080111   37715 main.go:141] libmachine: (ha-931571-m03)     <disk type='file' device='disk'>
	I1104 10:54:15.080119   37715 main.go:141] libmachine: (ha-931571-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1104 10:54:15.080127   37715 main.go:141] libmachine: (ha-931571-m03)       <source file='/home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571-m03/ha-931571-m03.rawdisk'/>
	I1104 10:54:15.080134   37715 main.go:141] libmachine: (ha-931571-m03)       <target dev='hda' bus='virtio'/>
	I1104 10:54:15.080145   37715 main.go:141] libmachine: (ha-931571-m03)     </disk>
	I1104 10:54:15.080149   37715 main.go:141] libmachine: (ha-931571-m03)     <interface type='network'>
	I1104 10:54:15.080154   37715 main.go:141] libmachine: (ha-931571-m03)       <source network='mk-ha-931571'/>
	I1104 10:54:15.080163   37715 main.go:141] libmachine: (ha-931571-m03)       <model type='virtio'/>
	I1104 10:54:15.080168   37715 main.go:141] libmachine: (ha-931571-m03)     </interface>
	I1104 10:54:15.080172   37715 main.go:141] libmachine: (ha-931571-m03)     <interface type='network'>
	I1104 10:54:15.080177   37715 main.go:141] libmachine: (ha-931571-m03)       <source network='default'/>
	I1104 10:54:15.080181   37715 main.go:141] libmachine: (ha-931571-m03)       <model type='virtio'/>
	I1104 10:54:15.080186   37715 main.go:141] libmachine: (ha-931571-m03)     </interface>
	I1104 10:54:15.080191   37715 main.go:141] libmachine: (ha-931571-m03)     <serial type='pty'>
	I1104 10:54:15.080196   37715 main.go:141] libmachine: (ha-931571-m03)       <target port='0'/>
	I1104 10:54:15.080200   37715 main.go:141] libmachine: (ha-931571-m03)     </serial>
	I1104 10:54:15.080205   37715 main.go:141] libmachine: (ha-931571-m03)     <console type='pty'>
	I1104 10:54:15.080209   37715 main.go:141] libmachine: (ha-931571-m03)       <target type='serial' port='0'/>
	I1104 10:54:15.080214   37715 main.go:141] libmachine: (ha-931571-m03)     </console>
	I1104 10:54:15.080218   37715 main.go:141] libmachine: (ha-931571-m03)     <rng model='virtio'>
	I1104 10:54:15.080224   37715 main.go:141] libmachine: (ha-931571-m03)       <backend model='random'>/dev/random</backend>
	I1104 10:54:15.080230   37715 main.go:141] libmachine: (ha-931571-m03)     </rng>
	I1104 10:54:15.080236   37715 main.go:141] libmachine: (ha-931571-m03)     
	I1104 10:54:15.080243   37715 main.go:141] libmachine: (ha-931571-m03)     
	I1104 10:54:15.080248   37715 main.go:141] libmachine: (ha-931571-m03)   </devices>
	I1104 10:54:15.080254   37715 main.go:141] libmachine: (ha-931571-m03) </domain>
	I1104 10:54:15.080261   37715 main.go:141] libmachine: (ha-931571-m03) 
	I1104 10:54:15.087034   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined MAC address 52:54:00:1d:68:f5 in network default
	I1104 10:54:15.087544   37715 main.go:141] libmachine: (ha-931571-m03) Ensuring networks are active...
	I1104 10:54:15.087568   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:15.088354   37715 main.go:141] libmachine: (ha-931571-m03) Ensuring network default is active
	I1104 10:54:15.088653   37715 main.go:141] libmachine: (ha-931571-m03) Ensuring network mk-ha-931571 is active
	I1104 10:54:15.089053   37715 main.go:141] libmachine: (ha-931571-m03) Getting domain xml...
	I1104 10:54:15.089835   37715 main.go:141] libmachine: (ha-931571-m03) Creating domain...
	I1104 10:54:16.314267   37715 main.go:141] libmachine: (ha-931571-m03) Waiting to get IP...
	I1104 10:54:16.315295   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:16.315802   37715 main.go:141] libmachine: (ha-931571-m03) DBG | unable to find current IP address of domain ha-931571-m03 in network mk-ha-931571
	I1104 10:54:16.315837   37715 main.go:141] libmachine: (ha-931571-m03) DBG | I1104 10:54:16.315784   38465 retry.go:31] will retry after 211.49676ms: waiting for machine to come up
	I1104 10:54:16.528417   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:16.528897   37715 main.go:141] libmachine: (ha-931571-m03) DBG | unable to find current IP address of domain ha-931571-m03 in network mk-ha-931571
	I1104 10:54:16.528927   37715 main.go:141] libmachine: (ha-931571-m03) DBG | I1104 10:54:16.528846   38465 retry.go:31] will retry after 340.441068ms: waiting for machine to come up
	I1104 10:54:16.871525   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:16.871971   37715 main.go:141] libmachine: (ha-931571-m03) DBG | unable to find current IP address of domain ha-931571-m03 in network mk-ha-931571
	I1104 10:54:16.871997   37715 main.go:141] libmachine: (ha-931571-m03) DBG | I1104 10:54:16.871910   38465 retry.go:31] will retry after 446.439393ms: waiting for machine to come up
	I1104 10:54:17.319543   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:17.320106   37715 main.go:141] libmachine: (ha-931571-m03) DBG | unable to find current IP address of domain ha-931571-m03 in network mk-ha-931571
	I1104 10:54:17.320137   37715 main.go:141] libmachine: (ha-931571-m03) DBG | I1104 10:54:17.320042   38465 retry.go:31] will retry after 381.839641ms: waiting for machine to come up
	I1104 10:54:17.703288   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:17.703811   37715 main.go:141] libmachine: (ha-931571-m03) DBG | unable to find current IP address of domain ha-931571-m03 in network mk-ha-931571
	I1104 10:54:17.703840   37715 main.go:141] libmachine: (ha-931571-m03) DBG | I1104 10:54:17.703750   38465 retry.go:31] will retry after 593.813893ms: waiting for machine to come up
	I1104 10:54:18.299510   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:18.300023   37715 main.go:141] libmachine: (ha-931571-m03) DBG | unable to find current IP address of domain ha-931571-m03 in network mk-ha-931571
	I1104 10:54:18.300055   37715 main.go:141] libmachine: (ha-931571-m03) DBG | I1104 10:54:18.299939   38465 retry.go:31] will retry after 849.789348ms: waiting for machine to come up
	I1104 10:54:19.151490   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:19.151964   37715 main.go:141] libmachine: (ha-931571-m03) DBG | unable to find current IP address of domain ha-931571-m03 in network mk-ha-931571
	I1104 10:54:19.151988   37715 main.go:141] libmachine: (ha-931571-m03) DBG | I1104 10:54:19.151922   38465 retry.go:31] will retry after 1.150337712s: waiting for machine to come up
	I1104 10:54:20.303915   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:20.304325   37715 main.go:141] libmachine: (ha-931571-m03) DBG | unable to find current IP address of domain ha-931571-m03 in network mk-ha-931571
	I1104 10:54:20.304357   37715 main.go:141] libmachine: (ha-931571-m03) DBG | I1104 10:54:20.304278   38465 retry.go:31] will retry after 1.472559033s: waiting for machine to come up
	I1104 10:54:21.778305   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:21.778784   37715 main.go:141] libmachine: (ha-931571-m03) DBG | unable to find current IP address of domain ha-931571-m03 in network mk-ha-931571
	I1104 10:54:21.778810   37715 main.go:141] libmachine: (ha-931571-m03) DBG | I1104 10:54:21.778723   38465 retry.go:31] will retry after 1.37004444s: waiting for machine to come up
	I1104 10:54:23.150404   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:23.150868   37715 main.go:141] libmachine: (ha-931571-m03) DBG | unable to find current IP address of domain ha-931571-m03 in network mk-ha-931571
	I1104 10:54:23.150895   37715 main.go:141] libmachine: (ha-931571-m03) DBG | I1104 10:54:23.150820   38465 retry.go:31] will retry after 1.893583796s: waiting for machine to come up
	I1104 10:54:25.045832   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:25.046288   37715 main.go:141] libmachine: (ha-931571-m03) DBG | unable to find current IP address of domain ha-931571-m03 in network mk-ha-931571
	I1104 10:54:25.046327   37715 main.go:141] libmachine: (ha-931571-m03) DBG | I1104 10:54:25.046279   38465 retry.go:31] will retry after 2.056345872s: waiting for machine to come up
	I1104 10:54:27.105382   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:27.105822   37715 main.go:141] libmachine: (ha-931571-m03) DBG | unable to find current IP address of domain ha-931571-m03 in network mk-ha-931571
	I1104 10:54:27.105853   37715 main.go:141] libmachine: (ha-931571-m03) DBG | I1104 10:54:27.105789   38465 retry.go:31] will retry after 3.414780128s: waiting for machine to come up
	I1104 10:54:30.521832   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:30.522159   37715 main.go:141] libmachine: (ha-931571-m03) DBG | unable to find current IP address of domain ha-931571-m03 in network mk-ha-931571
	I1104 10:54:30.522181   37715 main.go:141] libmachine: (ha-931571-m03) DBG | I1104 10:54:30.522080   38465 retry.go:31] will retry after 3.340201347s: waiting for machine to come up
	I1104 10:54:33.865562   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:33.865973   37715 main.go:141] libmachine: (ha-931571-m03) DBG | unable to find current IP address of domain ha-931571-m03 in network mk-ha-931571
	I1104 10:54:33.866003   37715 main.go:141] libmachine: (ha-931571-m03) DBG | I1104 10:54:33.865938   38465 retry.go:31] will retry after 5.278208954s: waiting for machine to come up
	I1104 10:54:39.149712   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:39.150250   37715 main.go:141] libmachine: (ha-931571-m03) Found IP for machine: 192.168.39.57
	I1104 10:54:39.150283   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has current primary IP address 192.168.39.57 and MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:39.150292   37715 main.go:141] libmachine: (ha-931571-m03) Reserving static IP address...
	I1104 10:54:39.150676   37715 main.go:141] libmachine: (ha-931571-m03) DBG | unable to find host DHCP lease matching {name: "ha-931571-m03", mac: "52:54:00:30:f5:de", ip: "192.168.39.57"} in network mk-ha-931571
	I1104 10:54:39.223412   37715 main.go:141] libmachine: (ha-931571-m03) DBG | Getting to WaitForSSH function...
	I1104 10:54:39.223438   37715 main.go:141] libmachine: (ha-931571-m03) Reserved static IP address: 192.168.39.57
	I1104 10:54:39.223450   37715 main.go:141] libmachine: (ha-931571-m03) Waiting for SSH to be available...
	I1104 10:54:39.226810   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:39.227204   37715 main.go:141] libmachine: (ha-931571-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f5:de", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:54:28 +0000 UTC Type:0 Mac:52:54:00:30:f5:de Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:minikube Clientid:01:52:54:00:30:f5:de}
	I1104 10:54:39.227229   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined IP address 192.168.39.57 and MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:39.227416   37715 main.go:141] libmachine: (ha-931571-m03) DBG | Using SSH client type: external
	I1104 10:54:39.227440   37715 main.go:141] libmachine: (ha-931571-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571-m03/id_rsa (-rw-------)
	I1104 10:54:39.227467   37715 main.go:141] libmachine: (ha-931571-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.57 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1104 10:54:39.227480   37715 main.go:141] libmachine: (ha-931571-m03) DBG | About to run SSH command:
	I1104 10:54:39.227493   37715 main.go:141] libmachine: (ha-931571-m03) DBG | exit 0
	I1104 10:54:39.348849   37715 main.go:141] libmachine: (ha-931571-m03) DBG | SSH cmd err, output: <nil>: 
	I1104 10:54:39.349130   37715 main.go:141] libmachine: (ha-931571-m03) KVM machine creation complete!
	I1104 10:54:39.349458   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetConfigRaw
	I1104 10:54:39.350011   37715 main.go:141] libmachine: (ha-931571-m03) Calling .DriverName
	I1104 10:54:39.350175   37715 main.go:141] libmachine: (ha-931571-m03) Calling .DriverName
	I1104 10:54:39.350318   37715 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1104 10:54:39.350330   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetState
	I1104 10:54:39.351463   37715 main.go:141] libmachine: Detecting operating system of created instance...
	I1104 10:54:39.351478   37715 main.go:141] libmachine: Waiting for SSH to be available...
	I1104 10:54:39.351482   37715 main.go:141] libmachine: Getting to WaitForSSH function...
	I1104 10:54:39.351487   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHHostname
	I1104 10:54:39.353807   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:39.354106   37715 main.go:141] libmachine: (ha-931571-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f5:de", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:54:28 +0000 UTC Type:0 Mac:52:54:00:30:f5:de Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-931571-m03 Clientid:01:52:54:00:30:f5:de}
	I1104 10:54:39.354143   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined IP address 192.168.39.57 and MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:39.354349   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHPort
	I1104 10:54:39.354557   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHKeyPath
	I1104 10:54:39.354742   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHKeyPath
	I1104 10:54:39.354871   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHUsername
	I1104 10:54:39.355021   37715 main.go:141] libmachine: Using SSH client type: native
	I1104 10:54:39.355223   37715 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.57 22 <nil> <nil>}
	I1104 10:54:39.355234   37715 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1104 10:54:39.452207   37715 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1104 10:54:39.452228   37715 main.go:141] libmachine: Detecting the provisioner...
	I1104 10:54:39.452237   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHHostname
	I1104 10:54:39.455314   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:39.455778   37715 main.go:141] libmachine: (ha-931571-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f5:de", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:54:28 +0000 UTC Type:0 Mac:52:54:00:30:f5:de Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-931571-m03 Clientid:01:52:54:00:30:f5:de}
	I1104 10:54:39.455805   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined IP address 192.168.39.57 and MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:39.456043   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHPort
	I1104 10:54:39.456250   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHKeyPath
	I1104 10:54:39.456440   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHKeyPath
	I1104 10:54:39.456603   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHUsername
	I1104 10:54:39.456750   37715 main.go:141] libmachine: Using SSH client type: native
	I1104 10:54:39.456931   37715 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.57 22 <nil> <nil>}
	I1104 10:54:39.456953   37715 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1104 10:54:39.553854   37715 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1104 10:54:39.553946   37715 main.go:141] libmachine: found compatible host: buildroot
	I1104 10:54:39.553963   37715 main.go:141] libmachine: Provisioning with buildroot...
	I1104 10:54:39.553975   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetMachineName
	I1104 10:54:39.554231   37715 buildroot.go:166] provisioning hostname "ha-931571-m03"
	I1104 10:54:39.554253   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetMachineName
	I1104 10:54:39.554456   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHHostname
	I1104 10:54:39.556992   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:39.557348   37715 main.go:141] libmachine: (ha-931571-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f5:de", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:54:28 +0000 UTC Type:0 Mac:52:54:00:30:f5:de Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-931571-m03 Clientid:01:52:54:00:30:f5:de}
	I1104 10:54:39.557377   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined IP address 192.168.39.57 and MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:39.557532   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHPort
	I1104 10:54:39.557736   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHKeyPath
	I1104 10:54:39.557887   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHKeyPath
	I1104 10:54:39.558007   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHUsername
	I1104 10:54:39.558172   37715 main.go:141] libmachine: Using SSH client type: native
	I1104 10:54:39.558399   37715 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.57 22 <nil> <nil>}
	I1104 10:54:39.558418   37715 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-931571-m03 && echo "ha-931571-m03" | sudo tee /etc/hostname
	I1104 10:54:39.670668   37715 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-931571-m03
	
	I1104 10:54:39.670701   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHHostname
	I1104 10:54:39.674148   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:39.674467   37715 main.go:141] libmachine: (ha-931571-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f5:de", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:54:28 +0000 UTC Type:0 Mac:52:54:00:30:f5:de Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-931571-m03 Clientid:01:52:54:00:30:f5:de}
	I1104 10:54:39.674492   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined IP address 192.168.39.57 and MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:39.674738   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHPort
	I1104 10:54:39.674887   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHKeyPath
	I1104 10:54:39.675053   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHKeyPath
	I1104 10:54:39.675250   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHUsername
	I1104 10:54:39.675459   37715 main.go:141] libmachine: Using SSH client type: native
	I1104 10:54:39.675678   37715 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.57 22 <nil> <nil>}
	I1104 10:54:39.675703   37715 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-931571-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-931571-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-931571-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1104 10:54:39.782022   37715 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1104 10:54:39.782049   37715 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19906-19898/.minikube CaCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19906-19898/.minikube}
	I1104 10:54:39.782068   37715 buildroot.go:174] setting up certificates
	I1104 10:54:39.782080   37715 provision.go:84] configureAuth start
	I1104 10:54:39.782091   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetMachineName
	I1104 10:54:39.782349   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetIP
	I1104 10:54:39.785051   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:39.785459   37715 main.go:141] libmachine: (ha-931571-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f5:de", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:54:28 +0000 UTC Type:0 Mac:52:54:00:30:f5:de Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-931571-m03 Clientid:01:52:54:00:30:f5:de}
	I1104 10:54:39.785488   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined IP address 192.168.39.57 and MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:39.785656   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHHostname
	I1104 10:54:39.787833   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:39.788124   37715 main.go:141] libmachine: (ha-931571-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f5:de", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:54:28 +0000 UTC Type:0 Mac:52:54:00:30:f5:de Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-931571-m03 Clientid:01:52:54:00:30:f5:de}
	I1104 10:54:39.788141   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined IP address 192.168.39.57 and MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:39.788305   37715 provision.go:143] copyHostCerts
	I1104 10:54:39.788334   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem
	I1104 10:54:39.788369   37715 exec_runner.go:144] found /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem, removing ...
	I1104 10:54:39.788378   37715 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem
	I1104 10:54:39.788442   37715 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem (1078 bytes)
	I1104 10:54:39.788557   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem
	I1104 10:54:39.788577   37715 exec_runner.go:144] found /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem, removing ...
	I1104 10:54:39.788584   37715 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem
	I1104 10:54:39.788610   37715 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem (1123 bytes)
	I1104 10:54:39.788656   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem
	I1104 10:54:39.788673   37715 exec_runner.go:144] found /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem, removing ...
	I1104 10:54:39.788679   37715 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem
	I1104 10:54:39.788700   37715 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem (1679 bytes)
	I1104 10:54:39.788771   37715 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem org=jenkins.ha-931571-m03 san=[127.0.0.1 192.168.39.57 ha-931571-m03 localhost minikube]
	I1104 10:54:39.906066   37715 provision.go:177] copyRemoteCerts
	I1104 10:54:39.906121   37715 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1104 10:54:39.906156   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHHostname
	I1104 10:54:39.909171   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:39.909602   37715 main.go:141] libmachine: (ha-931571-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f5:de", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:54:28 +0000 UTC Type:0 Mac:52:54:00:30:f5:de Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-931571-m03 Clientid:01:52:54:00:30:f5:de}
	I1104 10:54:39.909633   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined IP address 192.168.39.57 and MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:39.909904   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHPort
	I1104 10:54:39.910114   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHKeyPath
	I1104 10:54:39.910451   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHUsername
	I1104 10:54:39.910562   37715 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571-m03/id_rsa Username:docker}
	I1104 10:54:39.986932   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1104 10:54:39.986995   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1104 10:54:40.011798   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1104 10:54:40.011899   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1104 10:54:40.035728   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1104 10:54:40.035811   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1104 10:54:40.058737   37715 provision.go:87] duration metric: took 276.643486ms to configureAuth
	I1104 10:54:40.058767   37715 buildroot.go:189] setting minikube options for container-runtime
	I1104 10:54:40.058982   37715 config.go:182] Loaded profile config "ha-931571": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1104 10:54:40.059060   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHHostname
	I1104 10:54:40.061592   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:40.061918   37715 main.go:141] libmachine: (ha-931571-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f5:de", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:54:28 +0000 UTC Type:0 Mac:52:54:00:30:f5:de Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-931571-m03 Clientid:01:52:54:00:30:f5:de}
	I1104 10:54:40.061947   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined IP address 192.168.39.57 and MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:40.062136   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHPort
	I1104 10:54:40.062313   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHKeyPath
	I1104 10:54:40.062493   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHKeyPath
	I1104 10:54:40.062627   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHUsername
	I1104 10:54:40.062779   37715 main.go:141] libmachine: Using SSH client type: native
	I1104 10:54:40.062931   37715 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.57 22 <nil> <nil>}
	I1104 10:54:40.062946   37715 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1104 10:54:40.285341   37715 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1104 10:54:40.285362   37715 main.go:141] libmachine: Checking connection to Docker...
	I1104 10:54:40.285369   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetURL
	I1104 10:54:40.286607   37715 main.go:141] libmachine: (ha-931571-m03) DBG | Using libvirt version 6000000
	I1104 10:54:40.288784   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:40.289099   37715 main.go:141] libmachine: (ha-931571-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f5:de", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:54:28 +0000 UTC Type:0 Mac:52:54:00:30:f5:de Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-931571-m03 Clientid:01:52:54:00:30:f5:de}
	I1104 10:54:40.289130   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined IP address 192.168.39.57 and MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:40.289303   37715 main.go:141] libmachine: Docker is up and running!
	I1104 10:54:40.289319   37715 main.go:141] libmachine: Reticulating splines...
	I1104 10:54:40.289326   37715 client.go:171] duration metric: took 25.618399312s to LocalClient.Create
	I1104 10:54:40.289350   37715 start.go:167] duration metric: took 25.618478892s to libmachine.API.Create "ha-931571"
	I1104 10:54:40.289362   37715 start.go:293] postStartSetup for "ha-931571-m03" (driver="kvm2")
	I1104 10:54:40.289391   37715 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1104 10:54:40.289407   37715 main.go:141] libmachine: (ha-931571-m03) Calling .DriverName
	I1104 10:54:40.289628   37715 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1104 10:54:40.289653   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHHostname
	I1104 10:54:40.291922   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:40.292338   37715 main.go:141] libmachine: (ha-931571-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f5:de", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:54:28 +0000 UTC Type:0 Mac:52:54:00:30:f5:de Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-931571-m03 Clientid:01:52:54:00:30:f5:de}
	I1104 10:54:40.292358   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined IP address 192.168.39.57 and MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:40.292590   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHPort
	I1104 10:54:40.292774   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHKeyPath
	I1104 10:54:40.292922   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHUsername
	I1104 10:54:40.293081   37715 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571-m03/id_rsa Username:docker}
	I1104 10:54:40.371198   37715 ssh_runner.go:195] Run: cat /etc/os-release
	I1104 10:54:40.375533   37715 info.go:137] Remote host: Buildroot 2023.02.9
	I1104 10:54:40.375563   37715 filesync.go:126] Scanning /home/jenkins/minikube-integration/19906-19898/.minikube/addons for local assets ...
	I1104 10:54:40.375682   37715 filesync.go:126] Scanning /home/jenkins/minikube-integration/19906-19898/.minikube/files for local assets ...
	I1104 10:54:40.375780   37715 filesync.go:149] local asset: /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem -> 272182.pem in /etc/ssl/certs
	I1104 10:54:40.375790   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem -> /etc/ssl/certs/272182.pem
	I1104 10:54:40.375871   37715 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1104 10:54:40.385684   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem --> /etc/ssl/certs/272182.pem (1708 bytes)
	I1104 10:54:40.408674   37715 start.go:296] duration metric: took 119.284792ms for postStartSetup
	I1104 10:54:40.408723   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetConfigRaw
	I1104 10:54:40.409449   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetIP
	I1104 10:54:40.412211   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:40.412561   37715 main.go:141] libmachine: (ha-931571-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f5:de", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:54:28 +0000 UTC Type:0 Mac:52:54:00:30:f5:de Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-931571-m03 Clientid:01:52:54:00:30:f5:de}
	I1104 10:54:40.412589   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined IP address 192.168.39.57 and MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:40.412888   37715 profile.go:143] Saving config to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/config.json ...
	I1104 10:54:40.413122   37715 start.go:128] duration metric: took 25.760559258s to createHost
	I1104 10:54:40.413150   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHHostname
	I1104 10:54:40.415473   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:40.415825   37715 main.go:141] libmachine: (ha-931571-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f5:de", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:54:28 +0000 UTC Type:0 Mac:52:54:00:30:f5:de Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-931571-m03 Clientid:01:52:54:00:30:f5:de}
	I1104 10:54:40.415846   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined IP address 192.168.39.57 and MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:40.415970   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHPort
	I1104 10:54:40.416207   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHKeyPath
	I1104 10:54:40.416371   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHKeyPath
	I1104 10:54:40.416538   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHUsername
	I1104 10:54:40.416702   37715 main.go:141] libmachine: Using SSH client type: native
	I1104 10:54:40.416875   37715 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.57 22 <nil> <nil>}
	I1104 10:54:40.416888   37715 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1104 10:54:40.513907   37715 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730717680.493900775
	
	I1104 10:54:40.513930   37715 fix.go:216] guest clock: 1730717680.493900775
	I1104 10:54:40.513937   37715 fix.go:229] Guest: 2024-11-04 10:54:40.493900775 +0000 UTC Remote: 2024-11-04 10:54:40.413138421 +0000 UTC m=+139.084656658 (delta=80.762354ms)
	I1104 10:54:40.513952   37715 fix.go:200] guest clock delta is within tolerance: 80.762354ms
	I1104 10:54:40.513957   37715 start.go:83] releasing machines lock for "ha-931571-m03", held for 25.861527752s
	I1104 10:54:40.513977   37715 main.go:141] libmachine: (ha-931571-m03) Calling .DriverName
	I1104 10:54:40.514219   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetIP
	I1104 10:54:40.516861   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:40.517293   37715 main.go:141] libmachine: (ha-931571-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f5:de", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:54:28 +0000 UTC Type:0 Mac:52:54:00:30:f5:de Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-931571-m03 Clientid:01:52:54:00:30:f5:de}
	I1104 10:54:40.517318   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined IP address 192.168.39.57 and MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:40.519824   37715 out.go:177] * Found network options:
	I1104 10:54:40.521282   37715 out.go:177]   - NO_PROXY=192.168.39.67,192.168.39.245
	W1104 10:54:40.522546   37715 proxy.go:119] fail to check proxy env: Error ip not in block
	W1104 10:54:40.522569   37715 proxy.go:119] fail to check proxy env: Error ip not in block
	I1104 10:54:40.522586   37715 main.go:141] libmachine: (ha-931571-m03) Calling .DriverName
	I1104 10:54:40.523178   37715 main.go:141] libmachine: (ha-931571-m03) Calling .DriverName
	I1104 10:54:40.523386   37715 main.go:141] libmachine: (ha-931571-m03) Calling .DriverName
	I1104 10:54:40.523502   37715 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1104 10:54:40.523543   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHHostname
	W1104 10:54:40.523621   37715 proxy.go:119] fail to check proxy env: Error ip not in block
	W1104 10:54:40.523648   37715 proxy.go:119] fail to check proxy env: Error ip not in block
	I1104 10:54:40.523705   37715 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1104 10:54:40.523726   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHHostname
	I1104 10:54:40.526526   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:40.526600   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:40.526878   37715 main.go:141] libmachine: (ha-931571-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f5:de", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:54:28 +0000 UTC Type:0 Mac:52:54:00:30:f5:de Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-931571-m03 Clientid:01:52:54:00:30:f5:de}
	I1104 10:54:40.526907   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined IP address 192.168.39.57 and MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:40.526933   37715 main.go:141] libmachine: (ha-931571-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f5:de", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:54:28 +0000 UTC Type:0 Mac:52:54:00:30:f5:de Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-931571-m03 Clientid:01:52:54:00:30:f5:de}
	I1104 10:54:40.526947   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined IP address 192.168.39.57 and MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:40.527005   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHPort
	I1104 10:54:40.527178   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHKeyPath
	I1104 10:54:40.527307   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHPort
	I1104 10:54:40.527380   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHUsername
	I1104 10:54:40.527467   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHKeyPath
	I1104 10:54:40.527533   37715 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571-m03/id_rsa Username:docker}
	I1104 10:54:40.527573   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHUsername
	I1104 10:54:40.527722   37715 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571-m03/id_rsa Username:docker}
	I1104 10:54:40.761284   37715 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1104 10:54:40.766951   37715 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1104 10:54:40.767028   37715 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1104 10:54:40.784061   37715 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1104 10:54:40.784083   37715 start.go:495] detecting cgroup driver to use...
	I1104 10:54:40.784139   37715 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1104 10:54:40.799767   37715 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1104 10:54:40.814033   37715 docker.go:217] disabling cri-docker service (if available) ...
	I1104 10:54:40.814100   37715 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1104 10:54:40.828095   37715 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1104 10:54:40.843053   37715 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1104 10:54:40.959422   37715 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1104 10:54:41.119792   37715 docker.go:233] disabling docker service ...
	I1104 10:54:41.119859   37715 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1104 10:54:41.134123   37715 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1104 10:54:41.147262   37715 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1104 10:54:41.281486   37715 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1104 10:54:41.401330   37715 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1104 10:54:41.415018   37715 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1104 10:54:41.433640   37715 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1104 10:54:41.433713   37715 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 10:54:41.444506   37715 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1104 10:54:41.444582   37715 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 10:54:41.456767   37715 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 10:54:41.467306   37715 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 10:54:41.477809   37715 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1104 10:54:41.488160   37715 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 10:54:41.498689   37715 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 10:54:41.515679   37715 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 10:54:41.526763   37715 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1104 10:54:41.536412   37715 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1104 10:54:41.536469   37715 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1104 10:54:41.549448   37715 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1104 10:54:41.559807   37715 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1104 10:54:41.665655   37715 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1104 10:54:41.758091   37715 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1104 10:54:41.758187   37715 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1104 10:54:41.762517   37715 start.go:563] Will wait 60s for crictl version
	I1104 10:54:41.762572   37715 ssh_runner.go:195] Run: which crictl
	I1104 10:54:41.766429   37715 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1104 10:54:41.804303   37715 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1104 10:54:41.804420   37715 ssh_runner.go:195] Run: crio --version
	I1104 10:54:41.830473   37715 ssh_runner.go:195] Run: crio --version
	I1104 10:54:41.860302   37715 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1104 10:54:41.861621   37715 out.go:177]   - env NO_PROXY=192.168.39.67
	I1104 10:54:41.863004   37715 out.go:177]   - env NO_PROXY=192.168.39.67,192.168.39.245
	I1104 10:54:41.864263   37715 main.go:141] libmachine: (ha-931571-m03) Calling .GetIP
	I1104 10:54:41.867052   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:41.867423   37715 main.go:141] libmachine: (ha-931571-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f5:de", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:54:28 +0000 UTC Type:0 Mac:52:54:00:30:f5:de Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-931571-m03 Clientid:01:52:54:00:30:f5:de}
	I1104 10:54:41.867446   37715 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined IP address 192.168.39.57 and MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:54:41.867651   37715 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1104 10:54:41.871716   37715 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1104 10:54:41.884015   37715 mustload.go:65] Loading cluster: ha-931571
	I1104 10:54:41.884230   37715 config.go:182] Loaded profile config "ha-931571": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1104 10:54:41.884480   37715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 10:54:41.884518   37715 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 10:54:41.900117   37715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41207
	I1104 10:54:41.900610   37715 main.go:141] libmachine: () Calling .GetVersion
	I1104 10:54:41.901163   37715 main.go:141] libmachine: Using API Version  1
	I1104 10:54:41.901184   37715 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 10:54:41.901516   37715 main.go:141] libmachine: () Calling .GetMachineName
	I1104 10:54:41.901701   37715 main.go:141] libmachine: (ha-931571) Calling .GetState
	I1104 10:54:41.903124   37715 host.go:66] Checking if "ha-931571" exists ...
	I1104 10:54:41.903396   37715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 10:54:41.903433   37715 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 10:54:41.918029   37715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40437
	I1104 10:54:41.918566   37715 main.go:141] libmachine: () Calling .GetVersion
	I1104 10:54:41.919028   37715 main.go:141] libmachine: Using API Version  1
	I1104 10:54:41.919050   37715 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 10:54:41.919333   37715 main.go:141] libmachine: () Calling .GetMachineName
	I1104 10:54:41.919520   37715 main.go:141] libmachine: (ha-931571) Calling .DriverName
	I1104 10:54:41.919673   37715 certs.go:68] Setting up /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571 for IP: 192.168.39.57
	I1104 10:54:41.919684   37715 certs.go:194] generating shared ca certs ...
	I1104 10:54:41.919697   37715 certs.go:226] acquiring lock for ca certs: {Name:mk4fb0469da6697654d0290fd25edddd463f47c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 10:54:41.919810   37715 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.key
	I1104 10:54:41.919845   37715 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.key
	I1104 10:54:41.919854   37715 certs.go:256] generating profile certs ...
	I1104 10:54:41.919922   37715 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/client.key
	I1104 10:54:41.919946   37715 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.key.a50c38dd
	I1104 10:54:41.919960   37715 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.crt.a50c38dd with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.67 192.168.39.245 192.168.39.57 192.168.39.254]
	I1104 10:54:42.049039   37715 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.crt.a50c38dd ...
	I1104 10:54:42.049068   37715 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.crt.a50c38dd: {Name:mk425b204dd51c6129591dbbf4cda0b66e34eb56 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 10:54:42.049239   37715 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.key.a50c38dd ...
	I1104 10:54:42.049250   37715 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.key.a50c38dd: {Name:mk1230635dbd65cb8c7d025a3549f17dc35e060e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 10:54:42.049322   37715 certs.go:381] copying /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.crt.a50c38dd -> /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.crt
	I1104 10:54:42.049449   37715 certs.go:385] copying /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.key.a50c38dd -> /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.key
	I1104 10:54:42.049564   37715 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/proxy-client.key
	I1104 10:54:42.049580   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1104 10:54:42.049595   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1104 10:54:42.049608   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1104 10:54:42.049621   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1104 10:54:42.049634   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1104 10:54:42.049647   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1104 10:54:42.049657   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1104 10:54:42.049669   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1104 10:54:42.049713   37715 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218.pem (1338 bytes)
	W1104 10:54:42.049741   37715 certs.go:480] ignoring /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218_empty.pem, impossibly tiny 0 bytes
	I1104 10:54:42.049750   37715 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem (1675 bytes)
	I1104 10:54:42.049771   37715 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem (1078 bytes)
	I1104 10:54:42.049799   37715 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem (1123 bytes)
	I1104 10:54:42.049819   37715 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem (1679 bytes)
	I1104 10:54:42.049855   37715 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem (1708 bytes)
	I1104 10:54:42.049880   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem -> /usr/share/ca-certificates/272182.pem
	I1104 10:54:42.049893   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1104 10:54:42.049905   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218.pem -> /usr/share/ca-certificates/27218.pem
	I1104 10:54:42.049934   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHHostname
	I1104 10:54:42.052637   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:54:42.053074   37715 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 10:54:42.053102   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:54:42.053289   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHPort
	I1104 10:54:42.053475   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 10:54:42.053607   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHUsername
	I1104 10:54:42.053769   37715 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571/id_rsa Username:docker}
	I1104 10:54:42.125617   37715 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1104 10:54:42.129901   37715 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1104 10:54:42.141111   37715 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1104 10:54:42.145054   37715 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1104 10:54:42.154954   37715 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1104 10:54:42.158822   37715 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1104 10:54:42.168976   37715 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1104 10:54:42.172887   37715 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1104 10:54:42.182649   37715 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1104 10:54:42.186455   37715 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1104 10:54:42.196466   37715 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1104 10:54:42.200376   37715 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1104 10:54:42.211239   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1104 10:54:42.236618   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1104 10:54:42.260726   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1104 10:54:42.283147   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1104 10:54:42.305271   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I1104 10:54:42.327703   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1104 10:54:42.350340   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1104 10:54:42.372114   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1104 10:54:42.394125   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem --> /usr/share/ca-certificates/272182.pem (1708 bytes)
	I1104 10:54:42.415761   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1104 10:54:42.437284   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218.pem --> /usr/share/ca-certificates/27218.pem (1338 bytes)
	I1104 10:54:42.458545   37715 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1104 10:54:42.474091   37715 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1104 10:54:42.489871   37715 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1104 10:54:42.505378   37715 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1104 10:54:42.521116   37715 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1104 10:54:42.537323   37715 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1104 10:54:42.553306   37715 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1104 10:54:42.569157   37715 ssh_runner.go:195] Run: openssl version
	I1104 10:54:42.574422   37715 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/27218.pem && ln -fs /usr/share/ca-certificates/27218.pem /etc/ssl/certs/27218.pem"
	I1104 10:54:42.584560   37715 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/27218.pem
	I1104 10:54:42.588538   37715 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  4 10:49 /usr/share/ca-certificates/27218.pem
	I1104 10:54:42.588592   37715 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/27218.pem
	I1104 10:54:42.594056   37715 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/27218.pem /etc/ssl/certs/51391683.0"
	I1104 10:54:42.604559   37715 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/272182.pem && ln -fs /usr/share/ca-certificates/272182.pem /etc/ssl/certs/272182.pem"
	I1104 10:54:42.615717   37715 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/272182.pem
	I1104 10:54:42.619821   37715 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  4 10:49 /usr/share/ca-certificates/272182.pem
	I1104 10:54:42.619868   37715 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/272182.pem
	I1104 10:54:42.625153   37715 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/272182.pem /etc/ssl/certs/3ec20f2e.0"
	I1104 10:54:42.638993   37715 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1104 10:54:42.649427   37715 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1104 10:54:42.653431   37715 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  4 10:38 /usr/share/ca-certificates/minikubeCA.pem
	I1104 10:54:42.653483   37715 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1104 10:54:42.658834   37715 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1104 10:54:42.670960   37715 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1104 10:54:42.675173   37715 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1104 10:54:42.675237   37715 kubeadm.go:934] updating node {m03 192.168.39.57 8443 v1.31.2 crio true true} ...
	I1104 10:54:42.675332   37715 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-931571-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.57
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-931571 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1104 10:54:42.675370   37715 kube-vip.go:115] generating kube-vip config ...
	I1104 10:54:42.675419   37715 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1104 10:54:42.692549   37715 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1104 10:54:42.692627   37715 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.5
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
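
The manifest printed above is the static-pod definition minikube generates for kube-vip; a few lines further down it is written to /etc/kubernetes/manifests/kube-vip.yaml on the new node. kube-vip runs on every control-plane member, uses the plndr-cp-lock lease for leader election, announces the virtual IP 192.168.39.254 on eth0 via ARP, and load-balances API-server traffic on port 8443. As a rough illustration of how such a manifest can be produced, the Go sketch below renders a trimmed-down template with the same values; the template text and field names are illustrative stand-ins, not minikube's actual kube-vip template.

package main

import (
	"os"
	"text/template"
)

// vipParams holds the per-cluster values; the field names are illustrative only.
type vipParams struct {
	VIP       string
	Port      string
	Interface string
	Image     string
}

// manifestTmpl is a heavily trimmed stand-in for the kube-vip static-pod template.
const manifestTmpl = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - args: ["manager"]
    env:
    - name: address
      value: {{ .VIP }}
    - name: port
      value: "{{ .Port }}"
    - name: vip_interface
      value: {{ .Interface }}
    image: {{ .Image }}
    name: kube-vip
  hostNetwork: true
`

func main() {
	t := template.Must(template.New("kube-vip").Parse(manifestTmpl))
	// Values taken from the log above.
	_ = t.Execute(os.Stdout, vipParams{
		VIP:       "192.168.39.254",
		Port:      "8443",
		Interface: "eth0",
		Image:     "ghcr.io/kube-vip/kube-vip:v0.8.5",
	})
}
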
	I1104 10:54:42.692680   37715 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1104 10:54:42.702705   37715 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.2': No such file or directory
	
	Initiating transfer...
	I1104 10:54:42.702768   37715 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.2
	I1104 10:54:42.712640   37715 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
	I1104 10:54:42.712662   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/linux/amd64/v1.31.2/kubectl -> /var/lib/minikube/binaries/v1.31.2/kubectl
	I1104 10:54:42.712660   37715 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm.sha256
	I1104 10:54:42.712682   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/linux/amd64/v1.31.2/kubeadm -> /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1104 10:54:42.712648   37715 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256
	I1104 10:54:42.712715   37715 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl
	I1104 10:54:42.712727   37715 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1104 10:54:42.712752   37715 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1104 10:54:42.718694   37715 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubectl': No such file or directory
	I1104 10:54:42.718732   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/cache/linux/amd64/v1.31.2/kubectl --> /var/lib/minikube/binaries/v1.31.2/kubectl (56381592 bytes)
	I1104 10:54:42.746213   37715 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/linux/amd64/v1.31.2/kubelet -> /var/lib/minikube/binaries/v1.31.2/kubelet
	I1104 10:54:42.746221   37715 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubeadm': No such file or directory
	I1104 10:54:42.746258   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/cache/linux/amd64/v1.31.2/kubeadm --> /var/lib/minikube/binaries/v1.31.2/kubeadm (58290328 bytes)
	I1104 10:54:42.746334   37715 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet
	I1104 10:54:42.789088   37715 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubelet': No such file or directory
	I1104 10:54:42.789130   37715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/cache/linux/amd64/v1.31.2/kubelet --> /var/lib/minikube/binaries/v1.31.2/kubelet (76902744 bytes)
	I1104 10:54:43.556894   37715 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1104 10:54:43.566649   37715 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1104 10:54:43.583297   37715 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1104 10:54:43.599783   37715 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1104 10:54:43.615935   37715 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1104 10:54:43.619736   37715 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1104 10:54:43.632102   37715 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1104 10:54:43.769468   37715 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1104 10:54:43.787176   37715 host.go:66] Checking if "ha-931571" exists ...
	I1104 10:54:43.787522   37715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 10:54:43.787559   37715 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 10:54:43.803438   37715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37129
	I1104 10:54:43.803811   37715 main.go:141] libmachine: () Calling .GetVersion
	I1104 10:54:43.804247   37715 main.go:141] libmachine: Using API Version  1
	I1104 10:54:43.804266   37715 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 10:54:43.804582   37715 main.go:141] libmachine: () Calling .GetMachineName
	I1104 10:54:43.804752   37715 main.go:141] libmachine: (ha-931571) Calling .DriverName
	I1104 10:54:43.804873   37715 start.go:317] joinCluster: &{Name:ha-931571 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-931571 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.67 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.245 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.57 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1104 10:54:43.805017   37715 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1104 10:54:43.805035   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHHostname
	I1104 10:54:43.808407   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:54:43.808840   37715 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 10:54:43.808868   37715 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 10:54:43.808996   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHPort
	I1104 10:54:43.809168   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 10:54:43.809326   37715 main.go:141] libmachine: (ha-931571) Calling .GetSSHUsername
	I1104 10:54:43.809457   37715 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571/id_rsa Username:docker}
	I1104 10:54:43.953404   37715 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.57 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1104 10:54:43.953450   37715 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token cjywwd.x031qjjoquz98pue --discovery-token-ca-cert-hash sha256:95833ff41306ac800b975e28f2da3aedc13f83db6865dfb0ce3f10d781d75fd9 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-931571-m03 --control-plane --apiserver-advertise-address=192.168.39.57 --apiserver-bind-port=8443"
	I1104 10:55:05.442467   37715 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token cjywwd.x031qjjoquz98pue --discovery-token-ca-cert-hash sha256:95833ff41306ac800b975e28f2da3aedc13f83db6865dfb0ce3f10d781d75fd9 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-931571-m03 --control-plane --apiserver-advertise-address=192.168.39.57 --apiserver-bind-port=8443": (21.488974658s)
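
The 21.5s step just completed is the actual control-plane join: kubeadm runs on the new machine against the VIP endpoint control-plane.minikube.internal:8443 with a bootstrap token and the CA certificate hash, advertises the node's own address 192.168.39.57, and registers it as an additional control plane. For reference, a minimal Go sketch of invoking the same command directly on the node is shown below; every flag, the token, and the hash are copied verbatim from the log line above, and in the test itself minikube drives this over SSH rather than locally.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Flags mirror the kubeadm join command recorded in the log above; the token and
	// discovery hash would normally come from `kubeadm token create --print-join-command`.
	args := []string{
		"join", "control-plane.minikube.internal:8443",
		"--token", "cjywwd.x031qjjoquz98pue",
		"--discovery-token-ca-cert-hash", "sha256:95833ff41306ac800b975e28f2da3aedc13f83db6865dfb0ce3f10d781d75fd9",
		"--ignore-preflight-errors=all",
		"--cri-socket", "unix:///var/run/crio/crio.sock",
		"--node-name=ha-931571-m03",
		"--control-plane",
		"--apiserver-advertise-address=192.168.39.57",
		"--apiserver-bind-port=8443",
	}
	out, err := exec.Command("/var/lib/minikube/binaries/v1.31.2/kubeadm", args...).CombinedOutput()
	fmt.Println(string(out))
	if err != nil {
		fmt.Println("join failed:", err)
	}
}
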
	I1104 10:55:05.442503   37715 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1104 10:55:05.990844   37715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-931571-m03 minikube.k8s.io/updated_at=2024_11_04T10_55_05_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=c03dd974d73f8853a1a57928c124797a5ae24dc4 minikube.k8s.io/name=ha-931571 minikube.k8s.io/primary=false
	I1104 10:55:06.139537   37715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-931571-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I1104 10:55:06.285616   37715 start.go:319] duration metric: took 22.480737326s to joinCluster
	I1104 10:55:06.285694   37715 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.57 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1104 10:55:06.286003   37715 config.go:182] Loaded profile config "ha-931571": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1104 10:55:06.288554   37715 out.go:177] * Verifying Kubernetes components...
	I1104 10:55:06.289975   37715 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1104 10:55:06.546650   37715 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1104 10:55:06.605631   37715 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19906-19898/kubeconfig
	I1104 10:55:06.605981   37715 kapi.go:59] client config for ha-931571: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/client.crt", KeyFile:"/home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/client.key", CAFile:"/home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2437da0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1104 10:55:06.606063   37715 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.67:8443
	I1104 10:55:06.606329   37715 node_ready.go:35] waiting up to 6m0s for node "ha-931571-m03" to be "Ready" ...
	I1104 10:55:06.606418   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:06.606434   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:06.606445   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:06.606456   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:06.609914   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:07.107514   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:07.107534   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:07.107542   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:07.107546   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:07.111083   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:07.606560   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:07.606587   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:07.606600   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:07.606605   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:07.613411   37715 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1104 10:55:08.107538   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:08.107560   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:08.107567   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:08.107570   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:08.110694   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:08.606539   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:08.606559   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:08.606567   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:08.606571   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:08.609675   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:08.610356   37715 node_ready.go:53] node "ha-931571-m03" has status "Ready":"False"
	I1104 10:55:09.106606   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:09.106630   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:09.106639   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:09.106644   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:09.109657   37715 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1104 10:55:09.607102   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:09.607123   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:09.607131   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:09.607135   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:09.610601   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:10.106839   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:10.106861   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:10.106872   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:10.106887   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:10.110421   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:10.607151   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:10.607178   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:10.607190   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:10.607195   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:10.610313   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:10.611052   37715 node_ready.go:53] node "ha-931571-m03" has status "Ready":"False"
	I1104 10:55:11.107465   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:11.107489   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:11.107500   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:11.107505   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:11.134933   37715 round_trippers.go:574] Response Status: 200 OK in 27 milliseconds
	I1104 10:55:11.607114   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:11.607137   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:11.607145   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:11.607149   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:11.610404   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:12.107512   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:12.107532   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:12.107542   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:12.107546   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:12.110694   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:12.606667   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:12.606689   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:12.606701   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:12.606705   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:12.609952   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:13.106734   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:13.106769   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:13.106780   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:13.106786   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:13.110063   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:13.110550   37715 node_ready.go:53] node "ha-931571-m03" has status "Ready":"False"
	I1104 10:55:13.607192   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:13.607222   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:13.607237   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:13.607241   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:13.610250   37715 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1104 10:55:14.106526   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:14.106548   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:14.106556   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:14.106560   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:14.110076   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:14.606584   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:14.606604   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:14.606612   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:14.606622   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:14.609643   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:15.106797   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:15.106819   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:15.106826   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:15.106830   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:15.110526   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:15.111303   37715 node_ready.go:53] node "ha-931571-m03" has status "Ready":"False"
	I1104 10:55:15.606581   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:15.606631   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:15.606643   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:15.606648   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:15.609879   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:16.107000   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:16.107025   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:16.107036   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:16.107042   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:16.110279   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:16.607359   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:16.607381   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:16.607391   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:16.607398   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:16.610655   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:17.106684   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:17.106706   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:17.106716   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:17.106722   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:17.109976   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:17.607162   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:17.607182   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:17.607190   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:17.607194   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:17.610739   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:17.611443   37715 node_ready.go:53] node "ha-931571-m03" has status "Ready":"False"
	I1104 10:55:18.106827   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:18.106850   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:18.106858   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:18.106862   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:18.110271   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:18.607389   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:18.607411   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:18.607419   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:18.607422   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:18.612587   37715 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1104 10:55:19.106763   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:19.106784   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:19.106791   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:19.106795   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:19.110156   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:19.607506   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:19.607532   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:19.607540   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:19.607545   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:19.611651   37715 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1104 10:55:19.612446   37715 node_ready.go:53] node "ha-931571-m03" has status "Ready":"False"
	I1104 10:55:20.107336   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:20.107356   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:20.107364   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:20.107368   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:20.110541   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:20.607455   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:20.607477   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:20.607485   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:20.607488   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:20.610742   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:21.106794   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:21.106815   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:21.106823   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:21.106827   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:21.109773   37715 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1104 10:55:21.607002   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:21.607022   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:21.607030   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:21.607033   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:21.609863   37715 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1104 10:55:22.106940   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:22.106962   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:22.106970   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:22.106981   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:22.110219   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:22.110873   37715 node_ready.go:53] node "ha-931571-m03" has status "Ready":"False"
	I1104 10:55:22.607233   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:22.607256   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:22.607267   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:22.607272   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:22.610320   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:23.107234   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:23.107261   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:23.107272   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:23.107278   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:23.110559   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:23.607522   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:23.607544   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:23.607552   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:23.607557   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:23.610843   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:23.611437   37715 node_ready.go:49] node "ha-931571-m03" has status "Ready":"True"
	I1104 10:55:23.611454   37715 node_ready.go:38] duration metric: took 17.005106707s for node "ha-931571-m03" to be "Ready" ...
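
Readiness of the new node was established by polling the API server for the Node object roughly every 500ms (the repeated GETs above) until status.conditions reported Ready=True, which here took about 17s. A minimal client-go sketch of the same check is shown below, assuming the kubeconfig path seen earlier in the log; the function and variable names are illustrative rather than minikube's own.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeIsReady reports whether the named node has the Ready condition set to True.
func nodeIsReady(ctx context.Context, cs *kubernetes.Clientset, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, nil // treat transient errors as "not ready yet" and keep polling
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	// Kubeconfig path and node name mirror the values seen in the log; adjust as needed.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19906-19898/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) { return nodeIsReady(ctx, cs, "ha-931571-m03") })
	fmt.Println("node ready:", err == nil)
}
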
	I1104 10:55:23.611469   37715 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1104 10:55:23.611529   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods
	I1104 10:55:23.611538   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:23.611545   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:23.611550   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:23.616487   37715 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1104 10:55:23.623329   37715 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-5ss4v" in "kube-system" namespace to be "Ready" ...
	I1104 10:55:23.623422   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ss4v
	I1104 10:55:23.623428   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:23.623436   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:23.623440   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:23.626812   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:23.627478   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571
	I1104 10:55:23.627500   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:23.627509   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:23.627513   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:23.630024   37715 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1104 10:55:23.630705   37715 pod_ready.go:93] pod "coredns-7c65d6cfc9-5ss4v" in "kube-system" namespace has status "Ready":"True"
	I1104 10:55:23.630725   37715 pod_ready.go:82] duration metric: took 7.365313ms for pod "coredns-7c65d6cfc9-5ss4v" in "kube-system" namespace to be "Ready" ...
	I1104 10:55:23.630737   37715 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-s9wb4" in "kube-system" namespace to be "Ready" ...
	I1104 10:55:23.630804   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s9wb4
	I1104 10:55:23.630815   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:23.630826   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:23.630835   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:23.633089   37715 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1104 10:55:23.633668   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571
	I1104 10:55:23.633688   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:23.633703   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:23.633714   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:23.635922   37715 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1104 10:55:23.636490   37715 pod_ready.go:93] pod "coredns-7c65d6cfc9-s9wb4" in "kube-system" namespace has status "Ready":"True"
	I1104 10:55:23.636510   37715 pod_ready.go:82] duration metric: took 5.760939ms for pod "coredns-7c65d6cfc9-s9wb4" in "kube-system" namespace to be "Ready" ...
	I1104 10:55:23.636522   37715 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-931571" in "kube-system" namespace to be "Ready" ...
	I1104 10:55:23.636583   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/etcd-ha-931571
	I1104 10:55:23.636592   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:23.636602   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:23.636610   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:23.639359   37715 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1104 10:55:23.639900   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571
	I1104 10:55:23.639915   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:23.639922   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:23.639925   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:23.642474   37715 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1104 10:55:23.642946   37715 pod_ready.go:93] pod "etcd-ha-931571" in "kube-system" namespace has status "Ready":"True"
	I1104 10:55:23.642963   37715 pod_ready.go:82] duration metric: took 6.432226ms for pod "etcd-ha-931571" in "kube-system" namespace to be "Ready" ...
	I1104 10:55:23.642971   37715 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-931571-m02" in "kube-system" namespace to be "Ready" ...
	I1104 10:55:23.643028   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/etcd-ha-931571-m02
	I1104 10:55:23.643036   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:23.643043   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:23.643047   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:23.645331   37715 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1104 10:55:23.646060   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:55:23.646073   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:23.646080   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:23.646084   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:23.648315   37715 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1104 10:55:23.648847   37715 pod_ready.go:93] pod "etcd-ha-931571-m02" in "kube-system" namespace has status "Ready":"True"
	I1104 10:55:23.648862   37715 pod_ready.go:82] duration metric: took 5.88444ms for pod "etcd-ha-931571-m02" in "kube-system" namespace to be "Ready" ...
	I1104 10:55:23.648869   37715 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-931571-m03" in "kube-system" namespace to be "Ready" ...
	I1104 10:55:23.808246   37715 request.go:632] Waited for 159.312664ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/etcd-ha-931571-m03
	I1104 10:55:23.808304   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/etcd-ha-931571-m03
	I1104 10:55:23.808309   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:23.808316   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:23.808320   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:23.811540   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:24.007952   37715 request.go:632] Waited for 195.768208ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:24.008033   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:24.008045   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:24.008056   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:24.008066   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:24.011083   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:24.011703   37715 pod_ready.go:93] pod "etcd-ha-931571-m03" in "kube-system" namespace has status "Ready":"True"
	I1104 10:55:24.011724   37715 pod_ready.go:82] duration metric: took 362.848542ms for pod "etcd-ha-931571-m03" in "kube-system" namespace to be "Ready" ...
	I1104 10:55:24.011739   37715 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-931571" in "kube-system" namespace to be "Ready" ...
	I1104 10:55:24.207843   37715 request.go:632] Waited for 196.043868ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-931571
	I1104 10:55:24.207918   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-931571
	I1104 10:55:24.207925   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:24.207937   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:24.207947   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:24.211127   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:24.408352   37715 request.go:632] Waited for 196.308065ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/nodes/ha-931571
	I1104 10:55:24.408442   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571
	I1104 10:55:24.408450   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:24.408460   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:24.408469   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:24.411644   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:24.412279   37715 pod_ready.go:93] pod "kube-apiserver-ha-931571" in "kube-system" namespace has status "Ready":"True"
	I1104 10:55:24.412297   37715 pod_ready.go:82] duration metric: took 400.550124ms for pod "kube-apiserver-ha-931571" in "kube-system" namespace to be "Ready" ...
	I1104 10:55:24.412310   37715 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-931571-m02" in "kube-system" namespace to be "Ready" ...
	I1104 10:55:24.608501   37715 request.go:632] Waited for 196.123497ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-931571-m02
	I1104 10:55:24.608572   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-931571-m02
	I1104 10:55:24.608580   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:24.608590   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:24.608596   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:24.612062   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:24.808253   37715 request.go:632] Waited for 195.326237ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:55:24.808332   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:55:24.808343   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:24.808352   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:24.808358   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:24.811435   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:24.811848   37715 pod_ready.go:93] pod "kube-apiserver-ha-931571-m02" in "kube-system" namespace has status "Ready":"True"
	I1104 10:55:24.811868   37715 pod_ready.go:82] duration metric: took 399.549963ms for pod "kube-apiserver-ha-931571-m02" in "kube-system" namespace to be "Ready" ...
	I1104 10:55:24.811877   37715 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-931571-m03" in "kube-system" namespace to be "Ready" ...
	I1104 10:55:25.008126   37715 request.go:632] Waited for 196.158524ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-931571-m03
	I1104 10:55:25.008216   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-931571-m03
	I1104 10:55:25.008224   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:25.008232   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:25.008237   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:25.011898   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:25.207886   37715 request.go:632] Waited for 195.224715ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:25.207967   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:25.207975   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:25.207983   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:25.207987   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:25.211174   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:25.211794   37715 pod_ready.go:93] pod "kube-apiserver-ha-931571-m03" in "kube-system" namespace has status "Ready":"True"
	I1104 10:55:25.211815   37715 pod_ready.go:82] duration metric: took 399.930178ms for pod "kube-apiserver-ha-931571-m03" in "kube-system" namespace to be "Ready" ...
	I1104 10:55:25.211828   37715 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-931571" in "kube-system" namespace to be "Ready" ...
	I1104 10:55:25.407990   37715 request.go:632] Waited for 196.084804ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-931571
	I1104 10:55:25.408049   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-931571
	I1104 10:55:25.408054   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:25.408062   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:25.408065   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:25.411212   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:25.608267   37715 request.go:632] Waited for 196.399136ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/nodes/ha-931571
	I1104 10:55:25.608341   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571
	I1104 10:55:25.608348   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:25.608358   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:25.608363   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:25.611599   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:25.612277   37715 pod_ready.go:93] pod "kube-controller-manager-ha-931571" in "kube-system" namespace has status "Ready":"True"
	I1104 10:55:25.612297   37715 pod_ready.go:82] duration metric: took 400.459599ms for pod "kube-controller-manager-ha-931571" in "kube-system" namespace to be "Ready" ...
	I1104 10:55:25.612307   37715 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-931571-m02" in "kube-system" namespace to be "Ready" ...
	I1104 10:55:25.808295   37715 request.go:632] Waited for 195.907201ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-931571-m02
	I1104 10:55:25.808358   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-931571-m02
	I1104 10:55:25.808364   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:25.808371   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:25.808379   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:25.811856   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:26.007942   37715 request.go:632] Waited for 195.386929ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:55:26.008009   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:55:26.008020   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:26.008034   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:26.008043   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:26.010794   37715 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1104 10:55:26.011251   37715 pod_ready.go:93] pod "kube-controller-manager-ha-931571-m02" in "kube-system" namespace has status "Ready":"True"
	I1104 10:55:26.011269   37715 pod_ready.go:82] duration metric: took 398.955793ms for pod "kube-controller-manager-ha-931571-m02" in "kube-system" namespace to be "Ready" ...
	I1104 10:55:26.011279   37715 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-931571-m03" in "kube-system" namespace to be "Ready" ...
	I1104 10:55:26.207834   37715 request.go:632] Waited for 196.482261ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-931571-m03
	I1104 10:55:26.207909   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-931571-m03
	I1104 10:55:26.207922   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:26.207934   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:26.207939   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:26.211083   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:26.407914   37715 request.go:632] Waited for 196.093119ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:26.407994   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:26.407999   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:26.408006   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:26.408012   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:26.411522   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:26.412011   37715 pod_ready.go:93] pod "kube-controller-manager-ha-931571-m03" in "kube-system" namespace has status "Ready":"True"
	I1104 10:55:26.412034   37715 pod_ready.go:82] duration metric: took 400.747328ms for pod "kube-controller-manager-ha-931571-m03" in "kube-system" namespace to be "Ready" ...
	I1104 10:55:26.412048   37715 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-bvk6r" in "kube-system" namespace to be "Ready" ...
	I1104 10:55:26.608324   37715 request.go:632] Waited for 196.200888ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bvk6r
	I1104 10:55:26.608407   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bvk6r
	I1104 10:55:26.608414   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:26.608430   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:26.608437   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:26.611990   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:26.808246   37715 request.go:632] Waited for 195.355588ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/nodes/ha-931571
	I1104 10:55:26.808295   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571
	I1104 10:55:26.808300   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:26.808308   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:26.808311   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:26.811118   37715 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1104 10:55:26.811682   37715 pod_ready.go:93] pod "kube-proxy-bvk6r" in "kube-system" namespace has status "Ready":"True"
	I1104 10:55:26.811705   37715 pod_ready.go:82] duration metric: took 399.648214ms for pod "kube-proxy-bvk6r" in "kube-system" namespace to be "Ready" ...
	I1104 10:55:26.811718   37715 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-ttq4z" in "kube-system" namespace to be "Ready" ...
	I1104 10:55:27.008596   37715 request.go:632] Waited for 196.775543ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ttq4z
	I1104 10:55:27.008670   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ttq4z
	I1104 10:55:27.008677   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:27.008685   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:27.008691   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:27.012209   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:27.208175   37715 request.go:632] Waited for 195.363562ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:27.208234   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:27.208240   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:27.208247   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:27.208250   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:27.211552   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:27.212061   37715 pod_ready.go:93] pod "kube-proxy-ttq4z" in "kube-system" namespace has status "Ready":"True"
	I1104 10:55:27.212084   37715 pod_ready.go:82] duration metric: took 400.357853ms for pod "kube-proxy-ttq4z" in "kube-system" namespace to be "Ready" ...
	I1104 10:55:27.212098   37715 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-wz92s" in "kube-system" namespace to be "Ready" ...
	I1104 10:55:27.408120   37715 request.go:632] Waited for 195.934645ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wz92s
	I1104 10:55:27.408175   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wz92s
	I1104 10:55:27.408180   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:27.408188   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:27.408194   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:27.411594   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:27.607502   37715 request.go:632] Waited for 195.309631ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:55:27.607589   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:55:27.607599   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:27.607611   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:27.607621   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:27.610707   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:27.611551   37715 pod_ready.go:93] pod "kube-proxy-wz92s" in "kube-system" namespace has status "Ready":"True"
	I1104 10:55:27.611571   37715 pod_ready.go:82] duration metric: took 399.465223ms for pod "kube-proxy-wz92s" in "kube-system" namespace to be "Ready" ...
	I1104 10:55:27.611584   37715 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-931571" in "kube-system" namespace to be "Ready" ...
	I1104 10:55:27.807587   37715 request.go:632] Waited for 195.935372ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-931571
	I1104 10:55:27.807677   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-931571
	I1104 10:55:27.807686   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:27.807694   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:27.807697   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:27.810852   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:28.007894   37715 request.go:632] Waited for 196.377136ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/nodes/ha-931571
	I1104 10:55:28.007943   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571
	I1104 10:55:28.007948   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:28.007955   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:28.007959   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:28.010780   37715 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1104 10:55:28.011225   37715 pod_ready.go:93] pod "kube-scheduler-ha-931571" in "kube-system" namespace has status "Ready":"True"
	I1104 10:55:28.011242   37715 pod_ready.go:82] duration metric: took 399.65101ms for pod "kube-scheduler-ha-931571" in "kube-system" namespace to be "Ready" ...
	I1104 10:55:28.011252   37715 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-931571-m02" in "kube-system" namespace to be "Ready" ...
	I1104 10:55:28.208327   37715 request.go:632] Waited for 197.007106ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-931571-m02
	I1104 10:55:28.208398   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-931571-m02
	I1104 10:55:28.208406   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:28.208412   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:28.208417   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:28.211868   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:28.407823   37715 request.go:632] Waited for 195.386338ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:55:28.407915   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m02
	I1104 10:55:28.407922   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:28.407929   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:28.407936   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:28.411100   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:28.411750   37715 pod_ready.go:93] pod "kube-scheduler-ha-931571-m02" in "kube-system" namespace has status "Ready":"True"
	I1104 10:55:28.411766   37715 pod_ready.go:82] duration metric: took 400.505326ms for pod "kube-scheduler-ha-931571-m02" in "kube-system" namespace to be "Ready" ...
	I1104 10:55:28.411776   37715 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-931571-m03" in "kube-system" namespace to be "Ready" ...
	I1104 10:55:28.607873   37715 request.go:632] Waited for 196.030747ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-931571-m03
	I1104 10:55:28.607978   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-931571-m03
	I1104 10:55:28.607989   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:28.607996   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:28.607999   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:28.611695   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:28.807696   37715 request.go:632] Waited for 195.284295ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:28.807770   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-931571-m03
	I1104 10:55:28.807776   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:28.807783   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:28.807788   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:28.811278   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:28.812008   37715 pod_ready.go:93] pod "kube-scheduler-ha-931571-m03" in "kube-system" namespace has status "Ready":"True"
	I1104 10:55:28.812025   37715 pod_ready.go:82] duration metric: took 400.242831ms for pod "kube-scheduler-ha-931571-m03" in "kube-system" namespace to be "Ready" ...
	I1104 10:55:28.812037   37715 pod_ready.go:39] duration metric: took 5.200555034s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1104 10:55:28.812050   37715 api_server.go:52] waiting for apiserver process to appear ...
	I1104 10:55:28.812101   37715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 10:55:28.825529   37715 api_server.go:72] duration metric: took 22.539799278s to wait for apiserver process to appear ...
	I1104 10:55:28.825558   37715 api_server.go:88] waiting for apiserver healthz status ...
	I1104 10:55:28.825578   37715 api_server.go:253] Checking apiserver healthz at https://192.168.39.67:8443/healthz ...
	I1104 10:55:28.829724   37715 api_server.go:279] https://192.168.39.67:8443/healthz returned 200:
	ok
	I1104 10:55:28.829787   37715 round_trippers.go:463] GET https://192.168.39.67:8443/version
	I1104 10:55:28.829795   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:28.829803   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:28.829807   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:28.830888   37715 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1104 10:55:28.830964   37715 api_server.go:141] control plane version: v1.31.2
	I1104 10:55:28.830984   37715 api_server.go:131] duration metric: took 5.41894ms to wait for apiserver health ...
	I1104 10:55:28.830996   37715 system_pods.go:43] waiting for kube-system pods to appear ...
	I1104 10:55:29.008134   37715 request.go:632] Waited for 177.060621ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods
	I1104 10:55:29.008207   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods
	I1104 10:55:29.008237   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:29.008252   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:29.008298   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:29.014200   37715 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1104 10:55:29.021556   37715 system_pods.go:59] 24 kube-system pods found
	I1104 10:55:29.021592   37715 system_pods.go:61] "coredns-7c65d6cfc9-5ss4v" [b1994bcf-ce9e-4a5e-90e0-5f3e284218f4] Running
	I1104 10:55:29.021600   37715 system_pods.go:61] "coredns-7c65d6cfc9-s9wb4" [fd497087-82a1-4173-a1ca-87f47225cd80] Running
	I1104 10:55:29.021611   37715 system_pods.go:61] "etcd-ha-931571" [fdadf64d-457c-4f54-8824-770c47938a4d] Running
	I1104 10:55:29.021616   37715 system_pods.go:61] "etcd-ha-931571-m02" [b40b2a26-19b6-47f9-af25-dcbffbe55156] Running
	I1104 10:55:29.021627   37715 system_pods.go:61] "etcd-ha-931571-m03" [8bda5677-cbd9-4c5c-9a71-4d7d4ca3796b] Running
	I1104 10:55:29.021633   37715 system_pods.go:61] "kindnet-2n2ws" [f43095ed-404a-4c99-a271-a8c7fb6a3559] Running
	I1104 10:55:29.021643   37715 system_pods.go:61] "kindnet-bg4z6" [43eed78a-1357-4607-bff5-a1c896da4af2] Running
	I1104 10:55:29.021649   37715 system_pods.go:61] "kindnet-w2jwt" [be594a41-9200-4e2b-a8df-057c381bc0f7] Running
	I1104 10:55:29.021653   37715 system_pods.go:61] "kube-apiserver-ha-931571" [2ba59318-d54d-4948-8133-2ff2afa001e5] Running
	I1104 10:55:29.021658   37715 system_pods.go:61] "kube-apiserver-ha-931571-m02" [6a6bfd7d-cec1-4e07-90bf-c933f871eef1] Running
	I1104 10:55:29.021673   37715 system_pods.go:61] "kube-apiserver-ha-931571-m03" [cc3a9082-873f-4426-98a3-5fcafd0ecc49] Running
	I1104 10:55:29.021679   37715 system_pods.go:61] "kube-controller-manager-ha-931571" [62d03af1-aa91-4ebf-af21-19f760956cf5] Running
	I1104 10:55:29.021684   37715 system_pods.go:61] "kube-controller-manager-ha-931571-m02" [96d65b2a-66c8-411a-bb4b-5ff222b7832d] Running
	I1104 10:55:29.021689   37715 system_pods.go:61] "kube-controller-manager-ha-931571-m03" [a52ddcf8-6212-4701-823d-5d88f1291d38] Running
	I1104 10:55:29.021694   37715 system_pods.go:61] "kube-proxy-bvk6r" [5f293726-a3a3-4398-9b70-ca8f83c66d7c] Running
	I1104 10:55:29.021703   37715 system_pods.go:61] "kube-proxy-ttq4z" [115ca0e9-7fd8-4cbc-8f2a-ec4edfea2b2b] Running
	I1104 10:55:29.021708   37715 system_pods.go:61] "kube-proxy-wz92s" [a2e065c2-9645-44e4-b4e8-dc787b0c6662] Running
	I1104 10:55:29.021714   37715 system_pods.go:61] "kube-scheduler-ha-931571" [8bc3d9c3-2b41-4f54-a511-34939218fa5b] Running
	I1104 10:55:29.021718   37715 system_pods.go:61] "kube-scheduler-ha-931571-m02" [4329adba-71fa-425a-b379-6e52af90b458] Running
	I1104 10:55:29.021723   37715 system_pods.go:61] "kube-scheduler-ha-931571-m03" [db854b86-c89b-43a8-b3c4-e1cca5033fca] Running
	I1104 10:55:29.021739   37715 system_pods.go:61] "kube-vip-ha-931571" [f9948426-2770-47cf-b610-ecfea5b17be9] Running / Ready:ContainersNotReady (containers with unready status: [kube-vip]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-vip])
	I1104 10:55:29.021748   37715 system_pods.go:61] "kube-vip-ha-931571-m02" [860a8a9e-b839-4c23-80b5-415a62fca083] Running / Ready:ContainersNotReady (containers with unready status: [kube-vip]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-vip])
	I1104 10:55:29.021757   37715 system_pods.go:61] "kube-vip-ha-931571-m03" [cca6009a-1a2e-418c-8507-ced1c3c73333] Running / Ready:ContainersNotReady (containers with unready status: [kube-vip]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-vip])
	I1104 10:55:29.021768   37715 system_pods.go:61] "storage-provisioner" [3eb09a1d-0033-428a-a305-aa2901b20566] Running
	I1104 10:55:29.021776   37715 system_pods.go:74] duration metric: took 190.77233ms to wait for pod list to return data ...
	I1104 10:55:29.021785   37715 default_sa.go:34] waiting for default service account to be created ...
	I1104 10:55:29.207606   37715 request.go:632] Waited for 185.728415ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/default/serviceaccounts
	I1104 10:55:29.207670   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/default/serviceaccounts
	I1104 10:55:29.207676   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:29.207686   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:29.207695   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:29.218692   37715 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I1104 10:55:29.218828   37715 default_sa.go:45] found service account: "default"
	I1104 10:55:29.218847   37715 default_sa.go:55] duration metric: took 197.054864ms for default service account to be created ...
	I1104 10:55:29.218857   37715 system_pods.go:116] waiting for k8s-apps to be running ...
	I1104 10:55:29.408474   37715 request.go:632] Waited for 189.535523ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods
	I1104 10:55:29.408534   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods
	I1104 10:55:29.408539   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:29.408546   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:29.408550   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:29.414296   37715 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1104 10:55:29.422499   37715 system_pods.go:86] 24 kube-system pods found
	I1104 10:55:29.422532   37715 system_pods.go:89] "coredns-7c65d6cfc9-5ss4v" [b1994bcf-ce9e-4a5e-90e0-5f3e284218f4] Running
	I1104 10:55:29.422537   37715 system_pods.go:89] "coredns-7c65d6cfc9-s9wb4" [fd497087-82a1-4173-a1ca-87f47225cd80] Running
	I1104 10:55:29.422541   37715 system_pods.go:89] "etcd-ha-931571" [fdadf64d-457c-4f54-8824-770c47938a4d] Running
	I1104 10:55:29.422545   37715 system_pods.go:89] "etcd-ha-931571-m02" [b40b2a26-19b6-47f9-af25-dcbffbe55156] Running
	I1104 10:55:29.422549   37715 system_pods.go:89] "etcd-ha-931571-m03" [8bda5677-cbd9-4c5c-9a71-4d7d4ca3796b] Running
	I1104 10:55:29.422553   37715 system_pods.go:89] "kindnet-2n2ws" [f43095ed-404a-4c99-a271-a8c7fb6a3559] Running
	I1104 10:55:29.422557   37715 system_pods.go:89] "kindnet-bg4z6" [43eed78a-1357-4607-bff5-a1c896da4af2] Running
	I1104 10:55:29.422560   37715 system_pods.go:89] "kindnet-w2jwt" [be594a41-9200-4e2b-a8df-057c381bc0f7] Running
	I1104 10:55:29.422563   37715 system_pods.go:89] "kube-apiserver-ha-931571" [2ba59318-d54d-4948-8133-2ff2afa001e5] Running
	I1104 10:55:29.422567   37715 system_pods.go:89] "kube-apiserver-ha-931571-m02" [6a6bfd7d-cec1-4e07-90bf-c933f871eef1] Running
	I1104 10:55:29.422571   37715 system_pods.go:89] "kube-apiserver-ha-931571-m03" [cc3a9082-873f-4426-98a3-5fcafd0ecc49] Running
	I1104 10:55:29.422576   37715 system_pods.go:89] "kube-controller-manager-ha-931571" [62d03af1-aa91-4ebf-af21-19f760956cf5] Running
	I1104 10:55:29.422582   37715 system_pods.go:89] "kube-controller-manager-ha-931571-m02" [96d65b2a-66c8-411a-bb4b-5ff222b7832d] Running
	I1104 10:55:29.422588   37715 system_pods.go:89] "kube-controller-manager-ha-931571-m03" [a52ddcf8-6212-4701-823d-5d88f1291d38] Running
	I1104 10:55:29.422593   37715 system_pods.go:89] "kube-proxy-bvk6r" [5f293726-a3a3-4398-9b70-ca8f83c66d7c] Running
	I1104 10:55:29.422598   37715 system_pods.go:89] "kube-proxy-ttq4z" [115ca0e9-7fd8-4cbc-8f2a-ec4edfea2b2b] Running
	I1104 10:55:29.422604   37715 system_pods.go:89] "kube-proxy-wz92s" [a2e065c2-9645-44e4-b4e8-dc787b0c6662] Running
	I1104 10:55:29.422614   37715 system_pods.go:89] "kube-scheduler-ha-931571" [8bc3d9c3-2b41-4f54-a511-34939218fa5b] Running
	I1104 10:55:29.422621   37715 system_pods.go:89] "kube-scheduler-ha-931571-m02" [4329adba-71fa-425a-b379-6e52af90b458] Running
	I1104 10:55:29.422624   37715 system_pods.go:89] "kube-scheduler-ha-931571-m03" [db854b86-c89b-43a8-b3c4-e1cca5033fca] Running
	I1104 10:55:29.422633   37715 system_pods.go:89] "kube-vip-ha-931571" [f9948426-2770-47cf-b610-ecfea5b17be9] Running / Ready:ContainersNotReady (containers with unready status: [kube-vip]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-vip])
	I1104 10:55:29.422642   37715 system_pods.go:89] "kube-vip-ha-931571-m02" [860a8a9e-b839-4c23-80b5-415a62fca083] Running / Ready:ContainersNotReady (containers with unready status: [kube-vip]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-vip])
	I1104 10:55:29.422650   37715 system_pods.go:89] "kube-vip-ha-931571-m03" [cca6009a-1a2e-418c-8507-ced1c3c73333] Running / Ready:ContainersNotReady (containers with unready status: [kube-vip]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-vip])
	I1104 10:55:29.422656   37715 system_pods.go:89] "storage-provisioner" [3eb09a1d-0033-428a-a305-aa2901b20566] Running
	I1104 10:55:29.422665   37715 system_pods.go:126] duration metric: took 203.801845ms to wait for k8s-apps to be running ...
	I1104 10:55:29.422676   37715 system_svc.go:44] waiting for kubelet service to be running ....
	I1104 10:55:29.422727   37715 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1104 10:55:29.439259   37715 system_svc.go:56] duration metric: took 16.56809ms WaitForService to wait for kubelet
	I1104 10:55:29.439296   37715 kubeadm.go:582] duration metric: took 23.153569026s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1104 10:55:29.439318   37715 node_conditions.go:102] verifying NodePressure condition ...
	I1104 10:55:29.607660   37715 request.go:632] Waited for 168.244277ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/nodes
	I1104 10:55:29.607713   37715 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes
	I1104 10:55:29.607718   37715 round_trippers.go:469] Request Headers:
	I1104 10:55:29.607726   37715 round_trippers.go:473]     Accept: application/json, */*
	I1104 10:55:29.607732   37715 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 10:55:29.611371   37715 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1104 10:55:29.612755   37715 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1104 10:55:29.612781   37715 node_conditions.go:123] node cpu capacity is 2
	I1104 10:55:29.612794   37715 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1104 10:55:29.612800   37715 node_conditions.go:123] node cpu capacity is 2
	I1104 10:55:29.612807   37715 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1104 10:55:29.612811   37715 node_conditions.go:123] node cpu capacity is 2
	I1104 10:55:29.612817   37715 node_conditions.go:105] duration metric: took 173.492197ms to run NodePressure ...
	I1104 10:55:29.612832   37715 start.go:241] waiting for startup goroutines ...
	I1104 10:55:29.612860   37715 start.go:255] writing updated cluster config ...
	I1104 10:55:29.613201   37715 ssh_runner.go:195] Run: rm -f paused
	I1104 10:55:29.662232   37715 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1104 10:55:29.664453   37715 out.go:177] * Done! kubectl is now configured to use "ha-931571" cluster and "default" namespace by default
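	The startup log above ends with minikube's cluster-verification pass: each control-plane pod is polled for the Ready condition, the API server's /healthz endpoint is probed, and the kube-system pods, default service account, kubelet service, and node conditions are checked in turn. A minimal client-go sketch of that /healthz probe, assuming the kubeconfig written for the ha-931571 profile is the active context (the kubeconfig path and program structure here are illustrative, not taken from the test itself):

	package main

	import (
		"context"
		"fmt"

		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Load the default kubeconfig (~/.kube/config); a successful
		// `minikube start -p ha-931571` writes its context there.
		// The path is an assumption for this sketch.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		// Same check the log records: GET /healthz on the API server and
		// expect the literal body "ok".
		body, err := client.Discovery().RESTClient().
			Get().
			AbsPath("/healthz").
			DoRaw(context.Background())
		if err != nil {
			panic(err)
		}
		fmt.Printf("/healthz returned %q\n", string(body))
	}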
	
	
	==> CRI-O <==
	Nov 04 10:59:32 ha-931571 crio[659]: time="2024-11-04 10:59:32.785855941Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:ca422d1f835b462e7c44e7832053f6b8843511d5eeba3ced31c8b0b6f51661ee,Metadata:&PodSandboxMetadata{Name:busybox-7dff88458-nslmz,Uid:68017266-8187-488d-ab36-2a5af294fa2e,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1730717730868539405,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7dff88458-nslmz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 68017266-8187-488d-ab36-2a5af294fa2e,pod-template-hash: 7dff88458,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-11-04T10:55:30.550992795Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c6e22705ccc1865b8bc5effb151c1f9d726558ad88b6a3bcf86428c0e051f88a,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-s9wb4,Uid:fd497087-82a1-4173-a1ca-87f47225cd80,Namespace:kube-system,Attempt:0,},State:S
ANDBOX_READY,CreatedAt:1730717598441806264,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-s9wb4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd497087-82a1-4173-a1ca-87f47225cd80,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-11-04T10:53:18.128934745Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b15baa796a09ec04b514d2061ed59422516c1f7e4439ba3fcbebb73cbd3afa05,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:3eb09a1d-0033-428a-a305-aa2901b20566,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1730717598440192100,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3eb09a1d-0033-428a-a305-aa2901b20566,},Annotations:map[string]string{kubec
tl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-11-04T10:53:18.126302034Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:bcbca8745afa774e9251a00635a6a08e6f86c862db07fa69ac81ee2c0b157967,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-5ss4v,Uid:b1994bcf-ce9e-4a5e-90e0-5f3e284218f4,Namespace:kube-system,Atte
mpt:0,},State:SANDBOX_READY,CreatedAt:1730717598418623588,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-5ss4v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1994bcf-ce9e-4a5e-90e0-5f3e284218f4,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-11-04T10:53:18.111118903Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:220337aaf496c29271e7e054b3cdfea66b7c252c48cb49a49e7654fb61d21a91,Metadata:&PodSandboxMetadata{Name:kindnet-2n2ws,Uid:f43095ed-404a-4c99-a271-a8c7fb6a3559,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1730717583647207843,Labels:map[string]string{app: kindnet,controller-revision-hash: 65ddb8b87b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-2n2ws,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f43095ed-404a-4c99-a271-a8c7fb6a3559,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotati
ons:map[string]string{kubernetes.io/config.seen: 2024-11-04T10:53:03.328072416Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:88e06a89dd6f22e1089e72d0e95bb740d4472413789aed6751e5201c34bce07d,Metadata:&PodSandboxMetadata{Name:kube-proxy-bvk6r,Uid:5f293726-a3a3-4398-9b70-ca8f83c66d7c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1730717583642840853,Labels:map[string]string{controller-revision-hash: 77987969cc,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-bvk6r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f293726-a3a3-4398-9b70-ca8f83c66d7c,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-11-04T10:53:03.322492710Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b36f0d25b985ad35c72d61e5d419af4761c0ed5584860b2c0eda0017653cfaa5,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-931571,Uid:04abf0ed929591b9a922eba9b45e06b4,Namespace:kube-system,Attempt:0
,},State:SANDBOX_READY,CreatedAt:1730717572066348397,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-931571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04abf0ed929591b9a922eba9b45e06b4,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 04abf0ed929591b9a922eba9b45e06b4,kubernetes.io/config.seen: 2024-11-04T10:52:51.586471280Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:9659e6073c7aea4a2bc7bbd2bc5081cfaf29c86595120748fa2b6d637cfd0405,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-931571,Uid:4685ec45b7a2365863fd185bc1066ff5,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1730717572063556276,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-931571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4685ec45b7a2365863fd185bc1066ff
5,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 4685ec45b7a2365863fd185bc1066ff5,kubernetes.io/config.seen: 2024-11-04T10:52:51.586470029Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:d779a632ccdcabf2a834569e1b03676bb2cb2ecac031cdb417048bfd227afd27,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-931571,Uid:488ad91ee064d442db18849afe83c778,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1730717572050986381,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-931571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 488ad91ee064d442db18849afe83c778,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.67:8443,kubernetes.io/config.hash: 488ad91ee064d442db18849afe83c778,kubernetes.io/config.seen: 2024-11-04T10:52:51.586465394Z,kubernetes.io/config.source: file,},RuntimeHandler:,}
,&PodSandbox{Id:c376c65bb2b6ba1d92a006e61c82e1ca033b12c8a5bfc737dbac753ed4190360,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-931571,Uid:d7bfae2f58ae7de463dba4b274c633ef,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1730717572043120237,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-931571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7bfae2f58ae7de463dba4b274c633ef,},Annotations:map[string]string{kubernetes.io/config.hash: d7bfae2f58ae7de463dba4b274c633ef,kubernetes.io/config.seen: 2024-11-04T10:52:51.586472105Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:76529e2f353a6384d08c629e08edb56d628147ffb7c9b12a3b4fd7f6b94b2b61,Metadata:&PodSandboxMetadata{Name:etcd-ha-931571,Uid:bdade1472bd07799de85a7bf300c651f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1730717572038397332,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-931571,io
.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bdade1472bd07799de85a7bf300c651f,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.67:2379,kubernetes.io/config.hash: bdade1472bd07799de85a7bf300c651f,kubernetes.io/config.seen: 2024-11-04T10:52:51.586473075Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=dbce3b04-7364-4fd0-9d15-2303da4a7003 name=/runtime.v1.RuntimeService/ListPodSandbox
	Nov 04 10:59:32 ha-931571 crio[659]: time="2024-11-04 10:59:32.786689644Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ee6563a2-79fb-4783-9eae-d65f862e8720 name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 10:59:32 ha-931571 crio[659]: time="2024-11-04 10:59:32.786743615Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ee6563a2-79fb-4783-9eae-d65f862e8720 name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 10:59:32 ha-931571 crio[659]: time="2024-11-04 10:59:32.786965689Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ecc02a44b9547818a8aaa2b603bb97e4465acb589e9938089cc84862bb537651,PodSandboxId:ca422d1f835b462e7c44e7832053f6b8843511d5eeba3ced31c8b0b6f51661ee,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1730717733201575265,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-nslmz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 68017266-8187-488d-ab36-2a5af294fa2e,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:400aa38b5335627cc08143a9b2a5627b7fee85d555c2cc40a536ce98a76dc457,PodSandboxId:c6e22705ccc1865b8bc5effb151c1f9d726558ad88b6a3bcf86428c0e051f88a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730717598667544377,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-s9wb4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd497087-82a1-4173-a1ca-87f47225cd80,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49e75724c5eada14bded615bd1ba93f602ae8bcdf4c5ed95b646991a79dc403c,PodSandboxId:bcbca8745afa774e9251a00635a6a08e6f86c862db07fa69ac81ee2c0b157967,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730717598624298430,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-5ss4v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
b1994bcf-ce9e-4a5e-90e0-5f3e284218f4,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8efbd7a72ea51074ffa14c6c164b0072c5d57e24d1bd5b6d1a123aa8216069c,PodSandboxId:b15baa796a09ec04b514d2061ed59422516c1f7e4439ba3fcbebb73cbd3afa05,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1730717598609872957,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3eb09a1d-0033-428a-a305-aa2901b20566,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4401315f385bf2a6e5d875c655d4d61ff68d76ccf428956355ffd500a9ce2bc0,PodSandboxId:220337aaf496c29271e7e054b3cdfea66b7c252c48cb49a49e7654fb61d21a91,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5,State:CO
NTAINER_RUNNING,CreatedAt:1730717587083622058,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2n2ws,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f43095ed-404a-4c99-a271-a8c7fb6a3559,},Annotations:map[string]string{io.kubernetes.container.hash: e48ee0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e592fe17c5f71c11cf25a871efff88360e0232b8ea66e460109b68f40581ba8,PodSandboxId:88e06a89dd6f22e1089e72d0e95bb740d4472413789aed6751e5201c34bce07d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730717583
914338539,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bvk6r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f293726-a3a3-4398-9b70-ca8f83c66d7c,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e50ab0290e7c2f22e6df511793236d28a6e0f3e1d5dbdcdd2e3447e4c26c2e6c,PodSandboxId:b36f0d25b985ad35c72d61e5d419af4761c0ed5584860b2c0eda0017653cfaa5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730717572302806843,Labels:map[string]s
tring{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-931571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04abf0ed929591b9a922eba9b45e06b4,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4572c8bcb28cdf71917ee1df07e150610c3e183aaa1243eb84ab3c083f31f7bc,PodSandboxId:9659e6073c7aea4a2bc7bbd2bc5081cfaf29c86595120748fa2b6d637cfd0405,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730717572280739492,Labels:map[string]string{io.kub
ernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-931571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4685ec45b7a2365863fd185bc1066ff5,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82e4be064be10644428d59bf1bc4467a8666cf78ec7b830a51e614de7c4b3150,PodSandboxId:d779a632ccdcabf2a834569e1b03676bb2cb2ecac031cdb417048bfd227afd27,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730717572221533934,Labels:map[string]string{io.kubern
etes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-931571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 488ad91ee064d442db18849afe83c778,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2d32daf142ba5b73e2f0491ea4ad911e8456739f191faf1cc001827eb91790c,PodSandboxId:76529e2f353a6384d08c629e08edb56d628147ffb7c9b12a3b4fd7f6b94b2b61,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730717572176692911,Labels:map[string]string{io.kubernetes.container.name: etcd,io.ku
bernetes.pod.name: etcd-ha-931571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bdade1472bd07799de85a7bf300c651f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ee6563a2-79fb-4783-9eae-d65f862e8720 name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 10:59:32 ha-931571 crio[659]: time="2024-11-04 10:59:32.813400782Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=14de45e2-22ba-47a9-a2d1-09360a8a6ef8 name=/runtime.v1.RuntimeService/Version
	Nov 04 10:59:32 ha-931571 crio[659]: time="2024-11-04 10:59:32.813757488Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=14de45e2-22ba-47a9-a2d1-09360a8a6ef8 name=/runtime.v1.RuntimeService/Version
	Nov 04 10:59:32 ha-931571 crio[659]: time="2024-11-04 10:59:32.814890061Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b5d956b6-076b-4660-9315-ecec957b7c58 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 04 10:59:32 ha-931571 crio[659]: time="2024-11-04 10:59:32.815337762Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730717972815318219,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156098,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b5d956b6-076b-4660-9315-ecec957b7c58 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 04 10:59:32 ha-931571 crio[659]: time="2024-11-04 10:59:32.815864406Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3a1bcb70-fce2-4298-a18c-d4d16f4499f6 name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 10:59:32 ha-931571 crio[659]: time="2024-11-04 10:59:32.815949579Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3a1bcb70-fce2-4298-a18c-d4d16f4499f6 name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 10:59:32 ha-931571 crio[659]: time="2024-11-04 10:59:32.816197561Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:801830521b8c68ec780508f94b0f3d8c52a6c3d5458328e719c2ce3178c47cc3,PodSandboxId:c376c65bb2b6ba1d92a006e61c82e1ca033b12c8a5bfc737dbac753ed4190360,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:7,},Image:&ImageSpec{Image:77fa55e9c991e50f69a8af41fbbbe0cd8a6fa6fd87327b07ed933c1c02a4f488,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:77fa55e9c991e50f69a8af41fbbbe0cd8a6fa6fd87327b07ed933c1c02a4f488,State:CONTAINER_EXITED,CreatedAt:1730717933792975882,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-931571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7bfae2f58ae7de463dba4b274c633ef,},Annotations:map[string]string{io.kubernetes.container.hash: 633bdfb,io.kubernetes.container.restartCount: 7,io.kubernetes.container.terminationMessagePath: /dev/te
rmination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ecc02a44b9547818a8aaa2b603bb97e4465acb589e9938089cc84862bb537651,PodSandboxId:ca422d1f835b462e7c44e7832053f6b8843511d5eeba3ced31c8b0b6f51661ee,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1730717733201575265,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-nslmz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 68017266-8187-488d-ab36-2a5af294fa2e,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/ter
mination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:400aa38b5335627cc08143a9b2a5627b7fee85d555c2cc40a536ce98a76dc457,PodSandboxId:c6e22705ccc1865b8bc5effb151c1f9d726558ad88b6a3bcf86428c0e051f88a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730717598667544377,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-s9wb4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd497087-82a1-4173-a1ca-87f47225cd80,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerP
ort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49e75724c5eada14bded615bd1ba93f602ae8bcdf4c5ed95b646991a79dc403c,PodSandboxId:bcbca8745afa774e9251a00635a6a08e6f86c862db07fa69ac81ee2c0b157967,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730717598624298430,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-5ss4v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1994bcf-ce9e-4a5e-90e0-5f3e284218f4,},A
nnotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8efbd7a72ea51074ffa14c6c164b0072c5d57e24d1bd5b6d1a123aa8216069c,PodSandboxId:b15baa796a09ec04b514d2061ed59422516c1f7e4439ba3fcbebb73cbd3afa05,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730717598609872957,Labels:ma
p[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3eb09a1d-0033-428a-a305-aa2901b20566,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4401315f385bf2a6e5d875c655d4d61ff68d76ccf428956355ffd500a9ce2bc0,PodSandboxId:220337aaf496c29271e7e054b3cdfea66b7c252c48cb49a49e7654fb61d21a91,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5,State:CONTAINER_RUNNING,CreatedAt:173071758708362
2058,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2n2ws,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f43095ed-404a-4c99-a271-a8c7fb6a3559,},Annotations:map[string]string{io.kubernetes.container.hash: e48ee0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e592fe17c5f71c11cf25a871efff88360e0232b8ea66e460109b68f40581ba8,PodSandboxId:88e06a89dd6f22e1089e72d0e95bb740d4472413789aed6751e5201c34bce07d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730717583914338539,Labels:map[string]string{io.kub
ernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bvk6r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f293726-a3a3-4398-9b70-ca8f83c66d7c,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e50ab0290e7c2f22e6df511793236d28a6e0f3e1d5dbdcdd2e3447e4c26c2e6c,PodSandboxId:b36f0d25b985ad35c72d61e5d419af4761c0ed5584860b2c0eda0017653cfaa5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730717572302806843,Labels:map[string]string{io.kubernetes.container.name: kube-
scheduler,io.kubernetes.pod.name: kube-scheduler-ha-931571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04abf0ed929591b9a922eba9b45e06b4,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4572c8bcb28cdf71917ee1df07e150610c3e183aaa1243eb84ab3c083f31f7bc,PodSandboxId:9659e6073c7aea4a2bc7bbd2bc5081cfaf29c86595120748fa2b6d637cfd0405,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730717572280739492,Labels:map[string]string{io.kubernetes.container.name: kube-controller-m
anager,io.kubernetes.pod.name: kube-controller-manager-ha-931571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4685ec45b7a2365863fd185bc1066ff5,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82e4be064be10644428d59bf1bc4467a8666cf78ec7b830a51e614de7c4b3150,PodSandboxId:d779a632ccdcabf2a834569e1b03676bb2cb2ecac031cdb417048bfd227afd27,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730717572221533934,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.ku
bernetes.pod.name: kube-apiserver-ha-931571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 488ad91ee064d442db18849afe83c778,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2d32daf142ba5b73e2f0491ea4ad911e8456739f191faf1cc001827eb91790c,PodSandboxId:76529e2f353a6384d08c629e08edb56d628147ffb7c9b12a3b4fd7f6b94b2b61,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730717572176692911,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-931571,io.kube
rnetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bdade1472bd07799de85a7bf300c651f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3a1bcb70-fce2-4298-a18c-d4d16f4499f6 name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 10:59:32 ha-931571 crio[659]: time="2024-11-04 10:59:32.853914442Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ebd8dbc4-7b72-4a79-8d92-8fbde156ba12 name=/runtime.v1.RuntimeService/Version
	Nov 04 10:59:32 ha-931571 crio[659]: time="2024-11-04 10:59:32.854045749Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ebd8dbc4-7b72-4a79-8d92-8fbde156ba12 name=/runtime.v1.RuntimeService/Version
	Nov 04 10:59:32 ha-931571 crio[659]: time="2024-11-04 10:59:32.855133108Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=523f9bae-755c-4dc4-826a-a8a9fcf0254d name=/runtime.v1.ImageService/ImageFsInfo
	Nov 04 10:59:32 ha-931571 crio[659]: time="2024-11-04 10:59:32.855566050Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730717972855542370,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156098,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=523f9bae-755c-4dc4-826a-a8a9fcf0254d name=/runtime.v1.ImageService/ImageFsInfo
	Nov 04 10:59:32 ha-931571 crio[659]: time="2024-11-04 10:59:32.856373450Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=24e48232-f7c2-425c-a7e6-ac7afa447e7b name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 10:59:32 ha-931571 crio[659]: time="2024-11-04 10:59:32.856428167Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=24e48232-f7c2-425c-a7e6-ac7afa447e7b name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 10:59:32 ha-931571 crio[659]: time="2024-11-04 10:59:32.856649327Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:801830521b8c68ec780508f94b0f3d8c52a6c3d5458328e719c2ce3178c47cc3,PodSandboxId:c376c65bb2b6ba1d92a006e61c82e1ca033b12c8a5bfc737dbac753ed4190360,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:7,},Image:&ImageSpec{Image:77fa55e9c991e50f69a8af41fbbbe0cd8a6fa6fd87327b07ed933c1c02a4f488,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:77fa55e9c991e50f69a8af41fbbbe0cd8a6fa6fd87327b07ed933c1c02a4f488,State:CONTAINER_EXITED,CreatedAt:1730717933792975882,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-931571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7bfae2f58ae7de463dba4b274c633ef,},Annotations:map[string]string{io.kubernetes.container.hash: 633bdfb,io.kubernetes.container.restartCount: 7,io.kubernetes.container.terminationMessagePath: /dev/te
rmination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ecc02a44b9547818a8aaa2b603bb97e4465acb589e9938089cc84862bb537651,PodSandboxId:ca422d1f835b462e7c44e7832053f6b8843511d5eeba3ced31c8b0b6f51661ee,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1730717733201575265,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-nslmz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 68017266-8187-488d-ab36-2a5af294fa2e,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/ter
mination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:400aa38b5335627cc08143a9b2a5627b7fee85d555c2cc40a536ce98a76dc457,PodSandboxId:c6e22705ccc1865b8bc5effb151c1f9d726558ad88b6a3bcf86428c0e051f88a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730717598667544377,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-s9wb4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd497087-82a1-4173-a1ca-87f47225cd80,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerP
ort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49e75724c5eada14bded615bd1ba93f602ae8bcdf4c5ed95b646991a79dc403c,PodSandboxId:bcbca8745afa774e9251a00635a6a08e6f86c862db07fa69ac81ee2c0b157967,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730717598624298430,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-5ss4v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1994bcf-ce9e-4a5e-90e0-5f3e284218f4,},A
nnotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8efbd7a72ea51074ffa14c6c164b0072c5d57e24d1bd5b6d1a123aa8216069c,PodSandboxId:b15baa796a09ec04b514d2061ed59422516c1f7e4439ba3fcbebb73cbd3afa05,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730717598609872957,Labels:ma
p[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3eb09a1d-0033-428a-a305-aa2901b20566,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4401315f385bf2a6e5d875c655d4d61ff68d76ccf428956355ffd500a9ce2bc0,PodSandboxId:220337aaf496c29271e7e054b3cdfea66b7c252c48cb49a49e7654fb61d21a91,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5,State:CONTAINER_RUNNING,CreatedAt:173071758708362
2058,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2n2ws,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f43095ed-404a-4c99-a271-a8c7fb6a3559,},Annotations:map[string]string{io.kubernetes.container.hash: e48ee0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e592fe17c5f71c11cf25a871efff88360e0232b8ea66e460109b68f40581ba8,PodSandboxId:88e06a89dd6f22e1089e72d0e95bb740d4472413789aed6751e5201c34bce07d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730717583914338539,Labels:map[string]string{io.kub
ernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bvk6r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f293726-a3a3-4398-9b70-ca8f83c66d7c,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e50ab0290e7c2f22e6df511793236d28a6e0f3e1d5dbdcdd2e3447e4c26c2e6c,PodSandboxId:b36f0d25b985ad35c72d61e5d419af4761c0ed5584860b2c0eda0017653cfaa5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730717572302806843,Labels:map[string]string{io.kubernetes.container.name: kube-
scheduler,io.kubernetes.pod.name: kube-scheduler-ha-931571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04abf0ed929591b9a922eba9b45e06b4,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4572c8bcb28cdf71917ee1df07e150610c3e183aaa1243eb84ab3c083f31f7bc,PodSandboxId:9659e6073c7aea4a2bc7bbd2bc5081cfaf29c86595120748fa2b6d637cfd0405,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730717572280739492,Labels:map[string]string{io.kubernetes.container.name: kube-controller-m
anager,io.kubernetes.pod.name: kube-controller-manager-ha-931571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4685ec45b7a2365863fd185bc1066ff5,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82e4be064be10644428d59bf1bc4467a8666cf78ec7b830a51e614de7c4b3150,PodSandboxId:d779a632ccdcabf2a834569e1b03676bb2cb2ecac031cdb417048bfd227afd27,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730717572221533934,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.ku
bernetes.pod.name: kube-apiserver-ha-931571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 488ad91ee064d442db18849afe83c778,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2d32daf142ba5b73e2f0491ea4ad911e8456739f191faf1cc001827eb91790c,PodSandboxId:76529e2f353a6384d08c629e08edb56d628147ffb7c9b12a3b4fd7f6b94b2b61,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730717572176692911,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-931571,io.kube
rnetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bdade1472bd07799de85a7bf300c651f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=24e48232-f7c2-425c-a7e6-ac7afa447e7b name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 10:59:32 ha-931571 crio[659]: time="2024-11-04 10:59:32.900850124Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b3c28bb3-908e-4f96-93be-0372b317f8a7 name=/runtime.v1.RuntimeService/Version
	Nov 04 10:59:32 ha-931571 crio[659]: time="2024-11-04 10:59:32.900938986Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b3c28bb3-908e-4f96-93be-0372b317f8a7 name=/runtime.v1.RuntimeService/Version
	Nov 04 10:59:32 ha-931571 crio[659]: time="2024-11-04 10:59:32.902082356Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=cbc4d8c9-493f-44b8-82d6-831c572ac6bb name=/runtime.v1.ImageService/ImageFsInfo
	Nov 04 10:59:32 ha-931571 crio[659]: time="2024-11-04 10:59:32.902596323Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730717972902569809,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156098,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cbc4d8c9-493f-44b8-82d6-831c572ac6bb name=/runtime.v1.ImageService/ImageFsInfo
	Nov 04 10:59:32 ha-931571 crio[659]: time="2024-11-04 10:59:32.903296716Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ec7b5cf6-dde2-4244-8a81-6b68b6492c79 name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 10:59:32 ha-931571 crio[659]: time="2024-11-04 10:59:32.903375776Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ec7b5cf6-dde2-4244-8a81-6b68b6492c79 name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 10:59:32 ha-931571 crio[659]: time="2024-11-04 10:59:32.903727082Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:801830521b8c68ec780508f94b0f3d8c52a6c3d5458328e719c2ce3178c47cc3,PodSandboxId:c376c65bb2b6ba1d92a006e61c82e1ca033b12c8a5bfc737dbac753ed4190360,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:7,},Image:&ImageSpec{Image:77fa55e9c991e50f69a8af41fbbbe0cd8a6fa6fd87327b07ed933c1c02a4f488,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:77fa55e9c991e50f69a8af41fbbbe0cd8a6fa6fd87327b07ed933c1c02a4f488,State:CONTAINER_EXITED,CreatedAt:1730717933792975882,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-931571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7bfae2f58ae7de463dba4b274c633ef,},Annotations:map[string]string{io.kubernetes.container.hash: 633bdfb,io.kubernetes.container.restartCount: 7,io.kubernetes.container.terminationMessagePath: /dev/te
rmination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ecc02a44b9547818a8aaa2b603bb97e4465acb589e9938089cc84862bb537651,PodSandboxId:ca422d1f835b462e7c44e7832053f6b8843511d5eeba3ced31c8b0b6f51661ee,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1730717733201575265,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-nslmz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 68017266-8187-488d-ab36-2a5af294fa2e,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/ter
mination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:400aa38b5335627cc08143a9b2a5627b7fee85d555c2cc40a536ce98a76dc457,PodSandboxId:c6e22705ccc1865b8bc5effb151c1f9d726558ad88b6a3bcf86428c0e051f88a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730717598667544377,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-s9wb4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd497087-82a1-4173-a1ca-87f47225cd80,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerP
ort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49e75724c5eada14bded615bd1ba93f602ae8bcdf4c5ed95b646991a79dc403c,PodSandboxId:bcbca8745afa774e9251a00635a6a08e6f86c862db07fa69ac81ee2c0b157967,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730717598624298430,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-5ss4v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1994bcf-ce9e-4a5e-90e0-5f3e284218f4,},A
nnotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8efbd7a72ea51074ffa14c6c164b0072c5d57e24d1bd5b6d1a123aa8216069c,PodSandboxId:b15baa796a09ec04b514d2061ed59422516c1f7e4439ba3fcbebb73cbd3afa05,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730717598609872957,Labels:ma
p[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3eb09a1d-0033-428a-a305-aa2901b20566,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4401315f385bf2a6e5d875c655d4d61ff68d76ccf428956355ffd500a9ce2bc0,PodSandboxId:220337aaf496c29271e7e054b3cdfea66b7c252c48cb49a49e7654fb61d21a91,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5,State:CONTAINER_RUNNING,CreatedAt:173071758708362
2058,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2n2ws,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f43095ed-404a-4c99-a271-a8c7fb6a3559,},Annotations:map[string]string{io.kubernetes.container.hash: e48ee0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e592fe17c5f71c11cf25a871efff88360e0232b8ea66e460109b68f40581ba8,PodSandboxId:88e06a89dd6f22e1089e72d0e95bb740d4472413789aed6751e5201c34bce07d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730717583914338539,Labels:map[string]string{io.kub
ernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bvk6r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f293726-a3a3-4398-9b70-ca8f83c66d7c,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e50ab0290e7c2f22e6df511793236d28a6e0f3e1d5dbdcdd2e3447e4c26c2e6c,PodSandboxId:b36f0d25b985ad35c72d61e5d419af4761c0ed5584860b2c0eda0017653cfaa5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730717572302806843,Labels:map[string]string{io.kubernetes.container.name: kube-
scheduler,io.kubernetes.pod.name: kube-scheduler-ha-931571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04abf0ed929591b9a922eba9b45e06b4,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4572c8bcb28cdf71917ee1df07e150610c3e183aaa1243eb84ab3c083f31f7bc,PodSandboxId:9659e6073c7aea4a2bc7bbd2bc5081cfaf29c86595120748fa2b6d637cfd0405,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730717572280739492,Labels:map[string]string{io.kubernetes.container.name: kube-controller-m
anager,io.kubernetes.pod.name: kube-controller-manager-ha-931571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4685ec45b7a2365863fd185bc1066ff5,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82e4be064be10644428d59bf1bc4467a8666cf78ec7b830a51e614de7c4b3150,PodSandboxId:d779a632ccdcabf2a834569e1b03676bb2cb2ecac031cdb417048bfd227afd27,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730717572221533934,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.ku
bernetes.pod.name: kube-apiserver-ha-931571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 488ad91ee064d442db18849afe83c778,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2d32daf142ba5b73e2f0491ea4ad911e8456739f191faf1cc001827eb91790c,PodSandboxId:76529e2f353a6384d08c629e08edb56d628147ffb7c9b12a3b4fd7f6b94b2b61,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730717572176692911,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-931571,io.kube
rnetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bdade1472bd07799de85a7bf300c651f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ec7b5cf6-dde2-4244-8a81-6b68b6492c79 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	801830521b8c6       77fa55e9c991e50f69a8af41fbbbe0cd8a6fa6fd87327b07ed933c1c02a4f488                                      39 seconds ago      Exited              kube-vip                  7                   c376c65bb2b6b       kube-vip-ha-931571
	ecc02a44b9547       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   ca422d1f835b4       busybox-7dff88458-nslmz
	400aa38b53356       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   c6e22705ccc18       coredns-7c65d6cfc9-s9wb4
	49e75724c5ead       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   bcbca8745afa7       coredns-7c65d6cfc9-5ss4v
	f8efbd7a72ea5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   b15baa796a09e       storage-provisioner
	4401315f385bf       docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16    6 minutes ago       Running             kindnet-cni               0                   220337aaf496c       kindnet-2n2ws
	6e592fe17c5f7       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                      6 minutes ago       Running             kube-proxy                0                   88e06a89dd6f2       kube-proxy-bvk6r
	e50ab0290e7c2       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                      6 minutes ago       Running             kube-scheduler            0                   b36f0d25b985a       kube-scheduler-ha-931571
	4572c8bcb28cd       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                      6 minutes ago       Running             kube-controller-manager   0                   9659e6073c7ae       kube-controller-manager-ha-931571
	82e4be064be10       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                      6 minutes ago       Running             kube-apiserver            0                   d779a632ccdca       kube-apiserver-ha-931571
	f2d32daf142ba       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   76529e2f353a6       etcd-ha-931571
	
	
	==> coredns [400aa38b5335627cc08143a9b2a5627b7fee85d555c2cc40a536ce98a76dc457] <==
	[INFO] 10.244.0.4:50237 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000150549s
	[INFO] 10.244.0.4:46253 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001843568s
	[INFO] 10.244.0.4:55713 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000184256s
	[INFO] 10.244.0.4:40615 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001215052s
	[INFO] 10.244.0.4:48280 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000078576s
	[INFO] 10.244.0.4:54787 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000130955s
	[INFO] 10.244.1.2:58741 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002139116s
	[INFO] 10.244.1.2:37960 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000110836s
	[INFO] 10.244.1.2:58623 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000109212s
	[INFO] 10.244.1.2:51618 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00158249s
	[INFO] 10.244.1.2:43015 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000087484s
	[INFO] 10.244.1.2:39492 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000171988s
	[INFO] 10.244.2.2:48038 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000132123s
	[INFO] 10.244.0.4:35814 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000180509s
	[INFO] 10.244.0.4:60410 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000089999s
	[INFO] 10.244.0.4:47053 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000039998s
	[INFO] 10.244.1.2:58250 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000164547s
	[INFO] 10.244.1.2:52533 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000169574s
	[INFO] 10.244.2.2:44494 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000181065s
	[INFO] 10.244.2.2:58013 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00023451s
	[INFO] 10.244.2.2:52479 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000131262s
	[INFO] 10.244.0.4:40569 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000209971s
	[INFO] 10.244.0.4:39524 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000112991s
	[INFO] 10.244.0.4:47233 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000143713s
	[INFO] 10.244.1.2:40992 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000169174s
	
	
	==> coredns [49e75724c5eada14bded615bd1ba93f602ae8bcdf4c5ed95b646991a79dc403c] <==
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:48964 - 23647 "HINFO IN 8987446281611230695.8255749056578627230. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.085188681s
	[INFO] 10.244.2.2:34961 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.003596703s
	[INFO] 10.244.0.4:37004 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.00010865s
	[INFO] 10.244.0.4:53184 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001905017s
	[INFO] 10.244.1.2:58428 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000083838s
	[INFO] 10.244.1.2:60855 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001943834s
	[INFO] 10.244.2.2:42530 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000210297s
	[INFO] 10.244.2.2:45691 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000254098s
	[INFO] 10.244.2.2:54453 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000116752s
	[INFO] 10.244.0.4:49389 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000239128s
	[INFO] 10.244.0.4:50445 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000078508s
	[INFO] 10.244.1.2:33136 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000123784s
	[INFO] 10.244.1.2:60974 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000079916s
	[INFO] 10.244.2.2:49080 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000171041s
	[INFO] 10.244.2.2:43340 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000142924s
	[INFO] 10.244.2.2:43789 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000094712s
	[INFO] 10.244.0.4:32943 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000072704s
	[INFO] 10.244.1.2:50464 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000118885s
	[INFO] 10.244.1.2:36951 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000148048s
	[INFO] 10.244.2.2:50644 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000135678s
	[INFO] 10.244.0.4:38496 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0001483s
	[INFO] 10.244.1.2:59424 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000211313s
	[INFO] 10.244.1.2:33660 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000134208s
	[INFO] 10.244.1.2:34489 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000138513s
	
	
	==> describe nodes <==
	Name:               ha-931571
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-931571
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c03dd974d73f8853a1a57928c124797a5ae24dc4
	                    minikube.k8s.io/name=ha-931571
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_11_04T10_52_59_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 04 Nov 2024 10:52:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-931571
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 04 Nov 2024 10:59:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 04 Nov 2024 10:56:02 +0000   Mon, 04 Nov 2024 10:52:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 04 Nov 2024 10:56:02 +0000   Mon, 04 Nov 2024 10:52:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 04 Nov 2024 10:56:02 +0000   Mon, 04 Nov 2024 10:52:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 04 Nov 2024 10:56:02 +0000   Mon, 04 Nov 2024 10:53:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.67
	  Hostname:    ha-931571
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 5397aa0c862f4705b75b9757490651ea
	  System UUID:                5397aa0c-862f-4705-b75b-9757490651ea
	  Boot ID:                    17751c92-c71f-4e82-afb4-12da82035155
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-nslmz              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m3s
	  kube-system                 coredns-7c65d6cfc9-5ss4v             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m30s
	  kube-system                 coredns-7c65d6cfc9-s9wb4             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m30s
	  kube-system                 etcd-ha-931571                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m35s
	  kube-system                 kindnet-2n2ws                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m30s
	  kube-system                 kube-apiserver-ha-931571             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m35s
	  kube-system                 kube-controller-manager-ha-931571    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m35s
	  kube-system                 kube-proxy-bvk6r                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m30s
	  kube-system                 kube-scheduler-ha-931571             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m35s
	  kube-system                 kube-vip-ha-931571                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m35s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m28s  kube-proxy       
	  Normal  Starting                 6m35s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m35s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m35s  kubelet          Node ha-931571 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m35s  kubelet          Node ha-931571 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m35s  kubelet          Node ha-931571 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m31s  node-controller  Node ha-931571 event: Registered Node ha-931571 in Controller
	  Normal  NodeReady                6m15s  kubelet          Node ha-931571 status is now: NodeReady
	  Normal  RegisteredNode           5m36s  node-controller  Node ha-931571 event: Registered Node ha-931571 in Controller
	  Normal  RegisteredNode           4m22s  node-controller  Node ha-931571 event: Registered Node ha-931571 in Controller
	
	
	Name:               ha-931571-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-931571-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c03dd974d73f8853a1a57928c124797a5ae24dc4
	                    minikube.k8s.io/name=ha-931571
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_11_04T10_53_52_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 04 Nov 2024 10:53:49 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-931571-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 04 Nov 2024 10:56:38 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 04 Nov 2024 10:55:52 +0000   Mon, 04 Nov 2024 10:57:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 04 Nov 2024 10:55:52 +0000   Mon, 04 Nov 2024 10:57:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 04 Nov 2024 10:55:52 +0000   Mon, 04 Nov 2024 10:57:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 04 Nov 2024 10:55:52 +0000   Mon, 04 Nov 2024 10:57:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.245
	  Hostname:    ha-931571-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 06772ff96588423e9dc77ed49845e534
	  System UUID:                06772ff9-6588-423e-9dc7-7ed49845e534
	  Boot ID:                    74d940a3-5941-40ed-b058-45da0bd2f171
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-w9wmp                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m3s
	  kube-system                 etcd-ha-931571-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m42s
	  kube-system                 kindnet-bg4z6                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m44s
	  kube-system                 kube-apiserver-ha-931571-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m43s
	  kube-system                 kube-controller-manager-ha-931571-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m43s
	  kube-system                 kube-proxy-wz92s                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m44s
	  kube-system                 kube-scheduler-ha-931571-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m43s
	  kube-system                 kube-vip-ha-931571-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m40s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m39s                  kube-proxy       
	  Normal  Starting                 5m44s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m44s (x8 over 5m44s)  kubelet          Node ha-931571-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m44s (x8 over 5m44s)  kubelet          Node ha-931571-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m44s (x7 over 5m44s)  kubelet          Node ha-931571-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m44s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m41s                  node-controller  Node ha-931571-m02 event: Registered Node ha-931571-m02 in Controller
	  Normal  RegisteredNode           5m36s                  node-controller  Node ha-931571-m02 event: Registered Node ha-931571-m02 in Controller
	  Normal  RegisteredNode           4m22s                  node-controller  Node ha-931571-m02 event: Registered Node ha-931571-m02 in Controller
	  Normal  NodeNotReady             2m12s                  node-controller  Node ha-931571-m02 status is now: NodeNotReady
	
	
	Name:               ha-931571-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-931571-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c03dd974d73f8853a1a57928c124797a5ae24dc4
	                    minikube.k8s.io/name=ha-931571
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_11_04T10_55_05_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 04 Nov 2024 10:55:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-931571-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 04 Nov 2024 10:59:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 04 Nov 2024 10:56:04 +0000   Mon, 04 Nov 2024 10:55:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 04 Nov 2024 10:56:04 +0000   Mon, 04 Nov 2024 10:55:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 04 Nov 2024 10:56:04 +0000   Mon, 04 Nov 2024 10:55:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 04 Nov 2024 10:56:04 +0000   Mon, 04 Nov 2024 10:55:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.57
	  Hostname:    ha-931571-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 b21e133cd17b4b699323cc6d9f47f565
	  System UUID:                b21e133c-d17b-4b69-9323-cc6d9f47f565
	  Boot ID:                    50ec73f3-3253-4df5-83ed-277786faa385
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-lqgb9                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m3s
	  kube-system                 etcd-ha-931571-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m28s
	  kube-system                 kindnet-w2jwt                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m30s
	  kube-system                 kube-apiserver-ha-931571-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m30s
	  kube-system                 kube-controller-manager-ha-931571-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m30s
	  kube-system                 kube-proxy-ttq4z                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m30s
	  kube-system                 kube-scheduler-ha-931571-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m30s
	  kube-system                 kube-vip-ha-931571-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m26s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  4m31s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  CIDRAssignmentFailed     4m30s                  cidrAllocator    Node ha-931571-m03 status is now: CIDRAssignmentFailed
	  Normal  NodeHasSufficientMemory  4m30s (x8 over 4m31s)  kubelet          Node ha-931571-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m30s (x8 over 4m31s)  kubelet          Node ha-931571-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m30s (x7 over 4m31s)  kubelet          Node ha-931571-m03 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m26s                  node-controller  Node ha-931571-m03 event: Registered Node ha-931571-m03 in Controller
	  Normal  RegisteredNode           4m26s                  node-controller  Node ha-931571-m03 event: Registered Node ha-931571-m03 in Controller
	  Normal  RegisteredNode           4m22s                  node-controller  Node ha-931571-m03 event: Registered Node ha-931571-m03 in Controller
	
	
	Name:               ha-931571-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-931571-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c03dd974d73f8853a1a57928c124797a5ae24dc4
	                    minikube.k8s.io/name=ha-931571
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_11_04T10_56_07_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 04 Nov 2024 10:56:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-931571-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 04 Nov 2024 10:59:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 04 Nov 2024 10:56:36 +0000   Mon, 04 Nov 2024 10:56:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 04 Nov 2024 10:56:36 +0000   Mon, 04 Nov 2024 10:56:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 04 Nov 2024 10:56:36 +0000   Mon, 04 Nov 2024 10:56:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 04 Nov 2024 10:56:36 +0000   Mon, 04 Nov 2024 10:56:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.237
	  Hostname:    ha-931571-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 851b57db90dc4e65909090eed2536ea8
	  System UUID:                851b57db-90dc-4e65-9090-90eed2536ea8
	  Boot ID:                    be99e848-d7b5-4c3a-990d-5dd7890c841c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-x8ptv       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m27s
	  kube-system                 kube-proxy-s8gg7    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m21s                  kube-proxy       
	  Normal  CIDRAssignmentFailed     3m27s                  cidrAllocator    Node ha-931571-m04 status is now: CIDRAssignmentFailed
	  Normal  NodeHasSufficientMemory  3m27s (x2 over 3m27s)  kubelet          Node ha-931571-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m27s (x2 over 3m27s)  kubelet          Node ha-931571-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m27s (x2 over 3m27s)  kubelet          Node ha-931571-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m27s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m26s                  node-controller  Node ha-931571-m04 event: Registered Node ha-931571-m04 in Controller
	  Normal  RegisteredNode           3m26s                  node-controller  Node ha-931571-m04 event: Registered Node ha-931571-m04 in Controller
	  Normal  RegisteredNode           3m22s                  node-controller  Node ha-931571-m04 event: Registered Node ha-931571-m04 in Controller
	  Normal  NodeReady                3m7s                   kubelet          Node ha-931571-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov 4 10:52] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.047726] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.036586] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.779631] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.763191] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.537421] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.904587] systemd-fstab-generator[579]: Ignoring "noauto" option for root device
	[  +0.060497] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.062176] systemd-fstab-generator[591]: Ignoring "noauto" option for root device
	[  +0.155966] systemd-fstab-generator[605]: Ignoring "noauto" option for root device
	[  +0.126824] systemd-fstab-generator[617]: Ignoring "noauto" option for root device
	[  +0.243725] systemd-fstab-generator[648]: Ignoring "noauto" option for root device
	[  +3.719760] systemd-fstab-generator[744]: Ignoring "noauto" option for root device
	[  +3.831679] systemd-fstab-generator[874]: Ignoring "noauto" option for root device
	[  +0.057052] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.249250] kauditd_printk_skb: 79 callbacks suppressed
	[  +0.693317] systemd-fstab-generator[1353]: Ignoring "noauto" option for root device
	[Nov 4 10:53] kauditd_printk_skb: 30 callbacks suppressed
	[  +9.046787] kauditd_printk_skb: 41 callbacks suppressed
	[ +27.005860] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [f2d32daf142ba5b73e2f0491ea4ad911e8456739f191faf1cc001827eb91790c] <==
	{"level":"warn","ts":"2024-11-04T10:59:33.016035Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce564ad586a3115","from":"ce564ad586a3115","remote-peer-id":"df641d035a901564","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-04T10:59:33.022822Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce564ad586a3115","from":"ce564ad586a3115","remote-peer-id":"df641d035a901564","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-04T10:59:33.122784Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce564ad586a3115","from":"ce564ad586a3115","remote-peer-id":"df641d035a901564","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-04T10:59:33.172360Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce564ad586a3115","from":"ce564ad586a3115","remote-peer-id":"df641d035a901564","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-04T10:59:33.185191Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce564ad586a3115","from":"ce564ad586a3115","remote-peer-id":"df641d035a901564","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-04T10:59:33.189812Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce564ad586a3115","from":"ce564ad586a3115","remote-peer-id":"df641d035a901564","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-04T10:59:33.206204Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce564ad586a3115","from":"ce564ad586a3115","remote-peer-id":"df641d035a901564","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-04T10:59:33.214057Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce564ad586a3115","from":"ce564ad586a3115","remote-peer-id":"df641d035a901564","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-04T10:59:33.222154Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce564ad586a3115","from":"ce564ad586a3115","remote-peer-id":"df641d035a901564","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-04T10:59:33.222355Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce564ad586a3115","from":"ce564ad586a3115","remote-peer-id":"df641d035a901564","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-04T10:59:33.226558Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce564ad586a3115","from":"ce564ad586a3115","remote-peer-id":"df641d035a901564","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-04T10:59:33.230316Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce564ad586a3115","from":"ce564ad586a3115","remote-peer-id":"df641d035a901564","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-04T10:59:33.242547Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce564ad586a3115","from":"ce564ad586a3115","remote-peer-id":"df641d035a901564","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-04T10:59:33.256302Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.245:2380/version","remote-member-id":"df641d035a901564","error":"Get \"https://192.168.39.245:2380/version\": dial tcp 192.168.39.245:2380: connect: no route to host"}
	{"level":"warn","ts":"2024-11-04T10:59:33.256422Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"df641d035a901564","error":"Get \"https://192.168.39.245:2380/version\": dial tcp 192.168.39.245:2380: connect: no route to host"}
	{"level":"warn","ts":"2024-11-04T10:59:33.321069Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce564ad586a3115","from":"ce564ad586a3115","remote-peer-id":"df641d035a901564","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-04T10:59:33.322545Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce564ad586a3115","from":"ce564ad586a3115","remote-peer-id":"df641d035a901564","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-04T10:59:33.326956Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce564ad586a3115","from":"ce564ad586a3115","remote-peer-id":"df641d035a901564","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-04T10:59:33.331957Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce564ad586a3115","from":"ce564ad586a3115","remote-peer-id":"df641d035a901564","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-04T10:59:33.335644Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce564ad586a3115","from":"ce564ad586a3115","remote-peer-id":"df641d035a901564","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-04T10:59:33.338837Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce564ad586a3115","from":"ce564ad586a3115","remote-peer-id":"df641d035a901564","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-04T10:59:33.342959Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce564ad586a3115","from":"ce564ad586a3115","remote-peer-id":"df641d035a901564","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-04T10:59:33.348640Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce564ad586a3115","from":"ce564ad586a3115","remote-peer-id":"df641d035a901564","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-04T10:59:33.355518Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce564ad586a3115","from":"ce564ad586a3115","remote-peer-id":"df641d035a901564","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-04T10:59:33.371914Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce564ad586a3115","from":"ce564ad586a3115","remote-peer-id":"df641d035a901564","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 10:59:33 up 7 min,  0 users,  load average: 0.18, 0.30, 0.15
	Linux ha-931571 5.10.207 #1 SMP Wed Oct 30 13:38:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [4401315f385bf2a6e5d875c655d4d61ff68d76ccf428956355ffd500a9ce2bc0] <==
	I1104 10:58:57.933029       1 main.go:324] Node ha-931571-m04 has CIDR [10.244.3.0/24] 
	I1104 10:59:07.925895       1 main.go:297] Handling node with IPs: map[192.168.39.57:{}]
	I1104 10:59:07.925959       1 main.go:324] Node ha-931571-m03 has CIDR [10.244.2.0/24] 
	I1104 10:59:07.926150       1 main.go:297] Handling node with IPs: map[192.168.39.237:{}]
	I1104 10:59:07.926172       1 main.go:324] Node ha-931571-m04 has CIDR [10.244.3.0/24] 
	I1104 10:59:07.926258       1 main.go:297] Handling node with IPs: map[192.168.39.67:{}]
	I1104 10:59:07.926276       1 main.go:301] handling current node
	I1104 10:59:07.926287       1 main.go:297] Handling node with IPs: map[192.168.39.245:{}]
	I1104 10:59:07.926292       1 main.go:324] Node ha-931571-m02 has CIDR [10.244.1.0/24] 
	I1104 10:59:17.932116       1 main.go:297] Handling node with IPs: map[192.168.39.67:{}]
	I1104 10:59:17.932223       1 main.go:301] handling current node
	I1104 10:59:17.932253       1 main.go:297] Handling node with IPs: map[192.168.39.245:{}]
	I1104 10:59:17.932271       1 main.go:324] Node ha-931571-m02 has CIDR [10.244.1.0/24] 
	I1104 10:59:17.932486       1 main.go:297] Handling node with IPs: map[192.168.39.57:{}]
	I1104 10:59:17.932519       1 main.go:324] Node ha-931571-m03 has CIDR [10.244.2.0/24] 
	I1104 10:59:17.932614       1 main.go:297] Handling node with IPs: map[192.168.39.237:{}]
	I1104 10:59:17.932635       1 main.go:324] Node ha-931571-m04 has CIDR [10.244.3.0/24] 
	I1104 10:59:27.935299       1 main.go:297] Handling node with IPs: map[192.168.39.67:{}]
	I1104 10:59:27.935328       1 main.go:301] handling current node
	I1104 10:59:27.935342       1 main.go:297] Handling node with IPs: map[192.168.39.245:{}]
	I1104 10:59:27.935346       1 main.go:324] Node ha-931571-m02 has CIDR [10.244.1.0/24] 
	I1104 10:59:27.935514       1 main.go:297] Handling node with IPs: map[192.168.39.57:{}]
	I1104 10:59:27.935519       1 main.go:324] Node ha-931571-m03 has CIDR [10.244.2.0/24] 
	I1104 10:59:27.935599       1 main.go:297] Handling node with IPs: map[192.168.39.237:{}]
	I1104 10:59:27.935604       1 main.go:324] Node ha-931571-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [82e4be064be10644428d59bf1bc4467a8666cf78ec7b830a51e614de7c4b3150] <==
	I1104 10:52:57.529011       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1104 10:52:57.636067       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1104 10:52:58.624832       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1104 10:52:58.639937       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1104 10:52:58.805171       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1104 10:53:03.087294       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I1104 10:53:03.287753       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E1104 10:53:50.685836       1 wrap.go:53] "Timeout or abort while handling" logger="UnhandledError" method="POST" URI="/api/v1/namespaces/kube-system/events" auditID="2a13690c-2b7c-4af7-94a1-2fcd1065da04"
	E1104 10:53:50.685933       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="5.903µs" method="POST" path="/api/v1/namespaces/kube-system/events" result=null
	E1104 10:55:34.753652       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:57932: use of closed network connection
	E1104 10:55:34.925834       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:57948: use of closed network connection
	E1104 10:55:35.093653       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:57972: use of closed network connection
	E1104 10:55:35.274875       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:57992: use of closed network connection
	E1104 10:55:35.447438       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58008: use of closed network connection
	E1104 10:55:35.612882       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58018: use of closed network connection
	E1104 10:55:35.778454       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58044: use of closed network connection
	E1104 10:55:35.949313       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58070: use of closed network connection
	E1104 10:55:36.116046       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58086: use of closed network connection
	E1104 10:55:36.394559       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58120: use of closed network connection
	E1104 10:55:36.560067       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58130: use of closed network connection
	E1104 10:55:36.741903       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58146: use of closed network connection
	E1104 10:55:36.920290       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58160: use of closed network connection
	E1104 10:55:37.097281       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58172: use of closed network connection
	E1104 10:55:37.276505       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58204: use of closed network connection
	W1104 10:57:07.528371       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.57 192.168.39.67]
	
	
	==> kube-controller-manager [4572c8bcb28cdf71917ee1df07e150610c3e183aaa1243eb84ab3c083f31f7bc] <==
	I1104 10:56:02.327738       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-931571"
	I1104 10:56:04.592818       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-931571-m03"
	I1104 10:56:06.541409       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-931571-m04\" does not exist"
	I1104 10:56:06.575948       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-931571-m04" podCIDRs=["10.244.3.0/24"]
	I1104 10:56:06.576008       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-931571-m04"
	I1104 10:56:06.576040       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-931571-m04"
	I1104 10:56:06.730053       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-931571-m04"
	I1104 10:56:07.090693       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-931571-m04"
	I1104 10:56:07.683331       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-931571-m04"
	I1104 10:56:07.724925       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-931571-m04"
	I1104 10:56:11.198433       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-931571-m04"
	I1104 10:56:11.234463       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-931571-m04"
	I1104 10:56:16.862581       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-931571-m04"
	I1104 10:56:26.184815       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-931571-m04"
	I1104 10:56:26.184900       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-931571-m04"
	I1104 10:56:26.200074       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-931571-m04"
	I1104 10:56:26.386370       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-931571-m04"
	I1104 10:56:36.943150       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-931571-m04"
	I1104 10:57:21.411213       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-931571-m04"
	I1104 10:57:21.411471       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-931571-m02"
	I1104 10:57:21.433152       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-931571-m02"
	I1104 10:57:21.545878       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="11.838445ms"
	I1104 10:57:21.546123       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="64.292µs"
	I1104 10:57:22.718407       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-931571-m02"
	I1104 10:57:26.623482       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-931571-m02"
	
	
	==> kube-proxy [6e592fe17c5f71c11cf25a871efff88360e0232b8ea66e460109b68f40581ba8] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1104 10:53:04.203851       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1104 10:53:04.229581       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.67"]
	E1104 10:53:04.229781       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1104 10:53:04.282192       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1104 10:53:04.282221       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1104 10:53:04.282244       1 server_linux.go:169] "Using iptables Proxier"
	I1104 10:53:04.285593       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1104 10:53:04.285958       1 server.go:483] "Version info" version="v1.31.2"
	I1104 10:53:04.285985       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1104 10:53:04.288139       1 config.go:199] "Starting service config controller"
	I1104 10:53:04.288173       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1104 10:53:04.290392       1 config.go:105] "Starting endpoint slice config controller"
	I1104 10:53:04.290557       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1104 10:53:04.291547       1 config.go:328] "Starting node config controller"
	I1104 10:53:04.292932       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1104 10:53:04.389214       1 shared_informer.go:320] Caches are synced for service config
	I1104 10:53:04.391802       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1104 10:53:04.393273       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [e50ab0290e7c2f22e6df511793236d28a6e0f3e1d5dbdcdd2e3447e4c26c2e6c] <==
	W1104 10:52:57.001881       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1104 10:52:57.001927       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1104 10:52:57.141748       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1104 10:52:57.141796       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1104 10:52:57.201248       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1104 10:52:57.201310       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1104 10:52:58.585064       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1104 10:55:30.513828       1 cache.go:503] "Pod was added to a different node than it was assumed" podKey="641f6861-b035-49a8-832b-70b7a069afb3" pod="default/busybox-7dff88458-lqgb9" assumedNode="ha-931571-m03" currentNode="ha-931571-m02"
	E1104 10:55:30.530615       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-lqgb9\": pod busybox-7dff88458-lqgb9 is already assigned to node \"ha-931571-m03\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-lqgb9" node="ha-931571-m02"
	E1104 10:55:30.530773       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 641f6861-b035-49a8-832b-70b7a069afb3(default/busybox-7dff88458-lqgb9) was assumed on ha-931571-m02 but assigned to ha-931571-m03" pod="default/busybox-7dff88458-lqgb9"
	E1104 10:55:30.530821       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-lqgb9\": pod busybox-7dff88458-lqgb9 is already assigned to node \"ha-931571-m03\"" pod="default/busybox-7dff88458-lqgb9"
	I1104 10:55:30.530854       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-lqgb9" node="ha-931571-m03"
	E1104 10:55:30.571464       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-nslmz\": pod busybox-7dff88458-nslmz is already assigned to node \"ha-931571\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-nslmz" node="ha-931571"
	E1104 10:55:30.572521       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 68017266-8187-488d-ab36-2a5af294fa2e(default/busybox-7dff88458-nslmz) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-nslmz"
	E1104 10:55:30.572641       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-nslmz\": pod busybox-7dff88458-nslmz is already assigned to node \"ha-931571\"" pod="default/busybox-7dff88458-nslmz"
	I1104 10:55:30.572740       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-nslmz" node="ha-931571"
	E1104 10:55:30.572411       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-w9wmp\": pod busybox-7dff88458-w9wmp is already assigned to node \"ha-931571-m02\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-w9wmp" node="ha-931571-m02"
	E1104 10:55:30.573133       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 84b6e653-b685-4c00-ac2f-d650738a613b(default/busybox-7dff88458-w9wmp) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-w9wmp"
	E1104 10:55:30.573206       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-w9wmp\": pod busybox-7dff88458-w9wmp is already assigned to node \"ha-931571-m02\"" pod="default/busybox-7dff88458-w9wmp"
	I1104 10:55:30.573228       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-w9wmp" node="ha-931571-m02"
	E1104 10:55:30.792999       1 schedule_one.go:1106] "Error updating pod" err="pods \"busybox-7dff88458-5nt9m\" not found" pod="default/busybox-7dff88458-5nt9m"
	E1104 10:56:06.602004       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-s8gg7\": pod kube-proxy-s8gg7 is already assigned to node \"ha-931571-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-s8gg7" node="ha-931571-m04"
	E1104 10:56:06.602261       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod c786786d-b4b5-4479-b5df-24cc8f346e86(kube-system/kube-proxy-s8gg7) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-s8gg7"
	E1104 10:56:06.602358       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-s8gg7\": pod kube-proxy-s8gg7 is already assigned to node \"ha-931571-m04\"" pod="kube-system/kube-proxy-s8gg7"
	I1104 10:56:06.602540       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-s8gg7" node="ha-931571-m04"
	
	
	==> kubelet <==
	Nov 04 10:58:42 ha-931571 kubelet[1360]: I1104 10:58:42.786581    1360 scope.go:117] "RemoveContainer" containerID="9b0c4137e04d5572b1e0277210028adf86df482f6a6a6a6a724bf176e285ca2f"
	Nov 04 10:58:42 ha-931571 kubelet[1360]: E1104 10:58:42.791316    1360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-vip\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-vip pod=kube-vip-ha-931571_kube-system(d7bfae2f58ae7de463dba4b274c633ef)\"" pod="kube-system/kube-vip-ha-931571" podUID="d7bfae2f58ae7de463dba4b274c633ef"
	Nov 04 10:58:48 ha-931571 kubelet[1360]: E1104 10:58:48.872774    1360 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730717928872476228,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156098,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 04 10:58:48 ha-931571 kubelet[1360]: E1104 10:58:48.872859    1360 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730717928872476228,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156098,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 04 10:58:53 ha-931571 kubelet[1360]: I1104 10:58:53.785072    1360 scope.go:117] "RemoveContainer" containerID="9b0c4137e04d5572b1e0277210028adf86df482f6a6a6a6a724bf176e285ca2f"
	Nov 04 10:58:58 ha-931571 kubelet[1360]: E1104 10:58:58.819237    1360 iptables.go:577] "Could not set up iptables canary" err=<
	Nov 04 10:58:58 ha-931571 kubelet[1360]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Nov 04 10:58:58 ha-931571 kubelet[1360]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 04 10:58:58 ha-931571 kubelet[1360]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 04 10:58:58 ha-931571 kubelet[1360]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Nov 04 10:58:58 ha-931571 kubelet[1360]: E1104 10:58:58.874071    1360 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730717938873867782,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156098,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 04 10:58:58 ha-931571 kubelet[1360]: E1104 10:58:58.874093    1360 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730717938873867782,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156098,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 04 10:59:00 ha-931571 kubelet[1360]: I1104 10:59:00.144622    1360 scope.go:117] "RemoveContainer" containerID="9b0c4137e04d5572b1e0277210028adf86df482f6a6a6a6a724bf176e285ca2f"
	Nov 04 10:59:00 ha-931571 kubelet[1360]: I1104 10:59:00.145089    1360 scope.go:117] "RemoveContainer" containerID="801830521b8c68ec780508f94b0f3d8c52a6c3d5458328e719c2ce3178c47cc3"
	Nov 04 10:59:00 ha-931571 kubelet[1360]: E1104 10:59:00.145270    1360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-vip\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-vip pod=kube-vip-ha-931571_kube-system(d7bfae2f58ae7de463dba4b274c633ef)\"" pod="kube-system/kube-vip-ha-931571" podUID="d7bfae2f58ae7de463dba4b274c633ef"
	Nov 04 10:59:08 ha-931571 kubelet[1360]: E1104 10:59:08.878363    1360 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730717948875635760,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156098,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 04 10:59:08 ha-931571 kubelet[1360]: E1104 10:59:08.878627    1360 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730717948875635760,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156098,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 04 10:59:14 ha-931571 kubelet[1360]: I1104 10:59:14.786026    1360 scope.go:117] "RemoveContainer" containerID="801830521b8c68ec780508f94b0f3d8c52a6c3d5458328e719c2ce3178c47cc3"
	Nov 04 10:59:14 ha-931571 kubelet[1360]: E1104 10:59:14.786168    1360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-vip\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-vip pod=kube-vip-ha-931571_kube-system(d7bfae2f58ae7de463dba4b274c633ef)\"" pod="kube-system/kube-vip-ha-931571" podUID="d7bfae2f58ae7de463dba4b274c633ef"
	Nov 04 10:59:18 ha-931571 kubelet[1360]: E1104 10:59:18.881691    1360 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730717958881254516,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156098,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 04 10:59:18 ha-931571 kubelet[1360]: E1104 10:59:18.881729    1360 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730717958881254516,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156098,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 04 10:59:28 ha-931571 kubelet[1360]: I1104 10:59:28.785774    1360 scope.go:117] "RemoveContainer" containerID="801830521b8c68ec780508f94b0f3d8c52a6c3d5458328e719c2ce3178c47cc3"
	Nov 04 10:59:28 ha-931571 kubelet[1360]: E1104 10:59:28.785992    1360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-vip\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-vip pod=kube-vip-ha-931571_kube-system(d7bfae2f58ae7de463dba4b274c633ef)\"" pod="kube-system/kube-vip-ha-931571" podUID="d7bfae2f58ae7de463dba4b274c633ef"
	Nov 04 10:59:28 ha-931571 kubelet[1360]: E1104 10:59:28.885027    1360 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730717968883394320,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156098,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 04 10:59:28 ha-931571 kubelet[1360]: E1104 10:59:28.885082    1360 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730717968883394320,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156098,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-931571 -n ha-931571
helpers_test.go:261: (dbg) Run:  kubectl --context ha-931571 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (6.56s)
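A minimal sketch for re-running the same post-mortem checks by hand, assuming the ha-931571 profile and kubectl context from this run still exist; the first two commands are copied from the helper invocations above, and kubectl describe nodes is an assumption about how to reproduce the node dumps at the top of the captured log:

	out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-931571 -n ha-931571
	kubectl --context ha-931571 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
	kubectl --context ha-931571 describe nodes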

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (416.5s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-931571 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-931571 -v=7 --alsologtostderr
E1104 10:59:47.409581   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/addons-746456/client.crt: no such file or directory" logger="UnhandledError"
E1104 11:01:33.165192   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/functional-762465/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p ha-931571 -v=7 --alsologtostderr: exit status 82 (2m1.728451823s)

                                                
                                                
-- stdout --
	* Stopping node "ha-931571-m04"  ...
	* Stopping node "ha-931571-m03"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1104 10:59:34.403097   42995 out.go:345] Setting OutFile to fd 1 ...
	I1104 10:59:34.403227   42995 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1104 10:59:34.403238   42995 out.go:358] Setting ErrFile to fd 2...
	I1104 10:59:34.403243   42995 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1104 10:59:34.403433   42995 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19906-19898/.minikube/bin
	I1104 10:59:34.403658   42995 out.go:352] Setting JSON to false
	I1104 10:59:34.403767   42995 mustload.go:65] Loading cluster: ha-931571
	I1104 10:59:34.404178   42995 config.go:182] Loaded profile config "ha-931571": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1104 10:59:34.404285   42995 profile.go:143] Saving config to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/config.json ...
	I1104 10:59:34.404481   42995 mustload.go:65] Loading cluster: ha-931571
	I1104 10:59:34.404636   42995 config.go:182] Loaded profile config "ha-931571": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1104 10:59:34.404677   42995 stop.go:39] StopHost: ha-931571-m04
	I1104 10:59:34.405038   42995 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 10:59:34.405109   42995 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 10:59:34.419891   42995 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46155
	I1104 10:59:34.420367   42995 main.go:141] libmachine: () Calling .GetVersion
	I1104 10:59:34.420925   42995 main.go:141] libmachine: Using API Version  1
	I1104 10:59:34.420942   42995 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 10:59:34.421279   42995 main.go:141] libmachine: () Calling .GetMachineName
	I1104 10:59:34.424940   42995 out.go:177] * Stopping node "ha-931571-m04"  ...
	I1104 10:59:34.426502   42995 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1104 10:59:34.426532   42995 main.go:141] libmachine: (ha-931571-m04) Calling .DriverName
	I1104 10:59:34.426781   42995 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1104 10:59:34.426808   42995 main.go:141] libmachine: (ha-931571-m04) Calling .GetSSHHostname
	I1104 10:59:34.430025   42995 main.go:141] libmachine: (ha-931571-m04) DBG | domain ha-931571-m04 has defined MAC address 52:54:00:16:27:aa in network mk-ha-931571
	I1104 10:59:34.430480   42995 main.go:141] libmachine: (ha-931571-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:27:aa", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:55:52 +0000 UTC Type:0 Mac:52:54:00:16:27:aa Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:ha-931571-m04 Clientid:01:52:54:00:16:27:aa}
	I1104 10:59:34.430511   42995 main.go:141] libmachine: (ha-931571-m04) DBG | domain ha-931571-m04 has defined IP address 192.168.39.237 and MAC address 52:54:00:16:27:aa in network mk-ha-931571
	I1104 10:59:34.430775   42995 main.go:141] libmachine: (ha-931571-m04) Calling .GetSSHPort
	I1104 10:59:34.430976   42995 main.go:141] libmachine: (ha-931571-m04) Calling .GetSSHKeyPath
	I1104 10:59:34.431142   42995 main.go:141] libmachine: (ha-931571-m04) Calling .GetSSHUsername
	I1104 10:59:34.431281   42995 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571-m04/id_rsa Username:docker}
	I1104 10:59:34.510962   42995 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1104 10:59:34.563835   42995 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1104 10:59:34.617451   42995 main.go:141] libmachine: Stopping "ha-931571-m04"...
	I1104 10:59:34.617482   42995 main.go:141] libmachine: (ha-931571-m04) Calling .GetState
	I1104 10:59:34.619304   42995 main.go:141] libmachine: (ha-931571-m04) Calling .Stop
	I1104 10:59:34.622947   42995 main.go:141] libmachine: (ha-931571-m04) Waiting for machine to stop 0/120
	I1104 10:59:35.678818   42995 main.go:141] libmachine: (ha-931571-m04) Calling .GetState
	I1104 10:59:35.680140   42995 main.go:141] libmachine: Machine "ha-931571-m04" was stopped.
	I1104 10:59:35.680158   42995 stop.go:75] duration metric: took 1.25365723s to stop
	I1104 10:59:35.680194   42995 stop.go:39] StopHost: ha-931571-m03
	I1104 10:59:35.680492   42995 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 10:59:35.680542   42995 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 10:59:35.694844   42995 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46253
	I1104 10:59:35.695272   42995 main.go:141] libmachine: () Calling .GetVersion
	I1104 10:59:35.695772   42995 main.go:141] libmachine: Using API Version  1
	I1104 10:59:35.695791   42995 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 10:59:35.696125   42995 main.go:141] libmachine: () Calling .GetMachineName
	I1104 10:59:35.698261   42995 out.go:177] * Stopping node "ha-931571-m03"  ...
	I1104 10:59:35.699892   42995 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1104 10:59:35.699921   42995 main.go:141] libmachine: (ha-931571-m03) Calling .DriverName
	I1104 10:59:35.700128   42995 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1104 10:59:35.700149   42995 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHHostname
	I1104 10:59:35.702803   42995 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:59:35.703262   42995 main.go:141] libmachine: (ha-931571-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f5:de", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:54:28 +0000 UTC Type:0 Mac:52:54:00:30:f5:de Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-931571-m03 Clientid:01:52:54:00:30:f5:de}
	I1104 10:59:35.703302   42995 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined IP address 192.168.39.57 and MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 10:59:35.703440   42995 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHPort
	I1104 10:59:35.703597   42995 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHKeyPath
	I1104 10:59:35.703715   42995 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHUsername
	I1104 10:59:35.703817   42995 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571-m03/id_rsa Username:docker}
	I1104 10:59:35.783546   42995 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1104 10:59:35.835709   42995 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1104 10:59:35.888386   42995 main.go:141] libmachine: Stopping "ha-931571-m03"...
	I1104 10:59:35.888426   42995 main.go:141] libmachine: (ha-931571-m03) Calling .GetState
	I1104 10:59:35.889943   42995 main.go:141] libmachine: (ha-931571-m03) Calling .Stop
	I1104 10:59:35.893759   42995 main.go:141] libmachine: (ha-931571-m03) Waiting for machine to stop 0/120
	I1104 10:59:36.895745   42995 main.go:141] libmachine: (ha-931571-m03) Waiting for machine to stop 1/120
	I1104 10:59:37.897135   42995 main.go:141] libmachine: (ha-931571-m03) Waiting for machine to stop 2/120
	I1104 10:59:38.898492   42995 main.go:141] libmachine: (ha-931571-m03) Waiting for machine to stop 3/120
	I1104 10:59:39.899754   42995 main.go:141] libmachine: (ha-931571-m03) Waiting for machine to stop 4/120
	I1104 10:59:40.901680   42995 main.go:141] libmachine: (ha-931571-m03) Waiting for machine to stop 5/120
	I1104 10:59:41.904008   42995 main.go:141] libmachine: (ha-931571-m03) Waiting for machine to stop 6/120
	I1104 10:59:42.905420   42995 main.go:141] libmachine: (ha-931571-m03) Waiting for machine to stop 7/120
	I1104 10:59:43.906774   42995 main.go:141] libmachine: (ha-931571-m03) Waiting for machine to stop 8/120
	I1104 10:59:44.908088   42995 main.go:141] libmachine: (ha-931571-m03) Waiting for machine to stop 9/120
	I1104 10:59:45.909909   42995 main.go:141] libmachine: (ha-931571-m03) Waiting for machine to stop 10/120
	I1104 10:59:46.911662   42995 main.go:141] libmachine: (ha-931571-m03) Waiting for machine to stop 11/120
	I1104 10:59:47.913316   42995 main.go:141] libmachine: (ha-931571-m03) Waiting for machine to stop 12/120
	I1104 10:59:48.915178   42995 main.go:141] libmachine: (ha-931571-m03) Waiting for machine to stop 13/120
	I1104 10:59:49.916580   42995 main.go:141] libmachine: (ha-931571-m03) Waiting for machine to stop 14/120
	I1104 10:59:50.918509   42995 main.go:141] libmachine: (ha-931571-m03) Waiting for machine to stop 15/120
	I1104 10:59:51.919671   42995 main.go:141] libmachine: (ha-931571-m03) Waiting for machine to stop 16/120
	I1104 10:59:52.921066   42995 main.go:141] libmachine: (ha-931571-m03) Waiting for machine to stop 17/120
	I1104 10:59:53.922620   42995 main.go:141] libmachine: (ha-931571-m03) Waiting for machine to stop 18/120
	I1104 10:59:54.924144   42995 main.go:141] libmachine: (ha-931571-m03) Waiting for machine to stop 19/120
	I1104 10:59:55.926104   42995 main.go:141] libmachine: (ha-931571-m03) Waiting for machine to stop 20/120
	I1104 10:59:56.927453   42995 main.go:141] libmachine: (ha-931571-m03) Waiting for machine to stop 21/120
	I1104 10:59:57.928953   42995 main.go:141] libmachine: (ha-931571-m03) Waiting for machine to stop 22/120
	I1104 10:59:58.930534   42995 main.go:141] libmachine: (ha-931571-m03) Waiting for machine to stop 23/120
	I1104 10:59:59.932089   42995 main.go:141] libmachine: (ha-931571-m03) Waiting for machine to stop 24/120
	I1104 11:00:00.933965   42995 main.go:141] libmachine: (ha-931571-m03) Waiting for machine to stop 25/120
	I1104 11:00:01.935451   42995 main.go:141] libmachine: (ha-931571-m03) Waiting for machine to stop 26/120
	I1104 11:00:02.936976   42995 main.go:141] libmachine: (ha-931571-m03) Waiting for machine to stop 27/120
	I1104 11:00:03.938425   42995 main.go:141] libmachine: (ha-931571-m03) Waiting for machine to stop 28/120
	I1104 11:00:04.939884   42995 main.go:141] libmachine: (ha-931571-m03) Waiting for machine to stop 29/120
	I1104 11:00:05.941584   42995 main.go:141] libmachine: (ha-931571-m03) Waiting for machine to stop 30/120
	I1104 11:00:06.943039   42995 main.go:141] libmachine: (ha-931571-m03) Waiting for machine to stop 31/120
	I1104 11:00:07.944463   42995 main.go:141] libmachine: (ha-931571-m03) Waiting for machine to stop 32/120
	I1104 11:00:08.945881   42995 main.go:141] libmachine: (ha-931571-m03) Waiting for machine to stop 33/120
	I1104 11:00:09.948007   42995 main.go:141] libmachine: (ha-931571-m03) Waiting for machine to stop 34/120
	I1104 11:00:10.949797   42995 main.go:141] libmachine: (ha-931571-m03) Waiting for machine to stop 35/120
	I1104 11:00:11.951565   42995 main.go:141] libmachine: (ha-931571-m03) Waiting for machine to stop 36/120
	I1104 11:00:12.953344   42995 main.go:141] libmachine: (ha-931571-m03) Waiting for machine to stop 37/120
	I1104 11:00:13.954545   42995 main.go:141] libmachine: (ha-931571-m03) Waiting for machine to stop 38/120
	I1104 11:00:14.956195   42995 main.go:141] libmachine: (ha-931571-m03) Waiting for machine to stop 39/120
	I1104 11:00:15.958064   42995 main.go:141] libmachine: (ha-931571-m03) Waiting for machine to stop 40/120
	I1104 11:00:16.959294   42995 main.go:141] libmachine: (ha-931571-m03) Waiting for machine to stop 41/120
	I1104 11:00:17.960698   42995 main.go:141] libmachine: (ha-931571-m03) Waiting for machine to stop 42/120
	I1104 11:00:18.961871   42995 main.go:141] libmachine: (ha-931571-m03) Waiting for machine to stop 43/120
	I1104 11:00:19.963309   42995 main.go:141] libmachine: (ha-931571-m03) Waiting for machine to stop 44/120
	I1104 11:00:20.965183   42995 main.go:141] libmachine: (ha-931571-m03) Waiting for machine to stop 45/120
	I1104 11:00:21.966532   42995 main.go:141] libmachine: (ha-931571-m03) Waiting for machine to stop 46/120
	I1104 11:00:22.967956   42995 main.go:141] libmachine: (ha-931571-m03) Waiting for machine to stop 47/120
	I1104 11:00:23.969123   42995 main.go:141] libmachine: (ha-931571-m03) Waiting for machine to stop 48/120
	I1104 11:00:24.970684   42995 main.go:141] libmachine: (ha-931571-m03) Waiting for machine to stop 49/120
	I1104 11:00:25.972469   42995 main.go:141] libmachine: (ha-931571-m03) Waiting for machine to stop 50/120
	I1104 11:00:26.973889   42995 main.go:141] libmachine: (ha-931571-m03) Waiting for machine to stop 51/120
	I1104 11:00:27.975448   42995 main.go:141] libmachine: (ha-931571-m03) Waiting for machine to stop 52/120
	I1104 11:00:28.977314   42995 main.go:141] libmachine: (ha-931571-m03) Waiting for machine to stop 53/120
	I1104 11:00:29.978585   42995 main.go:141] libmachine: (ha-931571-m03) Waiting for machine to stop 54/120
	I1104 11:00:30.980516   42995 main.go:141] libmachine: (ha-931571-m03) Waiting for machine to stop 55/120
	I1104 11:00:31.981922   42995 main.go:141] libmachine: (ha-931571-m03) Waiting for machine to stop 56/120
	I1104 11:00:32.983141   42995 main.go:141] libmachine: (ha-931571-m03) Waiting for machine to stop 57/120
	I1104 11:00:33.984550   42995 main.go:141] libmachine: (ha-931571-m03) Waiting for machine to stop 58/120
	I1104 11:00:34.985985   42995 main.go:141] libmachine: (ha-931571-m03) Waiting for machine to stop 59/120
	I1104 11:00:35.987876   42995 main.go:141] libmachine: (ha-931571-m03) Waiting for machine to stop 60/120
	I1104 11:00:36.989108   42995 main.go:141] libmachine: (ha-931571-m03) Waiting for machine to stop 61/120
	I1104 11:00:37.990327   42995 main.go:141] libmachine: (ha-931571-m03) Waiting for machine to stop 62/120
	I1104 11:00:38.991614   42995 main.go:141] libmachine: (ha-931571-m03) Waiting for machine to stop 63/120
	I1104 11:00:39.993462   42995 main.go:141] libmachine: (ha-931571-m03) Waiting for machine to stop 64/120
	I1104 11:00:40.995150   42995 main.go:141] libmachine: (ha-931571-m03) Waiting for machine to stop 65/120
	I1104 11:00:41.996444   42995 main.go:141] libmachine: (ha-931571-m03) Waiting for machine to stop 66/120
	I1104 11:00:42.997861   42995 main.go:141] libmachine: (ha-931571-m03) Waiting for machine to stop 67/120
	I1104 11:00:43.999214   42995 main.go:141] libmachine: (ha-931571-m03) Waiting for machine to stop 68/120
	I1104 11:00:45.000572   42995 main.go:141] libmachine: (ha-931571-m03) Waiting for machine to stop 69/120
	I1104 11:00:46.002712   42995 main.go:141] libmachine: (ha-931571-m03) Waiting for machine to stop 70/120
	I1104 11:00:47.003933   42995 main.go:141] libmachine: (ha-931571-m03) Waiting for machine to stop 71/120
	I1104 11:00:48.005110   42995 main.go:141] libmachine: (ha-931571-m03) Waiting for machine to stop 72/120
	I1104 11:00:49.006621   42995 main.go:141] libmachine: (ha-931571-m03) Waiting for machine to stop 73/120
	I1104 11:00:50.007821   42995 main.go:141] libmachine: (ha-931571-m03) Waiting for machine to stop 74/120
	I1104 11:00:51.009535   42995 main.go:141] libmachine: (ha-931571-m03) Waiting for machine to stop 75/120
	I1104 11:00:52.010747   42995 main.go:141] libmachine: (ha-931571-m03) Waiting for machine to stop 76/120
	I1104 11:00:53.012198   42995 main.go:141] libmachine: (ha-931571-m03) Waiting for machine to stop 77/120
	I1104 11:00:54.013633   42995 main.go:141] libmachine: (ha-931571-m03) Waiting for machine to stop 78/120
	I1104 11:00:55.014989   42995 main.go:141] libmachine: (ha-931571-m03) Waiting for machine to stop 79/120
	I1104 11:00:56.016811   42995 main.go:141] libmachine: (ha-931571-m03) Waiting for machine to stop 80/120
	I1104 11:00:57.018092   42995 main.go:141] libmachine: (ha-931571-m03) Waiting for machine to stop 81/120
	I1104 11:00:58.019563   42995 main.go:141] libmachine: (ha-931571-m03) Waiting for machine to stop 82/120
	I1104 11:00:59.021045   42995 main.go:141] libmachine: (ha-931571-m03) Waiting for machine to stop 83/120
	I1104 11:01:00.022337   42995 main.go:141] libmachine: (ha-931571-m03) Waiting for machine to stop 84/120
	I1104 11:01:01.024586   42995 main.go:141] libmachine: (ha-931571-m03) Waiting for machine to stop 85/120
	I1104 11:01:02.026020   42995 main.go:141] libmachine: (ha-931571-m03) Waiting for machine to stop 86/120
	I1104 11:01:03.027664   42995 main.go:141] libmachine: (ha-931571-m03) Waiting for machine to stop 87/120
	I1104 11:01:04.029162   42995 main.go:141] libmachine: (ha-931571-m03) Waiting for machine to stop 88/120
	I1104 11:01:05.030478   42995 main.go:141] libmachine: (ha-931571-m03) Waiting for machine to stop 89/120
	I1104 11:01:06.032194   42995 main.go:141] libmachine: (ha-931571-m03) Waiting for machine to stop 90/120
	I1104 11:01:07.033779   42995 main.go:141] libmachine: (ha-931571-m03) Waiting for machine to stop 91/120
	I1104 11:01:08.035245   42995 main.go:141] libmachine: (ha-931571-m03) Waiting for machine to stop 92/120
	I1104 11:01:09.037051   42995 main.go:141] libmachine: (ha-931571-m03) Waiting for machine to stop 93/120
	I1104 11:01:10.038502   42995 main.go:141] libmachine: (ha-931571-m03) Waiting for machine to stop 94/120
	I1104 11:01:11.040035   42995 main.go:141] libmachine: (ha-931571-m03) Waiting for machine to stop 95/120
	I1104 11:01:12.041942   42995 main.go:141] libmachine: (ha-931571-m03) Waiting for machine to stop 96/120
	I1104 11:01:13.043349   42995 main.go:141] libmachine: (ha-931571-m03) Waiting for machine to stop 97/120
	I1104 11:01:14.044988   42995 main.go:141] libmachine: (ha-931571-m03) Waiting for machine to stop 98/120
	I1104 11:01:15.046455   42995 main.go:141] libmachine: (ha-931571-m03) Waiting for machine to stop 99/120
	I1104 11:01:16.048444   42995 main.go:141] libmachine: (ha-931571-m03) Waiting for machine to stop 100/120
	I1104 11:01:17.049830   42995 main.go:141] libmachine: (ha-931571-m03) Waiting for machine to stop 101/120
	I1104 11:01:18.051849   42995 main.go:141] libmachine: (ha-931571-m03) Waiting for machine to stop 102/120
	I1104 11:01:19.053357   42995 main.go:141] libmachine: (ha-931571-m03) Waiting for machine to stop 103/120
	I1104 11:01:20.054738   42995 main.go:141] libmachine: (ha-931571-m03) Waiting for machine to stop 104/120
	I1104 11:01:21.057084   42995 main.go:141] libmachine: (ha-931571-m03) Waiting for machine to stop 105/120
	I1104 11:01:22.058349   42995 main.go:141] libmachine: (ha-931571-m03) Waiting for machine to stop 106/120
	I1104 11:01:23.060102   42995 main.go:141] libmachine: (ha-931571-m03) Waiting for machine to stop 107/120
	I1104 11:01:24.061314   42995 main.go:141] libmachine: (ha-931571-m03) Waiting for machine to stop 108/120
	I1104 11:01:25.062690   42995 main.go:141] libmachine: (ha-931571-m03) Waiting for machine to stop 109/120
	I1104 11:01:26.064558   42995 main.go:141] libmachine: (ha-931571-m03) Waiting for machine to stop 110/120
	I1104 11:01:27.066123   42995 main.go:141] libmachine: (ha-931571-m03) Waiting for machine to stop 111/120
	I1104 11:01:28.067474   42995 main.go:141] libmachine: (ha-931571-m03) Waiting for machine to stop 112/120
	I1104 11:01:29.068804   42995 main.go:141] libmachine: (ha-931571-m03) Waiting for machine to stop 113/120
	I1104 11:01:30.070366   42995 main.go:141] libmachine: (ha-931571-m03) Waiting for machine to stop 114/120
	I1104 11:01:31.072012   42995 main.go:141] libmachine: (ha-931571-m03) Waiting for machine to stop 115/120
	I1104 11:01:32.073375   42995 main.go:141] libmachine: (ha-931571-m03) Waiting for machine to stop 116/120
	I1104 11:01:33.074728   42995 main.go:141] libmachine: (ha-931571-m03) Waiting for machine to stop 117/120
	I1104 11:01:34.076101   42995 main.go:141] libmachine: (ha-931571-m03) Waiting for machine to stop 118/120
	I1104 11:01:35.077598   42995 main.go:141] libmachine: (ha-931571-m03) Waiting for machine to stop 119/120
	I1104 11:01:36.078807   42995 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1104 11:01:36.078868   42995 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1104 11:01:36.081076   42995 out.go:201] 
	W1104 11:01:36.082707   42995 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1104 11:01:36.082728   42995 out.go:270] * 
	* 
	W1104 11:01:36.085286   42995 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1104 11:01:36.086649   42995 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:466: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p ha-931571 -v=7 --alsologtostderr" : exit status 82
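For context on the failure above: the stderr shows minikube first backing up each node's /etc/cni and /etc/kubernetes into /var/lib/minikube/backup over SSH (mkdir -p followed by rsync --archive --relative), then asking the driver to stop the VM and polling its state once per second for up to 120 attempts. ha-931571-m04 stopped after one poll, but ha-931571-m03 never left the "Running" state, so the stop surfaced as GUEST_STOP_TIMEOUT and exit status 82. Below is a minimal, hypothetical Go sketch of that bounded polling pattern, not minikube's actual implementation (vmDriver, fakeVM and stopWithTimeout are names invented for this example):

	// Illustrative sketch only: a bounded poll-until-stopped loop matching the
	// "Waiting for machine to stop 0/120" ... "119/120" pattern in the stderr above.
	package main

	import (
		"fmt"
		"time"
	)

	type vmDriver interface {
		Stop() error               // ask the hypervisor to shut the VM down
		GetState() (string, error) // e.g. "Running" or "Stopped"
	}

	// stopWithTimeout issues a stop request, then polls the VM state once per
	// second for up to maxAttempts tries (120 in the log above). If the VM is
	// still "Running" after that, it returns the error that minikube reports
	// as GUEST_STOP_TIMEOUT (exit status 82 in this test).
	func stopWithTimeout(d vmDriver, name string, maxAttempts int) error {
		if err := d.Stop(); err != nil {
			return err
		}
		for i := 0; i < maxAttempts; i++ {
			state, err := d.GetState()
			if err != nil {
				return err
			}
			if state != "Running" {
				return nil // machine reached a stopped state
			}
			fmt.Printf("(%s) Waiting for machine to stop %d/%d\n", name, i, maxAttempts)
			time.Sleep(time.Second)
		}
		return fmt.Errorf("unable to stop vm, current state %q", "Running")
	}

	// fakeVM never leaves the Running state, reproducing the m03 behaviour above.
	type fakeVM struct{}

	func (fakeVM) Stop() error               { return nil }
	func (fakeVM) GetState() (string, error) { return "Running", nil }

	func main() {
		if err := stopWithTimeout(fakeVM{}, "ha-931571-m03", 120); err != nil {
			fmt.Println("stop err:", err)
		}
	}

With the fake driver above, the loop runs all 120 attempts (about two minutes at one poll per second) and then prints the same "unable to stop vm" error seen in the log; in the real run the test framework maps that condition to exit status 82.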
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 start -p ha-931571 --wait=true -v=7 --alsologtostderr
E1104 11:02:00.868545   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/functional-762465/client.crt: no such file or directory" logger="UnhandledError"
E1104 11:04:47.409490   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/addons-746456/client.crt: no such file or directory" logger="UnhandledError"
E1104 11:06:10.473807   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/addons-746456/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 start -p ha-931571 --wait=true -v=7 --alsologtostderr: (4m52.09282034s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-931571
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-931571 -n ha-931571
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-931571 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-931571 logs -n 25: (1.983440504s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-931571 cp ha-931571-m03:/home/docker/cp-test.txt                              | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | ha-931571-m02:/home/docker/cp-test_ha-931571-m03_ha-931571-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-931571 ssh -n                                                                 | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | ha-931571-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-931571 ssh -n ha-931571-m02 sudo cat                                          | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | /home/docker/cp-test_ha-931571-m03_ha-931571-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-931571 cp ha-931571-m03:/home/docker/cp-test.txt                              | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | ha-931571-m04:/home/docker/cp-test_ha-931571-m03_ha-931571-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-931571 ssh -n                                                                 | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | ha-931571-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-931571 ssh -n ha-931571-m04 sudo cat                                          | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | /home/docker/cp-test_ha-931571-m03_ha-931571-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-931571 cp testdata/cp-test.txt                                                | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | ha-931571-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-931571 ssh -n                                                                 | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | ha-931571-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-931571 cp ha-931571-m04:/home/docker/cp-test.txt                              | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile2369318263/001/cp-test_ha-931571-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-931571 ssh -n                                                                 | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | ha-931571-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-931571 cp ha-931571-m04:/home/docker/cp-test.txt                              | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | ha-931571:/home/docker/cp-test_ha-931571-m04_ha-931571.txt                       |           |         |         |                     |                     |
	| ssh     | ha-931571 ssh -n                                                                 | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | ha-931571-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-931571 ssh -n ha-931571 sudo cat                                              | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | /home/docker/cp-test_ha-931571-m04_ha-931571.txt                                 |           |         |         |                     |                     |
	| cp      | ha-931571 cp ha-931571-m04:/home/docker/cp-test.txt                              | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | ha-931571-m02:/home/docker/cp-test_ha-931571-m04_ha-931571-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-931571 ssh -n                                                                 | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | ha-931571-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-931571 ssh -n ha-931571-m02 sudo cat                                          | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | /home/docker/cp-test_ha-931571-m04_ha-931571-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-931571 cp ha-931571-m04:/home/docker/cp-test.txt                              | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | ha-931571-m03:/home/docker/cp-test_ha-931571-m04_ha-931571-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-931571 ssh -n                                                                 | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | ha-931571-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-931571 ssh -n ha-931571-m03 sudo cat                                          | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | /home/docker/cp-test_ha-931571-m04_ha-931571-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-931571 node stop m02 -v=7                                                     | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-931571 node start m02 -v=7                                                    | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:59 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-931571 -v=7                                                           | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:59 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-931571 -v=7                                                                | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:59 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-931571 --wait=true -v=7                                                    | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 11:01 UTC | 04 Nov 24 11:06 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-931571                                                                | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 11:06 UTC |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/11/04 11:01:36
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1104 11:01:36.135689   43487 out.go:345] Setting OutFile to fd 1 ...
	I1104 11:01:36.135831   43487 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1104 11:01:36.135841   43487 out.go:358] Setting ErrFile to fd 2...
	I1104 11:01:36.135848   43487 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1104 11:01:36.136026   43487 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19906-19898/.minikube/bin
	I1104 11:01:36.136622   43487 out.go:352] Setting JSON to false
	I1104 11:01:36.137570   43487 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":6247,"bootTime":1730711849,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1104 11:01:36.137665   43487 start.go:139] virtualization: kvm guest
	I1104 11:01:36.140736   43487 out.go:177] * [ha-931571] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1104 11:01:36.142255   43487 notify.go:220] Checking for updates...
	I1104 11:01:36.142280   43487 out.go:177]   - MINIKUBE_LOCATION=19906
	I1104 11:01:36.143792   43487 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1104 11:01:36.145520   43487 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19906-19898/kubeconfig
	I1104 11:01:36.147024   43487 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19906-19898/.minikube
	I1104 11:01:36.148374   43487 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1104 11:01:36.150002   43487 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1104 11:01:36.151746   43487 config.go:182] Loaded profile config "ha-931571": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1104 11:01:36.151854   43487 driver.go:394] Setting default libvirt URI to qemu:///system
	I1104 11:01:36.152270   43487 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 11:01:36.152323   43487 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 11:01:36.167782   43487 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35975
	I1104 11:01:36.168314   43487 main.go:141] libmachine: () Calling .GetVersion
	I1104 11:01:36.168871   43487 main.go:141] libmachine: Using API Version  1
	I1104 11:01:36.168896   43487 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 11:01:36.169315   43487 main.go:141] libmachine: () Calling .GetMachineName
	I1104 11:01:36.169538   43487 main.go:141] libmachine: (ha-931571) Calling .DriverName
	I1104 11:01:36.206070   43487 out.go:177] * Using the kvm2 driver based on existing profile
	I1104 11:01:36.207361   43487 start.go:297] selected driver: kvm2
	I1104 11:01:36.207389   43487 start.go:901] validating driver "kvm2" against &{Name:ha-931571 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.31.2 ClusterName:ha-931571 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.67 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.245 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.57 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.237 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false de
fault-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9P
Version:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1104 11:01:36.207518   43487 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1104 11:01:36.207957   43487 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1104 11:01:36.208077   43487 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19906-19898/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1104 11:01:36.225111   43487 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1104 11:01:36.225913   43487 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1104 11:01:36.225946   43487 cni.go:84] Creating CNI manager for ""
	I1104 11:01:36.225978   43487 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1104 11:01:36.226027   43487 start.go:340] cluster config:
	{Name:ha-931571 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-931571 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.67 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.245 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.57 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.237 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:fa
lse headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[
] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1104 11:01:36.226141   43487 iso.go:125] acquiring lock: {Name:mk00c7dd6e02d348844f079f7574057e15cae010 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1104 11:01:36.228347   43487 out.go:177] * Starting "ha-931571" primary control-plane node in "ha-931571" cluster
	I1104 11:01:36.229829   43487 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1104 11:01:36.229870   43487 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1104 11:01:36.229878   43487 cache.go:56] Caching tarball of preloaded images
	I1104 11:01:36.229952   43487 preload.go:172] Found /home/jenkins/minikube-integration/19906-19898/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1104 11:01:36.229964   43487 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1104 11:01:36.230064   43487 profile.go:143] Saving config to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/config.json ...
	I1104 11:01:36.230320   43487 start.go:360] acquireMachinesLock for ha-931571: {Name:mka4ed965095cfb58c9b0f5916edff39b4236a2f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1104 11:01:36.230371   43487 start.go:364] duration metric: took 27.926µs to acquireMachinesLock for "ha-931571"
	I1104 11:01:36.230386   43487 start.go:96] Skipping create...Using existing machine configuration
	I1104 11:01:36.230395   43487 fix.go:54] fixHost starting: 
	I1104 11:01:36.230733   43487 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 11:01:36.230769   43487 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 11:01:36.245984   43487 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45625
	I1104 11:01:36.246433   43487 main.go:141] libmachine: () Calling .GetVersion
	I1104 11:01:36.246934   43487 main.go:141] libmachine: Using API Version  1
	I1104 11:01:36.246955   43487 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 11:01:36.247232   43487 main.go:141] libmachine: () Calling .GetMachineName
	I1104 11:01:36.247395   43487 main.go:141] libmachine: (ha-931571) Calling .DriverName
	I1104 11:01:36.247568   43487 main.go:141] libmachine: (ha-931571) Calling .GetState
	I1104 11:01:36.249147   43487 fix.go:112] recreateIfNeeded on ha-931571: state=Running err=<nil>
	W1104 11:01:36.249199   43487 fix.go:138] unexpected machine state, will restart: <nil>
	I1104 11:01:36.251132   43487 out.go:177] * Updating the running kvm2 "ha-931571" VM ...
	I1104 11:01:36.252516   43487 machine.go:93] provisionDockerMachine start ...
	I1104 11:01:36.252546   43487 main.go:141] libmachine: (ha-931571) Calling .DriverName
	I1104 11:01:36.252780   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHHostname
	I1104 11:01:36.255202   43487 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:01:36.255594   43487 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 11:01:36.255616   43487 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:01:36.255731   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHPort
	I1104 11:01:36.255890   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 11:01:36.256009   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 11:01:36.256140   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHUsername
	I1104 11:01:36.256308   43487 main.go:141] libmachine: Using SSH client type: native
	I1104 11:01:36.256489   43487 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I1104 11:01:36.256500   43487 main.go:141] libmachine: About to run SSH command:
	hostname
	I1104 11:01:36.361800   43487 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-931571
	
	I1104 11:01:36.361835   43487 main.go:141] libmachine: (ha-931571) Calling .GetMachineName
	I1104 11:01:36.362053   43487 buildroot.go:166] provisioning hostname "ha-931571"
	I1104 11:01:36.362076   43487 main.go:141] libmachine: (ha-931571) Calling .GetMachineName
	I1104 11:01:36.362273   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHHostname
	I1104 11:01:36.365086   43487 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:01:36.365550   43487 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 11:01:36.365581   43487 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:01:36.365735   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHPort
	I1104 11:01:36.365939   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 11:01:36.366072   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 11:01:36.366277   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHUsername
	I1104 11:01:36.366448   43487 main.go:141] libmachine: Using SSH client type: native
	I1104 11:01:36.366691   43487 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I1104 11:01:36.366706   43487 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-931571 && echo "ha-931571" | sudo tee /etc/hostname
	I1104 11:01:36.493768   43487 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-931571
	
	I1104 11:01:36.493790   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHHostname
	I1104 11:01:36.496511   43487 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:01:36.496961   43487 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 11:01:36.496984   43487 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:01:36.497265   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHPort
	I1104 11:01:36.497539   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 11:01:36.497705   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 11:01:36.497875   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHUsername
	I1104 11:01:36.498037   43487 main.go:141] libmachine: Using SSH client type: native
	I1104 11:01:36.498202   43487 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I1104 11:01:36.498219   43487 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-931571' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-931571/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-931571' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1104 11:01:36.610606   43487 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1104 11:01:36.610641   43487 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19906-19898/.minikube CaCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19906-19898/.minikube}
	I1104 11:01:36.610661   43487 buildroot.go:174] setting up certificates
	I1104 11:01:36.610669   43487 provision.go:84] configureAuth start
	I1104 11:01:36.610679   43487 main.go:141] libmachine: (ha-931571) Calling .GetMachineName
	I1104 11:01:36.610955   43487 main.go:141] libmachine: (ha-931571) Calling .GetIP
	I1104 11:01:36.613714   43487 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:01:36.614200   43487 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 11:01:36.614230   43487 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:01:36.614349   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHHostname
	I1104 11:01:36.616882   43487 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:01:36.617334   43487 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 11:01:36.617361   43487 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:01:36.617589   43487 provision.go:143] copyHostCerts
	I1104 11:01:36.617626   43487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem
	I1104 11:01:36.617677   43487 exec_runner.go:144] found /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem, removing ...
	I1104 11:01:36.617689   43487 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem
	I1104 11:01:36.617752   43487 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem (1078 bytes)
	I1104 11:01:36.617831   43487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem
	I1104 11:01:36.617850   43487 exec_runner.go:144] found /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem, removing ...
	I1104 11:01:36.617854   43487 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem
	I1104 11:01:36.617877   43487 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem (1123 bytes)
	I1104 11:01:36.617923   43487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem
	I1104 11:01:36.617936   43487 exec_runner.go:144] found /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem, removing ...
	I1104 11:01:36.617943   43487 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem
	I1104 11:01:36.617965   43487 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem (1679 bytes)
	I1104 11:01:36.618012   43487 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem org=jenkins.ha-931571 san=[127.0.0.1 192.168.39.67 ha-931571 localhost minikube]
	I1104 11:01:36.828436   43487 provision.go:177] copyRemoteCerts
	I1104 11:01:36.828491   43487 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1104 11:01:36.828512   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHHostname
	I1104 11:01:36.830991   43487 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:01:36.831347   43487 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 11:01:36.831368   43487 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:01:36.831530   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHPort
	I1104 11:01:36.831721   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 11:01:36.831867   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHUsername
	I1104 11:01:36.831960   43487 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571/id_rsa Username:docker}
	I1104 11:01:36.915764   43487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1104 11:01:36.915847   43487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1104 11:01:36.939587   43487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1104 11:01:36.939667   43487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1104 11:01:36.963061   43487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1104 11:01:36.963124   43487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1104 11:01:36.986150   43487 provision.go:87] duration metric: took 375.467362ms to configureAuth
	I1104 11:01:36.986177   43487 buildroot.go:189] setting minikube options for container-runtime
	I1104 11:01:36.986415   43487 config.go:182] Loaded profile config "ha-931571": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1104 11:01:36.986508   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHHostname
	I1104 11:01:36.988810   43487 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:01:36.989158   43487 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 11:01:36.989186   43487 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:01:36.989401   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHPort
	I1104 11:01:36.989591   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 11:01:36.989752   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 11:01:36.989860   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHUsername
	I1104 11:01:36.989990   43487 main.go:141] libmachine: Using SSH client type: native
	I1104 11:01:36.990180   43487 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I1104 11:01:36.990196   43487 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1104 11:03:07.637315   43487 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1104 11:03:07.637355   43487 machine.go:96] duration metric: took 1m31.384824491s to provisionDockerMachine
	I1104 11:03:07.637369   43487 start.go:293] postStartSetup for "ha-931571" (driver="kvm2")
	I1104 11:03:07.637384   43487 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1104 11:03:07.637404   43487 main.go:141] libmachine: (ha-931571) Calling .DriverName
	I1104 11:03:07.637761   43487 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1104 11:03:07.637793   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHHostname
	I1104 11:03:07.640901   43487 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:03:07.641365   43487 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 11:03:07.641386   43487 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:03:07.641580   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHPort
	I1104 11:03:07.641782   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 11:03:07.641937   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHUsername
	I1104 11:03:07.642057   43487 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571/id_rsa Username:docker}
	I1104 11:03:07.723354   43487 ssh_runner.go:195] Run: cat /etc/os-release
	I1104 11:03:07.727749   43487 info.go:137] Remote host: Buildroot 2023.02.9
	I1104 11:03:07.727790   43487 filesync.go:126] Scanning /home/jenkins/minikube-integration/19906-19898/.minikube/addons for local assets ...
	I1104 11:03:07.727866   43487 filesync.go:126] Scanning /home/jenkins/minikube-integration/19906-19898/.minikube/files for local assets ...
	I1104 11:03:07.727978   43487 filesync.go:149] local asset: /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem -> 272182.pem in /etc/ssl/certs
	I1104 11:03:07.727992   43487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem -> /etc/ssl/certs/272182.pem
	I1104 11:03:07.728104   43487 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1104 11:03:07.737590   43487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem --> /etc/ssl/certs/272182.pem (1708 bytes)
	I1104 11:03:07.760844   43487 start.go:296] duration metric: took 123.46114ms for postStartSetup
	I1104 11:03:07.760883   43487 main.go:141] libmachine: (ha-931571) Calling .DriverName
	I1104 11:03:07.761154   43487 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1104 11:03:07.761179   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHHostname
	I1104 11:03:07.763801   43487 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:03:07.764219   43487 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 11:03:07.764250   43487 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:03:07.764422   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHPort
	I1104 11:03:07.764610   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 11:03:07.764765   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHUsername
	I1104 11:03:07.764923   43487 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571/id_rsa Username:docker}
	W1104 11:03:07.847152   43487 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I1104 11:03:07.847183   43487 fix.go:56] duration metric: took 1m31.616787199s for fixHost
	I1104 11:03:07.847210   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHHostname
	I1104 11:03:07.849780   43487 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:03:07.850080   43487 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 11:03:07.850103   43487 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:03:07.850285   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHPort
	I1104 11:03:07.850444   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 11:03:07.850572   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 11:03:07.850663   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHUsername
	I1104 11:03:07.850778   43487 main.go:141] libmachine: Using SSH client type: native
	I1104 11:03:07.850921   43487 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I1104 11:03:07.850932   43487 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1104 11:03:07.957716   43487 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730718187.926175534
	
	I1104 11:03:07.957740   43487 fix.go:216] guest clock: 1730718187.926175534
	I1104 11:03:07.957749   43487 fix.go:229] Guest: 2024-11-04 11:03:07.926175534 +0000 UTC Remote: 2024-11-04 11:03:07.847191367 +0000 UTC m=+91.749611169 (delta=78.984167ms)
	I1104 11:03:07.957775   43487 fix.go:200] guest clock delta is within tolerance: 78.984167ms
	I1104 11:03:07.957780   43487 start.go:83] releasing machines lock for "ha-931571", held for 1m31.727399754s
	I1104 11:03:07.957797   43487 main.go:141] libmachine: (ha-931571) Calling .DriverName
	I1104 11:03:07.958011   43487 main.go:141] libmachine: (ha-931571) Calling .GetIP
	I1104 11:03:07.960277   43487 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:03:07.960596   43487 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 11:03:07.960623   43487 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:03:07.960746   43487 main.go:141] libmachine: (ha-931571) Calling .DriverName
	I1104 11:03:07.961392   43487 main.go:141] libmachine: (ha-931571) Calling .DriverName
	I1104 11:03:07.961589   43487 main.go:141] libmachine: (ha-931571) Calling .DriverName
	I1104 11:03:07.961682   43487 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1104 11:03:07.961744   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHHostname
	I1104 11:03:07.961789   43487 ssh_runner.go:195] Run: cat /version.json
	I1104 11:03:07.961812   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHHostname
	I1104 11:03:07.964564   43487 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:03:07.964779   43487 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:03:07.964935   43487 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 11:03:07.964958   43487 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:03:07.965102   43487 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 11:03:07.965115   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHPort
	I1104 11:03:07.965127   43487 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:03:07.965307   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHPort
	I1104 11:03:07.965321   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 11:03:07.965465   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHUsername
	I1104 11:03:07.965475   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 11:03:07.965612   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHUsername
	I1104 11:03:07.965607   43487 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571/id_rsa Username:docker}
	I1104 11:03:07.965735   43487 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571/id_rsa Username:docker}
	I1104 11:03:08.062470   43487 ssh_runner.go:195] Run: systemctl --version
	I1104 11:03:08.068016   43487 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1104 11:03:08.217034   43487 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1104 11:03:08.225627   43487 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1104 11:03:08.225681   43487 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1104 11:03:08.234588   43487 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1104 11:03:08.234609   43487 start.go:495] detecting cgroup driver to use...
	I1104 11:03:08.234668   43487 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1104 11:03:08.250011   43487 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1104 11:03:08.263678   43487 docker.go:217] disabling cri-docker service (if available) ...
	I1104 11:03:08.263727   43487 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1104 11:03:08.276778   43487 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1104 11:03:08.289631   43487 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1104 11:03:08.436219   43487 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1104 11:03:08.580306   43487 docker.go:233] disabling docker service ...
	I1104 11:03:08.580381   43487 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1104 11:03:08.598849   43487 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1104 11:03:08.611846   43487 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1104 11:03:08.752818   43487 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1104 11:03:08.900497   43487 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1104 11:03:08.913868   43487 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1104 11:03:08.931418   43487 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1104 11:03:08.931481   43487 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 11:03:08.942464   43487 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1104 11:03:08.942519   43487 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 11:03:08.952702   43487 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 11:03:08.963648   43487 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 11:03:08.973838   43487 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1104 11:03:08.984434   43487 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 11:03:08.995143   43487 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 11:03:09.005343   43487 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 11:03:09.015650   43487 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1104 11:03:09.024728   43487 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1104 11:03:09.034012   43487 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1104 11:03:09.180518   43487 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1104 11:03:19.158217   43487 ssh_runner.go:235] Completed: sudo systemctl restart crio: (9.977660206s)
	I1104 11:03:19.158256   43487 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1104 11:03:19.158312   43487 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1104 11:03:19.163030   43487 start.go:563] Will wait 60s for crictl version
	I1104 11:03:19.163087   43487 ssh_runner.go:195] Run: which crictl
	I1104 11:03:19.166614   43487 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1104 11:03:19.198130   43487 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1104 11:03:19.198200   43487 ssh_runner.go:195] Run: crio --version
	I1104 11:03:19.225725   43487 ssh_runner.go:195] Run: crio --version
	I1104 11:03:19.256273   43487 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1104 11:03:19.257947   43487 main.go:141] libmachine: (ha-931571) Calling .GetIP
	I1104 11:03:19.260526   43487 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:03:19.260966   43487 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 11:03:19.260989   43487 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:03:19.261303   43487 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1104 11:03:19.265771   43487 kubeadm.go:883] updating cluster {Name:ha-931571 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 Cl
usterName:ha-931571 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.67 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.245 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.57 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.237 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-stora
geclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1104 11:03:19.265898   43487 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1104 11:03:19.265937   43487 ssh_runner.go:195] Run: sudo crictl images --output json
	I1104 11:03:19.311790   43487 crio.go:514] all images are preloaded for cri-o runtime.
	I1104 11:03:19.311812   43487 crio.go:433] Images already preloaded, skipping extraction
	I1104 11:03:19.311863   43487 ssh_runner.go:195] Run: sudo crictl images --output json
	I1104 11:03:19.345725   43487 crio.go:514] all images are preloaded for cri-o runtime.
	I1104 11:03:19.345751   43487 cache_images.go:84] Images are preloaded, skipping loading
	I1104 11:03:19.345760   43487 kubeadm.go:934] updating node { 192.168.39.67 8443 v1.31.2 crio true true} ...
	I1104 11:03:19.345861   43487 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-931571 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.67
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-931571 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1104 11:03:19.345923   43487 ssh_runner.go:195] Run: crio config
	I1104 11:03:19.399886   43487 cni.go:84] Creating CNI manager for ""
	I1104 11:03:19.399909   43487 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1104 11:03:19.399922   43487 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1104 11:03:19.399956   43487 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.67 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-931571 NodeName:ha-931571 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.67"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.67 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1104 11:03:19.400106   43487 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.67
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-931571"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.67"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.67"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1104 11:03:19.400126   43487 kube-vip.go:115] generating kube-vip config ...
	I1104 11:03:19.400180   43487 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1104 11:03:19.411359   43487 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1104 11:03:19.411489   43487 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.5
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1104 11:03:19.411549   43487 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1104 11:03:19.420430   43487 binaries.go:44] Found k8s binaries, skipping transfer
	I1104 11:03:19.420500   43487 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1104 11:03:19.429659   43487 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I1104 11:03:19.445912   43487 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1104 11:03:19.461851   43487 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2286 bytes)
	I1104 11:03:19.478119   43487 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1104 11:03:19.494678   43487 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1104 11:03:19.499089   43487 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1104 11:03:19.639880   43487 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1104 11:03:19.653539   43487 certs.go:68] Setting up /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571 for IP: 192.168.39.67
	I1104 11:03:19.653562   43487 certs.go:194] generating shared ca certs ...
	I1104 11:03:19.653579   43487 certs.go:226] acquiring lock for ca certs: {Name:mk4fb0469da6697654d0290fd25edddd463f47c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 11:03:19.653721   43487 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.key
	I1104 11:03:19.653775   43487 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.key
	I1104 11:03:19.653788   43487 certs.go:256] generating profile certs ...
	I1104 11:03:19.653877   43487 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/client.key
	I1104 11:03:19.653912   43487 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.key.dd846fa0
	I1104 11:03:19.653933   43487 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.crt.dd846fa0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.67 192.168.39.245 192.168.39.57 192.168.39.254]
	I1104 11:03:19.885027   43487 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.crt.dd846fa0 ...
	I1104 11:03:19.885059   43487 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.crt.dd846fa0: {Name:mk69f57313434af2e91ed33999be6969db1655d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 11:03:19.885262   43487 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.key.dd846fa0 ...
	I1104 11:03:19.885278   43487 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.key.dd846fa0: {Name:mk036af60f5877bd7b54bd0649ec2229ae064452 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 11:03:19.885373   43487 certs.go:381] copying /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.crt.dd846fa0 -> /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.crt
	I1104 11:03:19.885549   43487 certs.go:385] copying /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.key.dd846fa0 -> /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.key
	I1104 11:03:19.885706   43487 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/proxy-client.key
	I1104 11:03:19.885722   43487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1104 11:03:19.885740   43487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1104 11:03:19.885756   43487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1104 11:03:19.885778   43487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1104 11:03:19.885796   43487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1104 11:03:19.885822   43487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1104 11:03:19.885840   43487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1104 11:03:19.885858   43487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1104 11:03:19.885925   43487 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218.pem (1338 bytes)
	W1104 11:03:19.885964   43487 certs.go:480] ignoring /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218_empty.pem, impossibly tiny 0 bytes
	I1104 11:03:19.885979   43487 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem (1675 bytes)
	I1104 11:03:19.886014   43487 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem (1078 bytes)
	I1104 11:03:19.886046   43487 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem (1123 bytes)
	I1104 11:03:19.886078   43487 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem (1679 bytes)
	I1104 11:03:19.886131   43487 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem (1708 bytes)
	I1104 11:03:19.886172   43487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem -> /usr/share/ca-certificates/272182.pem
	I1104 11:03:19.886192   43487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1104 11:03:19.886211   43487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218.pem -> /usr/share/ca-certificates/27218.pem
	I1104 11:03:19.886765   43487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1104 11:03:19.911799   43487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1104 11:03:19.934788   43487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1104 11:03:19.958214   43487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1104 11:03:19.982177   43487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1104 11:03:20.006762   43487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1104 11:03:20.032203   43487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1104 11:03:20.056775   43487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1104 11:03:20.081730   43487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem --> /usr/share/ca-certificates/272182.pem (1708 bytes)
	I1104 11:03:20.107796   43487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1104 11:03:20.132535   43487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218.pem --> /usr/share/ca-certificates/27218.pem (1338 bytes)
	I1104 11:03:20.157941   43487 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1104 11:03:20.175127   43487 ssh_runner.go:195] Run: openssl version
	I1104 11:03:20.180513   43487 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/272182.pem && ln -fs /usr/share/ca-certificates/272182.pem /etc/ssl/certs/272182.pem"
	I1104 11:03:20.192022   43487 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/272182.pem
	I1104 11:03:20.196624   43487 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  4 10:49 /usr/share/ca-certificates/272182.pem
	I1104 11:03:20.196676   43487 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/272182.pem
	I1104 11:03:20.202253   43487 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/272182.pem /etc/ssl/certs/3ec20f2e.0"
	I1104 11:03:20.212464   43487 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1104 11:03:20.224808   43487 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1104 11:03:20.229606   43487 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  4 10:38 /usr/share/ca-certificates/minikubeCA.pem
	I1104 11:03:20.229653   43487 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1104 11:03:20.235319   43487 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1104 11:03:20.246230   43487 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/27218.pem && ln -fs /usr/share/ca-certificates/27218.pem /etc/ssl/certs/27218.pem"
	I1104 11:03:20.258556   43487 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/27218.pem
	I1104 11:03:20.263132   43487 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  4 10:49 /usr/share/ca-certificates/27218.pem
	I1104 11:03:20.263190   43487 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/27218.pem
	I1104 11:03:20.268984   43487 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/27218.pem /etc/ssl/certs/51391683.0"
	I1104 11:03:20.279291   43487 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1104 11:03:20.283601   43487 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1104 11:03:20.288948   43487 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1104 11:03:20.294474   43487 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1104 11:03:20.299807   43487 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1104 11:03:20.305182   43487 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1104 11:03:20.310975   43487 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1104 11:03:20.316451   43487 kubeadm.go:392] StartCluster: {Name:ha-931571 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 Clust
erName:ha-931571 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.67 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.245 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.57 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.237 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storagec
lass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000
.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1104 11:03:20.316562   43487 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1104 11:03:20.316594   43487 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1104 11:03:20.354362   43487 cri.go:89] found id: "4c3aa3719ea407f31bd76c40125ab3b7bdd92ee408b1f5e698e57298fb7c8bf5"
	I1104 11:03:20.354384   43487 cri.go:89] found id: "b93e0586789e3f2dc0a6a83e13dc87e97cd99bac979bcedff72518a08f43e152"
	I1104 11:03:20.354387   43487 cri.go:89] found id: "801830521b8c68ec780508f94b0f3d8c52a6c3d5458328e719c2ce3178c47cc3"
	I1104 11:03:20.354390   43487 cri.go:89] found id: "400aa38b5335627cc08143a9b2a5627b7fee85d555c2cc40a536ce98a76dc457"
	I1104 11:03:20.354393   43487 cri.go:89] found id: "49e75724c5eada14bded615bd1ba93f602ae8bcdf4c5ed95b646991a79dc403c"
	I1104 11:03:20.354395   43487 cri.go:89] found id: "f8efbd7a72ea51074ffa14c6c164b0072c5d57e24d1bd5b6d1a123aa8216069c"
	I1104 11:03:20.354402   43487 cri.go:89] found id: "4401315f385bf2a6e5d875c655d4d61ff68d76ccf428956355ffd500a9ce2bc0"
	I1104 11:03:20.354404   43487 cri.go:89] found id: "6e592fe17c5f71c11cf25a871efff88360e0232b8ea66e460109b68f40581ba8"
	I1104 11:03:20.354408   43487 cri.go:89] found id: "e50ab0290e7c2f22e6df511793236d28a6e0f3e1d5dbdcdd2e3447e4c26c2e6c"
	I1104 11:03:20.354425   43487 cri.go:89] found id: "4572c8bcb28cdf71917ee1df07e150610c3e183aaa1243eb84ab3c083f31f7bc"
	I1104 11:03:20.354434   43487 cri.go:89] found id: "82e4be064be10644428d59bf1bc4467a8666cf78ec7b830a51e614de7c4b3150"
	I1104 11:03:20.354436   43487 cri.go:89] found id: "f2d32daf142ba5b73e2f0491ea4ad911e8456739f191faf1cc001827eb91790c"
	I1104 11:03:20.354439   43487 cri.go:89] found id: ""
	I1104 11:03:20.354477   43487 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-931571 -n ha-931571
helpers_test.go:261: (dbg) Run:  kubectl --context ha-931571 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (416.50s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (173.23s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-931571 node delete m03 -v=7 --alsologtostderr
E1104 11:06:33.164849   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/functional-762465/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:489: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-931571 node delete m03 -v=7 --alsologtostderr: exit status 80 (1m54.794674386s)

                                                
                                                
-- stdout --
	* Deleting node m03 from cluster ha-931571
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1104 11:06:30.863635   45068 out.go:345] Setting OutFile to fd 1 ...
	I1104 11:06:30.863910   45068 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1104 11:06:30.863921   45068 out.go:358] Setting ErrFile to fd 2...
	I1104 11:06:30.863925   45068 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1104 11:06:30.864077   45068 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19906-19898/.minikube/bin
	I1104 11:06:30.864316   45068 mustload.go:65] Loading cluster: ha-931571
	I1104 11:06:30.864711   45068 config.go:182] Loaded profile config "ha-931571": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1104 11:06:30.865056   45068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 11:06:30.865092   45068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 11:06:30.880547   45068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35779
	I1104 11:06:30.880944   45068 main.go:141] libmachine: () Calling .GetVersion
	I1104 11:06:30.881537   45068 main.go:141] libmachine: Using API Version  1
	I1104 11:06:30.881556   45068 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 11:06:30.881987   45068 main.go:141] libmachine: () Calling .GetMachineName
	I1104 11:06:30.882160   45068 main.go:141] libmachine: (ha-931571) Calling .GetState
	I1104 11:06:30.883588   45068 host.go:66] Checking if "ha-931571" exists ...
	I1104 11:06:30.883865   45068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 11:06:30.883897   45068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 11:06:30.898862   45068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42569
	I1104 11:06:30.899319   45068 main.go:141] libmachine: () Calling .GetVersion
	I1104 11:06:30.899932   45068 main.go:141] libmachine: Using API Version  1
	I1104 11:06:30.899961   45068 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 11:06:30.900292   45068 main.go:141] libmachine: () Calling .GetMachineName
	I1104 11:06:30.900469   45068 main.go:141] libmachine: (ha-931571) Calling .DriverName
	I1104 11:06:30.900915   45068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 11:06:30.900948   45068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 11:06:30.914911   45068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33045
	I1104 11:06:30.915345   45068 main.go:141] libmachine: () Calling .GetVersion
	I1104 11:06:30.915791   45068 main.go:141] libmachine: Using API Version  1
	I1104 11:06:30.915811   45068 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 11:06:30.916091   45068 main.go:141] libmachine: () Calling .GetMachineName
	I1104 11:06:30.916278   45068 main.go:141] libmachine: (ha-931571-m02) Calling .GetState
	I1104 11:06:30.917735   45068 host.go:66] Checking if "ha-931571-m02" exists ...
	I1104 11:06:30.918032   45068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 11:06:30.918072   45068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 11:06:30.932804   45068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40859
	I1104 11:06:30.933221   45068 main.go:141] libmachine: () Calling .GetVersion
	I1104 11:06:30.933733   45068 main.go:141] libmachine: Using API Version  1
	I1104 11:06:30.933754   45068 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 11:06:30.934118   45068 main.go:141] libmachine: () Calling .GetMachineName
	I1104 11:06:30.934308   45068 main.go:141] libmachine: (ha-931571-m02) Calling .DriverName
	I1104 11:06:30.934964   45068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 11:06:30.935007   45068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 11:06:30.949634   45068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40475
	I1104 11:06:30.950094   45068 main.go:141] libmachine: () Calling .GetVersion
	I1104 11:06:30.950678   45068 main.go:141] libmachine: Using API Version  1
	I1104 11:06:30.950703   45068 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 11:06:30.951078   45068 main.go:141] libmachine: () Calling .GetMachineName
	I1104 11:06:30.951266   45068 main.go:141] libmachine: (ha-931571-m03) Calling .GetState
	I1104 11:06:30.952751   45068 host.go:66] Checking if "ha-931571-m03" exists ...
	I1104 11:06:30.953057   45068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 11:06:30.953103   45068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 11:06:30.968880   45068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36857
	I1104 11:06:30.969256   45068 main.go:141] libmachine: () Calling .GetVersion
	I1104 11:06:30.969782   45068 main.go:141] libmachine: Using API Version  1
	I1104 11:06:30.969800   45068 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 11:06:30.970103   45068 main.go:141] libmachine: () Calling .GetMachineName
	I1104 11:06:30.970275   45068 main.go:141] libmachine: (ha-931571-m03) Calling .DriverName
	I1104 11:06:30.970419   45068 api_server.go:166] Checking apiserver status ...
	I1104 11:06:30.970476   45068 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 11:06:30.970508   45068 main.go:141] libmachine: (ha-931571) Calling .GetSSHHostname
	I1104 11:06:30.973379   45068 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:06:30.973844   45068 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 11:06:30.973871   45068 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:06:30.974012   45068 main.go:141] libmachine: (ha-931571) Calling .GetSSHPort
	I1104 11:06:30.974174   45068 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 11:06:30.974297   45068 main.go:141] libmachine: (ha-931571) Calling .GetSSHUsername
	I1104 11:06:30.974416   45068 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571/id_rsa Username:docker}
	I1104 11:06:31.060883   45068 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/5476/cgroup
	W1104 11:06:31.074698   45068 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/5476/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1104 11:06:31.074772   45068 ssh_runner.go:195] Run: ls
	I1104 11:06:31.078966   45068 api_server.go:253] Checking apiserver healthz at https://192.168.39.67:8443/healthz ...
	I1104 11:06:31.085205   45068 api_server.go:279] https://192.168.39.67:8443/healthz returned 200:
	ok
	I1104 11:06:31.087113   45068 out.go:177] * Deleting node m03 from cluster ha-931571
	I1104 11:06:31.088397   45068 host.go:66] Checking if "ha-931571-m03" exists ...
	I1104 11:06:31.088688   45068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 11:06:31.088722   45068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 11:06:31.106783   45068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43831
	I1104 11:06:31.107276   45068 main.go:141] libmachine: () Calling .GetVersion
	I1104 11:06:31.107791   45068 main.go:141] libmachine: Using API Version  1
	I1104 11:06:31.107815   45068 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 11:06:31.108092   45068 main.go:141] libmachine: () Calling .GetMachineName
	I1104 11:06:31.108281   45068 main.go:141] libmachine: (ha-931571-m03) Calling .DriverName
	I1104 11:06:31.108396   45068 mustload.go:65] Loading cluster: ha-931571
	I1104 11:06:31.108577   45068 config.go:182] Loaded profile config "ha-931571": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1104 11:06:31.108824   45068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 11:06:31.108854   45068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 11:06:31.123641   45068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37749
	I1104 11:06:31.124017   45068 main.go:141] libmachine: () Calling .GetVersion
	I1104 11:06:31.124798   45068 main.go:141] libmachine: Using API Version  1
	I1104 11:06:31.124819   45068 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 11:06:31.125118   45068 main.go:141] libmachine: () Calling .GetMachineName
	I1104 11:06:31.125294   45068 main.go:141] libmachine: (ha-931571) Calling .GetState
	I1104 11:06:31.126837   45068 host.go:66] Checking if "ha-931571" exists ...
	I1104 11:06:31.127150   45068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 11:06:31.127181   45068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 11:06:31.142213   45068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36989
	I1104 11:06:31.142661   45068 main.go:141] libmachine: () Calling .GetVersion
	I1104 11:06:31.143090   45068 main.go:141] libmachine: Using API Version  1
	I1104 11:06:31.143113   45068 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 11:06:31.143487   45068 main.go:141] libmachine: () Calling .GetMachineName
	I1104 11:06:31.143683   45068 main.go:141] libmachine: (ha-931571) Calling .DriverName
	I1104 11:06:31.144115   45068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 11:06:31.144174   45068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 11:06:31.159889   45068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35513
	I1104 11:06:31.160335   45068 main.go:141] libmachine: () Calling .GetVersion
	I1104 11:06:31.160804   45068 main.go:141] libmachine: Using API Version  1
	I1104 11:06:31.160827   45068 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 11:06:31.161116   45068 main.go:141] libmachine: () Calling .GetMachineName
	I1104 11:06:31.161303   45068 main.go:141] libmachine: (ha-931571-m02) Calling .GetState
	I1104 11:06:31.163095   45068 host.go:66] Checking if "ha-931571-m02" exists ...
	I1104 11:06:31.163538   45068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 11:06:31.163589   45068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 11:06:31.179092   45068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36949
	I1104 11:06:31.179518   45068 main.go:141] libmachine: () Calling .GetVersion
	I1104 11:06:31.180009   45068 main.go:141] libmachine: Using API Version  1
	I1104 11:06:31.180028   45068 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 11:06:31.180320   45068 main.go:141] libmachine: () Calling .GetMachineName
	I1104 11:06:31.180488   45068 main.go:141] libmachine: (ha-931571-m02) Calling .DriverName
	I1104 11:06:31.181098   45068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 11:06:31.181173   45068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 11:06:31.195703   45068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42833
	I1104 11:06:31.196166   45068 main.go:141] libmachine: () Calling .GetVersion
	I1104 11:06:31.196692   45068 main.go:141] libmachine: Using API Version  1
	I1104 11:06:31.196710   45068 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 11:06:31.197017   45068 main.go:141] libmachine: () Calling .GetMachineName
	I1104 11:06:31.197201   45068 main.go:141] libmachine: (ha-931571-m03) Calling .GetState
	I1104 11:06:31.199254   45068 host.go:66] Checking if "ha-931571-m03" exists ...
	I1104 11:06:31.199572   45068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 11:06:31.199619   45068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 11:06:31.215132   45068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41255
	I1104 11:06:31.215539   45068 main.go:141] libmachine: () Calling .GetVersion
	I1104 11:06:31.215963   45068 main.go:141] libmachine: Using API Version  1
	I1104 11:06:31.215978   45068 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 11:06:31.216316   45068 main.go:141] libmachine: () Calling .GetMachineName
	I1104 11:06:31.216492   45068 main.go:141] libmachine: (ha-931571-m03) Calling .DriverName
	I1104 11:06:31.216634   45068 api_server.go:166] Checking apiserver status ...
	I1104 11:06:31.216683   45068 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 11:06:31.216706   45068 main.go:141] libmachine: (ha-931571) Calling .GetSSHHostname
	I1104 11:06:31.219438   45068 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:06:31.219765   45068 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 11:06:31.219790   45068 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:06:31.219948   45068 main.go:141] libmachine: (ha-931571) Calling .GetSSHPort
	I1104 11:06:31.220109   45068 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 11:06:31.220292   45068 main.go:141] libmachine: (ha-931571) Calling .GetSSHUsername
	I1104 11:06:31.220433   45068 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571/id_rsa Username:docker}
	I1104 11:06:31.310707   45068 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/5476/cgroup
	W1104 11:06:31.319561   45068 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/5476/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1104 11:06:31.319606   45068 ssh_runner.go:195] Run: ls
	I1104 11:06:31.323467   45068 api_server.go:253] Checking apiserver healthz at https://192.168.39.67:8443/healthz ...
	I1104 11:06:31.327274   45068 api_server.go:279] https://192.168.39.67:8443/healthz returned 200:
	ok
	I1104 11:06:31.327332   45068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl drain ha-931571-m03 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data
	I1104 11:06:34.469770   45068 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl drain ha-931571-m03 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data: (3.142402377s)
	I1104 11:06:34.469819   45068 node.go:128] successfully drained node "ha-931571-m03"
	I1104 11:06:34.469878   45068 ssh_runner.go:195] Run: systemctl --version
	I1104 11:06:34.469903   45068 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHHostname
	I1104 11:06:34.472996   45068 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 11:06:34.473482   45068 main.go:141] libmachine: (ha-931571-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f5:de", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 12:05:31 +0000 UTC Type:0 Mac:52:54:00:30:f5:de Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-931571-m03 Clientid:01:52:54:00:30:f5:de}
	I1104 11:06:34.473509   45068 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined IP address 192.168.39.57 and MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 11:06:34.473694   45068 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHPort
	I1104 11:06:34.473892   45068 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHKeyPath
	I1104 11:06:34.474058   45068 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHUsername
	I1104 11:06:34.474193   45068 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571-m03/id_rsa Username:docker}
	I1104 11:06:34.551671   45068 ssh_runner.go:195] Run: /bin/bash -c "KUBECONFIG=/var/lib/minikube/kubeconfig sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --force --ignore-preflight-errors=all --cri-socket=unix:///var/run/crio/crio.sock"
	I1104 11:06:45.624245   45068 ssh_runner.go:235] Completed: /bin/bash -c "KUBECONFIG=/var/lib/minikube/kubeconfig sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --force --ignore-preflight-errors=all --cri-socket=unix:///var/run/crio/crio.sock": (11.072539069s)
	I1104 11:06:45.624280   45068 node.go:155] successfully reset node "ha-931571-m03"
	I1104 11:06:45.624811   45068 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19906-19898/kubeconfig
	I1104 11:06:45.645488   45068 kapi.go:59] client config for ha-931571: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/client.crt", KeyFile:"/home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/client.key", CAFile:"/home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2437da0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1104 11:06:45.645927   45068 cert_rotation.go:140] Starting client certificate rotation controller
	I1104 11:06:45.646222   45068 round_trippers.go:463] DELETE https://192.168.39.254:8443/api/v1/nodes/ha-931571-m03
	I1104 11:06:45.646237   45068 round_trippers.go:469] Request Headers:
	I1104 11:06:45.646245   45068 round_trippers.go:473]     Accept: application/json, */*
	I1104 11:06:45.646249   45068 round_trippers.go:473]     Content-Type: application/json
	I1104 11:06:45.646251   45068 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 11:06:45.646703   45068 round_trippers.go:574] Response Status:  in 0 milliseconds
	I1104 11:06:45.646765   45068 retry.go:31] will retry after 461.543595ms: Delete "https://192.168.39.254:8443/api/v1/nodes/ha-931571-m03": dial tcp 192.168.39.254:8443: connect: connection refused
	I1104 11:06:46.109370   45068 round_trippers.go:463] DELETE https://192.168.39.254:8443/api/v1/nodes/ha-931571-m03
	I1104 11:06:46.109394   45068 round_trippers.go:469] Request Headers:
	I1104 11:06:46.109402   45068 round_trippers.go:473]     Accept: application/json, */*
	I1104 11:06:46.109406   45068 round_trippers.go:473]     Content-Type: application/json
	I1104 11:06:46.109409   45068 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 11:06:46.109890   45068 round_trippers.go:574] Response Status:  in 0 milliseconds
	I1104 11:06:46.109942   45068 retry.go:31] will retry after 941.93794ms: Delete "https://192.168.39.254:8443/api/v1/nodes/ha-931571-m03": dial tcp 192.168.39.254:8443: connect: connection refused
	I1104 11:06:47.052063   45068 round_trippers.go:463] DELETE https://192.168.39.254:8443/api/v1/nodes/ha-931571-m03
	I1104 11:06:47.052082   45068 round_trippers.go:469] Request Headers:
	I1104 11:06:47.052090   45068 round_trippers.go:473]     Accept: application/json, */*
	I1104 11:06:47.052093   45068 round_trippers.go:473]     Content-Type: application/json
	I1104 11:06:47.052096   45068 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 11:06:47.052557   45068 round_trippers.go:574] Response Status:  in 0 milliseconds
	I1104 11:06:47.052603   45068 retry.go:31] will retry after 623.280347ms: Delete "https://192.168.39.254:8443/api/v1/nodes/ha-931571-m03": dial tcp 192.168.39.254:8443: connect: connection refused
	I1104 11:06:47.676216   45068 round_trippers.go:463] DELETE https://192.168.39.254:8443/api/v1/nodes/ha-931571-m03
	I1104 11:06:47.676239   45068 round_trippers.go:469] Request Headers:
	I1104 11:06:47.676247   45068 round_trippers.go:473]     Accept: application/json, */*
	I1104 11:06:47.676252   45068 round_trippers.go:473]     Content-Type: application/json
	I1104 11:06:47.676255   45068 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 11:06:47.676726   45068 round_trippers.go:574] Response Status:  in 0 milliseconds
	I1104 11:06:47.676781   45068 retry.go:31] will retry after 1.544884427s: Delete "https://192.168.39.254:8443/api/v1/nodes/ha-931571-m03": dial tcp 192.168.39.254:8443: connect: connection refused
	I1104 11:06:49.222447   45068 round_trippers.go:463] DELETE https://192.168.39.254:8443/api/v1/nodes/ha-931571-m03
	I1104 11:06:49.222468   45068 round_trippers.go:469] Request Headers:
	I1104 11:06:49.222477   45068 round_trippers.go:473]     Accept: application/json, */*
	I1104 11:06:49.222480   45068 round_trippers.go:473]     Content-Type: application/json
	I1104 11:06:49.222483   45068 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 11:06:49.222947   45068 round_trippers.go:574] Response Status:  in 0 milliseconds
	I1104 11:06:49.223013   45068 retry.go:31] will retry after 2.661295676s: Delete "https://192.168.39.254:8443/api/v1/nodes/ha-931571-m03": dial tcp 192.168.39.254:8443: connect: connection refused
	I1104 11:06:51.885380   45068 round_trippers.go:463] DELETE https://192.168.39.254:8443/api/v1/nodes/ha-931571-m03
	I1104 11:06:51.885401   45068 round_trippers.go:469] Request Headers:
	I1104 11:06:51.885409   45068 round_trippers.go:473]     Accept: application/json, */*
	I1104 11:06:51.885412   45068 round_trippers.go:473]     Content-Type: application/json
	I1104 11:06:51.885414   45068 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 11:06:51.885844   45068 round_trippers.go:574] Response Status:  in 0 milliseconds
	I1104 11:06:51.885883   45068 retry.go:31] will retry after 3.891094495s: Delete "https://192.168.39.254:8443/api/v1/nodes/ha-931571-m03": dial tcp 192.168.39.254:8443: connect: connection refused
	I1104 11:06:55.778015   45068 round_trippers.go:463] DELETE https://192.168.39.254:8443/api/v1/nodes/ha-931571-m03
	I1104 11:06:55.778037   45068 round_trippers.go:469] Request Headers:
	I1104 11:06:55.778049   45068 round_trippers.go:473]     Accept: application/json, */*
	I1104 11:06:55.778054   45068 round_trippers.go:473]     Content-Type: application/json
	I1104 11:06:55.778058   45068 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 11:06:55.778597   45068 round_trippers.go:574] Response Status:  in 0 milliseconds
	I1104 11:06:55.778651   45068 retry.go:31] will retry after 5.144826578s: Delete "https://192.168.39.254:8443/api/v1/nodes/ha-931571-m03": dial tcp 192.168.39.254:8443: connect: connection refused
	I1104 11:07:00.923835   45068 round_trippers.go:463] DELETE https://192.168.39.254:8443/api/v1/nodes/ha-931571-m03
	I1104 11:07:00.923853   45068 round_trippers.go:469] Request Headers:
	I1104 11:07:00.923862   45068 round_trippers.go:473]     Content-Type: application/json
	I1104 11:07:00.923881   45068 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 11:07:00.923885   45068 round_trippers.go:473]     Accept: application/json, */*
	I1104 11:07:00.924420   45068 round_trippers.go:574] Response Status:  in 0 milliseconds
	I1104 11:07:00.924463   45068 retry.go:31] will retry after 9.432525542s: Delete "https://192.168.39.254:8443/api/v1/nodes/ha-931571-m03": dial tcp 192.168.39.254:8443: connect: connection refused
	I1104 11:07:10.360703   45068 round_trippers.go:463] DELETE https://192.168.39.254:8443/api/v1/nodes/ha-931571-m03
	I1104 11:07:10.360727   45068 round_trippers.go:469] Request Headers:
	I1104 11:07:10.360735   45068 round_trippers.go:473]     Accept: application/json, */*
	I1104 11:07:10.360739   45068 round_trippers.go:473]     Content-Type: application/json
	I1104 11:07:10.360742   45068 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 11:07:10.361209   45068 round_trippers.go:574] Response Status:  in 0 milliseconds
	I1104 11:07:10.361278   45068 retry.go:31] will retry after 9.247471258s: Delete "https://192.168.39.254:8443/api/v1/nodes/ha-931571-m03": dial tcp 192.168.39.254:8443: connect: connection refused
	I1104 11:07:19.609399   45068 round_trippers.go:463] DELETE https://192.168.39.254:8443/api/v1/nodes/ha-931571-m03
	I1104 11:07:19.609420   45068 round_trippers.go:469] Request Headers:
	I1104 11:07:19.609435   45068 round_trippers.go:473]     Content-Type: application/json
	I1104 11:07:19.609440   45068 round_trippers.go:473]     Accept: application/json, */*
	I1104 11:07:19.609444   45068 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 11:07:19.609924   45068 round_trippers.go:574] Response Status:  in 0 milliseconds
	I1104 11:07:19.609982   45068 retry.go:31] will retry after 9.924202501s: Delete "https://192.168.39.254:8443/api/v1/nodes/ha-931571-m03": dial tcp 192.168.39.254:8443: connect: connection refused
	I1104 11:07:29.535351   45068 round_trippers.go:463] DELETE https://192.168.39.254:8443/api/v1/nodes/ha-931571-m03
	I1104 11:07:29.535378   45068 round_trippers.go:469] Request Headers:
	I1104 11:07:29.535388   45068 round_trippers.go:473]     Accept: application/json, */*
	I1104 11:07:29.535393   45068 round_trippers.go:473]     Content-Type: application/json
	I1104 11:07:29.535396   45068 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 11:07:29.535866   45068 round_trippers.go:574] Response Status:  in 0 milliseconds
	I1104 11:07:29.535914   45068 retry.go:31] will retry after 32.795233721s: Delete "https://192.168.39.254:8443/api/v1/nodes/ha-931571-m03": dial tcp 192.168.39.254:8443: connect: connection refused
	I1104 11:08:02.331973   45068 round_trippers.go:463] DELETE https://192.168.39.254:8443/api/v1/nodes/ha-931571-m03
	I1104 11:08:02.331994   45068 round_trippers.go:469] Request Headers:
	I1104 11:08:02.332002   45068 round_trippers.go:473]     Accept: application/json, */*
	I1104 11:08:02.332006   45068 round_trippers.go:473]     Content-Type: application/json
	I1104 11:08:02.332010   45068 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 11:08:02.332531   45068 round_trippers.go:574] Response Status:  in 0 milliseconds
	I1104 11:08:02.332597   45068 retry.go:31] will retry after 23.270596931s: Delete "https://192.168.39.254:8443/api/v1/nodes/ha-931571-m03": dial tcp 192.168.39.254:8443: connect: connection refused
	I1104 11:08:25.605402   45068 round_trippers.go:463] DELETE https://192.168.39.254:8443/api/v1/nodes/ha-931571-m03
	I1104 11:08:25.605424   45068 round_trippers.go:469] Request Headers:
	I1104 11:08:25.605432   45068 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1104 11:08:25.605437   45068 round_trippers.go:473]     Accept: application/json, */*
	I1104 11:08:25.605443   45068 round_trippers.go:473]     Content-Type: application/json
	I1104 11:08:25.605947   45068 round_trippers.go:574] Response Status:  in 0 milliseconds
	E1104 11:08:25.606006   45068 node.go:177] kubectl delete node "ha-931571-m03" failed: Delete "https://192.168.39.254:8443/api/v1/nodes/ha-931571-m03": dial tcp 192.168.39.254:8443: connect: connection refused
	I1104 11:08:25.608116   45068 out.go:201] 
	W1104 11:08:25.609376   45068 out.go:270] X Exiting due to GUEST_NODE_DELETE: deleting node: Delete "https://192.168.39.254:8443/api/v1/nodes/ha-931571-m03": dial tcp 192.168.39.254:8443: connect: connection refused
	W1104 11:08:25.609394   45068 out.go:270] * 
	W1104 11:08:25.611696   45068 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_494011a6b05fec7d81170870a2aee2ef446d16a4_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1104 11:08:25.613117   45068 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:491: node delete returned an error. args "out/minikube-linux-amd64 -p ha-931571 node delete m03 -v=7 --alsologtostderr": exit status 80
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-931571 status -v=7 --alsologtostderr
ha_test.go:495: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-931571 status -v=7 --alsologtostderr: exit status 2 (28.988338812s)

                                                
                                                
-- stdout --
	ha-931571
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-931571-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-931571-m03
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-931571-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1104 11:08:25.662250   45591 out.go:345] Setting OutFile to fd 1 ...
	I1104 11:08:25.662369   45591 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1104 11:08:25.662379   45591 out.go:358] Setting ErrFile to fd 2...
	I1104 11:08:25.662383   45591 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1104 11:08:25.662552   45591 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19906-19898/.minikube/bin
	I1104 11:08:25.662701   45591 out.go:352] Setting JSON to false
	I1104 11:08:25.662725   45591 mustload.go:65] Loading cluster: ha-931571
	I1104 11:08:25.662788   45591 notify.go:220] Checking for updates...
	I1104 11:08:25.663294   45591 config.go:182] Loaded profile config "ha-931571": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1104 11:08:25.663322   45591 status.go:174] checking status of ha-931571 ...
	I1104 11:08:25.663856   45591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 11:08:25.663912   45591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 11:08:25.688179   45591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44675
	I1104 11:08:25.688724   45591 main.go:141] libmachine: () Calling .GetVersion
	I1104 11:08:25.689302   45591 main.go:141] libmachine: Using API Version  1
	I1104 11:08:25.689335   45591 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 11:08:25.689855   45591 main.go:141] libmachine: () Calling .GetMachineName
	I1104 11:08:25.690049   45591 main.go:141] libmachine: (ha-931571) Calling .GetState
	I1104 11:08:25.692374   45591 status.go:371] ha-931571 host status = "Running" (err=<nil>)
	I1104 11:08:25.692390   45591 host.go:66] Checking if "ha-931571" exists ...
	I1104 11:08:25.692684   45591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 11:08:25.692726   45591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 11:08:25.708320   45591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34945
	I1104 11:08:25.708855   45591 main.go:141] libmachine: () Calling .GetVersion
	I1104 11:08:25.709362   45591 main.go:141] libmachine: Using API Version  1
	I1104 11:08:25.709385   45591 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 11:08:25.709724   45591 main.go:141] libmachine: () Calling .GetMachineName
	I1104 11:08:25.709919   45591 main.go:141] libmachine: (ha-931571) Calling .GetIP
	I1104 11:08:25.712976   45591 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:08:25.713565   45591 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 11:08:25.713591   45591 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:08:25.713722   45591 host.go:66] Checking if "ha-931571" exists ...
	I1104 11:08:25.715599   45591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 11:08:25.715649   45591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 11:08:25.731559   45591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42879
	I1104 11:08:25.731958   45591 main.go:141] libmachine: () Calling .GetVersion
	I1104 11:08:25.732524   45591 main.go:141] libmachine: Using API Version  1
	I1104 11:08:25.732553   45591 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 11:08:25.732908   45591 main.go:141] libmachine: () Calling .GetMachineName
	I1104 11:08:25.733075   45591 main.go:141] libmachine: (ha-931571) Calling .DriverName
	I1104 11:08:25.733277   45591 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1104 11:08:25.733315   45591 main.go:141] libmachine: (ha-931571) Calling .GetSSHHostname
	I1104 11:08:25.735735   45591 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:08:25.736233   45591 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 11:08:25.736252   45591 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:08:25.736419   45591 main.go:141] libmachine: (ha-931571) Calling .GetSSHPort
	I1104 11:08:25.736583   45591 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 11:08:25.736733   45591 main.go:141] libmachine: (ha-931571) Calling .GetSSHUsername
	I1104 11:08:25.736846   45591 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571/id_rsa Username:docker}
	I1104 11:08:25.818014   45591 ssh_runner.go:195] Run: systemctl --version
	I1104 11:08:25.827151   45591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1104 11:08:25.842565   45591 kubeconfig.go:125] found "ha-931571" server: "https://192.168.39.254:8443"
	I1104 11:08:25.842593   45591 api_server.go:166] Checking apiserver status ...
	I1104 11:08:25.842623   45591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 11:08:25.857146   45591 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/5476/cgroup
	W1104 11:08:25.866640   45591 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/5476/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1104 11:08:25.866690   45591 ssh_runner.go:195] Run: ls
	I1104 11:08:25.871068   45591 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1104 11:08:25.871629   45591 api_server.go:269] stopped: https://192.168.39.254:8443/healthz: Get "https://192.168.39.254:8443/healthz": dial tcp 192.168.39.254:8443: connect: connection refused
	I1104 11:08:25.871669   45591 retry.go:31] will retry after 255.606059ms: state is "Stopped"
	I1104 11:08:26.128101   45591 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1104 11:08:26.128732   45591 api_server.go:269] stopped: https://192.168.39.254:8443/healthz: Get "https://192.168.39.254:8443/healthz": dial tcp 192.168.39.254:8443: connect: connection refused
	I1104 11:08:26.128767   45591 retry.go:31] will retry after 379.406925ms: state is "Stopped"
	I1104 11:08:26.508297   45591 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1104 11:08:26.508924   45591 api_server.go:269] stopped: https://192.168.39.254:8443/healthz: Get "https://192.168.39.254:8443/healthz": dial tcp 192.168.39.254:8443: connect: connection refused
	I1104 11:08:26.508961   45591 retry.go:31] will retry after 432.099682ms: state is "Stopped"
	I1104 11:08:26.941304   45591 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1104 11:08:26.941922   45591 api_server.go:269] stopped: https://192.168.39.254:8443/healthz: Get "https://192.168.39.254:8443/healthz": dial tcp 192.168.39.254:8443: connect: connection refused
	I1104 11:08:26.941957   45591 retry.go:31] will retry after 406.994006ms: state is "Stopped"
	I1104 11:08:27.349566   45591 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1104 11:08:27.350149   45591 api_server.go:269] stopped: https://192.168.39.254:8443/healthz: Get "https://192.168.39.254:8443/healthz": dial tcp 192.168.39.254:8443: connect: connection refused
	I1104 11:08:27.350189   45591 retry.go:31] will retry after 473.146889ms: state is "Stopped"
	I1104 11:08:27.823774   45591 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1104 11:08:27.824405   45591 api_server.go:269] stopped: https://192.168.39.254:8443/healthz: Get "https://192.168.39.254:8443/healthz": dial tcp 192.168.39.254:8443: connect: connection refused
	I1104 11:08:27.824450   45591 retry.go:31] will retry after 845.181342ms: state is "Stopped"
	I1104 11:08:28.670413   45591 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1104 11:08:28.671100   45591 api_server.go:269] stopped: https://192.168.39.254:8443/healthz: Get "https://192.168.39.254:8443/healthz": dial tcp 192.168.39.254:8443: connect: connection refused
	I1104 11:08:28.671136   45591 retry.go:31] will retry after 1.128993074s: state is "Stopped"
	I1104 11:08:29.800295   45591 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1104 11:08:29.800933   45591 api_server.go:269] stopped: https://192.168.39.254:8443/healthz: Get "https://192.168.39.254:8443/healthz": dial tcp 192.168.39.254:8443: connect: connection refused
	I1104 11:08:29.800968   45591 retry.go:31] will retry after 1.314626893s: state is "Stopped"
	I1104 11:08:31.115905   45591 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1104 11:08:31.116624   45591 api_server.go:269] stopped: https://192.168.39.254:8443/healthz: Get "https://192.168.39.254:8443/healthz": dial tcp 192.168.39.254:8443: connect: connection refused
	I1104 11:08:31.116664   45591 retry.go:31] will retry after 1.526439247s: state is "Stopped"
	I1104 11:08:32.644319   45591 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1104 11:08:32.644926   45591 api_server.go:269] stopped: https://192.168.39.254:8443/healthz: Get "https://192.168.39.254:8443/healthz": dial tcp 192.168.39.254:8443: connect: connection refused
	I1104 11:08:32.644968   45591 retry.go:31] will retry after 1.801264128s: state is "Stopped"
	I1104 11:08:34.446986   45591 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1104 11:08:34.447574   45591 api_server.go:269] stopped: https://192.168.39.254:8443/healthz: Get "https://192.168.39.254:8443/healthz": dial tcp 192.168.39.254:8443: connect: connection refused
	I1104 11:08:34.447612   45591 retry.go:31] will retry after 2.797184413s: state is "Stopped"
	I1104 11:08:37.245317   45591 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1104 11:08:37.245883   45591 api_server.go:269] stopped: https://192.168.39.254:8443/healthz: Get "https://192.168.39.254:8443/healthz": dial tcp 192.168.39.254:8443: connect: connection refused
	I1104 11:08:37.245922   45591 retry.go:31] will retry after 2.969003151s: state is "Stopped"
	I1104 11:08:40.215092   45591 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1104 11:08:40.215852   45591 api_server.go:269] stopped: https://192.168.39.254:8443/healthz: Get "https://192.168.39.254:8443/healthz": dial tcp 192.168.39.254:8443: connect: connection refused
	I1104 11:08:40.215892   45591 status.go:463] ha-931571 apiserver status = Running (err=<nil>)
	I1104 11:08:40.215898   45591 status.go:176] ha-931571 status: &{Name:ha-931571 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1104 11:08:40.215914   45591 status.go:174] checking status of ha-931571-m02 ...
	I1104 11:08:40.216329   45591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 11:08:40.216381   45591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 11:08:40.231669   45591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44509
	I1104 11:08:40.232190   45591 main.go:141] libmachine: () Calling .GetVersion
	I1104 11:08:40.232707   45591 main.go:141] libmachine: Using API Version  1
	I1104 11:08:40.232734   45591 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 11:08:40.233102   45591 main.go:141] libmachine: () Calling .GetMachineName
	I1104 11:08:40.233302   45591 main.go:141] libmachine: (ha-931571-m02) Calling .GetState
	I1104 11:08:40.234812   45591 status.go:371] ha-931571-m02 host status = "Running" (err=<nil>)
	I1104 11:08:40.234830   45591 host.go:66] Checking if "ha-931571-m02" exists ...
	I1104 11:08:40.235150   45591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 11:08:40.235191   45591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 11:08:40.249720   45591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37133
	I1104 11:08:40.250170   45591 main.go:141] libmachine: () Calling .GetVersion
	I1104 11:08:40.250702   45591 main.go:141] libmachine: Using API Version  1
	I1104 11:08:40.250723   45591 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 11:08:40.250996   45591 main.go:141] libmachine: () Calling .GetMachineName
	I1104 11:08:40.251163   45591 main.go:141] libmachine: (ha-931571-m02) Calling .GetIP
	I1104 11:08:40.255544   45591 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 11:08:40.256017   45591 main.go:141] libmachine: (ha-931571-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:86:6b", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 12:03:30 +0000 UTC Type:0 Mac:52:54:00:5c:86:6b Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-931571-m02 Clientid:01:52:54:00:5c:86:6b}
	I1104 11:08:40.256062   45591 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 11:08:40.256217   45591 host.go:66] Checking if "ha-931571-m02" exists ...
	I1104 11:08:40.256491   45591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 11:08:40.256529   45591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 11:08:40.272454   45591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37035
	I1104 11:08:40.272970   45591 main.go:141] libmachine: () Calling .GetVersion
	I1104 11:08:40.273471   45591 main.go:141] libmachine: Using API Version  1
	I1104 11:08:40.273496   45591 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 11:08:40.273852   45591 main.go:141] libmachine: () Calling .GetMachineName
	I1104 11:08:40.274021   45591 main.go:141] libmachine: (ha-931571-m02) Calling .DriverName
	I1104 11:08:40.274183   45591 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1104 11:08:40.274203   45591 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHHostname
	I1104 11:08:40.276740   45591 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 11:08:40.277129   45591 main.go:141] libmachine: (ha-931571-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:86:6b", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 12:03:30 +0000 UTC Type:0 Mac:52:54:00:5c:86:6b Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-931571-m02 Clientid:01:52:54:00:5c:86:6b}
	I1104 11:08:40.277152   45591 main.go:141] libmachine: (ha-931571-m02) DBG | domain ha-931571-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:5c:86:6b in network mk-ha-931571
	I1104 11:08:40.277287   45591 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHPort
	I1104 11:08:40.277426   45591 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHKeyPath
	I1104 11:08:40.277580   45591 main.go:141] libmachine: (ha-931571-m02) Calling .GetSSHUsername
	I1104 11:08:40.277693   45591 sshutil.go:53] new ssh client: &{IP:192.168.39.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571-m02/id_rsa Username:docker}
	I1104 11:08:40.356062   45591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1104 11:08:40.369900   45591 kubeconfig.go:125] found "ha-931571" server: "https://192.168.39.254:8443"
	I1104 11:08:40.369927   45591 api_server.go:166] Checking apiserver status ...
	I1104 11:08:40.369959   45591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 11:08:40.383369   45591 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1408/cgroup
	W1104 11:08:40.392070   45591 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1408/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1104 11:08:40.392118   45591 ssh_runner.go:195] Run: ls
	I1104 11:08:40.396064   45591 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1104 11:08:40.396675   45591 api_server.go:269] stopped: https://192.168.39.254:8443/healthz: Get "https://192.168.39.254:8443/healthz": dial tcp 192.168.39.254:8443: connect: connection refused
	I1104 11:08:40.396708   45591 retry.go:31] will retry after 309.693257ms: state is "Stopped"
	I1104 11:08:40.706461   45591 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1104 11:08:40.707062   45591 api_server.go:269] stopped: https://192.168.39.254:8443/healthz: Get "https://192.168.39.254:8443/healthz": dial tcp 192.168.39.254:8443: connect: connection refused
	I1104 11:08:40.707097   45591 retry.go:31] will retry after 361.761392ms: state is "Stopped"
	I1104 11:08:41.069054   45591 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1104 11:08:41.069707   45591 api_server.go:269] stopped: https://192.168.39.254:8443/healthz: Get "https://192.168.39.254:8443/healthz": dial tcp 192.168.39.254:8443: connect: connection refused
	I1104 11:08:41.069752   45591 retry.go:31] will retry after 315.630101ms: state is "Stopped"
	I1104 11:08:41.386250   45591 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1104 11:08:41.386886   45591 api_server.go:269] stopped: https://192.168.39.254:8443/healthz: Get "https://192.168.39.254:8443/healthz": dial tcp 192.168.39.254:8443: connect: connection refused
	I1104 11:08:41.386924   45591 retry.go:31] will retry after 388.427782ms: state is "Stopped"
	I1104 11:08:41.775423   45591 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1104 11:08:41.776068   45591 api_server.go:269] stopped: https://192.168.39.254:8443/healthz: Get "https://192.168.39.254:8443/healthz": dial tcp 192.168.39.254:8443: connect: connection refused
	I1104 11:08:41.776107   45591 retry.go:31] will retry after 629.520589ms: state is "Stopped"
	I1104 11:08:42.406370   45591 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1104 11:08:42.407022   45591 api_server.go:269] stopped: https://192.168.39.254:8443/healthz: Get "https://192.168.39.254:8443/healthz": dial tcp 192.168.39.254:8443: connect: connection refused
	I1104 11:08:42.407059   45591 retry.go:31] will retry after 665.078062ms: state is "Stopped"
	I1104 11:08:43.072826   45591 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1104 11:08:43.073467   45591 api_server.go:269] stopped: https://192.168.39.254:8443/healthz: Get "https://192.168.39.254:8443/healthz": dial tcp 192.168.39.254:8443: connect: connection refused
	I1104 11:08:43.073503   45591 retry.go:31] will retry after 1.132966525s: state is "Stopped"
	I1104 11:08:44.206747   45591 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1104 11:08:44.207386   45591 api_server.go:269] stopped: https://192.168.39.254:8443/healthz: Get "https://192.168.39.254:8443/healthz": dial tcp 192.168.39.254:8443: connect: connection refused
	I1104 11:08:44.207432   45591 retry.go:31] will retry after 1.259616876s: state is "Stopped"
	I1104 11:08:45.468031   45591 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1104 11:08:45.468680   45591 api_server.go:269] stopped: https://192.168.39.254:8443/healthz: Get "https://192.168.39.254:8443/healthz": dial tcp 192.168.39.254:8443: connect: connection refused
	I1104 11:08:45.468723   45591 retry.go:31] will retry after 1.815957085s: state is "Stopped"
	I1104 11:08:47.285305   45591 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1104 11:08:47.286024   45591 api_server.go:269] stopped: https://192.168.39.254:8443/healthz: Get "https://192.168.39.254:8443/healthz": dial tcp 192.168.39.254:8443: connect: connection refused
	I1104 11:08:47.286067   45591 retry.go:31] will retry after 1.886775191s: state is "Stopped"
	I1104 11:08:49.173387   45591 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1104 11:08:49.174007   45591 api_server.go:269] stopped: https://192.168.39.254:8443/healthz: Get "https://192.168.39.254:8443/healthz": dial tcp 192.168.39.254:8443: connect: connection refused
	I1104 11:08:49.174044   45591 retry.go:31] will retry after 2.871744857s: state is "Stopped"
	I1104 11:08:52.046784   45591 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1104 11:08:52.047382   45591 api_server.go:269] stopped: https://192.168.39.254:8443/healthz: Get "https://192.168.39.254:8443/healthz": dial tcp 192.168.39.254:8443: connect: connection refused
	I1104 11:08:52.047426   45591 retry.go:31] will retry after 2.232760807s: state is "Stopped"
	I1104 11:08:54.280854   45591 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1104 11:08:54.281600   45591 api_server.go:269] stopped: https://192.168.39.254:8443/healthz: Get "https://192.168.39.254:8443/healthz": dial tcp 192.168.39.254:8443: connect: connection refused
	I1104 11:08:54.281641   45591 status.go:463] ha-931571-m02 apiserver status = Running (err=<nil>)
	I1104 11:08:54.281648   45591 status.go:176] ha-931571-m02 status: &{Name:ha-931571-m02 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1104 11:08:54.281664   45591 status.go:174] checking status of ha-931571-m03 ...
	I1104 11:08:54.281935   45591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 11:08:54.281967   45591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 11:08:54.296802   45591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34997
	I1104 11:08:54.297252   45591 main.go:141] libmachine: () Calling .GetVersion
	I1104 11:08:54.297755   45591 main.go:141] libmachine: Using API Version  1
	I1104 11:08:54.297772   45591 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 11:08:54.298047   45591 main.go:141] libmachine: () Calling .GetMachineName
	I1104 11:08:54.298246   45591 main.go:141] libmachine: (ha-931571-m03) Calling .GetState
	I1104 11:08:54.299926   45591 status.go:371] ha-931571-m03 host status = "Running" (err=<nil>)
	I1104 11:08:54.299942   45591 host.go:66] Checking if "ha-931571-m03" exists ...
	I1104 11:08:54.300356   45591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 11:08:54.300399   45591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 11:08:54.314915   45591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45327
	I1104 11:08:54.315422   45591 main.go:141] libmachine: () Calling .GetVersion
	I1104 11:08:54.315917   45591 main.go:141] libmachine: Using API Version  1
	I1104 11:08:54.315937   45591 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 11:08:54.316257   45591 main.go:141] libmachine: () Calling .GetMachineName
	I1104 11:08:54.316434   45591 main.go:141] libmachine: (ha-931571-m03) Calling .GetIP
	I1104 11:08:54.318909   45591 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 11:08:54.319361   45591 main.go:141] libmachine: (ha-931571-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f5:de", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 12:05:31 +0000 UTC Type:0 Mac:52:54:00:30:f5:de Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-931571-m03 Clientid:01:52:54:00:30:f5:de}
	I1104 11:08:54.319386   45591 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined IP address 192.168.39.57 and MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 11:08:54.319579   45591 host.go:66] Checking if "ha-931571-m03" exists ...
	I1104 11:08:54.319866   45591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 11:08:54.319914   45591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 11:08:54.334331   45591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36047
	I1104 11:08:54.334724   45591 main.go:141] libmachine: () Calling .GetVersion
	I1104 11:08:54.335117   45591 main.go:141] libmachine: Using API Version  1
	I1104 11:08:54.335135   45591 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 11:08:54.335438   45591 main.go:141] libmachine: () Calling .GetMachineName
	I1104 11:08:54.335602   45591 main.go:141] libmachine: (ha-931571-m03) Calling .DriverName
	I1104 11:08:54.335765   45591 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1104 11:08:54.335785   45591 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHHostname
	I1104 11:08:54.338504   45591 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 11:08:54.338880   45591 main.go:141] libmachine: (ha-931571-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f5:de", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 12:05:31 +0000 UTC Type:0 Mac:52:54:00:30:f5:de Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-931571-m03 Clientid:01:52:54:00:30:f5:de}
	I1104 11:08:54.338909   45591 main.go:141] libmachine: (ha-931571-m03) DBG | domain ha-931571-m03 has defined IP address 192.168.39.57 and MAC address 52:54:00:30:f5:de in network mk-ha-931571
	I1104 11:08:54.339082   45591 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHPort
	I1104 11:08:54.339257   45591 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHKeyPath
	I1104 11:08:54.339405   45591 main.go:141] libmachine: (ha-931571-m03) Calling .GetSSHUsername
	I1104 11:08:54.339553   45591 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571-m03/id_rsa Username:docker}
	I1104 11:08:54.419982   45591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1104 11:08:54.434845   45591 kubeconfig.go:125] found "ha-931571" server: "https://192.168.39.254:8443"
	I1104 11:08:54.434868   45591 api_server.go:166] Checking apiserver status ...
	I1104 11:08:54.434896   45591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1104 11:08:54.446801   45591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1104 11:08:54.446821   45591 status.go:463] ha-931571-m03 apiserver status = Stopped (err=<nil>)
	I1104 11:08:54.446828   45591 status.go:176] ha-931571-m03 status: &{Name:ha-931571-m03 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1104 11:08:54.446841   45591 status.go:174] checking status of ha-931571-m04 ...
	I1104 11:08:54.447130   45591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 11:08:54.447162   45591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 11:08:54.462548   45591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41411
	I1104 11:08:54.462909   45591 main.go:141] libmachine: () Calling .GetVersion
	I1104 11:08:54.463327   45591 main.go:141] libmachine: Using API Version  1
	I1104 11:08:54.463345   45591 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 11:08:54.463655   45591 main.go:141] libmachine: () Calling .GetMachineName
	I1104 11:08:54.463824   45591 main.go:141] libmachine: (ha-931571-m04) Calling .GetState
	I1104 11:08:54.465418   45591 status.go:371] ha-931571-m04 host status = "Running" (err=<nil>)
	I1104 11:08:54.465434   45591 host.go:66] Checking if "ha-931571-m04" exists ...
	I1104 11:08:54.465754   45591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 11:08:54.465795   45591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 11:08:54.480868   45591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35163
	I1104 11:08:54.481350   45591 main.go:141] libmachine: () Calling .GetVersion
	I1104 11:08:54.481808   45591 main.go:141] libmachine: Using API Version  1
	I1104 11:08:54.481829   45591 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 11:08:54.482190   45591 main.go:141] libmachine: () Calling .GetMachineName
	I1104 11:08:54.482389   45591 main.go:141] libmachine: (ha-931571-m04) Calling .GetIP
	I1104 11:08:54.485349   45591 main.go:141] libmachine: (ha-931571-m04) DBG | domain ha-931571-m04 has defined MAC address 52:54:00:16:27:aa in network mk-ha-931571
	I1104 11:08:54.485758   45591 main.go:141] libmachine: (ha-931571-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:27:aa", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 12:06:15 +0000 UTC Type:0 Mac:52:54:00:16:27:aa Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:ha-931571-m04 Clientid:01:52:54:00:16:27:aa}
	I1104 11:08:54.485782   45591 main.go:141] libmachine: (ha-931571-m04) DBG | domain ha-931571-m04 has defined IP address 192.168.39.237 and MAC address 52:54:00:16:27:aa in network mk-ha-931571
	I1104 11:08:54.485941   45591 host.go:66] Checking if "ha-931571-m04" exists ...
	I1104 11:08:54.486216   45591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 11:08:54.486249   45591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 11:08:54.500916   45591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40697
	I1104 11:08:54.501374   45591 main.go:141] libmachine: () Calling .GetVersion
	I1104 11:08:54.501844   45591 main.go:141] libmachine: Using API Version  1
	I1104 11:08:54.501866   45591 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 11:08:54.502179   45591 main.go:141] libmachine: () Calling .GetMachineName
	I1104 11:08:54.502366   45591 main.go:141] libmachine: (ha-931571-m04) Calling .DriverName
	I1104 11:08:54.502586   45591 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1104 11:08:54.502614   45591 main.go:141] libmachine: (ha-931571-m04) Calling .GetSSHHostname
	I1104 11:08:54.505638   45591 main.go:141] libmachine: (ha-931571-m04) DBG | domain ha-931571-m04 has defined MAC address 52:54:00:16:27:aa in network mk-ha-931571
	I1104 11:08:54.506057   45591 main.go:141] libmachine: (ha-931571-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:27:aa", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 12:06:15 +0000 UTC Type:0 Mac:52:54:00:16:27:aa Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:ha-931571-m04 Clientid:01:52:54:00:16:27:aa}
	I1104 11:08:54.506074   45591 main.go:141] libmachine: (ha-931571-m04) DBG | domain ha-931571-m04 has defined IP address 192.168.39.237 and MAC address 52:54:00:16:27:aa in network mk-ha-931571
	I1104 11:08:54.506281   45591 main.go:141] libmachine: (ha-931571-m04) Calling .GetSSHPort
	I1104 11:08:54.506438   45591 main.go:141] libmachine: (ha-931571-m04) Calling .GetSSHKeyPath
	I1104 11:08:54.506567   45591 main.go:141] libmachine: (ha-931571-m04) Calling .GetSSHUsername
	I1104 11:08:54.506701   45591 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571-m04/id_rsa Username:docker}
	I1104 11:08:54.588192   45591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1104 11:08:54.602498   45591 status.go:176] ha-931571-m04 status: &{Name:ha-931571-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:497: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-931571 status -v=7 --alsologtostderr" : exit status 2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-931571 -n ha-931571
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-931571 -n ha-931571: exit status 2 (13.257039563s)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestMultiControlPlane/serial/DeleteSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-931571 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-931571 logs -n 25: (1.89551874s)
helpers_test.go:252: TestMultiControlPlane/serial/DeleteSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-931571 ssh -n                                                                 | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | ha-931571-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-931571 ssh -n ha-931571-m02 sudo cat                                          | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | /home/docker/cp-test_ha-931571-m03_ha-931571-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-931571 cp ha-931571-m03:/home/docker/cp-test.txt                              | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | ha-931571-m04:/home/docker/cp-test_ha-931571-m03_ha-931571-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-931571 ssh -n                                                                 | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | ha-931571-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-931571 ssh -n ha-931571-m04 sudo cat                                          | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | /home/docker/cp-test_ha-931571-m03_ha-931571-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-931571 cp testdata/cp-test.txt                                                | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | ha-931571-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-931571 ssh -n                                                                 | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | ha-931571-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-931571 cp ha-931571-m04:/home/docker/cp-test.txt                              | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile2369318263/001/cp-test_ha-931571-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-931571 ssh -n                                                                 | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | ha-931571-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-931571 cp ha-931571-m04:/home/docker/cp-test.txt                              | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | ha-931571:/home/docker/cp-test_ha-931571-m04_ha-931571.txt                       |           |         |         |                     |                     |
	| ssh     | ha-931571 ssh -n                                                                 | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | ha-931571-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-931571 ssh -n ha-931571 sudo cat                                              | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | /home/docker/cp-test_ha-931571-m04_ha-931571.txt                                 |           |         |         |                     |                     |
	| cp      | ha-931571 cp ha-931571-m04:/home/docker/cp-test.txt                              | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | ha-931571-m02:/home/docker/cp-test_ha-931571-m04_ha-931571-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-931571 ssh -n                                                                 | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | ha-931571-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-931571 ssh -n ha-931571-m02 sudo cat                                          | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | /home/docker/cp-test_ha-931571-m04_ha-931571-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-931571 cp ha-931571-m04:/home/docker/cp-test.txt                              | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | ha-931571-m03:/home/docker/cp-test_ha-931571-m04_ha-931571-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-931571 ssh -n                                                                 | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | ha-931571-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-931571 ssh -n ha-931571-m03 sudo cat                                          | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | /home/docker/cp-test_ha-931571-m04_ha-931571-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-931571 node stop m02 -v=7                                                     | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-931571 node start m02 -v=7                                                    | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:59 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-931571 -v=7                                                           | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:59 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-931571 -v=7                                                                | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:59 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-931571 --wait=true -v=7                                                    | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 11:01 UTC | 04 Nov 24 11:06 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-931571                                                                | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 11:06 UTC |                     |
	| node    | ha-931571 node delete m03 -v=7                                                   | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 11:06 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/11/04 11:01:36
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1104 11:01:36.135689   43487 out.go:345] Setting OutFile to fd 1 ...
	I1104 11:01:36.135831   43487 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1104 11:01:36.135841   43487 out.go:358] Setting ErrFile to fd 2...
	I1104 11:01:36.135848   43487 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1104 11:01:36.136026   43487 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19906-19898/.minikube/bin
	I1104 11:01:36.136622   43487 out.go:352] Setting JSON to false
	I1104 11:01:36.137570   43487 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":6247,"bootTime":1730711849,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1104 11:01:36.137665   43487 start.go:139] virtualization: kvm guest
	I1104 11:01:36.140736   43487 out.go:177] * [ha-931571] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1104 11:01:36.142255   43487 notify.go:220] Checking for updates...
	I1104 11:01:36.142280   43487 out.go:177]   - MINIKUBE_LOCATION=19906
	I1104 11:01:36.143792   43487 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1104 11:01:36.145520   43487 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19906-19898/kubeconfig
	I1104 11:01:36.147024   43487 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19906-19898/.minikube
	I1104 11:01:36.148374   43487 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1104 11:01:36.150002   43487 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1104 11:01:36.151746   43487 config.go:182] Loaded profile config "ha-931571": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1104 11:01:36.151854   43487 driver.go:394] Setting default libvirt URI to qemu:///system
	I1104 11:01:36.152270   43487 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 11:01:36.152323   43487 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 11:01:36.167782   43487 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35975
	I1104 11:01:36.168314   43487 main.go:141] libmachine: () Calling .GetVersion
	I1104 11:01:36.168871   43487 main.go:141] libmachine: Using API Version  1
	I1104 11:01:36.168896   43487 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 11:01:36.169315   43487 main.go:141] libmachine: () Calling .GetMachineName
	I1104 11:01:36.169538   43487 main.go:141] libmachine: (ha-931571) Calling .DriverName
	I1104 11:01:36.206070   43487 out.go:177] * Using the kvm2 driver based on existing profile
	I1104 11:01:36.207361   43487 start.go:297] selected driver: kvm2
	I1104 11:01:36.207389   43487 start.go:901] validating driver "kvm2" against &{Name:ha-931571 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.31.2 ClusterName:ha-931571 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.67 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.245 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.57 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.237 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false de
fault-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9P
Version:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1104 11:01:36.207518   43487 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1104 11:01:36.207957   43487 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1104 11:01:36.208077   43487 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19906-19898/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1104 11:01:36.225111   43487 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1104 11:01:36.225913   43487 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1104 11:01:36.225946   43487 cni.go:84] Creating CNI manager for ""
	I1104 11:01:36.225978   43487 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1104 11:01:36.226027   43487 start.go:340] cluster config:
	{Name:ha-931571 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-931571 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.67 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.245 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.57 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.237 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:fa
lse headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[
] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1104 11:01:36.226141   43487 iso.go:125] acquiring lock: {Name:mk00c7dd6e02d348844f079f7574057e15cae010 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1104 11:01:36.228347   43487 out.go:177] * Starting "ha-931571" primary control-plane node in "ha-931571" cluster
	I1104 11:01:36.229829   43487 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1104 11:01:36.229870   43487 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1104 11:01:36.229878   43487 cache.go:56] Caching tarball of preloaded images
	I1104 11:01:36.229952   43487 preload.go:172] Found /home/jenkins/minikube-integration/19906-19898/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1104 11:01:36.229964   43487 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1104 11:01:36.230064   43487 profile.go:143] Saving config to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/config.json ...
	I1104 11:01:36.230320   43487 start.go:360] acquireMachinesLock for ha-931571: {Name:mka4ed965095cfb58c9b0f5916edff39b4236a2f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1104 11:01:36.230371   43487 start.go:364] duration metric: took 27.926µs to acquireMachinesLock for "ha-931571"
	I1104 11:01:36.230386   43487 start.go:96] Skipping create...Using existing machine configuration
	I1104 11:01:36.230395   43487 fix.go:54] fixHost starting: 
	I1104 11:01:36.230733   43487 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 11:01:36.230769   43487 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 11:01:36.245984   43487 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45625
	I1104 11:01:36.246433   43487 main.go:141] libmachine: () Calling .GetVersion
	I1104 11:01:36.246934   43487 main.go:141] libmachine: Using API Version  1
	I1104 11:01:36.246955   43487 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 11:01:36.247232   43487 main.go:141] libmachine: () Calling .GetMachineName
	I1104 11:01:36.247395   43487 main.go:141] libmachine: (ha-931571) Calling .DriverName
	I1104 11:01:36.247568   43487 main.go:141] libmachine: (ha-931571) Calling .GetState
	I1104 11:01:36.249147   43487 fix.go:112] recreateIfNeeded on ha-931571: state=Running err=<nil>
	W1104 11:01:36.249199   43487 fix.go:138] unexpected machine state, will restart: <nil>
	I1104 11:01:36.251132   43487 out.go:177] * Updating the running kvm2 "ha-931571" VM ...
	I1104 11:01:36.252516   43487 machine.go:93] provisionDockerMachine start ...
	I1104 11:01:36.252546   43487 main.go:141] libmachine: (ha-931571) Calling .DriverName
	I1104 11:01:36.252780   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHHostname
	I1104 11:01:36.255202   43487 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:01:36.255594   43487 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 11:01:36.255616   43487 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:01:36.255731   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHPort
	I1104 11:01:36.255890   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 11:01:36.256009   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 11:01:36.256140   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHUsername
	I1104 11:01:36.256308   43487 main.go:141] libmachine: Using SSH client type: native
	I1104 11:01:36.256489   43487 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I1104 11:01:36.256500   43487 main.go:141] libmachine: About to run SSH command:
	hostname
	I1104 11:01:36.361800   43487 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-931571
	
	I1104 11:01:36.361835   43487 main.go:141] libmachine: (ha-931571) Calling .GetMachineName
	I1104 11:01:36.362053   43487 buildroot.go:166] provisioning hostname "ha-931571"
	I1104 11:01:36.362076   43487 main.go:141] libmachine: (ha-931571) Calling .GetMachineName
	I1104 11:01:36.362273   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHHostname
	I1104 11:01:36.365086   43487 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:01:36.365550   43487 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 11:01:36.365581   43487 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:01:36.365735   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHPort
	I1104 11:01:36.365939   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 11:01:36.366072   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 11:01:36.366277   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHUsername
	I1104 11:01:36.366448   43487 main.go:141] libmachine: Using SSH client type: native
	I1104 11:01:36.366691   43487 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I1104 11:01:36.366706   43487 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-931571 && echo "ha-931571" | sudo tee /etc/hostname
	I1104 11:01:36.493768   43487 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-931571
	
	I1104 11:01:36.493790   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHHostname
	I1104 11:01:36.496511   43487 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:01:36.496961   43487 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 11:01:36.496984   43487 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:01:36.497265   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHPort
	I1104 11:01:36.497539   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 11:01:36.497705   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 11:01:36.497875   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHUsername
	I1104 11:01:36.498037   43487 main.go:141] libmachine: Using SSH client type: native
	I1104 11:01:36.498202   43487 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I1104 11:01:36.498219   43487 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-931571' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-931571/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-931571' | sudo tee -a /etc/hosts; 
				fi
			fi
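The shell fragment above is the provisioner's idempotent hostname fix-up: if no /etc/hosts line already ends in the hostname, it either rewrites an existing 127.0.1.1 entry or appends one. A rough Go equivalent of that logic (a hypothetical helper operating on an arbitrary hosts file, not minikube's implementation):

	package main

	import (
		"fmt"
		"os"
		"regexp"
		"strings"
	)

	// ensureHostname mirrors the shell snippet in the log: if no line already
	// maps the hostname, either rewrite an existing "127.0.1.1 ..." entry or
	// append one. hostsPath would be /etc/hosts on the node.
	func ensureHostname(hostsPath, hostname string) error {
		data, err := os.ReadFile(hostsPath)
		if err != nil {
			return err
		}
		content := string(data)
		if regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(hostname) + `$`).MatchString(content) {
			return nil // already present, nothing to do
		}
		loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
		if loopback.MatchString(content) {
			content = loopback.ReplaceAllString(content, "127.0.1.1 "+hostname)
		} else {
			if !strings.HasSuffix(content, "\n") {
				content += "\n"
			}
			content += "127.0.1.1 " + hostname + "\n"
		}
		return os.WriteFile(hostsPath, []byte(content), 0644)
	}

	func main() {
		// "hosts.test" is a stand-in path for this sketch.
		if err := ensureHostname("hosts.test", "ha-931571"); err != nil {
			fmt.Println("error:", err)
		}
	}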
	I1104 11:01:36.610606   43487 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1104 11:01:36.610641   43487 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19906-19898/.minikube CaCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19906-19898/.minikube}
	I1104 11:01:36.610661   43487 buildroot.go:174] setting up certificates
	I1104 11:01:36.610669   43487 provision.go:84] configureAuth start
	I1104 11:01:36.610679   43487 main.go:141] libmachine: (ha-931571) Calling .GetMachineName
	I1104 11:01:36.610955   43487 main.go:141] libmachine: (ha-931571) Calling .GetIP
	I1104 11:01:36.613714   43487 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:01:36.614200   43487 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 11:01:36.614230   43487 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:01:36.614349   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHHostname
	I1104 11:01:36.616882   43487 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:01:36.617334   43487 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 11:01:36.617361   43487 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:01:36.617589   43487 provision.go:143] copyHostCerts
	I1104 11:01:36.617626   43487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem
	I1104 11:01:36.617677   43487 exec_runner.go:144] found /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem, removing ...
	I1104 11:01:36.617689   43487 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem
	I1104 11:01:36.617752   43487 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem (1078 bytes)
	I1104 11:01:36.617831   43487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem
	I1104 11:01:36.617850   43487 exec_runner.go:144] found /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem, removing ...
	I1104 11:01:36.617854   43487 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem
	I1104 11:01:36.617877   43487 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem (1123 bytes)
	I1104 11:01:36.617923   43487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem
	I1104 11:01:36.617936   43487 exec_runner.go:144] found /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem, removing ...
	I1104 11:01:36.617943   43487 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem
	I1104 11:01:36.617965   43487 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem (1679 bytes)
	I1104 11:01:36.618012   43487 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem org=jenkins.ha-931571 san=[127.0.0.1 192.168.39.67 ha-931571 localhost minikube]
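That provision step regenerates the machine's server certificate with the SANs [127.0.0.1 192.168.39.67 ha-931571 localhost minikube], signed by the profile's CA key. For orientation, a self-signed sketch carrying the same SAN list with Go's standard library; minikube itself signs with the CA shown in the log rather than self-signing:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-931571"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the profile
			KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SAN list taken from the provision log line above.
			DNSNames:    []string{"ha-931571", "localhost", "minikube"},
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.67")},
		}
		// Self-signed for brevity: template doubles as parent.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}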
	I1104 11:01:36.828436   43487 provision.go:177] copyRemoteCerts
	I1104 11:01:36.828491   43487 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1104 11:01:36.828512   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHHostname
	I1104 11:01:36.830991   43487 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:01:36.831347   43487 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 11:01:36.831368   43487 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:01:36.831530   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHPort
	I1104 11:01:36.831721   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 11:01:36.831867   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHUsername
	I1104 11:01:36.831960   43487 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571/id_rsa Username:docker}
	I1104 11:01:36.915764   43487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1104 11:01:36.915847   43487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1104 11:01:36.939587   43487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1104 11:01:36.939667   43487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1104 11:01:36.963061   43487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1104 11:01:36.963124   43487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1104 11:01:36.986150   43487 provision.go:87] duration metric: took 375.467362ms to configureAuth
	I1104 11:01:36.986177   43487 buildroot.go:189] setting minikube options for container-runtime
	I1104 11:01:36.986415   43487 config.go:182] Loaded profile config "ha-931571": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1104 11:01:36.986508   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHHostname
	I1104 11:01:36.988810   43487 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:01:36.989158   43487 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 11:01:36.989186   43487 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:01:36.989401   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHPort
	I1104 11:01:36.989591   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 11:01:36.989752   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 11:01:36.989860   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHUsername
	I1104 11:01:36.989990   43487 main.go:141] libmachine: Using SSH client type: native
	I1104 11:01:36.990180   43487 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I1104 11:01:36.990196   43487 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1104 11:03:07.637315   43487 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1104 11:03:07.637355   43487 machine.go:96] duration metric: took 1m31.384824491s to provisionDockerMachine
	I1104 11:03:07.637369   43487 start.go:293] postStartSetup for "ha-931571" (driver="kvm2")
	I1104 11:03:07.637384   43487 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1104 11:03:07.637404   43487 main.go:141] libmachine: (ha-931571) Calling .DriverName
	I1104 11:03:07.637761   43487 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1104 11:03:07.637793   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHHostname
	I1104 11:03:07.640901   43487 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:03:07.641365   43487 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 11:03:07.641386   43487 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:03:07.641580   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHPort
	I1104 11:03:07.641782   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 11:03:07.641937   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHUsername
	I1104 11:03:07.642057   43487 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571/id_rsa Username:docker}
	I1104 11:03:07.723354   43487 ssh_runner.go:195] Run: cat /etc/os-release
	I1104 11:03:07.727749   43487 info.go:137] Remote host: Buildroot 2023.02.9
	I1104 11:03:07.727790   43487 filesync.go:126] Scanning /home/jenkins/minikube-integration/19906-19898/.minikube/addons for local assets ...
	I1104 11:03:07.727866   43487 filesync.go:126] Scanning /home/jenkins/minikube-integration/19906-19898/.minikube/files for local assets ...
	I1104 11:03:07.727978   43487 filesync.go:149] local asset: /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem -> 272182.pem in /etc/ssl/certs
	I1104 11:03:07.727992   43487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem -> /etc/ssl/certs/272182.pem
	I1104 11:03:07.728104   43487 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1104 11:03:07.737590   43487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem --> /etc/ssl/certs/272182.pem (1708 bytes)
	I1104 11:03:07.760844   43487 start.go:296] duration metric: took 123.46114ms for postStartSetup
	I1104 11:03:07.760883   43487 main.go:141] libmachine: (ha-931571) Calling .DriverName
	I1104 11:03:07.761154   43487 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1104 11:03:07.761179   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHHostname
	I1104 11:03:07.763801   43487 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:03:07.764219   43487 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 11:03:07.764250   43487 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:03:07.764422   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHPort
	I1104 11:03:07.764610   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 11:03:07.764765   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHUsername
	I1104 11:03:07.764923   43487 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571/id_rsa Username:docker}
	W1104 11:03:07.847152   43487 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I1104 11:03:07.847183   43487 fix.go:56] duration metric: took 1m31.616787199s for fixHost
	I1104 11:03:07.847210   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHHostname
	I1104 11:03:07.849780   43487 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:03:07.850080   43487 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 11:03:07.850103   43487 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:03:07.850285   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHPort
	I1104 11:03:07.850444   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 11:03:07.850572   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 11:03:07.850663   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHUsername
	I1104 11:03:07.850778   43487 main.go:141] libmachine: Using SSH client type: native
	I1104 11:03:07.850921   43487 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I1104 11:03:07.850932   43487 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1104 11:03:07.957716   43487 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730718187.926175534
	
	I1104 11:03:07.957740   43487 fix.go:216] guest clock: 1730718187.926175534
	I1104 11:03:07.957749   43487 fix.go:229] Guest: 2024-11-04 11:03:07.926175534 +0000 UTC Remote: 2024-11-04 11:03:07.847191367 +0000 UTC m=+91.749611169 (delta=78.984167ms)
	I1104 11:03:07.957775   43487 fix.go:200] guest clock delta is within tolerance: 78.984167ms
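fixHost reads the guest clock with "date +%s.%N", compares it to the host wall clock, and accepts the 78.984167ms delta as within tolerance. A small sketch of that comparison; the 2-second threshold is only an assumed value for illustration, since the log does not state the actual tolerance:

	package main

	import (
		"fmt"
		"strconv"
		"time"
	)

	// clockDelta parses the "seconds.nanoseconds" string produced by
	// `date +%s.%N` on the guest and returns how far it lags or leads the host clock.
	func clockDelta(guestOutput string, host time.Time) (time.Duration, error) {
		secs, err := strconv.ParseFloat(guestOutput, 64)
		if err != nil {
			return 0, err
		}
		guest := time.Unix(0, int64(secs*float64(time.Second)))
		return host.Sub(guest), nil
	}

	func main() {
		// Guest value taken from the log line above; time.Now() stands in for the host clock.
		delta, err := clockDelta("1730718187.926175534", time.Now())
		if err != nil {
			panic(err)
		}
		const tolerance = 2 * time.Second // assumed threshold for this sketch
		if delta < tolerance && delta > -tolerance {
			fmt.Printf("guest clock delta %v is within tolerance\n", delta)
		} else {
			fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
		}
	}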
	I1104 11:03:07.957780   43487 start.go:83] releasing machines lock for "ha-931571", held for 1m31.727399754s
	I1104 11:03:07.957797   43487 main.go:141] libmachine: (ha-931571) Calling .DriverName
	I1104 11:03:07.958011   43487 main.go:141] libmachine: (ha-931571) Calling .GetIP
	I1104 11:03:07.960277   43487 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:03:07.960596   43487 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 11:03:07.960623   43487 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:03:07.960746   43487 main.go:141] libmachine: (ha-931571) Calling .DriverName
	I1104 11:03:07.961392   43487 main.go:141] libmachine: (ha-931571) Calling .DriverName
	I1104 11:03:07.961589   43487 main.go:141] libmachine: (ha-931571) Calling .DriverName
	I1104 11:03:07.961682   43487 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1104 11:03:07.961744   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHHostname
	I1104 11:03:07.961789   43487 ssh_runner.go:195] Run: cat /version.json
	I1104 11:03:07.961812   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHHostname
	I1104 11:03:07.964564   43487 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:03:07.964779   43487 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:03:07.964935   43487 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 11:03:07.964958   43487 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:03:07.965102   43487 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 11:03:07.965115   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHPort
	I1104 11:03:07.965127   43487 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:03:07.965307   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHPort
	I1104 11:03:07.965321   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 11:03:07.965465   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHUsername
	I1104 11:03:07.965475   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 11:03:07.965612   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHUsername
	I1104 11:03:07.965607   43487 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571/id_rsa Username:docker}
	I1104 11:03:07.965735   43487 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571/id_rsa Username:docker}
	I1104 11:03:08.062470   43487 ssh_runner.go:195] Run: systemctl --version
	I1104 11:03:08.068016   43487 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1104 11:03:08.217034   43487 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1104 11:03:08.225627   43487 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1104 11:03:08.225681   43487 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1104 11:03:08.234588   43487 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1104 11:03:08.234609   43487 start.go:495] detecting cgroup driver to use...
	I1104 11:03:08.234668   43487 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1104 11:03:08.250011   43487 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1104 11:03:08.263678   43487 docker.go:217] disabling cri-docker service (if available) ...
	I1104 11:03:08.263727   43487 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1104 11:03:08.276778   43487 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1104 11:03:08.289631   43487 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1104 11:03:08.436219   43487 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1104 11:03:08.580306   43487 docker.go:233] disabling docker service ...
	I1104 11:03:08.580381   43487 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1104 11:03:08.598849   43487 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1104 11:03:08.611846   43487 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1104 11:03:08.752818   43487 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1104 11:03:08.900497   43487 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1104 11:03:08.913868   43487 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1104 11:03:08.931418   43487 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1104 11:03:08.931481   43487 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 11:03:08.942464   43487 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1104 11:03:08.942519   43487 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 11:03:08.952702   43487 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 11:03:08.963648   43487 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 11:03:08.973838   43487 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1104 11:03:08.984434   43487 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 11:03:08.995143   43487 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 11:03:09.005343   43487 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 11:03:09.015650   43487 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1104 11:03:09.024728   43487 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1104 11:03:09.034012   43487 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1104 11:03:09.180518   43487 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1104 11:03:19.158217   43487 ssh_runner.go:235] Completed: sudo systemctl restart crio: (9.977660206s)
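The sed edits above rewrite CRI-O's drop-in configuration in place before the restart. A minimal sketch for checking the resulting values on the node, assuming the same path the log uses (illustrative only):

    # Expected after the edits above (values taken from the log messages):
    #   pause_image    = "registry.k8s.io/pause:3.10"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup  = "pod"
    #   default_sysctls includes "net.ipv4.ip_unprivileged_port_start=0"
    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
        /etc/crio/crio.conf.d/02-crio.conf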
	I1104 11:03:19.158256   43487 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1104 11:03:19.158312   43487 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1104 11:03:19.163030   43487 start.go:563] Will wait 60s for crictl version
	I1104 11:03:19.163087   43487 ssh_runner.go:195] Run: which crictl
	I1104 11:03:19.166614   43487 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1104 11:03:19.198130   43487 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1104 11:03:19.198200   43487 ssh_runner.go:195] Run: crio --version
	I1104 11:03:19.225725   43487 ssh_runner.go:195] Run: crio --version
	I1104 11:03:19.256273   43487 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1104 11:03:19.257947   43487 main.go:141] libmachine: (ha-931571) Calling .GetIP
	I1104 11:03:19.260526   43487 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:03:19.260966   43487 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 11:03:19.260989   43487 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:03:19.261303   43487 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1104 11:03:19.265771   43487 kubeadm.go:883] updating cluster {Name:ha-931571 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 Cl
usterName:ha-931571 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.67 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.245 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.57 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.237 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-stora
geclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1104 11:03:19.265898   43487 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1104 11:03:19.265937   43487 ssh_runner.go:195] Run: sudo crictl images --output json
	I1104 11:03:19.311790   43487 crio.go:514] all images are preloaded for cri-o runtime.
	I1104 11:03:19.311812   43487 crio.go:433] Images already preloaded, skipping extraction
	I1104 11:03:19.311863   43487 ssh_runner.go:195] Run: sudo crictl images --output json
	I1104 11:03:19.345725   43487 crio.go:514] all images are preloaded for cri-o runtime.
	I1104 11:03:19.345751   43487 cache_images.go:84] Images are preloaded, skipping loading
	I1104 11:03:19.345760   43487 kubeadm.go:934] updating node { 192.168.39.67 8443 v1.31.2 crio true true} ...
	I1104 11:03:19.345861   43487 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-931571 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.67
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-931571 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
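The kubelet unit override printed above is what later gets written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (see the scp step further down). A quick, hedged way to confirm systemd actually loaded the drop-in on the node:

    # Prints kubelet.service plus every drop-in override systemd has picked up.
    systemctl cat kubelet | grep -A1 'ExecStart='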
	I1104 11:03:19.345923   43487 ssh_runner.go:195] Run: crio config
	I1104 11:03:19.399886   43487 cni.go:84] Creating CNI manager for ""
	I1104 11:03:19.399909   43487 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1104 11:03:19.399922   43487 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1104 11:03:19.399956   43487 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.67 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-931571 NodeName:ha-931571 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.67"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.67 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1104 11:03:19.400106   43487 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.67
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-931571"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.67"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.67"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1104 11:03:19.400126   43487 kube-vip.go:115] generating kube-vip config ...
	I1104 11:03:19.400180   43487 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1104 11:03:19.411359   43487 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1104 11:03:19.411489   43487 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.5
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
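kube-vip's control-plane load balancing (lb_enable above) relies on the IPVS modules loaded by the modprobe step a few lines earlier. A small sketch to confirm they are present, with the module names taken from that command:

    lsmod | grep -E '^(ip_vs|ip_vs_rr|ip_vs_wrr|ip_vs_sh|nf_conntrack) '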
	I1104 11:03:19.411549   43487 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1104 11:03:19.420430   43487 binaries.go:44] Found k8s binaries, skipping transfer
	I1104 11:03:19.420500   43487 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1104 11:03:19.429659   43487 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I1104 11:03:19.445912   43487 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1104 11:03:19.461851   43487 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2286 bytes)
	I1104 11:03:19.478119   43487 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1104 11:03:19.494678   43487 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1104 11:03:19.499089   43487 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1104 11:03:19.639880   43487 ssh_runner.go:195] Run: sudo systemctl start kubelet
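Once the kubelet is started, its cgroupDriver (cgroupfs in the KubeletConfiguration above) must match the cgroup_manager configured for CRI-O earlier; a mismatch is a common reason pods fail to start after a runtime restart. A hedged consistency check using the paths shown in the log:

    sudo grep cgroup_manager /etc/crio/crio.conf.d/02-crio.conf
    sudo grep cgroupDriver /var/lib/kubelet/config.yaml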
	I1104 11:03:19.653539   43487 certs.go:68] Setting up /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571 for IP: 192.168.39.67
	I1104 11:03:19.653562   43487 certs.go:194] generating shared ca certs ...
	I1104 11:03:19.653579   43487 certs.go:226] acquiring lock for ca certs: {Name:mk4fb0469da6697654d0290fd25edddd463f47c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 11:03:19.653721   43487 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.key
	I1104 11:03:19.653775   43487 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.key
	I1104 11:03:19.653788   43487 certs.go:256] generating profile certs ...
	I1104 11:03:19.653877   43487 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/client.key
	I1104 11:03:19.653912   43487 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.key.dd846fa0
	I1104 11:03:19.653933   43487 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.crt.dd846fa0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.67 192.168.39.245 192.168.39.57 192.168.39.254]
	I1104 11:03:19.885027   43487 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.crt.dd846fa0 ...
	I1104 11:03:19.885059   43487 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.crt.dd846fa0: {Name:mk69f57313434af2e91ed33999be6969db1655d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 11:03:19.885262   43487 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.key.dd846fa0 ...
	I1104 11:03:19.885278   43487 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.key.dd846fa0: {Name:mk036af60f5877bd7b54bd0649ec2229ae064452 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 11:03:19.885373   43487 certs.go:381] copying /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.crt.dd846fa0 -> /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.crt
	I1104 11:03:19.885549   43487 certs.go:385] copying /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.key.dd846fa0 -> /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.key
	I1104 11:03:19.885706   43487 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/proxy-client.key
	I1104 11:03:19.885722   43487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1104 11:03:19.885740   43487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1104 11:03:19.885756   43487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1104 11:03:19.885778   43487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1104 11:03:19.885796   43487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1104 11:03:19.885822   43487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1104 11:03:19.885840   43487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1104 11:03:19.885858   43487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1104 11:03:19.885925   43487 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218.pem (1338 bytes)
	W1104 11:03:19.885964   43487 certs.go:480] ignoring /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218_empty.pem, impossibly tiny 0 bytes
	I1104 11:03:19.885979   43487 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem (1675 bytes)
	I1104 11:03:19.886014   43487 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem (1078 bytes)
	I1104 11:03:19.886046   43487 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem (1123 bytes)
	I1104 11:03:19.886078   43487 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem (1679 bytes)
	I1104 11:03:19.886131   43487 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem (1708 bytes)
	I1104 11:03:19.886172   43487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem -> /usr/share/ca-certificates/272182.pem
	I1104 11:03:19.886192   43487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1104 11:03:19.886211   43487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218.pem -> /usr/share/ca-certificates/27218.pem
	I1104 11:03:19.886765   43487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1104 11:03:19.911799   43487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1104 11:03:19.934788   43487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1104 11:03:19.958214   43487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1104 11:03:19.982177   43487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1104 11:03:20.006762   43487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1104 11:03:20.032203   43487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1104 11:03:20.056775   43487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1104 11:03:20.081730   43487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem --> /usr/share/ca-certificates/272182.pem (1708 bytes)
	I1104 11:03:20.107796   43487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1104 11:03:20.132535   43487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218.pem --> /usr/share/ca-certificates/27218.pem (1338 bytes)
	I1104 11:03:20.157941   43487 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1104 11:03:20.175127   43487 ssh_runner.go:195] Run: openssl version
	I1104 11:03:20.180513   43487 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/272182.pem && ln -fs /usr/share/ca-certificates/272182.pem /etc/ssl/certs/272182.pem"
	I1104 11:03:20.192022   43487 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/272182.pem
	I1104 11:03:20.196624   43487 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  4 10:49 /usr/share/ca-certificates/272182.pem
	I1104 11:03:20.196676   43487 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/272182.pem
	I1104 11:03:20.202253   43487 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/272182.pem /etc/ssl/certs/3ec20f2e.0"
	I1104 11:03:20.212464   43487 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1104 11:03:20.224808   43487 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1104 11:03:20.229606   43487 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  4 10:38 /usr/share/ca-certificates/minikubeCA.pem
	I1104 11:03:20.229653   43487 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1104 11:03:20.235319   43487 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1104 11:03:20.246230   43487 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/27218.pem && ln -fs /usr/share/ca-certificates/27218.pem /etc/ssl/certs/27218.pem"
	I1104 11:03:20.258556   43487 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/27218.pem
	I1104 11:03:20.263132   43487 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  4 10:49 /usr/share/ca-certificates/27218.pem
	I1104 11:03:20.263190   43487 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/27218.pem
	I1104 11:03:20.268984   43487 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/27218.pem /etc/ssl/certs/51391683.0"
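The /etc/ssl/certs/<hash>.0 symlinks created above are named after OpenSSL's subject hash of each CA file, which is exactly what the openssl x509 -hash calls compute. A minimal sketch of the same derivation, with the file and link names taken from the log:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    echo "$h"   # b5213941 in this run
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"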
	I1104 11:03:20.279291   43487 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1104 11:03:20.283601   43487 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1104 11:03:20.288948   43487 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1104 11:03:20.294474   43487 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1104 11:03:20.299807   43487 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1104 11:03:20.305182   43487 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1104 11:03:20.310975   43487 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
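The -checkend 86400 flag used in the runs above asks whether a certificate expires within the next 86400 seconds (24 hours); openssl exits non-zero if it does. An illustrative wrapper around one of the certs from the log:

    if sudo openssl x509 -noout -checkend 86400 \
        -in /var/lib/minikube/certs/etcd/server.crt; then
        echo "etcd server cert valid for at least 24h"
    else
        echo "etcd server cert expires within 24h"
    fi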
	I1104 11:03:20.316451   43487 kubeadm.go:392] StartCluster: {Name:ha-931571 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 Clust
erName:ha-931571 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.67 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.245 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.57 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.237 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storagec
lass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000
.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1104 11:03:20.316562   43487 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1104 11:03:20.316594   43487 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1104 11:03:20.354362   43487 cri.go:89] found id: "4c3aa3719ea407f31bd76c40125ab3b7bdd92ee408b1f5e698e57298fb7c8bf5"
	I1104 11:03:20.354384   43487 cri.go:89] found id: "b93e0586789e3f2dc0a6a83e13dc87e97cd99bac979bcedff72518a08f43e152"
	I1104 11:03:20.354387   43487 cri.go:89] found id: "801830521b8c68ec780508f94b0f3d8c52a6c3d5458328e719c2ce3178c47cc3"
	I1104 11:03:20.354390   43487 cri.go:89] found id: "400aa38b5335627cc08143a9b2a5627b7fee85d555c2cc40a536ce98a76dc457"
	I1104 11:03:20.354393   43487 cri.go:89] found id: "49e75724c5eada14bded615bd1ba93f602ae8bcdf4c5ed95b646991a79dc403c"
	I1104 11:03:20.354395   43487 cri.go:89] found id: "f8efbd7a72ea51074ffa14c6c164b0072c5d57e24d1bd5b6d1a123aa8216069c"
	I1104 11:03:20.354402   43487 cri.go:89] found id: "4401315f385bf2a6e5d875c655d4d61ff68d76ccf428956355ffd500a9ce2bc0"
	I1104 11:03:20.354404   43487 cri.go:89] found id: "6e592fe17c5f71c11cf25a871efff88360e0232b8ea66e460109b68f40581ba8"
	I1104 11:03:20.354408   43487 cri.go:89] found id: "e50ab0290e7c2f22e6df511793236d28a6e0f3e1d5dbdcdd2e3447e4c26c2e6c"
	I1104 11:03:20.354425   43487 cri.go:89] found id: "4572c8bcb28cdf71917ee1df07e150610c3e183aaa1243eb84ab3c083f31f7bc"
	I1104 11:03:20.354434   43487 cri.go:89] found id: "82e4be064be10644428d59bf1bc4467a8666cf78ec7b830a51e614de7c4b3150"
	I1104 11:03:20.354436   43487 cri.go:89] found id: "f2d32daf142ba5b73e2f0491ea4ad911e8456739f191faf1cc001827eb91790c"
	I1104 11:03:20.354439   43487 cri.go:89] found id: ""
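The IDs listed above come from crictl's --quiet output, which prints container IDs only. Dropping --quiet while keeping the same label filter shows names, states and pod associations, which is usually easier to read by hand:

    sudo crictl ps -a --label io.kubernetes.pod.namespace=kube-system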
	I1104 11:03:20.354477   43487 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-931571 -n ha-931571
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-931571 -n ha-931571: exit status 2 (14.273044696s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "ha-931571" apiserver is not running, skipping kubectl commands (state="Stopped")
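The status checks above pass a Go template to minikube's --format flag; any field of the status output can be selected the same way. A hedged one-liner combining the two fields the helpers query separately:

    out/minikube-linux-amd64 status -p ha-931571 --format='{{.Host}} {{.APIServer}}'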
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (173.23s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (57.72s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
E1104 11:09:47.409480   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/addons-746456/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:392: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (28.404751787s)
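The profile data returned above is plain JSON; the assertion that follows compares its Status field against "Degraded". With jq installed (an assumption, the test itself parses the JSON in Go), the same field can be pulled out directly:

    out/minikube-linux-amd64 profile list --output json \
        | jq -r '.valid[] | select(.Name == "ha-931571") | .Status'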
ha_test.go:415: expected profile "ha-931571" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-931571\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-931571\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"kvm2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":
1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.2\",\"ClusterName\":\"ha-931571\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.39.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.39.67\",\"Port\":8443,\"Kuber
netesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.39.245\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.39.57\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.39.237\",\"Port\":0,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"amd-gpu-device-plugin\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevir
t\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/home/jenkins:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\"
,\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-931571 -n ha-931571
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-931571 -n ha-931571: exit status 2 (13.425715749s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-931571 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-931571 logs -n 25: (1.957730998s)
helpers_test.go:252: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-931571 ssh -n                                                                 | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | ha-931571-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-931571 ssh -n ha-931571-m02 sudo cat                                          | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | /home/docker/cp-test_ha-931571-m03_ha-931571-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-931571 cp ha-931571-m03:/home/docker/cp-test.txt                              | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | ha-931571-m04:/home/docker/cp-test_ha-931571-m03_ha-931571-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-931571 ssh -n                                                                 | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | ha-931571-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-931571 ssh -n ha-931571-m04 sudo cat                                          | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | /home/docker/cp-test_ha-931571-m03_ha-931571-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-931571 cp testdata/cp-test.txt                                                | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | ha-931571-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-931571 ssh -n                                                                 | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | ha-931571-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-931571 cp ha-931571-m04:/home/docker/cp-test.txt                              | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile2369318263/001/cp-test_ha-931571-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-931571 ssh -n                                                                 | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | ha-931571-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-931571 cp ha-931571-m04:/home/docker/cp-test.txt                              | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | ha-931571:/home/docker/cp-test_ha-931571-m04_ha-931571.txt                       |           |         |         |                     |                     |
	| ssh     | ha-931571 ssh -n                                                                 | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | ha-931571-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-931571 ssh -n ha-931571 sudo cat                                              | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | /home/docker/cp-test_ha-931571-m04_ha-931571.txt                                 |           |         |         |                     |                     |
	| cp      | ha-931571 cp ha-931571-m04:/home/docker/cp-test.txt                              | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | ha-931571-m02:/home/docker/cp-test_ha-931571-m04_ha-931571-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-931571 ssh -n                                                                 | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | ha-931571-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-931571 ssh -n ha-931571-m02 sudo cat                                          | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | /home/docker/cp-test_ha-931571-m04_ha-931571-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-931571 cp ha-931571-m04:/home/docker/cp-test.txt                              | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | ha-931571-m03:/home/docker/cp-test_ha-931571-m04_ha-931571-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-931571 ssh -n                                                                 | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | ha-931571-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-931571 ssh -n ha-931571-m03 sudo cat                                          | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | /home/docker/cp-test_ha-931571-m04_ha-931571-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-931571 node stop m02 -v=7                                                     | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-931571 node start m02 -v=7                                                    | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:59 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-931571 -v=7                                                           | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:59 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-931571 -v=7                                                                | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:59 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-931571 --wait=true -v=7                                                    | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 11:01 UTC | 04 Nov 24 11:06 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-931571                                                                | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 11:06 UTC |                     |
	| node    | ha-931571 node delete m03 -v=7                                                   | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 11:06 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/11/04 11:01:36
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1104 11:01:36.135689   43487 out.go:345] Setting OutFile to fd 1 ...
	I1104 11:01:36.135831   43487 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1104 11:01:36.135841   43487 out.go:358] Setting ErrFile to fd 2...
	I1104 11:01:36.135848   43487 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1104 11:01:36.136026   43487 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19906-19898/.minikube/bin
	I1104 11:01:36.136622   43487 out.go:352] Setting JSON to false
	I1104 11:01:36.137570   43487 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":6247,"bootTime":1730711849,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1104 11:01:36.137665   43487 start.go:139] virtualization: kvm guest
	I1104 11:01:36.140736   43487 out.go:177] * [ha-931571] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1104 11:01:36.142255   43487 notify.go:220] Checking for updates...
	I1104 11:01:36.142280   43487 out.go:177]   - MINIKUBE_LOCATION=19906
	I1104 11:01:36.143792   43487 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1104 11:01:36.145520   43487 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19906-19898/kubeconfig
	I1104 11:01:36.147024   43487 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19906-19898/.minikube
	I1104 11:01:36.148374   43487 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1104 11:01:36.150002   43487 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1104 11:01:36.151746   43487 config.go:182] Loaded profile config "ha-931571": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1104 11:01:36.151854   43487 driver.go:394] Setting default libvirt URI to qemu:///system
	I1104 11:01:36.152270   43487 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 11:01:36.152323   43487 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 11:01:36.167782   43487 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35975
	I1104 11:01:36.168314   43487 main.go:141] libmachine: () Calling .GetVersion
	I1104 11:01:36.168871   43487 main.go:141] libmachine: Using API Version  1
	I1104 11:01:36.168896   43487 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 11:01:36.169315   43487 main.go:141] libmachine: () Calling .GetMachineName
	I1104 11:01:36.169538   43487 main.go:141] libmachine: (ha-931571) Calling .DriverName
	I1104 11:01:36.206070   43487 out.go:177] * Using the kvm2 driver based on existing profile
	I1104 11:01:36.207361   43487 start.go:297] selected driver: kvm2
	I1104 11:01:36.207389   43487 start.go:901] validating driver "kvm2" against &{Name:ha-931571 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.31.2 ClusterName:ha-931571 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.67 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.245 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.57 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.237 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false de
fault-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9P
Version:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1104 11:01:36.207518   43487 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1104 11:01:36.207957   43487 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1104 11:01:36.208077   43487 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19906-19898/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1104 11:01:36.225111   43487 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1104 11:01:36.225913   43487 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1104 11:01:36.225946   43487 cni.go:84] Creating CNI manager for ""
	I1104 11:01:36.225978   43487 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1104 11:01:36.226027   43487 start.go:340] cluster config:
	{Name:ha-931571 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-931571 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.67 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.245 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.57 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.237 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:fa
lse headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[
] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1104 11:01:36.226141   43487 iso.go:125] acquiring lock: {Name:mk00c7dd6e02d348844f079f7574057e15cae010 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1104 11:01:36.228347   43487 out.go:177] * Starting "ha-931571" primary control-plane node in "ha-931571" cluster
	I1104 11:01:36.229829   43487 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1104 11:01:36.229870   43487 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1104 11:01:36.229878   43487 cache.go:56] Caching tarball of preloaded images
	I1104 11:01:36.229952   43487 preload.go:172] Found /home/jenkins/minikube-integration/19906-19898/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1104 11:01:36.229964   43487 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1104 11:01:36.230064   43487 profile.go:143] Saving config to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/config.json ...
	I1104 11:01:36.230320   43487 start.go:360] acquireMachinesLock for ha-931571: {Name:mka4ed965095cfb58c9b0f5916edff39b4236a2f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1104 11:01:36.230371   43487 start.go:364] duration metric: took 27.926µs to acquireMachinesLock for "ha-931571"
	I1104 11:01:36.230386   43487 start.go:96] Skipping create...Using existing machine configuration
	I1104 11:01:36.230395   43487 fix.go:54] fixHost starting: 
	I1104 11:01:36.230733   43487 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 11:01:36.230769   43487 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 11:01:36.245984   43487 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45625
	I1104 11:01:36.246433   43487 main.go:141] libmachine: () Calling .GetVersion
	I1104 11:01:36.246934   43487 main.go:141] libmachine: Using API Version  1
	I1104 11:01:36.246955   43487 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 11:01:36.247232   43487 main.go:141] libmachine: () Calling .GetMachineName
	I1104 11:01:36.247395   43487 main.go:141] libmachine: (ha-931571) Calling .DriverName
	I1104 11:01:36.247568   43487 main.go:141] libmachine: (ha-931571) Calling .GetState
	I1104 11:01:36.249147   43487 fix.go:112] recreateIfNeeded on ha-931571: state=Running err=<nil>
	W1104 11:01:36.249199   43487 fix.go:138] unexpected machine state, will restart: <nil>
	I1104 11:01:36.251132   43487 out.go:177] * Updating the running kvm2 "ha-931571" VM ...
	I1104 11:01:36.252516   43487 machine.go:93] provisionDockerMachine start ...
	I1104 11:01:36.252546   43487 main.go:141] libmachine: (ha-931571) Calling .DriverName
	I1104 11:01:36.252780   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHHostname
	I1104 11:01:36.255202   43487 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:01:36.255594   43487 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 11:01:36.255616   43487 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:01:36.255731   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHPort
	I1104 11:01:36.255890   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 11:01:36.256009   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 11:01:36.256140   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHUsername
	I1104 11:01:36.256308   43487 main.go:141] libmachine: Using SSH client type: native
	I1104 11:01:36.256489   43487 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I1104 11:01:36.256500   43487 main.go:141] libmachine: About to run SSH command:
	hostname
	I1104 11:01:36.361800   43487 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-931571
	
	I1104 11:01:36.361835   43487 main.go:141] libmachine: (ha-931571) Calling .GetMachineName
	I1104 11:01:36.362053   43487 buildroot.go:166] provisioning hostname "ha-931571"
	I1104 11:01:36.362076   43487 main.go:141] libmachine: (ha-931571) Calling .GetMachineName
	I1104 11:01:36.362273   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHHostname
	I1104 11:01:36.365086   43487 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:01:36.365550   43487 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 11:01:36.365581   43487 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:01:36.365735   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHPort
	I1104 11:01:36.365939   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 11:01:36.366072   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 11:01:36.366277   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHUsername
	I1104 11:01:36.366448   43487 main.go:141] libmachine: Using SSH client type: native
	I1104 11:01:36.366691   43487 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I1104 11:01:36.366706   43487 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-931571 && echo "ha-931571" | sudo tee /etc/hostname
	I1104 11:01:36.493768   43487 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-931571
	
	I1104 11:01:36.493790   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHHostname
	I1104 11:01:36.496511   43487 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:01:36.496961   43487 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 11:01:36.496984   43487 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:01:36.497265   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHPort
	I1104 11:01:36.497539   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 11:01:36.497705   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 11:01:36.497875   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHUsername
	I1104 11:01:36.498037   43487 main.go:141] libmachine: Using SSH client type: native
	I1104 11:01:36.498202   43487 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I1104 11:01:36.498219   43487 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-931571' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-931571/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-931571' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1104 11:01:36.610606   43487 main.go:141] libmachine: SSH cmd err, output: <nil>: 
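For reference, the provisioning script above keeps exactly one 127.0.1.1 alias for the node name so that "ha-931571" resolves locally without DNS. A hypothetical way to confirm the result on the guest (not part of this run):

    grep '^127.0.1.1' /etc/hosts
    # expected after provisioning: 127.0.1.1 ha-931571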
	I1104 11:01:36.610641   43487 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19906-19898/.minikube CaCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19906-19898/.minikube}
	I1104 11:01:36.610661   43487 buildroot.go:174] setting up certificates
	I1104 11:01:36.610669   43487 provision.go:84] configureAuth start
	I1104 11:01:36.610679   43487 main.go:141] libmachine: (ha-931571) Calling .GetMachineName
	I1104 11:01:36.610955   43487 main.go:141] libmachine: (ha-931571) Calling .GetIP
	I1104 11:01:36.613714   43487 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:01:36.614200   43487 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 11:01:36.614230   43487 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:01:36.614349   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHHostname
	I1104 11:01:36.616882   43487 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:01:36.617334   43487 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 11:01:36.617361   43487 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:01:36.617589   43487 provision.go:143] copyHostCerts
	I1104 11:01:36.617626   43487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem
	I1104 11:01:36.617677   43487 exec_runner.go:144] found /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem, removing ...
	I1104 11:01:36.617689   43487 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem
	I1104 11:01:36.617752   43487 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem (1078 bytes)
	I1104 11:01:36.617831   43487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem
	I1104 11:01:36.617850   43487 exec_runner.go:144] found /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem, removing ...
	I1104 11:01:36.617854   43487 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem
	I1104 11:01:36.617877   43487 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem (1123 bytes)
	I1104 11:01:36.617923   43487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem
	I1104 11:01:36.617936   43487 exec_runner.go:144] found /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem, removing ...
	I1104 11:01:36.617943   43487 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem
	I1104 11:01:36.617965   43487 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem (1679 bytes)
	I1104 11:01:36.618012   43487 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem org=jenkins.ha-931571 san=[127.0.0.1 192.168.39.67 ha-931571 localhost minikube]
	I1104 11:01:36.828436   43487 provision.go:177] copyRemoteCerts
	I1104 11:01:36.828491   43487 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1104 11:01:36.828512   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHHostname
	I1104 11:01:36.830991   43487 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:01:36.831347   43487 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 11:01:36.831368   43487 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:01:36.831530   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHPort
	I1104 11:01:36.831721   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 11:01:36.831867   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHUsername
	I1104 11:01:36.831960   43487 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571/id_rsa Username:docker}
	I1104 11:01:36.915764   43487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1104 11:01:36.915847   43487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1104 11:01:36.939587   43487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1104 11:01:36.939667   43487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1104 11:01:36.963061   43487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1104 11:01:36.963124   43487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1104 11:01:36.986150   43487 provision.go:87] duration metric: took 375.467362ms to configureAuth
	I1104 11:01:36.986177   43487 buildroot.go:189] setting minikube options for container-runtime
	I1104 11:01:36.986415   43487 config.go:182] Loaded profile config "ha-931571": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1104 11:01:36.986508   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHHostname
	I1104 11:01:36.988810   43487 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:01:36.989158   43487 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 11:01:36.989186   43487 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:01:36.989401   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHPort
	I1104 11:01:36.989591   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 11:01:36.989752   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 11:01:36.989860   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHUsername
	I1104 11:01:36.989990   43487 main.go:141] libmachine: Using SSH client type: native
	I1104 11:01:36.990180   43487 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I1104 11:01:36.990196   43487 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1104 11:03:07.637315   43487 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1104 11:03:07.637355   43487 machine.go:96] duration metric: took 1m31.384824491s to provisionDockerMachine
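Judging from the timestamps, almost all of the 1m31.4s recorded for provisionDockerMachine is spent inside the single SSH command issued at 11:01:36 and finished at 11:03:07, presumably in the `sudo systemctl restart crio` at the end of its pipeline. The configuration it actually writes is a single line:

    # contents left in /etc/sysconfig/crio.minikube by the command above
    CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '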
	I1104 11:03:07.637369   43487 start.go:293] postStartSetup for "ha-931571" (driver="kvm2")
	I1104 11:03:07.637384   43487 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1104 11:03:07.637404   43487 main.go:141] libmachine: (ha-931571) Calling .DriverName
	I1104 11:03:07.637761   43487 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1104 11:03:07.637793   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHHostname
	I1104 11:03:07.640901   43487 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:03:07.641365   43487 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 11:03:07.641386   43487 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:03:07.641580   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHPort
	I1104 11:03:07.641782   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 11:03:07.641937   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHUsername
	I1104 11:03:07.642057   43487 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571/id_rsa Username:docker}
	I1104 11:03:07.723354   43487 ssh_runner.go:195] Run: cat /etc/os-release
	I1104 11:03:07.727749   43487 info.go:137] Remote host: Buildroot 2023.02.9
	I1104 11:03:07.727790   43487 filesync.go:126] Scanning /home/jenkins/minikube-integration/19906-19898/.minikube/addons for local assets ...
	I1104 11:03:07.727866   43487 filesync.go:126] Scanning /home/jenkins/minikube-integration/19906-19898/.minikube/files for local assets ...
	I1104 11:03:07.727978   43487 filesync.go:149] local asset: /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem -> 272182.pem in /etc/ssl/certs
	I1104 11:03:07.727992   43487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem -> /etc/ssl/certs/272182.pem
	I1104 11:03:07.728104   43487 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1104 11:03:07.737590   43487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem --> /etc/ssl/certs/272182.pem (1708 bytes)
	I1104 11:03:07.760844   43487 start.go:296] duration metric: took 123.46114ms for postStartSetup
	I1104 11:03:07.760883   43487 main.go:141] libmachine: (ha-931571) Calling .DriverName
	I1104 11:03:07.761154   43487 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1104 11:03:07.761179   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHHostname
	I1104 11:03:07.763801   43487 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:03:07.764219   43487 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 11:03:07.764250   43487 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:03:07.764422   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHPort
	I1104 11:03:07.764610   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 11:03:07.764765   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHUsername
	I1104 11:03:07.764923   43487 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571/id_rsa Username:docker}
	W1104 11:03:07.847152   43487 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I1104 11:03:07.847183   43487 fix.go:56] duration metric: took 1m31.616787199s for fixHost
	I1104 11:03:07.847210   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHHostname
	I1104 11:03:07.849780   43487 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:03:07.850080   43487 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 11:03:07.850103   43487 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:03:07.850285   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHPort
	I1104 11:03:07.850444   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 11:03:07.850572   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 11:03:07.850663   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHUsername
	I1104 11:03:07.850778   43487 main.go:141] libmachine: Using SSH client type: native
	I1104 11:03:07.850921   43487 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I1104 11:03:07.850932   43487 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1104 11:03:07.957716   43487 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730718187.926175534
	
	I1104 11:03:07.957740   43487 fix.go:216] guest clock: 1730718187.926175534
	I1104 11:03:07.957749   43487 fix.go:229] Guest: 2024-11-04 11:03:07.926175534 +0000 UTC Remote: 2024-11-04 11:03:07.847191367 +0000 UTC m=+91.749611169 (delta=78.984167ms)
	I1104 11:03:07.957775   43487 fix.go:200] guest clock delta is within tolerance: 78.984167ms
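The tolerance check above is plain arithmetic on the two timestamps it logs:

    # guest  : 2024-11-04 11:03:07.926175534 UTC
    # remote : 2024-11-04 11:03:07.847191367 UTC
    # delta  : 0.926175534 - 0.847191367 = 0.078984167 s = 78.984167 ms
    # the drift is small, so no clock adjustment is pushed to the guest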
	I1104 11:03:07.957780   43487 start.go:83] releasing machines lock for "ha-931571", held for 1m31.727399754s
	I1104 11:03:07.957797   43487 main.go:141] libmachine: (ha-931571) Calling .DriverName
	I1104 11:03:07.958011   43487 main.go:141] libmachine: (ha-931571) Calling .GetIP
	I1104 11:03:07.960277   43487 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:03:07.960596   43487 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 11:03:07.960623   43487 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:03:07.960746   43487 main.go:141] libmachine: (ha-931571) Calling .DriverName
	I1104 11:03:07.961392   43487 main.go:141] libmachine: (ha-931571) Calling .DriverName
	I1104 11:03:07.961589   43487 main.go:141] libmachine: (ha-931571) Calling .DriverName
	I1104 11:03:07.961682   43487 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1104 11:03:07.961744   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHHostname
	I1104 11:03:07.961789   43487 ssh_runner.go:195] Run: cat /version.json
	I1104 11:03:07.961812   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHHostname
	I1104 11:03:07.964564   43487 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:03:07.964779   43487 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:03:07.964935   43487 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 11:03:07.964958   43487 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:03:07.965102   43487 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 11:03:07.965115   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHPort
	I1104 11:03:07.965127   43487 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:03:07.965307   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHPort
	I1104 11:03:07.965321   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 11:03:07.965465   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHUsername
	I1104 11:03:07.965475   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 11:03:07.965612   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHUsername
	I1104 11:03:07.965607   43487 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571/id_rsa Username:docker}
	I1104 11:03:07.965735   43487 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571/id_rsa Username:docker}
	I1104 11:03:08.062470   43487 ssh_runner.go:195] Run: systemctl --version
	I1104 11:03:08.068016   43487 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1104 11:03:08.217034   43487 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1104 11:03:08.225627   43487 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1104 11:03:08.225681   43487 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1104 11:03:08.234588   43487 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1104 11:03:08.234609   43487 start.go:495] detecting cgroup driver to use...
	I1104 11:03:08.234668   43487 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1104 11:03:08.250011   43487 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1104 11:03:08.263678   43487 docker.go:217] disabling cri-docker service (if available) ...
	I1104 11:03:08.263727   43487 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1104 11:03:08.276778   43487 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1104 11:03:08.289631   43487 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1104 11:03:08.436219   43487 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1104 11:03:08.580306   43487 docker.go:233] disabling docker service ...
	I1104 11:03:08.580381   43487 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1104 11:03:08.598849   43487 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1104 11:03:08.611846   43487 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1104 11:03:08.752818   43487 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1104 11:03:08.900497   43487 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1104 11:03:08.913868   43487 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1104 11:03:08.931418   43487 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1104 11:03:08.931481   43487 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 11:03:08.942464   43487 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1104 11:03:08.942519   43487 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 11:03:08.952702   43487 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 11:03:08.963648   43487 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 11:03:08.973838   43487 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1104 11:03:08.984434   43487 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 11:03:08.995143   43487 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 11:03:09.005343   43487 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
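Taken together, the sed commands above edit CRI-O's drop-in config in place rather than replacing it. A rough, illustrative sketch of the fields they leave in /etc/crio/crio.conf.d/02-crio.conf (the TOML section headers are assumed here, and the real file carries other settings as well):

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]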
	I1104 11:03:09.015650   43487 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1104 11:03:09.024728   43487 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1104 11:03:09.034012   43487 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1104 11:03:09.180518   43487 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1104 11:03:19.158217   43487 ssh_runner.go:235] Completed: sudo systemctl restart crio: (9.977660206s)
	I1104 11:03:19.158256   43487 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1104 11:03:19.158312   43487 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1104 11:03:19.163030   43487 start.go:563] Will wait 60s for crictl version
	I1104 11:03:19.163087   43487 ssh_runner.go:195] Run: which crictl
	I1104 11:03:19.166614   43487 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1104 11:03:19.198130   43487 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
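The version block above is crictl reading the endpoint configured in /etc/crictl.yaml a few lines earlier. A hypothetical equivalent that passes the socket explicitly:

    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version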
	I1104 11:03:19.198200   43487 ssh_runner.go:195] Run: crio --version
	I1104 11:03:19.225725   43487 ssh_runner.go:195] Run: crio --version
	I1104 11:03:19.256273   43487 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1104 11:03:19.257947   43487 main.go:141] libmachine: (ha-931571) Calling .GetIP
	I1104 11:03:19.260526   43487 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:03:19.260966   43487 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 11:03:19.260989   43487 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:03:19.261303   43487 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1104 11:03:19.265771   43487 kubeadm.go:883] updating cluster {Name:ha-931571 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 Cl
usterName:ha-931571 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.67 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.245 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.57 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.237 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-stora
geclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1104 11:03:19.265898   43487 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1104 11:03:19.265937   43487 ssh_runner.go:195] Run: sudo crictl images --output json
	I1104 11:03:19.311790   43487 crio.go:514] all images are preloaded for cri-o runtime.
	I1104 11:03:19.311812   43487 crio.go:433] Images already preloaded, skipping extraction
	I1104 11:03:19.311863   43487 ssh_runner.go:195] Run: sudo crictl images --output json
	I1104 11:03:19.345725   43487 crio.go:514] all images are preloaded for cri-o runtime.
	I1104 11:03:19.345751   43487 cache_images.go:84] Images are preloaded, skipping loading
	I1104 11:03:19.345760   43487 kubeadm.go:934] updating node { 192.168.39.67 8443 v1.31.2 crio true true} ...
	I1104 11:03:19.345861   43487 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-931571 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.67
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-931571 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
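This drop-in is what is later copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the 308-byte scp below). A hypothetical check that the flags took effect on the guest:

    systemctl cat kubelet | grep -- --node-ip
    # expected to include: --node-ip=192.168.39.67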
	I1104 11:03:19.345923   43487 ssh_runner.go:195] Run: crio config
	I1104 11:03:19.399886   43487 cni.go:84] Creating CNI manager for ""
	I1104 11:03:19.399909   43487 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1104 11:03:19.399922   43487 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1104 11:03:19.399956   43487 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.67 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-931571 NodeName:ha-931571 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.67"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.67 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1104 11:03:19.400106   43487 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.67
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-931571"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.67"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.67"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
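The rendered kubeadm config above is what is later copied to the guest as /var/tmp/minikube/kubeadm.yaml.new (the 2286-byte scp below). A hypothetical way to sanity-check it by hand, using the kubeadm binary the run already found on the node:

    sudo /var/lib/minikube/binaries/v1.31.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new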
	
	I1104 11:03:19.400126   43487 kube-vip.go:115] generating kube-vip config ...
	I1104 11:03:19.400180   43487 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1104 11:03:19.411359   43487 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1104 11:03:19.411489   43487 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.5
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
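The static pod above advertises the API-server VIP 192.168.39.254 (matching APIServerHAVIP in the cluster config) from whichever control-plane node holds the plndr-cp-lock lease, and with lb_enable it also load-balances port 8443 across members. Once the manifest is written to /etc/kubernetes/manifests/kube-vip.yaml (the 1441-byte scp below), a hypothetical external check, assuming anonymous access to /version is left at its default:

    curl -k https://192.168.39.254:8443/version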
	I1104 11:03:19.411549   43487 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1104 11:03:19.420430   43487 binaries.go:44] Found k8s binaries, skipping transfer
	I1104 11:03:19.420500   43487 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1104 11:03:19.429659   43487 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I1104 11:03:19.445912   43487 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1104 11:03:19.461851   43487 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2286 bytes)
	I1104 11:03:19.478119   43487 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1104 11:03:19.494678   43487 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1104 11:03:19.499089   43487 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1104 11:03:19.639880   43487 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1104 11:03:19.653539   43487 certs.go:68] Setting up /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571 for IP: 192.168.39.67
	I1104 11:03:19.653562   43487 certs.go:194] generating shared ca certs ...
	I1104 11:03:19.653579   43487 certs.go:226] acquiring lock for ca certs: {Name:mk4fb0469da6697654d0290fd25edddd463f47c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 11:03:19.653721   43487 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.key
	I1104 11:03:19.653775   43487 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.key
	I1104 11:03:19.653788   43487 certs.go:256] generating profile certs ...
	I1104 11:03:19.653877   43487 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/client.key
	I1104 11:03:19.653912   43487 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.key.dd846fa0
	I1104 11:03:19.653933   43487 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.crt.dd846fa0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.67 192.168.39.245 192.168.39.57 192.168.39.254]
	I1104 11:03:19.885027   43487 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.crt.dd846fa0 ...
	I1104 11:03:19.885059   43487 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.crt.dd846fa0: {Name:mk69f57313434af2e91ed33999be6969db1655d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 11:03:19.885262   43487 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.key.dd846fa0 ...
	I1104 11:03:19.885278   43487 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.key.dd846fa0: {Name:mk036af60f5877bd7b54bd0649ec2229ae064452 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 11:03:19.885373   43487 certs.go:381] copying /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.crt.dd846fa0 -> /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.crt
	I1104 11:03:19.885549   43487 certs.go:385] copying /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.key.dd846fa0 -> /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.key
	I1104 11:03:19.885706   43487 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/proxy-client.key
	I1104 11:03:19.885722   43487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1104 11:03:19.885740   43487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1104 11:03:19.885756   43487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1104 11:03:19.885778   43487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1104 11:03:19.885796   43487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1104 11:03:19.885822   43487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1104 11:03:19.885840   43487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1104 11:03:19.885858   43487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1104 11:03:19.885925   43487 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218.pem (1338 bytes)
	W1104 11:03:19.885964   43487 certs.go:480] ignoring /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218_empty.pem, impossibly tiny 0 bytes
	I1104 11:03:19.885979   43487 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem (1675 bytes)
	I1104 11:03:19.886014   43487 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem (1078 bytes)
	I1104 11:03:19.886046   43487 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem (1123 bytes)
	I1104 11:03:19.886078   43487 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem (1679 bytes)
	I1104 11:03:19.886131   43487 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem (1708 bytes)
	I1104 11:03:19.886172   43487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem -> /usr/share/ca-certificates/272182.pem
	I1104 11:03:19.886192   43487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1104 11:03:19.886211   43487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218.pem -> /usr/share/ca-certificates/27218.pem
	I1104 11:03:19.886765   43487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1104 11:03:19.911799   43487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1104 11:03:19.934788   43487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1104 11:03:19.958214   43487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1104 11:03:19.982177   43487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1104 11:03:20.006762   43487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1104 11:03:20.032203   43487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1104 11:03:20.056775   43487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1104 11:03:20.081730   43487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem --> /usr/share/ca-certificates/272182.pem (1708 bytes)
	I1104 11:03:20.107796   43487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1104 11:03:20.132535   43487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218.pem --> /usr/share/ca-certificates/27218.pem (1338 bytes)
	I1104 11:03:20.157941   43487 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1104 11:03:20.175127   43487 ssh_runner.go:195] Run: openssl version
	I1104 11:03:20.180513   43487 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/272182.pem && ln -fs /usr/share/ca-certificates/272182.pem /etc/ssl/certs/272182.pem"
	I1104 11:03:20.192022   43487 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/272182.pem
	I1104 11:03:20.196624   43487 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  4 10:49 /usr/share/ca-certificates/272182.pem
	I1104 11:03:20.196676   43487 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/272182.pem
	I1104 11:03:20.202253   43487 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/272182.pem /etc/ssl/certs/3ec20f2e.0"
	I1104 11:03:20.212464   43487 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1104 11:03:20.224808   43487 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1104 11:03:20.229606   43487 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  4 10:38 /usr/share/ca-certificates/minikubeCA.pem
	I1104 11:03:20.229653   43487 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1104 11:03:20.235319   43487 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1104 11:03:20.246230   43487 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/27218.pem && ln -fs /usr/share/ca-certificates/27218.pem /etc/ssl/certs/27218.pem"
	I1104 11:03:20.258556   43487 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/27218.pem
	I1104 11:03:20.263132   43487 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  4 10:49 /usr/share/ca-certificates/27218.pem
	I1104 11:03:20.263190   43487 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/27218.pem
	I1104 11:03:20.268984   43487 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/27218.pem /etc/ssl/certs/51391683.0"
	I1104 11:03:20.279291   43487 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1104 11:03:20.283601   43487 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1104 11:03:20.288948   43487 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1104 11:03:20.294474   43487 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1104 11:03:20.299807   43487 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1104 11:03:20.305182   43487 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1104 11:03:20.310975   43487 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1104 11:03:20.316451   43487 kubeadm.go:392] StartCluster: {Name:ha-931571 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 Clust
erName:ha-931571 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.67 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.245 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.57 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.237 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storagec
lass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000
.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1104 11:03:20.316562   43487 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1104 11:03:20.316594   43487 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1104 11:03:20.354362   43487 cri.go:89] found id: "4c3aa3719ea407f31bd76c40125ab3b7bdd92ee408b1f5e698e57298fb7c8bf5"
	I1104 11:03:20.354384   43487 cri.go:89] found id: "b93e0586789e3f2dc0a6a83e13dc87e97cd99bac979bcedff72518a08f43e152"
	I1104 11:03:20.354387   43487 cri.go:89] found id: "801830521b8c68ec780508f94b0f3d8c52a6c3d5458328e719c2ce3178c47cc3"
	I1104 11:03:20.354390   43487 cri.go:89] found id: "400aa38b5335627cc08143a9b2a5627b7fee85d555c2cc40a536ce98a76dc457"
	I1104 11:03:20.354393   43487 cri.go:89] found id: "49e75724c5eada14bded615bd1ba93f602ae8bcdf4c5ed95b646991a79dc403c"
	I1104 11:03:20.354395   43487 cri.go:89] found id: "f8efbd7a72ea51074ffa14c6c164b0072c5d57e24d1bd5b6d1a123aa8216069c"
	I1104 11:03:20.354402   43487 cri.go:89] found id: "4401315f385bf2a6e5d875c655d4d61ff68d76ccf428956355ffd500a9ce2bc0"
	I1104 11:03:20.354404   43487 cri.go:89] found id: "6e592fe17c5f71c11cf25a871efff88360e0232b8ea66e460109b68f40581ba8"
	I1104 11:03:20.354408   43487 cri.go:89] found id: "e50ab0290e7c2f22e6df511793236d28a6e0f3e1d5dbdcdd2e3447e4c26c2e6c"
	I1104 11:03:20.354425   43487 cri.go:89] found id: "4572c8bcb28cdf71917ee1df07e150610c3e183aaa1243eb84ab3c083f31f7bc"
	I1104 11:03:20.354434   43487 cri.go:89] found id: "82e4be064be10644428d59bf1bc4467a8666cf78ec7b830a51e614de7c4b3150"
	I1104 11:03:20.354436   43487 cri.go:89] found id: "f2d32daf142ba5b73e2f0491ea4ad911e8456739f191faf1cc001827eb91790c"
	I1104 11:03:20.354439   43487 cri.go:89] found id: ""
	I1104 11:03:20.354477   43487 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
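The log excerpt above shows minikube validating each control-plane certificate over SSH with `openssl x509 -noout -checkend 86400` before reusing it. A minimal, hypothetical Go sketch of that validity check, run locally rather than through ssh_runner (the certificate paths are the ones that appear in the log; this is not minikube's own code):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Certificate paths taken from the log above.
	certs := []string{
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/apiserver-etcd-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
		"/var/lib/minikube/certs/etcd/healthcheck-client.crt",
		"/var/lib/minikube/certs/etcd/peer.crt",
		"/var/lib/minikube/certs/front-proxy-client.crt",
	}
	for _, c := range certs {
		// -checkend 86400 exits non-zero if the certificate expires within 24 hours.
		cmd := exec.Command("openssl", "x509", "-noout", "-in", c, "-checkend", "86400")
		if err := cmd.Run(); err != nil {
			fmt.Printf("%s: expires within 24h or unreadable (%v)\n", c, err)
			continue
		}
		fmt.Printf("%s: valid for at least 24h\n", c)
	}
}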
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-931571 -n ha-931571
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-931571 -n ha-931571: exit status 2 (13.921688202s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "ha-931571" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (57.72s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (194.02s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-931571 stop -v=7 --alsologtostderr
E1104 11:11:33.164789   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/functional-762465/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:533: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-931571 stop -v=7 --alsologtostderr: exit status 82 (2m0.469594287s)

                                                
                                                
-- stdout --
	* Stopping node "ha-931571-m04"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1104 11:10:21.817825   46316 out.go:345] Setting OutFile to fd 1 ...
	I1104 11:10:21.817955   46316 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1104 11:10:21.817966   46316 out.go:358] Setting ErrFile to fd 2...
	I1104 11:10:21.817972   46316 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1104 11:10:21.818162   46316 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19906-19898/.minikube/bin
	I1104 11:10:21.818376   46316 out.go:352] Setting JSON to false
	I1104 11:10:21.818458   46316 mustload.go:65] Loading cluster: ha-931571
	I1104 11:10:21.818830   46316 config.go:182] Loaded profile config "ha-931571": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1104 11:10:21.818913   46316 profile.go:143] Saving config to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/config.json ...
	I1104 11:10:21.819093   46316 mustload.go:65] Loading cluster: ha-931571
	I1104 11:10:21.819226   46316 config.go:182] Loaded profile config "ha-931571": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1104 11:10:21.819247   46316 stop.go:39] StopHost: ha-931571-m04
	I1104 11:10:21.819602   46316 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 11:10:21.819649   46316 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 11:10:21.834291   46316 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40799
	I1104 11:10:21.834779   46316 main.go:141] libmachine: () Calling .GetVersion
	I1104 11:10:21.835314   46316 main.go:141] libmachine: Using API Version  1
	I1104 11:10:21.835336   46316 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 11:10:21.835691   46316 main.go:141] libmachine: () Calling .GetMachineName
	I1104 11:10:21.838161   46316 out.go:177] * Stopping node "ha-931571-m04"  ...
	I1104 11:10:21.839638   46316 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1104 11:10:21.839663   46316 main.go:141] libmachine: (ha-931571-m04) Calling .DriverName
	I1104 11:10:21.839861   46316 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1104 11:10:21.839886   46316 main.go:141] libmachine: (ha-931571-m04) Calling .GetSSHHostname
	I1104 11:10:21.842797   46316 main.go:141] libmachine: (ha-931571-m04) DBG | domain ha-931571-m04 has defined MAC address 52:54:00:16:27:aa in network mk-ha-931571
	I1104 11:10:21.843236   46316 main.go:141] libmachine: (ha-931571-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:27:aa", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 12:06:15 +0000 UTC Type:0 Mac:52:54:00:16:27:aa Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:ha-931571-m04 Clientid:01:52:54:00:16:27:aa}
	I1104 11:10:21.843269   46316 main.go:141] libmachine: (ha-931571-m04) DBG | domain ha-931571-m04 has defined IP address 192.168.39.237 and MAC address 52:54:00:16:27:aa in network mk-ha-931571
	I1104 11:10:21.843396   46316 main.go:141] libmachine: (ha-931571-m04) Calling .GetSSHPort
	I1104 11:10:21.843565   46316 main.go:141] libmachine: (ha-931571-m04) Calling .GetSSHKeyPath
	I1104 11:10:21.843681   46316 main.go:141] libmachine: (ha-931571-m04) Calling .GetSSHUsername
	I1104 11:10:21.843794   46316 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571-m04/id_rsa Username:docker}
	I1104 11:10:21.928491   46316 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1104 11:10:21.981540   46316 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1104 11:10:22.034333   46316 main.go:141] libmachine: Stopping "ha-931571-m04"...
	I1104 11:10:22.034379   46316 main.go:141] libmachine: (ha-931571-m04) Calling .GetState
	I1104 11:10:22.036338   46316 main.go:141] libmachine: (ha-931571-m04) Calling .Stop
	I1104 11:10:22.040083   46316 main.go:141] libmachine: (ha-931571-m04) Waiting for machine to stop 0/120
	I1104 11:10:23.041726   46316 main.go:141] libmachine: (ha-931571-m04) Waiting for machine to stop 1/120
	I1104 11:10:24.043537   46316 main.go:141] libmachine: (ha-931571-m04) Waiting for machine to stop 2/120
	I1104 11:10:25.044736   46316 main.go:141] libmachine: (ha-931571-m04) Waiting for machine to stop 3/120
	I1104 11:10:26.045959   46316 main.go:141] libmachine: (ha-931571-m04) Waiting for machine to stop 4/120
	I1104 11:10:27.047394   46316 main.go:141] libmachine: (ha-931571-m04) Waiting for machine to stop 5/120
	I1104 11:10:28.048665   46316 main.go:141] libmachine: (ha-931571-m04) Waiting for machine to stop 6/120
	I1104 11:10:29.049895   46316 main.go:141] libmachine: (ha-931571-m04) Waiting for machine to stop 7/120
	I1104 11:10:30.051471   46316 main.go:141] libmachine: (ha-931571-m04) Waiting for machine to stop 8/120
	I1104 11:10:31.052648   46316 main.go:141] libmachine: (ha-931571-m04) Waiting for machine to stop 9/120
	I1104 11:10:32.054729   46316 main.go:141] libmachine: (ha-931571-m04) Waiting for machine to stop 10/120
	I1104 11:10:33.056170   46316 main.go:141] libmachine: (ha-931571-m04) Waiting for machine to stop 11/120
	I1104 11:10:34.057560   46316 main.go:141] libmachine: (ha-931571-m04) Waiting for machine to stop 12/120
	I1104 11:10:35.059769   46316 main.go:141] libmachine: (ha-931571-m04) Waiting for machine to stop 13/120
	I1104 11:10:36.061028   46316 main.go:141] libmachine: (ha-931571-m04) Waiting for machine to stop 14/120
	I1104 11:10:37.063112   46316 main.go:141] libmachine: (ha-931571-m04) Waiting for machine to stop 15/120
	I1104 11:10:38.064541   46316 main.go:141] libmachine: (ha-931571-m04) Waiting for machine to stop 16/120
	I1104 11:10:39.065768   46316 main.go:141] libmachine: (ha-931571-m04) Waiting for machine to stop 17/120
	I1104 11:10:40.067856   46316 main.go:141] libmachine: (ha-931571-m04) Waiting for machine to stop 18/120
	I1104 11:10:41.068966   46316 main.go:141] libmachine: (ha-931571-m04) Waiting for machine to stop 19/120
	I1104 11:10:42.071110   46316 main.go:141] libmachine: (ha-931571-m04) Waiting for machine to stop 20/120
	I1104 11:10:43.072758   46316 main.go:141] libmachine: (ha-931571-m04) Waiting for machine to stop 21/120
	I1104 11:10:44.073962   46316 main.go:141] libmachine: (ha-931571-m04) Waiting for machine to stop 22/120
	I1104 11:10:45.075515   46316 main.go:141] libmachine: (ha-931571-m04) Waiting for machine to stop 23/120
	I1104 11:10:46.076844   46316 main.go:141] libmachine: (ha-931571-m04) Waiting for machine to stop 24/120
	I1104 11:10:47.078694   46316 main.go:141] libmachine: (ha-931571-m04) Waiting for machine to stop 25/120
	I1104 11:10:48.080088   46316 main.go:141] libmachine: (ha-931571-m04) Waiting for machine to stop 26/120
	I1104 11:10:49.081563   46316 main.go:141] libmachine: (ha-931571-m04) Waiting for machine to stop 27/120
	I1104 11:10:50.083158   46316 main.go:141] libmachine: (ha-931571-m04) Waiting for machine to stop 28/120
	I1104 11:10:51.084629   46316 main.go:141] libmachine: (ha-931571-m04) Waiting for machine to stop 29/120
	I1104 11:10:52.086592   46316 main.go:141] libmachine: (ha-931571-m04) Waiting for machine to stop 30/120
	I1104 11:10:53.087867   46316 main.go:141] libmachine: (ha-931571-m04) Waiting for machine to stop 31/120
	I1104 11:10:54.089437   46316 main.go:141] libmachine: (ha-931571-m04) Waiting for machine to stop 32/120
	I1104 11:10:55.091855   46316 main.go:141] libmachine: (ha-931571-m04) Waiting for machine to stop 33/120
	I1104 11:10:56.093421   46316 main.go:141] libmachine: (ha-931571-m04) Waiting for machine to stop 34/120
	I1104 11:10:57.095448   46316 main.go:141] libmachine: (ha-931571-m04) Waiting for machine to stop 35/120
	I1104 11:10:58.096556   46316 main.go:141] libmachine: (ha-931571-m04) Waiting for machine to stop 36/120
	I1104 11:10:59.098140   46316 main.go:141] libmachine: (ha-931571-m04) Waiting for machine to stop 37/120
	I1104 11:11:00.099426   46316 main.go:141] libmachine: (ha-931571-m04) Waiting for machine to stop 38/120
	I1104 11:11:01.100902   46316 main.go:141] libmachine: (ha-931571-m04) Waiting for machine to stop 39/120
	I1104 11:11:02.103065   46316 main.go:141] libmachine: (ha-931571-m04) Waiting for machine to stop 40/120
	I1104 11:11:03.104513   46316 main.go:141] libmachine: (ha-931571-m04) Waiting for machine to stop 41/120
	I1104 11:11:04.105995   46316 main.go:141] libmachine: (ha-931571-m04) Waiting for machine to stop 42/120
	I1104 11:11:05.108228   46316 main.go:141] libmachine: (ha-931571-m04) Waiting for machine to stop 43/120
	I1104 11:11:06.109692   46316 main.go:141] libmachine: (ha-931571-m04) Waiting for machine to stop 44/120
	I1104 11:11:07.111639   46316 main.go:141] libmachine: (ha-931571-m04) Waiting for machine to stop 45/120
	I1104 11:11:08.113121   46316 main.go:141] libmachine: (ha-931571-m04) Waiting for machine to stop 46/120
	I1104 11:11:09.114529   46316 main.go:141] libmachine: (ha-931571-m04) Waiting for machine to stop 47/120
	I1104 11:11:10.116339   46316 main.go:141] libmachine: (ha-931571-m04) Waiting for machine to stop 48/120
	I1104 11:11:11.117792   46316 main.go:141] libmachine: (ha-931571-m04) Waiting for machine to stop 49/120
	I1104 11:11:12.119909   46316 main.go:141] libmachine: (ha-931571-m04) Waiting for machine to stop 50/120
	I1104 11:11:13.121478   46316 main.go:141] libmachine: (ha-931571-m04) Waiting for machine to stop 51/120
	I1104 11:11:14.123988   46316 main.go:141] libmachine: (ha-931571-m04) Waiting for machine to stop 52/120
	I1104 11:11:15.125318   46316 main.go:141] libmachine: (ha-931571-m04) Waiting for machine to stop 53/120
	I1104 11:11:16.126710   46316 main.go:141] libmachine: (ha-931571-m04) Waiting for machine to stop 54/120
	I1104 11:11:17.128805   46316 main.go:141] libmachine: (ha-931571-m04) Waiting for machine to stop 55/120
	I1104 11:11:18.130300   46316 main.go:141] libmachine: (ha-931571-m04) Waiting for machine to stop 56/120
	I1104 11:11:19.131657   46316 main.go:141] libmachine: (ha-931571-m04) Waiting for machine to stop 57/120
	I1104 11:11:20.133056   46316 main.go:141] libmachine: (ha-931571-m04) Waiting for machine to stop 58/120
	I1104 11:11:21.134460   46316 main.go:141] libmachine: (ha-931571-m04) Waiting for machine to stop 59/120
	I1104 11:11:22.136831   46316 main.go:141] libmachine: (ha-931571-m04) Waiting for machine to stop 60/120
	I1104 11:11:23.138280   46316 main.go:141] libmachine: (ha-931571-m04) Waiting for machine to stop 61/120
	I1104 11:11:24.139823   46316 main.go:141] libmachine: (ha-931571-m04) Waiting for machine to stop 62/120
	I1104 11:11:25.141275   46316 main.go:141] libmachine: (ha-931571-m04) Waiting for machine to stop 63/120
	I1104 11:11:26.142935   46316 main.go:141] libmachine: (ha-931571-m04) Waiting for machine to stop 64/120
	I1104 11:11:27.144495   46316 main.go:141] libmachine: (ha-931571-m04) Waiting for machine to stop 65/120
	I1104 11:11:28.145874   46316 main.go:141] libmachine: (ha-931571-m04) Waiting for machine to stop 66/120
	I1104 11:11:29.147150   46316 main.go:141] libmachine: (ha-931571-m04) Waiting for machine to stop 67/120
	I1104 11:11:30.148554   46316 main.go:141] libmachine: (ha-931571-m04) Waiting for machine to stop 68/120
	I1104 11:11:31.149990   46316 main.go:141] libmachine: (ha-931571-m04) Waiting for machine to stop 69/120
	I1104 11:11:32.152127   46316 main.go:141] libmachine: (ha-931571-m04) Waiting for machine to stop 70/120
	I1104 11:11:33.153361   46316 main.go:141] libmachine: (ha-931571-m04) Waiting for machine to stop 71/120
	I1104 11:11:34.154976   46316 main.go:141] libmachine: (ha-931571-m04) Waiting for machine to stop 72/120
	I1104 11:11:35.156283   46316 main.go:141] libmachine: (ha-931571-m04) Waiting for machine to stop 73/120
	I1104 11:11:36.157912   46316 main.go:141] libmachine: (ha-931571-m04) Waiting for machine to stop 74/120
	I1104 11:11:37.159952   46316 main.go:141] libmachine: (ha-931571-m04) Waiting for machine to stop 75/120
	I1104 11:11:38.161424   46316 main.go:141] libmachine: (ha-931571-m04) Waiting for machine to stop 76/120
	I1104 11:11:39.163067   46316 main.go:141] libmachine: (ha-931571-m04) Waiting for machine to stop 77/120
	I1104 11:11:40.164690   46316 main.go:141] libmachine: (ha-931571-m04) Waiting for machine to stop 78/120
	I1104 11:11:41.166016   46316 main.go:141] libmachine: (ha-931571-m04) Waiting for machine to stop 79/120
	I1104 11:11:42.168183   46316 main.go:141] libmachine: (ha-931571-m04) Waiting for machine to stop 80/120
	I1104 11:11:43.169272   46316 main.go:141] libmachine: (ha-931571-m04) Waiting for machine to stop 81/120
	I1104 11:11:44.170860   46316 main.go:141] libmachine: (ha-931571-m04) Waiting for machine to stop 82/120
	I1104 11:11:45.172375   46316 main.go:141] libmachine: (ha-931571-m04) Waiting for machine to stop 83/120
	I1104 11:11:46.173819   46316 main.go:141] libmachine: (ha-931571-m04) Waiting for machine to stop 84/120
	I1104 11:11:47.175253   46316 main.go:141] libmachine: (ha-931571-m04) Waiting for machine to stop 85/120
	I1104 11:11:48.176627   46316 main.go:141] libmachine: (ha-931571-m04) Waiting for machine to stop 86/120
	I1104 11:11:49.177890   46316 main.go:141] libmachine: (ha-931571-m04) Waiting for machine to stop 87/120
	I1104 11:11:50.179802   46316 main.go:141] libmachine: (ha-931571-m04) Waiting for machine to stop 88/120
	I1104 11:11:51.181383   46316 main.go:141] libmachine: (ha-931571-m04) Waiting for machine to stop 89/120
	I1104 11:11:52.183603   46316 main.go:141] libmachine: (ha-931571-m04) Waiting for machine to stop 90/120
	I1104 11:11:53.185127   46316 main.go:141] libmachine: (ha-931571-m04) Waiting for machine to stop 91/120
	I1104 11:11:54.186448   46316 main.go:141] libmachine: (ha-931571-m04) Waiting for machine to stop 92/120
	I1104 11:11:55.188121   46316 main.go:141] libmachine: (ha-931571-m04) Waiting for machine to stop 93/120
	I1104 11:11:56.189444   46316 main.go:141] libmachine: (ha-931571-m04) Waiting for machine to stop 94/120
	I1104 11:11:57.191724   46316 main.go:141] libmachine: (ha-931571-m04) Waiting for machine to stop 95/120
	I1104 11:11:58.193300   46316 main.go:141] libmachine: (ha-931571-m04) Waiting for machine to stop 96/120
	I1104 11:11:59.194802   46316 main.go:141] libmachine: (ha-931571-m04) Waiting for machine to stop 97/120
	I1104 11:12:00.196840   46316 main.go:141] libmachine: (ha-931571-m04) Waiting for machine to stop 98/120
	I1104 11:12:01.198220   46316 main.go:141] libmachine: (ha-931571-m04) Waiting for machine to stop 99/120
	I1104 11:12:02.200548   46316 main.go:141] libmachine: (ha-931571-m04) Waiting for machine to stop 100/120
	I1104 11:12:03.202650   46316 main.go:141] libmachine: (ha-931571-m04) Waiting for machine to stop 101/120
	I1104 11:12:04.203959   46316 main.go:141] libmachine: (ha-931571-m04) Waiting for machine to stop 102/120
	I1104 11:12:05.205540   46316 main.go:141] libmachine: (ha-931571-m04) Waiting for machine to stop 103/120
	I1104 11:12:06.206815   46316 main.go:141] libmachine: (ha-931571-m04) Waiting for machine to stop 104/120
	I1104 11:12:07.208336   46316 main.go:141] libmachine: (ha-931571-m04) Waiting for machine to stop 105/120
	I1104 11:12:08.209702   46316 main.go:141] libmachine: (ha-931571-m04) Waiting for machine to stop 106/120
	I1104 11:12:09.210991   46316 main.go:141] libmachine: (ha-931571-m04) Waiting for machine to stop 107/120
	I1104 11:12:10.212320   46316 main.go:141] libmachine: (ha-931571-m04) Waiting for machine to stop 108/120
	I1104 11:12:11.213772   46316 main.go:141] libmachine: (ha-931571-m04) Waiting for machine to stop 109/120
	I1104 11:12:12.215879   46316 main.go:141] libmachine: (ha-931571-m04) Waiting for machine to stop 110/120
	I1104 11:12:13.217473   46316 main.go:141] libmachine: (ha-931571-m04) Waiting for machine to stop 111/120
	I1104 11:12:14.218942   46316 main.go:141] libmachine: (ha-931571-m04) Waiting for machine to stop 112/120
	I1104 11:12:15.220320   46316 main.go:141] libmachine: (ha-931571-m04) Waiting for machine to stop 113/120
	I1104 11:12:16.222006   46316 main.go:141] libmachine: (ha-931571-m04) Waiting for machine to stop 114/120
	I1104 11:12:17.223682   46316 main.go:141] libmachine: (ha-931571-m04) Waiting for machine to stop 115/120
	I1104 11:12:18.224946   46316 main.go:141] libmachine: (ha-931571-m04) Waiting for machine to stop 116/120
	I1104 11:12:19.226389   46316 main.go:141] libmachine: (ha-931571-m04) Waiting for machine to stop 117/120
	I1104 11:12:20.227811   46316 main.go:141] libmachine: (ha-931571-m04) Waiting for machine to stop 118/120
	I1104 11:12:21.229319   46316 main.go:141] libmachine: (ha-931571-m04) Waiting for machine to stop 119/120
	I1104 11:12:22.230629   46316 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1104 11:12:22.230698   46316 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1104 11:12:22.232487   46316 out.go:201] 
	W1104 11:12:22.233940   46316 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1104 11:12:22.233958   46316 out.go:270] * 
	* 
	W1104 11:12:22.236097   46316 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1104 11:12:22.237357   46316 out.go:201] 

                                                
                                                
** /stderr **
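The stderr above shows the kvm2 driver polling the VM state roughly once per second for 120 iterations ("Waiting for machine to stop 0/120" through "119/120") before abandoning the stop, which is what surfaces as GUEST_STOP_TIMEOUT and exit status 82. A minimal sketch of that wait loop, using a hypothetical getState() stand-in for the driver's GetState call (not the actual libmachine/stop.go implementation):

package main

import (
	"fmt"
	"time"
)

// getState is a hypothetical stand-in for the kvm2 driver's GetState call.
func getState() string { return "Running" }

func main() {
	const maxRetries = 120 // matches the 0/120 .. 119/120 counter in the log
	for i := 0; i < maxRetries; i++ {
		if getState() == "Stopped" {
			fmt.Println("machine stopped")
			return
		}
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, maxRetries)
		time.Sleep(time.Second)
	}
	// After ~2 minutes of polling the stop is given up, which the caller
	// reports as GUEST_STOP_TIMEOUT (exit status 82) in the test above.
	fmt.Println(`stop err: unable to stop vm, current state "Running"`)
}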
ha_test.go:535: failed to stop cluster. args "out/minikube-linux-amd64 -p ha-931571 stop -v=7 --alsologtostderr": exit status 82
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-931571 status -v=7 --alsologtostderr
E1104 11:12:56.232395   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/functional-762465/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:539: (dbg) Done: out/minikube-linux-amd64 -p ha-931571 status -v=7 --alsologtostderr: (45.139166472s)
ha_test.go:545: status says not two control-plane nodes are present: args "out/minikube-linux-amd64 -p ha-931571 status -v=7 --alsologtostderr": 
ha_test.go:551: status says not three kubelets are stopped: args "out/minikube-linux-amd64 -p ha-931571 status -v=7 --alsologtostderr": 
ha_test.go:554: status says not two apiservers are stopped: args "out/minikube-linux-amd64 -p ha-931571 status -v=7 --alsologtostderr": 
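The three assertions above compare the post-stop `minikube status` output against the expected counts of stopped control planes, kubelets, and apiservers. A rough sketch of checking the per-node host state by hand with the same command the post-mortem uses below (illustrative only, not the ha_test.go assertion logic; node names are taken from the ha-931571 profile in this log):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Nodes of the ha-931571 profile shown in the log above.
	nodes := []string{"ha-931571", "ha-931571-m02", "ha-931571-m03", "ha-931571-m04"}
	for _, n := range nodes {
		// A non-zero exit (e.g. status 2) is expected when components are stopped,
		// so the error is ignored and only stdout is inspected.
		out, _ := exec.Command("out/minikube-linux-amd64", "status",
			"--format={{.Host}}", "-p", "ha-931571", "-n", n).Output()
		fmt.Printf("%s host state: %s\n", n, strings.TrimSpace(string(out)))
	}
}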
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-931571 -n ha-931571
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-931571 -n ha-931571: exit status 2 (13.24807689s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-931571 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-931571 logs -n 25: (1.910427258s)
helpers_test.go:252: TestMultiControlPlane/serial/StopCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-931571 ssh -n ha-931571-m02 sudo cat                                          | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | /home/docker/cp-test_ha-931571-m03_ha-931571-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-931571 cp ha-931571-m03:/home/docker/cp-test.txt                              | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | ha-931571-m04:/home/docker/cp-test_ha-931571-m03_ha-931571-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-931571 ssh -n                                                                 | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | ha-931571-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-931571 ssh -n ha-931571-m04 sudo cat                                          | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | /home/docker/cp-test_ha-931571-m03_ha-931571-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-931571 cp testdata/cp-test.txt                                                | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | ha-931571-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-931571 ssh -n                                                                 | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | ha-931571-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-931571 cp ha-931571-m04:/home/docker/cp-test.txt                              | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile2369318263/001/cp-test_ha-931571-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-931571 ssh -n                                                                 | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | ha-931571-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-931571 cp ha-931571-m04:/home/docker/cp-test.txt                              | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | ha-931571:/home/docker/cp-test_ha-931571-m04_ha-931571.txt                       |           |         |         |                     |                     |
	| ssh     | ha-931571 ssh -n                                                                 | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | ha-931571-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-931571 ssh -n ha-931571 sudo cat                                              | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | /home/docker/cp-test_ha-931571-m04_ha-931571.txt                                 |           |         |         |                     |                     |
	| cp      | ha-931571 cp ha-931571-m04:/home/docker/cp-test.txt                              | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | ha-931571-m02:/home/docker/cp-test_ha-931571-m04_ha-931571-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-931571 ssh -n                                                                 | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | ha-931571-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-931571 ssh -n ha-931571-m02 sudo cat                                          | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | /home/docker/cp-test_ha-931571-m04_ha-931571-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-931571 cp ha-931571-m04:/home/docker/cp-test.txt                              | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | ha-931571-m03:/home/docker/cp-test_ha-931571-m04_ha-931571-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-931571 ssh -n                                                                 | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | ha-931571-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-931571 ssh -n ha-931571-m03 sudo cat                                          | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | /home/docker/cp-test_ha-931571-m04_ha-931571-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-931571 node stop m02 -v=7                                                     | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-931571 node start m02 -v=7                                                    | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:59 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-931571 -v=7                                                           | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:59 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-931571 -v=7                                                                | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:59 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-931571 --wait=true -v=7                                                    | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 11:01 UTC | 04 Nov 24 11:06 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-931571                                                                | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 11:06 UTC |                     |
	| node    | ha-931571 node delete m03 -v=7                                                   | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 11:06 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-931571 stop -v=7                                                              | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 11:10 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/11/04 11:01:36
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1104 11:01:36.135689   43487 out.go:345] Setting OutFile to fd 1 ...
	I1104 11:01:36.135831   43487 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1104 11:01:36.135841   43487 out.go:358] Setting ErrFile to fd 2...
	I1104 11:01:36.135848   43487 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1104 11:01:36.136026   43487 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19906-19898/.minikube/bin
	I1104 11:01:36.136622   43487 out.go:352] Setting JSON to false
	I1104 11:01:36.137570   43487 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":6247,"bootTime":1730711849,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1104 11:01:36.137665   43487 start.go:139] virtualization: kvm guest
	I1104 11:01:36.140736   43487 out.go:177] * [ha-931571] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1104 11:01:36.142255   43487 notify.go:220] Checking for updates...
	I1104 11:01:36.142280   43487 out.go:177]   - MINIKUBE_LOCATION=19906
	I1104 11:01:36.143792   43487 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1104 11:01:36.145520   43487 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19906-19898/kubeconfig
	I1104 11:01:36.147024   43487 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19906-19898/.minikube
	I1104 11:01:36.148374   43487 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1104 11:01:36.150002   43487 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1104 11:01:36.151746   43487 config.go:182] Loaded profile config "ha-931571": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1104 11:01:36.151854   43487 driver.go:394] Setting default libvirt URI to qemu:///system
	I1104 11:01:36.152270   43487 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 11:01:36.152323   43487 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 11:01:36.167782   43487 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35975
	I1104 11:01:36.168314   43487 main.go:141] libmachine: () Calling .GetVersion
	I1104 11:01:36.168871   43487 main.go:141] libmachine: Using API Version  1
	I1104 11:01:36.168896   43487 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 11:01:36.169315   43487 main.go:141] libmachine: () Calling .GetMachineName
	I1104 11:01:36.169538   43487 main.go:141] libmachine: (ha-931571) Calling .DriverName
	I1104 11:01:36.206070   43487 out.go:177] * Using the kvm2 driver based on existing profile
	I1104 11:01:36.207361   43487 start.go:297] selected driver: kvm2
	I1104 11:01:36.207389   43487 start.go:901] validating driver "kvm2" against &{Name:ha-931571 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.31.2 ClusterName:ha-931571 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.67 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.245 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.57 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.237 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false de
fault-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9P
Version:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1104 11:01:36.207518   43487 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1104 11:01:36.207957   43487 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1104 11:01:36.208077   43487 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19906-19898/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1104 11:01:36.225111   43487 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1104 11:01:36.225913   43487 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1104 11:01:36.225946   43487 cni.go:84] Creating CNI manager for ""
	I1104 11:01:36.225978   43487 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1104 11:01:36.226027   43487 start.go:340] cluster config:
	{Name:ha-931571 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-931571 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.67 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.245 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.57 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.237 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:fa
lse headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[
] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1104 11:01:36.226141   43487 iso.go:125] acquiring lock: {Name:mk00c7dd6e02d348844f079f7574057e15cae010 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1104 11:01:36.228347   43487 out.go:177] * Starting "ha-931571" primary control-plane node in "ha-931571" cluster
	I1104 11:01:36.229829   43487 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1104 11:01:36.229870   43487 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1104 11:01:36.229878   43487 cache.go:56] Caching tarball of preloaded images
	I1104 11:01:36.229952   43487 preload.go:172] Found /home/jenkins/minikube-integration/19906-19898/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1104 11:01:36.229964   43487 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1104 11:01:36.230064   43487 profile.go:143] Saving config to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/config.json ...
	I1104 11:01:36.230320   43487 start.go:360] acquireMachinesLock for ha-931571: {Name:mka4ed965095cfb58c9b0f5916edff39b4236a2f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1104 11:01:36.230371   43487 start.go:364] duration metric: took 27.926µs to acquireMachinesLock for "ha-931571"
	I1104 11:01:36.230386   43487 start.go:96] Skipping create...Using existing machine configuration
	I1104 11:01:36.230395   43487 fix.go:54] fixHost starting: 
	I1104 11:01:36.230733   43487 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 11:01:36.230769   43487 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 11:01:36.245984   43487 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45625
	I1104 11:01:36.246433   43487 main.go:141] libmachine: () Calling .GetVersion
	I1104 11:01:36.246934   43487 main.go:141] libmachine: Using API Version  1
	I1104 11:01:36.246955   43487 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 11:01:36.247232   43487 main.go:141] libmachine: () Calling .GetMachineName
	I1104 11:01:36.247395   43487 main.go:141] libmachine: (ha-931571) Calling .DriverName
	I1104 11:01:36.247568   43487 main.go:141] libmachine: (ha-931571) Calling .GetState
	I1104 11:01:36.249147   43487 fix.go:112] recreateIfNeeded on ha-931571: state=Running err=<nil>
	W1104 11:01:36.249199   43487 fix.go:138] unexpected machine state, will restart: <nil>
	I1104 11:01:36.251132   43487 out.go:177] * Updating the running kvm2 "ha-931571" VM ...
	I1104 11:01:36.252516   43487 machine.go:93] provisionDockerMachine start ...
	I1104 11:01:36.252546   43487 main.go:141] libmachine: (ha-931571) Calling .DriverName
	I1104 11:01:36.252780   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHHostname
	I1104 11:01:36.255202   43487 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:01:36.255594   43487 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 11:01:36.255616   43487 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:01:36.255731   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHPort
	I1104 11:01:36.255890   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 11:01:36.256009   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 11:01:36.256140   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHUsername
	I1104 11:01:36.256308   43487 main.go:141] libmachine: Using SSH client type: native
	I1104 11:01:36.256489   43487 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I1104 11:01:36.256500   43487 main.go:141] libmachine: About to run SSH command:
	hostname
	I1104 11:01:36.361800   43487 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-931571
	
	I1104 11:01:36.361835   43487 main.go:141] libmachine: (ha-931571) Calling .GetMachineName
	I1104 11:01:36.362053   43487 buildroot.go:166] provisioning hostname "ha-931571"
	I1104 11:01:36.362076   43487 main.go:141] libmachine: (ha-931571) Calling .GetMachineName
	I1104 11:01:36.362273   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHHostname
	I1104 11:01:36.365086   43487 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:01:36.365550   43487 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 11:01:36.365581   43487 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:01:36.365735   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHPort
	I1104 11:01:36.365939   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 11:01:36.366072   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 11:01:36.366277   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHUsername
	I1104 11:01:36.366448   43487 main.go:141] libmachine: Using SSH client type: native
	I1104 11:01:36.366691   43487 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I1104 11:01:36.366706   43487 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-931571 && echo "ha-931571" | sudo tee /etc/hostname
	I1104 11:01:36.493768   43487 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-931571
	
	I1104 11:01:36.493790   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHHostname
	I1104 11:01:36.496511   43487 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:01:36.496961   43487 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 11:01:36.496984   43487 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:01:36.497265   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHPort
	I1104 11:01:36.497539   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 11:01:36.497705   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 11:01:36.497875   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHUsername
	I1104 11:01:36.498037   43487 main.go:141] libmachine: Using SSH client type: native
	I1104 11:01:36.498202   43487 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I1104 11:01:36.498219   43487 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-931571' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-931571/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-931571' | sudo tee -a /etc/hosts; 
				fi
			fi
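The two SSH commands above set the transient hostname and ensure /etc/hosts can resolve it. A quick manual check of the result on the guest (illustrative only, not part of the minikube flow) might look like:

	hostname                        # expect: ha-931571
	getent hosts ha-931571          # 127.0.1.1 or the DHCP-assigned address from /etc/hosts
	grep -n 'ha-931571' /etc/hosts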
	I1104 11:01:36.610606   43487 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1104 11:01:36.610641   43487 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19906-19898/.minikube CaCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19906-19898/.minikube}
	I1104 11:01:36.610661   43487 buildroot.go:174] setting up certificates
	I1104 11:01:36.610669   43487 provision.go:84] configureAuth start
	I1104 11:01:36.610679   43487 main.go:141] libmachine: (ha-931571) Calling .GetMachineName
	I1104 11:01:36.610955   43487 main.go:141] libmachine: (ha-931571) Calling .GetIP
	I1104 11:01:36.613714   43487 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:01:36.614200   43487 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 11:01:36.614230   43487 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:01:36.614349   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHHostname
	I1104 11:01:36.616882   43487 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:01:36.617334   43487 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 11:01:36.617361   43487 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:01:36.617589   43487 provision.go:143] copyHostCerts
	I1104 11:01:36.617626   43487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem
	I1104 11:01:36.617677   43487 exec_runner.go:144] found /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem, removing ...
	I1104 11:01:36.617689   43487 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem
	I1104 11:01:36.617752   43487 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem (1078 bytes)
	I1104 11:01:36.617831   43487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem
	I1104 11:01:36.617850   43487 exec_runner.go:144] found /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem, removing ...
	I1104 11:01:36.617854   43487 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem
	I1104 11:01:36.617877   43487 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem (1123 bytes)
	I1104 11:01:36.617923   43487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem
	I1104 11:01:36.617936   43487 exec_runner.go:144] found /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem, removing ...
	I1104 11:01:36.617943   43487 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem
	I1104 11:01:36.617965   43487 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem (1679 bytes)
	I1104 11:01:36.618012   43487 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem org=jenkins.ha-931571 san=[127.0.0.1 192.168.39.67 ha-931571 localhost minikube]
	I1104 11:01:36.828436   43487 provision.go:177] copyRemoteCerts
	I1104 11:01:36.828491   43487 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1104 11:01:36.828512   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHHostname
	I1104 11:01:36.830991   43487 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:01:36.831347   43487 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 11:01:36.831368   43487 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:01:36.831530   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHPort
	I1104 11:01:36.831721   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 11:01:36.831867   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHUsername
	I1104 11:01:36.831960   43487 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571/id_rsa Username:docker}
	I1104 11:01:36.915764   43487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1104 11:01:36.915847   43487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1104 11:01:36.939587   43487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1104 11:01:36.939667   43487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1104 11:01:36.963061   43487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1104 11:01:36.963124   43487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1104 11:01:36.986150   43487 provision.go:87] duration metric: took 375.467362ms to configureAuth
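The copyRemoteCerts step above pushes the host CA plus the freshly generated server cert and key into /etc/docker on the guest. A rough manual equivalent, with paths taken from the log (minikube actually streams the files over its own SSH runner rather than shelling out to ssh(1)):

	M=/home/jenkins/minikube-integration/19906-19898/.minikube
	KEY=$M/machines/ha-931571/id_rsa
	ssh -i "$KEY" docker@192.168.39.67 'sudo mkdir -p /etc/docker'
	ssh -i "$KEY" docker@192.168.39.67 'sudo tee /etc/docker/ca.pem >/dev/null'         < "$M/certs/ca.pem"
	ssh -i "$KEY" docker@192.168.39.67 'sudo tee /etc/docker/server.pem >/dev/null'     < "$M/machines/server.pem"
	ssh -i "$KEY" docker@192.168.39.67 'sudo tee /etc/docker/server-key.pem >/dev/null' < "$M/machines/server-key.pem"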
	I1104 11:01:36.986177   43487 buildroot.go:189] setting minikube options for container-runtime
	I1104 11:01:36.986415   43487 config.go:182] Loaded profile config "ha-931571": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1104 11:01:36.986508   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHHostname
	I1104 11:01:36.988810   43487 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:01:36.989158   43487 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 11:01:36.989186   43487 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:01:36.989401   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHPort
	I1104 11:01:36.989591   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 11:01:36.989752   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 11:01:36.989860   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHUsername
	I1104 11:01:36.989990   43487 main.go:141] libmachine: Using SSH client type: native
	I1104 11:01:36.990180   43487 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I1104 11:01:36.990196   43487 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1104 11:03:07.637315   43487 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1104 11:03:07.637355   43487 machine.go:96] duration metric: took 1m31.384824491s to provisionDockerMachine
	I1104 11:03:07.637369   43487 start.go:293] postStartSetup for "ha-931571" (driver="kvm2")
	I1104 11:03:07.637384   43487 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1104 11:03:07.637404   43487 main.go:141] libmachine: (ha-931571) Calling .DriverName
	I1104 11:03:07.637761   43487 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1104 11:03:07.637793   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHHostname
	I1104 11:03:07.640901   43487 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:03:07.641365   43487 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 11:03:07.641386   43487 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:03:07.641580   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHPort
	I1104 11:03:07.641782   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 11:03:07.641937   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHUsername
	I1104 11:03:07.642057   43487 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571/id_rsa Username:docker}
	I1104 11:03:07.723354   43487 ssh_runner.go:195] Run: cat /etc/os-release
	I1104 11:03:07.727749   43487 info.go:137] Remote host: Buildroot 2023.02.9
	I1104 11:03:07.727790   43487 filesync.go:126] Scanning /home/jenkins/minikube-integration/19906-19898/.minikube/addons for local assets ...
	I1104 11:03:07.727866   43487 filesync.go:126] Scanning /home/jenkins/minikube-integration/19906-19898/.minikube/files for local assets ...
	I1104 11:03:07.727978   43487 filesync.go:149] local asset: /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem -> 272182.pem in /etc/ssl/certs
	I1104 11:03:07.727992   43487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem -> /etc/ssl/certs/272182.pem
	I1104 11:03:07.728104   43487 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1104 11:03:07.737590   43487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem --> /etc/ssl/certs/272182.pem (1708 bytes)
	I1104 11:03:07.760844   43487 start.go:296] duration metric: took 123.46114ms for postStartSetup
	I1104 11:03:07.760883   43487 main.go:141] libmachine: (ha-931571) Calling .DriverName
	I1104 11:03:07.761154   43487 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1104 11:03:07.761179   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHHostname
	I1104 11:03:07.763801   43487 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:03:07.764219   43487 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 11:03:07.764250   43487 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:03:07.764422   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHPort
	I1104 11:03:07.764610   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 11:03:07.764765   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHUsername
	I1104 11:03:07.764923   43487 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571/id_rsa Username:docker}
	W1104 11:03:07.847152   43487 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I1104 11:03:07.847183   43487 fix.go:56] duration metric: took 1m31.616787199s for fixHost
	I1104 11:03:07.847210   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHHostname
	I1104 11:03:07.849780   43487 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:03:07.850080   43487 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 11:03:07.850103   43487 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:03:07.850285   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHPort
	I1104 11:03:07.850444   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 11:03:07.850572   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 11:03:07.850663   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHUsername
	I1104 11:03:07.850778   43487 main.go:141] libmachine: Using SSH client type: native
	I1104 11:03:07.850921   43487 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I1104 11:03:07.850932   43487 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1104 11:03:07.957716   43487 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730718187.926175534
	
	I1104 11:03:07.957740   43487 fix.go:216] guest clock: 1730718187.926175534
	I1104 11:03:07.957749   43487 fix.go:229] Guest: 2024-11-04 11:03:07.926175534 +0000 UTC Remote: 2024-11-04 11:03:07.847191367 +0000 UTC m=+91.749611169 (delta=78.984167ms)
	I1104 11:03:07.957775   43487 fix.go:200] guest clock delta is within tolerance: 78.984167ms
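The guest-clock check runs `date +%s.%N` on the VM and compares it against the host clock, skipping any resync while the delta stays within tolerance. A shell sketch of the same comparison (the 1-second threshold here is an assumption; the log only reports the measured 78ms delta):

	KEY=/home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571/id_rsa
	guest=$(ssh -i "$KEY" docker@192.168.39.67 'date +%s.%N')
	host=$(date +%s.%N)
	delta=$(echo "$guest - $host" | bc | tr -d '-')     # absolute skew in seconds
	if (( $(echo "$delta > 1" | bc -l) )); then
	  echo "guest clock skewed by ${delta}s, would trigger a resync"
	else
	  echo "guest clock delta ${delta}s is within tolerance"
	fi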
	I1104 11:03:07.957780   43487 start.go:83] releasing machines lock for "ha-931571", held for 1m31.727399754s
	I1104 11:03:07.957797   43487 main.go:141] libmachine: (ha-931571) Calling .DriverName
	I1104 11:03:07.958011   43487 main.go:141] libmachine: (ha-931571) Calling .GetIP
	I1104 11:03:07.960277   43487 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:03:07.960596   43487 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 11:03:07.960623   43487 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:03:07.960746   43487 main.go:141] libmachine: (ha-931571) Calling .DriverName
	I1104 11:03:07.961392   43487 main.go:141] libmachine: (ha-931571) Calling .DriverName
	I1104 11:03:07.961589   43487 main.go:141] libmachine: (ha-931571) Calling .DriverName
	I1104 11:03:07.961682   43487 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1104 11:03:07.961744   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHHostname
	I1104 11:03:07.961789   43487 ssh_runner.go:195] Run: cat /version.json
	I1104 11:03:07.961812   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHHostname
	I1104 11:03:07.964564   43487 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:03:07.964779   43487 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:03:07.964935   43487 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 11:03:07.964958   43487 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:03:07.965102   43487 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 11:03:07.965115   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHPort
	I1104 11:03:07.965127   43487 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:03:07.965307   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHPort
	I1104 11:03:07.965321   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 11:03:07.965465   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHUsername
	I1104 11:03:07.965475   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 11:03:07.965612   43487 main.go:141] libmachine: (ha-931571) Calling .GetSSHUsername
	I1104 11:03:07.965607   43487 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571/id_rsa Username:docker}
	I1104 11:03:07.965735   43487 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571/id_rsa Username:docker}
	I1104 11:03:08.062470   43487 ssh_runner.go:195] Run: systemctl --version
	I1104 11:03:08.068016   43487 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1104 11:03:08.217034   43487 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1104 11:03:08.225627   43487 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1104 11:03:08.225681   43487 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1104 11:03:08.234588   43487 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1104 11:03:08.234609   43487 start.go:495] detecting cgroup driver to use...
	I1104 11:03:08.234668   43487 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1104 11:03:08.250011   43487 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1104 11:03:08.263678   43487 docker.go:217] disabling cri-docker service (if available) ...
	I1104 11:03:08.263727   43487 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1104 11:03:08.276778   43487 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1104 11:03:08.289631   43487 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1104 11:03:08.436219   43487 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1104 11:03:08.580306   43487 docker.go:233] disabling docker service ...
	I1104 11:03:08.580381   43487 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1104 11:03:08.598849   43487 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1104 11:03:08.611846   43487 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1104 11:03:08.752818   43487 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1104 11:03:08.900497   43487 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1104 11:03:08.913868   43487 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1104 11:03:08.931418   43487 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1104 11:03:08.931481   43487 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 11:03:08.942464   43487 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1104 11:03:08.942519   43487 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 11:03:08.952702   43487 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 11:03:08.963648   43487 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 11:03:08.973838   43487 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1104 11:03:08.984434   43487 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 11:03:08.995143   43487 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 11:03:09.005343   43487 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 11:03:09.015650   43487 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1104 11:03:09.024728   43487 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1104 11:03:09.034012   43487 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1104 11:03:09.180518   43487 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1104 11:03:19.158217   43487 ssh_runner.go:235] Completed: sudo systemctl restart crio: (9.977660206s)
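The printf and sed commands above leave behind a crictl config pointing at the CRI-O socket and a 02-crio.conf drop-in with the pause image, cgroupfs driver, conmon cgroup and unprivileged-port sysctl. A sketch of the end state, with the expected values copied from those commands (the quoted file contents are reconstructed, not captured from the VM):

	cat /etc/crictl.yaml
	#   runtime-endpoint: unix:///var/run/crio/crio.sock
	grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	#   pause_image = "registry.k8s.io/pause:3.10"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",
	sudo crictl info >/dev/null && echo "crio is answering on /var/run/crio/crio.sock"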
	I1104 11:03:19.158256   43487 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1104 11:03:19.158312   43487 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1104 11:03:19.163030   43487 start.go:563] Will wait 60s for crictl version
	I1104 11:03:19.163087   43487 ssh_runner.go:195] Run: which crictl
	I1104 11:03:19.166614   43487 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1104 11:03:19.198130   43487 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1104 11:03:19.198200   43487 ssh_runner.go:195] Run: crio --version
	I1104 11:03:19.225725   43487 ssh_runner.go:195] Run: crio --version
	I1104 11:03:19.256273   43487 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1104 11:03:19.257947   43487 main.go:141] libmachine: (ha-931571) Calling .GetIP
	I1104 11:03:19.260526   43487 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:03:19.260966   43487 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 11:03:19.260989   43487 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:03:19.261303   43487 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1104 11:03:19.265771   43487 kubeadm.go:883] updating cluster {Name:ha-931571 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 Cl
usterName:ha-931571 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.67 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.245 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.57 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.237 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-stora
geclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1104 11:03:19.265898   43487 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1104 11:03:19.265937   43487 ssh_runner.go:195] Run: sudo crictl images --output json
	I1104 11:03:19.311790   43487 crio.go:514] all images are preloaded for cri-o runtime.
	I1104 11:03:19.311812   43487 crio.go:433] Images already preloaded, skipping extraction
	I1104 11:03:19.311863   43487 ssh_runner.go:195] Run: sudo crictl images --output json
	I1104 11:03:19.345725   43487 crio.go:514] all images are preloaded for cri-o runtime.
	I1104 11:03:19.345751   43487 cache_images.go:84] Images are preloaded, skipping loading
	I1104 11:03:19.345760   43487 kubeadm.go:934] updating node { 192.168.39.67 8443 v1.31.2 crio true true} ...
	I1104 11:03:19.345861   43487 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-931571 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.67
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-931571 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1104 11:03:19.345923   43487 ssh_runner.go:195] Run: crio config
	I1104 11:03:19.399886   43487 cni.go:84] Creating CNI manager for ""
	I1104 11:03:19.399909   43487 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1104 11:03:19.399922   43487 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1104 11:03:19.399956   43487 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.67 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-931571 NodeName:ha-931571 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.67"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.67 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1104 11:03:19.400106   43487 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.67
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-931571"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.67"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.67"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1104 11:03:19.400126   43487 kube-vip.go:115] generating kube-vip config ...
	I1104 11:03:19.400180   43487 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1104 11:03:19.411359   43487 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1104 11:03:19.411489   43487 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.5
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1104 11:03:19.411549   43487 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1104 11:03:19.420430   43487 binaries.go:44] Found k8s binaries, skipping transfer
	I1104 11:03:19.420500   43487 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1104 11:03:19.429659   43487 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I1104 11:03:19.445912   43487 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1104 11:03:19.461851   43487 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2286 bytes)
	I1104 11:03:19.478119   43487 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1104 11:03:19.494678   43487 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1104 11:03:19.499089   43487 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1104 11:03:19.639880   43487 ssh_runner.go:195] Run: sudo systemctl start kubelet
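At this point the kubelet drop-in, the kubelet unit, the new kubeadm.yaml and the kube-vip manifest have all been written, systemd reloaded, and kubelet started. An illustrative way to confirm the drop-in and manifests landed (not part of the test itself):

	systemctl is-active kubelet
	systemctl cat kubelet | grep -n '10-kubeadm.conf'    # the 308-byte drop-in written above
	ls -l /var/tmp/minikube/kubeadm.yaml.new /etc/kubernetes/manifests/kube-vip.yaml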
	I1104 11:03:19.653539   43487 certs.go:68] Setting up /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571 for IP: 192.168.39.67
	I1104 11:03:19.653562   43487 certs.go:194] generating shared ca certs ...
	I1104 11:03:19.653579   43487 certs.go:226] acquiring lock for ca certs: {Name:mk4fb0469da6697654d0290fd25edddd463f47c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 11:03:19.653721   43487 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.key
	I1104 11:03:19.653775   43487 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.key
	I1104 11:03:19.653788   43487 certs.go:256] generating profile certs ...
	I1104 11:03:19.653877   43487 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/client.key
	I1104 11:03:19.653912   43487 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.key.dd846fa0
	I1104 11:03:19.653933   43487 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.crt.dd846fa0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.67 192.168.39.245 192.168.39.57 192.168.39.254]
	I1104 11:03:19.885027   43487 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.crt.dd846fa0 ...
	I1104 11:03:19.885059   43487 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.crt.dd846fa0: {Name:mk69f57313434af2e91ed33999be6969db1655d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 11:03:19.885262   43487 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.key.dd846fa0 ...
	I1104 11:03:19.885278   43487 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.key.dd846fa0: {Name:mk036af60f5877bd7b54bd0649ec2229ae064452 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 11:03:19.885373   43487 certs.go:381] copying /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.crt.dd846fa0 -> /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.crt
	I1104 11:03:19.885549   43487 certs.go:385] copying /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.key.dd846fa0 -> /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.key
	I1104 11:03:19.885706   43487 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/proxy-client.key
	I1104 11:03:19.885722   43487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1104 11:03:19.885740   43487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1104 11:03:19.885756   43487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1104 11:03:19.885778   43487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1104 11:03:19.885796   43487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1104 11:03:19.885822   43487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1104 11:03:19.885840   43487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1104 11:03:19.885858   43487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1104 11:03:19.885925   43487 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218.pem (1338 bytes)
	W1104 11:03:19.885964   43487 certs.go:480] ignoring /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218_empty.pem, impossibly tiny 0 bytes
	I1104 11:03:19.885979   43487 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem (1675 bytes)
	I1104 11:03:19.886014   43487 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem (1078 bytes)
	I1104 11:03:19.886046   43487 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem (1123 bytes)
	I1104 11:03:19.886078   43487 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem (1679 bytes)
	I1104 11:03:19.886131   43487 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem (1708 bytes)
	I1104 11:03:19.886172   43487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem -> /usr/share/ca-certificates/272182.pem
	I1104 11:03:19.886192   43487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1104 11:03:19.886211   43487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218.pem -> /usr/share/ca-certificates/27218.pem
	I1104 11:03:19.886765   43487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1104 11:03:19.911799   43487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1104 11:03:19.934788   43487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1104 11:03:19.958214   43487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1104 11:03:19.982177   43487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1104 11:03:20.006762   43487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1104 11:03:20.032203   43487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1104 11:03:20.056775   43487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1104 11:03:20.081730   43487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem --> /usr/share/ca-certificates/272182.pem (1708 bytes)
	I1104 11:03:20.107796   43487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1104 11:03:20.132535   43487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218.pem --> /usr/share/ca-certificates/27218.pem (1338 bytes)
	I1104 11:03:20.157941   43487 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1104 11:03:20.175127   43487 ssh_runner.go:195] Run: openssl version
	I1104 11:03:20.180513   43487 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/272182.pem && ln -fs /usr/share/ca-certificates/272182.pem /etc/ssl/certs/272182.pem"
	I1104 11:03:20.192022   43487 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/272182.pem
	I1104 11:03:20.196624   43487 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  4 10:49 /usr/share/ca-certificates/272182.pem
	I1104 11:03:20.196676   43487 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/272182.pem
	I1104 11:03:20.202253   43487 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/272182.pem /etc/ssl/certs/3ec20f2e.0"
	I1104 11:03:20.212464   43487 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1104 11:03:20.224808   43487 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1104 11:03:20.229606   43487 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  4 10:38 /usr/share/ca-certificates/minikubeCA.pem
	I1104 11:03:20.229653   43487 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1104 11:03:20.235319   43487 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1104 11:03:20.246230   43487 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/27218.pem && ln -fs /usr/share/ca-certificates/27218.pem /etc/ssl/certs/27218.pem"
	I1104 11:03:20.258556   43487 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/27218.pem
	I1104 11:03:20.263132   43487 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  4 10:49 /usr/share/ca-certificates/27218.pem
	I1104 11:03:20.263190   43487 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/27218.pem
	I1104 11:03:20.268984   43487 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/27218.pem /etc/ssl/certs/51391683.0"
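Each CA copied into /usr/share/ca-certificates is hashed with `openssl x509 -hash` and symlinked into /etc/ssl/certs under `<subject-hash>.0`, which is how OpenSSL locates trust anchors at verification time. The three ln -fs calls above follow this generic pattern (the `.0` suffix assumes no hash collisions):

	for pem in /usr/share/ca-certificates/*.pem; do
	  h=$(openssl x509 -hash -noout -in "$pem")
	  sudo ln -fs "$pem" "/etc/ssl/certs/${h}.0"
	done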
	I1104 11:03:20.279291   43487 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1104 11:03:20.283601   43487 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1104 11:03:20.288948   43487 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1104 11:03:20.294474   43487 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1104 11:03:20.299807   43487 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1104 11:03:20.305182   43487 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1104 11:03:20.310975   43487 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
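The `-checkend 86400` probes verify that each existing control-plane certificate remains valid for at least another 24 hours before the cluster is restarted with them. The same check, written as a loop over the certs probed above:

	for c in apiserver-etcd-client apiserver-kubelet-client front-proxy-client \
	         etcd/server etcd/healthcheck-client etcd/peer; do
	  openssl x509 -noout -checkend 86400 -in "/var/lib/minikube/certs/${c}.crt" \
	    || echo "${c}.crt expires within 24h"
	done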
	I1104 11:03:20.316451   43487 kubeadm.go:392] StartCluster: {Name:ha-931571 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 Clust
erName:ha-931571 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.67 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.245 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.57 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.237 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storagec
lass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000
.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1104 11:03:20.316562   43487 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1104 11:03:20.316594   43487 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1104 11:03:20.354362   43487 cri.go:89] found id: "4c3aa3719ea407f31bd76c40125ab3b7bdd92ee408b1f5e698e57298fb7c8bf5"
	I1104 11:03:20.354384   43487 cri.go:89] found id: "b93e0586789e3f2dc0a6a83e13dc87e97cd99bac979bcedff72518a08f43e152"
	I1104 11:03:20.354387   43487 cri.go:89] found id: "801830521b8c68ec780508f94b0f3d8c52a6c3d5458328e719c2ce3178c47cc3"
	I1104 11:03:20.354390   43487 cri.go:89] found id: "400aa38b5335627cc08143a9b2a5627b7fee85d555c2cc40a536ce98a76dc457"
	I1104 11:03:20.354393   43487 cri.go:89] found id: "49e75724c5eada14bded615bd1ba93f602ae8bcdf4c5ed95b646991a79dc403c"
	I1104 11:03:20.354395   43487 cri.go:89] found id: "f8efbd7a72ea51074ffa14c6c164b0072c5d57e24d1bd5b6d1a123aa8216069c"
	I1104 11:03:20.354402   43487 cri.go:89] found id: "4401315f385bf2a6e5d875c655d4d61ff68d76ccf428956355ffd500a9ce2bc0"
	I1104 11:03:20.354404   43487 cri.go:89] found id: "6e592fe17c5f71c11cf25a871efff88360e0232b8ea66e460109b68f40581ba8"
	I1104 11:03:20.354408   43487 cri.go:89] found id: "e50ab0290e7c2f22e6df511793236d28a6e0f3e1d5dbdcdd2e3447e4c26c2e6c"
	I1104 11:03:20.354425   43487 cri.go:89] found id: "4572c8bcb28cdf71917ee1df07e150610c3e183aaa1243eb84ab3c083f31f7bc"
	I1104 11:03:20.354434   43487 cri.go:89] found id: "82e4be064be10644428d59bf1bc4467a8666cf78ec7b830a51e614de7c4b3150"
	I1104 11:03:20.354436   43487 cri.go:89] found id: "f2d32daf142ba5b73e2f0491ea4ad911e8456739f191faf1cc001827eb91790c"
	I1104 11:03:20.354439   43487 cri.go:89] found id: ""
	I1104 11:03:20.354477   43487 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
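The captured stdout above breaks off while the harness is enumerating kube-system containers through the CRI. A short sketch of that same query, assuming crictl on the guest is pointed at the CRI-O socket as it is in these logs:

    # List all kube-system containers (running or exited) known to the runtime,
    # printing only their IDs, as in the "found id:" lines above.
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock \
        ps -a --quiet --label io.kubernetes.pod.namespace=kube-system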
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-931571 -n ha-931571
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-931571 -n ha-931571: exit status 2 (13.235902177s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "ha-931571" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (194.02s)
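The StopCluster failure above ends with the harness probing the node via minikube status, where a Go template selects only the API-server field and a non-zero exit code marks a stopped component. A minimal reproduction sketch, assuming the same binary and profile name as in this run:

    # Exit code 2, as seen above, means a component (here the API server) is stopped;
    # the harness then skips further kubectl commands against the profile.
    out/minikube-linux-amd64 status --format='{{.APIServer}}' -p ha-931571 -n ha-931571
    echo "status exit code: $?"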

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (555.37s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 start -p ha-931571 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1104 11:14:47.409392   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/addons-746456/client.crt: no such file or directory" logger="UnhandledError"
E1104 11:16:33.168410   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/functional-762465/client.crt: no such file or directory" logger="UnhandledError"
E1104 11:19:47.409017   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/addons-746456/client.crt: no such file or directory" logger="UnhandledError"
E1104 11:21:33.165154   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/functional-762465/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:562: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p ha-931571 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: signal: killed (8m45.537545962s)

                                                
                                                
-- stdout --
	* [ha-931571] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19906
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19906-19898/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19906-19898/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "ha-931571" primary control-plane node in "ha-931571" cluster
	* Updating the running kvm2 "ha-931571" VM ...
	* Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	* Enabled addons: 
	
	* Starting "ha-931571-m02" control-plane node in "ha-931571" cluster
	* Updating the running kvm2 "ha-931571-m02" VM ...
	* Found network options:
	  - NO_PROXY=192.168.39.67
	* Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	  - env NO_PROXY=192.168.39.67
	* Verifying Kubernetes components...
	
	* Starting "ha-931571-m03" control-plane node in "ha-931571" cluster
	* Updating the running kvm2 "ha-931571-m03" VM ...
	* Found network options:
	  - NO_PROXY=192.168.39.67,192.168.39.245
	* Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	  - env NO_PROXY=192.168.39.67
	  - env NO_PROXY=192.168.39.67,192.168.39.245
	* Verifying Kubernetes components...

                                                
                                                
-- /stdout --
** stderr ** 
	I1104 11:13:35.833415   47155 out.go:345] Setting OutFile to fd 1 ...
	I1104 11:13:35.833528   47155 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1104 11:13:35.833536   47155 out.go:358] Setting ErrFile to fd 2...
	I1104 11:13:35.833541   47155 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1104 11:13:35.833736   47155 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19906-19898/.minikube/bin
	I1104 11:13:35.834262   47155 out.go:352] Setting JSON to false
	I1104 11:13:35.835261   47155 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":6967,"bootTime":1730711849,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1104 11:13:35.835379   47155 start.go:139] virtualization: kvm guest
	I1104 11:13:35.838681   47155 out.go:177] * [ha-931571] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1104 11:13:35.840450   47155 notify.go:220] Checking for updates...
	I1104 11:13:35.840461   47155 out.go:177]   - MINIKUBE_LOCATION=19906
	I1104 11:13:35.842043   47155 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1104 11:13:35.843379   47155 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19906-19898/kubeconfig
	I1104 11:13:35.844857   47155 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19906-19898/.minikube
	I1104 11:13:35.846315   47155 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1104 11:13:35.847599   47155 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1104 11:13:35.849318   47155 config.go:182] Loaded profile config "ha-931571": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1104 11:13:35.849720   47155 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 11:13:35.849786   47155 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 11:13:35.864740   47155 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46365
	I1104 11:13:35.865262   47155 main.go:141] libmachine: () Calling .GetVersion
	I1104 11:13:35.865773   47155 main.go:141] libmachine: Using API Version  1
	I1104 11:13:35.865803   47155 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 11:13:35.866155   47155 main.go:141] libmachine: () Calling .GetMachineName
	I1104 11:13:35.866366   47155 main.go:141] libmachine: (ha-931571) Calling .DriverName
	I1104 11:13:35.866590   47155 driver.go:394] Setting default libvirt URI to qemu:///system
	I1104 11:13:35.866870   47155 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 11:13:35.866904   47155 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 11:13:35.882018   47155 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40367
	I1104 11:13:35.882534   47155 main.go:141] libmachine: () Calling .GetVersion
	I1104 11:13:35.883110   47155 main.go:141] libmachine: Using API Version  1
	I1104 11:13:35.883130   47155 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 11:13:35.883429   47155 main.go:141] libmachine: () Calling .GetMachineName
	I1104 11:13:35.883609   47155 main.go:141] libmachine: (ha-931571) Calling .DriverName
	I1104 11:13:35.922883   47155 out.go:177] * Using the kvm2 driver based on existing profile
	I1104 11:13:35.924737   47155 start.go:297] selected driver: kvm2
	I1104 11:13:35.924750   47155 start.go:901] validating driver "kvm2" against &{Name:ha-931571 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.31.2 ClusterName:ha-931571 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.67 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.245 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.57 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.237 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false de
fault-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9P
Version:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1104 11:13:35.924898   47155 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1104 11:13:35.925291   47155 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1104 11:13:35.925368   47155 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19906-19898/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1104 11:13:35.940642   47155 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1104 11:13:35.941782   47155 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1104 11:13:35.941821   47155 cni.go:84] Creating CNI manager for ""
	I1104 11:13:35.941856   47155 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1104 11:13:35.941946   47155 start.go:340] cluster config:
	{Name:ha-931571 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-931571 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.67 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.245 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.57 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.237 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:fa
lse headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[
] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1104 11:13:35.942135   47155 iso.go:125] acquiring lock: {Name:mk00c7dd6e02d348844f079f7574057e15cae010 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1104 11:13:35.944801   47155 out.go:177] * Starting "ha-931571" primary control-plane node in "ha-931571" cluster
	I1104 11:13:35.946212   47155 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1104 11:13:35.946248   47155 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1104 11:13:35.946258   47155 cache.go:56] Caching tarball of preloaded images
	I1104 11:13:35.946326   47155 preload.go:172] Found /home/jenkins/minikube-integration/19906-19898/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1104 11:13:35.946336   47155 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1104 11:13:35.946433   47155 profile.go:143] Saving config to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/config.json ...
	I1104 11:13:35.946612   47155 start.go:360] acquireMachinesLock for ha-931571: {Name:mka4ed965095cfb58c9b0f5916edff39b4236a2f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1104 11:13:35.946650   47155 start.go:364] duration metric: took 21.616µs to acquireMachinesLock for "ha-931571"
	I1104 11:13:35.946663   47155 start.go:96] Skipping create...Using existing machine configuration
	I1104 11:13:35.946671   47155 fix.go:54] fixHost starting: 
	I1104 11:13:35.946903   47155 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 11:13:35.946933   47155 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 11:13:35.962455   47155 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40029
	I1104 11:13:35.962837   47155 main.go:141] libmachine: () Calling .GetVersion
	I1104 11:13:35.963304   47155 main.go:141] libmachine: Using API Version  1
	I1104 11:13:35.963328   47155 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 11:13:35.963646   47155 main.go:141] libmachine: () Calling .GetMachineName
	I1104 11:13:35.963825   47155 main.go:141] libmachine: (ha-931571) Calling .DriverName
	I1104 11:13:35.963930   47155 main.go:141] libmachine: (ha-931571) Calling .GetState
	I1104 11:13:35.965689   47155 fix.go:112] recreateIfNeeded on ha-931571: state=Running err=<nil>
	W1104 11:13:35.965706   47155 fix.go:138] unexpected machine state, will restart: <nil>
	I1104 11:13:35.968357   47155 out.go:177] * Updating the running kvm2 "ha-931571" VM ...
	I1104 11:13:35.969541   47155 machine.go:93] provisionDockerMachine start ...
	I1104 11:13:35.969561   47155 main.go:141] libmachine: (ha-931571) Calling .DriverName
	I1104 11:13:35.969763   47155 main.go:141] libmachine: (ha-931571) Calling .GetSSHHostname
	I1104 11:13:35.972166   47155 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:13:35.972610   47155 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 11:13:35.972639   47155 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:13:35.972777   47155 main.go:141] libmachine: (ha-931571) Calling .GetSSHPort
	I1104 11:13:35.972932   47155 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 11:13:35.973074   47155 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 11:13:35.973203   47155 main.go:141] libmachine: (ha-931571) Calling .GetSSHUsername
	I1104 11:13:35.973385   47155 main.go:141] libmachine: Using SSH client type: native
	I1104 11:13:35.973579   47155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I1104 11:13:35.973590   47155 main.go:141] libmachine: About to run SSH command:
	hostname
	I1104 11:13:36.086208   47155 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-931571
	
	I1104 11:13:36.086239   47155 main.go:141] libmachine: (ha-931571) Calling .GetMachineName
	I1104 11:13:36.086483   47155 buildroot.go:166] provisioning hostname "ha-931571"
	I1104 11:13:36.086503   47155 main.go:141] libmachine: (ha-931571) Calling .GetMachineName
	I1104 11:13:36.086693   47155 main.go:141] libmachine: (ha-931571) Calling .GetSSHHostname
	I1104 11:13:36.089373   47155 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:13:36.089784   47155 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 11:13:36.089810   47155 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:13:36.090068   47155 main.go:141] libmachine: (ha-931571) Calling .GetSSHPort
	I1104 11:13:36.090243   47155 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 11:13:36.090495   47155 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 11:13:36.090654   47155 main.go:141] libmachine: (ha-931571) Calling .GetSSHUsername
	I1104 11:13:36.090812   47155 main.go:141] libmachine: Using SSH client type: native
	I1104 11:13:36.090965   47155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I1104 11:13:36.090980   47155 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-931571 && echo "ha-931571" | sudo tee /etc/hostname
	I1104 11:13:36.212126   47155 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-931571
	
	I1104 11:13:36.212165   47155 main.go:141] libmachine: (ha-931571) Calling .GetSSHHostname
	I1104 11:13:36.215087   47155 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:13:36.215461   47155 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 11:13:36.215488   47155 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:13:36.215679   47155 main.go:141] libmachine: (ha-931571) Calling .GetSSHPort
	I1104 11:13:36.215853   47155 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 11:13:36.216022   47155 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 11:13:36.216178   47155 main.go:141] libmachine: (ha-931571) Calling .GetSSHUsername
	I1104 11:13:36.216353   47155 main.go:141] libmachine: Using SSH client type: native
	I1104 11:13:36.216552   47155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I1104 11:13:36.216571   47155 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-931571' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-931571/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-931571' | sudo tee -a /etc/hosts; 
				fi
			fi
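The shell fragment above is the provisioner's idempotent /etc/hosts patch: if no line already maps the machine's hostname, it rewrites an existing 127.0.1.1 entry or appends a new one. The same logic as a standalone sketch, with the hostname pulled out as a variable (parameterisation added here for illustration):

    #!/bin/bash
    NEW_HOSTNAME=ha-931571   # illustrative; the log hard-codes the profile name
    if ! grep -xq ".*\s${NEW_HOSTNAME}" /etc/hosts; then
        if grep -xq '127.0.1.1\s.*' /etc/hosts; then
            # Rewrite the existing 127.0.1.1 mapping to point at the new hostname.
            sudo sed -i "s/^127.0.1.1\s.*/127.0.1.1 ${NEW_HOSTNAME}/g" /etc/hosts
        else
            # No 127.0.1.1 entry yet, so append one.
            echo "127.0.1.1 ${NEW_HOSTNAME}" | sudo tee -a /etc/hosts
        fi
    fi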
	I1104 11:13:36.322195   47155 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1104 11:13:36.322220   47155 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19906-19898/.minikube CaCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19906-19898/.minikube}
	I1104 11:13:36.322240   47155 buildroot.go:174] setting up certificates
	I1104 11:13:36.322248   47155 provision.go:84] configureAuth start
	I1104 11:13:36.322255   47155 main.go:141] libmachine: (ha-931571) Calling .GetMachineName
	I1104 11:13:36.322519   47155 main.go:141] libmachine: (ha-931571) Calling .GetIP
	I1104 11:13:36.324706   47155 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:13:36.324996   47155 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 11:13:36.325029   47155 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:13:36.325182   47155 main.go:141] libmachine: (ha-931571) Calling .GetSSHHostname
	I1104 11:13:36.327094   47155 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:13:36.327427   47155 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 11:13:36.327463   47155 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:13:36.327609   47155 provision.go:143] copyHostCerts
	I1104 11:13:36.327648   47155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem
	I1104 11:13:36.327694   47155 exec_runner.go:144] found /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem, removing ...
	I1104 11:13:36.327707   47155 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem
	I1104 11:13:36.327791   47155 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem (1078 bytes)
	I1104 11:13:36.327908   47155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem
	I1104 11:13:36.327935   47155 exec_runner.go:144] found /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem, removing ...
	I1104 11:13:36.327944   47155 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem
	I1104 11:13:36.327981   47155 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem (1123 bytes)
	I1104 11:13:36.328044   47155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem
	I1104 11:13:36.328068   47155 exec_runner.go:144] found /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem, removing ...
	I1104 11:13:36.328079   47155 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem
	I1104 11:13:36.328115   47155 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem (1679 bytes)
	I1104 11:13:36.328179   47155 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem org=jenkins.ha-931571 san=[127.0.0.1 192.168.39.67 ha-931571 localhost minikube]
	I1104 11:13:36.585358   47155 provision.go:177] copyRemoteCerts
	I1104 11:13:36.585424   47155 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1104 11:13:36.585452   47155 main.go:141] libmachine: (ha-931571) Calling .GetSSHHostname
	I1104 11:13:36.588494   47155 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:13:36.588871   47155 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 11:13:36.588893   47155 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:13:36.589067   47155 main.go:141] libmachine: (ha-931571) Calling .GetSSHPort
	I1104 11:13:36.589270   47155 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 11:13:36.589418   47155 main.go:141] libmachine: (ha-931571) Calling .GetSSHUsername
	I1104 11:13:36.589528   47155 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571/id_rsa Username:docker}
	I1104 11:13:36.671933   47155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1104 11:13:36.672013   47155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1104 11:13:36.695410   47155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1104 11:13:36.695500   47155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1104 11:13:36.721157   47155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1104 11:13:36.721218   47155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1104 11:13:36.744359   47155 provision.go:87] duration metric: took 422.101487ms to configureAuth
	I1104 11:13:36.744385   47155 buildroot.go:189] setting minikube options for container-runtime
	I1104 11:13:36.744588   47155 config.go:182] Loaded profile config "ha-931571": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1104 11:13:36.744649   47155 main.go:141] libmachine: (ha-931571) Calling .GetSSHHostname
	I1104 11:13:36.747350   47155 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:13:36.747754   47155 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 11:13:36.747780   47155 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:13:36.748027   47155 main.go:141] libmachine: (ha-931571) Calling .GetSSHPort
	I1104 11:13:36.748231   47155 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 11:13:36.748381   47155 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 11:13:36.748564   47155 main.go:141] libmachine: (ha-931571) Calling .GetSSHUsername
	I1104 11:13:36.748718   47155 main.go:141] libmachine: Using SSH client type: native
	I1104 11:13:36.748871   47155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I1104 11:13:36.748886   47155 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1104 11:15:11.237727   47155 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1104 11:15:11.237754   47155 machine.go:96] duration metric: took 1m35.268199493s to provisionDockerMachine
	I1104 11:15:11.237771   47155 start.go:293] postStartSetup for "ha-931571" (driver="kvm2")
	I1104 11:15:11.237785   47155 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1104 11:15:11.237805   47155 main.go:141] libmachine: (ha-931571) Calling .DriverName
	I1104 11:15:11.238085   47155 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1104 11:15:11.238112   47155 main.go:141] libmachine: (ha-931571) Calling .GetSSHHostname
	I1104 11:15:11.241258   47155 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:15:11.241697   47155 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 11:15:11.241732   47155 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:15:11.241888   47155 main.go:141] libmachine: (ha-931571) Calling .GetSSHPort
	I1104 11:15:11.242062   47155 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 11:15:11.242182   47155 main.go:141] libmachine: (ha-931571) Calling .GetSSHUsername
	I1104 11:15:11.242331   47155 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571/id_rsa Username:docker}
	I1104 11:15:11.323226   47155 ssh_runner.go:195] Run: cat /etc/os-release
	I1104 11:15:11.327204   47155 info.go:137] Remote host: Buildroot 2023.02.9
	I1104 11:15:11.327224   47155 filesync.go:126] Scanning /home/jenkins/minikube-integration/19906-19898/.minikube/addons for local assets ...
	I1104 11:15:11.327279   47155 filesync.go:126] Scanning /home/jenkins/minikube-integration/19906-19898/.minikube/files for local assets ...
	I1104 11:15:11.327369   47155 filesync.go:149] local asset: /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem -> 272182.pem in /etc/ssl/certs
	I1104 11:15:11.327380   47155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem -> /etc/ssl/certs/272182.pem
	I1104 11:15:11.327470   47155 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1104 11:15:11.337070   47155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem --> /etc/ssl/certs/272182.pem (1708 bytes)
	I1104 11:15:11.360955   47155 start.go:296] duration metric: took 123.170374ms for postStartSetup
	I1104 11:15:11.361006   47155 main.go:141] libmachine: (ha-931571) Calling .DriverName
	I1104 11:15:11.361320   47155 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1104 11:15:11.361354   47155 main.go:141] libmachine: (ha-931571) Calling .GetSSHHostname
	I1104 11:15:11.364238   47155 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:15:11.364593   47155 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 11:15:11.364627   47155 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:15:11.364774   47155 main.go:141] libmachine: (ha-931571) Calling .GetSSHPort
	I1104 11:15:11.364944   47155 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 11:15:11.365070   47155 main.go:141] libmachine: (ha-931571) Calling .GetSSHUsername
	I1104 11:15:11.365172   47155 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571/id_rsa Username:docker}
	W1104 11:15:11.447399   47155 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I1104 11:15:11.447421   47155 fix.go:56] duration metric: took 1m35.500750552s for fixHost
	I1104 11:15:11.447441   47155 main.go:141] libmachine: (ha-931571) Calling .GetSSHHostname
	I1104 11:15:11.450343   47155 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:15:11.450768   47155 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 11:15:11.450794   47155 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:15:11.450960   47155 main.go:141] libmachine: (ha-931571) Calling .GetSSHPort
	I1104 11:15:11.451163   47155 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 11:15:11.451310   47155 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 11:15:11.451436   47155 main.go:141] libmachine: (ha-931571) Calling .GetSSHUsername
	I1104 11:15:11.451593   47155 main.go:141] libmachine: Using SSH client type: native
	I1104 11:15:11.451745   47155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I1104 11:15:11.451755   47155 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1104 11:15:11.557707   47155 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730718911.512624765
	
	I1104 11:15:11.557731   47155 fix.go:216] guest clock: 1730718911.512624765
	I1104 11:15:11.557745   47155 fix.go:229] Guest: 2024-11-04 11:15:11.512624765 +0000 UTC Remote: 2024-11-04 11:15:11.447426971 +0000 UTC m=+95.651542445 (delta=65.197794ms)
	I1104 11:15:11.557783   47155 fix.go:200] guest clock delta is within tolerance: 65.197794ms
	I1104 11:15:11.557788   47155 start.go:83] releasing machines lock for "ha-931571", held for 1m35.611129875s
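The guest-clock lines above come from a skew check: the provisioner runs date +%s.%N on the guest over SSH, subtracts it from its own timestamp, and accepts the result if the delta stays within tolerance (about 65ms in this run). A rough standalone sketch of the same comparison, assuming SSH access to the guest as the docker user (the printed delta is informational only; the threshold minikube applies is not shown here):

    #!/bin/bash
    GUEST=192.168.39.67
    guest_time=$(ssh "docker@${GUEST}" 'date +%s.%N')
    host_time=$(date +%s.%N)
    # Absolute clock delta between host and guest, in seconds.
    delta=$(echo "$host_time - $guest_time" | bc | tr -d -)
    echo "host/guest clock delta: ${delta}s"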
	I1104 11:15:11.557825   47155 main.go:141] libmachine: (ha-931571) Calling .DriverName
	I1104 11:15:11.558081   47155 main.go:141] libmachine: (ha-931571) Calling .GetIP
	I1104 11:15:11.560481   47155 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:15:11.560851   47155 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 11:15:11.560879   47155 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:15:11.560998   47155 main.go:141] libmachine: (ha-931571) Calling .DriverName
	I1104 11:15:11.561559   47155 main.go:141] libmachine: (ha-931571) Calling .DriverName
	I1104 11:15:11.561744   47155 main.go:141] libmachine: (ha-931571) Calling .DriverName
	I1104 11:15:11.561826   47155 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1104 11:15:11.561887   47155 main.go:141] libmachine: (ha-931571) Calling .GetSSHHostname
	I1104 11:15:11.561909   47155 ssh_runner.go:195] Run: cat /version.json
	I1104 11:15:11.561933   47155 main.go:141] libmachine: (ha-931571) Calling .GetSSHHostname
	I1104 11:15:11.564504   47155 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:15:11.564610   47155 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:15:11.564887   47155 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 11:15:11.564911   47155 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:15:11.565012   47155 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 11:15:11.565036   47155 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:15:11.565048   47155 main.go:141] libmachine: (ha-931571) Calling .GetSSHPort
	I1104 11:15:11.565243   47155 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 11:15:11.565250   47155 main.go:141] libmachine: (ha-931571) Calling .GetSSHPort
	I1104 11:15:11.565374   47155 main.go:141] libmachine: (ha-931571) Calling .GetSSHUsername
	I1104 11:15:11.565515   47155 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 11:15:11.565516   47155 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571/id_rsa Username:docker}
	I1104 11:15:11.565649   47155 main.go:141] libmachine: (ha-931571) Calling .GetSSHUsername
	I1104 11:15:11.565868   47155 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571/id_rsa Username:docker}
	I1104 11:15:11.641607   47155 ssh_runner.go:195] Run: systemctl --version
	I1104 11:15:11.664297   47155 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1104 11:15:11.816410   47155 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1104 11:15:11.823537   47155 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1104 11:15:11.823599   47155 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1104 11:15:11.833021   47155 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1104 11:15:11.833045   47155 start.go:495] detecting cgroup driver to use...
	I1104 11:15:11.833097   47155 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1104 11:15:11.852396   47155 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1104 11:15:11.866942   47155 docker.go:217] disabling cri-docker service (if available) ...
	I1104 11:15:11.866991   47155 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1104 11:15:11.882656   47155 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1104 11:15:11.895965   47155 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1104 11:15:12.061549   47155 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1104 11:15:12.218454   47155 docker.go:233] disabling docker service ...
	I1104 11:15:12.218530   47155 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1104 11:15:12.235348   47155 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1104 11:15:12.248986   47155 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1104 11:15:12.395105   47155 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1104 11:15:12.539842   47155 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1104 11:15:12.554055   47155 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1104 11:15:12.573091   47155 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1104 11:15:12.573140   47155 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 11:15:12.583603   47155 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1104 11:15:12.583669   47155 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 11:15:12.593940   47155 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 11:15:12.604217   47155 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 11:15:12.614393   47155 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1104 11:15:12.624942   47155 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 11:15:12.635708   47155 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 11:15:12.648677   47155 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 11:15:12.659398   47155 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1104 11:15:12.669092   47155 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1104 11:15:12.678942   47155 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1104 11:15:12.828126   47155 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1104 11:15:23.174830   47155 ssh_runner.go:235] Completed: sudo systemctl restart crio: (10.346665248s)
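The sequence ending above reconfigures CRI-O in place and restarts it: crictl is pointed at the CRI-O socket, /etc/crio/crio.conf.d/02-crio.conf gets the registry.k8s.io/pause:3.10 pause image and the cgroupfs cgroup manager, IPv4 forwarding is switched on, and the ten-second systemctl restart follows. The core of those steps condensed into one sketch, reusing the same paths and values that appear in the log (intended for a matching guest image, not a general host):

    #!/bin/bash
    # Point crictl at the CRI-O socket.
    printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml

    CONF=/etc/crio/crio.conf.d/02-crio.conf
    # Use the pause image and cgroup driver that kubeadm will also be told about.
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' "$CONF"
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"

    # Enable IPv4 forwarding and restart the runtime so the changes take effect.
    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
    sudo systemctl daemon-reload
    sudo systemctl restart crio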
	I1104 11:15:23.174858   47155 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1104 11:15:23.174913   47155 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1104 11:15:23.179808   47155 start.go:563] Will wait 60s for crictl version
	I1104 11:15:23.179876   47155 ssh_runner.go:195] Run: which crictl
	I1104 11:15:23.183697   47155 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1104 11:15:23.220857   47155 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1104 11:15:23.220921   47155 ssh_runner.go:195] Run: crio --version
	I1104 11:15:23.252313   47155 ssh_runner.go:195] Run: crio --version
	I1104 11:15:23.284817   47155 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1104 11:15:23.286259   47155 main.go:141] libmachine: (ha-931571) Calling .GetIP
	I1104 11:15:23.288997   47155 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:15:23.289329   47155 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 11:15:23.289353   47155 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:15:23.289532   47155 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1104 11:15:23.294322   47155 kubeadm.go:883] updating cluster {Name:ha-931571 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 Cl
usterName:ha-931571 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.67 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.245 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.57 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.237 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-stora
geclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1104 11:15:23.294454   47155 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1104 11:15:23.294492   47155 ssh_runner.go:195] Run: sudo crictl images --output json
	I1104 11:15:23.343349   47155 crio.go:514] all images are preloaded for cri-o runtime.
	I1104 11:15:23.343375   47155 crio.go:433] Images already preloaded, skipping extraction
	I1104 11:15:23.343434   47155 ssh_runner.go:195] Run: sudo crictl images --output json
	I1104 11:15:23.383346   47155 crio.go:514] all images are preloaded for cri-o runtime.
	I1104 11:15:23.383372   47155 cache_images.go:84] Images are preloaded, skipping loading
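The two "crictl images --output json" runs above are how the start path confirms that the CRI-O images from the preload tarball are already on the node, so both extraction and image loading are skipped. As a rough illustration only (not minikube's actual code, and the JSON field names are assumptions about crictl's output), a check of that shape could look like this in Go:

// checkpreload.go: hypothetical sketch of an "are the preloaded images present?" check,
// in the spirit of the crio.go:514 log lines above.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// crictlImages mirrors the relevant part of `crictl images --output json`
// (field names are an assumption, not taken from minikube source).
type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var imgs crictlImages
	if err := json.Unmarshal(out, &imgs); err != nil {
		panic(err)
	}
	// Images that would need to be present for the preload to be considered complete
	// (the etcd tag below is illustrative, not read from the report).
	required := map[string]bool{
		"registry.k8s.io/kube-apiserver:v1.31.2": false,
		"registry.k8s.io/etcd:3.5.15-0":          false,
	}
	for _, img := range imgs.Images {
		for _, tag := range img.RepoTags {
			if _, ok := required[tag]; ok {
				required[tag] = true
			}
		}
	}
	for tag, found := range required {
		fmt.Printf("%-45s preloaded=%v\n", tag, found)
	}
}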
	I1104 11:15:23.383384   47155 kubeadm.go:934] updating node { 192.168.39.67 8443 v1.31.2 crio true true} ...
	I1104 11:15:23.383490   47155 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-931571 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.67
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-931571 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1104 11:15:23.383576   47155 ssh_runner.go:195] Run: crio config
	I1104 11:15:23.433443   47155 cni.go:84] Creating CNI manager for ""
	I1104 11:15:23.433463   47155 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1104 11:15:23.433474   47155 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1104 11:15:23.433493   47155 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.67 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-931571 NodeName:ha-931571 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.67"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.67 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1104 11:15:23.433602   47155 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.67
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-931571"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.67"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.67"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
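The kubeadm config printed above is one multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration); a few lines further down it is copied to /var/tmp/minikube/kubeadm.yaml.new. A quick way to sanity-check such a stream outside of minikube, assuming it has been saved locally as kubeadm.yaml, is to decode each document and print its kind (illustrative only, not part of the test tooling):

// checkkubeadmyaml.go: decode a multi-document kubeadm config and list the kinds.
package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("kubeadm.yaml") // assumed local copy of kubeadm.yaml.new
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for i := 1; ; i++ {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		fmt.Printf("document %d: %s (%s)\n", i, doc.Kind, doc.APIVersion)
	}
}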
	
	I1104 11:15:23.433622   47155 kube-vip.go:115] generating kube-vip config ...
	I1104 11:15:23.433680   47155 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1104 11:15:23.445718   47155 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1104 11:15:23.445843   47155 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.5
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
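In the kube-vip manifest above, vip_leaseduration, vip_renewdeadline and vip_retryperiod (5s/3s/1s) configure standard Kubernetes Lease-based leader election on the plndr-cp-lock lease in kube-system: whichever control-plane node currently holds the lease serves the 192.168.39.254 VIP. A hypothetical client-go sketch with the same timings (illustrative, not kube-vip's implementation) is:

// vipleaderelection.go: Lease-based leader election with the timings from the
// kube-vip env vars above. Assumes access to /etc/kubernetes/admin.conf.
package main

import (
	"context"
	"log"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	hostname, _ := os.Hostname()
	lock := &resourcelock.LeaseLock{
		LeaseMeta:  metav1.ObjectMeta{Name: "plndr-cp-lock", Namespace: "kube-system"},
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: hostname},
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 5 * time.Second, // vip_leaseduration
		RenewDeadline: 3 * time.Second, // vip_renewdeadline
		RetryPeriod:   1 * time.Second, // vip_retryperiod
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				log.Println("this node now holds the lease and would answer on the VIP")
			},
			OnStoppedLeading: func() {
				log.Println("lost the lease; another control-plane node takes over the VIP")
			},
		},
	})
}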
	I1104 11:15:23.445904   47155 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1104 11:15:23.456101   47155 binaries.go:44] Found k8s binaries, skipping transfer
	I1104 11:15:23.456229   47155 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1104 11:15:23.465920   47155 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I1104 11:15:23.484546   47155 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1104 11:15:23.502574   47155 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2286 bytes)
	I1104 11:15:23.519299   47155 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1104 11:15:23.537364   47155 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1104 11:15:23.541370   47155 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1104 11:15:23.685272   47155 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1104 11:15:23.699164   47155 certs.go:68] Setting up /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571 for IP: 192.168.39.67
	I1104 11:15:23.699189   47155 certs.go:194] generating shared ca certs ...
	I1104 11:15:23.699210   47155 certs.go:226] acquiring lock for ca certs: {Name:mk4fb0469da6697654d0290fd25edddd463f47c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 11:15:23.699410   47155 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.key
	I1104 11:15:23.699461   47155 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.key
	I1104 11:15:23.699472   47155 certs.go:256] generating profile certs ...
	I1104 11:15:23.699548   47155 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/client.key
	I1104 11:15:23.699595   47155 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.key.dd846fa0
	I1104 11:15:23.699627   47155 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/proxy-client.key
	I1104 11:15:23.699638   47155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1104 11:15:23.699651   47155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1104 11:15:23.699664   47155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1104 11:15:23.699676   47155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1104 11:15:23.699688   47155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1104 11:15:23.699707   47155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1104 11:15:23.699721   47155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1104 11:15:23.699733   47155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1104 11:15:23.699797   47155 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218.pem (1338 bytes)
	W1104 11:15:23.699834   47155 certs.go:480] ignoring /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218_empty.pem, impossibly tiny 0 bytes
	I1104 11:15:23.699846   47155 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem (1675 bytes)
	I1104 11:15:23.699870   47155 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem (1078 bytes)
	I1104 11:15:23.699892   47155 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem (1123 bytes)
	I1104 11:15:23.699913   47155 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem (1679 bytes)
	I1104 11:15:23.699958   47155 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem (1708 bytes)
	I1104 11:15:23.699984   47155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1104 11:15:23.699998   47155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218.pem -> /usr/share/ca-certificates/27218.pem
	I1104 11:15:23.700009   47155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem -> /usr/share/ca-certificates/272182.pem
	I1104 11:15:23.700512   47155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1104 11:15:23.737076   47155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1104 11:15:23.760690   47155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1104 11:15:23.783795   47155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1104 11:15:23.807760   47155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I1104 11:15:23.830826   47155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1104 11:15:23.854084   47155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1104 11:15:23.877130   47155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1104 11:15:23.900357   47155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1104 11:15:23.923251   47155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218.pem --> /usr/share/ca-certificates/27218.pem (1338 bytes)
	I1104 11:15:23.948861   47155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem --> /usr/share/ca-certificates/272182.pem (1708 bytes)
	I1104 11:15:23.972158   47155 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1104 11:15:23.988716   47155 ssh_runner.go:195] Run: openssl version
	I1104 11:15:23.995042   47155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1104 11:15:24.005428   47155 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1104 11:15:24.009786   47155 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  4 10:38 /usr/share/ca-certificates/minikubeCA.pem
	I1104 11:15:24.009843   47155 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1104 11:15:24.015353   47155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1104 11:15:24.024511   47155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/27218.pem && ln -fs /usr/share/ca-certificates/27218.pem /etc/ssl/certs/27218.pem"
	I1104 11:15:24.035047   47155 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/27218.pem
	I1104 11:15:24.039305   47155 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  4 10:49 /usr/share/ca-certificates/27218.pem
	I1104 11:15:24.039370   47155 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/27218.pem
	I1104 11:15:24.044673   47155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/27218.pem /etc/ssl/certs/51391683.0"
	I1104 11:15:24.053335   47155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/272182.pem && ln -fs /usr/share/ca-certificates/272182.pem /etc/ssl/certs/272182.pem"
	I1104 11:15:24.064269   47155 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/272182.pem
	I1104 11:15:24.068439   47155 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  4 10:49 /usr/share/ca-certificates/272182.pem
	I1104 11:15:24.068486   47155 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/272182.pem
	I1104 11:15:24.073853   47155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/272182.pem /etc/ssl/certs/3ec20f2e.0"
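The three certificate blocks above follow the same pattern: link the CA into /usr/share/ca-certificates, compute its OpenSSL subject hash, then link it again as /etc/ssl/certs/<hash>.0 so OpenSSL's hashed directory lookup can find it. A small sketch of that convention, reusing the exact openssl invocation from the log (not minikube source), could be:

// catrust.go: hashed-symlink convention for trusting a CA, mirroring the
// `openssl x509 -hash -noout` plus `ln -fs` steps logged above.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	certPath := "/usr/share/ca-certificates/minikubeCA.pem"

	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. b5213941, matching b5213941.0 above

	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	_ = os.Remove(link) // replace any existing link, mirroring `ln -fs`
	if err := os.Symlink("/etc/ssl/certs/minikubeCA.pem", link); err != nil {
		panic(err)
	}
	fmt.Println("linked", link, "-> /etc/ssl/certs/minikubeCA.pem")
}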
	I1104 11:15:24.084074   47155 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1104 11:15:24.088827   47155 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1104 11:15:24.094829   47155 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1104 11:15:24.100416   47155 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1104 11:15:24.106407   47155 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1104 11:15:24.112126   47155 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1104 11:15:24.118087   47155 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
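The six "openssl x509 -checkend 86400" runs above verify that none of the existing control-plane certificates expires within the next 24 hours before they are reused. The same check can be expressed directly with Go's crypto/x509, shown here only to make the semantics of -checkend explicit (minikube itself shells out to openssl as logged):

// certcheck.go: Go equivalent of `openssl x509 -noout -in <cert> -checkend 86400`.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM data", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// True if the certificate's NotAfter falls inside the next d.
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	certs := []string{
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
		"/var/lib/minikube/certs/front-proxy-client.crt",
	}
	for _, c := range certs {
		soon, err := expiresWithin(c, 24*time.Hour) // 86400 seconds, as in the log
		if err != nil {
			fmt.Println(c, "error:", err)
			continue
		}
		fmt.Printf("%s expires within 24h: %v\n", c, soon)
	}
}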
	I1104 11:15:24.123465   47155 kubeadm.go:392] StartCluster: {Name:ha-931571 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 Clust
erName:ha-931571 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.67 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.245 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.57 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.237 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storagec
lass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000
.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1104 11:15:24.123582   47155 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1104 11:15:24.123627   47155 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1104 11:15:24.162201   47155 cri.go:89] found id: "980ffa884bf1968085bea9cf96dc67778f66830114d25dcc925b9353dd580cf0"
	I1104 11:15:24.162224   47155 cri.go:89] found id: "eebace40bf5a0e44ca473748c3ac74f32534455df431e58868b18dbd29a75bc4"
	I1104 11:15:24.162227   47155 cri.go:89] found id: "e3520e30168ef0fa6e24b3acd32cd2d4d097bddebb5882228350cb535bb460ae"
	I1104 11:15:24.162231   47155 cri.go:89] found id: "505870b9ae2df71f4b23cc11410c8e473700796ad8ea821e95ef0d6450e19c07"
	I1104 11:15:24.162233   47155 cri.go:89] found id: "4346e6f7bd5ae74e45e5a01ce9e39d75d483afe0668b46714d9f3c0ed51d039f"
	I1104 11:15:24.162276   47155 cri.go:89] found id: "85af9be253a6f51906cda8de0c24cfb3859c88d78b7562bc0a04c1a9ad88084a"
	I1104 11:15:24.162280   47155 cri.go:89] found id: "f541f4f56fff4e97bfb0486bf66e7d0b1fcb33be7981c634d6288968e0725d1d"
	I1104 11:15:24.162284   47155 cri.go:89] found id: "11e654338db96a4eec3bd67cfb6bc96567faa44e74ba8612875f6f460873f75e"
	I1104 11:15:24.162287   47155 cri.go:89] found id: "3f037008e4abae5272f0c56bf5d393effec74e71d2e8ec4f1dd2e34bc727e84a"
	I1104 11:15:24.162291   47155 cri.go:89] found id: "6a5842dbce0b9073ea1cdaf9f46490b0b17c2f8caba15eee43ef8bd7ae61031d"
	I1104 11:15:24.162295   47155 cri.go:89] found id: "b750e61badedb7752f13b56edeea37c41587c4575af18b26d813ab3901801e32"
	I1104 11:15:24.162297   47155 cri.go:89] found id: "4333666a18046abfcb00e2185256e5f4724ef5f36d6799de7e24f5c728cd786d"
	I1104 11:15:24.162300   47155 cri.go:89] found id: "d3514c60b5b4903f368258aac7561bc983b67f7e8198ca3fd59de321c7176c9f"
	I1104 11:15:24.162302   47155 cri.go:89] found id: "400aa38b5335627cc08143a9b2a5627b7fee85d555c2cc40a536ce98a76dc457"
	I1104 11:15:24.162306   47155 cri.go:89] found id: "49e75724c5eada14bded615bd1ba93f602ae8bcdf4c5ed95b646991a79dc403c"
	I1104 11:15:24.162308   47155 cri.go:89] found id: "4401315f385bf2a6e5d875c655d4d61ff68d76ccf428956355ffd500a9ce2bc0"
	I1104 11:15:24.162310   47155 cri.go:89] found id: "6e592fe17c5f71c11cf25a871efff88360e0232b8ea66e460109b68f40581ba8"
	I1104 11:15:24.162315   47155 cri.go:89] found id: "e50ab0290e7c2f22e6df511793236d28a6e0f3e1d5dbdcdd2e3447e4c26c2e6c"
	I1104 11:15:24.162318   47155 cri.go:89] found id: "f2d32daf142ba5b73e2f0491ea4ad911e8456739f191faf1cc001827eb91790c"
	I1104 11:15:24.162320   47155 cri.go:89] found id: ""
	I1104 11:15:24.162355   47155 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
ha_test.go:564: failed to start cluster. args "out/minikube-linux-amd64 start -p ha-931571 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio" : signal: killed
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-931571 -n ha-931571
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-931571 -n ha-931571: exit status 2 (13.000058616s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-931571 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-931571 logs -n 25: (2.021481191s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-931571 cp ha-931571-m03:/home/docker/cp-test.txt                              | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | ha-931571-m04:/home/docker/cp-test_ha-931571-m03_ha-931571-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-931571 ssh -n                                                                 | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | ha-931571-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-931571 ssh -n ha-931571-m04 sudo cat                                          | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | /home/docker/cp-test_ha-931571-m03_ha-931571-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-931571 cp testdata/cp-test.txt                                                | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | ha-931571-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-931571 ssh -n                                                                 | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | ha-931571-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-931571 cp ha-931571-m04:/home/docker/cp-test.txt                              | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile2369318263/001/cp-test_ha-931571-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-931571 ssh -n                                                                 | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | ha-931571-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-931571 cp ha-931571-m04:/home/docker/cp-test.txt                              | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | ha-931571:/home/docker/cp-test_ha-931571-m04_ha-931571.txt                       |           |         |         |                     |                     |
	| ssh     | ha-931571 ssh -n                                                                 | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | ha-931571-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-931571 ssh -n ha-931571 sudo cat                                              | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | /home/docker/cp-test_ha-931571-m04_ha-931571.txt                                 |           |         |         |                     |                     |
	| cp      | ha-931571 cp ha-931571-m04:/home/docker/cp-test.txt                              | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | ha-931571-m02:/home/docker/cp-test_ha-931571-m04_ha-931571-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-931571 ssh -n                                                                 | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | ha-931571-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-931571 ssh -n ha-931571-m02 sudo cat                                          | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | /home/docker/cp-test_ha-931571-m04_ha-931571-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-931571 cp ha-931571-m04:/home/docker/cp-test.txt                              | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | ha-931571-m03:/home/docker/cp-test_ha-931571-m04_ha-931571-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-931571 ssh -n                                                                 | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | ha-931571-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-931571 ssh -n ha-931571-m03 sudo cat                                          | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC | 04 Nov 24 10:56 UTC |
	|         | /home/docker/cp-test_ha-931571-m04_ha-931571-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-931571 node stop m02 -v=7                                                     | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:56 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-931571 node start m02 -v=7                                                    | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:59 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-931571 -v=7                                                           | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:59 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-931571 -v=7                                                                | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 10:59 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-931571 --wait=true -v=7                                                    | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 11:01 UTC | 04 Nov 24 11:06 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-931571                                                                | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 11:06 UTC |                     |
	| node    | ha-931571 node delete m03 -v=7                                                   | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 11:06 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-931571 stop -v=7                                                              | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 11:10 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-931571 --wait=true                                                         | ha-931571 | jenkins | v1.34.0 | 04 Nov 24 11:13 UTC |                     |
	|         | -v=7 --alsologtostderr                                                           |           |         |         |                     |                     |
	|         | --driver=kvm2                                                                    |           |         |         |                     |                     |
	|         | --container-runtime=crio                                                         |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/11/04 11:13:35
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1104 11:13:35.833415   47155 out.go:345] Setting OutFile to fd 1 ...
	I1104 11:13:35.833528   47155 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1104 11:13:35.833536   47155 out.go:358] Setting ErrFile to fd 2...
	I1104 11:13:35.833541   47155 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1104 11:13:35.833736   47155 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19906-19898/.minikube/bin
	I1104 11:13:35.834262   47155 out.go:352] Setting JSON to false
	I1104 11:13:35.835261   47155 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":6967,"bootTime":1730711849,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1104 11:13:35.835379   47155 start.go:139] virtualization: kvm guest
	I1104 11:13:35.838681   47155 out.go:177] * [ha-931571] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1104 11:13:35.840450   47155 notify.go:220] Checking for updates...
	I1104 11:13:35.840461   47155 out.go:177]   - MINIKUBE_LOCATION=19906
	I1104 11:13:35.842043   47155 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1104 11:13:35.843379   47155 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19906-19898/kubeconfig
	I1104 11:13:35.844857   47155 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19906-19898/.minikube
	I1104 11:13:35.846315   47155 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1104 11:13:35.847599   47155 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1104 11:13:35.849318   47155 config.go:182] Loaded profile config "ha-931571": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1104 11:13:35.849720   47155 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 11:13:35.849786   47155 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 11:13:35.864740   47155 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46365
	I1104 11:13:35.865262   47155 main.go:141] libmachine: () Calling .GetVersion
	I1104 11:13:35.865773   47155 main.go:141] libmachine: Using API Version  1
	I1104 11:13:35.865803   47155 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 11:13:35.866155   47155 main.go:141] libmachine: () Calling .GetMachineName
	I1104 11:13:35.866366   47155 main.go:141] libmachine: (ha-931571) Calling .DriverName
	I1104 11:13:35.866590   47155 driver.go:394] Setting default libvirt URI to qemu:///system
	I1104 11:13:35.866870   47155 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 11:13:35.866904   47155 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 11:13:35.882018   47155 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40367
	I1104 11:13:35.882534   47155 main.go:141] libmachine: () Calling .GetVersion
	I1104 11:13:35.883110   47155 main.go:141] libmachine: Using API Version  1
	I1104 11:13:35.883130   47155 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 11:13:35.883429   47155 main.go:141] libmachine: () Calling .GetMachineName
	I1104 11:13:35.883609   47155 main.go:141] libmachine: (ha-931571) Calling .DriverName
	I1104 11:13:35.922883   47155 out.go:177] * Using the kvm2 driver based on existing profile
	I1104 11:13:35.924737   47155 start.go:297] selected driver: kvm2
	I1104 11:13:35.924750   47155 start.go:901] validating driver "kvm2" against &{Name:ha-931571 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.31.2 ClusterName:ha-931571 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.67 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.245 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.57 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.237 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false de
fault-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9P
Version:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1104 11:13:35.924898   47155 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1104 11:13:35.925291   47155 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1104 11:13:35.925368   47155 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19906-19898/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1104 11:13:35.940642   47155 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1104 11:13:35.941782   47155 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1104 11:13:35.941821   47155 cni.go:84] Creating CNI manager for ""
	I1104 11:13:35.941856   47155 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1104 11:13:35.941946   47155 start.go:340] cluster config:
	{Name:ha-931571 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-931571 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.67 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.245 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.57 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.237 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:fa
lse headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[
] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1104 11:13:35.942135   47155 iso.go:125] acquiring lock: {Name:mk00c7dd6e02d348844f079f7574057e15cae010 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1104 11:13:35.944801   47155 out.go:177] * Starting "ha-931571" primary control-plane node in "ha-931571" cluster
	I1104 11:13:35.946212   47155 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1104 11:13:35.946248   47155 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1104 11:13:35.946258   47155 cache.go:56] Caching tarball of preloaded images
	I1104 11:13:35.946326   47155 preload.go:172] Found /home/jenkins/minikube-integration/19906-19898/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1104 11:13:35.946336   47155 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1104 11:13:35.946433   47155 profile.go:143] Saving config to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/config.json ...
	I1104 11:13:35.946612   47155 start.go:360] acquireMachinesLock for ha-931571: {Name:mka4ed965095cfb58c9b0f5916edff39b4236a2f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1104 11:13:35.946650   47155 start.go:364] duration metric: took 21.616µs to acquireMachinesLock for "ha-931571"
	I1104 11:13:35.946663   47155 start.go:96] Skipping create...Using existing machine configuration
	I1104 11:13:35.946671   47155 fix.go:54] fixHost starting: 
	I1104 11:13:35.946903   47155 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 11:13:35.946933   47155 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 11:13:35.962455   47155 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40029
	I1104 11:13:35.962837   47155 main.go:141] libmachine: () Calling .GetVersion
	I1104 11:13:35.963304   47155 main.go:141] libmachine: Using API Version  1
	I1104 11:13:35.963328   47155 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 11:13:35.963646   47155 main.go:141] libmachine: () Calling .GetMachineName
	I1104 11:13:35.963825   47155 main.go:141] libmachine: (ha-931571) Calling .DriverName
	I1104 11:13:35.963930   47155 main.go:141] libmachine: (ha-931571) Calling .GetState
	I1104 11:13:35.965689   47155 fix.go:112] recreateIfNeeded on ha-931571: state=Running err=<nil>
	W1104 11:13:35.965706   47155 fix.go:138] unexpected machine state, will restart: <nil>
	I1104 11:13:35.968357   47155 out.go:177] * Updating the running kvm2 "ha-931571" VM ...
	I1104 11:13:35.969541   47155 machine.go:93] provisionDockerMachine start ...
	I1104 11:13:35.969561   47155 main.go:141] libmachine: (ha-931571) Calling .DriverName
	I1104 11:13:35.969763   47155 main.go:141] libmachine: (ha-931571) Calling .GetSSHHostname
	I1104 11:13:35.972166   47155 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:13:35.972610   47155 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 11:13:35.972639   47155 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:13:35.972777   47155 main.go:141] libmachine: (ha-931571) Calling .GetSSHPort
	I1104 11:13:35.972932   47155 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 11:13:35.973074   47155 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 11:13:35.973203   47155 main.go:141] libmachine: (ha-931571) Calling .GetSSHUsername
	I1104 11:13:35.973385   47155 main.go:141] libmachine: Using SSH client type: native
	I1104 11:13:35.973579   47155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I1104 11:13:35.973590   47155 main.go:141] libmachine: About to run SSH command:
	hostname
	I1104 11:13:36.086208   47155 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-931571
	
	I1104 11:13:36.086239   47155 main.go:141] libmachine: (ha-931571) Calling .GetMachineName
	I1104 11:13:36.086483   47155 buildroot.go:166] provisioning hostname "ha-931571"
	I1104 11:13:36.086503   47155 main.go:141] libmachine: (ha-931571) Calling .GetMachineName
	I1104 11:13:36.086693   47155 main.go:141] libmachine: (ha-931571) Calling .GetSSHHostname
	I1104 11:13:36.089373   47155 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:13:36.089784   47155 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 11:13:36.089810   47155 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:13:36.090068   47155 main.go:141] libmachine: (ha-931571) Calling .GetSSHPort
	I1104 11:13:36.090243   47155 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 11:13:36.090495   47155 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 11:13:36.090654   47155 main.go:141] libmachine: (ha-931571) Calling .GetSSHUsername
	I1104 11:13:36.090812   47155 main.go:141] libmachine: Using SSH client type: native
	I1104 11:13:36.090965   47155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I1104 11:13:36.090980   47155 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-931571 && echo "ha-931571" | sudo tee /etc/hostname
	I1104 11:13:36.212126   47155 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-931571
	
	I1104 11:13:36.212165   47155 main.go:141] libmachine: (ha-931571) Calling .GetSSHHostname
	I1104 11:13:36.215087   47155 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:13:36.215461   47155 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 11:13:36.215488   47155 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:13:36.215679   47155 main.go:141] libmachine: (ha-931571) Calling .GetSSHPort
	I1104 11:13:36.215853   47155 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 11:13:36.216022   47155 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 11:13:36.216178   47155 main.go:141] libmachine: (ha-931571) Calling .GetSSHUsername
	I1104 11:13:36.216353   47155 main.go:141] libmachine: Using SSH client type: native
	I1104 11:13:36.216552   47155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I1104 11:13:36.216571   47155 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-931571' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-931571/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-931571' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1104 11:13:36.322195   47155 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1104 11:13:36.322220   47155 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19906-19898/.minikube CaCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19906-19898/.minikube}
	I1104 11:13:36.322240   47155 buildroot.go:174] setting up certificates
	I1104 11:13:36.322248   47155 provision.go:84] configureAuth start
	I1104 11:13:36.322255   47155 main.go:141] libmachine: (ha-931571) Calling .GetMachineName
	I1104 11:13:36.322519   47155 main.go:141] libmachine: (ha-931571) Calling .GetIP
	I1104 11:13:36.324706   47155 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:13:36.324996   47155 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 11:13:36.325029   47155 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:13:36.325182   47155 main.go:141] libmachine: (ha-931571) Calling .GetSSHHostname
	I1104 11:13:36.327094   47155 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:13:36.327427   47155 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 11:13:36.327463   47155 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:13:36.327609   47155 provision.go:143] copyHostCerts
	I1104 11:13:36.327648   47155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem
	I1104 11:13:36.327694   47155 exec_runner.go:144] found /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem, removing ...
	I1104 11:13:36.327707   47155 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem
	I1104 11:13:36.327791   47155 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem (1078 bytes)
	I1104 11:13:36.327908   47155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem
	I1104 11:13:36.327935   47155 exec_runner.go:144] found /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem, removing ...
	I1104 11:13:36.327944   47155 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem
	I1104 11:13:36.327981   47155 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem (1123 bytes)
	I1104 11:13:36.328044   47155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem
	I1104 11:13:36.328068   47155 exec_runner.go:144] found /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem, removing ...
	I1104 11:13:36.328079   47155 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem
	I1104 11:13:36.328115   47155 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem (1679 bytes)
	I1104 11:13:36.328179   47155 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem org=jenkins.ha-931571 san=[127.0.0.1 192.168.39.67 ha-931571 localhost minikube]
	I1104 11:13:36.585358   47155 provision.go:177] copyRemoteCerts
	I1104 11:13:36.585424   47155 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1104 11:13:36.585452   47155 main.go:141] libmachine: (ha-931571) Calling .GetSSHHostname
	I1104 11:13:36.588494   47155 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:13:36.588871   47155 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 11:13:36.588893   47155 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:13:36.589067   47155 main.go:141] libmachine: (ha-931571) Calling .GetSSHPort
	I1104 11:13:36.589270   47155 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 11:13:36.589418   47155 main.go:141] libmachine: (ha-931571) Calling .GetSSHUsername
	I1104 11:13:36.589528   47155 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571/id_rsa Username:docker}
	I1104 11:13:36.671933   47155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1104 11:13:36.672013   47155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1104 11:13:36.695410   47155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1104 11:13:36.695500   47155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1104 11:13:36.721157   47155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1104 11:13:36.721218   47155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1104 11:13:36.744359   47155 provision.go:87] duration metric: took 422.101487ms to configureAuth
	I1104 11:13:36.744385   47155 buildroot.go:189] setting minikube options for container-runtime
	I1104 11:13:36.744588   47155 config.go:182] Loaded profile config "ha-931571": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1104 11:13:36.744649   47155 main.go:141] libmachine: (ha-931571) Calling .GetSSHHostname
	I1104 11:13:36.747350   47155 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:13:36.747754   47155 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 11:13:36.747780   47155 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:13:36.748027   47155 main.go:141] libmachine: (ha-931571) Calling .GetSSHPort
	I1104 11:13:36.748231   47155 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 11:13:36.748381   47155 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 11:13:36.748564   47155 main.go:141] libmachine: (ha-931571) Calling .GetSSHUsername
	I1104 11:13:36.748718   47155 main.go:141] libmachine: Using SSH client type: native
	I1104 11:13:36.748871   47155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I1104 11:13:36.748886   47155 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1104 11:15:11.237727   47155 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1104 11:15:11.237754   47155 machine.go:96] duration metric: took 1m35.268199493s to provisionDockerMachine
	I1104 11:15:11.237771   47155 start.go:293] postStartSetup for "ha-931571" (driver="kvm2")
	I1104 11:15:11.237785   47155 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1104 11:15:11.237805   47155 main.go:141] libmachine: (ha-931571) Calling .DriverName
	I1104 11:15:11.238085   47155 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1104 11:15:11.238112   47155 main.go:141] libmachine: (ha-931571) Calling .GetSSHHostname
	I1104 11:15:11.241258   47155 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:15:11.241697   47155 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 11:15:11.241732   47155 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:15:11.241888   47155 main.go:141] libmachine: (ha-931571) Calling .GetSSHPort
	I1104 11:15:11.242062   47155 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 11:15:11.242182   47155 main.go:141] libmachine: (ha-931571) Calling .GetSSHUsername
	I1104 11:15:11.242331   47155 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571/id_rsa Username:docker}
	I1104 11:15:11.323226   47155 ssh_runner.go:195] Run: cat /etc/os-release
	I1104 11:15:11.327204   47155 info.go:137] Remote host: Buildroot 2023.02.9
	I1104 11:15:11.327224   47155 filesync.go:126] Scanning /home/jenkins/minikube-integration/19906-19898/.minikube/addons for local assets ...
	I1104 11:15:11.327279   47155 filesync.go:126] Scanning /home/jenkins/minikube-integration/19906-19898/.minikube/files for local assets ...
	I1104 11:15:11.327369   47155 filesync.go:149] local asset: /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem -> 272182.pem in /etc/ssl/certs
	I1104 11:15:11.327380   47155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem -> /etc/ssl/certs/272182.pem
	I1104 11:15:11.327470   47155 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1104 11:15:11.337070   47155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem --> /etc/ssl/certs/272182.pem (1708 bytes)
	I1104 11:15:11.360955   47155 start.go:296] duration metric: took 123.170374ms for postStartSetup
	I1104 11:15:11.361006   47155 main.go:141] libmachine: (ha-931571) Calling .DriverName
	I1104 11:15:11.361320   47155 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1104 11:15:11.361354   47155 main.go:141] libmachine: (ha-931571) Calling .GetSSHHostname
	I1104 11:15:11.364238   47155 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:15:11.364593   47155 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 11:15:11.364627   47155 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:15:11.364774   47155 main.go:141] libmachine: (ha-931571) Calling .GetSSHPort
	I1104 11:15:11.364944   47155 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 11:15:11.365070   47155 main.go:141] libmachine: (ha-931571) Calling .GetSSHUsername
	I1104 11:15:11.365172   47155 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571/id_rsa Username:docker}
	W1104 11:15:11.447399   47155 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I1104 11:15:11.447421   47155 fix.go:56] duration metric: took 1m35.500750552s for fixHost
	I1104 11:15:11.447441   47155 main.go:141] libmachine: (ha-931571) Calling .GetSSHHostname
	I1104 11:15:11.450343   47155 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:15:11.450768   47155 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 11:15:11.450794   47155 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:15:11.450960   47155 main.go:141] libmachine: (ha-931571) Calling .GetSSHPort
	I1104 11:15:11.451163   47155 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 11:15:11.451310   47155 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 11:15:11.451436   47155 main.go:141] libmachine: (ha-931571) Calling .GetSSHUsername
	I1104 11:15:11.451593   47155 main.go:141] libmachine: Using SSH client type: native
	I1104 11:15:11.451745   47155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I1104 11:15:11.451755   47155 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1104 11:15:11.557707   47155 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730718911.512624765
	
	I1104 11:15:11.557731   47155 fix.go:216] guest clock: 1730718911.512624765
	I1104 11:15:11.557745   47155 fix.go:229] Guest: 2024-11-04 11:15:11.512624765 +0000 UTC Remote: 2024-11-04 11:15:11.447426971 +0000 UTC m=+95.651542445 (delta=65.197794ms)
	I1104 11:15:11.557783   47155 fix.go:200] guest clock delta is within tolerance: 65.197794ms
	I1104 11:15:11.557788   47155 start.go:83] releasing machines lock for "ha-931571", held for 1m35.611129875s
	I1104 11:15:11.557825   47155 main.go:141] libmachine: (ha-931571) Calling .DriverName
	I1104 11:15:11.558081   47155 main.go:141] libmachine: (ha-931571) Calling .GetIP
	I1104 11:15:11.560481   47155 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:15:11.560851   47155 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 11:15:11.560879   47155 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:15:11.560998   47155 main.go:141] libmachine: (ha-931571) Calling .DriverName
	I1104 11:15:11.561559   47155 main.go:141] libmachine: (ha-931571) Calling .DriverName
	I1104 11:15:11.561744   47155 main.go:141] libmachine: (ha-931571) Calling .DriverName
	I1104 11:15:11.561826   47155 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1104 11:15:11.561887   47155 main.go:141] libmachine: (ha-931571) Calling .GetSSHHostname
	I1104 11:15:11.561909   47155 ssh_runner.go:195] Run: cat /version.json
	I1104 11:15:11.561933   47155 main.go:141] libmachine: (ha-931571) Calling .GetSSHHostname
	I1104 11:15:11.564504   47155 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:15:11.564610   47155 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:15:11.564887   47155 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 11:15:11.564911   47155 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:15:11.565012   47155 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 11:15:11.565036   47155 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:15:11.565048   47155 main.go:141] libmachine: (ha-931571) Calling .GetSSHPort
	I1104 11:15:11.565243   47155 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 11:15:11.565250   47155 main.go:141] libmachine: (ha-931571) Calling .GetSSHPort
	I1104 11:15:11.565374   47155 main.go:141] libmachine: (ha-931571) Calling .GetSSHUsername
	I1104 11:15:11.565515   47155 main.go:141] libmachine: (ha-931571) Calling .GetSSHKeyPath
	I1104 11:15:11.565516   47155 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571/id_rsa Username:docker}
	I1104 11:15:11.565649   47155 main.go:141] libmachine: (ha-931571) Calling .GetSSHUsername
	I1104 11:15:11.565868   47155 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/ha-931571/id_rsa Username:docker}
	I1104 11:15:11.641607   47155 ssh_runner.go:195] Run: systemctl --version
	I1104 11:15:11.664297   47155 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1104 11:15:11.816410   47155 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1104 11:15:11.823537   47155 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1104 11:15:11.823599   47155 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1104 11:15:11.833021   47155 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1104 11:15:11.833045   47155 start.go:495] detecting cgroup driver to use...
	I1104 11:15:11.833097   47155 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1104 11:15:11.852396   47155 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1104 11:15:11.866942   47155 docker.go:217] disabling cri-docker service (if available) ...
	I1104 11:15:11.866991   47155 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1104 11:15:11.882656   47155 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1104 11:15:11.895965   47155 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1104 11:15:12.061549   47155 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1104 11:15:12.218454   47155 docker.go:233] disabling docker service ...
	I1104 11:15:12.218530   47155 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1104 11:15:12.235348   47155 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1104 11:15:12.248986   47155 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1104 11:15:12.395105   47155 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1104 11:15:12.539842   47155 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1104 11:15:12.554055   47155 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1104 11:15:12.573091   47155 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1104 11:15:12.573140   47155 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 11:15:12.583603   47155 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1104 11:15:12.583669   47155 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 11:15:12.593940   47155 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 11:15:12.604217   47155 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 11:15:12.614393   47155 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1104 11:15:12.624942   47155 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 11:15:12.635708   47155 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 11:15:12.648677   47155 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 11:15:12.659398   47155 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1104 11:15:12.669092   47155 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1104 11:15:12.678942   47155 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1104 11:15:12.828126   47155 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1104 11:15:23.174830   47155 ssh_runner.go:235] Completed: sudo systemctl restart crio: (10.346665248s)
	I1104 11:15:23.174858   47155 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1104 11:15:23.174913   47155 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1104 11:15:23.179808   47155 start.go:563] Will wait 60s for crictl version
	I1104 11:15:23.179876   47155 ssh_runner.go:195] Run: which crictl
	I1104 11:15:23.183697   47155 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1104 11:15:23.220857   47155 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1104 11:15:23.220921   47155 ssh_runner.go:195] Run: crio --version
	I1104 11:15:23.252313   47155 ssh_runner.go:195] Run: crio --version
	I1104 11:15:23.284817   47155 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1104 11:15:23.286259   47155 main.go:141] libmachine: (ha-931571) Calling .GetIP
	I1104 11:15:23.288997   47155 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:15:23.289329   47155 main.go:141] libmachine: (ha-931571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:cb:16", ip: ""} in network mk-ha-931571: {Iface:virbr1 ExpiryTime:2024-11-04 11:52:35 +0000 UTC Type:0 Mac:52:54:00:2c:cb:16 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-931571 Clientid:01:52:54:00:2c:cb:16}
	I1104 11:15:23.289353   47155 main.go:141] libmachine: (ha-931571) DBG | domain ha-931571 has defined IP address 192.168.39.67 and MAC address 52:54:00:2c:cb:16 in network mk-ha-931571
	I1104 11:15:23.289532   47155 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1104 11:15:23.294322   47155 kubeadm.go:883] updating cluster {Name:ha-931571 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 Cl
usterName:ha-931571 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.67 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.245 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.57 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.237 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-stora
geclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1104 11:15:23.294454   47155 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1104 11:15:23.294492   47155 ssh_runner.go:195] Run: sudo crictl images --output json
	I1104 11:15:23.343349   47155 crio.go:514] all images are preloaded for cri-o runtime.
	I1104 11:15:23.343375   47155 crio.go:433] Images already preloaded, skipping extraction
	I1104 11:15:23.343434   47155 ssh_runner.go:195] Run: sudo crictl images --output json
	I1104 11:15:23.383346   47155 crio.go:514] all images are preloaded for cri-o runtime.
	I1104 11:15:23.383372   47155 cache_images.go:84] Images are preloaded, skipping loading
	I1104 11:15:23.383384   47155 kubeadm.go:934] updating node { 192.168.39.67 8443 v1.31.2 crio true true} ...
	I1104 11:15:23.383490   47155 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-931571 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.67
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-931571 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1104 11:15:23.383576   47155 ssh_runner.go:195] Run: crio config
	I1104 11:15:23.433443   47155 cni.go:84] Creating CNI manager for ""
	I1104 11:15:23.433463   47155 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1104 11:15:23.433474   47155 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1104 11:15:23.433493   47155 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.67 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-931571 NodeName:ha-931571 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.67"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.67 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1104 11:15:23.433602   47155 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.67
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-931571"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.67"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.67"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1104 11:15:23.433622   47155 kube-vip.go:115] generating kube-vip config ...
	I1104 11:15:23.433680   47155 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1104 11:15:23.445718   47155 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1104 11:15:23.445843   47155 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.5
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1104 11:15:23.445904   47155 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1104 11:15:23.456101   47155 binaries.go:44] Found k8s binaries, skipping transfer
	I1104 11:15:23.456229   47155 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1104 11:15:23.465920   47155 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I1104 11:15:23.484546   47155 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1104 11:15:23.502574   47155 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2286 bytes)
	I1104 11:15:23.519299   47155 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1104 11:15:23.537364   47155 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1104 11:15:23.541370   47155 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1104 11:15:23.685272   47155 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1104 11:15:23.699164   47155 certs.go:68] Setting up /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571 for IP: 192.168.39.67
	I1104 11:15:23.699189   47155 certs.go:194] generating shared ca certs ...
	I1104 11:15:23.699210   47155 certs.go:226] acquiring lock for ca certs: {Name:mk4fb0469da6697654d0290fd25edddd463f47c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 11:15:23.699410   47155 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.key
	I1104 11:15:23.699461   47155 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.key
	I1104 11:15:23.699472   47155 certs.go:256] generating profile certs ...
	I1104 11:15:23.699548   47155 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/client.key
	I1104 11:15:23.699595   47155 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.key.dd846fa0
	I1104 11:15:23.699627   47155 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/proxy-client.key
	I1104 11:15:23.699638   47155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1104 11:15:23.699651   47155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1104 11:15:23.699664   47155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1104 11:15:23.699676   47155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1104 11:15:23.699688   47155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1104 11:15:23.699707   47155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1104 11:15:23.699721   47155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1104 11:15:23.699733   47155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1104 11:15:23.699797   47155 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218.pem (1338 bytes)
	W1104 11:15:23.699834   47155 certs.go:480] ignoring /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218_empty.pem, impossibly tiny 0 bytes
	I1104 11:15:23.699846   47155 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem (1675 bytes)
	I1104 11:15:23.699870   47155 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem (1078 bytes)
	I1104 11:15:23.699892   47155 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem (1123 bytes)
	I1104 11:15:23.699913   47155 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem (1679 bytes)
	I1104 11:15:23.699958   47155 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem (1708 bytes)
	I1104 11:15:23.699984   47155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1104 11:15:23.699998   47155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218.pem -> /usr/share/ca-certificates/27218.pem
	I1104 11:15:23.700009   47155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem -> /usr/share/ca-certificates/272182.pem
	I1104 11:15:23.700512   47155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1104 11:15:23.737076   47155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1104 11:15:23.760690   47155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1104 11:15:23.783795   47155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1104 11:15:23.807760   47155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I1104 11:15:23.830826   47155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1104 11:15:23.854084   47155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1104 11:15:23.877130   47155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/ha-931571/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1104 11:15:23.900357   47155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1104 11:15:23.923251   47155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218.pem --> /usr/share/ca-certificates/27218.pem (1338 bytes)
	I1104 11:15:23.948861   47155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem --> /usr/share/ca-certificates/272182.pem (1708 bytes)
	I1104 11:15:23.972158   47155 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1104 11:15:23.988716   47155 ssh_runner.go:195] Run: openssl version
	I1104 11:15:23.995042   47155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1104 11:15:24.005428   47155 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1104 11:15:24.009786   47155 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  4 10:38 /usr/share/ca-certificates/minikubeCA.pem
	I1104 11:15:24.009843   47155 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1104 11:15:24.015353   47155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1104 11:15:24.024511   47155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/27218.pem && ln -fs /usr/share/ca-certificates/27218.pem /etc/ssl/certs/27218.pem"
	I1104 11:15:24.035047   47155 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/27218.pem
	I1104 11:15:24.039305   47155 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  4 10:49 /usr/share/ca-certificates/27218.pem
	I1104 11:15:24.039370   47155 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/27218.pem
	I1104 11:15:24.044673   47155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/27218.pem /etc/ssl/certs/51391683.0"
	I1104 11:15:24.053335   47155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/272182.pem && ln -fs /usr/share/ca-certificates/272182.pem /etc/ssl/certs/272182.pem"
	I1104 11:15:24.064269   47155 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/272182.pem
	I1104 11:15:24.068439   47155 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  4 10:49 /usr/share/ca-certificates/272182.pem
	I1104 11:15:24.068486   47155 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/272182.pem
	I1104 11:15:24.073853   47155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/272182.pem /etc/ssl/certs/3ec20f2e.0"
	I1104 11:15:24.084074   47155 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1104 11:15:24.088827   47155 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1104 11:15:24.094829   47155 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1104 11:15:24.100416   47155 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1104 11:15:24.106407   47155 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1104 11:15:24.112126   47155 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1104 11:15:24.118087   47155 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1104 11:15:24.123465   47155 kubeadm.go:392] StartCluster: {Name:ha-931571 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 Clust
erName:ha-931571 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.67 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.245 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.57 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.237 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storagec
lass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000
.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1104 11:15:24.123582   47155 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1104 11:15:24.123627   47155 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1104 11:15:24.162201   47155 cri.go:89] found id: "980ffa884bf1968085bea9cf96dc67778f66830114d25dcc925b9353dd580cf0"
	I1104 11:15:24.162224   47155 cri.go:89] found id: "eebace40bf5a0e44ca473748c3ac74f32534455df431e58868b18dbd29a75bc4"
	I1104 11:15:24.162227   47155 cri.go:89] found id: "e3520e30168ef0fa6e24b3acd32cd2d4d097bddebb5882228350cb535bb460ae"
	I1104 11:15:24.162231   47155 cri.go:89] found id: "505870b9ae2df71f4b23cc11410c8e473700796ad8ea821e95ef0d6450e19c07"
	I1104 11:15:24.162233   47155 cri.go:89] found id: "4346e6f7bd5ae74e45e5a01ce9e39d75d483afe0668b46714d9f3c0ed51d039f"
	I1104 11:15:24.162276   47155 cri.go:89] found id: "85af9be253a6f51906cda8de0c24cfb3859c88d78b7562bc0a04c1a9ad88084a"
	I1104 11:15:24.162280   47155 cri.go:89] found id: "f541f4f56fff4e97bfb0486bf66e7d0b1fcb33be7981c634d6288968e0725d1d"
	I1104 11:15:24.162284   47155 cri.go:89] found id: "11e654338db96a4eec3bd67cfb6bc96567faa44e74ba8612875f6f460873f75e"
	I1104 11:15:24.162287   47155 cri.go:89] found id: "3f037008e4abae5272f0c56bf5d393effec74e71d2e8ec4f1dd2e34bc727e84a"
	I1104 11:15:24.162291   47155 cri.go:89] found id: "6a5842dbce0b9073ea1cdaf9f46490b0b17c2f8caba15eee43ef8bd7ae61031d"
	I1104 11:15:24.162295   47155 cri.go:89] found id: "b750e61badedb7752f13b56edeea37c41587c4575af18b26d813ab3901801e32"
	I1104 11:15:24.162297   47155 cri.go:89] found id: "4333666a18046abfcb00e2185256e5f4724ef5f36d6799de7e24f5c728cd786d"
	I1104 11:15:24.162300   47155 cri.go:89] found id: "d3514c60b5b4903f368258aac7561bc983b67f7e8198ca3fd59de321c7176c9f"
	I1104 11:15:24.162302   47155 cri.go:89] found id: "400aa38b5335627cc08143a9b2a5627b7fee85d555c2cc40a536ce98a76dc457"
	I1104 11:15:24.162306   47155 cri.go:89] found id: "49e75724c5eada14bded615bd1ba93f602ae8bcdf4c5ed95b646991a79dc403c"
	I1104 11:15:24.162308   47155 cri.go:89] found id: "4401315f385bf2a6e5d875c655d4d61ff68d76ccf428956355ffd500a9ce2bc0"
	I1104 11:15:24.162310   47155 cri.go:89] found id: "6e592fe17c5f71c11cf25a871efff88360e0232b8ea66e460109b68f40581ba8"
	I1104 11:15:24.162315   47155 cri.go:89] found id: "e50ab0290e7c2f22e6df511793236d28a6e0f3e1d5dbdcdd2e3447e4c26c2e6c"
	I1104 11:15:24.162318   47155 cri.go:89] found id: "f2d32daf142ba5b73e2f0491ea4ad911e8456739f191faf1cc001827eb91790c"
	I1104 11:15:24.162320   47155 cri.go:89] found id: ""
	I1104 11:15:24.162355   47155 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-931571 -n ha-931571
E1104 11:22:50.476277   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/addons-746456/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-931571 -n ha-931571: exit status 2 (14.783541074s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "ha-931571" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartCluster (555.37s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (318.6s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-453447
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-453447
E1104 11:31:33.167258   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/functional-762465/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-453447: exit status 82 (2m1.789109321s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-453447-m03"  ...
	* Stopping node "multinode-453447-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-453447" : exit status 82
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-453447 --wait=true -v=8 --alsologtostderr
E1104 11:34:47.409791   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/addons-746456/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-453447 --wait=true -v=8 --alsologtostderr: (3m14.240347693s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-453447
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-453447 -n multinode-453447
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-453447 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-453447 logs -n 25: (1.877513717s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-453447 ssh -n                                                                 | multinode-453447 | jenkins | v1.34.0 | 04 Nov 24 11:30 UTC | 04 Nov 24 11:30 UTC |
	|         | multinode-453447-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-453447 cp multinode-453447-m02:/home/docker/cp-test.txt                       | multinode-453447 | jenkins | v1.34.0 | 04 Nov 24 11:30 UTC | 04 Nov 24 11:30 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile1426244323/001/cp-test_multinode-453447-m02.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-453447 ssh -n                                                                 | multinode-453447 | jenkins | v1.34.0 | 04 Nov 24 11:30 UTC | 04 Nov 24 11:30 UTC |
	|         | multinode-453447-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-453447 cp multinode-453447-m02:/home/docker/cp-test.txt                       | multinode-453447 | jenkins | v1.34.0 | 04 Nov 24 11:30 UTC | 04 Nov 24 11:30 UTC |
	|         | multinode-453447:/home/docker/cp-test_multinode-453447-m02_multinode-453447.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-453447 ssh -n                                                                 | multinode-453447 | jenkins | v1.34.0 | 04 Nov 24 11:30 UTC | 04 Nov 24 11:30 UTC |
	|         | multinode-453447-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-453447 ssh -n multinode-453447 sudo cat                                       | multinode-453447 | jenkins | v1.34.0 | 04 Nov 24 11:30 UTC | 04 Nov 24 11:30 UTC |
	|         | /home/docker/cp-test_multinode-453447-m02_multinode-453447.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-453447 cp multinode-453447-m02:/home/docker/cp-test.txt                       | multinode-453447 | jenkins | v1.34.0 | 04 Nov 24 11:30 UTC | 04 Nov 24 11:30 UTC |
	|         | multinode-453447-m03:/home/docker/cp-test_multinode-453447-m02_multinode-453447-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-453447 ssh -n                                                                 | multinode-453447 | jenkins | v1.34.0 | 04 Nov 24 11:30 UTC | 04 Nov 24 11:30 UTC |
	|         | multinode-453447-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-453447 ssh -n multinode-453447-m03 sudo cat                                   | multinode-453447 | jenkins | v1.34.0 | 04 Nov 24 11:30 UTC | 04 Nov 24 11:30 UTC |
	|         | /home/docker/cp-test_multinode-453447-m02_multinode-453447-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-453447 cp testdata/cp-test.txt                                                | multinode-453447 | jenkins | v1.34.0 | 04 Nov 24 11:30 UTC | 04 Nov 24 11:30 UTC |
	|         | multinode-453447-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-453447 ssh -n                                                                 | multinode-453447 | jenkins | v1.34.0 | 04 Nov 24 11:30 UTC | 04 Nov 24 11:30 UTC |
	|         | multinode-453447-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-453447 cp multinode-453447-m03:/home/docker/cp-test.txt                       | multinode-453447 | jenkins | v1.34.0 | 04 Nov 24 11:30 UTC | 04 Nov 24 11:30 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile1426244323/001/cp-test_multinode-453447-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-453447 ssh -n                                                                 | multinode-453447 | jenkins | v1.34.0 | 04 Nov 24 11:30 UTC | 04 Nov 24 11:30 UTC |
	|         | multinode-453447-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-453447 cp multinode-453447-m03:/home/docker/cp-test.txt                       | multinode-453447 | jenkins | v1.34.0 | 04 Nov 24 11:30 UTC | 04 Nov 24 11:30 UTC |
	|         | multinode-453447:/home/docker/cp-test_multinode-453447-m03_multinode-453447.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-453447 ssh -n                                                                 | multinode-453447 | jenkins | v1.34.0 | 04 Nov 24 11:30 UTC | 04 Nov 24 11:30 UTC |
	|         | multinode-453447-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-453447 ssh -n multinode-453447 sudo cat                                       | multinode-453447 | jenkins | v1.34.0 | 04 Nov 24 11:30 UTC | 04 Nov 24 11:30 UTC |
	|         | /home/docker/cp-test_multinode-453447-m03_multinode-453447.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-453447 cp multinode-453447-m03:/home/docker/cp-test.txt                       | multinode-453447 | jenkins | v1.34.0 | 04 Nov 24 11:30 UTC | 04 Nov 24 11:30 UTC |
	|         | multinode-453447-m02:/home/docker/cp-test_multinode-453447-m03_multinode-453447-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-453447 ssh -n                                                                 | multinode-453447 | jenkins | v1.34.0 | 04 Nov 24 11:30 UTC | 04 Nov 24 11:30 UTC |
	|         | multinode-453447-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-453447 ssh -n multinode-453447-m02 sudo cat                                   | multinode-453447 | jenkins | v1.34.0 | 04 Nov 24 11:30 UTC | 04 Nov 24 11:30 UTC |
	|         | /home/docker/cp-test_multinode-453447-m03_multinode-453447-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-453447 node stop m03                                                          | multinode-453447 | jenkins | v1.34.0 | 04 Nov 24 11:30 UTC | 04 Nov 24 11:30 UTC |
	| node    | multinode-453447 node start                                                             | multinode-453447 | jenkins | v1.34.0 | 04 Nov 24 11:30 UTC | 04 Nov 24 11:30 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-453447                                                                | multinode-453447 | jenkins | v1.34.0 | 04 Nov 24 11:30 UTC |                     |
	| stop    | -p multinode-453447                                                                     | multinode-453447 | jenkins | v1.34.0 | 04 Nov 24 11:30 UTC |                     |
	| start   | -p multinode-453447                                                                     | multinode-453447 | jenkins | v1.34.0 | 04 Nov 24 11:33 UTC | 04 Nov 24 11:36 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-453447                                                                | multinode-453447 | jenkins | v1.34.0 | 04 Nov 24 11:36 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/11/04 11:33:00
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1104 11:33:00.307427   56868 out.go:345] Setting OutFile to fd 1 ...
	I1104 11:33:00.307528   56868 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1104 11:33:00.307534   56868 out.go:358] Setting ErrFile to fd 2...
	I1104 11:33:00.307538   56868 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1104 11:33:00.307743   56868 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19906-19898/.minikube/bin
	I1104 11:33:00.308288   56868 out.go:352] Setting JSON to false
	I1104 11:33:00.309209   56868 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":8131,"bootTime":1730711849,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1104 11:33:00.309338   56868 start.go:139] virtualization: kvm guest
	I1104 11:33:00.311859   56868 out.go:177] * [multinode-453447] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1104 11:33:00.313305   56868 notify.go:220] Checking for updates...
	I1104 11:33:00.313344   56868 out.go:177]   - MINIKUBE_LOCATION=19906
	I1104 11:33:00.314743   56868 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1104 11:33:00.316041   56868 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19906-19898/kubeconfig
	I1104 11:33:00.317218   56868 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19906-19898/.minikube
	I1104 11:33:00.318346   56868 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1104 11:33:00.319939   56868 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1104 11:33:00.321677   56868 config.go:182] Loaded profile config "multinode-453447": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1104 11:33:00.321803   56868 driver.go:394] Setting default libvirt URI to qemu:///system
	I1104 11:33:00.322287   56868 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 11:33:00.322361   56868 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 11:33:00.338441   56868 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39011
	I1104 11:33:00.338942   56868 main.go:141] libmachine: () Calling .GetVersion
	I1104 11:33:00.339542   56868 main.go:141] libmachine: Using API Version  1
	I1104 11:33:00.339574   56868 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 11:33:00.339932   56868 main.go:141] libmachine: () Calling .GetMachineName
	I1104 11:33:00.340141   56868 main.go:141] libmachine: (multinode-453447) Calling .DriverName
	I1104 11:33:00.379904   56868 out.go:177] * Using the kvm2 driver based on existing profile
	I1104 11:33:00.381370   56868 start.go:297] selected driver: kvm2
	I1104 11:33:00.381389   56868 start.go:901] validating driver "kvm2" against &{Name:multinode-453447 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-453447 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.86 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.80 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.117 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1104 11:33:00.381532   56868 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1104 11:33:00.381936   56868 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1104 11:33:00.382042   56868 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19906-19898/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1104 11:33:00.397318   56868 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1104 11:33:00.398050   56868 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1104 11:33:00.398083   56868 cni.go:84] Creating CNI manager for ""
	I1104 11:33:00.398122   56868 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1104 11:33:00.398180   56868 start.go:340] cluster config:
	{Name:multinode-453447 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-453447 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.86 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.80 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.117 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1104 11:33:00.398331   56868 iso.go:125] acquiring lock: {Name:mk00c7dd6e02d348844f079f7574057e15cae010 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1104 11:33:00.400155   56868 out.go:177] * Starting "multinode-453447" primary control-plane node in "multinode-453447" cluster
	I1104 11:33:00.401322   56868 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1104 11:33:00.401362   56868 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1104 11:33:00.401369   56868 cache.go:56] Caching tarball of preloaded images
	I1104 11:33:00.401472   56868 preload.go:172] Found /home/jenkins/minikube-integration/19906-19898/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1104 11:33:00.401486   56868 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1104 11:33:00.401593   56868 profile.go:143] Saving config to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/multinode-453447/config.json ...
	I1104 11:33:00.401785   56868 start.go:360] acquireMachinesLock for multinode-453447: {Name:mka4ed965095cfb58c9b0f5916edff39b4236a2f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1104 11:33:00.401838   56868 start.go:364] duration metric: took 32.474µs to acquireMachinesLock for "multinode-453447"
	I1104 11:33:00.401856   56868 start.go:96] Skipping create...Using existing machine configuration
	I1104 11:33:00.401863   56868 fix.go:54] fixHost starting: 
	I1104 11:33:00.402162   56868 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 11:33:00.402199   56868 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 11:33:00.416872   56868 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39515
	I1104 11:33:00.417452   56868 main.go:141] libmachine: () Calling .GetVersion
	I1104 11:33:00.418001   56868 main.go:141] libmachine: Using API Version  1
	I1104 11:33:00.418034   56868 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 11:33:00.418398   56868 main.go:141] libmachine: () Calling .GetMachineName
	I1104 11:33:00.418622   56868 main.go:141] libmachine: (multinode-453447) Calling .DriverName
	I1104 11:33:00.418776   56868 main.go:141] libmachine: (multinode-453447) Calling .GetState
	I1104 11:33:00.420363   56868 fix.go:112] recreateIfNeeded on multinode-453447: state=Running err=<nil>
	W1104 11:33:00.420382   56868 fix.go:138] unexpected machine state, will restart: <nil>
	I1104 11:33:00.422278   56868 out.go:177] * Updating the running kvm2 "multinode-453447" VM ...
	I1104 11:33:00.423506   56868 machine.go:93] provisionDockerMachine start ...
	I1104 11:33:00.423529   56868 main.go:141] libmachine: (multinode-453447) Calling .DriverName
	I1104 11:33:00.423720   56868 main.go:141] libmachine: (multinode-453447) Calling .GetSSHHostname
	I1104 11:33:00.426436   56868 main.go:141] libmachine: (multinode-453447) DBG | domain multinode-453447 has defined MAC address 52:54:00:9c:5b:45 in network mk-multinode-453447
	I1104 11:33:00.426902   56868 main.go:141] libmachine: (multinode-453447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:5b:45", ip: ""} in network mk-multinode-453447: {Iface:virbr1 ExpiryTime:2024-11-04 12:27:39 +0000 UTC Type:0 Mac:52:54:00:9c:5b:45 Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:multinode-453447 Clientid:01:52:54:00:9c:5b:45}
	I1104 11:33:00.426934   56868 main.go:141] libmachine: (multinode-453447) DBG | domain multinode-453447 has defined IP address 192.168.39.86 and MAC address 52:54:00:9c:5b:45 in network mk-multinode-453447
	I1104 11:33:00.427109   56868 main.go:141] libmachine: (multinode-453447) Calling .GetSSHPort
	I1104 11:33:00.427302   56868 main.go:141] libmachine: (multinode-453447) Calling .GetSSHKeyPath
	I1104 11:33:00.427477   56868 main.go:141] libmachine: (multinode-453447) Calling .GetSSHKeyPath
	I1104 11:33:00.427629   56868 main.go:141] libmachine: (multinode-453447) Calling .GetSSHUsername
	I1104 11:33:00.427811   56868 main.go:141] libmachine: Using SSH client type: native
	I1104 11:33:00.428041   56868 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.86 22 <nil> <nil>}
	I1104 11:33:00.428059   56868 main.go:141] libmachine: About to run SSH command:
	hostname
	I1104 11:33:00.530883   56868 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-453447
	
	I1104 11:33:00.530911   56868 main.go:141] libmachine: (multinode-453447) Calling .GetMachineName
	I1104 11:33:00.531153   56868 buildroot.go:166] provisioning hostname "multinode-453447"
	I1104 11:33:00.531185   56868 main.go:141] libmachine: (multinode-453447) Calling .GetMachineName
	I1104 11:33:00.531372   56868 main.go:141] libmachine: (multinode-453447) Calling .GetSSHHostname
	I1104 11:33:00.534263   56868 main.go:141] libmachine: (multinode-453447) DBG | domain multinode-453447 has defined MAC address 52:54:00:9c:5b:45 in network mk-multinode-453447
	I1104 11:33:00.534637   56868 main.go:141] libmachine: (multinode-453447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:5b:45", ip: ""} in network mk-multinode-453447: {Iface:virbr1 ExpiryTime:2024-11-04 12:27:39 +0000 UTC Type:0 Mac:52:54:00:9c:5b:45 Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:multinode-453447 Clientid:01:52:54:00:9c:5b:45}
	I1104 11:33:00.534670   56868 main.go:141] libmachine: (multinode-453447) DBG | domain multinode-453447 has defined IP address 192.168.39.86 and MAC address 52:54:00:9c:5b:45 in network mk-multinode-453447
	I1104 11:33:00.534854   56868 main.go:141] libmachine: (multinode-453447) Calling .GetSSHPort
	I1104 11:33:00.535006   56868 main.go:141] libmachine: (multinode-453447) Calling .GetSSHKeyPath
	I1104 11:33:00.535110   56868 main.go:141] libmachine: (multinode-453447) Calling .GetSSHKeyPath
	I1104 11:33:00.535182   56868 main.go:141] libmachine: (multinode-453447) Calling .GetSSHUsername
	I1104 11:33:00.535301   56868 main.go:141] libmachine: Using SSH client type: native
	I1104 11:33:00.535455   56868 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.86 22 <nil> <nil>}
	I1104 11:33:00.535467   56868 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-453447 && echo "multinode-453447" | sudo tee /etc/hostname
	I1104 11:33:00.651985   56868 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-453447
	
	I1104 11:33:00.652012   56868 main.go:141] libmachine: (multinode-453447) Calling .GetSSHHostname
	I1104 11:33:00.655414   56868 main.go:141] libmachine: (multinode-453447) DBG | domain multinode-453447 has defined MAC address 52:54:00:9c:5b:45 in network mk-multinode-453447
	I1104 11:33:00.655834   56868 main.go:141] libmachine: (multinode-453447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:5b:45", ip: ""} in network mk-multinode-453447: {Iface:virbr1 ExpiryTime:2024-11-04 12:27:39 +0000 UTC Type:0 Mac:52:54:00:9c:5b:45 Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:multinode-453447 Clientid:01:52:54:00:9c:5b:45}
	I1104 11:33:00.655868   56868 main.go:141] libmachine: (multinode-453447) DBG | domain multinode-453447 has defined IP address 192.168.39.86 and MAC address 52:54:00:9c:5b:45 in network mk-multinode-453447
	I1104 11:33:00.656075   56868 main.go:141] libmachine: (multinode-453447) Calling .GetSSHPort
	I1104 11:33:00.656314   56868 main.go:141] libmachine: (multinode-453447) Calling .GetSSHKeyPath
	I1104 11:33:00.656543   56868 main.go:141] libmachine: (multinode-453447) Calling .GetSSHKeyPath
	I1104 11:33:00.656697   56868 main.go:141] libmachine: (multinode-453447) Calling .GetSSHUsername
	I1104 11:33:00.656893   56868 main.go:141] libmachine: Using SSH client type: native
	I1104 11:33:00.657088   56868 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.86 22 <nil> <nil>}
	I1104 11:33:00.657105   56868 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-453447' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-453447/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-453447' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1104 11:33:00.757796   56868 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1104 11:33:00.757821   56868 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19906-19898/.minikube CaCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19906-19898/.minikube}
	I1104 11:33:00.757874   56868 buildroot.go:174] setting up certificates
	I1104 11:33:00.757885   56868 provision.go:84] configureAuth start
	I1104 11:33:00.757901   56868 main.go:141] libmachine: (multinode-453447) Calling .GetMachineName
	I1104 11:33:00.758214   56868 main.go:141] libmachine: (multinode-453447) Calling .GetIP
	I1104 11:33:00.760987   56868 main.go:141] libmachine: (multinode-453447) DBG | domain multinode-453447 has defined MAC address 52:54:00:9c:5b:45 in network mk-multinode-453447
	I1104 11:33:00.761391   56868 main.go:141] libmachine: (multinode-453447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:5b:45", ip: ""} in network mk-multinode-453447: {Iface:virbr1 ExpiryTime:2024-11-04 12:27:39 +0000 UTC Type:0 Mac:52:54:00:9c:5b:45 Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:multinode-453447 Clientid:01:52:54:00:9c:5b:45}
	I1104 11:33:00.761420   56868 main.go:141] libmachine: (multinode-453447) DBG | domain multinode-453447 has defined IP address 192.168.39.86 and MAC address 52:54:00:9c:5b:45 in network mk-multinode-453447
	I1104 11:33:00.761598   56868 main.go:141] libmachine: (multinode-453447) Calling .GetSSHHostname
	I1104 11:33:00.763710   56868 main.go:141] libmachine: (multinode-453447) DBG | domain multinode-453447 has defined MAC address 52:54:00:9c:5b:45 in network mk-multinode-453447
	I1104 11:33:00.764083   56868 main.go:141] libmachine: (multinode-453447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:5b:45", ip: ""} in network mk-multinode-453447: {Iface:virbr1 ExpiryTime:2024-11-04 12:27:39 +0000 UTC Type:0 Mac:52:54:00:9c:5b:45 Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:multinode-453447 Clientid:01:52:54:00:9c:5b:45}
	I1104 11:33:00.764110   56868 main.go:141] libmachine: (multinode-453447) DBG | domain multinode-453447 has defined IP address 192.168.39.86 and MAC address 52:54:00:9c:5b:45 in network mk-multinode-453447
	I1104 11:33:00.764264   56868 provision.go:143] copyHostCerts
	I1104 11:33:00.764305   56868 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem
	I1104 11:33:00.764348   56868 exec_runner.go:144] found /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem, removing ...
	I1104 11:33:00.764361   56868 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem
	I1104 11:33:00.764439   56868 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem (1078 bytes)
	I1104 11:33:00.764538   56868 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem
	I1104 11:33:00.764563   56868 exec_runner.go:144] found /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem, removing ...
	I1104 11:33:00.764573   56868 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem
	I1104 11:33:00.764612   56868 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem (1123 bytes)
	I1104 11:33:00.764674   56868 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem
	I1104 11:33:00.764701   56868 exec_runner.go:144] found /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem, removing ...
	I1104 11:33:00.764710   56868 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem
	I1104 11:33:00.764741   56868 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem (1679 bytes)
	I1104 11:33:00.764804   56868 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem org=jenkins.multinode-453447 san=[127.0.0.1 192.168.39.86 localhost minikube multinode-453447]
	I1104 11:33:00.840596   56868 provision.go:177] copyRemoteCerts
	I1104 11:33:00.840650   56868 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1104 11:33:00.840671   56868 main.go:141] libmachine: (multinode-453447) Calling .GetSSHHostname
	I1104 11:33:00.843196   56868 main.go:141] libmachine: (multinode-453447) DBG | domain multinode-453447 has defined MAC address 52:54:00:9c:5b:45 in network mk-multinode-453447
	I1104 11:33:00.843555   56868 main.go:141] libmachine: (multinode-453447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:5b:45", ip: ""} in network mk-multinode-453447: {Iface:virbr1 ExpiryTime:2024-11-04 12:27:39 +0000 UTC Type:0 Mac:52:54:00:9c:5b:45 Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:multinode-453447 Clientid:01:52:54:00:9c:5b:45}
	I1104 11:33:00.843577   56868 main.go:141] libmachine: (multinode-453447) DBG | domain multinode-453447 has defined IP address 192.168.39.86 and MAC address 52:54:00:9c:5b:45 in network mk-multinode-453447
	I1104 11:33:00.843782   56868 main.go:141] libmachine: (multinode-453447) Calling .GetSSHPort
	I1104 11:33:00.843946   56868 main.go:141] libmachine: (multinode-453447) Calling .GetSSHKeyPath
	I1104 11:33:00.844085   56868 main.go:141] libmachine: (multinode-453447) Calling .GetSSHUsername
	I1104 11:33:00.844197   56868 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/multinode-453447/id_rsa Username:docker}
	I1104 11:33:00.924861   56868 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1104 11:33:00.924955   56868 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1104 11:33:00.948254   56868 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1104 11:33:00.948344   56868 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1104 11:33:00.972464   56868 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1104 11:33:00.972537   56868 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1104 11:33:00.996976   56868 provision.go:87] duration metric: took 239.073655ms to configureAuth
	I1104 11:33:00.997001   56868 buildroot.go:189] setting minikube options for container-runtime
	I1104 11:33:00.997216   56868 config.go:182] Loaded profile config "multinode-453447": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1104 11:33:00.997307   56868 main.go:141] libmachine: (multinode-453447) Calling .GetSSHHostname
	I1104 11:33:01.000005   56868 main.go:141] libmachine: (multinode-453447) DBG | domain multinode-453447 has defined MAC address 52:54:00:9c:5b:45 in network mk-multinode-453447
	I1104 11:33:01.000377   56868 main.go:141] libmachine: (multinode-453447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:5b:45", ip: ""} in network mk-multinode-453447: {Iface:virbr1 ExpiryTime:2024-11-04 12:27:39 +0000 UTC Type:0 Mac:52:54:00:9c:5b:45 Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:multinode-453447 Clientid:01:52:54:00:9c:5b:45}
	I1104 11:33:01.000415   56868 main.go:141] libmachine: (multinode-453447) DBG | domain multinode-453447 has defined IP address 192.168.39.86 and MAC address 52:54:00:9c:5b:45 in network mk-multinode-453447
	I1104 11:33:01.000631   56868 main.go:141] libmachine: (multinode-453447) Calling .GetSSHPort
	I1104 11:33:01.000827   56868 main.go:141] libmachine: (multinode-453447) Calling .GetSSHKeyPath
	I1104 11:33:01.000978   56868 main.go:141] libmachine: (multinode-453447) Calling .GetSSHKeyPath
	I1104 11:33:01.001121   56868 main.go:141] libmachine: (multinode-453447) Calling .GetSSHUsername
	I1104 11:33:01.001336   56868 main.go:141] libmachine: Using SSH client type: native
	I1104 11:33:01.001495   56868 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.86 22 <nil> <nil>}
	I1104 11:33:01.001509   56868 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1104 11:34:31.755042   56868 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1104 11:34:31.755068   56868 machine.go:96] duration metric: took 1m31.331545824s to provisionDockerMachine
	I1104 11:34:31.755083   56868 start.go:293] postStartSetup for "multinode-453447" (driver="kvm2")
	I1104 11:34:31.755096   56868 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1104 11:34:31.755118   56868 main.go:141] libmachine: (multinode-453447) Calling .DriverName
	I1104 11:34:31.755449   56868 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1104 11:34:31.755483   56868 main.go:141] libmachine: (multinode-453447) Calling .GetSSHHostname
	I1104 11:34:31.759004   56868 main.go:141] libmachine: (multinode-453447) DBG | domain multinode-453447 has defined MAC address 52:54:00:9c:5b:45 in network mk-multinode-453447
	I1104 11:34:31.759398   56868 main.go:141] libmachine: (multinode-453447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:5b:45", ip: ""} in network mk-multinode-453447: {Iface:virbr1 ExpiryTime:2024-11-04 12:27:39 +0000 UTC Type:0 Mac:52:54:00:9c:5b:45 Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:multinode-453447 Clientid:01:52:54:00:9c:5b:45}
	I1104 11:34:31.759429   56868 main.go:141] libmachine: (multinode-453447) DBG | domain multinode-453447 has defined IP address 192.168.39.86 and MAC address 52:54:00:9c:5b:45 in network mk-multinode-453447
	I1104 11:34:31.759633   56868 main.go:141] libmachine: (multinode-453447) Calling .GetSSHPort
	I1104 11:34:31.759820   56868 main.go:141] libmachine: (multinode-453447) Calling .GetSSHKeyPath
	I1104 11:34:31.759987   56868 main.go:141] libmachine: (multinode-453447) Calling .GetSSHUsername
	I1104 11:34:31.760099   56868 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/multinode-453447/id_rsa Username:docker}
	I1104 11:34:31.839452   56868 ssh_runner.go:195] Run: cat /etc/os-release
	I1104 11:34:31.843628   56868 command_runner.go:130] > NAME=Buildroot
	I1104 11:34:31.843650   56868 command_runner.go:130] > VERSION=2023.02.9-dirty
	I1104 11:34:31.843654   56868 command_runner.go:130] > ID=buildroot
	I1104 11:34:31.843659   56868 command_runner.go:130] > VERSION_ID=2023.02.9
	I1104 11:34:31.843664   56868 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I1104 11:34:31.843694   56868 info.go:137] Remote host: Buildroot 2023.02.9
	I1104 11:34:31.843707   56868 filesync.go:126] Scanning /home/jenkins/minikube-integration/19906-19898/.minikube/addons for local assets ...
	I1104 11:34:31.843778   56868 filesync.go:126] Scanning /home/jenkins/minikube-integration/19906-19898/.minikube/files for local assets ...
	I1104 11:34:31.843887   56868 filesync.go:149] local asset: /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem -> 272182.pem in /etc/ssl/certs
	I1104 11:34:31.843902   56868 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem -> /etc/ssl/certs/272182.pem
	I1104 11:34:31.844014   56868 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1104 11:34:31.853151   56868 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem --> /etc/ssl/certs/272182.pem (1708 bytes)
	I1104 11:34:31.875996   56868 start.go:296] duration metric: took 120.898692ms for postStartSetup
	I1104 11:34:31.876036   56868 fix.go:56] duration metric: took 1m31.47417229s for fixHost
	I1104 11:34:31.876055   56868 main.go:141] libmachine: (multinode-453447) Calling .GetSSHHostname
	I1104 11:34:31.878925   56868 main.go:141] libmachine: (multinode-453447) DBG | domain multinode-453447 has defined MAC address 52:54:00:9c:5b:45 in network mk-multinode-453447
	I1104 11:34:31.879238   56868 main.go:141] libmachine: (multinode-453447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:5b:45", ip: ""} in network mk-multinode-453447: {Iface:virbr1 ExpiryTime:2024-11-04 12:27:39 +0000 UTC Type:0 Mac:52:54:00:9c:5b:45 Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:multinode-453447 Clientid:01:52:54:00:9c:5b:45}
	I1104 11:34:31.879269   56868 main.go:141] libmachine: (multinode-453447) DBG | domain multinode-453447 has defined IP address 192.168.39.86 and MAC address 52:54:00:9c:5b:45 in network mk-multinode-453447
	I1104 11:34:31.879427   56868 main.go:141] libmachine: (multinode-453447) Calling .GetSSHPort
	I1104 11:34:31.879637   56868 main.go:141] libmachine: (multinode-453447) Calling .GetSSHKeyPath
	I1104 11:34:31.879812   56868 main.go:141] libmachine: (multinode-453447) Calling .GetSSHKeyPath
	I1104 11:34:31.879919   56868 main.go:141] libmachine: (multinode-453447) Calling .GetSSHUsername
	I1104 11:34:31.880053   56868 main.go:141] libmachine: Using SSH client type: native
	I1104 11:34:31.880205   56868 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.86 22 <nil> <nil>}
	I1104 11:34:31.880215   56868 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1104 11:34:31.981647   56868 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730720071.955285285
	
	I1104 11:34:31.981671   56868 fix.go:216] guest clock: 1730720071.955285285
	I1104 11:34:31.981681   56868 fix.go:229] Guest: 2024-11-04 11:34:31.955285285 +0000 UTC Remote: 2024-11-04 11:34:31.876039456 +0000 UTC m=+91.609243146 (delta=79.245829ms)
	I1104 11:34:31.981703   56868 fix.go:200] guest clock delta is within tolerance: 79.245829ms
	I1104 11:34:31.981709   56868 start.go:83] releasing machines lock for "multinode-453447", held for 1m31.579859716s
	I1104 11:34:31.981734   56868 main.go:141] libmachine: (multinode-453447) Calling .DriverName
	I1104 11:34:31.981987   56868 main.go:141] libmachine: (multinode-453447) Calling .GetIP
	I1104 11:34:31.984410   56868 main.go:141] libmachine: (multinode-453447) DBG | domain multinode-453447 has defined MAC address 52:54:00:9c:5b:45 in network mk-multinode-453447
	I1104 11:34:31.984764   56868 main.go:141] libmachine: (multinode-453447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:5b:45", ip: ""} in network mk-multinode-453447: {Iface:virbr1 ExpiryTime:2024-11-04 12:27:39 +0000 UTC Type:0 Mac:52:54:00:9c:5b:45 Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:multinode-453447 Clientid:01:52:54:00:9c:5b:45}
	I1104 11:34:31.984792   56868 main.go:141] libmachine: (multinode-453447) DBG | domain multinode-453447 has defined IP address 192.168.39.86 and MAC address 52:54:00:9c:5b:45 in network mk-multinode-453447
	I1104 11:34:31.984906   56868 main.go:141] libmachine: (multinode-453447) Calling .DriverName
	I1104 11:34:31.985474   56868 main.go:141] libmachine: (multinode-453447) Calling .DriverName
	I1104 11:34:31.985644   56868 main.go:141] libmachine: (multinode-453447) Calling .DriverName
	I1104 11:34:31.985729   56868 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1104 11:34:31.985783   56868 main.go:141] libmachine: (multinode-453447) Calling .GetSSHHostname
	I1104 11:34:31.985878   56868 ssh_runner.go:195] Run: cat /version.json
	I1104 11:34:31.985903   56868 main.go:141] libmachine: (multinode-453447) Calling .GetSSHHostname
	I1104 11:34:31.988265   56868 main.go:141] libmachine: (multinode-453447) DBG | domain multinode-453447 has defined MAC address 52:54:00:9c:5b:45 in network mk-multinode-453447
	I1104 11:34:31.988292   56868 main.go:141] libmachine: (multinode-453447) DBG | domain multinode-453447 has defined MAC address 52:54:00:9c:5b:45 in network mk-multinode-453447
	I1104 11:34:31.988666   56868 main.go:141] libmachine: (multinode-453447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:5b:45", ip: ""} in network mk-multinode-453447: {Iface:virbr1 ExpiryTime:2024-11-04 12:27:39 +0000 UTC Type:0 Mac:52:54:00:9c:5b:45 Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:multinode-453447 Clientid:01:52:54:00:9c:5b:45}
	I1104 11:34:31.988692   56868 main.go:141] libmachine: (multinode-453447) DBG | domain multinode-453447 has defined IP address 192.168.39.86 and MAC address 52:54:00:9c:5b:45 in network mk-multinode-453447
	I1104 11:34:31.988790   56868 main.go:141] libmachine: (multinode-453447) Calling .GetSSHPort
	I1104 11:34:31.988812   56868 main.go:141] libmachine: (multinode-453447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:5b:45", ip: ""} in network mk-multinode-453447: {Iface:virbr1 ExpiryTime:2024-11-04 12:27:39 +0000 UTC Type:0 Mac:52:54:00:9c:5b:45 Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:multinode-453447 Clientid:01:52:54:00:9c:5b:45}
	I1104 11:34:31.988838   56868 main.go:141] libmachine: (multinode-453447) DBG | domain multinode-453447 has defined IP address 192.168.39.86 and MAC address 52:54:00:9c:5b:45 in network mk-multinode-453447
	I1104 11:34:31.988930   56868 main.go:141] libmachine: (multinode-453447) Calling .GetSSHKeyPath
	I1104 11:34:31.988975   56868 main.go:141] libmachine: (multinode-453447) Calling .GetSSHPort
	I1104 11:34:31.989062   56868 main.go:141] libmachine: (multinode-453447) Calling .GetSSHUsername
	I1104 11:34:31.989132   56868 main.go:141] libmachine: (multinode-453447) Calling .GetSSHKeyPath
	I1104 11:34:31.989239   56868 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/multinode-453447/id_rsa Username:docker}
	I1104 11:34:31.989272   56868 main.go:141] libmachine: (multinode-453447) Calling .GetSSHUsername
	I1104 11:34:31.989393   56868 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/multinode-453447/id_rsa Username:docker}
	I1104 11:34:32.062263   56868 command_runner.go:130] > {"iso_version": "v1.34.0-1730282777-19883", "kicbase_version": "v0.0.45-1730110049-19872", "minikube_version": "v1.34.0", "commit": "7738213fbe7cb3f4867f3e3b534798700ea0e3fb"}
	I1104 11:34:32.062508   56868 ssh_runner.go:195] Run: systemctl --version
	I1104 11:34:32.084680   56868 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1104 11:34:32.084739   56868 command_runner.go:130] > systemd 252 (252)
	I1104 11:34:32.084783   56868 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I1104 11:34:32.084852   56868 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1104 11:34:32.238325   56868 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1104 11:34:32.243709   56868 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1104 11:34:32.243988   56868 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1104 11:34:32.244049   56868 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1104 11:34:32.253349   56868 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1104 11:34:32.253374   56868 start.go:495] detecting cgroup driver to use...
	I1104 11:34:32.253467   56868 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1104 11:34:32.270096   56868 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1104 11:34:32.283939   56868 docker.go:217] disabling cri-docker service (if available) ...
	I1104 11:34:32.284006   56868 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1104 11:34:32.297777   56868 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1104 11:34:32.311091   56868 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1104 11:34:32.452906   56868 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1104 11:34:32.593303   56868 docker.go:233] disabling docker service ...
	I1104 11:34:32.593372   56868 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1104 11:34:32.609437   56868 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1104 11:34:32.623148   56868 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1104 11:34:32.759863   56868 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1104 11:34:32.894902   56868 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1104 11:34:32.909320   56868 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1104 11:34:32.927969   56868 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1104 11:34:32.928317   56868 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1104 11:34:32.928384   56868 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 11:34:32.938338   56868 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1104 11:34:32.938402   56868 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 11:34:32.948054   56868 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 11:34:32.958306   56868 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 11:34:32.967911   56868 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1104 11:34:32.977707   56868 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 11:34:32.987273   56868 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 11:34:32.997778   56868 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 11:34:33.007619   56868 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1104 11:34:33.016324   56868 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1104 11:34:33.016522   56868 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1104 11:34:33.025202   56868 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1104 11:34:33.153694   56868 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1104 11:34:33.341608   56868 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1104 11:34:33.341673   56868 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1104 11:34:33.346284   56868 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1104 11:34:33.346310   56868 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1104 11:34:33.346317   56868 command_runner.go:130] > Device: 0,22	Inode: 1299        Links: 1
	I1104 11:34:33.346324   56868 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1104 11:34:33.346331   56868 command_runner.go:130] > Access: 2024-11-04 11:34:33.313126209 +0000
	I1104 11:34:33.346339   56868 command_runner.go:130] > Modify: 2024-11-04 11:34:33.212126632 +0000
	I1104 11:34:33.346347   56868 command_runner.go:130] > Change: 2024-11-04 11:34:33.212126632 +0000
	I1104 11:34:33.346367   56868 command_runner.go:130] >  Birth: -
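The stat call above is how minikube waits (up to 60s) for CRI-O's socket to reappear after the restart. Done by hand, the same readiness check reduces to the sketch below (illustrative only):

    # Wait for the CRI-O socket after 'systemctl restart crio'
    until sudo test -S /var/run/crio/crio.sock; do sleep 1; done
    sudo systemctl is-active crio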
	I1104 11:34:33.346396   56868 start.go:563] Will wait 60s for crictl version
	I1104 11:34:33.346457   56868 ssh_runner.go:195] Run: which crictl
	I1104 11:34:33.349793   56868 command_runner.go:130] > /usr/bin/crictl
	I1104 11:34:33.349845   56868 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1104 11:34:33.384071   56868 command_runner.go:130] > Version:  0.1.0
	I1104 11:34:33.384093   56868 command_runner.go:130] > RuntimeName:  cri-o
	I1104 11:34:33.384097   56868 command_runner.go:130] > RuntimeVersion:  1.29.1
	I1104 11:34:33.384102   56868 command_runner.go:130] > RuntimeApiVersion:  v1
	I1104 11:34:33.385213   56868 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1104 11:34:33.385308   56868 ssh_runner.go:195] Run: crio --version
	I1104 11:34:33.415588   56868 command_runner.go:130] > crio version 1.29.1
	I1104 11:34:33.415609   56868 command_runner.go:130] > Version:        1.29.1
	I1104 11:34:33.415615   56868 command_runner.go:130] > GitCommit:      unknown
	I1104 11:34:33.415619   56868 command_runner.go:130] > GitCommitDate:  unknown
	I1104 11:34:33.415623   56868 command_runner.go:130] > GitTreeState:   clean
	I1104 11:34:33.415644   56868 command_runner.go:130] > BuildDate:      2024-10-30T14:24:06Z
	I1104 11:34:33.415649   56868 command_runner.go:130] > GoVersion:      go1.21.6
	I1104 11:34:33.415653   56868 command_runner.go:130] > Compiler:       gc
	I1104 11:34:33.415657   56868 command_runner.go:130] > Platform:       linux/amd64
	I1104 11:34:33.415660   56868 command_runner.go:130] > Linkmode:       dynamic
	I1104 11:34:33.415664   56868 command_runner.go:130] > BuildTags:      
	I1104 11:34:33.415668   56868 command_runner.go:130] >   containers_image_ostree_stub
	I1104 11:34:33.415673   56868 command_runner.go:130] >   exclude_graphdriver_btrfs
	I1104 11:34:33.415686   56868 command_runner.go:130] >   btrfs_noversion
	I1104 11:34:33.415696   56868 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I1104 11:34:33.415703   56868 command_runner.go:130] >   libdm_no_deferred_remove
	I1104 11:34:33.415707   56868 command_runner.go:130] >   seccomp
	I1104 11:34:33.415712   56868 command_runner.go:130] > LDFlags:          unknown
	I1104 11:34:33.415719   56868 command_runner.go:130] > SeccompEnabled:   true
	I1104 11:34:33.415725   56868 command_runner.go:130] > AppArmorEnabled:  false
	I1104 11:34:33.416865   56868 ssh_runner.go:195] Run: crio --version
	I1104 11:34:33.447718   56868 command_runner.go:130] > crio version 1.29.1
	I1104 11:34:33.447741   56868 command_runner.go:130] > Version:        1.29.1
	I1104 11:34:33.447747   56868 command_runner.go:130] > GitCommit:      unknown
	I1104 11:34:33.447751   56868 command_runner.go:130] > GitCommitDate:  unknown
	I1104 11:34:33.447755   56868 command_runner.go:130] > GitTreeState:   clean
	I1104 11:34:33.447761   56868 command_runner.go:130] > BuildDate:      2024-10-30T14:24:06Z
	I1104 11:34:33.447765   56868 command_runner.go:130] > GoVersion:      go1.21.6
	I1104 11:34:33.447768   56868 command_runner.go:130] > Compiler:       gc
	I1104 11:34:33.447773   56868 command_runner.go:130] > Platform:       linux/amd64
	I1104 11:34:33.447777   56868 command_runner.go:130] > Linkmode:       dynamic
	I1104 11:34:33.447782   56868 command_runner.go:130] > BuildTags:      
	I1104 11:34:33.447789   56868 command_runner.go:130] >   containers_image_ostree_stub
	I1104 11:34:33.447798   56868 command_runner.go:130] >   exclude_graphdriver_btrfs
	I1104 11:34:33.447806   56868 command_runner.go:130] >   btrfs_noversion
	I1104 11:34:33.447814   56868 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I1104 11:34:33.447824   56868 command_runner.go:130] >   libdm_no_deferred_remove
	I1104 11:34:33.447830   56868 command_runner.go:130] >   seccomp
	I1104 11:34:33.447834   56868 command_runner.go:130] > LDFlags:          unknown
	I1104 11:34:33.447838   56868 command_runner.go:130] > SeccompEnabled:   true
	I1104 11:34:33.447843   56868 command_runner.go:130] > AppArmorEnabled:  false
	I1104 11:34:33.449919   56868 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1104 11:34:33.451027   56868 main.go:141] libmachine: (multinode-453447) Calling .GetIP
	I1104 11:34:33.453521   56868 main.go:141] libmachine: (multinode-453447) DBG | domain multinode-453447 has defined MAC address 52:54:00:9c:5b:45 in network mk-multinode-453447
	I1104 11:34:33.453903   56868 main.go:141] libmachine: (multinode-453447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:5b:45", ip: ""} in network mk-multinode-453447: {Iface:virbr1 ExpiryTime:2024-11-04 12:27:39 +0000 UTC Type:0 Mac:52:54:00:9c:5b:45 Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:multinode-453447 Clientid:01:52:54:00:9c:5b:45}
	I1104 11:34:33.453937   56868 main.go:141] libmachine: (multinode-453447) DBG | domain multinode-453447 has defined IP address 192.168.39.86 and MAC address 52:54:00:9c:5b:45 in network mk-multinode-453447
	I1104 11:34:33.454165   56868 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1104 11:34:33.458138   56868 command_runner.go:130] > 192.168.39.1	host.minikube.internal
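The grep above only verifies that the host-to-guest alias already resolves inside the guest; when it is missing, minikube adds the entry itself, which by hand amounts roughly to the following (sketch, using the gateway IP reported in this run):

    grep -q 'host.minikube.internal' /etc/hosts || \
      echo '192.168.39.1	host.minikube.internal' | sudo tee -a /etc/hosts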
	I1104 11:34:33.458235   56868 kubeadm.go:883] updating cluster {Name:multinode-453447 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-453447 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.86 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.80 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.117 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1104 11:34:33.458395   56868 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1104 11:34:33.458453   56868 ssh_runner.go:195] Run: sudo crictl images --output json
	I1104 11:34:33.498558   56868 command_runner.go:130] > {
	I1104 11:34:33.498582   56868 command_runner.go:130] >   "images": [
	I1104 11:34:33.498587   56868 command_runner.go:130] >     {
	I1104 11:34:33.498594   56868 command_runner.go:130] >       "id": "3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52",
	I1104 11:34:33.498598   56868 command_runner.go:130] >       "repoTags": [
	I1104 11:34:33.498603   56868 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20241007-36f62932"
	I1104 11:34:33.498607   56868 command_runner.go:130] >       ],
	I1104 11:34:33.498610   56868 command_runner.go:130] >       "repoDigests": [
	I1104 11:34:33.498624   56868 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387",
	I1104 11:34:33.498635   56868 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e1b7077a015216fd2772941babf3d3204abecf98b97d82ecd149d00212c55fa7"
	I1104 11:34:33.498644   56868 command_runner.go:130] >       ],
	I1104 11:34:33.498651   56868 command_runner.go:130] >       "size": "94965812",
	I1104 11:34:33.498658   56868 command_runner.go:130] >       "uid": null,
	I1104 11:34:33.498664   56868 command_runner.go:130] >       "username": "",
	I1104 11:34:33.498671   56868 command_runner.go:130] >       "spec": null,
	I1104 11:34:33.498678   56868 command_runner.go:130] >       "pinned": false
	I1104 11:34:33.498682   56868 command_runner.go:130] >     },
	I1104 11:34:33.498684   56868 command_runner.go:130] >     {
	I1104 11:34:33.498690   56868 command_runner.go:130] >       "id": "9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5",
	I1104 11:34:33.498697   56868 command_runner.go:130] >       "repoTags": [
	I1104 11:34:33.498702   56868 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20241023-a345ebe4"
	I1104 11:34:33.498705   56868 command_runner.go:130] >       ],
	I1104 11:34:33.498710   56868 command_runner.go:130] >       "repoDigests": [
	I1104 11:34:33.498722   56868 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16",
	I1104 11:34:33.498738   56868 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e39a44bd13d0b4532d0436a1c2fafdd1a8c57fb327770004098162f0bb96132d"
	I1104 11:34:33.498746   56868 command_runner.go:130] >       ],
	I1104 11:34:33.498755   56868 command_runner.go:130] >       "size": "94958644",
	I1104 11:34:33.498764   56868 command_runner.go:130] >       "uid": null,
	I1104 11:34:33.498773   56868 command_runner.go:130] >       "username": "",
	I1104 11:34:33.498779   56868 command_runner.go:130] >       "spec": null,
	I1104 11:34:33.498782   56868 command_runner.go:130] >       "pinned": false
	I1104 11:34:33.498786   56868 command_runner.go:130] >     },
	I1104 11:34:33.498791   56868 command_runner.go:130] >     {
	I1104 11:34:33.498797   56868 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I1104 11:34:33.498801   56868 command_runner.go:130] >       "repoTags": [
	I1104 11:34:33.498809   56868 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I1104 11:34:33.498818   56868 command_runner.go:130] >       ],
	I1104 11:34:33.498827   56868 command_runner.go:130] >       "repoDigests": [
	I1104 11:34:33.498842   56868 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I1104 11:34:33.498857   56868 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I1104 11:34:33.498865   56868 command_runner.go:130] >       ],
	I1104 11:34:33.498874   56868 command_runner.go:130] >       "size": "1363676",
	I1104 11:34:33.498882   56868 command_runner.go:130] >       "uid": null,
	I1104 11:34:33.498886   56868 command_runner.go:130] >       "username": "",
	I1104 11:34:33.498891   56868 command_runner.go:130] >       "spec": null,
	I1104 11:34:33.498895   56868 command_runner.go:130] >       "pinned": false
	I1104 11:34:33.498902   56868 command_runner.go:130] >     },
	I1104 11:34:33.498907   56868 command_runner.go:130] >     {
	I1104 11:34:33.498920   56868 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1104 11:34:33.498931   56868 command_runner.go:130] >       "repoTags": [
	I1104 11:34:33.498942   56868 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1104 11:34:33.498951   56868 command_runner.go:130] >       ],
	I1104 11:34:33.498960   56868 command_runner.go:130] >       "repoDigests": [
	I1104 11:34:33.498976   56868 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1104 11:34:33.498991   56868 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1104 11:34:33.499000   56868 command_runner.go:130] >       ],
	I1104 11:34:33.499010   56868 command_runner.go:130] >       "size": "31470524",
	I1104 11:34:33.499020   56868 command_runner.go:130] >       "uid": null,
	I1104 11:34:33.499030   56868 command_runner.go:130] >       "username": "",
	I1104 11:34:33.499040   56868 command_runner.go:130] >       "spec": null,
	I1104 11:34:33.499050   56868 command_runner.go:130] >       "pinned": false
	I1104 11:34:33.499058   56868 command_runner.go:130] >     },
	I1104 11:34:33.499066   56868 command_runner.go:130] >     {
	I1104 11:34:33.499074   56868 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I1104 11:34:33.499080   56868 command_runner.go:130] >       "repoTags": [
	I1104 11:34:33.499088   56868 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I1104 11:34:33.499096   56868 command_runner.go:130] >       ],
	I1104 11:34:33.499103   56868 command_runner.go:130] >       "repoDigests": [
	I1104 11:34:33.499118   56868 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I1104 11:34:33.499134   56868 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I1104 11:34:33.499142   56868 command_runner.go:130] >       ],
	I1104 11:34:33.499152   56868 command_runner.go:130] >       "size": "63273227",
	I1104 11:34:33.499161   56868 command_runner.go:130] >       "uid": null,
	I1104 11:34:33.499169   56868 command_runner.go:130] >       "username": "nonroot",
	I1104 11:34:33.499178   56868 command_runner.go:130] >       "spec": null,
	I1104 11:34:33.499198   56868 command_runner.go:130] >       "pinned": false
	I1104 11:34:33.499203   56868 command_runner.go:130] >     },
	I1104 11:34:33.499212   56868 command_runner.go:130] >     {
	I1104 11:34:33.499224   56868 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I1104 11:34:33.499233   56868 command_runner.go:130] >       "repoTags": [
	I1104 11:34:33.499243   56868 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I1104 11:34:33.499252   56868 command_runner.go:130] >       ],
	I1104 11:34:33.499265   56868 command_runner.go:130] >       "repoDigests": [
	I1104 11:34:33.499275   56868 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I1104 11:34:33.499290   56868 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I1104 11:34:33.499299   56868 command_runner.go:130] >       ],
	I1104 11:34:33.499307   56868 command_runner.go:130] >       "size": "149009664",
	I1104 11:34:33.499317   56868 command_runner.go:130] >       "uid": {
	I1104 11:34:33.499327   56868 command_runner.go:130] >         "value": "0"
	I1104 11:34:33.499336   56868 command_runner.go:130] >       },
	I1104 11:34:33.499345   56868 command_runner.go:130] >       "username": "",
	I1104 11:34:33.499355   56868 command_runner.go:130] >       "spec": null,
	I1104 11:34:33.499365   56868 command_runner.go:130] >       "pinned": false
	I1104 11:34:33.499372   56868 command_runner.go:130] >     },
	I1104 11:34:33.499375   56868 command_runner.go:130] >     {
	I1104 11:34:33.499388   56868 command_runner.go:130] >       "id": "9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173",
	I1104 11:34:33.499398   56868 command_runner.go:130] >       "repoTags": [
	I1104 11:34:33.499409   56868 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.2"
	I1104 11:34:33.499419   56868 command_runner.go:130] >       ],
	I1104 11:34:33.499428   56868 command_runner.go:130] >       "repoDigests": [
	I1104 11:34:33.499443   56868 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0",
	I1104 11:34:33.499460   56868 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a4fdc0ebc2950d76f2859c5f38f2d05b760ed09fd8006d7a8d98dd9b30bc55da"
	I1104 11:34:33.499467   56868 command_runner.go:130] >       ],
	I1104 11:34:33.499472   56868 command_runner.go:130] >       "size": "95274464",
	I1104 11:34:33.499479   56868 command_runner.go:130] >       "uid": {
	I1104 11:34:33.499488   56868 command_runner.go:130] >         "value": "0"
	I1104 11:34:33.499497   56868 command_runner.go:130] >       },
	I1104 11:34:33.499504   56868 command_runner.go:130] >       "username": "",
	I1104 11:34:33.499514   56868 command_runner.go:130] >       "spec": null,
	I1104 11:34:33.499523   56868 command_runner.go:130] >       "pinned": false
	I1104 11:34:33.499532   56868 command_runner.go:130] >     },
	I1104 11:34:33.499538   56868 command_runner.go:130] >     {
	I1104 11:34:33.499551   56868 command_runner.go:130] >       "id": "0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503",
	I1104 11:34:33.499561   56868 command_runner.go:130] >       "repoTags": [
	I1104 11:34:33.499569   56868 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.2"
	I1104 11:34:33.499576   56868 command_runner.go:130] >       ],
	I1104 11:34:33.499583   56868 command_runner.go:130] >       "repoDigests": [
	I1104 11:34:33.499608   56868 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:4ba16ce7d80945dc4bb8e85ac0794c6171bfa8a55c94fe5be415afb4c3eb938c",
	I1104 11:34:33.499623   56868 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752"
	I1104 11:34:33.499629   56868 command_runner.go:130] >       ],
	I1104 11:34:33.499639   56868 command_runner.go:130] >       "size": "89474374",
	I1104 11:34:33.499647   56868 command_runner.go:130] >       "uid": {
	I1104 11:34:33.499656   56868 command_runner.go:130] >         "value": "0"
	I1104 11:34:33.499663   56868 command_runner.go:130] >       },
	I1104 11:34:33.499668   56868 command_runner.go:130] >       "username": "",
	I1104 11:34:33.499674   56868 command_runner.go:130] >       "spec": null,
	I1104 11:34:33.499680   56868 command_runner.go:130] >       "pinned": false
	I1104 11:34:33.499686   56868 command_runner.go:130] >     },
	I1104 11:34:33.499692   56868 command_runner.go:130] >     {
	I1104 11:34:33.499703   56868 command_runner.go:130] >       "id": "505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38",
	I1104 11:34:33.499710   56868 command_runner.go:130] >       "repoTags": [
	I1104 11:34:33.499717   56868 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.2"
	I1104 11:34:33.499726   56868 command_runner.go:130] >       ],
	I1104 11:34:33.499736   56868 command_runner.go:130] >       "repoDigests": [
	I1104 11:34:33.499749   56868 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:22535649599e9f22b1b857afcbd9a8b36be238b2b3ea68e47f60bedcea48cd3b",
	I1104 11:34:33.499760   56868 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe"
	I1104 11:34:33.499769   56868 command_runner.go:130] >       ],
	I1104 11:34:33.499779   56868 command_runner.go:130] >       "size": "92783513",
	I1104 11:34:33.499786   56868 command_runner.go:130] >       "uid": null,
	I1104 11:34:33.499796   56868 command_runner.go:130] >       "username": "",
	I1104 11:34:33.499805   56868 command_runner.go:130] >       "spec": null,
	I1104 11:34:33.499814   56868 command_runner.go:130] >       "pinned": false
	I1104 11:34:33.499823   56868 command_runner.go:130] >     },
	I1104 11:34:33.499831   56868 command_runner.go:130] >     {
	I1104 11:34:33.499841   56868 command_runner.go:130] >       "id": "847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856",
	I1104 11:34:33.499849   56868 command_runner.go:130] >       "repoTags": [
	I1104 11:34:33.499854   56868 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.2"
	I1104 11:34:33.499861   56868 command_runner.go:130] >       ],
	I1104 11:34:33.499869   56868 command_runner.go:130] >       "repoDigests": [
	I1104 11:34:33.499884   56868 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282",
	I1104 11:34:33.499899   56868 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:a40aba236dfcd0fe9d1258dcb9d22a82d83e9ea6d35f7d3574949b12df928fe5"
	I1104 11:34:33.499908   56868 command_runner.go:130] >       ],
	I1104 11:34:33.499918   56868 command_runner.go:130] >       "size": "68457798",
	I1104 11:34:33.499928   56868 command_runner.go:130] >       "uid": {
	I1104 11:34:33.499938   56868 command_runner.go:130] >         "value": "0"
	I1104 11:34:33.499945   56868 command_runner.go:130] >       },
	I1104 11:34:33.499949   56868 command_runner.go:130] >       "username": "",
	I1104 11:34:33.499955   56868 command_runner.go:130] >       "spec": null,
	I1104 11:34:33.499964   56868 command_runner.go:130] >       "pinned": false
	I1104 11:34:33.499972   56868 command_runner.go:130] >     },
	I1104 11:34:33.499978   56868 command_runner.go:130] >     {
	I1104 11:34:33.499991   56868 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I1104 11:34:33.500001   56868 command_runner.go:130] >       "repoTags": [
	I1104 11:34:33.500011   56868 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I1104 11:34:33.500020   56868 command_runner.go:130] >       ],
	I1104 11:34:33.500030   56868 command_runner.go:130] >       "repoDigests": [
	I1104 11:34:33.500041   56868 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I1104 11:34:33.500054   56868 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I1104 11:34:33.500063   56868 command_runner.go:130] >       ],
	I1104 11:34:33.500070   56868 command_runner.go:130] >       "size": "742080",
	I1104 11:34:33.500079   56868 command_runner.go:130] >       "uid": {
	I1104 11:34:33.500088   56868 command_runner.go:130] >         "value": "65535"
	I1104 11:34:33.500096   56868 command_runner.go:130] >       },
	I1104 11:34:33.500105   56868 command_runner.go:130] >       "username": "",
	I1104 11:34:33.500113   56868 command_runner.go:130] >       "spec": null,
	I1104 11:34:33.500123   56868 command_runner.go:130] >       "pinned": true
	I1104 11:34:33.500128   56868 command_runner.go:130] >     }
	I1104 11:34:33.500135   56868 command_runner.go:130] >   ]
	I1104 11:34:33.500138   56868 command_runner.go:130] > }
	I1104 11:34:33.500315   56868 crio.go:514] all images are preloaded for cri-o runtime.
	I1104 11:34:33.500326   56868 crio.go:433] Images already preloaded, skipping extraction
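The preload decision above (crio.go:514, crio.go:433) is driven by the crictl image list: minikube compares what CRI-O already holds against the kubeadm images required for v1.31.2 and skips extracting the preload tarball when they are all present. A rough manual equivalent of that check, relying only on the same crictl call shown in the log (illustrative):

    # List the Kubernetes images CRI-O already holds for this version
    sudo crictl images --output json | grep -o '"registry.k8s.io/[^"]*:v1.31.2"' | sort -u
    # kube-apiserver, kube-controller-manager, kube-proxy and kube-scheduler should all appear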
	I1104 11:34:33.500379   56868 ssh_runner.go:195] Run: sudo crictl images --output json
	I1104 11:34:33.532356   56868 command_runner.go:130] > {
	I1104 11:34:33.532381   56868 command_runner.go:130] >   "images": [
	I1104 11:34:33.532387   56868 command_runner.go:130] >     {
	I1104 11:34:33.532397   56868 command_runner.go:130] >       "id": "3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52",
	I1104 11:34:33.532403   56868 command_runner.go:130] >       "repoTags": [
	I1104 11:34:33.532412   56868 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20241007-36f62932"
	I1104 11:34:33.532416   56868 command_runner.go:130] >       ],
	I1104 11:34:33.532421   56868 command_runner.go:130] >       "repoDigests": [
	I1104 11:34:33.532432   56868 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387",
	I1104 11:34:33.532445   56868 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e1b7077a015216fd2772941babf3d3204abecf98b97d82ecd149d00212c55fa7"
	I1104 11:34:33.532450   56868 command_runner.go:130] >       ],
	I1104 11:34:33.532457   56868 command_runner.go:130] >       "size": "94965812",
	I1104 11:34:33.532463   56868 command_runner.go:130] >       "uid": null,
	I1104 11:34:33.532470   56868 command_runner.go:130] >       "username": "",
	I1104 11:34:33.532483   56868 command_runner.go:130] >       "spec": null,
	I1104 11:34:33.532491   56868 command_runner.go:130] >       "pinned": false
	I1104 11:34:33.532497   56868 command_runner.go:130] >     },
	I1104 11:34:33.532503   56868 command_runner.go:130] >     {
	I1104 11:34:33.532514   56868 command_runner.go:130] >       "id": "9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5",
	I1104 11:34:33.532523   56868 command_runner.go:130] >       "repoTags": [
	I1104 11:34:33.532532   56868 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20241023-a345ebe4"
	I1104 11:34:33.532546   56868 command_runner.go:130] >       ],
	I1104 11:34:33.532552   56868 command_runner.go:130] >       "repoDigests": [
	I1104 11:34:33.532566   56868 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16",
	I1104 11:34:33.532581   56868 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e39a44bd13d0b4532d0436a1c2fafdd1a8c57fb327770004098162f0bb96132d"
	I1104 11:34:33.532590   56868 command_runner.go:130] >       ],
	I1104 11:34:33.532597   56868 command_runner.go:130] >       "size": "94958644",
	I1104 11:34:33.532604   56868 command_runner.go:130] >       "uid": null,
	I1104 11:34:33.532616   56868 command_runner.go:130] >       "username": "",
	I1104 11:34:33.532625   56868 command_runner.go:130] >       "spec": null,
	I1104 11:34:33.532632   56868 command_runner.go:130] >       "pinned": false
	I1104 11:34:33.532638   56868 command_runner.go:130] >     },
	I1104 11:34:33.532643   56868 command_runner.go:130] >     {
	I1104 11:34:33.532654   56868 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I1104 11:34:33.532663   56868 command_runner.go:130] >       "repoTags": [
	I1104 11:34:33.532672   56868 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I1104 11:34:33.532680   56868 command_runner.go:130] >       ],
	I1104 11:34:33.532688   56868 command_runner.go:130] >       "repoDigests": [
	I1104 11:34:33.532703   56868 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I1104 11:34:33.532718   56868 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I1104 11:34:33.532727   56868 command_runner.go:130] >       ],
	I1104 11:34:33.532735   56868 command_runner.go:130] >       "size": "1363676",
	I1104 11:34:33.532745   56868 command_runner.go:130] >       "uid": null,
	I1104 11:34:33.532755   56868 command_runner.go:130] >       "username": "",
	I1104 11:34:33.532774   56868 command_runner.go:130] >       "spec": null,
	I1104 11:34:33.532784   56868 command_runner.go:130] >       "pinned": false
	I1104 11:34:33.532789   56868 command_runner.go:130] >     },
	I1104 11:34:33.532794   56868 command_runner.go:130] >     {
	I1104 11:34:33.532804   56868 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1104 11:34:33.532814   56868 command_runner.go:130] >       "repoTags": [
	I1104 11:34:33.532826   56868 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1104 11:34:33.532834   56868 command_runner.go:130] >       ],
	I1104 11:34:33.532843   56868 command_runner.go:130] >       "repoDigests": [
	I1104 11:34:33.532860   56868 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1104 11:34:33.532880   56868 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1104 11:34:33.532888   56868 command_runner.go:130] >       ],
	I1104 11:34:33.532895   56868 command_runner.go:130] >       "size": "31470524",
	I1104 11:34:33.532901   56868 command_runner.go:130] >       "uid": null,
	I1104 11:34:33.532910   56868 command_runner.go:130] >       "username": "",
	I1104 11:34:33.532917   56868 command_runner.go:130] >       "spec": null,
	I1104 11:34:33.532927   56868 command_runner.go:130] >       "pinned": false
	I1104 11:34:33.532933   56868 command_runner.go:130] >     },
	I1104 11:34:33.532940   56868 command_runner.go:130] >     {
	I1104 11:34:33.532953   56868 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I1104 11:34:33.532964   56868 command_runner.go:130] >       "repoTags": [
	I1104 11:34:33.532975   56868 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I1104 11:34:33.532984   56868 command_runner.go:130] >       ],
	I1104 11:34:33.532991   56868 command_runner.go:130] >       "repoDigests": [
	I1104 11:34:33.533008   56868 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I1104 11:34:33.533024   56868 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I1104 11:34:33.533032   56868 command_runner.go:130] >       ],
	I1104 11:34:33.533040   56868 command_runner.go:130] >       "size": "63273227",
	I1104 11:34:33.533049   56868 command_runner.go:130] >       "uid": null,
	I1104 11:34:33.533057   56868 command_runner.go:130] >       "username": "nonroot",
	I1104 11:34:33.533065   56868 command_runner.go:130] >       "spec": null,
	I1104 11:34:33.533086   56868 command_runner.go:130] >       "pinned": false
	I1104 11:34:33.533102   56868 command_runner.go:130] >     },
	I1104 11:34:33.533111   56868 command_runner.go:130] >     {
	I1104 11:34:33.533123   56868 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I1104 11:34:33.533133   56868 command_runner.go:130] >       "repoTags": [
	I1104 11:34:33.533144   56868 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I1104 11:34:33.533153   56868 command_runner.go:130] >       ],
	I1104 11:34:33.533161   56868 command_runner.go:130] >       "repoDigests": [
	I1104 11:34:33.533177   56868 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I1104 11:34:33.533197   56868 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I1104 11:34:33.533205   56868 command_runner.go:130] >       ],
	I1104 11:34:33.533213   56868 command_runner.go:130] >       "size": "149009664",
	I1104 11:34:33.533222   56868 command_runner.go:130] >       "uid": {
	I1104 11:34:33.533240   56868 command_runner.go:130] >         "value": "0"
	I1104 11:34:33.533252   56868 command_runner.go:130] >       },
	I1104 11:34:33.533261   56868 command_runner.go:130] >       "username": "",
	I1104 11:34:33.533270   56868 command_runner.go:130] >       "spec": null,
	I1104 11:34:33.533278   56868 command_runner.go:130] >       "pinned": false
	I1104 11:34:33.533286   56868 command_runner.go:130] >     },
	I1104 11:34:33.533293   56868 command_runner.go:130] >     {
	I1104 11:34:33.533307   56868 command_runner.go:130] >       "id": "9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173",
	I1104 11:34:33.533316   56868 command_runner.go:130] >       "repoTags": [
	I1104 11:34:33.533325   56868 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.2"
	I1104 11:34:33.533333   56868 command_runner.go:130] >       ],
	I1104 11:34:33.533341   56868 command_runner.go:130] >       "repoDigests": [
	I1104 11:34:33.533356   56868 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0",
	I1104 11:34:33.533372   56868 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a4fdc0ebc2950d76f2859c5f38f2d05b760ed09fd8006d7a8d98dd9b30bc55da"
	I1104 11:34:33.533382   56868 command_runner.go:130] >       ],
	I1104 11:34:33.533393   56868 command_runner.go:130] >       "size": "95274464",
	I1104 11:34:33.533402   56868 command_runner.go:130] >       "uid": {
	I1104 11:34:33.533410   56868 command_runner.go:130] >         "value": "0"
	I1104 11:34:33.533418   56868 command_runner.go:130] >       },
	I1104 11:34:33.533426   56868 command_runner.go:130] >       "username": "",
	I1104 11:34:33.533437   56868 command_runner.go:130] >       "spec": null,
	I1104 11:34:33.533445   56868 command_runner.go:130] >       "pinned": false
	I1104 11:34:33.533453   56868 command_runner.go:130] >     },
	I1104 11:34:33.533459   56868 command_runner.go:130] >     {
	I1104 11:34:33.533473   56868 command_runner.go:130] >       "id": "0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503",
	I1104 11:34:33.533483   56868 command_runner.go:130] >       "repoTags": [
	I1104 11:34:33.533494   56868 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.2"
	I1104 11:34:33.533502   56868 command_runner.go:130] >       ],
	I1104 11:34:33.533510   56868 command_runner.go:130] >       "repoDigests": [
	I1104 11:34:33.533534   56868 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:4ba16ce7d80945dc4bb8e85ac0794c6171bfa8a55c94fe5be415afb4c3eb938c",
	I1104 11:34:33.533550   56868 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752"
	I1104 11:34:33.533558   56868 command_runner.go:130] >       ],
	I1104 11:34:33.533566   56868 command_runner.go:130] >       "size": "89474374",
	I1104 11:34:33.533576   56868 command_runner.go:130] >       "uid": {
	I1104 11:34:33.533586   56868 command_runner.go:130] >         "value": "0"
	I1104 11:34:33.533592   56868 command_runner.go:130] >       },
	I1104 11:34:33.533600   56868 command_runner.go:130] >       "username": "",
	I1104 11:34:33.533607   56868 command_runner.go:130] >       "spec": null,
	I1104 11:34:33.533617   56868 command_runner.go:130] >       "pinned": false
	I1104 11:34:33.533623   56868 command_runner.go:130] >     },
	I1104 11:34:33.533631   56868 command_runner.go:130] >     {
	I1104 11:34:33.533642   56868 command_runner.go:130] >       "id": "505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38",
	I1104 11:34:33.533650   56868 command_runner.go:130] >       "repoTags": [
	I1104 11:34:33.533660   56868 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.2"
	I1104 11:34:33.533669   56868 command_runner.go:130] >       ],
	I1104 11:34:33.533677   56868 command_runner.go:130] >       "repoDigests": [
	I1104 11:34:33.533693   56868 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:22535649599e9f22b1b857afcbd9a8b36be238b2b3ea68e47f60bedcea48cd3b",
	I1104 11:34:33.533711   56868 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe"
	I1104 11:34:33.533720   56868 command_runner.go:130] >       ],
	I1104 11:34:33.533728   56868 command_runner.go:130] >       "size": "92783513",
	I1104 11:34:33.533737   56868 command_runner.go:130] >       "uid": null,
	I1104 11:34:33.533744   56868 command_runner.go:130] >       "username": "",
	I1104 11:34:33.533751   56868 command_runner.go:130] >       "spec": null,
	I1104 11:34:33.533762   56868 command_runner.go:130] >       "pinned": false
	I1104 11:34:33.533770   56868 command_runner.go:130] >     },
	I1104 11:34:33.533778   56868 command_runner.go:130] >     {
	I1104 11:34:33.533789   56868 command_runner.go:130] >       "id": "847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856",
	I1104 11:34:33.533799   56868 command_runner.go:130] >       "repoTags": [
	I1104 11:34:33.533809   56868 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.2"
	I1104 11:34:33.533817   56868 command_runner.go:130] >       ],
	I1104 11:34:33.533825   56868 command_runner.go:130] >       "repoDigests": [
	I1104 11:34:33.533840   56868 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282",
	I1104 11:34:33.533855   56868 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:a40aba236dfcd0fe9d1258dcb9d22a82d83e9ea6d35f7d3574949b12df928fe5"
	I1104 11:34:33.533865   56868 command_runner.go:130] >       ],
	I1104 11:34:33.533872   56868 command_runner.go:130] >       "size": "68457798",
	I1104 11:34:33.533881   56868 command_runner.go:130] >       "uid": {
	I1104 11:34:33.533889   56868 command_runner.go:130] >         "value": "0"
	I1104 11:34:33.533897   56868 command_runner.go:130] >       },
	I1104 11:34:33.533903   56868 command_runner.go:130] >       "username": "",
	I1104 11:34:33.533913   56868 command_runner.go:130] >       "spec": null,
	I1104 11:34:33.533920   56868 command_runner.go:130] >       "pinned": false
	I1104 11:34:33.533928   56868 command_runner.go:130] >     },
	I1104 11:34:33.533935   56868 command_runner.go:130] >     {
	I1104 11:34:33.533949   56868 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I1104 11:34:33.533959   56868 command_runner.go:130] >       "repoTags": [
	I1104 11:34:33.533970   56868 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I1104 11:34:33.533978   56868 command_runner.go:130] >       ],
	I1104 11:34:33.533984   56868 command_runner.go:130] >       "repoDigests": [
	I1104 11:34:33.533999   56868 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I1104 11:34:33.534017   56868 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I1104 11:34:33.534024   56868 command_runner.go:130] >       ],
	I1104 11:34:33.534032   56868 command_runner.go:130] >       "size": "742080",
	I1104 11:34:33.534041   56868 command_runner.go:130] >       "uid": {
	I1104 11:34:33.534049   56868 command_runner.go:130] >         "value": "65535"
	I1104 11:34:33.534057   56868 command_runner.go:130] >       },
	I1104 11:34:33.534065   56868 command_runner.go:130] >       "username": "",
	I1104 11:34:33.534073   56868 command_runner.go:130] >       "spec": null,
	I1104 11:34:33.534084   56868 command_runner.go:130] >       "pinned": true
	I1104 11:34:33.534092   56868 command_runner.go:130] >     }
	I1104 11:34:33.534098   56868 command_runner.go:130] >   ]
	I1104 11:34:33.534107   56868 command_runner.go:130] > }
	I1104 11:34:33.534245   56868 crio.go:514] all images are preloaded for cri-o runtime.
	I1104 11:34:33.534257   56868 cache_images.go:84] Images are preloaded, skipping loading
	I1104 11:34:33.534265   56868 kubeadm.go:934] updating node { 192.168.39.86 8443 v1.31.2 crio true true} ...
	I1104 11:34:33.534381   56868 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-453447 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.86
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:multinode-453447 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
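The kubelet unit override printed above (kubeadm.go:946) pins the node IP and hostname override that kubeadm will use on this control-plane node. Once the node is up, the effective flags can be confirmed directly on the guest; both commands below are illustrative checks, not part of the test run:

    # Show the effective kubelet unit, including minikube's ExecStart override
    sudo systemctl cat kubelet
    # Confirm the running process carries the expected --node-ip / --hostname-override flags
    pgrep -af kubelet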
	I1104 11:34:33.534471   56868 ssh_runner.go:195] Run: crio config
	I1104 11:34:33.644071   56868 command_runner.go:130] ! time="2024-11-04 11:34:33.617697441Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I1104 11:34:33.652825   56868 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1104 11:34:33.660534   56868 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1104 11:34:33.660555   56868 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1104 11:34:33.660561   56868 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1104 11:34:33.660565   56868 command_runner.go:130] > #
	I1104 11:34:33.660571   56868 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1104 11:34:33.660578   56868 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1104 11:34:33.660583   56868 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1104 11:34:33.660592   56868 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1104 11:34:33.660597   56868 command_runner.go:130] > # reload'.
	I1104 11:34:33.660606   56868 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1104 11:34:33.660615   56868 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1104 11:34:33.660629   56868 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1104 11:34:33.660638   56868 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1104 11:34:33.660642   56868 command_runner.go:130] > [crio]
	I1104 11:34:33.660648   56868 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1104 11:34:33.660656   56868 command_runner.go:130] > # containers images, in this directory.
	I1104 11:34:33.660660   56868 command_runner.go:130] > root = "/var/lib/containers/storage"
	I1104 11:34:33.660671   56868 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1104 11:34:33.660676   56868 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I1104 11:34:33.660683   56868 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1104 11:34:33.660688   56868 command_runner.go:130] > # imagestore = ""
	I1104 11:34:33.660703   56868 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1104 11:34:33.660717   56868 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1104 11:34:33.660725   56868 command_runner.go:130] > storage_driver = "overlay"
	I1104 11:34:33.660735   56868 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1104 11:34:33.660748   56868 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1104 11:34:33.660758   56868 command_runner.go:130] > storage_option = [
	I1104 11:34:33.660765   56868 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I1104 11:34:33.660769   56868 command_runner.go:130] > ]
	I1104 11:34:33.660777   56868 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1104 11:34:33.660785   56868 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1104 11:34:33.660789   56868 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1104 11:34:33.660797   56868 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1104 11:34:33.660803   56868 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1104 11:34:33.660813   56868 command_runner.go:130] > # always happen on a node reboot
	I1104 11:34:33.660825   56868 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1104 11:34:33.660842   56868 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1104 11:34:33.660854   56868 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1104 11:34:33.660863   56868 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1104 11:34:33.660871   56868 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I1104 11:34:33.660880   56868 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1104 11:34:33.660889   56868 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1104 11:34:33.660895   56868 command_runner.go:130] > # internal_wipe = true
	I1104 11:34:33.660906   56868 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1104 11:34:33.660919   56868 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1104 11:34:33.660928   56868 command_runner.go:130] > # internal_repair = false
	I1104 11:34:33.660940   56868 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1104 11:34:33.660952   56868 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1104 11:34:33.660963   56868 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1104 11:34:33.660971   56868 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1104 11:34:33.660979   56868 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1104 11:34:33.660986   56868 command_runner.go:130] > [crio.api]
	I1104 11:34:33.660995   56868 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1104 11:34:33.661005   56868 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1104 11:34:33.661019   56868 command_runner.go:130] > # IP address on which the stream server will listen.
	I1104 11:34:33.661029   56868 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1104 11:34:33.661042   56868 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1104 11:34:33.661053   56868 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1104 11:34:33.661063   56868 command_runner.go:130] > # stream_port = "0"
	I1104 11:34:33.661073   56868 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1104 11:34:33.661079   56868 command_runner.go:130] > # stream_enable_tls = false
	I1104 11:34:33.661088   56868 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1104 11:34:33.661097   56868 command_runner.go:130] > # stream_idle_timeout = ""
	I1104 11:34:33.661114   56868 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1104 11:34:33.661126   56868 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1104 11:34:33.661135   56868 command_runner.go:130] > # minutes.
	I1104 11:34:33.661143   56868 command_runner.go:130] > # stream_tls_cert = ""
	I1104 11:34:33.661155   56868 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1104 11:34:33.661165   56868 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1104 11:34:33.661173   56868 command_runner.go:130] > # stream_tls_key = ""
	I1104 11:34:33.661190   56868 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1104 11:34:33.661205   56868 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1104 11:34:33.661221   56868 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1104 11:34:33.661241   56868 command_runner.go:130] > # stream_tls_ca = ""
	I1104 11:34:33.661256   56868 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1104 11:34:33.661266   56868 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I1104 11:34:33.661278   56868 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1104 11:34:33.661287   56868 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I1104 11:34:33.661300   56868 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1104 11:34:33.661313   56868 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1104 11:34:33.661322   56868 command_runner.go:130] > [crio.runtime]
	I1104 11:34:33.661334   56868 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1104 11:34:33.661345   56868 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1104 11:34:33.661354   56868 command_runner.go:130] > # "nofile=1024:2048"
	I1104 11:34:33.661366   56868 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1104 11:34:33.661373   56868 command_runner.go:130] > # default_ulimits = [
	I1104 11:34:33.661378   56868 command_runner.go:130] > # ]
	I1104 11:34:33.661400   56868 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1104 11:34:33.661410   56868 command_runner.go:130] > # no_pivot = false
	I1104 11:34:33.661419   56868 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1104 11:34:33.661432   56868 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1104 11:34:33.661443   56868 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1104 11:34:33.661453   56868 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1104 11:34:33.661463   56868 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1104 11:34:33.661476   56868 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1104 11:34:33.661484   56868 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I1104 11:34:33.661489   56868 command_runner.go:130] > # Cgroup setting for conmon
	I1104 11:34:33.661503   56868 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1104 11:34:33.661512   56868 command_runner.go:130] > conmon_cgroup = "pod"
	I1104 11:34:33.661522   56868 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1104 11:34:33.661534   56868 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1104 11:34:33.661551   56868 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1104 11:34:33.661559   56868 command_runner.go:130] > conmon_env = [
	I1104 11:34:33.661571   56868 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1104 11:34:33.661579   56868 command_runner.go:130] > ]
	I1104 11:34:33.661588   56868 command_runner.go:130] > # Additional environment variables to set for all the
	I1104 11:34:33.661597   56868 command_runner.go:130] > # containers. These are overridden if set in the
	I1104 11:34:33.661610   56868 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1104 11:34:33.661621   56868 command_runner.go:130] > # default_env = [
	I1104 11:34:33.661629   56868 command_runner.go:130] > # ]
	I1104 11:34:33.661644   56868 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1104 11:34:33.661658   56868 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I1104 11:34:33.661667   56868 command_runner.go:130] > # selinux = false
	I1104 11:34:33.661678   56868 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1104 11:34:33.661687   56868 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1104 11:34:33.661700   56868 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1104 11:34:33.661709   56868 command_runner.go:130] > # seccomp_profile = ""
	I1104 11:34:33.661719   56868 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1104 11:34:33.661731   56868 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1104 11:34:33.661744   56868 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1104 11:34:33.661755   56868 command_runner.go:130] > # which might increase security.
	I1104 11:34:33.661765   56868 command_runner.go:130] > # This option is currently deprecated,
	I1104 11:34:33.661777   56868 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I1104 11:34:33.661784   56868 command_runner.go:130] > seccomp_use_default_when_empty = false
	I1104 11:34:33.661793   56868 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1104 11:34:33.661807   56868 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1104 11:34:33.661819   56868 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1104 11:34:33.661833   56868 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1104 11:34:33.661844   56868 command_runner.go:130] > # This option supports live configuration reload.
	I1104 11:34:33.661854   56868 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1104 11:34:33.661865   56868 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1104 11:34:33.661871   56868 command_runner.go:130] > # the cgroup blockio controller.
	I1104 11:34:33.661878   56868 command_runner.go:130] > # blockio_config_file = ""
	I1104 11:34:33.661892   56868 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1104 11:34:33.661902   56868 command_runner.go:130] > # blockio parameters.
	I1104 11:34:33.661912   56868 command_runner.go:130] > # blockio_reload = false
	I1104 11:34:33.661925   56868 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1104 11:34:33.661935   56868 command_runner.go:130] > # irqbalance daemon.
	I1104 11:34:33.661946   56868 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1104 11:34:33.661958   56868 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1104 11:34:33.661971   56868 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1104 11:34:33.661985   56868 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1104 11:34:33.661999   56868 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1104 11:34:33.662012   56868 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1104 11:34:33.662023   56868 command_runner.go:130] > # This option supports live configuration reload.
	I1104 11:34:33.662033   56868 command_runner.go:130] > # rdt_config_file = ""
	I1104 11:34:33.662044   56868 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1104 11:34:33.662052   56868 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1104 11:34:33.662074   56868 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1104 11:34:33.662085   56868 command_runner.go:130] > # separate_pull_cgroup = ""
	I1104 11:34:33.662095   56868 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1104 11:34:33.662108   56868 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1104 11:34:33.662117   56868 command_runner.go:130] > # will be added.
	I1104 11:34:33.662127   56868 command_runner.go:130] > # default_capabilities = [
	I1104 11:34:33.662135   56868 command_runner.go:130] > # 	"CHOWN",
	I1104 11:34:33.662143   56868 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1104 11:34:33.662151   56868 command_runner.go:130] > # 	"FSETID",
	I1104 11:34:33.662157   56868 command_runner.go:130] > # 	"FOWNER",
	I1104 11:34:33.662162   56868 command_runner.go:130] > # 	"SETGID",
	I1104 11:34:33.662171   56868 command_runner.go:130] > # 	"SETUID",
	I1104 11:34:33.662185   56868 command_runner.go:130] > # 	"SETPCAP",
	I1104 11:34:33.662195   56868 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1104 11:34:33.662204   56868 command_runner.go:130] > # 	"KILL",
	I1104 11:34:33.662213   56868 command_runner.go:130] > # ]
	I1104 11:34:33.662227   56868 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1104 11:34:33.662241   56868 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1104 11:34:33.662250   56868 command_runner.go:130] > # add_inheritable_capabilities = false
	I1104 11:34:33.662261   56868 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1104 11:34:33.662274   56868 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1104 11:34:33.662284   56868 command_runner.go:130] > default_sysctls = [
	I1104 11:34:33.662292   56868 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1104 11:34:33.662301   56868 command_runner.go:130] > ]
	I1104 11:34:33.662311   56868 command_runner.go:130] > # List of devices on the host that a
	I1104 11:34:33.662324   56868 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1104 11:34:33.662333   56868 command_runner.go:130] > # allowed_devices = [
	I1104 11:34:33.662342   56868 command_runner.go:130] > # 	"/dev/fuse",
	I1104 11:34:33.662349   56868 command_runner.go:130] > # ]
	I1104 11:34:33.662355   56868 command_runner.go:130] > # List of additional devices, specified as
	I1104 11:34:33.662367   56868 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1104 11:34:33.662379   56868 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1104 11:34:33.662395   56868 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1104 11:34:33.662405   56868 command_runner.go:130] > # additional_devices = [
	I1104 11:34:33.662413   56868 command_runner.go:130] > # ]
	I1104 11:34:33.662424   56868 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1104 11:34:33.662433   56868 command_runner.go:130] > # cdi_spec_dirs = [
	I1104 11:34:33.662441   56868 command_runner.go:130] > # 	"/etc/cdi",
	I1104 11:34:33.662447   56868 command_runner.go:130] > # 	"/var/run/cdi",
	I1104 11:34:33.662455   56868 command_runner.go:130] > # ]
	I1104 11:34:33.662468   56868 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1104 11:34:33.662482   56868 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1104 11:34:33.662491   56868 command_runner.go:130] > # Defaults to false.
	I1104 11:34:33.662503   56868 command_runner.go:130] > # device_ownership_from_security_context = false
	I1104 11:34:33.662515   56868 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1104 11:34:33.662528   56868 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1104 11:34:33.662535   56868 command_runner.go:130] > # hooks_dir = [
	I1104 11:34:33.662540   56868 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1104 11:34:33.662548   56868 command_runner.go:130] > # ]
	I1104 11:34:33.662561   56868 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1104 11:34:33.662574   56868 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1104 11:34:33.662585   56868 command_runner.go:130] > # its default mounts from the following two files:
	I1104 11:34:33.662592   56868 command_runner.go:130] > #
	I1104 11:34:33.662605   56868 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1104 11:34:33.662617   56868 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1104 11:34:33.662626   56868 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1104 11:34:33.662634   56868 command_runner.go:130] > #
	I1104 11:34:33.662643   56868 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1104 11:34:33.662656   56868 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1104 11:34:33.662670   56868 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1104 11:34:33.662682   56868 command_runner.go:130] > #      only add mounts it finds in this file.
	I1104 11:34:33.662690   56868 command_runner.go:130] > #
	I1104 11:34:33.662699   56868 command_runner.go:130] > # default_mounts_file = ""
	I1104 11:34:33.662710   56868 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1104 11:34:33.662723   56868 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1104 11:34:33.662729   56868 command_runner.go:130] > pids_limit = 1024
	I1104 11:34:33.662739   56868 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1104 11:34:33.662753   56868 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1104 11:34:33.662766   56868 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1104 11:34:33.662782   56868 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1104 11:34:33.662792   56868 command_runner.go:130] > # log_size_max = -1
	I1104 11:34:33.662806   56868 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1104 11:34:33.662817   56868 command_runner.go:130] > # log_to_journald = false
	I1104 11:34:33.662826   56868 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1104 11:34:33.662837   56868 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1104 11:34:33.662849   56868 command_runner.go:130] > # Path to directory for container attach sockets.
	I1104 11:34:33.662861   56868 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1104 11:34:33.662872   56868 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1104 11:34:33.662882   56868 command_runner.go:130] > # bind_mount_prefix = ""
	I1104 11:34:33.662894   56868 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1104 11:34:33.662903   56868 command_runner.go:130] > # read_only = false
	I1104 11:34:33.662913   56868 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1104 11:34:33.662922   56868 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1104 11:34:33.662931   56868 command_runner.go:130] > # live configuration reload.
	I1104 11:34:33.662940   56868 command_runner.go:130] > # log_level = "info"
	I1104 11:34:33.662950   56868 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1104 11:34:33.662962   56868 command_runner.go:130] > # This option supports live configuration reload.
	I1104 11:34:33.662972   56868 command_runner.go:130] > # log_filter = ""
	I1104 11:34:33.662985   56868 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1104 11:34:33.662999   56868 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1104 11:34:33.663007   56868 command_runner.go:130] > # separated by comma.
	I1104 11:34:33.663018   56868 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1104 11:34:33.663027   56868 command_runner.go:130] > # uid_mappings = ""
	I1104 11:34:33.663040   56868 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1104 11:34:33.663053   56868 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1104 11:34:33.663063   56868 command_runner.go:130] > # separated by comma.
	I1104 11:34:33.663078   56868 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1104 11:34:33.663087   56868 command_runner.go:130] > # gid_mappings = ""
	I1104 11:34:33.663100   56868 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1104 11:34:33.663109   56868 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1104 11:34:33.663121   56868 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1104 11:34:33.663137   56868 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1104 11:34:33.663147   56868 command_runner.go:130] > # minimum_mappable_uid = -1
	I1104 11:34:33.663159   56868 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1104 11:34:33.663172   56868 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1104 11:34:33.663188   56868 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1104 11:34:33.663199   56868 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1104 11:34:33.663211   56868 command_runner.go:130] > # minimum_mappable_gid = -1
	I1104 11:34:33.663224   56868 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1104 11:34:33.663236   56868 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1104 11:34:33.663249   56868 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1104 11:34:33.663258   56868 command_runner.go:130] > # ctr_stop_timeout = 30
	I1104 11:34:33.663271   56868 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1104 11:34:33.663282   56868 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1104 11:34:33.663289   56868 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1104 11:34:33.663297   56868 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1104 11:34:33.663308   56868 command_runner.go:130] > drop_infra_ctr = false
	I1104 11:34:33.663321   56868 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1104 11:34:33.663332   56868 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1104 11:34:33.663346   56868 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1104 11:34:33.663356   56868 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1104 11:34:33.663369   56868 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I1104 11:34:33.663377   56868 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1104 11:34:33.663389   56868 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1104 11:34:33.663401   56868 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1104 11:34:33.663411   56868 command_runner.go:130] > # shared_cpuset = ""
	I1104 11:34:33.663423   56868 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1104 11:34:33.663433   56868 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1104 11:34:33.663443   56868 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1104 11:34:33.663454   56868 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1104 11:34:33.663463   56868 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I1104 11:34:33.663471   56868 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1104 11:34:33.663483   56868 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1104 11:34:33.663495   56868 command_runner.go:130] > # enable_criu_support = false
	I1104 11:34:33.663506   56868 command_runner.go:130] > # Enable/disable the generation of the container,
	I1104 11:34:33.663519   56868 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1104 11:34:33.663529   56868 command_runner.go:130] > # enable_pod_events = false
	I1104 11:34:33.663543   56868 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1104 11:34:33.663564   56868 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1104 11:34:33.663572   56868 command_runner.go:130] > # default_runtime = "runc"
	I1104 11:34:33.663583   56868 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1104 11:34:33.663598   56868 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1104 11:34:33.663616   56868 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1104 11:34:33.663631   56868 command_runner.go:130] > # creation as a file is not desired either.
	I1104 11:34:33.663647   56868 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1104 11:34:33.663657   56868 command_runner.go:130] > # the hostname is being managed dynamically.
	I1104 11:34:33.663664   56868 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1104 11:34:33.663668   56868 command_runner.go:130] > # ]
	I1104 11:34:33.663681   56868 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1104 11:34:33.663695   56868 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1104 11:34:33.663708   56868 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1104 11:34:33.663719   56868 command_runner.go:130] > # Each entry in the table should follow the format:
	I1104 11:34:33.663727   56868 command_runner.go:130] > #
	I1104 11:34:33.663735   56868 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1104 11:34:33.663745   56868 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1104 11:34:33.663770   56868 command_runner.go:130] > # runtime_type = "oci"
	I1104 11:34:33.663780   56868 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1104 11:34:33.663791   56868 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1104 11:34:33.663798   56868 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1104 11:34:33.663809   56868 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1104 11:34:33.663818   56868 command_runner.go:130] > # monitor_env = []
	I1104 11:34:33.663828   56868 command_runner.go:130] > # privileged_without_host_devices = false
	I1104 11:34:33.663837   56868 command_runner.go:130] > # allowed_annotations = []
	I1104 11:34:33.663850   56868 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1104 11:34:33.663858   56868 command_runner.go:130] > # Where:
	I1104 11:34:33.663863   56868 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1104 11:34:33.663875   56868 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1104 11:34:33.663888   56868 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1104 11:34:33.663901   56868 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1104 11:34:33.663911   56868 command_runner.go:130] > #   in $PATH.
	I1104 11:34:33.663924   56868 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1104 11:34:33.663934   56868 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1104 11:34:33.663947   56868 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1104 11:34:33.663953   56868 command_runner.go:130] > #   state.
	I1104 11:34:33.663961   56868 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1104 11:34:33.663973   56868 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1104 11:34:33.663988   56868 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1104 11:34:33.664000   56868 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1104 11:34:33.664013   56868 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1104 11:34:33.664027   56868 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1104 11:34:33.664041   56868 command_runner.go:130] > #   The currently recognized values are:
	I1104 11:34:33.664052   56868 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1104 11:34:33.664063   56868 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1104 11:34:33.664076   56868 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1104 11:34:33.664088   56868 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1104 11:34:33.664104   56868 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1104 11:34:33.664117   56868 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1104 11:34:33.664131   56868 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1104 11:34:33.664143   56868 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1104 11:34:33.664154   56868 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1104 11:34:33.664163   56868 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1104 11:34:33.664174   56868 command_runner.go:130] > #   deprecated option "conmon".
	I1104 11:34:33.664192   56868 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1104 11:34:33.664204   56868 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1104 11:34:33.664218   56868 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1104 11:34:33.664229   56868 command_runner.go:130] > #   should be moved to the container's cgroup
	I1104 11:34:33.664242   56868 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1104 11:34:33.664252   56868 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1104 11:34:33.664262   56868 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1104 11:34:33.664272   56868 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1104 11:34:33.664281   56868 command_runner.go:130] > #
	I1104 11:34:33.664289   56868 command_runner.go:130] > # Using the seccomp notifier feature:
	I1104 11:34:33.664300   56868 command_runner.go:130] > #
	I1104 11:34:33.664313   56868 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1104 11:34:33.664326   56868 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1104 11:34:33.664333   56868 command_runner.go:130] > #
	I1104 11:34:33.664346   56868 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1104 11:34:33.664355   56868 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1104 11:34:33.664362   56868 command_runner.go:130] > #
	I1104 11:34:33.664372   56868 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1104 11:34:33.664382   56868 command_runner.go:130] > # feature.
	I1104 11:34:33.664388   56868 command_runner.go:130] > #
	I1104 11:34:33.664401   56868 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1104 11:34:33.664413   56868 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1104 11:34:33.664427   56868 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1104 11:34:33.664443   56868 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1104 11:34:33.664453   56868 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1104 11:34:33.664459   56868 command_runner.go:130] > #
	I1104 11:34:33.664469   56868 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1104 11:34:33.664482   56868 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1104 11:34:33.664490   56868 command_runner.go:130] > #
	I1104 11:34:33.664504   56868 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1104 11:34:33.664516   56868 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1104 11:34:33.664524   56868 command_runner.go:130] > #
	I1104 11:34:33.664535   56868 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1104 11:34:33.664546   56868 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1104 11:34:33.664553   56868 command_runner.go:130] > # limitation.
	I1104 11:34:33.664560   56868 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1104 11:34:33.664570   56868 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I1104 11:34:33.664580   56868 command_runner.go:130] > runtime_type = "oci"
	I1104 11:34:33.664589   56868 command_runner.go:130] > runtime_root = "/run/runc"
	I1104 11:34:33.664598   56868 command_runner.go:130] > runtime_config_path = ""
	I1104 11:34:33.664609   56868 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1104 11:34:33.664618   56868 command_runner.go:130] > monitor_cgroup = "pod"
	I1104 11:34:33.664628   56868 command_runner.go:130] > monitor_exec_cgroup = ""
	I1104 11:34:33.664635   56868 command_runner.go:130] > monitor_env = [
	I1104 11:34:33.664642   56868 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1104 11:34:33.664650   56868 command_runner.go:130] > ]
	I1104 11:34:33.664661   56868 command_runner.go:130] > privileged_without_host_devices = false
	I1104 11:34:33.664674   56868 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1104 11:34:33.664685   56868 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1104 11:34:33.664698   56868 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1104 11:34:33.664713   56868 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1104 11:34:33.664727   56868 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1104 11:34:33.664734   56868 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1104 11:34:33.664754   56868 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1104 11:34:33.664770   56868 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1104 11:34:33.664780   56868 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1104 11:34:33.664792   56868 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1104 11:34:33.664798   56868 command_runner.go:130] > # Example:
	I1104 11:34:33.664805   56868 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1104 11:34:33.664813   56868 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1104 11:34:33.664828   56868 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1104 11:34:33.664834   56868 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1104 11:34:33.664838   56868 command_runner.go:130] > # cpuset = 0
	I1104 11:34:33.664843   56868 command_runner.go:130] > # cpushares = "0-1"
	I1104 11:34:33.664848   56868 command_runner.go:130] > # Where:
	I1104 11:34:33.664855   56868 command_runner.go:130] > # The workload name is workload-type.
	I1104 11:34:33.664866   56868 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1104 11:34:33.664875   56868 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1104 11:34:33.664883   56868 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1104 11:34:33.664895   56868 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1104 11:34:33.664905   56868 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1104 11:34:33.664912   56868 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1104 11:34:33.664920   56868 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1104 11:34:33.664924   56868 command_runner.go:130] > # Default value is set to true
	I1104 11:34:33.664930   56868 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1104 11:34:33.664939   56868 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1104 11:34:33.664948   56868 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1104 11:34:33.664956   56868 command_runner.go:130] > # Default value is set to 'false'
	I1104 11:34:33.664967   56868 command_runner.go:130] > # disable_hostport_mapping = false
	I1104 11:34:33.664981   56868 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1104 11:34:33.664988   56868 command_runner.go:130] > #
	I1104 11:34:33.664998   56868 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1104 11:34:33.665007   56868 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1104 11:34:33.665016   56868 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1104 11:34:33.665029   56868 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1104 11:34:33.665041   56868 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1104 11:34:33.665049   56868 command_runner.go:130] > [crio.image]
	I1104 11:34:33.665062   56868 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1104 11:34:33.665071   56868 command_runner.go:130] > # default_transport = "docker://"
	I1104 11:34:33.665084   56868 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1104 11:34:33.665093   56868 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1104 11:34:33.665102   56868 command_runner.go:130] > # global_auth_file = ""
	I1104 11:34:33.665113   56868 command_runner.go:130] > # The image used to instantiate infra containers.
	I1104 11:34:33.665123   56868 command_runner.go:130] > # This option supports live configuration reload.
	I1104 11:34:33.665134   56868 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I1104 11:34:33.665148   56868 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1104 11:34:33.665159   56868 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1104 11:34:33.665170   56868 command_runner.go:130] > # This option supports live configuration reload.
	I1104 11:34:33.665185   56868 command_runner.go:130] > # pause_image_auth_file = ""
	I1104 11:34:33.665197   56868 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1104 11:34:33.665209   56868 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1104 11:34:33.665222   56868 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1104 11:34:33.665247   56868 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1104 11:34:33.665257   56868 command_runner.go:130] > # pause_command = "/pause"
	I1104 11:34:33.665267   56868 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1104 11:34:33.665280   56868 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1104 11:34:33.665292   56868 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1104 11:34:33.665306   56868 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1104 11:34:33.665319   56868 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1104 11:34:33.665333   56868 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1104 11:34:33.665342   56868 command_runner.go:130] > # pinned_images = [
	I1104 11:34:33.665350   56868 command_runner.go:130] > # ]
	I1104 11:34:33.665362   56868 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1104 11:34:33.665372   56868 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1104 11:34:33.665382   56868 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1104 11:34:33.665395   56868 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1104 11:34:33.665407   56868 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1104 11:34:33.665417   56868 command_runner.go:130] > # signature_policy = ""
	I1104 11:34:33.665428   56868 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1104 11:34:33.665441   56868 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1104 11:34:33.665451   56868 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1104 11:34:33.665464   56868 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I1104 11:34:33.665475   56868 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1104 11:34:33.665484   56868 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1104 11:34:33.665496   56868 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1104 11:34:33.665528   56868 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1104 11:34:33.665540   56868 command_runner.go:130] > # changing them here.
	I1104 11:34:33.665546   56868 command_runner.go:130] > # insecure_registries = [
	I1104 11:34:33.665552   56868 command_runner.go:130] > # ]
	I1104 11:34:33.665564   56868 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1104 11:34:33.665574   56868 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1104 11:34:33.665581   56868 command_runner.go:130] > # image_volumes = "mkdir"
	I1104 11:34:33.665589   56868 command_runner.go:130] > # Temporary directory to use for storing big files
	I1104 11:34:33.665598   56868 command_runner.go:130] > # big_files_temporary_dir = ""
	I1104 11:34:33.665614   56868 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1104 11:34:33.665623   56868 command_runner.go:130] > # CNI plugins.
	I1104 11:34:33.665632   56868 command_runner.go:130] > [crio.network]
	I1104 11:34:33.665643   56868 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1104 11:34:33.665655   56868 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1104 11:34:33.665663   56868 command_runner.go:130] > # cni_default_network = ""
	I1104 11:34:33.665669   56868 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1104 11:34:33.665679   56868 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1104 11:34:33.665693   56868 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1104 11:34:33.665703   56868 command_runner.go:130] > # plugin_dirs = [
	I1104 11:34:33.665712   56868 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1104 11:34:33.665720   56868 command_runner.go:130] > # ]
	I1104 11:34:33.665733   56868 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1104 11:34:33.665741   56868 command_runner.go:130] > [crio.metrics]
	I1104 11:34:33.665752   56868 command_runner.go:130] > # Globally enable or disable metrics support.
	I1104 11:34:33.665759   56868 command_runner.go:130] > enable_metrics = true
	I1104 11:34:33.665764   56868 command_runner.go:130] > # Specify enabled metrics collectors.
	I1104 11:34:33.665773   56868 command_runner.go:130] > # Per default all metrics are enabled.
	I1104 11:34:33.665786   56868 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1104 11:34:33.665799   56868 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1104 11:34:33.665812   56868 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1104 11:34:33.665821   56868 command_runner.go:130] > # metrics_collectors = [
	I1104 11:34:33.665829   56868 command_runner.go:130] > # 	"operations",
	I1104 11:34:33.665840   56868 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1104 11:34:33.665849   56868 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1104 11:34:33.665855   56868 command_runner.go:130] > # 	"operations_errors",
	I1104 11:34:33.665861   56868 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1104 11:34:33.665871   56868 command_runner.go:130] > # 	"image_pulls_by_name",
	I1104 11:34:33.665882   56868 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1104 11:34:33.665892   56868 command_runner.go:130] > # 	"image_pulls_failures",
	I1104 11:34:33.665901   56868 command_runner.go:130] > # 	"image_pulls_successes",
	I1104 11:34:33.665911   56868 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1104 11:34:33.665920   56868 command_runner.go:130] > # 	"image_layer_reuse",
	I1104 11:34:33.665930   56868 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1104 11:34:33.665940   56868 command_runner.go:130] > # 	"containers_oom_total",
	I1104 11:34:33.665948   56868 command_runner.go:130] > # 	"containers_oom",
	I1104 11:34:33.665952   56868 command_runner.go:130] > # 	"processes_defunct",
	I1104 11:34:33.665960   56868 command_runner.go:130] > # 	"operations_total",
	I1104 11:34:33.665970   56868 command_runner.go:130] > # 	"operations_latency_seconds",
	I1104 11:34:33.665981   56868 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1104 11:34:33.665988   56868 command_runner.go:130] > # 	"operations_errors_total",
	I1104 11:34:33.665999   56868 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1104 11:34:33.666010   56868 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1104 11:34:33.666020   56868 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1104 11:34:33.666029   56868 command_runner.go:130] > # 	"image_pulls_success_total",
	I1104 11:34:33.666041   56868 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1104 11:34:33.666049   56868 command_runner.go:130] > # 	"containers_oom_count_total",
	I1104 11:34:33.666054   56868 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1104 11:34:33.666063   56868 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1104 11:34:33.666072   56868 command_runner.go:130] > # ]
	I1104 11:34:33.666083   56868 command_runner.go:130] > # The port on which the metrics server will listen.
	I1104 11:34:33.666093   56868 command_runner.go:130] > # metrics_port = 9090
	I1104 11:34:33.666103   56868 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1104 11:34:33.666111   56868 command_runner.go:130] > # metrics_socket = ""
	I1104 11:34:33.666122   56868 command_runner.go:130] > # The certificate for the secure metrics server.
	I1104 11:34:33.666134   56868 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1104 11:34:33.666142   56868 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1104 11:34:33.666152   56868 command_runner.go:130] > # certificate on any modification event.
	I1104 11:34:33.666162   56868 command_runner.go:130] > # metrics_cert = ""
	I1104 11:34:33.666171   56868 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1104 11:34:33.666186   56868 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1104 11:34:33.666195   56868 command_runner.go:130] > # metrics_key = ""
	I1104 11:34:33.666208   56868 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1104 11:34:33.666216   56868 command_runner.go:130] > [crio.tracing]
	I1104 11:34:33.666227   56868 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1104 11:34:33.666234   56868 command_runner.go:130] > # enable_tracing = false
	I1104 11:34:33.666241   56868 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1104 11:34:33.666251   56868 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1104 11:34:33.666264   56868 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1104 11:34:33.666275   56868 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1104 11:34:33.666285   56868 command_runner.go:130] > # CRI-O NRI configuration.
	I1104 11:34:33.666293   56868 command_runner.go:130] > [crio.nri]
	I1104 11:34:33.666303   56868 command_runner.go:130] > # Globally enable or disable NRI.
	I1104 11:34:33.666312   56868 command_runner.go:130] > # enable_nri = false
	I1104 11:34:33.666323   56868 command_runner.go:130] > # NRI socket to listen on.
	I1104 11:34:33.666331   56868 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1104 11:34:33.666338   56868 command_runner.go:130] > # NRI plugin directory to use.
	I1104 11:34:33.666345   56868 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1104 11:34:33.666356   56868 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1104 11:34:33.666368   56868 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1104 11:34:33.666380   56868 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1104 11:34:33.666390   56868 command_runner.go:130] > # nri_disable_connections = false
	I1104 11:34:33.666402   56868 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1104 11:34:33.666412   56868 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1104 11:34:33.666421   56868 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1104 11:34:33.666430   56868 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1104 11:34:33.666441   56868 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1104 11:34:33.666450   56868 command_runner.go:130] > [crio.stats]
	I1104 11:34:33.666463   56868 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1104 11:34:33.666476   56868 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1104 11:34:33.666485   56868 command_runner.go:130] > # stats_collection_period = 0
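
The CRI-O settings dumped above are plain TOML under the [crio.*] tables. Below is a minimal Go sketch of how such a file could be decoded to check the two fields this run overrides (cgroup_manager and pause_image); it assumes the github.com/BurntSushi/toml package and the conventional /etc/crio/crio.conf path, neither of which is dictated by the log itself.

package main

import (
	"fmt"
	"log"

	"github.com/BurntSushi/toml"
)

// crioConfig models only the handful of fields inspected here; CRI-O's real
// configuration has many more tables and keys, as the dump above shows.
type crioConfig struct {
	Crio struct {
		Runtime struct {
			CgroupManager string `toml:"cgroup_manager"`
			Conmon        string `toml:"conmon"`
		} `toml:"runtime"`
		Image struct {
			PauseImage string `toml:"pause_image"`
		} `toml:"image"`
	} `toml:"crio"`
}

func main() {
	var cfg crioConfig
	// The path is an assumption; the log only shows the rendered contents.
	if _, err := toml.DecodeFile("/etc/crio/crio.conf", &cfg); err != nil {
		log.Fatal(err)
	}
	// For this run the dump shows "cgroupfs" and "registry.k8s.io/pause:3.10".
	fmt.Println("cgroup_manager:", cfg.Crio.Runtime.CgroupManager)
	fmt.Println("pause_image:", cfg.Crio.Image.PauseImage)
}
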
	I1104 11:34:33.666560   56868 cni.go:84] Creating CNI manager for ""
	I1104 11:34:33.666572   56868 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1104 11:34:33.666586   56868 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1104 11:34:33.666617   56868 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.86 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-453447 NodeName:multinode-453447 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.86"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.86 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1104 11:34:33.666758   56868 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.86
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-453447"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.86"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.86"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1104 11:34:33.666828   56868 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1104 11:34:33.680200   56868 command_runner.go:130] > kubeadm
	I1104 11:34:33.680218   56868 command_runner.go:130] > kubectl
	I1104 11:34:33.680222   56868 command_runner.go:130] > kubelet
	I1104 11:34:33.680238   56868 binaries.go:44] Found k8s binaries, skipping transfer
	I1104 11:34:33.680284   56868 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1104 11:34:33.696236   56868 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I1104 11:34:33.712481   56868 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1104 11:34:33.732373   56868 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2293 bytes)
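
The kubeadm configuration rendered above is a multi-document YAML file that minikube writes to /var/tmp/minikube/kubeadm.yaml.new before invoking kubeadm. As a rough illustration only (assuming gopkg.in/yaml.v3 is available; this decoding code is not part of minikube), the documents can be split and the networking fields shown in the log read back like this:

package main

import (
	"errors"
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	// Iterate over the YAML documents separated by "---".
	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err != nil {
			if errors.Is(err, io.EOF) {
				break
			}
			log.Fatal(err)
		}
		switch doc["kind"] {
		case "ClusterConfiguration":
			// For this run: podSubnet 10.244.0.0/16, serviceSubnet 10.96.0.0/12.
			fmt.Println("networking:", doc["networking"])
		case "KubeProxyConfiguration":
			fmt.Println("clusterCIDR:", doc["clusterCIDR"])
		}
	}
}
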
	I1104 11:34:33.755587   56868 ssh_runner.go:195] Run: grep 192.168.39.86	control-plane.minikube.internal$ /etc/hosts
	I1104 11:34:33.759571   56868 command_runner.go:130] > 192.168.39.86	control-plane.minikube.internal
	I1104 11:34:33.759698   56868 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1104 11:34:33.902732   56868 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1104 11:34:33.917635   56868 certs.go:68] Setting up /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/multinode-453447 for IP: 192.168.39.86
	I1104 11:34:33.917661   56868 certs.go:194] generating shared ca certs ...
	I1104 11:34:33.917677   56868 certs.go:226] acquiring lock for ca certs: {Name:mk4fb0469da6697654d0290fd25edddd463f47c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 11:34:33.917824   56868 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.key
	I1104 11:34:33.917861   56868 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.key
	I1104 11:34:33.917870   56868 certs.go:256] generating profile certs ...
	I1104 11:34:33.917946   56868 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/multinode-453447/client.key
	I1104 11:34:33.918000   56868 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/multinode-453447/apiserver.key.a4bcad16
	I1104 11:34:33.918035   56868 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/multinode-453447/proxy-client.key
	I1104 11:34:33.918049   56868 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1104 11:34:33.918064   56868 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1104 11:34:33.918078   56868 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1104 11:34:33.918091   56868 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1104 11:34:33.918102   56868 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/multinode-453447/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1104 11:34:33.918116   56868 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/multinode-453447/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1104 11:34:33.918129   56868 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/multinode-453447/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1104 11:34:33.918150   56868 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/multinode-453447/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1104 11:34:33.918214   56868 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218.pem (1338 bytes)
	W1104 11:34:33.918244   56868 certs.go:480] ignoring /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218_empty.pem, impossibly tiny 0 bytes
	I1104 11:34:33.918254   56868 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem (1675 bytes)
	I1104 11:34:33.918276   56868 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem (1078 bytes)
	I1104 11:34:33.918299   56868 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem (1123 bytes)
	I1104 11:34:33.918319   56868 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem (1679 bytes)
	I1104 11:34:33.918362   56868 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem (1708 bytes)
	I1104 11:34:33.918390   56868 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1104 11:34:33.918404   56868 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218.pem -> /usr/share/ca-certificates/27218.pem
	I1104 11:34:33.918416   56868 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem -> /usr/share/ca-certificates/272182.pem
	I1104 11:34:33.918995   56868 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1104 11:34:33.942790   56868 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1104 11:34:33.965424   56868 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1104 11:34:33.987656   56868 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1104 11:34:34.009514   56868 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/multinode-453447/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1104 11:34:34.031621   56868 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/multinode-453447/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1104 11:34:34.053136   56868 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/multinode-453447/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1104 11:34:34.075155   56868 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/multinode-453447/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1104 11:34:34.097203   56868 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1104 11:34:34.119468   56868 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218.pem --> /usr/share/ca-certificates/27218.pem (1338 bytes)
	I1104 11:34:34.141351   56868 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem --> /usr/share/ca-certificates/272182.pem (1708 bytes)
	I1104 11:34:34.163697   56868 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1104 11:34:34.178861   56868 ssh_runner.go:195] Run: openssl version
	I1104 11:34:34.184143   56868 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I1104 11:34:34.184216   56868 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/27218.pem && ln -fs /usr/share/ca-certificates/27218.pem /etc/ssl/certs/27218.pem"
	I1104 11:34:34.194005   56868 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/27218.pem
	I1104 11:34:34.197991   56868 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Nov  4 10:49 /usr/share/ca-certificates/27218.pem
	I1104 11:34:34.198017   56868 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  4 10:49 /usr/share/ca-certificates/27218.pem
	I1104 11:34:34.198065   56868 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/27218.pem
	I1104 11:34:34.203174   56868 command_runner.go:130] > 51391683
	I1104 11:34:34.203349   56868 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/27218.pem /etc/ssl/certs/51391683.0"
	I1104 11:34:34.212228   56868 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/272182.pem && ln -fs /usr/share/ca-certificates/272182.pem /etc/ssl/certs/272182.pem"
	I1104 11:34:34.222142   56868 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/272182.pem
	I1104 11:34:34.226314   56868 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Nov  4 10:49 /usr/share/ca-certificates/272182.pem
	I1104 11:34:34.226397   56868 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  4 10:49 /usr/share/ca-certificates/272182.pem
	I1104 11:34:34.226450   56868 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/272182.pem
	I1104 11:34:34.231493   56868 command_runner.go:130] > 3ec20f2e
	I1104 11:34:34.231675   56868 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/272182.pem /etc/ssl/certs/3ec20f2e.0"
	I1104 11:34:34.240214   56868 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1104 11:34:34.249829   56868 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1104 11:34:34.253918   56868 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Nov  4 10:38 /usr/share/ca-certificates/minikubeCA.pem
	I1104 11:34:34.253966   56868 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  4 10:38 /usr/share/ca-certificates/minikubeCA.pem
	I1104 11:34:34.254014   56868 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1104 11:34:34.259194   56868 command_runner.go:130] > b5213941
	I1104 11:34:34.259256   56868 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1104 11:34:34.268333   56868 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1104 11:34:34.272466   56868 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1104 11:34:34.272484   56868 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1104 11:34:34.272490   56868 command_runner.go:130] > Device: 253,1	Inode: 2103342     Links: 1
	I1104 11:34:34.272496   56868 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1104 11:34:34.272501   56868 command_runner.go:130] > Access: 2024-11-04 11:27:55.725342799 +0000
	I1104 11:34:34.272507   56868 command_runner.go:130] > Modify: 2024-11-04 11:27:55.725342799 +0000
	I1104 11:34:34.272511   56868 command_runner.go:130] > Change: 2024-11-04 11:27:55.725342799 +0000
	I1104 11:34:34.272518   56868 command_runner.go:130] >  Birth: 2024-11-04 11:27:55.725342799 +0000
	I1104 11:34:34.272594   56868 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1104 11:34:34.277815   56868 command_runner.go:130] > Certificate will not expire
	I1104 11:34:34.277879   56868 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1104 11:34:34.283397   56868 command_runner.go:130] > Certificate will not expire
	I1104 11:34:34.283459   56868 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1104 11:34:34.288654   56868 command_runner.go:130] > Certificate will not expire
	I1104 11:34:34.288724   56868 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1104 11:34:34.294062   56868 command_runner.go:130] > Certificate will not expire
	I1104 11:34:34.294129   56868 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1104 11:34:34.299393   56868 command_runner.go:130] > Certificate will not expire
	I1104 11:34:34.299448   56868 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1104 11:34:34.304602   56868 command_runner.go:130] > Certificate will not expire
	I1104 11:34:34.304687   56868 kubeadm.go:392] StartCluster: {Name:multinode-453447 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
2 ClusterName:multinode-453447 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.86 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.80 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.117 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-d
ns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1104 11:34:34.304840   56868 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1104 11:34:34.304889   56868 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1104 11:34:34.344099   56868 command_runner.go:130] > 8a71222d5e581bb2b8728a7bcc9b4092a22c6c7ddc2c7d5023ade850762313f2
	I1104 11:34:34.344126   56868 command_runner.go:130] > 13caef919da51a59ffeaf36c6198834dad2f51b54e8563121eb2bc2af62c9cba
	I1104 11:34:34.344132   56868 command_runner.go:130] > de832022d6a38f88d7fd4047e8e958bf8e29b8fd978f142700057256faff3dec
	I1104 11:34:34.344163   56868 command_runner.go:130] > 99187137d9b7622002f0c0edfa61cc3a605bd192056d9dd41b76baa15a798bc8
	I1104 11:34:34.344171   56868 command_runner.go:130] > 35ccc2e48ce6474be6e4ee62791f070236db849079eceb9d822817335ef62ca2
	I1104 11:34:34.344177   56868 command_runner.go:130] > 055d0d197ecfb4073e33727aa7d16bd21fa1bdb545dbc98889bdd63ac57785d6
	I1104 11:34:34.344185   56868 command_runner.go:130] > 6dc6ffa76cf341c78007aee47131c05761173bd60c8a2c834d2760ec4acf6c97
	I1104 11:34:34.344203   56868 command_runner.go:130] > 65c4627bd34af9f0ea03ad0892507644b87124b3e06845b239cfaa268faf1d21
	I1104 11:34:34.344231   56868 cri.go:89] found id: "8a71222d5e581bb2b8728a7bcc9b4092a22c6c7ddc2c7d5023ade850762313f2"
	I1104 11:34:34.344239   56868 cri.go:89] found id: "13caef919da51a59ffeaf36c6198834dad2f51b54e8563121eb2bc2af62c9cba"
	I1104 11:34:34.344243   56868 cri.go:89] found id: "de832022d6a38f88d7fd4047e8e958bf8e29b8fd978f142700057256faff3dec"
	I1104 11:34:34.344247   56868 cri.go:89] found id: "99187137d9b7622002f0c0edfa61cc3a605bd192056d9dd41b76baa15a798bc8"
	I1104 11:34:34.344250   56868 cri.go:89] found id: "35ccc2e48ce6474be6e4ee62791f070236db849079eceb9d822817335ef62ca2"
	I1104 11:34:34.344256   56868 cri.go:89] found id: "055d0d197ecfb4073e33727aa7d16bd21fa1bdb545dbc98889bdd63ac57785d6"
	I1104 11:34:34.344259   56868 cri.go:89] found id: "6dc6ffa76cf341c78007aee47131c05761173bd60c8a2c834d2760ec4acf6c97"
	I1104 11:34:34.344264   56868 cri.go:89] found id: "65c4627bd34af9f0ea03ad0892507644b87124b3e06845b239cfaa268faf1d21"
	I1104 11:34:34.344266   56868 cri.go:89] found id: ""
	I1104 11:34:34.344304   56868 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
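The truncated log above ends with minikube enumerating the kube-system containers through the CRI (the "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system" call and the "found id:" lines that follow it). A minimal Go sketch of that step, assuming crictl is available on the local PATH rather than invoked over SSH as minikube does:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listKubeSystemContainers mirrors the "crictl ps -a --quiet --label ..." call
// seen in the log: it returns one container ID per non-empty output line.
func listKubeSystemContainers() ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return nil, fmt.Errorf("crictl ps: %w", err)
	}
	var ids []string
	for _, line := range strings.Split(string(out), "\n") {
		if id := strings.TrimSpace(line); id != "" {
			ids = append(ids, id)
		}
	}
	return ids, nil
}

func main() {
	ids, err := listKubeSystemContainers()
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	for _, id := range ids {
		fmt.Println("found id:", id)
	}
}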
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-453447 -n multinode-453447
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-453447 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (318.60s)
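For context on the certificate section of the log above: each CA bundle is hashed with "openssl x509 -hash -noout" and then symlinked as /etc/ssl/certs/<hash>.0, and each serving certificate is probed with "-checkend 86400" to confirm it will not expire within the next day. A rough local equivalent of those two checks, assuming the certificate path is adapted to your environment:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hashCert reproduces "openssl x509 -hash -noout -in <cert>" from the log;
// the printed hash is what minikube uses for the /etc/ssl/certs/<hash>.0 symlink.
func hashCert(path string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", path).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

// expiresWithinADay reproduces "openssl x509 -noout -in <cert> -checkend 86400":
// openssl exits non-zero when the certificate expires within the next 86400 seconds.
func expiresWithinADay(path string) bool {
	err := exec.Command("openssl", "x509", "-noout", "-in", path, "-checkend", "86400").Run()
	return err != nil
}

func main() {
	// Hypothetical local path; in the log the checks run against
	// /var/lib/minikube/certs/*.crt inside the VM.
	cert := "/usr/share/ca-certificates/minikubeCA.pem"
	if h, err := hashCert(cert); err == nil {
		fmt.Printf("symlink target would be /etc/ssl/certs/%s.0\n", h)
	}
	fmt.Println("expires within 24h:", expiresWithinADay(cert))
}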

                                                
                                    
TestMultiNode/serial/StopMultiNode (144.92s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-453447 stop
E1104 11:36:33.168726   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/functional-762465/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-453447 stop: exit status 82 (2m0.465956296s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-453447-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-linux-amd64 -p multinode-453447 stop": exit status 82
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-453447 status
multinode_test.go:351: (dbg) Done: out/minikube-linux-amd64 -p multinode-453447 status: (18.648472609s)
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-453447 status --alsologtostderr
multinode_test.go:358: (dbg) Done: out/minikube-linux-amd64 -p multinode-453447 status --alsologtostderr: (3.360337303s)
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-linux-amd64 -p multinode-453447 status --alsologtostderr": 
multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-linux-amd64 -p multinode-453447 status --alsologtostderr": 
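The two assertions above compare how many hosts and kubelets report "Stopped" in the minikube status output against the expected node count; because the stop command timed out with GUEST_STOP_TIMEOUT (exit status 82), too few nodes reached that state. A hedged approximation of the check, where the counting heuristic is an assumption for illustration rather than the exact logic in multinode_test.go:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// countStopped runs "minikube status" for a profile and counts occurrences of
// "Stopped" in its output. minikube status exits non-zero when any node is
// stopped, so the exit code is ignored here and only the captured stdout is read.
func countStopped(minikubeBin, profile string) int {
	out, _ := exec.Command(minikubeBin, "-p", profile, "status").Output()
	return strings.Count(string(out), "Stopped")
}

func main() {
	// Binary path and profile name mirror the test invocation above; both are
	// assumptions about the local checkout layout.
	n := countStopped("out/minikube-linux-amd64", "multinode-453447")
	fmt.Println("hosts/kubelets reporting Stopped:", n)
}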
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-453447 -n multinode-453447
helpers_test.go:244: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-453447 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-453447 logs -n 25: (1.871895339s)
helpers_test.go:252: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-453447 ssh -n                                                                 | multinode-453447 | jenkins | v1.34.0 | 04 Nov 24 11:30 UTC | 04 Nov 24 11:30 UTC |
	|         | multinode-453447-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-453447 cp multinode-453447-m02:/home/docker/cp-test.txt                       | multinode-453447 | jenkins | v1.34.0 | 04 Nov 24 11:30 UTC | 04 Nov 24 11:30 UTC |
	|         | multinode-453447:/home/docker/cp-test_multinode-453447-m02_multinode-453447.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-453447 ssh -n                                                                 | multinode-453447 | jenkins | v1.34.0 | 04 Nov 24 11:30 UTC | 04 Nov 24 11:30 UTC |
	|         | multinode-453447-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-453447 ssh -n multinode-453447 sudo cat                                       | multinode-453447 | jenkins | v1.34.0 | 04 Nov 24 11:30 UTC | 04 Nov 24 11:30 UTC |
	|         | /home/docker/cp-test_multinode-453447-m02_multinode-453447.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-453447 cp multinode-453447-m02:/home/docker/cp-test.txt                       | multinode-453447 | jenkins | v1.34.0 | 04 Nov 24 11:30 UTC | 04 Nov 24 11:30 UTC |
	|         | multinode-453447-m03:/home/docker/cp-test_multinode-453447-m02_multinode-453447-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-453447 ssh -n                                                                 | multinode-453447 | jenkins | v1.34.0 | 04 Nov 24 11:30 UTC | 04 Nov 24 11:30 UTC |
	|         | multinode-453447-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-453447 ssh -n multinode-453447-m03 sudo cat                                   | multinode-453447 | jenkins | v1.34.0 | 04 Nov 24 11:30 UTC | 04 Nov 24 11:30 UTC |
	|         | /home/docker/cp-test_multinode-453447-m02_multinode-453447-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-453447 cp testdata/cp-test.txt                                                | multinode-453447 | jenkins | v1.34.0 | 04 Nov 24 11:30 UTC | 04 Nov 24 11:30 UTC |
	|         | multinode-453447-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-453447 ssh -n                                                                 | multinode-453447 | jenkins | v1.34.0 | 04 Nov 24 11:30 UTC | 04 Nov 24 11:30 UTC |
	|         | multinode-453447-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-453447 cp multinode-453447-m03:/home/docker/cp-test.txt                       | multinode-453447 | jenkins | v1.34.0 | 04 Nov 24 11:30 UTC | 04 Nov 24 11:30 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile1426244323/001/cp-test_multinode-453447-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-453447 ssh -n                                                                 | multinode-453447 | jenkins | v1.34.0 | 04 Nov 24 11:30 UTC | 04 Nov 24 11:30 UTC |
	|         | multinode-453447-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-453447 cp multinode-453447-m03:/home/docker/cp-test.txt                       | multinode-453447 | jenkins | v1.34.0 | 04 Nov 24 11:30 UTC | 04 Nov 24 11:30 UTC |
	|         | multinode-453447:/home/docker/cp-test_multinode-453447-m03_multinode-453447.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-453447 ssh -n                                                                 | multinode-453447 | jenkins | v1.34.0 | 04 Nov 24 11:30 UTC | 04 Nov 24 11:30 UTC |
	|         | multinode-453447-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-453447 ssh -n multinode-453447 sudo cat                                       | multinode-453447 | jenkins | v1.34.0 | 04 Nov 24 11:30 UTC | 04 Nov 24 11:30 UTC |
	|         | /home/docker/cp-test_multinode-453447-m03_multinode-453447.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-453447 cp multinode-453447-m03:/home/docker/cp-test.txt                       | multinode-453447 | jenkins | v1.34.0 | 04 Nov 24 11:30 UTC | 04 Nov 24 11:30 UTC |
	|         | multinode-453447-m02:/home/docker/cp-test_multinode-453447-m03_multinode-453447-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-453447 ssh -n                                                                 | multinode-453447 | jenkins | v1.34.0 | 04 Nov 24 11:30 UTC | 04 Nov 24 11:30 UTC |
	|         | multinode-453447-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-453447 ssh -n multinode-453447-m02 sudo cat                                   | multinode-453447 | jenkins | v1.34.0 | 04 Nov 24 11:30 UTC | 04 Nov 24 11:30 UTC |
	|         | /home/docker/cp-test_multinode-453447-m03_multinode-453447-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-453447 node stop m03                                                          | multinode-453447 | jenkins | v1.34.0 | 04 Nov 24 11:30 UTC | 04 Nov 24 11:30 UTC |
	| node    | multinode-453447 node start                                                             | multinode-453447 | jenkins | v1.34.0 | 04 Nov 24 11:30 UTC | 04 Nov 24 11:30 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-453447                                                                | multinode-453447 | jenkins | v1.34.0 | 04 Nov 24 11:30 UTC |                     |
	| stop    | -p multinode-453447                                                                     | multinode-453447 | jenkins | v1.34.0 | 04 Nov 24 11:30 UTC |                     |
	| start   | -p multinode-453447                                                                     | multinode-453447 | jenkins | v1.34.0 | 04 Nov 24 11:33 UTC | 04 Nov 24 11:36 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-453447                                                                | multinode-453447 | jenkins | v1.34.0 | 04 Nov 24 11:36 UTC |                     |
	| node    | multinode-453447 node delete                                                            | multinode-453447 | jenkins | v1.34.0 | 04 Nov 24 11:36 UTC | 04 Nov 24 11:36 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	| stop    | multinode-453447 stop                                                                   | multinode-453447 | jenkins | v1.34.0 | 04 Nov 24 11:36 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/11/04 11:33:00
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1104 11:33:00.307427   56868 out.go:345] Setting OutFile to fd 1 ...
	I1104 11:33:00.307528   56868 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1104 11:33:00.307534   56868 out.go:358] Setting ErrFile to fd 2...
	I1104 11:33:00.307538   56868 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1104 11:33:00.307743   56868 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19906-19898/.minikube/bin
	I1104 11:33:00.308288   56868 out.go:352] Setting JSON to false
	I1104 11:33:00.309209   56868 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":8131,"bootTime":1730711849,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1104 11:33:00.309338   56868 start.go:139] virtualization: kvm guest
	I1104 11:33:00.311859   56868 out.go:177] * [multinode-453447] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1104 11:33:00.313305   56868 notify.go:220] Checking for updates...
	I1104 11:33:00.313344   56868 out.go:177]   - MINIKUBE_LOCATION=19906
	I1104 11:33:00.314743   56868 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1104 11:33:00.316041   56868 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19906-19898/kubeconfig
	I1104 11:33:00.317218   56868 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19906-19898/.minikube
	I1104 11:33:00.318346   56868 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1104 11:33:00.319939   56868 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1104 11:33:00.321677   56868 config.go:182] Loaded profile config "multinode-453447": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1104 11:33:00.321803   56868 driver.go:394] Setting default libvirt URI to qemu:///system
	I1104 11:33:00.322287   56868 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 11:33:00.322361   56868 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 11:33:00.338441   56868 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39011
	I1104 11:33:00.338942   56868 main.go:141] libmachine: () Calling .GetVersion
	I1104 11:33:00.339542   56868 main.go:141] libmachine: Using API Version  1
	I1104 11:33:00.339574   56868 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 11:33:00.339932   56868 main.go:141] libmachine: () Calling .GetMachineName
	I1104 11:33:00.340141   56868 main.go:141] libmachine: (multinode-453447) Calling .DriverName
	I1104 11:33:00.379904   56868 out.go:177] * Using the kvm2 driver based on existing profile
	I1104 11:33:00.381370   56868 start.go:297] selected driver: kvm2
	I1104 11:33:00.381389   56868 start.go:901] validating driver "kvm2" against &{Name:multinode-453447 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.31.2 ClusterName:multinode-453447 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.86 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.80 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.117 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:f
alse ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1104 11:33:00.381532   56868 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1104 11:33:00.381936   56868 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1104 11:33:00.382042   56868 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19906-19898/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1104 11:33:00.397318   56868 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1104 11:33:00.398050   56868 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1104 11:33:00.398083   56868 cni.go:84] Creating CNI manager for ""
	I1104 11:33:00.398122   56868 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1104 11:33:00.398180   56868 start.go:340] cluster config:
	{Name:multinode-453447 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-453447 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.86 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.80 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.117 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisione
r:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuF
irmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1104 11:33:00.398331   56868 iso.go:125] acquiring lock: {Name:mk00c7dd6e02d348844f079f7574057e15cae010 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1104 11:33:00.400155   56868 out.go:177] * Starting "multinode-453447" primary control-plane node in "multinode-453447" cluster
	I1104 11:33:00.401322   56868 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1104 11:33:00.401362   56868 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1104 11:33:00.401369   56868 cache.go:56] Caching tarball of preloaded images
	I1104 11:33:00.401472   56868 preload.go:172] Found /home/jenkins/minikube-integration/19906-19898/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1104 11:33:00.401486   56868 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1104 11:33:00.401593   56868 profile.go:143] Saving config to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/multinode-453447/config.json ...
	I1104 11:33:00.401785   56868 start.go:360] acquireMachinesLock for multinode-453447: {Name:mka4ed965095cfb58c9b0f5916edff39b4236a2f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1104 11:33:00.401838   56868 start.go:364] duration metric: took 32.474µs to acquireMachinesLock for "multinode-453447"
	I1104 11:33:00.401856   56868 start.go:96] Skipping create...Using existing machine configuration
	I1104 11:33:00.401863   56868 fix.go:54] fixHost starting: 
	I1104 11:33:00.402162   56868 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 11:33:00.402199   56868 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 11:33:00.416872   56868 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39515
	I1104 11:33:00.417452   56868 main.go:141] libmachine: () Calling .GetVersion
	I1104 11:33:00.418001   56868 main.go:141] libmachine: Using API Version  1
	I1104 11:33:00.418034   56868 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 11:33:00.418398   56868 main.go:141] libmachine: () Calling .GetMachineName
	I1104 11:33:00.418622   56868 main.go:141] libmachine: (multinode-453447) Calling .DriverName
	I1104 11:33:00.418776   56868 main.go:141] libmachine: (multinode-453447) Calling .GetState
	I1104 11:33:00.420363   56868 fix.go:112] recreateIfNeeded on multinode-453447: state=Running err=<nil>
	W1104 11:33:00.420382   56868 fix.go:138] unexpected machine state, will restart: <nil>
	I1104 11:33:00.422278   56868 out.go:177] * Updating the running kvm2 "multinode-453447" VM ...
	I1104 11:33:00.423506   56868 machine.go:93] provisionDockerMachine start ...
	I1104 11:33:00.423529   56868 main.go:141] libmachine: (multinode-453447) Calling .DriverName
	I1104 11:33:00.423720   56868 main.go:141] libmachine: (multinode-453447) Calling .GetSSHHostname
	I1104 11:33:00.426436   56868 main.go:141] libmachine: (multinode-453447) DBG | domain multinode-453447 has defined MAC address 52:54:00:9c:5b:45 in network mk-multinode-453447
	I1104 11:33:00.426902   56868 main.go:141] libmachine: (multinode-453447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:5b:45", ip: ""} in network mk-multinode-453447: {Iface:virbr1 ExpiryTime:2024-11-04 12:27:39 +0000 UTC Type:0 Mac:52:54:00:9c:5b:45 Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:multinode-453447 Clientid:01:52:54:00:9c:5b:45}
	I1104 11:33:00.426934   56868 main.go:141] libmachine: (multinode-453447) DBG | domain multinode-453447 has defined IP address 192.168.39.86 and MAC address 52:54:00:9c:5b:45 in network mk-multinode-453447
	I1104 11:33:00.427109   56868 main.go:141] libmachine: (multinode-453447) Calling .GetSSHPort
	I1104 11:33:00.427302   56868 main.go:141] libmachine: (multinode-453447) Calling .GetSSHKeyPath
	I1104 11:33:00.427477   56868 main.go:141] libmachine: (multinode-453447) Calling .GetSSHKeyPath
	I1104 11:33:00.427629   56868 main.go:141] libmachine: (multinode-453447) Calling .GetSSHUsername
	I1104 11:33:00.427811   56868 main.go:141] libmachine: Using SSH client type: native
	I1104 11:33:00.428041   56868 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.86 22 <nil> <nil>}
	I1104 11:33:00.428059   56868 main.go:141] libmachine: About to run SSH command:
	hostname
	I1104 11:33:00.530883   56868 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-453447
	
	I1104 11:33:00.530911   56868 main.go:141] libmachine: (multinode-453447) Calling .GetMachineName
	I1104 11:33:00.531153   56868 buildroot.go:166] provisioning hostname "multinode-453447"
	I1104 11:33:00.531185   56868 main.go:141] libmachine: (multinode-453447) Calling .GetMachineName
	I1104 11:33:00.531372   56868 main.go:141] libmachine: (multinode-453447) Calling .GetSSHHostname
	I1104 11:33:00.534263   56868 main.go:141] libmachine: (multinode-453447) DBG | domain multinode-453447 has defined MAC address 52:54:00:9c:5b:45 in network mk-multinode-453447
	I1104 11:33:00.534637   56868 main.go:141] libmachine: (multinode-453447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:5b:45", ip: ""} in network mk-multinode-453447: {Iface:virbr1 ExpiryTime:2024-11-04 12:27:39 +0000 UTC Type:0 Mac:52:54:00:9c:5b:45 Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:multinode-453447 Clientid:01:52:54:00:9c:5b:45}
	I1104 11:33:00.534670   56868 main.go:141] libmachine: (multinode-453447) DBG | domain multinode-453447 has defined IP address 192.168.39.86 and MAC address 52:54:00:9c:5b:45 in network mk-multinode-453447
	I1104 11:33:00.534854   56868 main.go:141] libmachine: (multinode-453447) Calling .GetSSHPort
	I1104 11:33:00.535006   56868 main.go:141] libmachine: (multinode-453447) Calling .GetSSHKeyPath
	I1104 11:33:00.535110   56868 main.go:141] libmachine: (multinode-453447) Calling .GetSSHKeyPath
	I1104 11:33:00.535182   56868 main.go:141] libmachine: (multinode-453447) Calling .GetSSHUsername
	I1104 11:33:00.535301   56868 main.go:141] libmachine: Using SSH client type: native
	I1104 11:33:00.535455   56868 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.86 22 <nil> <nil>}
	I1104 11:33:00.535467   56868 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-453447 && echo "multinode-453447" | sudo tee /etc/hostname
	I1104 11:33:00.651985   56868 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-453447
	
	I1104 11:33:00.652012   56868 main.go:141] libmachine: (multinode-453447) Calling .GetSSHHostname
	I1104 11:33:00.655414   56868 main.go:141] libmachine: (multinode-453447) DBG | domain multinode-453447 has defined MAC address 52:54:00:9c:5b:45 in network mk-multinode-453447
	I1104 11:33:00.655834   56868 main.go:141] libmachine: (multinode-453447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:5b:45", ip: ""} in network mk-multinode-453447: {Iface:virbr1 ExpiryTime:2024-11-04 12:27:39 +0000 UTC Type:0 Mac:52:54:00:9c:5b:45 Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:multinode-453447 Clientid:01:52:54:00:9c:5b:45}
	I1104 11:33:00.655868   56868 main.go:141] libmachine: (multinode-453447) DBG | domain multinode-453447 has defined IP address 192.168.39.86 and MAC address 52:54:00:9c:5b:45 in network mk-multinode-453447
	I1104 11:33:00.656075   56868 main.go:141] libmachine: (multinode-453447) Calling .GetSSHPort
	I1104 11:33:00.656314   56868 main.go:141] libmachine: (multinode-453447) Calling .GetSSHKeyPath
	I1104 11:33:00.656543   56868 main.go:141] libmachine: (multinode-453447) Calling .GetSSHKeyPath
	I1104 11:33:00.656697   56868 main.go:141] libmachine: (multinode-453447) Calling .GetSSHUsername
	I1104 11:33:00.656893   56868 main.go:141] libmachine: Using SSH client type: native
	I1104 11:33:00.657088   56868 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.86 22 <nil> <nil>}
	I1104 11:33:00.657105   56868 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-453447' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-453447/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-453447' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1104 11:33:00.757796   56868 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1104 11:33:00.757821   56868 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19906-19898/.minikube CaCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19906-19898/.minikube}
	I1104 11:33:00.757874   56868 buildroot.go:174] setting up certificates
	I1104 11:33:00.757885   56868 provision.go:84] configureAuth start
	I1104 11:33:00.757901   56868 main.go:141] libmachine: (multinode-453447) Calling .GetMachineName
	I1104 11:33:00.758214   56868 main.go:141] libmachine: (multinode-453447) Calling .GetIP
	I1104 11:33:00.760987   56868 main.go:141] libmachine: (multinode-453447) DBG | domain multinode-453447 has defined MAC address 52:54:00:9c:5b:45 in network mk-multinode-453447
	I1104 11:33:00.761391   56868 main.go:141] libmachine: (multinode-453447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:5b:45", ip: ""} in network mk-multinode-453447: {Iface:virbr1 ExpiryTime:2024-11-04 12:27:39 +0000 UTC Type:0 Mac:52:54:00:9c:5b:45 Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:multinode-453447 Clientid:01:52:54:00:9c:5b:45}
	I1104 11:33:00.761420   56868 main.go:141] libmachine: (multinode-453447) DBG | domain multinode-453447 has defined IP address 192.168.39.86 and MAC address 52:54:00:9c:5b:45 in network mk-multinode-453447
	I1104 11:33:00.761598   56868 main.go:141] libmachine: (multinode-453447) Calling .GetSSHHostname
	I1104 11:33:00.763710   56868 main.go:141] libmachine: (multinode-453447) DBG | domain multinode-453447 has defined MAC address 52:54:00:9c:5b:45 in network mk-multinode-453447
	I1104 11:33:00.764083   56868 main.go:141] libmachine: (multinode-453447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:5b:45", ip: ""} in network mk-multinode-453447: {Iface:virbr1 ExpiryTime:2024-11-04 12:27:39 +0000 UTC Type:0 Mac:52:54:00:9c:5b:45 Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:multinode-453447 Clientid:01:52:54:00:9c:5b:45}
	I1104 11:33:00.764110   56868 main.go:141] libmachine: (multinode-453447) DBG | domain multinode-453447 has defined IP address 192.168.39.86 and MAC address 52:54:00:9c:5b:45 in network mk-multinode-453447
	I1104 11:33:00.764264   56868 provision.go:143] copyHostCerts
	I1104 11:33:00.764305   56868 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem
	I1104 11:33:00.764348   56868 exec_runner.go:144] found /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem, removing ...
	I1104 11:33:00.764361   56868 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem
	I1104 11:33:00.764439   56868 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem (1078 bytes)
	I1104 11:33:00.764538   56868 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem
	I1104 11:33:00.764563   56868 exec_runner.go:144] found /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem, removing ...
	I1104 11:33:00.764573   56868 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem
	I1104 11:33:00.764612   56868 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem (1123 bytes)
	I1104 11:33:00.764674   56868 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem
	I1104 11:33:00.764701   56868 exec_runner.go:144] found /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem, removing ...
	I1104 11:33:00.764710   56868 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem
	I1104 11:33:00.764741   56868 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem (1679 bytes)
	I1104 11:33:00.764804   56868 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem org=jenkins.multinode-453447 san=[127.0.0.1 192.168.39.86 localhost minikube multinode-453447]
	I1104 11:33:00.840596   56868 provision.go:177] copyRemoteCerts
	I1104 11:33:00.840650   56868 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1104 11:33:00.840671   56868 main.go:141] libmachine: (multinode-453447) Calling .GetSSHHostname
	I1104 11:33:00.843196   56868 main.go:141] libmachine: (multinode-453447) DBG | domain multinode-453447 has defined MAC address 52:54:00:9c:5b:45 in network mk-multinode-453447
	I1104 11:33:00.843555   56868 main.go:141] libmachine: (multinode-453447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:5b:45", ip: ""} in network mk-multinode-453447: {Iface:virbr1 ExpiryTime:2024-11-04 12:27:39 +0000 UTC Type:0 Mac:52:54:00:9c:5b:45 Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:multinode-453447 Clientid:01:52:54:00:9c:5b:45}
	I1104 11:33:00.843577   56868 main.go:141] libmachine: (multinode-453447) DBG | domain multinode-453447 has defined IP address 192.168.39.86 and MAC address 52:54:00:9c:5b:45 in network mk-multinode-453447
	I1104 11:33:00.843782   56868 main.go:141] libmachine: (multinode-453447) Calling .GetSSHPort
	I1104 11:33:00.843946   56868 main.go:141] libmachine: (multinode-453447) Calling .GetSSHKeyPath
	I1104 11:33:00.844085   56868 main.go:141] libmachine: (multinode-453447) Calling .GetSSHUsername
	I1104 11:33:00.844197   56868 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/multinode-453447/id_rsa Username:docker}
	I1104 11:33:00.924861   56868 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1104 11:33:00.924955   56868 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1104 11:33:00.948254   56868 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1104 11:33:00.948344   56868 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1104 11:33:00.972464   56868 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1104 11:33:00.972537   56868 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1104 11:33:00.996976   56868 provision.go:87] duration metric: took 239.073655ms to configureAuth
	I1104 11:33:00.997001   56868 buildroot.go:189] setting minikube options for container-runtime
	I1104 11:33:00.997216   56868 config.go:182] Loaded profile config "multinode-453447": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1104 11:33:00.997307   56868 main.go:141] libmachine: (multinode-453447) Calling .GetSSHHostname
	I1104 11:33:01.000005   56868 main.go:141] libmachine: (multinode-453447) DBG | domain multinode-453447 has defined MAC address 52:54:00:9c:5b:45 in network mk-multinode-453447
	I1104 11:33:01.000377   56868 main.go:141] libmachine: (multinode-453447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:5b:45", ip: ""} in network mk-multinode-453447: {Iface:virbr1 ExpiryTime:2024-11-04 12:27:39 +0000 UTC Type:0 Mac:52:54:00:9c:5b:45 Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:multinode-453447 Clientid:01:52:54:00:9c:5b:45}
	I1104 11:33:01.000415   56868 main.go:141] libmachine: (multinode-453447) DBG | domain multinode-453447 has defined IP address 192.168.39.86 and MAC address 52:54:00:9c:5b:45 in network mk-multinode-453447
	I1104 11:33:01.000631   56868 main.go:141] libmachine: (multinode-453447) Calling .GetSSHPort
	I1104 11:33:01.000827   56868 main.go:141] libmachine: (multinode-453447) Calling .GetSSHKeyPath
	I1104 11:33:01.000978   56868 main.go:141] libmachine: (multinode-453447) Calling .GetSSHKeyPath
	I1104 11:33:01.001121   56868 main.go:141] libmachine: (multinode-453447) Calling .GetSSHUsername
	I1104 11:33:01.001336   56868 main.go:141] libmachine: Using SSH client type: native
	I1104 11:33:01.001495   56868 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.86 22 <nil> <nil>}
	I1104 11:33:01.001509   56868 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1104 11:34:31.755042   56868 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1104 11:34:31.755068   56868 machine.go:96] duration metric: took 1m31.331545824s to provisionDockerMachine
	I1104 11:34:31.755083   56868 start.go:293] postStartSetup for "multinode-453447" (driver="kvm2")
	I1104 11:34:31.755096   56868 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1104 11:34:31.755118   56868 main.go:141] libmachine: (multinode-453447) Calling .DriverName
	I1104 11:34:31.755449   56868 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1104 11:34:31.755483   56868 main.go:141] libmachine: (multinode-453447) Calling .GetSSHHostname
	I1104 11:34:31.759004   56868 main.go:141] libmachine: (multinode-453447) DBG | domain multinode-453447 has defined MAC address 52:54:00:9c:5b:45 in network mk-multinode-453447
	I1104 11:34:31.759398   56868 main.go:141] libmachine: (multinode-453447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:5b:45", ip: ""} in network mk-multinode-453447: {Iface:virbr1 ExpiryTime:2024-11-04 12:27:39 +0000 UTC Type:0 Mac:52:54:00:9c:5b:45 Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:multinode-453447 Clientid:01:52:54:00:9c:5b:45}
	I1104 11:34:31.759429   56868 main.go:141] libmachine: (multinode-453447) DBG | domain multinode-453447 has defined IP address 192.168.39.86 and MAC address 52:54:00:9c:5b:45 in network mk-multinode-453447
	I1104 11:34:31.759633   56868 main.go:141] libmachine: (multinode-453447) Calling .GetSSHPort
	I1104 11:34:31.759820   56868 main.go:141] libmachine: (multinode-453447) Calling .GetSSHKeyPath
	I1104 11:34:31.759987   56868 main.go:141] libmachine: (multinode-453447) Calling .GetSSHUsername
	I1104 11:34:31.760099   56868 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/multinode-453447/id_rsa Username:docker}
	I1104 11:34:31.839452   56868 ssh_runner.go:195] Run: cat /etc/os-release
	I1104 11:34:31.843628   56868 command_runner.go:130] > NAME=Buildroot
	I1104 11:34:31.843650   56868 command_runner.go:130] > VERSION=2023.02.9-dirty
	I1104 11:34:31.843654   56868 command_runner.go:130] > ID=buildroot
	I1104 11:34:31.843659   56868 command_runner.go:130] > VERSION_ID=2023.02.9
	I1104 11:34:31.843664   56868 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I1104 11:34:31.843694   56868 info.go:137] Remote host: Buildroot 2023.02.9
	I1104 11:34:31.843707   56868 filesync.go:126] Scanning /home/jenkins/minikube-integration/19906-19898/.minikube/addons for local assets ...
	I1104 11:34:31.843778   56868 filesync.go:126] Scanning /home/jenkins/minikube-integration/19906-19898/.minikube/files for local assets ...
	I1104 11:34:31.843887   56868 filesync.go:149] local asset: /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem -> 272182.pem in /etc/ssl/certs
	I1104 11:34:31.843902   56868 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem -> /etc/ssl/certs/272182.pem
	I1104 11:34:31.844014   56868 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1104 11:34:31.853151   56868 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem --> /etc/ssl/certs/272182.pem (1708 bytes)
	I1104 11:34:31.875996   56868 start.go:296] duration metric: took 120.898692ms for postStartSetup
	I1104 11:34:31.876036   56868 fix.go:56] duration metric: took 1m31.47417229s for fixHost
	I1104 11:34:31.876055   56868 main.go:141] libmachine: (multinode-453447) Calling .GetSSHHostname
	I1104 11:34:31.878925   56868 main.go:141] libmachine: (multinode-453447) DBG | domain multinode-453447 has defined MAC address 52:54:00:9c:5b:45 in network mk-multinode-453447
	I1104 11:34:31.879238   56868 main.go:141] libmachine: (multinode-453447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:5b:45", ip: ""} in network mk-multinode-453447: {Iface:virbr1 ExpiryTime:2024-11-04 12:27:39 +0000 UTC Type:0 Mac:52:54:00:9c:5b:45 Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:multinode-453447 Clientid:01:52:54:00:9c:5b:45}
	I1104 11:34:31.879269   56868 main.go:141] libmachine: (multinode-453447) DBG | domain multinode-453447 has defined IP address 192.168.39.86 and MAC address 52:54:00:9c:5b:45 in network mk-multinode-453447
	I1104 11:34:31.879427   56868 main.go:141] libmachine: (multinode-453447) Calling .GetSSHPort
	I1104 11:34:31.879637   56868 main.go:141] libmachine: (multinode-453447) Calling .GetSSHKeyPath
	I1104 11:34:31.879812   56868 main.go:141] libmachine: (multinode-453447) Calling .GetSSHKeyPath
	I1104 11:34:31.879919   56868 main.go:141] libmachine: (multinode-453447) Calling .GetSSHUsername
	I1104 11:34:31.880053   56868 main.go:141] libmachine: Using SSH client type: native
	I1104 11:34:31.880205   56868 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.86 22 <nil> <nil>}
	I1104 11:34:31.880215   56868 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1104 11:34:31.981647   56868 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730720071.955285285
	
	I1104 11:34:31.981671   56868 fix.go:216] guest clock: 1730720071.955285285
	I1104 11:34:31.981681   56868 fix.go:229] Guest: 2024-11-04 11:34:31.955285285 +0000 UTC Remote: 2024-11-04 11:34:31.876039456 +0000 UTC m=+91.609243146 (delta=79.245829ms)
	I1104 11:34:31.981703   56868 fix.go:200] guest clock delta is within tolerance: 79.245829ms
	I1104 11:34:31.981709   56868 start.go:83] releasing machines lock for "multinode-453447", held for 1m31.579859716s
	I1104 11:34:31.981734   56868 main.go:141] libmachine: (multinode-453447) Calling .DriverName
	I1104 11:34:31.981987   56868 main.go:141] libmachine: (multinode-453447) Calling .GetIP
	I1104 11:34:31.984410   56868 main.go:141] libmachine: (multinode-453447) DBG | domain multinode-453447 has defined MAC address 52:54:00:9c:5b:45 in network mk-multinode-453447
	I1104 11:34:31.984764   56868 main.go:141] libmachine: (multinode-453447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:5b:45", ip: ""} in network mk-multinode-453447: {Iface:virbr1 ExpiryTime:2024-11-04 12:27:39 +0000 UTC Type:0 Mac:52:54:00:9c:5b:45 Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:multinode-453447 Clientid:01:52:54:00:9c:5b:45}
	I1104 11:34:31.984792   56868 main.go:141] libmachine: (multinode-453447) DBG | domain multinode-453447 has defined IP address 192.168.39.86 and MAC address 52:54:00:9c:5b:45 in network mk-multinode-453447
	I1104 11:34:31.984906   56868 main.go:141] libmachine: (multinode-453447) Calling .DriverName
	I1104 11:34:31.985474   56868 main.go:141] libmachine: (multinode-453447) Calling .DriverName
	I1104 11:34:31.985644   56868 main.go:141] libmachine: (multinode-453447) Calling .DriverName
	I1104 11:34:31.985729   56868 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1104 11:34:31.985783   56868 main.go:141] libmachine: (multinode-453447) Calling .GetSSHHostname
	I1104 11:34:31.985878   56868 ssh_runner.go:195] Run: cat /version.json
	I1104 11:34:31.985903   56868 main.go:141] libmachine: (multinode-453447) Calling .GetSSHHostname
	I1104 11:34:31.988265   56868 main.go:141] libmachine: (multinode-453447) DBG | domain multinode-453447 has defined MAC address 52:54:00:9c:5b:45 in network mk-multinode-453447
	I1104 11:34:31.988292   56868 main.go:141] libmachine: (multinode-453447) DBG | domain multinode-453447 has defined MAC address 52:54:00:9c:5b:45 in network mk-multinode-453447
	I1104 11:34:31.988666   56868 main.go:141] libmachine: (multinode-453447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:5b:45", ip: ""} in network mk-multinode-453447: {Iface:virbr1 ExpiryTime:2024-11-04 12:27:39 +0000 UTC Type:0 Mac:52:54:00:9c:5b:45 Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:multinode-453447 Clientid:01:52:54:00:9c:5b:45}
	I1104 11:34:31.988692   56868 main.go:141] libmachine: (multinode-453447) DBG | domain multinode-453447 has defined IP address 192.168.39.86 and MAC address 52:54:00:9c:5b:45 in network mk-multinode-453447
	I1104 11:34:31.988790   56868 main.go:141] libmachine: (multinode-453447) Calling .GetSSHPort
	I1104 11:34:31.988812   56868 main.go:141] libmachine: (multinode-453447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:5b:45", ip: ""} in network mk-multinode-453447: {Iface:virbr1 ExpiryTime:2024-11-04 12:27:39 +0000 UTC Type:0 Mac:52:54:00:9c:5b:45 Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:multinode-453447 Clientid:01:52:54:00:9c:5b:45}
	I1104 11:34:31.988838   56868 main.go:141] libmachine: (multinode-453447) DBG | domain multinode-453447 has defined IP address 192.168.39.86 and MAC address 52:54:00:9c:5b:45 in network mk-multinode-453447
	I1104 11:34:31.988930   56868 main.go:141] libmachine: (multinode-453447) Calling .GetSSHKeyPath
	I1104 11:34:31.988975   56868 main.go:141] libmachine: (multinode-453447) Calling .GetSSHPort
	I1104 11:34:31.989062   56868 main.go:141] libmachine: (multinode-453447) Calling .GetSSHUsername
	I1104 11:34:31.989132   56868 main.go:141] libmachine: (multinode-453447) Calling .GetSSHKeyPath
	I1104 11:34:31.989239   56868 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/multinode-453447/id_rsa Username:docker}
	I1104 11:34:31.989272   56868 main.go:141] libmachine: (multinode-453447) Calling .GetSSHUsername
	I1104 11:34:31.989393   56868 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/multinode-453447/id_rsa Username:docker}
	I1104 11:34:32.062263   56868 command_runner.go:130] > {"iso_version": "v1.34.0-1730282777-19883", "kicbase_version": "v0.0.45-1730110049-19872", "minikube_version": "v1.34.0", "commit": "7738213fbe7cb3f4867f3e3b534798700ea0e3fb"}
	I1104 11:34:32.062508   56868 ssh_runner.go:195] Run: systemctl --version
	I1104 11:34:32.084680   56868 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1104 11:34:32.084739   56868 command_runner.go:130] > systemd 252 (252)
	I1104 11:34:32.084783   56868 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I1104 11:34:32.084852   56868 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1104 11:34:32.238325   56868 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1104 11:34:32.243709   56868 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1104 11:34:32.243988   56868 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1104 11:34:32.244049   56868 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1104 11:34:32.253349   56868 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1104 11:34:32.253374   56868 start.go:495] detecting cgroup driver to use...
	I1104 11:34:32.253467   56868 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1104 11:34:32.270096   56868 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1104 11:34:32.283939   56868 docker.go:217] disabling cri-docker service (if available) ...
	I1104 11:34:32.284006   56868 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1104 11:34:32.297777   56868 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1104 11:34:32.311091   56868 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1104 11:34:32.452906   56868 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1104 11:34:32.593303   56868 docker.go:233] disabling docker service ...
	I1104 11:34:32.593372   56868 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1104 11:34:32.609437   56868 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1104 11:34:32.623148   56868 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1104 11:34:32.759863   56868 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1104 11:34:32.894902   56868 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1104 11:34:32.909320   56868 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1104 11:34:32.927969   56868 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1104 11:34:32.928317   56868 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1104 11:34:32.928384   56868 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 11:34:32.938338   56868 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1104 11:34:32.938402   56868 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 11:34:32.948054   56868 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 11:34:32.958306   56868 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 11:34:32.967911   56868 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1104 11:34:32.977707   56868 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 11:34:32.987273   56868 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 11:34:32.997778   56868 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 11:34:33.007619   56868 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1104 11:34:33.016324   56868 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1104 11:34:33.016522   56868 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1104 11:34:33.025202   56868 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1104 11:34:33.153694   56868 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1104 11:34:33.341608   56868 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1104 11:34:33.341673   56868 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1104 11:34:33.346284   56868 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1104 11:34:33.346310   56868 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1104 11:34:33.346317   56868 command_runner.go:130] > Device: 0,22	Inode: 1299        Links: 1
	I1104 11:34:33.346324   56868 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1104 11:34:33.346331   56868 command_runner.go:130] > Access: 2024-11-04 11:34:33.313126209 +0000
	I1104 11:34:33.346339   56868 command_runner.go:130] > Modify: 2024-11-04 11:34:33.212126632 +0000
	I1104 11:34:33.346347   56868 command_runner.go:130] > Change: 2024-11-04 11:34:33.212126632 +0000
	I1104 11:34:33.346367   56868 command_runner.go:130] >  Birth: -
	I1104 11:34:33.346396   56868 start.go:563] Will wait 60s for crictl version
	I1104 11:34:33.346457   56868 ssh_runner.go:195] Run: which crictl
	I1104 11:34:33.349793   56868 command_runner.go:130] > /usr/bin/crictl
	I1104 11:34:33.349845   56868 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1104 11:34:33.384071   56868 command_runner.go:130] > Version:  0.1.0
	I1104 11:34:33.384093   56868 command_runner.go:130] > RuntimeName:  cri-o
	I1104 11:34:33.384097   56868 command_runner.go:130] > RuntimeVersion:  1.29.1
	I1104 11:34:33.384102   56868 command_runner.go:130] > RuntimeApiVersion:  v1
	I1104 11:34:33.385213   56868 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1104 11:34:33.385308   56868 ssh_runner.go:195] Run: crio --version
	I1104 11:34:33.415588   56868 command_runner.go:130] > crio version 1.29.1
	I1104 11:34:33.415609   56868 command_runner.go:130] > Version:        1.29.1
	I1104 11:34:33.415615   56868 command_runner.go:130] > GitCommit:      unknown
	I1104 11:34:33.415619   56868 command_runner.go:130] > GitCommitDate:  unknown
	I1104 11:34:33.415623   56868 command_runner.go:130] > GitTreeState:   clean
	I1104 11:34:33.415644   56868 command_runner.go:130] > BuildDate:      2024-10-30T14:24:06Z
	I1104 11:34:33.415649   56868 command_runner.go:130] > GoVersion:      go1.21.6
	I1104 11:34:33.415653   56868 command_runner.go:130] > Compiler:       gc
	I1104 11:34:33.415657   56868 command_runner.go:130] > Platform:       linux/amd64
	I1104 11:34:33.415660   56868 command_runner.go:130] > Linkmode:       dynamic
	I1104 11:34:33.415664   56868 command_runner.go:130] > BuildTags:      
	I1104 11:34:33.415668   56868 command_runner.go:130] >   containers_image_ostree_stub
	I1104 11:34:33.415673   56868 command_runner.go:130] >   exclude_graphdriver_btrfs
	I1104 11:34:33.415686   56868 command_runner.go:130] >   btrfs_noversion
	I1104 11:34:33.415696   56868 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I1104 11:34:33.415703   56868 command_runner.go:130] >   libdm_no_deferred_remove
	I1104 11:34:33.415707   56868 command_runner.go:130] >   seccomp
	I1104 11:34:33.415712   56868 command_runner.go:130] > LDFlags:          unknown
	I1104 11:34:33.415719   56868 command_runner.go:130] > SeccompEnabled:   true
	I1104 11:34:33.415725   56868 command_runner.go:130] > AppArmorEnabled:  false
	I1104 11:34:33.416865   56868 ssh_runner.go:195] Run: crio --version
	I1104 11:34:33.447718   56868 command_runner.go:130] > crio version 1.29.1
	I1104 11:34:33.447741   56868 command_runner.go:130] > Version:        1.29.1
	I1104 11:34:33.447747   56868 command_runner.go:130] > GitCommit:      unknown
	I1104 11:34:33.447751   56868 command_runner.go:130] > GitCommitDate:  unknown
	I1104 11:34:33.447755   56868 command_runner.go:130] > GitTreeState:   clean
	I1104 11:34:33.447761   56868 command_runner.go:130] > BuildDate:      2024-10-30T14:24:06Z
	I1104 11:34:33.447765   56868 command_runner.go:130] > GoVersion:      go1.21.6
	I1104 11:34:33.447768   56868 command_runner.go:130] > Compiler:       gc
	I1104 11:34:33.447773   56868 command_runner.go:130] > Platform:       linux/amd64
	I1104 11:34:33.447777   56868 command_runner.go:130] > Linkmode:       dynamic
	I1104 11:34:33.447782   56868 command_runner.go:130] > BuildTags:      
	I1104 11:34:33.447789   56868 command_runner.go:130] >   containers_image_ostree_stub
	I1104 11:34:33.447798   56868 command_runner.go:130] >   exclude_graphdriver_btrfs
	I1104 11:34:33.447806   56868 command_runner.go:130] >   btrfs_noversion
	I1104 11:34:33.447814   56868 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I1104 11:34:33.447824   56868 command_runner.go:130] >   libdm_no_deferred_remove
	I1104 11:34:33.447830   56868 command_runner.go:130] >   seccomp
	I1104 11:34:33.447834   56868 command_runner.go:130] > LDFlags:          unknown
	I1104 11:34:33.447838   56868 command_runner.go:130] > SeccompEnabled:   true
	I1104 11:34:33.447843   56868 command_runner.go:130] > AppArmorEnabled:  false
	I1104 11:34:33.449919   56868 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1104 11:34:33.451027   56868 main.go:141] libmachine: (multinode-453447) Calling .GetIP
	I1104 11:34:33.453521   56868 main.go:141] libmachine: (multinode-453447) DBG | domain multinode-453447 has defined MAC address 52:54:00:9c:5b:45 in network mk-multinode-453447
	I1104 11:34:33.453903   56868 main.go:141] libmachine: (multinode-453447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:5b:45", ip: ""} in network mk-multinode-453447: {Iface:virbr1 ExpiryTime:2024-11-04 12:27:39 +0000 UTC Type:0 Mac:52:54:00:9c:5b:45 Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:multinode-453447 Clientid:01:52:54:00:9c:5b:45}
	I1104 11:34:33.453937   56868 main.go:141] libmachine: (multinode-453447) DBG | domain multinode-453447 has defined IP address 192.168.39.86 and MAC address 52:54:00:9c:5b:45 in network mk-multinode-453447
	I1104 11:34:33.454165   56868 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1104 11:34:33.458138   56868 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I1104 11:34:33.458235   56868 kubeadm.go:883] updating cluster {Name:multinode-453447 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-453447 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.86 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.80 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.117 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1104 11:34:33.458395   56868 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1104 11:34:33.458453   56868 ssh_runner.go:195] Run: sudo crictl images --output json
	I1104 11:34:33.498558   56868 command_runner.go:130] > {
	I1104 11:34:33.498582   56868 command_runner.go:130] >   "images": [
	I1104 11:34:33.498587   56868 command_runner.go:130] >     {
	I1104 11:34:33.498594   56868 command_runner.go:130] >       "id": "3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52",
	I1104 11:34:33.498598   56868 command_runner.go:130] >       "repoTags": [
	I1104 11:34:33.498603   56868 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20241007-36f62932"
	I1104 11:34:33.498607   56868 command_runner.go:130] >       ],
	I1104 11:34:33.498610   56868 command_runner.go:130] >       "repoDigests": [
	I1104 11:34:33.498624   56868 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387",
	I1104 11:34:33.498635   56868 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e1b7077a015216fd2772941babf3d3204abecf98b97d82ecd149d00212c55fa7"
	I1104 11:34:33.498644   56868 command_runner.go:130] >       ],
	I1104 11:34:33.498651   56868 command_runner.go:130] >       "size": "94965812",
	I1104 11:34:33.498658   56868 command_runner.go:130] >       "uid": null,
	I1104 11:34:33.498664   56868 command_runner.go:130] >       "username": "",
	I1104 11:34:33.498671   56868 command_runner.go:130] >       "spec": null,
	I1104 11:34:33.498678   56868 command_runner.go:130] >       "pinned": false
	I1104 11:34:33.498682   56868 command_runner.go:130] >     },
	I1104 11:34:33.498684   56868 command_runner.go:130] >     {
	I1104 11:34:33.498690   56868 command_runner.go:130] >       "id": "9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5",
	I1104 11:34:33.498697   56868 command_runner.go:130] >       "repoTags": [
	I1104 11:34:33.498702   56868 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20241023-a345ebe4"
	I1104 11:34:33.498705   56868 command_runner.go:130] >       ],
	I1104 11:34:33.498710   56868 command_runner.go:130] >       "repoDigests": [
	I1104 11:34:33.498722   56868 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16",
	I1104 11:34:33.498738   56868 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e39a44bd13d0b4532d0436a1c2fafdd1a8c57fb327770004098162f0bb96132d"
	I1104 11:34:33.498746   56868 command_runner.go:130] >       ],
	I1104 11:34:33.498755   56868 command_runner.go:130] >       "size": "94958644",
	I1104 11:34:33.498764   56868 command_runner.go:130] >       "uid": null,
	I1104 11:34:33.498773   56868 command_runner.go:130] >       "username": "",
	I1104 11:34:33.498779   56868 command_runner.go:130] >       "spec": null,
	I1104 11:34:33.498782   56868 command_runner.go:130] >       "pinned": false
	I1104 11:34:33.498786   56868 command_runner.go:130] >     },
	I1104 11:34:33.498791   56868 command_runner.go:130] >     {
	I1104 11:34:33.498797   56868 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I1104 11:34:33.498801   56868 command_runner.go:130] >       "repoTags": [
	I1104 11:34:33.498809   56868 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I1104 11:34:33.498818   56868 command_runner.go:130] >       ],
	I1104 11:34:33.498827   56868 command_runner.go:130] >       "repoDigests": [
	I1104 11:34:33.498842   56868 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I1104 11:34:33.498857   56868 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I1104 11:34:33.498865   56868 command_runner.go:130] >       ],
	I1104 11:34:33.498874   56868 command_runner.go:130] >       "size": "1363676",
	I1104 11:34:33.498882   56868 command_runner.go:130] >       "uid": null,
	I1104 11:34:33.498886   56868 command_runner.go:130] >       "username": "",
	I1104 11:34:33.498891   56868 command_runner.go:130] >       "spec": null,
	I1104 11:34:33.498895   56868 command_runner.go:130] >       "pinned": false
	I1104 11:34:33.498902   56868 command_runner.go:130] >     },
	I1104 11:34:33.498907   56868 command_runner.go:130] >     {
	I1104 11:34:33.498920   56868 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1104 11:34:33.498931   56868 command_runner.go:130] >       "repoTags": [
	I1104 11:34:33.498942   56868 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1104 11:34:33.498951   56868 command_runner.go:130] >       ],
	I1104 11:34:33.498960   56868 command_runner.go:130] >       "repoDigests": [
	I1104 11:34:33.498976   56868 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1104 11:34:33.498991   56868 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1104 11:34:33.499000   56868 command_runner.go:130] >       ],
	I1104 11:34:33.499010   56868 command_runner.go:130] >       "size": "31470524",
	I1104 11:34:33.499020   56868 command_runner.go:130] >       "uid": null,
	I1104 11:34:33.499030   56868 command_runner.go:130] >       "username": "",
	I1104 11:34:33.499040   56868 command_runner.go:130] >       "spec": null,
	I1104 11:34:33.499050   56868 command_runner.go:130] >       "pinned": false
	I1104 11:34:33.499058   56868 command_runner.go:130] >     },
	I1104 11:34:33.499066   56868 command_runner.go:130] >     {
	I1104 11:34:33.499074   56868 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I1104 11:34:33.499080   56868 command_runner.go:130] >       "repoTags": [
	I1104 11:34:33.499088   56868 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I1104 11:34:33.499096   56868 command_runner.go:130] >       ],
	I1104 11:34:33.499103   56868 command_runner.go:130] >       "repoDigests": [
	I1104 11:34:33.499118   56868 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I1104 11:34:33.499134   56868 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I1104 11:34:33.499142   56868 command_runner.go:130] >       ],
	I1104 11:34:33.499152   56868 command_runner.go:130] >       "size": "63273227",
	I1104 11:34:33.499161   56868 command_runner.go:130] >       "uid": null,
	I1104 11:34:33.499169   56868 command_runner.go:130] >       "username": "nonroot",
	I1104 11:34:33.499178   56868 command_runner.go:130] >       "spec": null,
	I1104 11:34:33.499198   56868 command_runner.go:130] >       "pinned": false
	I1104 11:34:33.499203   56868 command_runner.go:130] >     },
	I1104 11:34:33.499212   56868 command_runner.go:130] >     {
	I1104 11:34:33.499224   56868 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I1104 11:34:33.499233   56868 command_runner.go:130] >       "repoTags": [
	I1104 11:34:33.499243   56868 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I1104 11:34:33.499252   56868 command_runner.go:130] >       ],
	I1104 11:34:33.499265   56868 command_runner.go:130] >       "repoDigests": [
	I1104 11:34:33.499275   56868 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I1104 11:34:33.499290   56868 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I1104 11:34:33.499299   56868 command_runner.go:130] >       ],
	I1104 11:34:33.499307   56868 command_runner.go:130] >       "size": "149009664",
	I1104 11:34:33.499317   56868 command_runner.go:130] >       "uid": {
	I1104 11:34:33.499327   56868 command_runner.go:130] >         "value": "0"
	I1104 11:34:33.499336   56868 command_runner.go:130] >       },
	I1104 11:34:33.499345   56868 command_runner.go:130] >       "username": "",
	I1104 11:34:33.499355   56868 command_runner.go:130] >       "spec": null,
	I1104 11:34:33.499365   56868 command_runner.go:130] >       "pinned": false
	I1104 11:34:33.499372   56868 command_runner.go:130] >     },
	I1104 11:34:33.499375   56868 command_runner.go:130] >     {
	I1104 11:34:33.499388   56868 command_runner.go:130] >       "id": "9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173",
	I1104 11:34:33.499398   56868 command_runner.go:130] >       "repoTags": [
	I1104 11:34:33.499409   56868 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.2"
	I1104 11:34:33.499419   56868 command_runner.go:130] >       ],
	I1104 11:34:33.499428   56868 command_runner.go:130] >       "repoDigests": [
	I1104 11:34:33.499443   56868 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0",
	I1104 11:34:33.499460   56868 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a4fdc0ebc2950d76f2859c5f38f2d05b760ed09fd8006d7a8d98dd9b30bc55da"
	I1104 11:34:33.499467   56868 command_runner.go:130] >       ],
	I1104 11:34:33.499472   56868 command_runner.go:130] >       "size": "95274464",
	I1104 11:34:33.499479   56868 command_runner.go:130] >       "uid": {
	I1104 11:34:33.499488   56868 command_runner.go:130] >         "value": "0"
	I1104 11:34:33.499497   56868 command_runner.go:130] >       },
	I1104 11:34:33.499504   56868 command_runner.go:130] >       "username": "",
	I1104 11:34:33.499514   56868 command_runner.go:130] >       "spec": null,
	I1104 11:34:33.499523   56868 command_runner.go:130] >       "pinned": false
	I1104 11:34:33.499532   56868 command_runner.go:130] >     },
	I1104 11:34:33.499538   56868 command_runner.go:130] >     {
	I1104 11:34:33.499551   56868 command_runner.go:130] >       "id": "0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503",
	I1104 11:34:33.499561   56868 command_runner.go:130] >       "repoTags": [
	I1104 11:34:33.499569   56868 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.2"
	I1104 11:34:33.499576   56868 command_runner.go:130] >       ],
	I1104 11:34:33.499583   56868 command_runner.go:130] >       "repoDigests": [
	I1104 11:34:33.499608   56868 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:4ba16ce7d80945dc4bb8e85ac0794c6171bfa8a55c94fe5be415afb4c3eb938c",
	I1104 11:34:33.499623   56868 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752"
	I1104 11:34:33.499629   56868 command_runner.go:130] >       ],
	I1104 11:34:33.499639   56868 command_runner.go:130] >       "size": "89474374",
	I1104 11:34:33.499647   56868 command_runner.go:130] >       "uid": {
	I1104 11:34:33.499656   56868 command_runner.go:130] >         "value": "0"
	I1104 11:34:33.499663   56868 command_runner.go:130] >       },
	I1104 11:34:33.499668   56868 command_runner.go:130] >       "username": "",
	I1104 11:34:33.499674   56868 command_runner.go:130] >       "spec": null,
	I1104 11:34:33.499680   56868 command_runner.go:130] >       "pinned": false
	I1104 11:34:33.499686   56868 command_runner.go:130] >     },
	I1104 11:34:33.499692   56868 command_runner.go:130] >     {
	I1104 11:34:33.499703   56868 command_runner.go:130] >       "id": "505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38",
	I1104 11:34:33.499710   56868 command_runner.go:130] >       "repoTags": [
	I1104 11:34:33.499717   56868 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.2"
	I1104 11:34:33.499726   56868 command_runner.go:130] >       ],
	I1104 11:34:33.499736   56868 command_runner.go:130] >       "repoDigests": [
	I1104 11:34:33.499749   56868 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:22535649599e9f22b1b857afcbd9a8b36be238b2b3ea68e47f60bedcea48cd3b",
	I1104 11:34:33.499760   56868 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe"
	I1104 11:34:33.499769   56868 command_runner.go:130] >       ],
	I1104 11:34:33.499779   56868 command_runner.go:130] >       "size": "92783513",
	I1104 11:34:33.499786   56868 command_runner.go:130] >       "uid": null,
	I1104 11:34:33.499796   56868 command_runner.go:130] >       "username": "",
	I1104 11:34:33.499805   56868 command_runner.go:130] >       "spec": null,
	I1104 11:34:33.499814   56868 command_runner.go:130] >       "pinned": false
	I1104 11:34:33.499823   56868 command_runner.go:130] >     },
	I1104 11:34:33.499831   56868 command_runner.go:130] >     {
	I1104 11:34:33.499841   56868 command_runner.go:130] >       "id": "847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856",
	I1104 11:34:33.499849   56868 command_runner.go:130] >       "repoTags": [
	I1104 11:34:33.499854   56868 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.2"
	I1104 11:34:33.499861   56868 command_runner.go:130] >       ],
	I1104 11:34:33.499869   56868 command_runner.go:130] >       "repoDigests": [
	I1104 11:34:33.499884   56868 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282",
	I1104 11:34:33.499899   56868 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:a40aba236dfcd0fe9d1258dcb9d22a82d83e9ea6d35f7d3574949b12df928fe5"
	I1104 11:34:33.499908   56868 command_runner.go:130] >       ],
	I1104 11:34:33.499918   56868 command_runner.go:130] >       "size": "68457798",
	I1104 11:34:33.499928   56868 command_runner.go:130] >       "uid": {
	I1104 11:34:33.499938   56868 command_runner.go:130] >         "value": "0"
	I1104 11:34:33.499945   56868 command_runner.go:130] >       },
	I1104 11:34:33.499949   56868 command_runner.go:130] >       "username": "",
	I1104 11:34:33.499955   56868 command_runner.go:130] >       "spec": null,
	I1104 11:34:33.499964   56868 command_runner.go:130] >       "pinned": false
	I1104 11:34:33.499972   56868 command_runner.go:130] >     },
	I1104 11:34:33.499978   56868 command_runner.go:130] >     {
	I1104 11:34:33.499991   56868 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I1104 11:34:33.500001   56868 command_runner.go:130] >       "repoTags": [
	I1104 11:34:33.500011   56868 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I1104 11:34:33.500020   56868 command_runner.go:130] >       ],
	I1104 11:34:33.500030   56868 command_runner.go:130] >       "repoDigests": [
	I1104 11:34:33.500041   56868 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I1104 11:34:33.500054   56868 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I1104 11:34:33.500063   56868 command_runner.go:130] >       ],
	I1104 11:34:33.500070   56868 command_runner.go:130] >       "size": "742080",
	I1104 11:34:33.500079   56868 command_runner.go:130] >       "uid": {
	I1104 11:34:33.500088   56868 command_runner.go:130] >         "value": "65535"
	I1104 11:34:33.500096   56868 command_runner.go:130] >       },
	I1104 11:34:33.500105   56868 command_runner.go:130] >       "username": "",
	I1104 11:34:33.500113   56868 command_runner.go:130] >       "spec": null,
	I1104 11:34:33.500123   56868 command_runner.go:130] >       "pinned": true
	I1104 11:34:33.500128   56868 command_runner.go:130] >     }
	I1104 11:34:33.500135   56868 command_runner.go:130] >   ]
	I1104 11:34:33.500138   56868 command_runner.go:130] > }
	I1104 11:34:33.500315   56868 crio.go:514] all images are preloaded for cri-o runtime.
	I1104 11:34:33.500326   56868 crio.go:433] Images already preloaded, skipping extraction
	I1104 11:34:33.500379   56868 ssh_runner.go:195] Run: sudo crictl images --output json
	I1104 11:34:33.532356   56868 command_runner.go:130] > {
	I1104 11:34:33.532381   56868 command_runner.go:130] >   "images": [
	I1104 11:34:33.532387   56868 command_runner.go:130] >     {
	I1104 11:34:33.532397   56868 command_runner.go:130] >       "id": "3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52",
	I1104 11:34:33.532403   56868 command_runner.go:130] >       "repoTags": [
	I1104 11:34:33.532412   56868 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20241007-36f62932"
	I1104 11:34:33.532416   56868 command_runner.go:130] >       ],
	I1104 11:34:33.532421   56868 command_runner.go:130] >       "repoDigests": [
	I1104 11:34:33.532432   56868 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387",
	I1104 11:34:33.532445   56868 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e1b7077a015216fd2772941babf3d3204abecf98b97d82ecd149d00212c55fa7"
	I1104 11:34:33.532450   56868 command_runner.go:130] >       ],
	I1104 11:34:33.532457   56868 command_runner.go:130] >       "size": "94965812",
	I1104 11:34:33.532463   56868 command_runner.go:130] >       "uid": null,
	I1104 11:34:33.532470   56868 command_runner.go:130] >       "username": "",
	I1104 11:34:33.532483   56868 command_runner.go:130] >       "spec": null,
	I1104 11:34:33.532491   56868 command_runner.go:130] >       "pinned": false
	I1104 11:34:33.532497   56868 command_runner.go:130] >     },
	I1104 11:34:33.532503   56868 command_runner.go:130] >     {
	I1104 11:34:33.532514   56868 command_runner.go:130] >       "id": "9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5",
	I1104 11:34:33.532523   56868 command_runner.go:130] >       "repoTags": [
	I1104 11:34:33.532532   56868 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20241023-a345ebe4"
	I1104 11:34:33.532546   56868 command_runner.go:130] >       ],
	I1104 11:34:33.532552   56868 command_runner.go:130] >       "repoDigests": [
	I1104 11:34:33.532566   56868 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16",
	I1104 11:34:33.532581   56868 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e39a44bd13d0b4532d0436a1c2fafdd1a8c57fb327770004098162f0bb96132d"
	I1104 11:34:33.532590   56868 command_runner.go:130] >       ],
	I1104 11:34:33.532597   56868 command_runner.go:130] >       "size": "94958644",
	I1104 11:34:33.532604   56868 command_runner.go:130] >       "uid": null,
	I1104 11:34:33.532616   56868 command_runner.go:130] >       "username": "",
	I1104 11:34:33.532625   56868 command_runner.go:130] >       "spec": null,
	I1104 11:34:33.532632   56868 command_runner.go:130] >       "pinned": false
	I1104 11:34:33.532638   56868 command_runner.go:130] >     },
	I1104 11:34:33.532643   56868 command_runner.go:130] >     {
	I1104 11:34:33.532654   56868 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I1104 11:34:33.532663   56868 command_runner.go:130] >       "repoTags": [
	I1104 11:34:33.532672   56868 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I1104 11:34:33.532680   56868 command_runner.go:130] >       ],
	I1104 11:34:33.532688   56868 command_runner.go:130] >       "repoDigests": [
	I1104 11:34:33.532703   56868 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I1104 11:34:33.532718   56868 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I1104 11:34:33.532727   56868 command_runner.go:130] >       ],
	I1104 11:34:33.532735   56868 command_runner.go:130] >       "size": "1363676",
	I1104 11:34:33.532745   56868 command_runner.go:130] >       "uid": null,
	I1104 11:34:33.532755   56868 command_runner.go:130] >       "username": "",
	I1104 11:34:33.532774   56868 command_runner.go:130] >       "spec": null,
	I1104 11:34:33.532784   56868 command_runner.go:130] >       "pinned": false
	I1104 11:34:33.532789   56868 command_runner.go:130] >     },
	I1104 11:34:33.532794   56868 command_runner.go:130] >     {
	I1104 11:34:33.532804   56868 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1104 11:34:33.532814   56868 command_runner.go:130] >       "repoTags": [
	I1104 11:34:33.532826   56868 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1104 11:34:33.532834   56868 command_runner.go:130] >       ],
	I1104 11:34:33.532843   56868 command_runner.go:130] >       "repoDigests": [
	I1104 11:34:33.532860   56868 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1104 11:34:33.532880   56868 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1104 11:34:33.532888   56868 command_runner.go:130] >       ],
	I1104 11:34:33.532895   56868 command_runner.go:130] >       "size": "31470524",
	I1104 11:34:33.532901   56868 command_runner.go:130] >       "uid": null,
	I1104 11:34:33.532910   56868 command_runner.go:130] >       "username": "",
	I1104 11:34:33.532917   56868 command_runner.go:130] >       "spec": null,
	I1104 11:34:33.532927   56868 command_runner.go:130] >       "pinned": false
	I1104 11:34:33.532933   56868 command_runner.go:130] >     },
	I1104 11:34:33.532940   56868 command_runner.go:130] >     {
	I1104 11:34:33.532953   56868 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I1104 11:34:33.532964   56868 command_runner.go:130] >       "repoTags": [
	I1104 11:34:33.532975   56868 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I1104 11:34:33.532984   56868 command_runner.go:130] >       ],
	I1104 11:34:33.532991   56868 command_runner.go:130] >       "repoDigests": [
	I1104 11:34:33.533008   56868 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I1104 11:34:33.533024   56868 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I1104 11:34:33.533032   56868 command_runner.go:130] >       ],
	I1104 11:34:33.533040   56868 command_runner.go:130] >       "size": "63273227",
	I1104 11:34:33.533049   56868 command_runner.go:130] >       "uid": null,
	I1104 11:34:33.533057   56868 command_runner.go:130] >       "username": "nonroot",
	I1104 11:34:33.533065   56868 command_runner.go:130] >       "spec": null,
	I1104 11:34:33.533086   56868 command_runner.go:130] >       "pinned": false
	I1104 11:34:33.533102   56868 command_runner.go:130] >     },
	I1104 11:34:33.533111   56868 command_runner.go:130] >     {
	I1104 11:34:33.533123   56868 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I1104 11:34:33.533133   56868 command_runner.go:130] >       "repoTags": [
	I1104 11:34:33.533144   56868 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I1104 11:34:33.533153   56868 command_runner.go:130] >       ],
	I1104 11:34:33.533161   56868 command_runner.go:130] >       "repoDigests": [
	I1104 11:34:33.533177   56868 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I1104 11:34:33.533197   56868 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I1104 11:34:33.533205   56868 command_runner.go:130] >       ],
	I1104 11:34:33.533213   56868 command_runner.go:130] >       "size": "149009664",
	I1104 11:34:33.533222   56868 command_runner.go:130] >       "uid": {
	I1104 11:34:33.533240   56868 command_runner.go:130] >         "value": "0"
	I1104 11:34:33.533252   56868 command_runner.go:130] >       },
	I1104 11:34:33.533261   56868 command_runner.go:130] >       "username": "",
	I1104 11:34:33.533270   56868 command_runner.go:130] >       "spec": null,
	I1104 11:34:33.533278   56868 command_runner.go:130] >       "pinned": false
	I1104 11:34:33.533286   56868 command_runner.go:130] >     },
	I1104 11:34:33.533293   56868 command_runner.go:130] >     {
	I1104 11:34:33.533307   56868 command_runner.go:130] >       "id": "9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173",
	I1104 11:34:33.533316   56868 command_runner.go:130] >       "repoTags": [
	I1104 11:34:33.533325   56868 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.2"
	I1104 11:34:33.533333   56868 command_runner.go:130] >       ],
	I1104 11:34:33.533341   56868 command_runner.go:130] >       "repoDigests": [
	I1104 11:34:33.533356   56868 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0",
	I1104 11:34:33.533372   56868 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a4fdc0ebc2950d76f2859c5f38f2d05b760ed09fd8006d7a8d98dd9b30bc55da"
	I1104 11:34:33.533382   56868 command_runner.go:130] >       ],
	I1104 11:34:33.533393   56868 command_runner.go:130] >       "size": "95274464",
	I1104 11:34:33.533402   56868 command_runner.go:130] >       "uid": {
	I1104 11:34:33.533410   56868 command_runner.go:130] >         "value": "0"
	I1104 11:34:33.533418   56868 command_runner.go:130] >       },
	I1104 11:34:33.533426   56868 command_runner.go:130] >       "username": "",
	I1104 11:34:33.533437   56868 command_runner.go:130] >       "spec": null,
	I1104 11:34:33.533445   56868 command_runner.go:130] >       "pinned": false
	I1104 11:34:33.533453   56868 command_runner.go:130] >     },
	I1104 11:34:33.533459   56868 command_runner.go:130] >     {
	I1104 11:34:33.533473   56868 command_runner.go:130] >       "id": "0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503",
	I1104 11:34:33.533483   56868 command_runner.go:130] >       "repoTags": [
	I1104 11:34:33.533494   56868 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.2"
	I1104 11:34:33.533502   56868 command_runner.go:130] >       ],
	I1104 11:34:33.533510   56868 command_runner.go:130] >       "repoDigests": [
	I1104 11:34:33.533534   56868 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:4ba16ce7d80945dc4bb8e85ac0794c6171bfa8a55c94fe5be415afb4c3eb938c",
	I1104 11:34:33.533550   56868 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752"
	I1104 11:34:33.533558   56868 command_runner.go:130] >       ],
	I1104 11:34:33.533566   56868 command_runner.go:130] >       "size": "89474374",
	I1104 11:34:33.533576   56868 command_runner.go:130] >       "uid": {
	I1104 11:34:33.533586   56868 command_runner.go:130] >         "value": "0"
	I1104 11:34:33.533592   56868 command_runner.go:130] >       },
	I1104 11:34:33.533600   56868 command_runner.go:130] >       "username": "",
	I1104 11:34:33.533607   56868 command_runner.go:130] >       "spec": null,
	I1104 11:34:33.533617   56868 command_runner.go:130] >       "pinned": false
	I1104 11:34:33.533623   56868 command_runner.go:130] >     },
	I1104 11:34:33.533631   56868 command_runner.go:130] >     {
	I1104 11:34:33.533642   56868 command_runner.go:130] >       "id": "505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38",
	I1104 11:34:33.533650   56868 command_runner.go:130] >       "repoTags": [
	I1104 11:34:33.533660   56868 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.2"
	I1104 11:34:33.533669   56868 command_runner.go:130] >       ],
	I1104 11:34:33.533677   56868 command_runner.go:130] >       "repoDigests": [
	I1104 11:34:33.533693   56868 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:22535649599e9f22b1b857afcbd9a8b36be238b2b3ea68e47f60bedcea48cd3b",
	I1104 11:34:33.533711   56868 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe"
	I1104 11:34:33.533720   56868 command_runner.go:130] >       ],
	I1104 11:34:33.533728   56868 command_runner.go:130] >       "size": "92783513",
	I1104 11:34:33.533737   56868 command_runner.go:130] >       "uid": null,
	I1104 11:34:33.533744   56868 command_runner.go:130] >       "username": "",
	I1104 11:34:33.533751   56868 command_runner.go:130] >       "spec": null,
	I1104 11:34:33.533762   56868 command_runner.go:130] >       "pinned": false
	I1104 11:34:33.533770   56868 command_runner.go:130] >     },
	I1104 11:34:33.533778   56868 command_runner.go:130] >     {
	I1104 11:34:33.533789   56868 command_runner.go:130] >       "id": "847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856",
	I1104 11:34:33.533799   56868 command_runner.go:130] >       "repoTags": [
	I1104 11:34:33.533809   56868 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.2"
	I1104 11:34:33.533817   56868 command_runner.go:130] >       ],
	I1104 11:34:33.533825   56868 command_runner.go:130] >       "repoDigests": [
	I1104 11:34:33.533840   56868 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282",
	I1104 11:34:33.533855   56868 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:a40aba236dfcd0fe9d1258dcb9d22a82d83e9ea6d35f7d3574949b12df928fe5"
	I1104 11:34:33.533865   56868 command_runner.go:130] >       ],
	I1104 11:34:33.533872   56868 command_runner.go:130] >       "size": "68457798",
	I1104 11:34:33.533881   56868 command_runner.go:130] >       "uid": {
	I1104 11:34:33.533889   56868 command_runner.go:130] >         "value": "0"
	I1104 11:34:33.533897   56868 command_runner.go:130] >       },
	I1104 11:34:33.533903   56868 command_runner.go:130] >       "username": "",
	I1104 11:34:33.533913   56868 command_runner.go:130] >       "spec": null,
	I1104 11:34:33.533920   56868 command_runner.go:130] >       "pinned": false
	I1104 11:34:33.533928   56868 command_runner.go:130] >     },
	I1104 11:34:33.533935   56868 command_runner.go:130] >     {
	I1104 11:34:33.533949   56868 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I1104 11:34:33.533959   56868 command_runner.go:130] >       "repoTags": [
	I1104 11:34:33.533970   56868 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I1104 11:34:33.533978   56868 command_runner.go:130] >       ],
	I1104 11:34:33.533984   56868 command_runner.go:130] >       "repoDigests": [
	I1104 11:34:33.533999   56868 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I1104 11:34:33.534017   56868 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I1104 11:34:33.534024   56868 command_runner.go:130] >       ],
	I1104 11:34:33.534032   56868 command_runner.go:130] >       "size": "742080",
	I1104 11:34:33.534041   56868 command_runner.go:130] >       "uid": {
	I1104 11:34:33.534049   56868 command_runner.go:130] >         "value": "65535"
	I1104 11:34:33.534057   56868 command_runner.go:130] >       },
	I1104 11:34:33.534065   56868 command_runner.go:130] >       "username": "",
	I1104 11:34:33.534073   56868 command_runner.go:130] >       "spec": null,
	I1104 11:34:33.534084   56868 command_runner.go:130] >       "pinned": true
	I1104 11:34:33.534092   56868 command_runner.go:130] >     }
	I1104 11:34:33.534098   56868 command_runner.go:130] >   ]
	I1104 11:34:33.534107   56868 command_runner.go:130] > }
	I1104 11:34:33.534245   56868 crio.go:514] all images are preloaded for cri-o runtime.
	I1104 11:34:33.534257   56868 cache_images.go:84] Images are preloaded, skipping loading
	I1104 11:34:33.534265   56868 kubeadm.go:934] updating node { 192.168.39.86 8443 v1.31.2 crio true true} ...
	I1104 11:34:33.534381   56868 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-453447 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.86
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:multinode-453447 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1104 11:34:33.534471   56868 ssh_runner.go:195] Run: crio config
	I1104 11:34:33.644071   56868 command_runner.go:130] ! time="2024-11-04 11:34:33.617697441Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I1104 11:34:33.652825   56868 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1104 11:34:33.660534   56868 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1104 11:34:33.660555   56868 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1104 11:34:33.660561   56868 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1104 11:34:33.660565   56868 command_runner.go:130] > #
	I1104 11:34:33.660571   56868 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1104 11:34:33.660578   56868 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1104 11:34:33.660583   56868 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1104 11:34:33.660592   56868 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1104 11:34:33.660597   56868 command_runner.go:130] > # reload'.
	I1104 11:34:33.660606   56868 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1104 11:34:33.660615   56868 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1104 11:34:33.660629   56868 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1104 11:34:33.660638   56868 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1104 11:34:33.660642   56868 command_runner.go:130] > [crio]
	I1104 11:34:33.660648   56868 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1104 11:34:33.660656   56868 command_runner.go:130] > # containers images, in this directory.
	I1104 11:34:33.660660   56868 command_runner.go:130] > root = "/var/lib/containers/storage"
	I1104 11:34:33.660671   56868 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1104 11:34:33.660676   56868 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I1104 11:34:33.660683   56868 command_runner.go:130] > # Path to the "imagestore". If set, CRI-O stores its images in this directory, separate from the root directory.
	I1104 11:34:33.660688   56868 command_runner.go:130] > # imagestore = ""
	I1104 11:34:33.660703   56868 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1104 11:34:33.660717   56868 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1104 11:34:33.660725   56868 command_runner.go:130] > storage_driver = "overlay"
	I1104 11:34:33.660735   56868 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1104 11:34:33.660748   56868 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1104 11:34:33.660758   56868 command_runner.go:130] > storage_option = [
	I1104 11:34:33.660765   56868 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I1104 11:34:33.660769   56868 command_runner.go:130] > ]
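For reference, the storage settings logged above do not have to be edited in the main configuration file: CRI-O also reads drop-in files from /etc/crio/crio.conf.d/. A minimal sketch of such a drop-in, assuming the same overlay defaults (the file name is illustrative and not taken from this run):

	# /etc/crio/crio.conf.d/10-storage.conf -- illustrative sketch, not part of this log
	[crio]
	root = "/var/lib/containers/storage"
	runroot = "/var/run/containers/storage"
	storage_driver = "overlay"
	storage_option = [
		"overlay.mountopt=nodev,metacopy=on",
	]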
	I1104 11:34:33.660777   56868 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1104 11:34:33.660785   56868 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1104 11:34:33.660789   56868 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1104 11:34:33.660797   56868 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1104 11:34:33.660803   56868 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1104 11:34:33.660813   56868 command_runner.go:130] > # always happen on a node reboot
	I1104 11:34:33.660825   56868 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1104 11:34:33.660842   56868 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1104 11:34:33.660854   56868 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1104 11:34:33.660863   56868 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1104 11:34:33.660871   56868 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I1104 11:34:33.660880   56868 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1104 11:34:33.660889   56868 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1104 11:34:33.660895   56868 command_runner.go:130] > # internal_wipe = true
	I1104 11:34:33.660906   56868 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1104 11:34:33.660919   56868 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1104 11:34:33.660928   56868 command_runner.go:130] > # internal_repair = false
	I1104 11:34:33.660940   56868 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1104 11:34:33.660952   56868 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1104 11:34:33.660963   56868 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1104 11:34:33.660971   56868 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1104 11:34:33.660979   56868 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1104 11:34:33.660986   56868 command_runner.go:130] > [crio.api]
	I1104 11:34:33.660995   56868 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1104 11:34:33.661005   56868 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1104 11:34:33.661019   56868 command_runner.go:130] > # IP address on which the stream server will listen.
	I1104 11:34:33.661029   56868 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1104 11:34:33.661042   56868 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1104 11:34:33.661053   56868 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1104 11:34:33.661063   56868 command_runner.go:130] > # stream_port = "0"
	I1104 11:34:33.661073   56868 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1104 11:34:33.661079   56868 command_runner.go:130] > # stream_enable_tls = false
	I1104 11:34:33.661088   56868 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1104 11:34:33.661097   56868 command_runner.go:130] > # stream_idle_timeout = ""
	I1104 11:34:33.661114   56868 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1104 11:34:33.661126   56868 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1104 11:34:33.661135   56868 command_runner.go:130] > # minutes.
	I1104 11:34:33.661143   56868 command_runner.go:130] > # stream_tls_cert = ""
	I1104 11:34:33.661155   56868 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1104 11:34:33.661165   56868 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1104 11:34:33.661173   56868 command_runner.go:130] > # stream_tls_key = ""
	I1104 11:34:33.661190   56868 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1104 11:34:33.661205   56868 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1104 11:34:33.661221   56868 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1104 11:34:33.661241   56868 command_runner.go:130] > # stream_tls_ca = ""
	I1104 11:34:33.661256   56868 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1104 11:34:33.661266   56868 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I1104 11:34:33.661278   56868 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1104 11:34:33.661287   56868 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I1104 11:34:33.661300   56868 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1104 11:34:33.661313   56868 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1104 11:34:33.661322   56868 command_runner.go:130] > [crio.runtime]
	I1104 11:34:33.661334   56868 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1104 11:34:33.661345   56868 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1104 11:34:33.661354   56868 command_runner.go:130] > # "nofile=1024:2048"
	I1104 11:34:33.661366   56868 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1104 11:34:33.661373   56868 command_runner.go:130] > # default_ulimits = [
	I1104 11:34:33.661378   56868 command_runner.go:130] > # ]
	I1104 11:34:33.661400   56868 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1104 11:34:33.661410   56868 command_runner.go:130] > # no_pivot = false
	I1104 11:34:33.661419   56868 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1104 11:34:33.661432   56868 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1104 11:34:33.661443   56868 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1104 11:34:33.661453   56868 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1104 11:34:33.661463   56868 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1104 11:34:33.661476   56868 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1104 11:34:33.661484   56868 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I1104 11:34:33.661489   56868 command_runner.go:130] > # Cgroup setting for conmon
	I1104 11:34:33.661503   56868 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1104 11:34:33.661512   56868 command_runner.go:130] > conmon_cgroup = "pod"
	I1104 11:34:33.661522   56868 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1104 11:34:33.661534   56868 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1104 11:34:33.661551   56868 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1104 11:34:33.661559   56868 command_runner.go:130] > conmon_env = [
	I1104 11:34:33.661571   56868 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1104 11:34:33.661579   56868 command_runner.go:130] > ]
	I1104 11:34:33.661588   56868 command_runner.go:130] > # Additional environment variables to set for all the
	I1104 11:34:33.661597   56868 command_runner.go:130] > # containers. These are overridden if set in the
	I1104 11:34:33.661610   56868 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1104 11:34:33.661621   56868 command_runner.go:130] > # default_env = [
	I1104 11:34:33.661629   56868 command_runner.go:130] > # ]
	I1104 11:34:33.661644   56868 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1104 11:34:33.661658   56868 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I1104 11:34:33.661667   56868 command_runner.go:130] > # selinux = false
	I1104 11:34:33.661678   56868 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1104 11:34:33.661687   56868 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1104 11:34:33.661700   56868 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1104 11:34:33.661709   56868 command_runner.go:130] > # seccomp_profile = ""
	I1104 11:34:33.661719   56868 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1104 11:34:33.661731   56868 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1104 11:34:33.661744   56868 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1104 11:34:33.661755   56868 command_runner.go:130] > # which might increase security.
	I1104 11:34:33.661765   56868 command_runner.go:130] > # This option is currently deprecated,
	I1104 11:34:33.661777   56868 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I1104 11:34:33.661784   56868 command_runner.go:130] > seccomp_use_default_when_empty = false
	I1104 11:34:33.661793   56868 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1104 11:34:33.661807   56868 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1104 11:34:33.661819   56868 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1104 11:34:33.661833   56868 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1104 11:34:33.661844   56868 command_runner.go:130] > # This option supports live configuration reload.
	I1104 11:34:33.661854   56868 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1104 11:34:33.661865   56868 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1104 11:34:33.661871   56868 command_runner.go:130] > # the cgroup blockio controller.
	I1104 11:34:33.661878   56868 command_runner.go:130] > # blockio_config_file = ""
	I1104 11:34:33.661892   56868 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1104 11:34:33.661902   56868 command_runner.go:130] > # blockio parameters.
	I1104 11:34:33.661912   56868 command_runner.go:130] > # blockio_reload = false
	I1104 11:34:33.661925   56868 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1104 11:34:33.661935   56868 command_runner.go:130] > # irqbalance daemon.
	I1104 11:34:33.661946   56868 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1104 11:34:33.661958   56868 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1104 11:34:33.661971   56868 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1104 11:34:33.661985   56868 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1104 11:34:33.661999   56868 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1104 11:34:33.662012   56868 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1104 11:34:33.662023   56868 command_runner.go:130] > # This option supports live configuration reload.
	I1104 11:34:33.662033   56868 command_runner.go:130] > # rdt_config_file = ""
	I1104 11:34:33.662044   56868 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1104 11:34:33.662052   56868 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1104 11:34:33.662074   56868 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1104 11:34:33.662085   56868 command_runner.go:130] > # separate_pull_cgroup = ""
	I1104 11:34:33.662095   56868 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1104 11:34:33.662108   56868 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1104 11:34:33.662117   56868 command_runner.go:130] > # will be added.
	I1104 11:34:33.662127   56868 command_runner.go:130] > # default_capabilities = [
	I1104 11:34:33.662135   56868 command_runner.go:130] > # 	"CHOWN",
	I1104 11:34:33.662143   56868 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1104 11:34:33.662151   56868 command_runner.go:130] > # 	"FSETID",
	I1104 11:34:33.662157   56868 command_runner.go:130] > # 	"FOWNER",
	I1104 11:34:33.662162   56868 command_runner.go:130] > # 	"SETGID",
	I1104 11:34:33.662171   56868 command_runner.go:130] > # 	"SETUID",
	I1104 11:34:33.662185   56868 command_runner.go:130] > # 	"SETPCAP",
	I1104 11:34:33.662195   56868 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1104 11:34:33.662204   56868 command_runner.go:130] > # 	"KILL",
	I1104 11:34:33.662213   56868 command_runner.go:130] > # ]
	I1104 11:34:33.662227   56868 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1104 11:34:33.662241   56868 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1104 11:34:33.662250   56868 command_runner.go:130] > # add_inheritable_capabilities = false
	I1104 11:34:33.662261   56868 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1104 11:34:33.662274   56868 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1104 11:34:33.662284   56868 command_runner.go:130] > default_sysctls = [
	I1104 11:34:33.662292   56868 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1104 11:34:33.662301   56868 command_runner.go:130] > ]
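To illustrate the default_sysctls list above, an override can extend it with additional entries; the second sysctl below is a hypothetical addition, not a value observed in this run:

	# illustrative sketch -- extends the sysctl defaults shown in the logged config
	default_sysctls = [
		"net.ipv4.ip_unprivileged_port_start=0",
		"net.ipv4.ping_group_range=0 2147483647",
	]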
	I1104 11:34:33.662311   56868 command_runner.go:130] > # List of devices on the host that a
	I1104 11:34:33.662324   56868 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1104 11:34:33.662333   56868 command_runner.go:130] > # allowed_devices = [
	I1104 11:34:33.662342   56868 command_runner.go:130] > # 	"/dev/fuse",
	I1104 11:34:33.662349   56868 command_runner.go:130] > # ]
	I1104 11:34:33.662355   56868 command_runner.go:130] > # List of additional devices, specified as
	I1104 11:34:33.662367   56868 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1104 11:34:33.662379   56868 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1104 11:34:33.662395   56868 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1104 11:34:33.662405   56868 command_runner.go:130] > # additional_devices = [
	I1104 11:34:33.662413   56868 command_runner.go:130] > # ]
	I1104 11:34:33.662424   56868 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1104 11:34:33.662433   56868 command_runner.go:130] > # cdi_spec_dirs = [
	I1104 11:34:33.662441   56868 command_runner.go:130] > # 	"/etc/cdi",
	I1104 11:34:33.662447   56868 command_runner.go:130] > # 	"/var/run/cdi",
	I1104 11:34:33.662455   56868 command_runner.go:130] > # ]
	I1104 11:34:33.662468   56868 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1104 11:34:33.662482   56868 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1104 11:34:33.662491   56868 command_runner.go:130] > # Defaults to false.
	I1104 11:34:33.662503   56868 command_runner.go:130] > # device_ownership_from_security_context = false
	I1104 11:34:33.662515   56868 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1104 11:34:33.662528   56868 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1104 11:34:33.662535   56868 command_runner.go:130] > # hooks_dir = [
	I1104 11:34:33.662540   56868 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1104 11:34:33.662548   56868 command_runner.go:130] > # ]
	I1104 11:34:33.662561   56868 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1104 11:34:33.662574   56868 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1104 11:34:33.662585   56868 command_runner.go:130] > # its default mounts from the following two files:
	I1104 11:34:33.662592   56868 command_runner.go:130] > #
	I1104 11:34:33.662605   56868 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1104 11:34:33.662617   56868 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1104 11:34:33.662626   56868 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1104 11:34:33.662634   56868 command_runner.go:130] > #
	I1104 11:34:33.662643   56868 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1104 11:34:33.662656   56868 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1104 11:34:33.662670   56868 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1104 11:34:33.662682   56868 command_runner.go:130] > #      only add mounts it finds in this file.
	I1104 11:34:33.662690   56868 command_runner.go:130] > #
	I1104 11:34:33.662699   56868 command_runner.go:130] > # default_mounts_file = ""
	I1104 11:34:33.662710   56868 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1104 11:34:33.662723   56868 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1104 11:34:33.662729   56868 command_runner.go:130] > pids_limit = 1024
	I1104 11:34:33.662739   56868 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1104 11:34:33.662753   56868 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1104 11:34:33.662766   56868 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1104 11:34:33.662782   56868 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1104 11:34:33.662792   56868 command_runner.go:130] > # log_size_max = -1
	I1104 11:34:33.662806   56868 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1104 11:34:33.662817   56868 command_runner.go:130] > # log_to_journald = false
	I1104 11:34:33.662826   56868 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1104 11:34:33.662837   56868 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1104 11:34:33.662849   56868 command_runner.go:130] > # Path to directory for container attach sockets.
	I1104 11:34:33.662861   56868 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1104 11:34:33.662872   56868 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1104 11:34:33.662882   56868 command_runner.go:130] > # bind_mount_prefix = ""
	I1104 11:34:33.662894   56868 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1104 11:34:33.662903   56868 command_runner.go:130] > # read_only = false
	I1104 11:34:33.662913   56868 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1104 11:34:33.662922   56868 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1104 11:34:33.662931   56868 command_runner.go:130] > # live configuration reload.
	I1104 11:34:33.662940   56868 command_runner.go:130] > # log_level = "info"
	I1104 11:34:33.662950   56868 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1104 11:34:33.662962   56868 command_runner.go:130] > # This option supports live configuration reload.
	I1104 11:34:33.662972   56868 command_runner.go:130] > # log_filter = ""
	I1104 11:34:33.662985   56868 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1104 11:34:33.662999   56868 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1104 11:34:33.663007   56868 command_runner.go:130] > # separated by comma.
	I1104 11:34:33.663018   56868 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1104 11:34:33.663027   56868 command_runner.go:130] > # uid_mappings = ""
	I1104 11:34:33.663040   56868 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1104 11:34:33.663053   56868 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1104 11:34:33.663063   56868 command_runner.go:130] > # separated by comma.
	I1104 11:34:33.663078   56868 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1104 11:34:33.663087   56868 command_runner.go:130] > # gid_mappings = ""
	I1104 11:34:33.663100   56868 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1104 11:34:33.663109   56868 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1104 11:34:33.663121   56868 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1104 11:34:33.663137   56868 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1104 11:34:33.663147   56868 command_runner.go:130] > # minimum_mappable_uid = -1
	I1104 11:34:33.663159   56868 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1104 11:34:33.663172   56868 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1104 11:34:33.663188   56868 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1104 11:34:33.663199   56868 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1104 11:34:33.663211   56868 command_runner.go:130] > # minimum_mappable_gid = -1
	I1104 11:34:33.663224   56868 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1104 11:34:33.663236   56868 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1104 11:34:33.663249   56868 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1104 11:34:33.663258   56868 command_runner.go:130] > # ctr_stop_timeout = 30
	I1104 11:34:33.663271   56868 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1104 11:34:33.663282   56868 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1104 11:34:33.663289   56868 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1104 11:34:33.663297   56868 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1104 11:34:33.663308   56868 command_runner.go:130] > drop_infra_ctr = false
	I1104 11:34:33.663321   56868 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1104 11:34:33.663332   56868 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1104 11:34:33.663346   56868 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1104 11:34:33.663356   56868 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1104 11:34:33.663369   56868 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I1104 11:34:33.663377   56868 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1104 11:34:33.663389   56868 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1104 11:34:33.663401   56868 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1104 11:34:33.663411   56868 command_runner.go:130] > # shared_cpuset = ""
	I1104 11:34:33.663423   56868 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1104 11:34:33.663433   56868 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1104 11:34:33.663443   56868 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1104 11:34:33.663454   56868 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1104 11:34:33.663463   56868 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I1104 11:34:33.663471   56868 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1104 11:34:33.663483   56868 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1104 11:34:33.663495   56868 command_runner.go:130] > # enable_criu_support = false
	I1104 11:34:33.663506   56868 command_runner.go:130] > # Enable/disable the generation of the container,
	I1104 11:34:33.663519   56868 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1104 11:34:33.663529   56868 command_runner.go:130] > # enable_pod_events = false
	I1104 11:34:33.663543   56868 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1104 11:34:33.663564   56868 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1104 11:34:33.663572   56868 command_runner.go:130] > # default_runtime = "runc"
	I1104 11:34:33.663583   56868 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1104 11:34:33.663598   56868 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1104 11:34:33.663616   56868 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1104 11:34:33.663631   56868 command_runner.go:130] > # creation as a file is not desired either.
	I1104 11:34:33.663647   56868 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1104 11:34:33.663657   56868 command_runner.go:130] > # the hostname is being managed dynamically.
	I1104 11:34:33.663664   56868 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1104 11:34:33.663668   56868 command_runner.go:130] > # ]
	I1104 11:34:33.663681   56868 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1104 11:34:33.663695   56868 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1104 11:34:33.663708   56868 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1104 11:34:33.663719   56868 command_runner.go:130] > # Each entry in the table should follow the format:
	I1104 11:34:33.663727   56868 command_runner.go:130] > #
	I1104 11:34:33.663735   56868 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1104 11:34:33.663745   56868 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1104 11:34:33.663770   56868 command_runner.go:130] > # runtime_type = "oci"
	I1104 11:34:33.663780   56868 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1104 11:34:33.663791   56868 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1104 11:34:33.663798   56868 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1104 11:34:33.663809   56868 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1104 11:34:33.663818   56868 command_runner.go:130] > # monitor_env = []
	I1104 11:34:33.663828   56868 command_runner.go:130] > # privileged_without_host_devices = false
	I1104 11:34:33.663837   56868 command_runner.go:130] > # allowed_annotations = []
	I1104 11:34:33.663850   56868 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1104 11:34:33.663858   56868 command_runner.go:130] > # Where:
	I1104 11:34:33.663863   56868 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1104 11:34:33.663875   56868 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1104 11:34:33.663888   56868 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1104 11:34:33.663901   56868 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1104 11:34:33.663911   56868 command_runner.go:130] > #   in $PATH.
	I1104 11:34:33.663924   56868 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1104 11:34:33.663934   56868 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1104 11:34:33.663947   56868 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1104 11:34:33.663953   56868 command_runner.go:130] > #   state.
	I1104 11:34:33.663961   56868 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1104 11:34:33.663973   56868 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1104 11:34:33.663988   56868 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1104 11:34:33.664000   56868 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1104 11:34:33.664013   56868 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1104 11:34:33.664027   56868 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1104 11:34:33.664041   56868 command_runner.go:130] > #   The currently recognized values are:
	I1104 11:34:33.664052   56868 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1104 11:34:33.664063   56868 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1104 11:34:33.664076   56868 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1104 11:34:33.664088   56868 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1104 11:34:33.664104   56868 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1104 11:34:33.664117   56868 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1104 11:34:33.664131   56868 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1104 11:34:33.664143   56868 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1104 11:34:33.664154   56868 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1104 11:34:33.664163   56868 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1104 11:34:33.664174   56868 command_runner.go:130] > #   deprecated option "conmon".
	I1104 11:34:33.664192   56868 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1104 11:34:33.664204   56868 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1104 11:34:33.664218   56868 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1104 11:34:33.664229   56868 command_runner.go:130] > #   should be moved to the container's cgroup
	I1104 11:34:33.664242   56868 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1104 11:34:33.664252   56868 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1104 11:34:33.664262   56868 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1104 11:34:33.664272   56868 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1104 11:34:33.664281   56868 command_runner.go:130] > #
	I1104 11:34:33.664289   56868 command_runner.go:130] > # Using the seccomp notifier feature:
	I1104 11:34:33.664300   56868 command_runner.go:130] > #
	I1104 11:34:33.664313   56868 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1104 11:34:33.664326   56868 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1104 11:34:33.664333   56868 command_runner.go:130] > #
	I1104 11:34:33.664346   56868 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1104 11:34:33.664355   56868 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1104 11:34:33.664362   56868 command_runner.go:130] > #
	I1104 11:34:33.664372   56868 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1104 11:34:33.664382   56868 command_runner.go:130] > # feature.
	I1104 11:34:33.664388   56868 command_runner.go:130] > #
	I1104 11:34:33.664401   56868 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1104 11:34:33.664413   56868 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1104 11:34:33.664427   56868 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1104 11:34:33.664443   56868 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1104 11:34:33.664453   56868 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1104 11:34:33.664459   56868 command_runner.go:130] > #
	I1104 11:34:33.664469   56868 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1104 11:34:33.664482   56868 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1104 11:34:33.664490   56868 command_runner.go:130] > #
	I1104 11:34:33.664504   56868 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1104 11:34:33.664516   56868 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1104 11:34:33.664524   56868 command_runner.go:130] > #
	I1104 11:34:33.664535   56868 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1104 11:34:33.664546   56868 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1104 11:34:33.664553   56868 command_runner.go:130] > # limitation.
	I1104 11:34:33.664560   56868 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1104 11:34:33.664570   56868 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I1104 11:34:33.664580   56868 command_runner.go:130] > runtime_type = "oci"
	I1104 11:34:33.664589   56868 command_runner.go:130] > runtime_root = "/run/runc"
	I1104 11:34:33.664598   56868 command_runner.go:130] > runtime_config_path = ""
	I1104 11:34:33.664609   56868 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1104 11:34:33.664618   56868 command_runner.go:130] > monitor_cgroup = "pod"
	I1104 11:34:33.664628   56868 command_runner.go:130] > monitor_exec_cgroup = ""
	I1104 11:34:33.664635   56868 command_runner.go:130] > monitor_env = [
	I1104 11:34:33.664642   56868 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1104 11:34:33.664650   56868 command_runner.go:130] > ]
	I1104 11:34:33.664661   56868 command_runner.go:130] > privileged_without_host_devices = false
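To illustrate the [crio.runtime.runtimes.runtime-handler] format documented above, a second handler could be declared alongside runc; the crun name and paths below are assumptions made for the sketch, not values observed in this log:

	# illustrative sketch of an additional runtime handler (crun paths are assumed)
	[crio.runtime.runtimes.crun]
	runtime_path = "/usr/bin/crun"
	runtime_type = "oci"
	runtime_root = "/run/crun"
	monitor_path = "/usr/libexec/crio/conmon"
	monitor_cgroup = "pod"
	monitor_env = [
		"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	]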
	I1104 11:34:33.664674   56868 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1104 11:34:33.664685   56868 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1104 11:34:33.664698   56868 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1104 11:34:33.664713   56868 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1104 11:34:33.664727   56868 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1104 11:34:33.664734   56868 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1104 11:34:33.664754   56868 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1104 11:34:33.664770   56868 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1104 11:34:33.664780   56868 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1104 11:34:33.664792   56868 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1104 11:34:33.664798   56868 command_runner.go:130] > # Example:
	I1104 11:34:33.664805   56868 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1104 11:34:33.664813   56868 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1104 11:34:33.664828   56868 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1104 11:34:33.664834   56868 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1104 11:34:33.664838   56868 command_runner.go:130] > # cpuset = 0
	I1104 11:34:33.664843   56868 command_runner.go:130] > # cpushares = "0-1"
	I1104 11:34:33.664848   56868 command_runner.go:130] > # Where:
	I1104 11:34:33.664855   56868 command_runner.go:130] > # The workload name is workload-type.
	I1104 11:34:33.664866   56868 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1104 11:34:33.664875   56868 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1104 11:34:33.664883   56868 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1104 11:34:33.664895   56868 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1104 11:34:33.664905   56868 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1104 11:34:33.664912   56868 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1104 11:34:33.664920   56868 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1104 11:34:33.664924   56868 command_runner.go:130] > # Default value is set to true
	I1104 11:34:33.664930   56868 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1104 11:34:33.664939   56868 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1104 11:34:33.664948   56868 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1104 11:34:33.664956   56868 command_runner.go:130] > # Default value is set to 'false'
	I1104 11:34:33.664967   56868 command_runner.go:130] > # disable_hostport_mapping = false
	I1104 11:34:33.664981   56868 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1104 11:34:33.664988   56868 command_runner.go:130] > #
	I1104 11:34:33.664998   56868 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1104 11:34:33.665007   56868 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1104 11:34:33.665016   56868 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1104 11:34:33.665029   56868 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1104 11:34:33.665041   56868 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1104 11:34:33.665049   56868 command_runner.go:130] > [crio.image]
	I1104 11:34:33.665062   56868 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1104 11:34:33.665071   56868 command_runner.go:130] > # default_transport = "docker://"
	I1104 11:34:33.665084   56868 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1104 11:34:33.665093   56868 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1104 11:34:33.665102   56868 command_runner.go:130] > # global_auth_file = ""
	I1104 11:34:33.665113   56868 command_runner.go:130] > # The image used to instantiate infra containers.
	I1104 11:34:33.665123   56868 command_runner.go:130] > # This option supports live configuration reload.
	I1104 11:34:33.665134   56868 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I1104 11:34:33.665148   56868 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1104 11:34:33.665159   56868 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1104 11:34:33.665170   56868 command_runner.go:130] > # This option supports live configuration reload.
	I1104 11:34:33.665185   56868 command_runner.go:130] > # pause_image_auth_file = ""
	I1104 11:34:33.665197   56868 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1104 11:34:33.665209   56868 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1104 11:34:33.665222   56868 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1104 11:34:33.665247   56868 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1104 11:34:33.665257   56868 command_runner.go:130] > # pause_command = "/pause"
	I1104 11:34:33.665267   56868 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1104 11:34:33.665280   56868 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1104 11:34:33.665292   56868 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1104 11:34:33.665306   56868 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1104 11:34:33.665319   56868 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1104 11:34:33.665333   56868 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1104 11:34:33.665342   56868 command_runner.go:130] > # pinned_images = [
	I1104 11:34:33.665350   56868 command_runner.go:130] > # ]
	I1104 11:34:33.665362   56868 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1104 11:34:33.665372   56868 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1104 11:34:33.665382   56868 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1104 11:34:33.665395   56868 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1104 11:34:33.665407   56868 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1104 11:34:33.665417   56868 command_runner.go:130] > # signature_policy = ""
	I1104 11:34:33.665428   56868 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1104 11:34:33.665441   56868 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1104 11:34:33.665451   56868 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1104 11:34:33.665464   56868 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I1104 11:34:33.665475   56868 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1104 11:34:33.665484   56868 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1104 11:34:33.665496   56868 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1104 11:34:33.665528   56868 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1104 11:34:33.665540   56868 command_runner.go:130] > # changing them here.
	I1104 11:34:33.665546   56868 command_runner.go:130] > # insecure_registries = [
	I1104 11:34:33.665552   56868 command_runner.go:130] > # ]
	I1104 11:34:33.665564   56868 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1104 11:34:33.665574   56868 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1104 11:34:33.665581   56868 command_runner.go:130] > # image_volumes = "mkdir"
	I1104 11:34:33.665589   56868 command_runner.go:130] > # Temporary directory to use for storing big files
	I1104 11:34:33.665598   56868 command_runner.go:130] > # big_files_temporary_dir = ""
	I1104 11:34:33.665614   56868 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1104 11:34:33.665623   56868 command_runner.go:130] > # CNI plugins.
	I1104 11:34:33.665632   56868 command_runner.go:130] > [crio.network]
	I1104 11:34:33.665643   56868 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1104 11:34:33.665655   56868 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1104 11:34:33.665663   56868 command_runner.go:130] > # cni_default_network = ""
	I1104 11:34:33.665669   56868 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1104 11:34:33.665679   56868 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1104 11:34:33.665693   56868 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1104 11:34:33.665703   56868 command_runner.go:130] > # plugin_dirs = [
	I1104 11:34:33.665712   56868 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1104 11:34:33.665720   56868 command_runner.go:130] > # ]
	I1104 11:34:33.665733   56868 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1104 11:34:33.665741   56868 command_runner.go:130] > [crio.metrics]
	I1104 11:34:33.665752   56868 command_runner.go:130] > # Globally enable or disable metrics support.
	I1104 11:34:33.665759   56868 command_runner.go:130] > enable_metrics = true
	I1104 11:34:33.665764   56868 command_runner.go:130] > # Specify enabled metrics collectors.
	I1104 11:34:33.665773   56868 command_runner.go:130] > # Per default all metrics are enabled.
	I1104 11:34:33.665786   56868 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I1104 11:34:33.665799   56868 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1104 11:34:33.665812   56868 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1104 11:34:33.665821   56868 command_runner.go:130] > # metrics_collectors = [
	I1104 11:34:33.665829   56868 command_runner.go:130] > # 	"operations",
	I1104 11:34:33.665840   56868 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1104 11:34:33.665849   56868 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1104 11:34:33.665855   56868 command_runner.go:130] > # 	"operations_errors",
	I1104 11:34:33.665861   56868 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1104 11:34:33.665871   56868 command_runner.go:130] > # 	"image_pulls_by_name",
	I1104 11:34:33.665882   56868 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1104 11:34:33.665892   56868 command_runner.go:130] > # 	"image_pulls_failures",
	I1104 11:34:33.665901   56868 command_runner.go:130] > # 	"image_pulls_successes",
	I1104 11:34:33.665911   56868 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1104 11:34:33.665920   56868 command_runner.go:130] > # 	"image_layer_reuse",
	I1104 11:34:33.665930   56868 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1104 11:34:33.665940   56868 command_runner.go:130] > # 	"containers_oom_total",
	I1104 11:34:33.665948   56868 command_runner.go:130] > # 	"containers_oom",
	I1104 11:34:33.665952   56868 command_runner.go:130] > # 	"processes_defunct",
	I1104 11:34:33.665960   56868 command_runner.go:130] > # 	"operations_total",
	I1104 11:34:33.665970   56868 command_runner.go:130] > # 	"operations_latency_seconds",
	I1104 11:34:33.665981   56868 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1104 11:34:33.665988   56868 command_runner.go:130] > # 	"operations_errors_total",
	I1104 11:34:33.665999   56868 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1104 11:34:33.666010   56868 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1104 11:34:33.666020   56868 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1104 11:34:33.666029   56868 command_runner.go:130] > # 	"image_pulls_success_total",
	I1104 11:34:33.666041   56868 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1104 11:34:33.666049   56868 command_runner.go:130] > # 	"containers_oom_count_total",
	I1104 11:34:33.666054   56868 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1104 11:34:33.666063   56868 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1104 11:34:33.666072   56868 command_runner.go:130] > # ]
	I1104 11:34:33.666083   56868 command_runner.go:130] > # The port on which the metrics server will listen.
	I1104 11:34:33.666093   56868 command_runner.go:130] > # metrics_port = 9090
	I1104 11:34:33.666103   56868 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1104 11:34:33.666111   56868 command_runner.go:130] > # metrics_socket = ""
	I1104 11:34:33.666122   56868 command_runner.go:130] > # The certificate for the secure metrics server.
	I1104 11:34:33.666134   56868 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1104 11:34:33.666142   56868 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1104 11:34:33.666152   56868 command_runner.go:130] > # certificate on any modification event.
	I1104 11:34:33.666162   56868 command_runner.go:130] > # metrics_cert = ""
	I1104 11:34:33.666171   56868 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1104 11:34:33.666186   56868 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1104 11:34:33.666195   56868 command_runner.go:130] > # metrics_key = ""
	I1104 11:34:33.666208   56868 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1104 11:34:33.666216   56868 command_runner.go:130] > [crio.tracing]
	I1104 11:34:33.666227   56868 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1104 11:34:33.666234   56868 command_runner.go:130] > # enable_tracing = false
	I1104 11:34:33.666241   56868 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1104 11:34:33.666251   56868 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1104 11:34:33.666264   56868 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1104 11:34:33.666275   56868 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1104 11:34:33.666285   56868 command_runner.go:130] > # CRI-O NRI configuration.
	I1104 11:34:33.666293   56868 command_runner.go:130] > [crio.nri]
	I1104 11:34:33.666303   56868 command_runner.go:130] > # Globally enable or disable NRI.
	I1104 11:34:33.666312   56868 command_runner.go:130] > # enable_nri = false
	I1104 11:34:33.666323   56868 command_runner.go:130] > # NRI socket to listen on.
	I1104 11:34:33.666331   56868 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1104 11:34:33.666338   56868 command_runner.go:130] > # NRI plugin directory to use.
	I1104 11:34:33.666345   56868 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1104 11:34:33.666356   56868 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1104 11:34:33.666368   56868 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1104 11:34:33.666380   56868 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1104 11:34:33.666390   56868 command_runner.go:130] > # nri_disable_connections = false
	I1104 11:34:33.666402   56868 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1104 11:34:33.666412   56868 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1104 11:34:33.666421   56868 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1104 11:34:33.666430   56868 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1104 11:34:33.666441   56868 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1104 11:34:33.666450   56868 command_runner.go:130] > [crio.stats]
	I1104 11:34:33.666463   56868 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1104 11:34:33.666476   56868 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1104 11:34:33.666485   56868 command_runner.go:130] > # stats_collection_period = 0
	I1104 11:34:33.666560   56868 cni.go:84] Creating CNI manager for ""
	I1104 11:34:33.666572   56868 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1104 11:34:33.666586   56868 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1104 11:34:33.666617   56868 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.86 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-453447 NodeName:multinode-453447 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.86"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.86 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1104 11:34:33.666758   56868 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.86
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-453447"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.86"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.86"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1104 11:34:33.666828   56868 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1104 11:34:33.680200   56868 command_runner.go:130] > kubeadm
	I1104 11:34:33.680218   56868 command_runner.go:130] > kubectl
	I1104 11:34:33.680222   56868 command_runner.go:130] > kubelet
	I1104 11:34:33.680238   56868 binaries.go:44] Found k8s binaries, skipping transfer
	I1104 11:34:33.680284   56868 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1104 11:34:33.696236   56868 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I1104 11:34:33.712481   56868 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1104 11:34:33.732373   56868 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2293 bytes)
	I1104 11:34:33.755587   56868 ssh_runner.go:195] Run: grep 192.168.39.86	control-plane.minikube.internal$ /etc/hosts
	I1104 11:34:33.759571   56868 command_runner.go:130] > 192.168.39.86	control-plane.minikube.internal
	I1104 11:34:33.759698   56868 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1104 11:34:33.902732   56868 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1104 11:34:33.917635   56868 certs.go:68] Setting up /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/multinode-453447 for IP: 192.168.39.86
	I1104 11:34:33.917661   56868 certs.go:194] generating shared ca certs ...
	I1104 11:34:33.917677   56868 certs.go:226] acquiring lock for ca certs: {Name:mk4fb0469da6697654d0290fd25edddd463f47c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 11:34:33.917824   56868 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.key
	I1104 11:34:33.917861   56868 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.key
	I1104 11:34:33.917870   56868 certs.go:256] generating profile certs ...
	I1104 11:34:33.917946   56868 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/multinode-453447/client.key
	I1104 11:34:33.918000   56868 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/multinode-453447/apiserver.key.a4bcad16
	I1104 11:34:33.918035   56868 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/multinode-453447/proxy-client.key
	I1104 11:34:33.918049   56868 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1104 11:34:33.918064   56868 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1104 11:34:33.918078   56868 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1104 11:34:33.918091   56868 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1104 11:34:33.918102   56868 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/multinode-453447/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1104 11:34:33.918116   56868 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/multinode-453447/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1104 11:34:33.918129   56868 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/multinode-453447/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1104 11:34:33.918150   56868 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/multinode-453447/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1104 11:34:33.918214   56868 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218.pem (1338 bytes)
	W1104 11:34:33.918244   56868 certs.go:480] ignoring /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218_empty.pem, impossibly tiny 0 bytes
	I1104 11:34:33.918254   56868 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem (1675 bytes)
	I1104 11:34:33.918276   56868 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem (1078 bytes)
	I1104 11:34:33.918299   56868 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem (1123 bytes)
	I1104 11:34:33.918319   56868 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem (1679 bytes)
	I1104 11:34:33.918362   56868 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem (1708 bytes)
	I1104 11:34:33.918390   56868 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1104 11:34:33.918404   56868 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218.pem -> /usr/share/ca-certificates/27218.pem
	I1104 11:34:33.918416   56868 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem -> /usr/share/ca-certificates/272182.pem
	I1104 11:34:33.918995   56868 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1104 11:34:33.942790   56868 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1104 11:34:33.965424   56868 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1104 11:34:33.987656   56868 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1104 11:34:34.009514   56868 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/multinode-453447/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1104 11:34:34.031621   56868 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/multinode-453447/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1104 11:34:34.053136   56868 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/multinode-453447/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1104 11:34:34.075155   56868 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/multinode-453447/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1104 11:34:34.097203   56868 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1104 11:34:34.119468   56868 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218.pem --> /usr/share/ca-certificates/27218.pem (1338 bytes)
	I1104 11:34:34.141351   56868 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem --> /usr/share/ca-certificates/272182.pem (1708 bytes)
	I1104 11:34:34.163697   56868 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1104 11:34:34.178861   56868 ssh_runner.go:195] Run: openssl version
	I1104 11:34:34.184143   56868 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I1104 11:34:34.184216   56868 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/27218.pem && ln -fs /usr/share/ca-certificates/27218.pem /etc/ssl/certs/27218.pem"
	I1104 11:34:34.194005   56868 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/27218.pem
	I1104 11:34:34.197991   56868 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Nov  4 10:49 /usr/share/ca-certificates/27218.pem
	I1104 11:34:34.198017   56868 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  4 10:49 /usr/share/ca-certificates/27218.pem
	I1104 11:34:34.198065   56868 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/27218.pem
	I1104 11:34:34.203174   56868 command_runner.go:130] > 51391683
	I1104 11:34:34.203349   56868 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/27218.pem /etc/ssl/certs/51391683.0"
	I1104 11:34:34.212228   56868 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/272182.pem && ln -fs /usr/share/ca-certificates/272182.pem /etc/ssl/certs/272182.pem"
	I1104 11:34:34.222142   56868 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/272182.pem
	I1104 11:34:34.226314   56868 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Nov  4 10:49 /usr/share/ca-certificates/272182.pem
	I1104 11:34:34.226397   56868 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  4 10:49 /usr/share/ca-certificates/272182.pem
	I1104 11:34:34.226450   56868 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/272182.pem
	I1104 11:34:34.231493   56868 command_runner.go:130] > 3ec20f2e
	I1104 11:34:34.231675   56868 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/272182.pem /etc/ssl/certs/3ec20f2e.0"
	I1104 11:34:34.240214   56868 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1104 11:34:34.249829   56868 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1104 11:34:34.253918   56868 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Nov  4 10:38 /usr/share/ca-certificates/minikubeCA.pem
	I1104 11:34:34.253966   56868 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  4 10:38 /usr/share/ca-certificates/minikubeCA.pem
	I1104 11:34:34.254014   56868 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1104 11:34:34.259194   56868 command_runner.go:130] > b5213941
	I1104 11:34:34.259256   56868 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1104 11:34:34.268333   56868 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1104 11:34:34.272466   56868 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1104 11:34:34.272484   56868 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1104 11:34:34.272490   56868 command_runner.go:130] > Device: 253,1	Inode: 2103342     Links: 1
	I1104 11:34:34.272496   56868 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1104 11:34:34.272501   56868 command_runner.go:130] > Access: 2024-11-04 11:27:55.725342799 +0000
	I1104 11:34:34.272507   56868 command_runner.go:130] > Modify: 2024-11-04 11:27:55.725342799 +0000
	I1104 11:34:34.272511   56868 command_runner.go:130] > Change: 2024-11-04 11:27:55.725342799 +0000
	I1104 11:34:34.272518   56868 command_runner.go:130] >  Birth: 2024-11-04 11:27:55.725342799 +0000
	I1104 11:34:34.272594   56868 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1104 11:34:34.277815   56868 command_runner.go:130] > Certificate will not expire
	I1104 11:34:34.277879   56868 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1104 11:34:34.283397   56868 command_runner.go:130] > Certificate will not expire
	I1104 11:34:34.283459   56868 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1104 11:34:34.288654   56868 command_runner.go:130] > Certificate will not expire
	I1104 11:34:34.288724   56868 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1104 11:34:34.294062   56868 command_runner.go:130] > Certificate will not expire
	I1104 11:34:34.294129   56868 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1104 11:34:34.299393   56868 command_runner.go:130] > Certificate will not expire
	I1104 11:34:34.299448   56868 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1104 11:34:34.304602   56868 command_runner.go:130] > Certificate will not expire
	I1104 11:34:34.304687   56868 kubeadm.go:392] StartCluster: {Name:multinode-453447 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-453447 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.86 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.80 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.117 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1104 11:34:34.304840   56868 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1104 11:34:34.304889   56868 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1104 11:34:34.344099   56868 command_runner.go:130] > 8a71222d5e581bb2b8728a7bcc9b4092a22c6c7ddc2c7d5023ade850762313f2
	I1104 11:34:34.344126   56868 command_runner.go:130] > 13caef919da51a59ffeaf36c6198834dad2f51b54e8563121eb2bc2af62c9cba
	I1104 11:34:34.344132   56868 command_runner.go:130] > de832022d6a38f88d7fd4047e8e958bf8e29b8fd978f142700057256faff3dec
	I1104 11:34:34.344163   56868 command_runner.go:130] > 99187137d9b7622002f0c0edfa61cc3a605bd192056d9dd41b76baa15a798bc8
	I1104 11:34:34.344171   56868 command_runner.go:130] > 35ccc2e48ce6474be6e4ee62791f070236db849079eceb9d822817335ef62ca2
	I1104 11:34:34.344177   56868 command_runner.go:130] > 055d0d197ecfb4073e33727aa7d16bd21fa1bdb545dbc98889bdd63ac57785d6
	I1104 11:34:34.344185   56868 command_runner.go:130] > 6dc6ffa76cf341c78007aee47131c05761173bd60c8a2c834d2760ec4acf6c97
	I1104 11:34:34.344203   56868 command_runner.go:130] > 65c4627bd34af9f0ea03ad0892507644b87124b3e06845b239cfaa268faf1d21
	I1104 11:34:34.344231   56868 cri.go:89] found id: "8a71222d5e581bb2b8728a7bcc9b4092a22c6c7ddc2c7d5023ade850762313f2"
	I1104 11:34:34.344239   56868 cri.go:89] found id: "13caef919da51a59ffeaf36c6198834dad2f51b54e8563121eb2bc2af62c9cba"
	I1104 11:34:34.344243   56868 cri.go:89] found id: "de832022d6a38f88d7fd4047e8e958bf8e29b8fd978f142700057256faff3dec"
	I1104 11:34:34.344247   56868 cri.go:89] found id: "99187137d9b7622002f0c0edfa61cc3a605bd192056d9dd41b76baa15a798bc8"
	I1104 11:34:34.344250   56868 cri.go:89] found id: "35ccc2e48ce6474be6e4ee62791f070236db849079eceb9d822817335ef62ca2"
	I1104 11:34:34.344256   56868 cri.go:89] found id: "055d0d197ecfb4073e33727aa7d16bd21fa1bdb545dbc98889bdd63ac57785d6"
	I1104 11:34:34.344259   56868 cri.go:89] found id: "6dc6ffa76cf341c78007aee47131c05761173bd60c8a2c834d2760ec4acf6c97"
	I1104 11:34:34.344264   56868 cri.go:89] found id: "65c4627bd34af9f0ea03ad0892507644b87124b3e06845b239cfaa268faf1d21"
	I1104 11:34:34.344266   56868 cri.go:89] found id: ""
	I1104 11:34:34.344304   56868 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-453447 -n multinode-453447
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-453447 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopMultiNode (144.92s)
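Note on the certificate probes seen earlier in this log: each "openssl x509 -noout -in <cert> -checkend 86400" run only asserts that the certificate remains valid for at least the next 86400 seconds (24 hours), which is why every probe answers "Certificate will not expire". The Go sketch below illustrates what that -checkend probe verifies; it is an illustration only, not minikube's implementation (minikube shells out to openssl, as logged above).

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// checkend reports whether the PEM certificate at path stays valid for at
	// least the given duration (the openssl "-checkend <seconds>" equivalent).
	func checkend(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM data in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		// Valid if "now + d" is still before the certificate's NotAfter time.
		return time.Now().Add(d).Before(cert.NotAfter), nil
	}

	func main() {
		// Path taken from the log above; any control-plane cert works here.
		ok, err := checkend("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		if ok {
			fmt.Println("Certificate will not expire")
		} else {
			fmt.Println("Certificate will expire")
		}
	}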

                                                
                                    
x
+
TestPreload (168.8s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-666574 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-666574 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (1m37.41317244s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-666574 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-666574 image pull gcr.io/k8s-minikube/busybox: (2.214229596s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-666574
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-666574: (6.564194293s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-666574 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
E1104 11:44:47.409046   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/addons-746456/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-666574 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (59.833704108s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-666574 image list
preload_test.go:76: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.7
	registry.k8s.io/kube-scheduler:v1.24.4
	registry.k8s.io/kube-proxy:v1.24.4
	registry.k8s.io/kube-controller-manager:v1.24.4
	registry.k8s.io/kube-apiserver:v1.24.4
	registry.k8s.io/etcd:3.5.3-0
	registry.k8s.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/kube-scheduler:v1.24.4
	k8s.gcr.io/kube-proxy:v1.24.4
	k8s.gcr.io/kube-controller-manager:v1.24.4
	k8s.gcr.io/kube-apiserver:v1.24.4
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20220726-ed811e41

                                                
                                                
-- /stdout --
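The failure above is the test's post-condition: after the stop/start cycle, the image list no longer contains gcr.io/k8s-minikube/busybox, i.e. the image pulled before the stop was not retained. A hedged sketch of that kind of scan follows; the hasImage helper and the hard-coded names are hypothetical, for illustration only, and are not the actual preload_test.go code.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// hasImage reports whether "minikube image list" for the given profile
	// prints a reference containing the wanted image name.
	// Hypothetical helper; the real test asserts this inside preload_test.go.
	func hasImage(profile, want string) (bool, error) {
		out, err := exec.Command("out/minikube-linux-amd64", "-p", profile, "image", "list").CombinedOutput()
		if err != nil {
			return false, err
		}
		for _, line := range strings.Split(string(out), "\n") {
			if strings.Contains(line, want) {
				return true, nil
			}
		}
		return false, nil
	}

	func main() {
		found, err := hasImage("test-preload-666574", "gcr.io/k8s-minikube/busybox")
		if err != nil {
			fmt.Println("image list failed:", err)
			return
		}
		fmt.Println("busybox retained across restart:", found)
	}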
panic.go:629: *** TestPreload FAILED at 2024-11-04 11:45:33.76132191 +0000 UTC m=+4125.116380714
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-666574 -n test-preload-666574
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-666574 logs -n 25
helpers_test.go:252: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-453447 ssh -n                                                                 | multinode-453447     | jenkins | v1.34.0 | 04 Nov 24 11:30 UTC | 04 Nov 24 11:30 UTC |
	|         | multinode-453447-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-453447 ssh -n multinode-453447 sudo cat                                       | multinode-453447     | jenkins | v1.34.0 | 04 Nov 24 11:30 UTC | 04 Nov 24 11:30 UTC |
	|         | /home/docker/cp-test_multinode-453447-m03_multinode-453447.txt                          |                      |         |         |                     |                     |
	| cp      | multinode-453447 cp multinode-453447-m03:/home/docker/cp-test.txt                       | multinode-453447     | jenkins | v1.34.0 | 04 Nov 24 11:30 UTC | 04 Nov 24 11:30 UTC |
	|         | multinode-453447-m02:/home/docker/cp-test_multinode-453447-m03_multinode-453447-m02.txt |                      |         |         |                     |                     |
	| ssh     | multinode-453447 ssh -n                                                                 | multinode-453447     | jenkins | v1.34.0 | 04 Nov 24 11:30 UTC | 04 Nov 24 11:30 UTC |
	|         | multinode-453447-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-453447 ssh -n multinode-453447-m02 sudo cat                                   | multinode-453447     | jenkins | v1.34.0 | 04 Nov 24 11:30 UTC | 04 Nov 24 11:30 UTC |
	|         | /home/docker/cp-test_multinode-453447-m03_multinode-453447-m02.txt                      |                      |         |         |                     |                     |
	| node    | multinode-453447 node stop m03                                                          | multinode-453447     | jenkins | v1.34.0 | 04 Nov 24 11:30 UTC | 04 Nov 24 11:30 UTC |
	| node    | multinode-453447 node start                                                             | multinode-453447     | jenkins | v1.34.0 | 04 Nov 24 11:30 UTC | 04 Nov 24 11:30 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                      |         |         |                     |                     |
	| node    | list -p multinode-453447                                                                | multinode-453447     | jenkins | v1.34.0 | 04 Nov 24 11:30 UTC |                     |
	| stop    | -p multinode-453447                                                                     | multinode-453447     | jenkins | v1.34.0 | 04 Nov 24 11:30 UTC |                     |
	| start   | -p multinode-453447                                                                     | multinode-453447     | jenkins | v1.34.0 | 04 Nov 24 11:33 UTC | 04 Nov 24 11:36 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	| node    | list -p multinode-453447                                                                | multinode-453447     | jenkins | v1.34.0 | 04 Nov 24 11:36 UTC |                     |
	| node    | multinode-453447 node delete                                                            | multinode-453447     | jenkins | v1.34.0 | 04 Nov 24 11:36 UTC | 04 Nov 24 11:36 UTC |
	|         | m03                                                                                     |                      |         |         |                     |                     |
	| stop    | multinode-453447 stop                                                                   | multinode-453447     | jenkins | v1.34.0 | 04 Nov 24 11:36 UTC |                     |
	| start   | -p multinode-453447                                                                     | multinode-453447     | jenkins | v1.34.0 | 04 Nov 24 11:38 UTC | 04 Nov 24 11:42 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | list -p multinode-453447                                                                | multinode-453447     | jenkins | v1.34.0 | 04 Nov 24 11:42 UTC |                     |
	| start   | -p multinode-453447-m02                                                                 | multinode-453447-m02 | jenkins | v1.34.0 | 04 Nov 24 11:42 UTC |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| start   | -p multinode-453447-m03                                                                 | multinode-453447-m03 | jenkins | v1.34.0 | 04 Nov 24 11:42 UTC | 04 Nov 24 11:42 UTC |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | add -p multinode-453447                                                                 | multinode-453447     | jenkins | v1.34.0 | 04 Nov 24 11:42 UTC |                     |
	| delete  | -p multinode-453447-m03                                                                 | multinode-453447-m03 | jenkins | v1.34.0 | 04 Nov 24 11:42 UTC | 04 Nov 24 11:42 UTC |
	| delete  | -p multinode-453447                                                                     | multinode-453447     | jenkins | v1.34.0 | 04 Nov 24 11:42 UTC | 04 Nov 24 11:42 UTC |
	| start   | -p test-preload-666574                                                                  | test-preload-666574  | jenkins | v1.34.0 | 04 Nov 24 11:42 UTC | 04 Nov 24 11:44 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                                                           |                      |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                                                           |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                               |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4                                                            |                      |         |         |                     |                     |
	| image   | test-preload-666574 image pull                                                          | test-preload-666574  | jenkins | v1.34.0 | 04 Nov 24 11:44 UTC | 04 Nov 24 11:44 UTC |
	|         | gcr.io/k8s-minikube/busybox                                                             |                      |         |         |                     |                     |
	| stop    | -p test-preload-666574                                                                  | test-preload-666574  | jenkins | v1.34.0 | 04 Nov 24 11:44 UTC | 04 Nov 24 11:44 UTC |
	| start   | -p test-preload-666574                                                                  | test-preload-666574  | jenkins | v1.34.0 | 04 Nov 24 11:44 UTC | 04 Nov 24 11:45 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                  |                      |         |         |                     |                     |
	|         | --wait=true --driver=kvm2                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| image   | test-preload-666574 image list                                                          | test-preload-666574  | jenkins | v1.34.0 | 04 Nov 24 11:45 UTC | 04 Nov 24 11:45 UTC |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/11/04 11:44:33
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1104 11:44:33.727932   61220 out.go:345] Setting OutFile to fd 1 ...
	I1104 11:44:33.728188   61220 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1104 11:44:33.728197   61220 out.go:358] Setting ErrFile to fd 2...
	I1104 11:44:33.728202   61220 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1104 11:44:33.728437   61220 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19906-19898/.minikube/bin
	I1104 11:44:33.728972   61220 out.go:352] Setting JSON to false
	I1104 11:44:33.729893   61220 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":8825,"bootTime":1730711849,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1104 11:44:33.729984   61220 start.go:139] virtualization: kvm guest
	I1104 11:44:33.732244   61220 out.go:177] * [test-preload-666574] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1104 11:44:33.733515   61220 notify.go:220] Checking for updates...
	I1104 11:44:33.733523   61220 out.go:177]   - MINIKUBE_LOCATION=19906
	I1104 11:44:33.734907   61220 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1104 11:44:33.736218   61220 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19906-19898/kubeconfig
	I1104 11:44:33.737401   61220 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19906-19898/.minikube
	I1104 11:44:33.738617   61220 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1104 11:44:33.739891   61220 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1104 11:44:33.741466   61220 config.go:182] Loaded profile config "test-preload-666574": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I1104 11:44:33.741835   61220 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 11:44:33.741875   61220 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 11:44:33.756683   61220 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39955
	I1104 11:44:33.757220   61220 main.go:141] libmachine: () Calling .GetVersion
	I1104 11:44:33.757830   61220 main.go:141] libmachine: Using API Version  1
	I1104 11:44:33.757847   61220 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 11:44:33.758191   61220 main.go:141] libmachine: () Calling .GetMachineName
	I1104 11:44:33.758368   61220 main.go:141] libmachine: (test-preload-666574) Calling .DriverName
	I1104 11:44:33.760188   61220 out.go:177] * Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	I1104 11:44:33.761535   61220 driver.go:394] Setting default libvirt URI to qemu:///system
	I1104 11:44:33.761840   61220 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 11:44:33.761896   61220 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 11:44:33.776706   61220 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35273
	I1104 11:44:33.777255   61220 main.go:141] libmachine: () Calling .GetVersion
	I1104 11:44:33.777778   61220 main.go:141] libmachine: Using API Version  1
	I1104 11:44:33.777802   61220 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 11:44:33.778177   61220 main.go:141] libmachine: () Calling .GetMachineName
	I1104 11:44:33.778457   61220 main.go:141] libmachine: (test-preload-666574) Calling .DriverName
	I1104 11:44:33.814525   61220 out.go:177] * Using the kvm2 driver based on existing profile
	I1104 11:44:33.815834   61220 start.go:297] selected driver: kvm2
	I1104 11:44:33.815847   61220 start.go:901] validating driver "kvm2" against &{Name:test-preload-666574 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-666574 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.248 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1104 11:44:33.815941   61220 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1104 11:44:33.816703   61220 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1104 11:44:33.816796   61220 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19906-19898/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1104 11:44:33.831998   61220 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1104 11:44:33.832373   61220 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1104 11:44:33.832399   61220 cni.go:84] Creating CNI manager for ""
	I1104 11:44:33.832426   61220 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1104 11:44:33.832479   61220 start.go:340] cluster config:
	{Name:test-preload-666574 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-666574 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.248 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1104 11:44:33.832566   61220 iso.go:125] acquiring lock: {Name:mk00c7dd6e02d348844f079f7574057e15cae010 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1104 11:44:33.835136   61220 out.go:177] * Starting "test-preload-666574" primary control-plane node in "test-preload-666574" cluster
	I1104 11:44:33.836290   61220 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I1104 11:44:33.866069   61220 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I1104 11:44:33.866099   61220 cache.go:56] Caching tarball of preloaded images
	I1104 11:44:33.866292   61220 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I1104 11:44:33.867934   61220 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
	I1104 11:44:33.869183   61220 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I1104 11:44:33.900462   61220 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b2ee0ab83ed99f9e7ff71cb0cf27e8f9 -> /home/jenkins/minikube-integration/19906-19898/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I1104 11:44:39.302842   61220 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I1104 11:44:39.302944   61220 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19906-19898/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I1104 11:44:40.138458   61220 cache.go:59] Finished verifying existence of preloaded tar for v1.24.4 on crio
	I1104 11:44:40.138574   61220 profile.go:143] Saving config to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/test-preload-666574/config.json ...
	I1104 11:44:40.138797   61220 start.go:360] acquireMachinesLock for test-preload-666574: {Name:mka4ed965095cfb58c9b0f5916edff39b4236a2f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1104 11:44:40.138854   61220 start.go:364] duration metric: took 36.827µs to acquireMachinesLock for "test-preload-666574"
	I1104 11:44:40.138866   61220 start.go:96] Skipping create...Using existing machine configuration
	I1104 11:44:40.138871   61220 fix.go:54] fixHost starting: 
	I1104 11:44:40.139177   61220 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 11:44:40.139210   61220 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 11:44:40.153755   61220 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38743
	I1104 11:44:40.154137   61220 main.go:141] libmachine: () Calling .GetVersion
	I1104 11:44:40.154613   61220 main.go:141] libmachine: Using API Version  1
	I1104 11:44:40.154632   61220 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 11:44:40.154952   61220 main.go:141] libmachine: () Calling .GetMachineName
	I1104 11:44:40.155119   61220 main.go:141] libmachine: (test-preload-666574) Calling .DriverName
	I1104 11:44:40.155268   61220 main.go:141] libmachine: (test-preload-666574) Calling .GetState
	I1104 11:44:40.157142   61220 fix.go:112] recreateIfNeeded on test-preload-666574: state=Stopped err=<nil>
	I1104 11:44:40.157167   61220 main.go:141] libmachine: (test-preload-666574) Calling .DriverName
	W1104 11:44:40.157342   61220 fix.go:138] unexpected machine state, will restart: <nil>
	I1104 11:44:40.159303   61220 out.go:177] * Restarting existing kvm2 VM for "test-preload-666574" ...
	I1104 11:44:40.160478   61220 main.go:141] libmachine: (test-preload-666574) Calling .Start
	I1104 11:44:40.160656   61220 main.go:141] libmachine: (test-preload-666574) Ensuring networks are active...
	I1104 11:44:40.161428   61220 main.go:141] libmachine: (test-preload-666574) Ensuring network default is active
	I1104 11:44:40.161698   61220 main.go:141] libmachine: (test-preload-666574) Ensuring network mk-test-preload-666574 is active
	I1104 11:44:40.161976   61220 main.go:141] libmachine: (test-preload-666574) Getting domain xml...
	I1104 11:44:40.162675   61220 main.go:141] libmachine: (test-preload-666574) Creating domain...
	I1104 11:44:41.360054   61220 main.go:141] libmachine: (test-preload-666574) Waiting to get IP...
	I1104 11:44:41.360913   61220 main.go:141] libmachine: (test-preload-666574) DBG | domain test-preload-666574 has defined MAC address 52:54:00:71:95:f8 in network mk-test-preload-666574
	I1104 11:44:41.361288   61220 main.go:141] libmachine: (test-preload-666574) DBG | unable to find current IP address of domain test-preload-666574 in network mk-test-preload-666574
	I1104 11:44:41.361388   61220 main.go:141] libmachine: (test-preload-666574) DBG | I1104 11:44:41.361285   61271 retry.go:31] will retry after 258.717253ms: waiting for machine to come up
	I1104 11:44:41.621886   61220 main.go:141] libmachine: (test-preload-666574) DBG | domain test-preload-666574 has defined MAC address 52:54:00:71:95:f8 in network mk-test-preload-666574
	I1104 11:44:41.622395   61220 main.go:141] libmachine: (test-preload-666574) DBG | unable to find current IP address of domain test-preload-666574 in network mk-test-preload-666574
	I1104 11:44:41.622428   61220 main.go:141] libmachine: (test-preload-666574) DBG | I1104 11:44:41.622335   61271 retry.go:31] will retry after 348.374291ms: waiting for machine to come up
	I1104 11:44:41.971845   61220 main.go:141] libmachine: (test-preload-666574) DBG | domain test-preload-666574 has defined MAC address 52:54:00:71:95:f8 in network mk-test-preload-666574
	I1104 11:44:41.972199   61220 main.go:141] libmachine: (test-preload-666574) DBG | unable to find current IP address of domain test-preload-666574 in network mk-test-preload-666574
	I1104 11:44:41.972215   61220 main.go:141] libmachine: (test-preload-666574) DBG | I1104 11:44:41.972168   61271 retry.go:31] will retry after 440.899428ms: waiting for machine to come up
	I1104 11:44:42.414905   61220 main.go:141] libmachine: (test-preload-666574) DBG | domain test-preload-666574 has defined MAC address 52:54:00:71:95:f8 in network mk-test-preload-666574
	I1104 11:44:42.415310   61220 main.go:141] libmachine: (test-preload-666574) DBG | unable to find current IP address of domain test-preload-666574 in network mk-test-preload-666574
	I1104 11:44:42.415336   61220 main.go:141] libmachine: (test-preload-666574) DBG | I1104 11:44:42.415271   61271 retry.go:31] will retry after 566.772995ms: waiting for machine to come up
	I1104 11:44:42.984200   61220 main.go:141] libmachine: (test-preload-666574) DBG | domain test-preload-666574 has defined MAC address 52:54:00:71:95:f8 in network mk-test-preload-666574
	I1104 11:44:42.984647   61220 main.go:141] libmachine: (test-preload-666574) DBG | unable to find current IP address of domain test-preload-666574 in network mk-test-preload-666574
	I1104 11:44:42.984675   61220 main.go:141] libmachine: (test-preload-666574) DBG | I1104 11:44:42.984603   61271 retry.go:31] will retry after 669.493577ms: waiting for machine to come up
	I1104 11:44:43.655391   61220 main.go:141] libmachine: (test-preload-666574) DBG | domain test-preload-666574 has defined MAC address 52:54:00:71:95:f8 in network mk-test-preload-666574
	I1104 11:44:43.655774   61220 main.go:141] libmachine: (test-preload-666574) DBG | unable to find current IP address of domain test-preload-666574 in network mk-test-preload-666574
	I1104 11:44:43.655791   61220 main.go:141] libmachine: (test-preload-666574) DBG | I1104 11:44:43.655734   61271 retry.go:31] will retry after 790.342669ms: waiting for machine to come up
	I1104 11:44:44.447178   61220 main.go:141] libmachine: (test-preload-666574) DBG | domain test-preload-666574 has defined MAC address 52:54:00:71:95:f8 in network mk-test-preload-666574
	I1104 11:44:44.447563   61220 main.go:141] libmachine: (test-preload-666574) DBG | unable to find current IP address of domain test-preload-666574 in network mk-test-preload-666574
	I1104 11:44:44.447589   61220 main.go:141] libmachine: (test-preload-666574) DBG | I1104 11:44:44.447516   61271 retry.go:31] will retry after 1.021411765s: waiting for machine to come up
	I1104 11:44:45.471054   61220 main.go:141] libmachine: (test-preload-666574) DBG | domain test-preload-666574 has defined MAC address 52:54:00:71:95:f8 in network mk-test-preload-666574
	I1104 11:44:45.471441   61220 main.go:141] libmachine: (test-preload-666574) DBG | unable to find current IP address of domain test-preload-666574 in network mk-test-preload-666574
	I1104 11:44:45.471481   61220 main.go:141] libmachine: (test-preload-666574) DBG | I1104 11:44:45.471400   61271 retry.go:31] will retry after 1.226618859s: waiting for machine to come up
	I1104 11:44:46.699713   61220 main.go:141] libmachine: (test-preload-666574) DBG | domain test-preload-666574 has defined MAC address 52:54:00:71:95:f8 in network mk-test-preload-666574
	I1104 11:44:46.700105   61220 main.go:141] libmachine: (test-preload-666574) DBG | unable to find current IP address of domain test-preload-666574 in network mk-test-preload-666574
	I1104 11:44:46.700133   61220 main.go:141] libmachine: (test-preload-666574) DBG | I1104 11:44:46.700055   61271 retry.go:31] will retry after 1.355096786s: waiting for machine to come up
	I1104 11:44:48.057681   61220 main.go:141] libmachine: (test-preload-666574) DBG | domain test-preload-666574 has defined MAC address 52:54:00:71:95:f8 in network mk-test-preload-666574
	I1104 11:44:48.058097   61220 main.go:141] libmachine: (test-preload-666574) DBG | unable to find current IP address of domain test-preload-666574 in network mk-test-preload-666574
	I1104 11:44:48.058126   61220 main.go:141] libmachine: (test-preload-666574) DBG | I1104 11:44:48.058053   61271 retry.go:31] will retry after 2.056015518s: waiting for machine to come up
	I1104 11:44:50.115398   61220 main.go:141] libmachine: (test-preload-666574) DBG | domain test-preload-666574 has defined MAC address 52:54:00:71:95:f8 in network mk-test-preload-666574
	I1104 11:44:50.115873   61220 main.go:141] libmachine: (test-preload-666574) DBG | unable to find current IP address of domain test-preload-666574 in network mk-test-preload-666574
	I1104 11:44:50.115891   61220 main.go:141] libmachine: (test-preload-666574) DBG | I1104 11:44:50.115827   61271 retry.go:31] will retry after 2.86286691s: waiting for machine to come up
	I1104 11:44:52.982270   61220 main.go:141] libmachine: (test-preload-666574) DBG | domain test-preload-666574 has defined MAC address 52:54:00:71:95:f8 in network mk-test-preload-666574
	I1104 11:44:52.982583   61220 main.go:141] libmachine: (test-preload-666574) DBG | unable to find current IP address of domain test-preload-666574 in network mk-test-preload-666574
	I1104 11:44:52.982604   61220 main.go:141] libmachine: (test-preload-666574) DBG | I1104 11:44:52.982548   61271 retry.go:31] will retry after 2.383270359s: waiting for machine to come up
	I1104 11:44:55.367679   61220 main.go:141] libmachine: (test-preload-666574) DBG | domain test-preload-666574 has defined MAC address 52:54:00:71:95:f8 in network mk-test-preload-666574
	I1104 11:44:55.368057   61220 main.go:141] libmachine: (test-preload-666574) DBG | unable to find current IP address of domain test-preload-666574 in network mk-test-preload-666574
	I1104 11:44:55.368085   61220 main.go:141] libmachine: (test-preload-666574) DBG | I1104 11:44:55.368023   61271 retry.go:31] will retry after 3.45378401s: waiting for machine to come up
	I1104 11:44:58.825503   61220 main.go:141] libmachine: (test-preload-666574) DBG | domain test-preload-666574 has defined MAC address 52:54:00:71:95:f8 in network mk-test-preload-666574
	I1104 11:44:58.825967   61220 main.go:141] libmachine: (test-preload-666574) DBG | domain test-preload-666574 has current primary IP address 192.168.39.248 and MAC address 52:54:00:71:95:f8 in network mk-test-preload-666574
	I1104 11:44:58.825984   61220 main.go:141] libmachine: (test-preload-666574) Found IP for machine: 192.168.39.248
	I1104 11:44:58.825992   61220 main.go:141] libmachine: (test-preload-666574) Reserving static IP address...
	I1104 11:44:58.826354   61220 main.go:141] libmachine: (test-preload-666574) DBG | found host DHCP lease matching {name: "test-preload-666574", mac: "52:54:00:71:95:f8", ip: "192.168.39.248"} in network mk-test-preload-666574: {Iface:virbr1 ExpiryTime:2024-11-04 12:44:50 +0000 UTC Type:0 Mac:52:54:00:71:95:f8 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:test-preload-666574 Clientid:01:52:54:00:71:95:f8}
	I1104 11:44:58.826380   61220 main.go:141] libmachine: (test-preload-666574) Reserved static IP address: 192.168.39.248
	I1104 11:44:58.826396   61220 main.go:141] libmachine: (test-preload-666574) DBG | skip adding static IP to network mk-test-preload-666574 - found existing host DHCP lease matching {name: "test-preload-666574", mac: "52:54:00:71:95:f8", ip: "192.168.39.248"}
	I1104 11:44:58.826403   61220 main.go:141] libmachine: (test-preload-666574) Waiting for SSH to be available...
	I1104 11:44:58.826418   61220 main.go:141] libmachine: (test-preload-666574) DBG | Getting to WaitForSSH function...
	I1104 11:44:58.828680   61220 main.go:141] libmachine: (test-preload-666574) DBG | domain test-preload-666574 has defined MAC address 52:54:00:71:95:f8 in network mk-test-preload-666574
	I1104 11:44:58.829035   61220 main.go:141] libmachine: (test-preload-666574) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:95:f8", ip: ""} in network mk-test-preload-666574: {Iface:virbr1 ExpiryTime:2024-11-04 12:44:50 +0000 UTC Type:0 Mac:52:54:00:71:95:f8 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:test-preload-666574 Clientid:01:52:54:00:71:95:f8}
	I1104 11:44:58.829052   61220 main.go:141] libmachine: (test-preload-666574) DBG | domain test-preload-666574 has defined IP address 192.168.39.248 and MAC address 52:54:00:71:95:f8 in network mk-test-preload-666574
	I1104 11:44:58.829221   61220 main.go:141] libmachine: (test-preload-666574) DBG | Using SSH client type: external
	I1104 11:44:58.829258   61220 main.go:141] libmachine: (test-preload-666574) DBG | Using SSH private key: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/test-preload-666574/id_rsa (-rw-------)
	I1104 11:44:58.829288   61220 main.go:141] libmachine: (test-preload-666574) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.248 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19906-19898/.minikube/machines/test-preload-666574/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1104 11:44:58.829297   61220 main.go:141] libmachine: (test-preload-666574) DBG | About to run SSH command:
	I1104 11:44:58.829305   61220 main.go:141] libmachine: (test-preload-666574) DBG | exit 0
	I1104 11:44:58.952966   61220 main.go:141] libmachine: (test-preload-666574) DBG | SSH cmd err, output: <nil>: 
	I1104 11:44:58.953313   61220 main.go:141] libmachine: (test-preload-666574) Calling .GetConfigRaw
	I1104 11:44:58.953906   61220 main.go:141] libmachine: (test-preload-666574) Calling .GetIP
	I1104 11:44:58.956838   61220 main.go:141] libmachine: (test-preload-666574) DBG | domain test-preload-666574 has defined MAC address 52:54:00:71:95:f8 in network mk-test-preload-666574
	I1104 11:44:58.957199   61220 main.go:141] libmachine: (test-preload-666574) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:95:f8", ip: ""} in network mk-test-preload-666574: {Iface:virbr1 ExpiryTime:2024-11-04 12:44:50 +0000 UTC Type:0 Mac:52:54:00:71:95:f8 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:test-preload-666574 Clientid:01:52:54:00:71:95:f8}
	I1104 11:44:58.957237   61220 main.go:141] libmachine: (test-preload-666574) DBG | domain test-preload-666574 has defined IP address 192.168.39.248 and MAC address 52:54:00:71:95:f8 in network mk-test-preload-666574
	I1104 11:44:58.957496   61220 profile.go:143] Saving config to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/test-preload-666574/config.json ...
	I1104 11:44:58.957675   61220 machine.go:93] provisionDockerMachine start ...
	I1104 11:44:58.957690   61220 main.go:141] libmachine: (test-preload-666574) Calling .DriverName
	I1104 11:44:58.957893   61220 main.go:141] libmachine: (test-preload-666574) Calling .GetSSHHostname
	I1104 11:44:58.960230   61220 main.go:141] libmachine: (test-preload-666574) DBG | domain test-preload-666574 has defined MAC address 52:54:00:71:95:f8 in network mk-test-preload-666574
	I1104 11:44:58.960534   61220 main.go:141] libmachine: (test-preload-666574) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:95:f8", ip: ""} in network mk-test-preload-666574: {Iface:virbr1 ExpiryTime:2024-11-04 12:44:50 +0000 UTC Type:0 Mac:52:54:00:71:95:f8 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:test-preload-666574 Clientid:01:52:54:00:71:95:f8}
	I1104 11:44:58.960562   61220 main.go:141] libmachine: (test-preload-666574) DBG | domain test-preload-666574 has defined IP address 192.168.39.248 and MAC address 52:54:00:71:95:f8 in network mk-test-preload-666574
	I1104 11:44:58.960700   61220 main.go:141] libmachine: (test-preload-666574) Calling .GetSSHPort
	I1104 11:44:58.960860   61220 main.go:141] libmachine: (test-preload-666574) Calling .GetSSHKeyPath
	I1104 11:44:58.961004   61220 main.go:141] libmachine: (test-preload-666574) Calling .GetSSHKeyPath
	I1104 11:44:58.961150   61220 main.go:141] libmachine: (test-preload-666574) Calling .GetSSHUsername
	I1104 11:44:58.961286   61220 main.go:141] libmachine: Using SSH client type: native
	I1104 11:44:58.961499   61220 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.248 22 <nil> <nil>}
	I1104 11:44:58.961510   61220 main.go:141] libmachine: About to run SSH command:
	hostname
	I1104 11:44:59.065389   61220 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1104 11:44:59.065425   61220 main.go:141] libmachine: (test-preload-666574) Calling .GetMachineName
	I1104 11:44:59.065686   61220 buildroot.go:166] provisioning hostname "test-preload-666574"
	I1104 11:44:59.065712   61220 main.go:141] libmachine: (test-preload-666574) Calling .GetMachineName
	I1104 11:44:59.065915   61220 main.go:141] libmachine: (test-preload-666574) Calling .GetSSHHostname
	I1104 11:44:59.068747   61220 main.go:141] libmachine: (test-preload-666574) DBG | domain test-preload-666574 has defined MAC address 52:54:00:71:95:f8 in network mk-test-preload-666574
	I1104 11:44:59.069082   61220 main.go:141] libmachine: (test-preload-666574) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:95:f8", ip: ""} in network mk-test-preload-666574: {Iface:virbr1 ExpiryTime:2024-11-04 12:44:50 +0000 UTC Type:0 Mac:52:54:00:71:95:f8 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:test-preload-666574 Clientid:01:52:54:00:71:95:f8}
	I1104 11:44:59.069104   61220 main.go:141] libmachine: (test-preload-666574) DBG | domain test-preload-666574 has defined IP address 192.168.39.248 and MAC address 52:54:00:71:95:f8 in network mk-test-preload-666574
	I1104 11:44:59.069333   61220 main.go:141] libmachine: (test-preload-666574) Calling .GetSSHPort
	I1104 11:44:59.069528   61220 main.go:141] libmachine: (test-preload-666574) Calling .GetSSHKeyPath
	I1104 11:44:59.069643   61220 main.go:141] libmachine: (test-preload-666574) Calling .GetSSHKeyPath
	I1104 11:44:59.069752   61220 main.go:141] libmachine: (test-preload-666574) Calling .GetSSHUsername
	I1104 11:44:59.069911   61220 main.go:141] libmachine: Using SSH client type: native
	I1104 11:44:59.070075   61220 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.248 22 <nil> <nil>}
	I1104 11:44:59.070087   61220 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-666574 && echo "test-preload-666574" | sudo tee /etc/hostname
	I1104 11:44:59.187720   61220 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-666574
	
	I1104 11:44:59.187754   61220 main.go:141] libmachine: (test-preload-666574) Calling .GetSSHHostname
	I1104 11:44:59.190169   61220 main.go:141] libmachine: (test-preload-666574) DBG | domain test-preload-666574 has defined MAC address 52:54:00:71:95:f8 in network mk-test-preload-666574
	I1104 11:44:59.190449   61220 main.go:141] libmachine: (test-preload-666574) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:95:f8", ip: ""} in network mk-test-preload-666574: {Iface:virbr1 ExpiryTime:2024-11-04 12:44:50 +0000 UTC Type:0 Mac:52:54:00:71:95:f8 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:test-preload-666574 Clientid:01:52:54:00:71:95:f8}
	I1104 11:44:59.190474   61220 main.go:141] libmachine: (test-preload-666574) DBG | domain test-preload-666574 has defined IP address 192.168.39.248 and MAC address 52:54:00:71:95:f8 in network mk-test-preload-666574
	I1104 11:44:59.190684   61220 main.go:141] libmachine: (test-preload-666574) Calling .GetSSHPort
	I1104 11:44:59.190854   61220 main.go:141] libmachine: (test-preload-666574) Calling .GetSSHKeyPath
	I1104 11:44:59.190999   61220 main.go:141] libmachine: (test-preload-666574) Calling .GetSSHKeyPath
	I1104 11:44:59.191175   61220 main.go:141] libmachine: (test-preload-666574) Calling .GetSSHUsername
	I1104 11:44:59.191311   61220 main.go:141] libmachine: Using SSH client type: native
	I1104 11:44:59.191467   61220 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.248 22 <nil> <nil>}
	I1104 11:44:59.191482   61220 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-666574' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-666574/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-666574' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1104 11:44:59.301072   61220 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1104 11:44:59.301109   61220 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19906-19898/.minikube CaCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19906-19898/.minikube}
	I1104 11:44:59.301137   61220 buildroot.go:174] setting up certificates
	I1104 11:44:59.301147   61220 provision.go:84] configureAuth start
	I1104 11:44:59.301159   61220 main.go:141] libmachine: (test-preload-666574) Calling .GetMachineName
	I1104 11:44:59.301433   61220 main.go:141] libmachine: (test-preload-666574) Calling .GetIP
	I1104 11:44:59.303800   61220 main.go:141] libmachine: (test-preload-666574) DBG | domain test-preload-666574 has defined MAC address 52:54:00:71:95:f8 in network mk-test-preload-666574
	I1104 11:44:59.304109   61220 main.go:141] libmachine: (test-preload-666574) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:95:f8", ip: ""} in network mk-test-preload-666574: {Iface:virbr1 ExpiryTime:2024-11-04 12:44:50 +0000 UTC Type:0 Mac:52:54:00:71:95:f8 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:test-preload-666574 Clientid:01:52:54:00:71:95:f8}
	I1104 11:44:59.304138   61220 main.go:141] libmachine: (test-preload-666574) DBG | domain test-preload-666574 has defined IP address 192.168.39.248 and MAC address 52:54:00:71:95:f8 in network mk-test-preload-666574
	I1104 11:44:59.304318   61220 main.go:141] libmachine: (test-preload-666574) Calling .GetSSHHostname
	I1104 11:44:59.306703   61220 main.go:141] libmachine: (test-preload-666574) DBG | domain test-preload-666574 has defined MAC address 52:54:00:71:95:f8 in network mk-test-preload-666574
	I1104 11:44:59.307029   61220 main.go:141] libmachine: (test-preload-666574) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:95:f8", ip: ""} in network mk-test-preload-666574: {Iface:virbr1 ExpiryTime:2024-11-04 12:44:50 +0000 UTC Type:0 Mac:52:54:00:71:95:f8 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:test-preload-666574 Clientid:01:52:54:00:71:95:f8}
	I1104 11:44:59.307053   61220 main.go:141] libmachine: (test-preload-666574) DBG | domain test-preload-666574 has defined IP address 192.168.39.248 and MAC address 52:54:00:71:95:f8 in network mk-test-preload-666574
	I1104 11:44:59.307217   61220 provision.go:143] copyHostCerts
	I1104 11:44:59.307269   61220 exec_runner.go:144] found /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem, removing ...
	I1104 11:44:59.307281   61220 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem
	I1104 11:44:59.307344   61220 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem (1078 bytes)
	I1104 11:44:59.307439   61220 exec_runner.go:144] found /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem, removing ...
	I1104 11:44:59.307446   61220 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem
	I1104 11:44:59.307471   61220 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem (1123 bytes)
	I1104 11:44:59.307525   61220 exec_runner.go:144] found /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem, removing ...
	I1104 11:44:59.307532   61220 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem
	I1104 11:44:59.307554   61220 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem (1679 bytes)
	I1104 11:44:59.307601   61220 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem org=jenkins.test-preload-666574 san=[127.0.0.1 192.168.39.248 localhost minikube test-preload-666574]
	I1104 11:44:59.523724   61220 provision.go:177] copyRemoteCerts
	I1104 11:44:59.523778   61220 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1104 11:44:59.523804   61220 main.go:141] libmachine: (test-preload-666574) Calling .GetSSHHostname
	I1104 11:44:59.526353   61220 main.go:141] libmachine: (test-preload-666574) DBG | domain test-preload-666574 has defined MAC address 52:54:00:71:95:f8 in network mk-test-preload-666574
	I1104 11:44:59.526756   61220 main.go:141] libmachine: (test-preload-666574) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:95:f8", ip: ""} in network mk-test-preload-666574: {Iface:virbr1 ExpiryTime:2024-11-04 12:44:50 +0000 UTC Type:0 Mac:52:54:00:71:95:f8 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:test-preload-666574 Clientid:01:52:54:00:71:95:f8}
	I1104 11:44:59.526785   61220 main.go:141] libmachine: (test-preload-666574) DBG | domain test-preload-666574 has defined IP address 192.168.39.248 and MAC address 52:54:00:71:95:f8 in network mk-test-preload-666574
	I1104 11:44:59.526908   61220 main.go:141] libmachine: (test-preload-666574) Calling .GetSSHPort
	I1104 11:44:59.527102   61220 main.go:141] libmachine: (test-preload-666574) Calling .GetSSHKeyPath
	I1104 11:44:59.527267   61220 main.go:141] libmachine: (test-preload-666574) Calling .GetSSHUsername
	I1104 11:44:59.527382   61220 sshutil.go:53] new ssh client: &{IP:192.168.39.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/test-preload-666574/id_rsa Username:docker}
	I1104 11:44:59.606893   61220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1104 11:44:59.630274   61220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1104 11:44:59.652610   61220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1104 11:44:59.674732   61220 provision.go:87] duration metric: took 373.565614ms to configureAuth
	I1104 11:44:59.674767   61220 buildroot.go:189] setting minikube options for container-runtime
	I1104 11:44:59.674980   61220 config.go:182] Loaded profile config "test-preload-666574": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I1104 11:44:59.675078   61220 main.go:141] libmachine: (test-preload-666574) Calling .GetSSHHostname
	I1104 11:44:59.677577   61220 main.go:141] libmachine: (test-preload-666574) DBG | domain test-preload-666574 has defined MAC address 52:54:00:71:95:f8 in network mk-test-preload-666574
	I1104 11:44:59.678026   61220 main.go:141] libmachine: (test-preload-666574) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:95:f8", ip: ""} in network mk-test-preload-666574: {Iface:virbr1 ExpiryTime:2024-11-04 12:44:50 +0000 UTC Type:0 Mac:52:54:00:71:95:f8 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:test-preload-666574 Clientid:01:52:54:00:71:95:f8}
	I1104 11:44:59.678055   61220 main.go:141] libmachine: (test-preload-666574) DBG | domain test-preload-666574 has defined IP address 192.168.39.248 and MAC address 52:54:00:71:95:f8 in network mk-test-preload-666574
	I1104 11:44:59.678186   61220 main.go:141] libmachine: (test-preload-666574) Calling .GetSSHPort
	I1104 11:44:59.678386   61220 main.go:141] libmachine: (test-preload-666574) Calling .GetSSHKeyPath
	I1104 11:44:59.678498   61220 main.go:141] libmachine: (test-preload-666574) Calling .GetSSHKeyPath
	I1104 11:44:59.678640   61220 main.go:141] libmachine: (test-preload-666574) Calling .GetSSHUsername
	I1104 11:44:59.678794   61220 main.go:141] libmachine: Using SSH client type: native
	I1104 11:44:59.678941   61220 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.248 22 <nil> <nil>}
	I1104 11:44:59.678956   61220 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1104 11:44:59.881209   61220 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1104 11:44:59.881258   61220 machine.go:96] duration metric: took 923.570267ms to provisionDockerMachine
	I1104 11:44:59.881277   61220 start.go:293] postStartSetup for "test-preload-666574" (driver="kvm2")
	I1104 11:44:59.881292   61220 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1104 11:44:59.881317   61220 main.go:141] libmachine: (test-preload-666574) Calling .DriverName
	I1104 11:44:59.881638   61220 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1104 11:44:59.881675   61220 main.go:141] libmachine: (test-preload-666574) Calling .GetSSHHostname
	I1104 11:44:59.884093   61220 main.go:141] libmachine: (test-preload-666574) DBG | domain test-preload-666574 has defined MAC address 52:54:00:71:95:f8 in network mk-test-preload-666574
	I1104 11:44:59.884399   61220 main.go:141] libmachine: (test-preload-666574) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:95:f8", ip: ""} in network mk-test-preload-666574: {Iface:virbr1 ExpiryTime:2024-11-04 12:44:50 +0000 UTC Type:0 Mac:52:54:00:71:95:f8 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:test-preload-666574 Clientid:01:52:54:00:71:95:f8}
	I1104 11:44:59.884428   61220 main.go:141] libmachine: (test-preload-666574) DBG | domain test-preload-666574 has defined IP address 192.168.39.248 and MAC address 52:54:00:71:95:f8 in network mk-test-preload-666574
	I1104 11:44:59.884589   61220 main.go:141] libmachine: (test-preload-666574) Calling .GetSSHPort
	I1104 11:44:59.884810   61220 main.go:141] libmachine: (test-preload-666574) Calling .GetSSHKeyPath
	I1104 11:44:59.884993   61220 main.go:141] libmachine: (test-preload-666574) Calling .GetSSHUsername
	I1104 11:44:59.885135   61220 sshutil.go:53] new ssh client: &{IP:192.168.39.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/test-preload-666574/id_rsa Username:docker}
	I1104 11:44:59.967224   61220 ssh_runner.go:195] Run: cat /etc/os-release
	I1104 11:44:59.970935   61220 info.go:137] Remote host: Buildroot 2023.02.9
	I1104 11:44:59.970960   61220 filesync.go:126] Scanning /home/jenkins/minikube-integration/19906-19898/.minikube/addons for local assets ...
	I1104 11:44:59.971038   61220 filesync.go:126] Scanning /home/jenkins/minikube-integration/19906-19898/.minikube/files for local assets ...
	I1104 11:44:59.971129   61220 filesync.go:149] local asset: /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem -> 272182.pem in /etc/ssl/certs
	I1104 11:44:59.971230   61220 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1104 11:44:59.979998   61220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem --> /etc/ssl/certs/272182.pem (1708 bytes)
	I1104 11:45:00.002553   61220 start.go:296] duration metric: took 121.260864ms for postStartSetup
	I1104 11:45:00.002594   61220 fix.go:56] duration metric: took 19.863721963s for fixHost
	I1104 11:45:00.002617   61220 main.go:141] libmachine: (test-preload-666574) Calling .GetSSHHostname
	I1104 11:45:00.005326   61220 main.go:141] libmachine: (test-preload-666574) DBG | domain test-preload-666574 has defined MAC address 52:54:00:71:95:f8 in network mk-test-preload-666574
	I1104 11:45:00.005639   61220 main.go:141] libmachine: (test-preload-666574) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:95:f8", ip: ""} in network mk-test-preload-666574: {Iface:virbr1 ExpiryTime:2024-11-04 12:44:50 +0000 UTC Type:0 Mac:52:54:00:71:95:f8 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:test-preload-666574 Clientid:01:52:54:00:71:95:f8}
	I1104 11:45:00.005693   61220 main.go:141] libmachine: (test-preload-666574) DBG | domain test-preload-666574 has defined IP address 192.168.39.248 and MAC address 52:54:00:71:95:f8 in network mk-test-preload-666574
	I1104 11:45:00.005789   61220 main.go:141] libmachine: (test-preload-666574) Calling .GetSSHPort
	I1104 11:45:00.006005   61220 main.go:141] libmachine: (test-preload-666574) Calling .GetSSHKeyPath
	I1104 11:45:00.006167   61220 main.go:141] libmachine: (test-preload-666574) Calling .GetSSHKeyPath
	I1104 11:45:00.006338   61220 main.go:141] libmachine: (test-preload-666574) Calling .GetSSHUsername
	I1104 11:45:00.006467   61220 main.go:141] libmachine: Using SSH client type: native
	I1104 11:45:00.006661   61220 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.248 22 <nil> <nil>}
	I1104 11:45:00.006676   61220 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1104 11:45:00.109946   61220 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730720700.073810468
	
	I1104 11:45:00.109968   61220 fix.go:216] guest clock: 1730720700.073810468
	I1104 11:45:00.109975   61220 fix.go:229] Guest: 2024-11-04 11:45:00.073810468 +0000 UTC Remote: 2024-11-04 11:45:00.002599377 +0000 UTC m=+26.310327989 (delta=71.211091ms)
	I1104 11:45:00.109994   61220 fix.go:200] guest clock delta is within tolerance: 71.211091ms
	I1104 11:45:00.109999   61220 start.go:83] releasing machines lock for "test-preload-666574", held for 19.971137677s
	I1104 11:45:00.110019   61220 main.go:141] libmachine: (test-preload-666574) Calling .DriverName
	I1104 11:45:00.110302   61220 main.go:141] libmachine: (test-preload-666574) Calling .GetIP
	I1104 11:45:00.112778   61220 main.go:141] libmachine: (test-preload-666574) DBG | domain test-preload-666574 has defined MAC address 52:54:00:71:95:f8 in network mk-test-preload-666574
	I1104 11:45:00.113056   61220 main.go:141] libmachine: (test-preload-666574) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:95:f8", ip: ""} in network mk-test-preload-666574: {Iface:virbr1 ExpiryTime:2024-11-04 12:44:50 +0000 UTC Type:0 Mac:52:54:00:71:95:f8 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:test-preload-666574 Clientid:01:52:54:00:71:95:f8}
	I1104 11:45:00.113085   61220 main.go:141] libmachine: (test-preload-666574) DBG | domain test-preload-666574 has defined IP address 192.168.39.248 and MAC address 52:54:00:71:95:f8 in network mk-test-preload-666574
	I1104 11:45:00.113164   61220 main.go:141] libmachine: (test-preload-666574) Calling .DriverName
	I1104 11:45:00.113652   61220 main.go:141] libmachine: (test-preload-666574) Calling .DriverName
	I1104 11:45:00.113855   61220 main.go:141] libmachine: (test-preload-666574) Calling .DriverName
	I1104 11:45:00.113944   61220 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1104 11:45:00.113990   61220 main.go:141] libmachine: (test-preload-666574) Calling .GetSSHHostname
	I1104 11:45:00.114043   61220 ssh_runner.go:195] Run: cat /version.json
	I1104 11:45:00.114064   61220 main.go:141] libmachine: (test-preload-666574) Calling .GetSSHHostname
	I1104 11:45:00.116552   61220 main.go:141] libmachine: (test-preload-666574) DBG | domain test-preload-666574 has defined MAC address 52:54:00:71:95:f8 in network mk-test-preload-666574
	I1104 11:45:00.116847   61220 main.go:141] libmachine: (test-preload-666574) DBG | domain test-preload-666574 has defined MAC address 52:54:00:71:95:f8 in network mk-test-preload-666574
	I1104 11:45:00.116939   61220 main.go:141] libmachine: (test-preload-666574) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:95:f8", ip: ""} in network mk-test-preload-666574: {Iface:virbr1 ExpiryTime:2024-11-04 12:44:50 +0000 UTC Type:0 Mac:52:54:00:71:95:f8 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:test-preload-666574 Clientid:01:52:54:00:71:95:f8}
	I1104 11:45:00.116962   61220 main.go:141] libmachine: (test-preload-666574) DBG | domain test-preload-666574 has defined IP address 192.168.39.248 and MAC address 52:54:00:71:95:f8 in network mk-test-preload-666574
	I1104 11:45:00.117171   61220 main.go:141] libmachine: (test-preload-666574) Calling .GetSSHPort
	I1104 11:45:00.117274   61220 main.go:141] libmachine: (test-preload-666574) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:95:f8", ip: ""} in network mk-test-preload-666574: {Iface:virbr1 ExpiryTime:2024-11-04 12:44:50 +0000 UTC Type:0 Mac:52:54:00:71:95:f8 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:test-preload-666574 Clientid:01:52:54:00:71:95:f8}
	I1104 11:45:00.117302   61220 main.go:141] libmachine: (test-preload-666574) DBG | domain test-preload-666574 has defined IP address 192.168.39.248 and MAC address 52:54:00:71:95:f8 in network mk-test-preload-666574
	I1104 11:45:00.117387   61220 main.go:141] libmachine: (test-preload-666574) Calling .GetSSHKeyPath
	I1104 11:45:00.117470   61220 main.go:141] libmachine: (test-preload-666574) Calling .GetSSHPort
	I1104 11:45:00.117543   61220 main.go:141] libmachine: (test-preload-666574) Calling .GetSSHUsername
	I1104 11:45:00.117614   61220 main.go:141] libmachine: (test-preload-666574) Calling .GetSSHKeyPath
	I1104 11:45:00.117652   61220 sshutil.go:53] new ssh client: &{IP:192.168.39.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/test-preload-666574/id_rsa Username:docker}
	I1104 11:45:00.117717   61220 main.go:141] libmachine: (test-preload-666574) Calling .GetSSHUsername
	I1104 11:45:00.117827   61220 sshutil.go:53] new ssh client: &{IP:192.168.39.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/test-preload-666574/id_rsa Username:docker}
	I1104 11:45:00.216435   61220 ssh_runner.go:195] Run: systemctl --version
	I1104 11:45:00.222192   61220 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1104 11:45:00.362763   61220 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1104 11:45:00.368902   61220 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1104 11:45:00.368979   61220 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1104 11:45:00.384335   61220 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1104 11:45:00.384359   61220 start.go:495] detecting cgroup driver to use...
	I1104 11:45:00.384450   61220 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1104 11:45:00.399771   61220 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1104 11:45:00.413736   61220 docker.go:217] disabling cri-docker service (if available) ...
	I1104 11:45:00.413806   61220 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1104 11:45:00.427134   61220 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1104 11:45:00.441053   61220 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1104 11:45:00.554623   61220 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1104 11:45:00.699355   61220 docker.go:233] disabling docker service ...
	I1104 11:45:00.699432   61220 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1104 11:45:00.713105   61220 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1104 11:45:00.725367   61220 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1104 11:45:00.852634   61220 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1104 11:45:00.974576   61220 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1104 11:45:00.993918   61220 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1104 11:45:01.011243   61220 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I1104 11:45:01.011302   61220 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 11:45:01.020546   61220 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1104 11:45:01.020598   61220 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 11:45:01.029572   61220 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 11:45:01.038422   61220 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 11:45:01.047347   61220 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1104 11:45:01.056696   61220 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 11:45:01.065831   61220 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 11:45:01.080919   61220 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 11:45:01.089827   61220 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1104 11:45:01.098187   61220 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1104 11:45:01.098244   61220 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1104 11:45:01.110649   61220 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1104 11:45:01.119106   61220 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1104 11:45:01.231701   61220 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1104 11:45:01.317502   61220 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1104 11:45:01.317561   61220 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1104 11:45:01.322188   61220 start.go:563] Will wait 60s for crictl version
	I1104 11:45:01.322234   61220 ssh_runner.go:195] Run: which crictl
	I1104 11:45:01.325526   61220 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1104 11:45:01.360551   61220 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1104 11:45:01.360627   61220 ssh_runner.go:195] Run: crio --version
	I1104 11:45:01.385836   61220 ssh_runner.go:195] Run: crio --version
	I1104 11:45:01.413162   61220 out.go:177] * Preparing Kubernetes v1.24.4 on CRI-O 1.29.1 ...
	I1104 11:45:01.414602   61220 main.go:141] libmachine: (test-preload-666574) Calling .GetIP
	I1104 11:45:01.417107   61220 main.go:141] libmachine: (test-preload-666574) DBG | domain test-preload-666574 has defined MAC address 52:54:00:71:95:f8 in network mk-test-preload-666574
	I1104 11:45:01.417451   61220 main.go:141] libmachine: (test-preload-666574) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:95:f8", ip: ""} in network mk-test-preload-666574: {Iface:virbr1 ExpiryTime:2024-11-04 12:44:50 +0000 UTC Type:0 Mac:52:54:00:71:95:f8 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:test-preload-666574 Clientid:01:52:54:00:71:95:f8}
	I1104 11:45:01.417478   61220 main.go:141] libmachine: (test-preload-666574) DBG | domain test-preload-666574 has defined IP address 192.168.39.248 and MAC address 52:54:00:71:95:f8 in network mk-test-preload-666574
	I1104 11:45:01.417780   61220 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1104 11:45:01.421561   61220 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1104 11:45:01.433321   61220 kubeadm.go:883] updating cluster {Name:test-preload-666574 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-666574 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.248 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1104 11:45:01.433423   61220 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I1104 11:45:01.433462   61220 ssh_runner.go:195] Run: sudo crictl images --output json
	I1104 11:45:01.465913   61220 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I1104 11:45:01.465971   61220 ssh_runner.go:195] Run: which lz4
	I1104 11:45:01.469787   61220 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1104 11:45:01.473504   61220 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1104 11:45:01.473531   61220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (459355427 bytes)
	I1104 11:45:02.848994   61220 crio.go:462] duration metric: took 1.37923737s to copy over tarball
	I1104 11:45:02.849057   61220 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1104 11:45:05.196544   61220 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.347458425s)
	I1104 11:45:05.196574   61220 crio.go:469] duration metric: took 2.347555659s to extract the tarball
	I1104 11:45:05.196583   61220 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1104 11:45:05.236054   61220 ssh_runner.go:195] Run: sudo crictl images --output json
	I1104 11:45:05.278865   61220 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I1104 11:45:05.278890   61220 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.4 registry.k8s.io/kube-controller-manager:v1.24.4 registry.k8s.io/kube-scheduler:v1.24.4 registry.k8s.io/kube-proxy:v1.24.4 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1104 11:45:05.278954   61220 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1104 11:45:05.278962   61220 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I1104 11:45:05.278985   61220 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1104 11:45:05.278996   61220 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I1104 11:45:05.279006   61220 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I1104 11:45:05.279034   61220 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I1104 11:45:05.279042   61220 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I1104 11:45:05.279075   61220 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I1104 11:45:05.280481   61220 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1104 11:45:05.280497   61220 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I1104 11:45:05.280501   61220 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I1104 11:45:05.280497   61220 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I1104 11:45:05.280523   61220 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I1104 11:45:05.280525   61220 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I1104 11:45:05.280592   61220 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I1104 11:45:05.280725   61220 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1104 11:45:05.444917   61220 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I1104 11:45:05.446994   61220 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I1104 11:45:05.451194   61220 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I1104 11:45:05.451444   61220 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.4
	I1104 11:45:05.457153   61220 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.4
	I1104 11:45:05.465678   61220 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.4
	I1104 11:45:05.487611   61220 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.4
	I1104 11:45:05.555337   61220 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I1104 11:45:05.555386   61220 cri.go:218] Removing image: registry.k8s.io/pause:3.7
	I1104 11:45:05.555390   61220 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I1104 11:45:05.555407   61220 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I1104 11:45:05.555432   61220 ssh_runner.go:195] Run: which crictl
	I1104 11:45:05.555441   61220 ssh_runner.go:195] Run: which crictl
	I1104 11:45:05.586566   61220 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I1104 11:45:05.586599   61220 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.3-0
	I1104 11:45:05.586619   61220 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.4" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.4" does not exist at hash "1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48" in container runtime
	I1104 11:45:05.586637   61220 ssh_runner.go:195] Run: which crictl
	I1104 11:45:05.586679   61220 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.4" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.4" does not exist at hash "03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9" in container runtime
	I1104 11:45:05.586641   61220 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1104 11:45:05.586702   61220 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.24.4
	I1104 11:45:05.586706   61220 ssh_runner.go:195] Run: which crictl
	I1104 11:45:05.586743   61220 ssh_runner.go:195] Run: which crictl
	I1104 11:45:05.593002   61220 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.4" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.4" does not exist at hash "6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d" in container runtime
	I1104 11:45:05.593050   61220 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.24.4
	I1104 11:45:05.593101   61220 ssh_runner.go:195] Run: which crictl
	I1104 11:45:05.604210   61220 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.4" needs transfer: "registry.k8s.io/kube-proxy:v1.24.4" does not exist at hash "7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7" in container runtime
	I1104 11:45:05.604242   61220 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.24.4
	I1104 11:45:05.604284   61220 ssh_runner.go:195] Run: which crictl
	I1104 11:45:05.604290   61220 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1104 11:45:05.604290   61220 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I1104 11:45:05.604344   61220 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I1104 11:45:05.604368   61220 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I1104 11:45:05.604399   61220 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I1104 11:45:05.604553   61220 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I1104 11:45:05.615513   61220 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I1104 11:45:05.662998   61220 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I1104 11:45:05.757739   61220 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I1104 11:45:05.757779   61220 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1104 11:45:05.757819   61220 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I1104 11:45:05.757892   61220 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I1104 11:45:05.757931   61220 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I1104 11:45:05.758010   61220 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I1104 11:45:05.778627   61220 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I1104 11:45:05.898080   61220 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I1104 11:45:05.904458   61220 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I1104 11:45:05.904528   61220 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1104 11:45:05.904542   61220 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I1104 11:45:05.904588   61220 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I1104 11:45:05.904635   61220 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I1104 11:45:05.904657   61220 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7
	I1104 11:45:05.904730   61220 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I1104 11:45:05.996060   61220 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0
	I1104 11:45:05.996189   61220 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I1104 11:45:06.015510   61220 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4
	I1104 11:45:06.015602   61220 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4
	I1104 11:45:06.015617   61220 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4
	I1104 11:45:06.015689   61220 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4
	I1104 11:45:06.015754   61220 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4
	I1104 11:45:06.015829   61220 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I1104 11:45:06.027824   61220 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6
	I1104 11:45:06.027899   61220 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I1104 11:45:06.028773   61220 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
	I1104 11:45:06.028787   61220 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.7
	I1104 11:45:06.028787   61220 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4
	I1104 11:45:06.028810   61220 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
	I1104 11:45:06.028816   61220 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.7
	I1104 11:45:06.028862   61220 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.4
	I1104 11:45:06.033219   61220 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.24.4 (exists)
	I1104 11:45:06.033519   61220 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.24.4 (exists)
	I1104 11:45:06.034932   61220 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.24.4 (exists)
	I1104 11:45:06.034987   61220 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
	I1104 11:45:06.222700   61220 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1104 11:45:08.781661   61220 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/pause_3.7: (2.752826145s)
	I1104 11:45:08.781691   61220 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 from cache
	I1104 11:45:08.781715   61220 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I1104 11:45:08.781725   61220 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.4: (2.752840209s)
	I1104 11:45:08.781755   61220 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.24.4 (exists)
	I1104 11:45:08.781768   61220 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0
	I1104 11:45:08.781780   61220 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.559049899s)
	I1104 11:45:10.826381   61220 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0: (2.044587065s)
	I1104 11:45:10.826420   61220 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 from cache
	I1104 11:45:10.826448   61220 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.24.4
	I1104 11:45:10.826488   61220 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4
	I1104 11:45:11.671778   61220 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4 from cache
	I1104 11:45:11.671817   61220 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.24.4
	I1104 11:45:11.671873   61220 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4
	I1104 11:45:12.420662   61220 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4 from cache
	I1104 11:45:12.420696   61220 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I1104 11:45:12.420744   61220 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I1104 11:45:13.057811   61220 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4 from cache
	I1104 11:45:13.057863   61220 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I1104 11:45:13.057923   61220 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6
	I1104 11:45:13.404559   61220 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I1104 11:45:13.404604   61220 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.24.4
	I1104 11:45:13.404642   61220 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4
	I1104 11:45:13.845743   61220 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4 from cache
	I1104 11:45:13.845784   61220 cache_images.go:123] Successfully loaded all cached images
	I1104 11:45:13.845789   61220 cache_images.go:92] duration metric: took 8.566883734s to LoadCachedImages
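Note: the block above is the cached-image restore path for this preload run: each tarball under /var/lib/minikube/images is imported into CRI-O's image store with podman and then verified. The following is only an illustrative shell sketch of that load-and-verify pattern (paths and image tag taken from the log; the loop itself is not minikube's code):

    # Illustrative only: same per-image pattern as the commands logged above.
    for tarball in /var/lib/minikube/images/*; do
      sudo test -s "$tarball" || continue        # only load archives that exist and are non-empty
      sudo podman load -i "$tarball"             # import the archive into CRI-O's image store
    done
    # Verify the runtime now resolves one of the images (same inspect call the log issues).
    sudo podman image inspect --format '{{.Id}}' registry.k8s.io/kube-apiserver:v1.24.4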
	I1104 11:45:13.845799   61220 kubeadm.go:934] updating node { 192.168.39.248 8443 v1.24.4 crio true true} ...
	I1104 11:45:13.845919   61220 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=test-preload-666574 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.248
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.4 ClusterName:test-preload-666574 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1104 11:45:13.846002   61220 ssh_runner.go:195] Run: crio config
	I1104 11:45:13.899488   61220 cni.go:84] Creating CNI manager for ""
	I1104 11:45:13.899507   61220 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1104 11:45:13.899516   61220 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1104 11:45:13.899533   61220 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.248 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-666574 NodeName:test-preload-666574 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.248"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.248 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1104 11:45:13.899661   61220 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.248
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-666574"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.248
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.248"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1104 11:45:13.899730   61220 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
	I1104 11:45:13.909477   61220 binaries.go:44] Found k8s binaries, skipping transfer
	I1104 11:45:13.909540   61220 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1104 11:45:13.918034   61220 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I1104 11:45:13.933439   61220 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1104 11:45:13.947922   61220 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2106 bytes)
	I1104 11:45:13.963102   61220 ssh_runner.go:195] Run: grep 192.168.39.248	control-plane.minikube.internal$ /etc/hosts
	I1104 11:45:13.966358   61220 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.248	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
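Note: the bash -c one-liner above rewrites /etc/hosts so control-plane.minikube.internal resolves to the node IP. Expanded step by step for readability (equivalent to the logged command, not an additional action):

    # Step-by-step equivalent of the single bash -c command above.
    grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts > /tmp/h.$$      # keep every line except a stale mapping
    printf '192.168.39.248\tcontrol-plane.minikube.internal\n' >> /tmp/h.$$   # append the current node IP
    sudo cp /tmp/h.$$ /etc/hosts                                              # install the rebuilt file in one copy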
	I1104 11:45:13.977280   61220 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1104 11:45:14.099210   61220 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1104 11:45:14.115056   61220 certs.go:68] Setting up /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/test-preload-666574 for IP: 192.168.39.248
	I1104 11:45:14.115077   61220 certs.go:194] generating shared ca certs ...
	I1104 11:45:14.115095   61220 certs.go:226] acquiring lock for ca certs: {Name:mk4fb0469da6697654d0290fd25edddd463f47c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 11:45:14.115246   61220 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.key
	I1104 11:45:14.115283   61220 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.key
	I1104 11:45:14.115292   61220 certs.go:256] generating profile certs ...
	I1104 11:45:14.115405   61220 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/test-preload-666574/client.key
	I1104 11:45:14.115564   61220 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/test-preload-666574/apiserver.key.a6ac4606
	I1104 11:45:14.115655   61220 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/test-preload-666574/proxy-client.key
	I1104 11:45:14.115799   61220 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218.pem (1338 bytes)
	W1104 11:45:14.115839   61220 certs.go:480] ignoring /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218_empty.pem, impossibly tiny 0 bytes
	I1104 11:45:14.115851   61220 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem (1675 bytes)
	I1104 11:45:14.115880   61220 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem (1078 bytes)
	I1104 11:45:14.115912   61220 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem (1123 bytes)
	I1104 11:45:14.115940   61220 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem (1679 bytes)
	I1104 11:45:14.115992   61220 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem (1708 bytes)
	I1104 11:45:14.116821   61220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1104 11:45:14.167885   61220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1104 11:45:14.207225   61220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1104 11:45:14.236996   61220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1104 11:45:14.265174   61220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/test-preload-666574/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1104 11:45:14.304293   61220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/test-preload-666574/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1104 11:45:14.327951   61220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/test-preload-666574/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1104 11:45:14.350378   61220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/test-preload-666574/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1104 11:45:14.372384   61220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem --> /usr/share/ca-certificates/272182.pem (1708 bytes)
	I1104 11:45:14.393838   61220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1104 11:45:14.415484   61220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218.pem --> /usr/share/ca-certificates/27218.pem (1338 bytes)
	I1104 11:45:14.436786   61220 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1104 11:45:14.452015   61220 ssh_runner.go:195] Run: openssl version
	I1104 11:45:14.457729   61220 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/27218.pem && ln -fs /usr/share/ca-certificates/27218.pem /etc/ssl/certs/27218.pem"
	I1104 11:45:14.467677   61220 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/27218.pem
	I1104 11:45:14.471885   61220 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  4 10:49 /usr/share/ca-certificates/27218.pem
	I1104 11:45:14.471921   61220 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/27218.pem
	I1104 11:45:14.477319   61220 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/27218.pem /etc/ssl/certs/51391683.0"
	I1104 11:45:14.487382   61220 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/272182.pem && ln -fs /usr/share/ca-certificates/272182.pem /etc/ssl/certs/272182.pem"
	I1104 11:45:14.497455   61220 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/272182.pem
	I1104 11:45:14.501667   61220 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  4 10:49 /usr/share/ca-certificates/272182.pem
	I1104 11:45:14.501712   61220 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/272182.pem
	I1104 11:45:14.506989   61220 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/272182.pem /etc/ssl/certs/3ec20f2e.0"
	I1104 11:45:14.517011   61220 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1104 11:45:14.527308   61220 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1104 11:45:14.531580   61220 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  4 10:38 /usr/share/ca-certificates/minikubeCA.pem
	I1104 11:45:14.531622   61220 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1104 11:45:14.536889   61220 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1104 11:45:14.547033   61220 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1104 11:45:14.551406   61220 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1104 11:45:14.557032   61220 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1104 11:45:14.562753   61220 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1104 11:45:14.568636   61220 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1104 11:45:14.574269   61220 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1104 11:45:14.579845   61220 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
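Note: each openssl invocation above asks whether the named certificate stays valid for at least another 86400 seconds (24 hours); exit status 0 keeps the existing certificate, non-zero would trigger regeneration. A manual check of the same kind:

    # Exit status 0: certificate will not expire within the next 24 hours.
    sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
      && echo "still valid for 24h" \
      || echo "expires within 24h - would be regenerated"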
	I1104 11:45:14.585427   61220 kubeadm.go:392] StartCluster: {Name:test-preload-666574 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-666574 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.248 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1104 11:45:14.585502   61220 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1104 11:45:14.585551   61220 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1104 11:45:14.624042   61220 cri.go:89] found id: ""
	I1104 11:45:14.624105   61220 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1104 11:45:14.633824   61220 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1104 11:45:14.633847   61220 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1104 11:45:14.633897   61220 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1104 11:45:14.643312   61220 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1104 11:45:14.643688   61220 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-666574" does not appear in /home/jenkins/minikube-integration/19906-19898/kubeconfig
	I1104 11:45:14.643778   61220 kubeconfig.go:62] /home/jenkins/minikube-integration/19906-19898/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-666574" cluster setting kubeconfig missing "test-preload-666574" context setting]
	I1104 11:45:14.644039   61220 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19906-19898/kubeconfig: {Name:mk164657c178e6abe086acb5c1cadb969cb2b0cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 11:45:14.644576   61220 kapi.go:59] client config for test-preload-666574: &rest.Config{Host:"https://192.168.39.248:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19906-19898/.minikube/profiles/test-preload-666574/client.crt", KeyFile:"/home/jenkins/minikube-integration/19906-19898/.minikube/profiles/test-preload-666574/client.key", CAFile:"/home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2437da0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1104 11:45:14.645124   61220 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1104 11:45:14.653630   61220 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.248
	I1104 11:45:14.653659   61220 kubeadm.go:1160] stopping kube-system containers ...
	I1104 11:45:14.653673   61220 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1104 11:45:14.653732   61220 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1104 11:45:14.687739   61220 cri.go:89] found id: ""
	I1104 11:45:14.687804   61220 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1104 11:45:14.702991   61220 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1104 11:45:14.712081   61220 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1104 11:45:14.712100   61220 kubeadm.go:157] found existing configuration files:
	
	I1104 11:45:14.712144   61220 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1104 11:45:14.720694   61220 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1104 11:45:14.720744   61220 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1104 11:45:14.729257   61220 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1104 11:45:14.737350   61220 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1104 11:45:14.737402   61220 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1104 11:45:14.745709   61220 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1104 11:45:14.754200   61220 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1104 11:45:14.754253   61220 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1104 11:45:14.762577   61220 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1104 11:45:14.770524   61220 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1104 11:45:14.770565   61220 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
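Note: the repeated grep/rm pairs above apply one rule per kubeconfig under /etc/kubernetes: if the file does not point at https://control-plane.minikube.internal:8443 (or does not exist), it is removed so kubeadm can regenerate it in the init phases that follow. A compact sketch of that rule (a hypothetical loop; minikube issues the commands individually, exactly as logged):

    # Sketch of the stale-config cleanup the log performs file by file.
    for conf in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      if ! sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$conf" 2>/dev/null; then
        sudo rm -f "/etc/kubernetes/$conf"   # missing or pointing elsewhere: let kubeadm regenerate it
      fi
    done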
	I1104 11:45:14.778837   61220 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1104 11:45:14.787221   61220 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1104 11:45:14.878382   61220 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1104 11:45:15.636977   61220 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1104 11:45:15.874734   61220 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1104 11:45:15.947952   61220 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1104 11:45:16.036263   61220 api_server.go:52] waiting for apiserver process to appear ...
	I1104 11:45:16.036344   61220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 11:45:16.537291   61220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 11:45:17.037402   61220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 11:45:17.051845   61220 api_server.go:72] duration metric: took 1.015583945s to wait for apiserver process to appear ...
	I1104 11:45:17.051870   61220 api_server.go:88] waiting for apiserver healthz status ...
	I1104 11:45:17.051895   61220 api_server.go:253] Checking apiserver healthz at https://192.168.39.248:8443/healthz ...
	I1104 11:45:17.052273   61220 api_server.go:269] stopped: https://192.168.39.248:8443/healthz: Get "https://192.168.39.248:8443/healthz": dial tcp 192.168.39.248:8443: connect: connection refused
	I1104 11:45:17.552980   61220 api_server.go:253] Checking apiserver healthz at https://192.168.39.248:8443/healthz ...
	I1104 11:45:17.553605   61220 api_server.go:269] stopped: https://192.168.39.248:8443/healthz: Get "https://192.168.39.248:8443/healthz": dial tcp 192.168.39.248:8443: connect: connection refused
	I1104 11:45:18.052141   61220 api_server.go:253] Checking apiserver healthz at https://192.168.39.248:8443/healthz ...
	I1104 11:45:20.619504   61220 api_server.go:279] https://192.168.39.248:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1104 11:45:20.619537   61220 api_server.go:103] status: https://192.168.39.248:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1104 11:45:20.619553   61220 api_server.go:253] Checking apiserver healthz at https://192.168.39.248:8443/healthz ...
	I1104 11:45:20.667425   61220 api_server.go:279] https://192.168.39.248:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1104 11:45:20.667454   61220 api_server.go:103] status: https://192.168.39.248:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1104 11:45:21.052980   61220 api_server.go:253] Checking apiserver healthz at https://192.168.39.248:8443/healthz ...
	I1104 11:45:21.057757   61220 api_server.go:279] https://192.168.39.248:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1104 11:45:21.057781   61220 api_server.go:103] status: https://192.168.39.248:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1104 11:45:21.552364   61220 api_server.go:253] Checking apiserver healthz at https://192.168.39.248:8443/healthz ...
	I1104 11:45:21.559026   61220 api_server.go:279] https://192.168.39.248:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1104 11:45:21.559053   61220 api_server.go:103] status: https://192.168.39.248:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1104 11:45:22.052733   61220 api_server.go:253] Checking apiserver healthz at https://192.168.39.248:8443/healthz ...
	I1104 11:45:22.062596   61220 api_server.go:279] https://192.168.39.248:8443/healthz returned 200:
	ok
	I1104 11:45:22.070769   61220 api_server.go:141] control plane version: v1.24.4
	I1104 11:45:22.070798   61220 api_server.go:131] duration metric: took 5.018920192s to wait for apiserver health ...
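Note: the 403 -> 500 -> 200 progression above is the expected restart sequence: anonymous /healthz requests are rejected until RBAC bootstrap roles exist, the 500 responses show the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks still pending, and the wait ends on the first 200 "ok". An equivalent manual probe using the profile's client certificate (paths taken from the log; the curl loop itself is illustrative):

    # Poll the apiserver health endpoint the same way the wait loop in the log does.
    until curl -sS \
        --cacert /home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt \
        --cert   /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/test-preload-666574/client.crt \
        --key    /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/test-preload-666574/client.key \
        https://192.168.39.248:8443/healthz | grep -qx 'ok'; do
      sleep 0.5   # the log shows retries on roughly this cadence
    done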
	I1104 11:45:22.070808   61220 cni.go:84] Creating CNI manager for ""
	I1104 11:45:22.070817   61220 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1104 11:45:22.072417   61220 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1104 11:45:22.073812   61220 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1104 11:45:22.091601   61220 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1104 11:45:22.117776   61220 system_pods.go:43] waiting for kube-system pods to appear ...
	I1104 11:45:22.117856   61220 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1104 11:45:22.117876   61220 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1104 11:45:22.130432   61220 system_pods.go:59] 7 kube-system pods found
	I1104 11:45:22.130470   61220 system_pods.go:61] "coredns-6d4b75cb6d-7n9sb" [e07948dd-2010-421c-aa97-9e41e3294264] Running
	I1104 11:45:22.130479   61220 system_pods.go:61] "etcd-test-preload-666574" [9bcf1102-8cb3-4de0-b04b-11fc7323db3d] Running
	I1104 11:45:22.130485   61220 system_pods.go:61] "kube-apiserver-test-preload-666574" [d2eeff59-58c9-4395-a9cd-5a366b7ecd44] Running
	I1104 11:45:22.130491   61220 system_pods.go:61] "kube-controller-manager-test-preload-666574" [9048d84e-dfed-4c59-89ee-6c713e1dc281] Running
	I1104 11:45:22.130501   61220 system_pods.go:61] "kube-proxy-rrdvr" [7774b925-233c-4d77-a622-433cd96a582d] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1104 11:45:22.130516   61220 system_pods.go:61] "kube-scheduler-test-preload-666574" [6117bd71-f8a9-49f8-aed8-e652062f0e7e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1104 11:45:22.130530   61220 system_pods.go:61] "storage-provisioner" [644163da-80a6-4ae9-a7b8-64076353f07d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1104 11:45:22.130545   61220 system_pods.go:74] duration metric: took 12.742269ms to wait for pod list to return data ...
	I1104 11:45:22.130558   61220 node_conditions.go:102] verifying NodePressure condition ...
	I1104 11:45:22.133664   61220 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1104 11:45:22.133696   61220 node_conditions.go:123] node cpu capacity is 2
	I1104 11:45:22.133709   61220 node_conditions.go:105] duration metric: took 3.141895ms to run NodePressure ...
	I1104 11:45:22.133730   61220 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1104 11:45:22.329192   61220 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1104 11:45:22.335271   61220 kubeadm.go:739] kubelet initialised
	I1104 11:45:22.335295   61220 kubeadm.go:740] duration metric: took 6.077952ms waiting for restarted kubelet to initialise ...
	I1104 11:45:22.335303   61220 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1104 11:45:22.341304   61220 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6d4b75cb6d-7n9sb" in "kube-system" namespace to be "Ready" ...
	I1104 11:45:22.348550   61220 pod_ready.go:98] node "test-preload-666574" hosting pod "coredns-6d4b75cb6d-7n9sb" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-666574" has status "Ready":"False"
	I1104 11:45:22.348571   61220 pod_ready.go:82] duration metric: took 7.241886ms for pod "coredns-6d4b75cb6d-7n9sb" in "kube-system" namespace to be "Ready" ...
	E1104 11:45:22.348580   61220 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-666574" hosting pod "coredns-6d4b75cb6d-7n9sb" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-666574" has status "Ready":"False"
	I1104 11:45:22.348586   61220 pod_ready.go:79] waiting up to 4m0s for pod "etcd-test-preload-666574" in "kube-system" namespace to be "Ready" ...
	I1104 11:45:22.354421   61220 pod_ready.go:98] node "test-preload-666574" hosting pod "etcd-test-preload-666574" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-666574" has status "Ready":"False"
	I1104 11:45:22.354442   61220 pod_ready.go:82] duration metric: took 5.846489ms for pod "etcd-test-preload-666574" in "kube-system" namespace to be "Ready" ...
	E1104 11:45:22.354450   61220 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-666574" hosting pod "etcd-test-preload-666574" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-666574" has status "Ready":"False"
	I1104 11:45:22.354456   61220 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-test-preload-666574" in "kube-system" namespace to be "Ready" ...
	I1104 11:45:22.360361   61220 pod_ready.go:98] node "test-preload-666574" hosting pod "kube-apiserver-test-preload-666574" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-666574" has status "Ready":"False"
	I1104 11:45:22.360383   61220 pod_ready.go:82] duration metric: took 5.917727ms for pod "kube-apiserver-test-preload-666574" in "kube-system" namespace to be "Ready" ...
	E1104 11:45:22.360391   61220 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-666574" hosting pod "kube-apiserver-test-preload-666574" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-666574" has status "Ready":"False"
	I1104 11:45:22.360397   61220 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-test-preload-666574" in "kube-system" namespace to be "Ready" ...
	I1104 11:45:22.521719   61220 pod_ready.go:98] node "test-preload-666574" hosting pod "kube-controller-manager-test-preload-666574" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-666574" has status "Ready":"False"
	I1104 11:45:22.521745   61220 pod_ready.go:82] duration metric: took 161.338551ms for pod "kube-controller-manager-test-preload-666574" in "kube-system" namespace to be "Ready" ...
	E1104 11:45:22.521754   61220 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-666574" hosting pod "kube-controller-manager-test-preload-666574" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-666574" has status "Ready":"False"
	I1104 11:45:22.521760   61220 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-rrdvr" in "kube-system" namespace to be "Ready" ...
	I1104 11:45:22.920996   61220 pod_ready.go:98] node "test-preload-666574" hosting pod "kube-proxy-rrdvr" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-666574" has status "Ready":"False"
	I1104 11:45:22.921026   61220 pod_ready.go:82] duration metric: took 399.256969ms for pod "kube-proxy-rrdvr" in "kube-system" namespace to be "Ready" ...
	E1104 11:45:22.921037   61220 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-666574" hosting pod "kube-proxy-rrdvr" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-666574" has status "Ready":"False"
	I1104 11:45:22.921046   61220 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-test-preload-666574" in "kube-system" namespace to be "Ready" ...
	I1104 11:45:23.321081   61220 pod_ready.go:98] node "test-preload-666574" hosting pod "kube-scheduler-test-preload-666574" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-666574" has status "Ready":"False"
	I1104 11:45:23.321104   61220 pod_ready.go:82] duration metric: took 400.051215ms for pod "kube-scheduler-test-preload-666574" in "kube-system" namespace to be "Ready" ...
	E1104 11:45:23.321113   61220 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-666574" hosting pod "kube-scheduler-test-preload-666574" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-666574" has status "Ready":"False"
	I1104 11:45:23.321121   61220 pod_ready.go:39] duration metric: took 985.810751ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
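Note: every per-pod wait above is skipped for the same reason: the node has not yet reported Ready, so pod readiness is not meaningful. The condition those checks key off can be read directly (hypothetical manual check, not part of the test run):

    # Show the node's Ready condition that the pod_ready checks consult.
    kubectl --kubeconfig /home/jenkins/minikube-integration/19906-19898/kubeconfig \
      get node test-preload-666574 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}{"\n"}'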
	I1104 11:45:23.321138   61220 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1104 11:45:23.331795   61220 ops.go:34] apiserver oom_adj: -16
	I1104 11:45:23.331821   61220 kubeadm.go:597] duration metric: took 8.69796699s to restartPrimaryControlPlane
	I1104 11:45:23.331831   61220 kubeadm.go:394] duration metric: took 8.746408794s to StartCluster
	I1104 11:45:23.331849   61220 settings.go:142] acquiring lock: {Name:mk5550833c1f1a4ab4fbb2bf42ff8bd7f6341220 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 11:45:23.331921   61220 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19906-19898/kubeconfig
	I1104 11:45:23.332509   61220 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19906-19898/kubeconfig: {Name:mk164657c178e6abe086acb5c1cadb969cb2b0cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 11:45:23.332746   61220 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.248 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1104 11:45:23.332827   61220 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1104 11:45:23.332930   61220 addons.go:69] Setting storage-provisioner=true in profile "test-preload-666574"
	I1104 11:45:23.332949   61220 addons.go:234] Setting addon storage-provisioner=true in "test-preload-666574"
	W1104 11:45:23.332961   61220 addons.go:243] addon storage-provisioner should already be in state true
	I1104 11:45:23.332957   61220 addons.go:69] Setting default-storageclass=true in profile "test-preload-666574"
	I1104 11:45:23.332980   61220 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-666574"
	I1104 11:45:23.332984   61220 config.go:182] Loaded profile config "test-preload-666574": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I1104 11:45:23.332991   61220 host.go:66] Checking if "test-preload-666574" exists ...
	I1104 11:45:23.333385   61220 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 11:45:23.333421   61220 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 11:45:23.333496   61220 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 11:45:23.333547   61220 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 11:45:23.335381   61220 out.go:177] * Verifying Kubernetes components...
	I1104 11:45:23.336728   61220 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1104 11:45:23.348369   61220 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33985
	I1104 11:45:23.348874   61220 main.go:141] libmachine: () Calling .GetVersion
	I1104 11:45:23.349377   61220 main.go:141] libmachine: Using API Version  1
	I1104 11:45:23.349398   61220 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 11:45:23.349678   61220 main.go:141] libmachine: () Calling .GetMachineName
	I1104 11:45:23.350135   61220 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 11:45:23.350175   61220 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 11:45:23.352612   61220 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37445
	I1104 11:45:23.353007   61220 main.go:141] libmachine: () Calling .GetVersion
	I1104 11:45:23.353542   61220 main.go:141] libmachine: Using API Version  1
	I1104 11:45:23.353566   61220 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 11:45:23.353886   61220 main.go:141] libmachine: () Calling .GetMachineName
	I1104 11:45:23.354082   61220 main.go:141] libmachine: (test-preload-666574) Calling .GetState
	I1104 11:45:23.356593   61220 kapi.go:59] client config for test-preload-666574: &rest.Config{Host:"https://192.168.39.248:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19906-19898/.minikube/profiles/test-preload-666574/client.crt", KeyFile:"/home/jenkins/minikube-integration/19906-19898/.minikube/profiles/test-preload-666574/client.key", CAFile:"/home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2437da0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1104 11:45:23.356925   61220 addons.go:234] Setting addon default-storageclass=true in "test-preload-666574"
	W1104 11:45:23.356944   61220 addons.go:243] addon default-storageclass should already be in state true
	I1104 11:45:23.356975   61220 host.go:66] Checking if "test-preload-666574" exists ...
	I1104 11:45:23.357365   61220 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 11:45:23.357407   61220 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 11:45:23.366216   61220 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42325
	I1104 11:45:23.366590   61220 main.go:141] libmachine: () Calling .GetVersion
	I1104 11:45:23.367074   61220 main.go:141] libmachine: Using API Version  1
	I1104 11:45:23.367099   61220 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 11:45:23.367467   61220 main.go:141] libmachine: () Calling .GetMachineName
	I1104 11:45:23.367666   61220 main.go:141] libmachine: (test-preload-666574) Calling .GetState
	I1104 11:45:23.369334   61220 main.go:141] libmachine: (test-preload-666574) Calling .DriverName
	I1104 11:45:23.371690   61220 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1104 11:45:23.373123   61220 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1104 11:45:23.373144   61220 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1104 11:45:23.373163   61220 main.go:141] libmachine: (test-preload-666574) Calling .GetSSHHostname
	I1104 11:45:23.373640   61220 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40961
	I1104 11:45:23.374083   61220 main.go:141] libmachine: () Calling .GetVersion
	I1104 11:45:23.374582   61220 main.go:141] libmachine: Using API Version  1
	I1104 11:45:23.374604   61220 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 11:45:23.374970   61220 main.go:141] libmachine: () Calling .GetMachineName
	I1104 11:45:23.375533   61220 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 11:45:23.375577   61220 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 11:45:23.376670   61220 main.go:141] libmachine: (test-preload-666574) DBG | domain test-preload-666574 has defined MAC address 52:54:00:71:95:f8 in network mk-test-preload-666574
	I1104 11:45:23.377167   61220 main.go:141] libmachine: (test-preload-666574) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:95:f8", ip: ""} in network mk-test-preload-666574: {Iface:virbr1 ExpiryTime:2024-11-04 12:44:50 +0000 UTC Type:0 Mac:52:54:00:71:95:f8 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:test-preload-666574 Clientid:01:52:54:00:71:95:f8}
	I1104 11:45:23.377197   61220 main.go:141] libmachine: (test-preload-666574) DBG | domain test-preload-666574 has defined IP address 192.168.39.248 and MAC address 52:54:00:71:95:f8 in network mk-test-preload-666574
	I1104 11:45:23.377316   61220 main.go:141] libmachine: (test-preload-666574) Calling .GetSSHPort
	I1104 11:45:23.377505   61220 main.go:141] libmachine: (test-preload-666574) Calling .GetSSHKeyPath
	I1104 11:45:23.377711   61220 main.go:141] libmachine: (test-preload-666574) Calling .GetSSHUsername
	I1104 11:45:23.377859   61220 sshutil.go:53] new ssh client: &{IP:192.168.39.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/test-preload-666574/id_rsa Username:docker}
	I1104 11:45:23.420321   61220 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40907
	I1104 11:45:23.420887   61220 main.go:141] libmachine: () Calling .GetVersion
	I1104 11:45:23.421426   61220 main.go:141] libmachine: Using API Version  1
	I1104 11:45:23.421457   61220 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 11:45:23.421784   61220 main.go:141] libmachine: () Calling .GetMachineName
	I1104 11:45:23.422053   61220 main.go:141] libmachine: (test-preload-666574) Calling .GetState
	I1104 11:45:23.423819   61220 main.go:141] libmachine: (test-preload-666574) Calling .DriverName
	I1104 11:45:23.424009   61220 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1104 11:45:23.424022   61220 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1104 11:45:23.424035   61220 main.go:141] libmachine: (test-preload-666574) Calling .GetSSHHostname
	I1104 11:45:23.426942   61220 main.go:141] libmachine: (test-preload-666574) DBG | domain test-preload-666574 has defined MAC address 52:54:00:71:95:f8 in network mk-test-preload-666574
	I1104 11:45:23.427434   61220 main.go:141] libmachine: (test-preload-666574) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:95:f8", ip: ""} in network mk-test-preload-666574: {Iface:virbr1 ExpiryTime:2024-11-04 12:44:50 +0000 UTC Type:0 Mac:52:54:00:71:95:f8 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:test-preload-666574 Clientid:01:52:54:00:71:95:f8}
	I1104 11:45:23.427461   61220 main.go:141] libmachine: (test-preload-666574) DBG | domain test-preload-666574 has defined IP address 192.168.39.248 and MAC address 52:54:00:71:95:f8 in network mk-test-preload-666574
	I1104 11:45:23.427625   61220 main.go:141] libmachine: (test-preload-666574) Calling .GetSSHPort
	I1104 11:45:23.427779   61220 main.go:141] libmachine: (test-preload-666574) Calling .GetSSHKeyPath
	I1104 11:45:23.427949   61220 main.go:141] libmachine: (test-preload-666574) Calling .GetSSHUsername
	I1104 11:45:23.428081   61220 sshutil.go:53] new ssh client: &{IP:192.168.39.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/test-preload-666574/id_rsa Username:docker}
	I1104 11:45:23.522051   61220 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1104 11:45:23.538457   61220 node_ready.go:35] waiting up to 6m0s for node "test-preload-666574" to be "Ready" ...
	I1104 11:45:23.614248   61220 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1104 11:45:23.656082   61220 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1104 11:45:24.543203   61220 main.go:141] libmachine: Making call to close driver server
	I1104 11:45:24.543232   61220 main.go:141] libmachine: (test-preload-666574) Calling .Close
	I1104 11:45:24.543358   61220 main.go:141] libmachine: Making call to close driver server
	I1104 11:45:24.543378   61220 main.go:141] libmachine: (test-preload-666574) Calling .Close
	I1104 11:45:24.543552   61220 main.go:141] libmachine: Successfully made call to close driver server
	I1104 11:45:24.543563   61220 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 11:45:24.543571   61220 main.go:141] libmachine: Making call to close driver server
	I1104 11:45:24.543577   61220 main.go:141] libmachine: (test-preload-666574) Calling .Close
	I1104 11:45:24.543646   61220 main.go:141] libmachine: Successfully made call to close driver server
	I1104 11:45:24.543658   61220 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 11:45:24.543665   61220 main.go:141] libmachine: Making call to close driver server
	I1104 11:45:24.543672   61220 main.go:141] libmachine: (test-preload-666574) Calling .Close
	I1104 11:45:24.543954   61220 main.go:141] libmachine: (test-preload-666574) DBG | Closing plugin on server side
	I1104 11:45:24.543954   61220 main.go:141] libmachine: (test-preload-666574) DBG | Closing plugin on server side
	I1104 11:45:24.543971   61220 main.go:141] libmachine: Successfully made call to close driver server
	I1104 11:45:24.543983   61220 main.go:141] libmachine: Successfully made call to close driver server
	I1104 11:45:24.543991   61220 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 11:45:24.543998   61220 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 11:45:24.555836   61220 main.go:141] libmachine: Making call to close driver server
	I1104 11:45:24.555851   61220 main.go:141] libmachine: (test-preload-666574) Calling .Close
	I1104 11:45:24.556109   61220 main.go:141] libmachine: Successfully made call to close driver server
	I1104 11:45:24.556127   61220 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 11:45:24.557828   61220 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1104 11:45:24.559256   61220 addons.go:510] duration metric: took 1.22645598s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1104 11:45:25.542006   61220 node_ready.go:53] node "test-preload-666574" has status "Ready":"False"
	I1104 11:45:27.545534   61220 node_ready.go:53] node "test-preload-666574" has status "Ready":"False"
	I1104 11:45:30.042495   61220 node_ready.go:53] node "test-preload-666574" has status "Ready":"False"
	I1104 11:45:31.041684   61220 node_ready.go:49] node "test-preload-666574" has status "Ready":"True"
	I1104 11:45:31.041710   61220 node_ready.go:38] duration metric: took 7.503223834s for node "test-preload-666574" to be "Ready" ...
	I1104 11:45:31.041719   61220 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1104 11:45:31.046148   61220 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6d4b75cb6d-7n9sb" in "kube-system" namespace to be "Ready" ...
	I1104 11:45:31.049851   61220 pod_ready.go:93] pod "coredns-6d4b75cb6d-7n9sb" in "kube-system" namespace has status "Ready":"True"
	I1104 11:45:31.049867   61220 pod_ready.go:82] duration metric: took 3.698832ms for pod "coredns-6d4b75cb6d-7n9sb" in "kube-system" namespace to be "Ready" ...
	I1104 11:45:31.049875   61220 pod_ready.go:79] waiting up to 6m0s for pod "etcd-test-preload-666574" in "kube-system" namespace to be "Ready" ...
	I1104 11:45:32.056027   61220 pod_ready.go:93] pod "etcd-test-preload-666574" in "kube-system" namespace has status "Ready":"True"
	I1104 11:45:32.056047   61220 pod_ready.go:82] duration metric: took 1.006166299s for pod "etcd-test-preload-666574" in "kube-system" namespace to be "Ready" ...
	I1104 11:45:32.056055   61220 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-test-preload-666574" in "kube-system" namespace to be "Ready" ...
	I1104 11:45:32.060825   61220 pod_ready.go:93] pod "kube-apiserver-test-preload-666574" in "kube-system" namespace has status "Ready":"True"
	I1104 11:45:32.060848   61220 pod_ready.go:82] duration metric: took 4.785877ms for pod "kube-apiserver-test-preload-666574" in "kube-system" namespace to be "Ready" ...
	I1104 11:45:32.060856   61220 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-test-preload-666574" in "kube-system" namespace to be "Ready" ...
	I1104 11:45:32.064880   61220 pod_ready.go:93] pod "kube-controller-manager-test-preload-666574" in "kube-system" namespace has status "Ready":"True"
	I1104 11:45:32.064899   61220 pod_ready.go:82] duration metric: took 4.03653ms for pod "kube-controller-manager-test-preload-666574" in "kube-system" namespace to be "Ready" ...
	I1104 11:45:32.064906   61220 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-rrdvr" in "kube-system" namespace to be "Ready" ...
	I1104 11:45:32.242653   61220 pod_ready.go:93] pod "kube-proxy-rrdvr" in "kube-system" namespace has status "Ready":"True"
	I1104 11:45:32.242676   61220 pod_ready.go:82] duration metric: took 177.763223ms for pod "kube-proxy-rrdvr" in "kube-system" namespace to be "Ready" ...
	I1104 11:45:32.242685   61220 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-test-preload-666574" in "kube-system" namespace to be "Ready" ...
	I1104 11:45:32.642624   61220 pod_ready.go:93] pod "kube-scheduler-test-preload-666574" in "kube-system" namespace has status "Ready":"True"
	I1104 11:45:32.642647   61220 pod_ready.go:82] duration metric: took 399.955392ms for pod "kube-scheduler-test-preload-666574" in "kube-system" namespace to be "Ready" ...
	I1104 11:45:32.642656   61220 pod_ready.go:39] duration metric: took 1.600929769s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1104 11:45:32.642669   61220 api_server.go:52] waiting for apiserver process to appear ...
	I1104 11:45:32.642715   61220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 11:45:32.655921   61220 api_server.go:72] duration metric: took 9.323140996s to wait for apiserver process to appear ...
	I1104 11:45:32.655941   61220 api_server.go:88] waiting for apiserver healthz status ...
	I1104 11:45:32.655965   61220 api_server.go:253] Checking apiserver healthz at https://192.168.39.248:8443/healthz ...
	I1104 11:45:32.660760   61220 api_server.go:279] https://192.168.39.248:8443/healthz returned 200:
	ok
	I1104 11:45:32.661748   61220 api_server.go:141] control plane version: v1.24.4
	I1104 11:45:32.661766   61220 api_server.go:131] duration metric: took 5.818918ms to wait for apiserver health ...
	I1104 11:45:32.661773   61220 system_pods.go:43] waiting for kube-system pods to appear ...
	I1104 11:45:32.843951   61220 system_pods.go:59] 7 kube-system pods found
	I1104 11:45:32.843977   61220 system_pods.go:61] "coredns-6d4b75cb6d-7n9sb" [e07948dd-2010-421c-aa97-9e41e3294264] Running
	I1104 11:45:32.843981   61220 system_pods.go:61] "etcd-test-preload-666574" [9bcf1102-8cb3-4de0-b04b-11fc7323db3d] Running
	I1104 11:45:32.843985   61220 system_pods.go:61] "kube-apiserver-test-preload-666574" [d2eeff59-58c9-4395-a9cd-5a366b7ecd44] Running
	I1104 11:45:32.843994   61220 system_pods.go:61] "kube-controller-manager-test-preload-666574" [9048d84e-dfed-4c59-89ee-6c713e1dc281] Running
	I1104 11:45:32.843997   61220 system_pods.go:61] "kube-proxy-rrdvr" [7774b925-233c-4d77-a622-433cd96a582d] Running
	I1104 11:45:32.844000   61220 system_pods.go:61] "kube-scheduler-test-preload-666574" [6117bd71-f8a9-49f8-aed8-e652062f0e7e] Running
	I1104 11:45:32.844003   61220 system_pods.go:61] "storage-provisioner" [644163da-80a6-4ae9-a7b8-64076353f07d] Running
	I1104 11:45:32.844009   61220 system_pods.go:74] duration metric: took 182.230406ms to wait for pod list to return data ...
	I1104 11:45:32.844015   61220 default_sa.go:34] waiting for default service account to be created ...
	I1104 11:45:33.041975   61220 default_sa.go:45] found service account: "default"
	I1104 11:45:33.041997   61220 default_sa.go:55] duration metric: took 197.977422ms for default service account to be created ...
	I1104 11:45:33.042004   61220 system_pods.go:116] waiting for k8s-apps to be running ...
	I1104 11:45:33.245209   61220 system_pods.go:86] 7 kube-system pods found
	I1104 11:45:33.245249   61220 system_pods.go:89] "coredns-6d4b75cb6d-7n9sb" [e07948dd-2010-421c-aa97-9e41e3294264] Running
	I1104 11:45:33.245258   61220 system_pods.go:89] "etcd-test-preload-666574" [9bcf1102-8cb3-4de0-b04b-11fc7323db3d] Running
	I1104 11:45:33.245264   61220 system_pods.go:89] "kube-apiserver-test-preload-666574" [d2eeff59-58c9-4395-a9cd-5a366b7ecd44] Running
	I1104 11:45:33.245270   61220 system_pods.go:89] "kube-controller-manager-test-preload-666574" [9048d84e-dfed-4c59-89ee-6c713e1dc281] Running
	I1104 11:45:33.245275   61220 system_pods.go:89] "kube-proxy-rrdvr" [7774b925-233c-4d77-a622-433cd96a582d] Running
	I1104 11:45:33.245280   61220 system_pods.go:89] "kube-scheduler-test-preload-666574" [6117bd71-f8a9-49f8-aed8-e652062f0e7e] Running
	I1104 11:45:33.245284   61220 system_pods.go:89] "storage-provisioner" [644163da-80a6-4ae9-a7b8-64076353f07d] Running
	I1104 11:45:33.245294   61220 system_pods.go:126] duration metric: took 203.283607ms to wait for k8s-apps to be running ...
	I1104 11:45:33.245307   61220 system_svc.go:44] waiting for kubelet service to be running ....
	I1104 11:45:33.245356   61220 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1104 11:45:33.259155   61220 system_svc.go:56] duration metric: took 13.841214ms WaitForService to wait for kubelet
	I1104 11:45:33.259194   61220 kubeadm.go:582] duration metric: took 9.926407733s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1104 11:45:33.259216   61220 node_conditions.go:102] verifying NodePressure condition ...
	I1104 11:45:33.442184   61220 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1104 11:45:33.442212   61220 node_conditions.go:123] node cpu capacity is 2
	I1104 11:45:33.442225   61220 node_conditions.go:105] duration metric: took 183.003295ms to run NodePressure ...
	I1104 11:45:33.442239   61220 start.go:241] waiting for startup goroutines ...
	I1104 11:45:33.442249   61220 start.go:246] waiting for cluster config update ...
	I1104 11:45:33.442259   61220 start.go:255] writing updated cluster config ...
	I1104 11:45:33.442564   61220 ssh_runner.go:195] Run: rm -f paused
	I1104 11:45:33.487486   61220 start.go:600] kubectl: 1.31.2, cluster: 1.24.4 (minor skew: 7)
	I1104 11:45:33.489418   61220 out.go:201] 
	W1104 11:45:33.490752   61220 out.go:270] ! /usr/local/bin/kubectl is version 1.31.2, which may have incompatibilities with Kubernetes 1.24.4.
	I1104 11:45:33.492212   61220 out.go:177]   - Want kubectl v1.24.4? Try 'minikube kubectl -- get pods -A'
	I1104 11:45:33.493669   61220 out.go:177] * Done! kubectl is now configured to use "test-preload-666574" cluster and "default" namespace by default
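
The warning a few lines above records a large minor-version skew between the host kubectl (1.31.2) and the cluster under test (1.24.4), and the output itself suggests using the version-matched kubectl bundled with minikube. As a minimal illustration only (the profile name is taken from this log; the -p flag is standard minikube usage and was not part of the captured run):

    minikube kubectl -p test-preload-666574 -- get pods -A

This runs a kubectl client matching the cluster's control-plane version (v1.24.4 here) instead of the newer host binary.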
	
	
	==> CRI-O <==
	Nov 04 11:45:34 test-preload-666574 crio[662]: time="2024-11-04 11:45:34.359228144Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730720734359138496,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=64701a7d-20f5-4da5-a7bc-3672df882f5d name=/runtime.v1.ImageService/ImageFsInfo
	Nov 04 11:45:34 test-preload-666574 crio[662]: time="2024-11-04 11:45:34.359874697Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b30f496b-fba3-4592-b88c-98bdf2d97d7d name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 11:45:34 test-preload-666574 crio[662]: time="2024-11-04 11:45:34.359923663Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b30f496b-fba3-4592-b88c-98bdf2d97d7d name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 11:45:34 test-preload-666574 crio[662]: time="2024-11-04 11:45:34.360075170Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ee68b1f746037a81517a58a2ad8d54d1ebea5c56f7df49b6250e8c26702d170b,PodSandboxId:a1e6dc1b942f92a9364a2be4c9b3f63c33d93e3b49c9f791db922696e8071914,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1730720729057250166,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-7n9sb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e07948dd-2010-421c-aa97-9e41e3294264,},Annotations:map[string]string{io.kubernetes.container.hash: 84c3a29e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d111ec953f3c5e21a231124bf500f4266a02c4799988fc2dbeb3c13bc89a796,PodSandboxId:2799a3aa5458a51ee104c235fe1df926f672c554431ecbf80203cc7fbe544069,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730720721954958880,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 644163da-80a6-4ae9-a7b8-64076353f07d,},Annotations:map[string]string{io.kubernetes.container.hash: b4729340,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb9e05d27c7f99c33131bb23973e6a018cb036fa78dd7b258ae383364f104569,PodSandboxId:231430c9d0966cc35046196c59db301769d8096492075e7057ac25bc1ba386bb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1730720721665118712,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rrdvr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77
74b925-233c-4d77-a622-433cd96a582d,},Annotations:map[string]string{io.kubernetes.container.hash: 5e3f2f65,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b60b09a191f75bd8774dfc39ce30c39f6668ed8bfe26c9a53be7381b1c96ff0,PodSandboxId:5de6b0eeb076e3e5945f88949254910fd58b9eb081883889fd13558286807162,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1730720716645217339,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-666574,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c5d74e7c
a2c85b28c7a36c4a75bcaec,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74e972f7f27c890856ac8e954120583f829628c022cc0ed75b26855a051b40c9,PodSandboxId:d5bae07843decc76f375dfd27165c6a9adccf6edfd9cb08b14bbc1b333022fa4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1730720716676950241,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-666574,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddb12fe88676e1b7b9c29bd9a3ded423,},Annotations:map
[string]string{io.kubernetes.container.hash: 2e54a005,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11b92f94056de58166734b5c014cfce9e3df711f6ba603fbfbba03d5d7a827b0,PodSandboxId:74cf601c24e59ccfebbb1ff293173c116aa4c4ef1be0eebc5f25d79d9c857845,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1730720716679727480,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-666574,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f4722debba3f11922d694f65e77ae06,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 6c3dd6f3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c18cf3d54ce1a059e812c3c0dc2efb21be7e6526265178e12f14d7b2db9f400e,PodSandboxId:1ae4d2376a2e23c8d3d3f35623920b0271e4b17caf54a1f7ff0868d9a5f8319e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1730720716619750226,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-666574,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6b914663d2e88932b9b3cc3f36d5138,},Annotation
s:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b30f496b-fba3-4592-b88c-98bdf2d97d7d name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 11:45:34 test-preload-666574 crio[662]: time="2024-11-04 11:45:34.394103506Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d3175db3-2f14-4ca3-ba1c-6f5756826c0b name=/runtime.v1.RuntimeService/Version
	Nov 04 11:45:34 test-preload-666574 crio[662]: time="2024-11-04 11:45:34.394266376Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d3175db3-2f14-4ca3-ba1c-6f5756826c0b name=/runtime.v1.RuntimeService/Version
	Nov 04 11:45:34 test-preload-666574 crio[662]: time="2024-11-04 11:45:34.395590426Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5586dffc-8a83-4378-9052-544d76267b4c name=/runtime.v1.ImageService/ImageFsInfo
	Nov 04 11:45:34 test-preload-666574 crio[662]: time="2024-11-04 11:45:34.395994356Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730720734395973887,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5586dffc-8a83-4378-9052-544d76267b4c name=/runtime.v1.ImageService/ImageFsInfo
	Nov 04 11:45:34 test-preload-666574 crio[662]: time="2024-11-04 11:45:34.396548684Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b0b0a75e-3cad-4f81-b3cf-806496b14117 name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 11:45:34 test-preload-666574 crio[662]: time="2024-11-04 11:45:34.396599123Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b0b0a75e-3cad-4f81-b3cf-806496b14117 name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 11:45:34 test-preload-666574 crio[662]: time="2024-11-04 11:45:34.396785162Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ee68b1f746037a81517a58a2ad8d54d1ebea5c56f7df49b6250e8c26702d170b,PodSandboxId:a1e6dc1b942f92a9364a2be4c9b3f63c33d93e3b49c9f791db922696e8071914,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1730720729057250166,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-7n9sb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e07948dd-2010-421c-aa97-9e41e3294264,},Annotations:map[string]string{io.kubernetes.container.hash: 84c3a29e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d111ec953f3c5e21a231124bf500f4266a02c4799988fc2dbeb3c13bc89a796,PodSandboxId:2799a3aa5458a51ee104c235fe1df926f672c554431ecbf80203cc7fbe544069,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730720721954958880,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 644163da-80a6-4ae9-a7b8-64076353f07d,},Annotations:map[string]string{io.kubernetes.container.hash: b4729340,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb9e05d27c7f99c33131bb23973e6a018cb036fa78dd7b258ae383364f104569,PodSandboxId:231430c9d0966cc35046196c59db301769d8096492075e7057ac25bc1ba386bb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1730720721665118712,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rrdvr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77
74b925-233c-4d77-a622-433cd96a582d,},Annotations:map[string]string{io.kubernetes.container.hash: 5e3f2f65,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b60b09a191f75bd8774dfc39ce30c39f6668ed8bfe26c9a53be7381b1c96ff0,PodSandboxId:5de6b0eeb076e3e5945f88949254910fd58b9eb081883889fd13558286807162,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1730720716645217339,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-666574,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c5d74e7c
a2c85b28c7a36c4a75bcaec,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74e972f7f27c890856ac8e954120583f829628c022cc0ed75b26855a051b40c9,PodSandboxId:d5bae07843decc76f375dfd27165c6a9adccf6edfd9cb08b14bbc1b333022fa4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1730720716676950241,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-666574,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddb12fe88676e1b7b9c29bd9a3ded423,},Annotations:map
[string]string{io.kubernetes.container.hash: 2e54a005,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11b92f94056de58166734b5c014cfce9e3df711f6ba603fbfbba03d5d7a827b0,PodSandboxId:74cf601c24e59ccfebbb1ff293173c116aa4c4ef1be0eebc5f25d79d9c857845,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1730720716679727480,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-666574,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f4722debba3f11922d694f65e77ae06,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 6c3dd6f3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c18cf3d54ce1a059e812c3c0dc2efb21be7e6526265178e12f14d7b2db9f400e,PodSandboxId:1ae4d2376a2e23c8d3d3f35623920b0271e4b17caf54a1f7ff0868d9a5f8319e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1730720716619750226,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-666574,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6b914663d2e88932b9b3cc3f36d5138,},Annotation
s:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b0b0a75e-3cad-4f81-b3cf-806496b14117 name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 11:45:34 test-preload-666574 crio[662]: time="2024-11-04 11:45:34.434452265Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8d1fc73a-2f86-4b34-850c-58c6f3898cba name=/runtime.v1.RuntimeService/Version
	Nov 04 11:45:34 test-preload-666574 crio[662]: time="2024-11-04 11:45:34.434528336Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8d1fc73a-2f86-4b34-850c-58c6f3898cba name=/runtime.v1.RuntimeService/Version
	Nov 04 11:45:34 test-preload-666574 crio[662]: time="2024-11-04 11:45:34.435707683Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c776d8c5-c9bf-46af-9171-e393101295f8 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 04 11:45:34 test-preload-666574 crio[662]: time="2024-11-04 11:45:34.436117729Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730720734436095908,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c776d8c5-c9bf-46af-9171-e393101295f8 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 04 11:45:34 test-preload-666574 crio[662]: time="2024-11-04 11:45:34.436818517Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3fcb255f-7b41-47bc-ac21-3dd8fac0aa31 name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 11:45:34 test-preload-666574 crio[662]: time="2024-11-04 11:45:34.436878752Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3fcb255f-7b41-47bc-ac21-3dd8fac0aa31 name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 11:45:34 test-preload-666574 crio[662]: time="2024-11-04 11:45:34.437038742Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ee68b1f746037a81517a58a2ad8d54d1ebea5c56f7df49b6250e8c26702d170b,PodSandboxId:a1e6dc1b942f92a9364a2be4c9b3f63c33d93e3b49c9f791db922696e8071914,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1730720729057250166,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-7n9sb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e07948dd-2010-421c-aa97-9e41e3294264,},Annotations:map[string]string{io.kubernetes.container.hash: 84c3a29e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d111ec953f3c5e21a231124bf500f4266a02c4799988fc2dbeb3c13bc89a796,PodSandboxId:2799a3aa5458a51ee104c235fe1df926f672c554431ecbf80203cc7fbe544069,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730720721954958880,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 644163da-80a6-4ae9-a7b8-64076353f07d,},Annotations:map[string]string{io.kubernetes.container.hash: b4729340,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb9e05d27c7f99c33131bb23973e6a018cb036fa78dd7b258ae383364f104569,PodSandboxId:231430c9d0966cc35046196c59db301769d8096492075e7057ac25bc1ba386bb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1730720721665118712,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rrdvr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77
74b925-233c-4d77-a622-433cd96a582d,},Annotations:map[string]string{io.kubernetes.container.hash: 5e3f2f65,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b60b09a191f75bd8774dfc39ce30c39f6668ed8bfe26c9a53be7381b1c96ff0,PodSandboxId:5de6b0eeb076e3e5945f88949254910fd58b9eb081883889fd13558286807162,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1730720716645217339,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-666574,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c5d74e7c
a2c85b28c7a36c4a75bcaec,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74e972f7f27c890856ac8e954120583f829628c022cc0ed75b26855a051b40c9,PodSandboxId:d5bae07843decc76f375dfd27165c6a9adccf6edfd9cb08b14bbc1b333022fa4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1730720716676950241,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-666574,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddb12fe88676e1b7b9c29bd9a3ded423,},Annotations:map
[string]string{io.kubernetes.container.hash: 2e54a005,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11b92f94056de58166734b5c014cfce9e3df711f6ba603fbfbba03d5d7a827b0,PodSandboxId:74cf601c24e59ccfebbb1ff293173c116aa4c4ef1be0eebc5f25d79d9c857845,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1730720716679727480,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-666574,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f4722debba3f11922d694f65e77ae06,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 6c3dd6f3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c18cf3d54ce1a059e812c3c0dc2efb21be7e6526265178e12f14d7b2db9f400e,PodSandboxId:1ae4d2376a2e23c8d3d3f35623920b0271e4b17caf54a1f7ff0868d9a5f8319e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1730720716619750226,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-666574,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6b914663d2e88932b9b3cc3f36d5138,},Annotation
s:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3fcb255f-7b41-47bc-ac21-3dd8fac0aa31 name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 11:45:34 test-preload-666574 crio[662]: time="2024-11-04 11:45:34.470972850Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d6937d60-026f-4751-8079-85801a8dab83 name=/runtime.v1.RuntimeService/Version
	Nov 04 11:45:34 test-preload-666574 crio[662]: time="2024-11-04 11:45:34.471044698Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d6937d60-026f-4751-8079-85801a8dab83 name=/runtime.v1.RuntimeService/Version
	Nov 04 11:45:34 test-preload-666574 crio[662]: time="2024-11-04 11:45:34.472497785Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2176f5cb-0794-45c9-91f8-1d24195859fa name=/runtime.v1.ImageService/ImageFsInfo
	Nov 04 11:45:34 test-preload-666574 crio[662]: time="2024-11-04 11:45:34.472927028Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730720734472902954,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2176f5cb-0794-45c9-91f8-1d24195859fa name=/runtime.v1.ImageService/ImageFsInfo
	Nov 04 11:45:34 test-preload-666574 crio[662]: time="2024-11-04 11:45:34.473497825Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bf3f32e5-f739-4467-b603-441908617570 name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 11:45:34 test-preload-666574 crio[662]: time="2024-11-04 11:45:34.473550085Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bf3f32e5-f739-4467-b603-441908617570 name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 11:45:34 test-preload-666574 crio[662]: time="2024-11-04 11:45:34.473710517Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ee68b1f746037a81517a58a2ad8d54d1ebea5c56f7df49b6250e8c26702d170b,PodSandboxId:a1e6dc1b942f92a9364a2be4c9b3f63c33d93e3b49c9f791db922696e8071914,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1730720729057250166,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-7n9sb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e07948dd-2010-421c-aa97-9e41e3294264,},Annotations:map[string]string{io.kubernetes.container.hash: 84c3a29e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d111ec953f3c5e21a231124bf500f4266a02c4799988fc2dbeb3c13bc89a796,PodSandboxId:2799a3aa5458a51ee104c235fe1df926f672c554431ecbf80203cc7fbe544069,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730720721954958880,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 644163da-80a6-4ae9-a7b8-64076353f07d,},Annotations:map[string]string{io.kubernetes.container.hash: b4729340,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb9e05d27c7f99c33131bb23973e6a018cb036fa78dd7b258ae383364f104569,PodSandboxId:231430c9d0966cc35046196c59db301769d8096492075e7057ac25bc1ba386bb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1730720721665118712,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rrdvr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77
74b925-233c-4d77-a622-433cd96a582d,},Annotations:map[string]string{io.kubernetes.container.hash: 5e3f2f65,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b60b09a191f75bd8774dfc39ce30c39f6668ed8bfe26c9a53be7381b1c96ff0,PodSandboxId:5de6b0eeb076e3e5945f88949254910fd58b9eb081883889fd13558286807162,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1730720716645217339,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-666574,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c5d74e7c
a2c85b28c7a36c4a75bcaec,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74e972f7f27c890856ac8e954120583f829628c022cc0ed75b26855a051b40c9,PodSandboxId:d5bae07843decc76f375dfd27165c6a9adccf6edfd9cb08b14bbc1b333022fa4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1730720716676950241,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-666574,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddb12fe88676e1b7b9c29bd9a3ded423,},Annotations:map
[string]string{io.kubernetes.container.hash: 2e54a005,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11b92f94056de58166734b5c014cfce9e3df711f6ba603fbfbba03d5d7a827b0,PodSandboxId:74cf601c24e59ccfebbb1ff293173c116aa4c4ef1be0eebc5f25d79d9c857845,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1730720716679727480,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-666574,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f4722debba3f11922d694f65e77ae06,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 6c3dd6f3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c18cf3d54ce1a059e812c3c0dc2efb21be7e6526265178e12f14d7b2db9f400e,PodSandboxId:1ae4d2376a2e23c8d3d3f35623920b0271e4b17caf54a1f7ff0868d9a5f8319e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1730720716619750226,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-666574,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6b914663d2e88932b9b3cc3f36d5138,},Annotation
s:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bf3f32e5-f739-4467-b603-441908617570 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	ee68b1f746037       a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03   5 seconds ago       Running             coredns                   1                   a1e6dc1b942f9       coredns-6d4b75cb6d-7n9sb
	7d111ec953f3c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   12 seconds ago      Running             storage-provisioner       1                   2799a3aa5458a       storage-provisioner
	cb9e05d27c7f9       7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7   12 seconds ago      Running             kube-proxy                1                   231430c9d0966       kube-proxy-rrdvr
	11b92f94056de       6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d   17 seconds ago      Running             kube-apiserver            1                   74cf601c24e59       kube-apiserver-test-preload-666574
	74e972f7f27c8       aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b   17 seconds ago      Running             etcd                      1                   d5bae07843dec       etcd-test-preload-666574
	1b60b09a191f7       03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9   17 seconds ago      Running             kube-scheduler            1                   5de6b0eeb076e       kube-scheduler-test-preload-666574
	c18cf3d54ce1a       1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48   17 seconds ago      Running             kube-controller-manager   1                   1ae4d2376a2e2       kube-controller-manager-test-preload-666574
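
The table above shows every control-plane and addon container running with ATTEMPT 1, i.e. each was restarted once by the preload restart. For manual debugging, a roughly equivalent listing can usually be obtained from inside the node itself; the commands below are a generic sketch (assuming the profile name from this log and the crictl tool shipped in the minikube guest image), not part of the captured run:

    minikube ssh -p test-preload-666574
    sudo crictl ps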
	
	
	==> coredns [ee68b1f746037a81517a58a2ad8d54d1ebea5c56f7df49b6250e8c26702d170b] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = bbeeddb09682f41960fef01b05cb3a3d
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] 127.0.0.1:38672 - 23298 "HINFO IN 8583217011892210098.6212038310575182709. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.024248738s
	
	
	==> describe nodes <==
	Name:               test-preload-666574
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-666574
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c03dd974d73f8853a1a57928c124797a5ae24dc4
	                    minikube.k8s.io/name=test-preload-666574
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_11_04T11_43_58_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 04 Nov 2024 11:43:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-666574
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 04 Nov 2024 11:45:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 04 Nov 2024 11:45:30 +0000   Mon, 04 Nov 2024 11:43:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 04 Nov 2024 11:45:30 +0000   Mon, 04 Nov 2024 11:43:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 04 Nov 2024 11:45:30 +0000   Mon, 04 Nov 2024 11:43:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 04 Nov 2024 11:45:30 +0000   Mon, 04 Nov 2024 11:45:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.248
	  Hostname:    test-preload-666574
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 6b31418f91b54b9a88cb07cd4926044d
	  System UUID:                6b31418f-91b5-4b9a-88cb-07cd4926044d
	  Boot ID:                    6b381696-db42-4e93-9262-7d6dc083aad1
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.24.4
	  Kube-Proxy Version:         v1.24.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-7n9sb                       100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     83s
	  kube-system                 etcd-test-preload-666574                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         95s
	  kube-system                 kube-apiserver-test-preload-666574             250m (12%)    0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 kube-controller-manager-test-preload-666574    200m (10%)    0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 kube-proxy-rrdvr                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         83s
	  kube-system                 kube-scheduler-test-preload-666574             100m (5%)     0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         82s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 12s                kube-proxy       
	  Normal  Starting                 82s                kube-proxy       
	  Normal  Starting                 96s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  96s                kubelet          Node test-preload-666574 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    96s                kubelet          Node test-preload-666574 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     96s                kubelet          Node test-preload-666574 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  96s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                86s                kubelet          Node test-preload-666574 status is now: NodeReady
	  Normal  RegisteredNode           84s                node-controller  Node test-preload-666574 event: Registered Node test-preload-666574 in Controller
	  Normal  Starting                 19s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  19s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  18s (x8 over 19s)  kubelet          Node test-preload-666574 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18s (x8 over 19s)  kubelet          Node test-preload-666574 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18s (x7 over 19s)  kubelet          Node test-preload-666574 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           1s                 node-controller  Node test-preload-666574 event: Registered Node test-preload-666574 in Controller
	
	
	==> dmesg <==
	[Nov 4 11:44] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.047323] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.035034] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.777528] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.845981] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +1.507642] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000010] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Nov 4 11:45] systemd-fstab-generator[584]: Ignoring "noauto" option for root device
	[  +0.058236] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.056612] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.165837] systemd-fstab-generator[610]: Ignoring "noauto" option for root device
	[  +0.138651] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.264386] systemd-fstab-generator[652]: Ignoring "noauto" option for root device
	[ +12.864321] systemd-fstab-generator[981]: Ignoring "noauto" option for root device
	[  +0.054346] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.712022] systemd-fstab-generator[1110]: Ignoring "noauto" option for root device
	[  +5.161989] kauditd_printk_skb: 105 callbacks suppressed
	[  +2.427464] systemd-fstab-generator[1730]: Ignoring "noauto" option for root device
	[  +5.507643] kauditd_printk_skb: 53 callbacks suppressed
	
	
	==> etcd [74e972f7f27c890856ac8e954120583f829628c022cc0ed75b26855a051b40c9] <==
	{"level":"info","ts":"2024-11-04T11:45:16.904Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"1aa4f7d85b49255a","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-11-04T11:45:16.905Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-11-04T11:45:16.905Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1aa4f7d85b49255a switched to configuration voters=(1919931849783190874)"}
	{"level":"info","ts":"2024-11-04T11:45:16.907Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ffc6a57a6de49e73","local-member-id":"1aa4f7d85b49255a","added-peer-id":"1aa4f7d85b49255a","added-peer-peer-urls":["https://192.168.39.248:2380"]}
	{"level":"info","ts":"2024-11-04T11:45:16.907Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ffc6a57a6de49e73","local-member-id":"1aa4f7d85b49255a","cluster-version":"3.5"}
	{"level":"info","ts":"2024-11-04T11:45:16.907Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-11-04T11:45:16.909Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-11-04T11:45:16.909Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"1aa4f7d85b49255a","initial-advertise-peer-urls":["https://192.168.39.248:2380"],"listen-peer-urls":["https://192.168.39.248:2380"],"advertise-client-urls":["https://192.168.39.248:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.248:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-11-04T11:45:16.909Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-11-04T11:45:16.910Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.248:2380"}
	{"level":"info","ts":"2024-11-04T11:45:16.910Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.248:2380"}
	{"level":"info","ts":"2024-11-04T11:45:18.282Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1aa4f7d85b49255a is starting a new election at term 2"}
	{"level":"info","ts":"2024-11-04T11:45:18.282Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1aa4f7d85b49255a became pre-candidate at term 2"}
	{"level":"info","ts":"2024-11-04T11:45:18.282Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1aa4f7d85b49255a received MsgPreVoteResp from 1aa4f7d85b49255a at term 2"}
	{"level":"info","ts":"2024-11-04T11:45:18.282Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1aa4f7d85b49255a became candidate at term 3"}
	{"level":"info","ts":"2024-11-04T11:45:18.282Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1aa4f7d85b49255a received MsgVoteResp from 1aa4f7d85b49255a at term 3"}
	{"level":"info","ts":"2024-11-04T11:45:18.282Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1aa4f7d85b49255a became leader at term 3"}
	{"level":"info","ts":"2024-11-04T11:45:18.282Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 1aa4f7d85b49255a elected leader 1aa4f7d85b49255a at term 3"}
	{"level":"info","ts":"2024-11-04T11:45:18.288Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"1aa4f7d85b49255a","local-member-attributes":"{Name:test-preload-666574 ClientURLs:[https://192.168.39.248:2379]}","request-path":"/0/members/1aa4f7d85b49255a/attributes","cluster-id":"ffc6a57a6de49e73","publish-timeout":"7s"}
	{"level":"info","ts":"2024-11-04T11:45:18.289Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-11-04T11:45:18.289Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-11-04T11:45:18.290Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-11-04T11:45:18.291Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-11-04T11:45:18.291Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-11-04T11:45:18.298Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.248:2379"}
	
	
	==> kernel <==
	 11:45:34 up 0 min,  0 users,  load average: 0.88, 0.25, 0.08
	Linux test-preload-666574 5.10.207 #1 SMP Wed Oct 30 13:38:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [11b92f94056de58166734b5c014cfce9e3df711f6ba603fbfbba03d5d7a827b0] <==
	I1104 11:45:20.538953       1 controller.go:85] Starting OpenAPI V3 controller
	I1104 11:45:20.538999       1 naming_controller.go:291] Starting NamingConditionController
	I1104 11:45:20.539697       1 establishing_controller.go:76] Starting EstablishingController
	I1104 11:45:20.539713       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I1104 11:45:20.539726       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I1104 11:45:20.539734       1 crd_finalizer.go:266] Starting CRDFinalizer
	E1104 11:45:20.640428       1 controller.go:169] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I1104 11:45:20.657970       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1104 11:45:20.660049       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1104 11:45:20.696568       1 shared_informer.go:262] Caches are synced for node_authorizer
	I1104 11:45:20.697758       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I1104 11:45:20.728349       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I1104 11:45:20.728364       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I1104 11:45:20.728495       1 cache.go:39] Caches are synced for autoregister controller
	I1104 11:45:20.745200       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I1104 11:45:21.198753       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1104 11:45:21.536613       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1104 11:45:21.981993       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I1104 11:45:22.210755       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I1104 11:45:22.225001       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I1104 11:45:22.265658       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I1104 11:45:22.281535       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1104 11:45:22.288005       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1104 11:45:33.732974       1 controller.go:611] quota admission added evaluator for: endpoints
	I1104 11:45:33.808821       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [c18cf3d54ce1a059e812c3c0dc2efb21be7e6526265178e12f14d7b2db9f400e] <==
	I1104 11:45:33.779627       1 shared_informer.go:262] Caches are synced for TTL after finished
	I1104 11:45:33.781923       1 shared_informer.go:262] Caches are synced for bootstrap_signer
	I1104 11:45:33.786735       1 shared_informer.go:262] Caches are synced for ephemeral
	I1104 11:45:33.787989       1 shared_informer.go:262] Caches are synced for ReplicationController
	I1104 11:45:33.789248       1 shared_informer.go:262] Caches are synced for daemon sets
	I1104 11:45:33.792824       1 shared_informer.go:262] Caches are synced for TTL
	I1104 11:45:33.792957       1 shared_informer.go:262] Caches are synced for certificate-csrapproving
	I1104 11:45:33.794214       1 shared_informer.go:262] Caches are synced for GC
	I1104 11:45:33.795316       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-serving
	I1104 11:45:33.796460       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I1104 11:45:33.798739       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I1104 11:45:33.823015       1 shared_informer.go:262] Caches are synced for stateful set
	I1104 11:45:33.830581       1 shared_informer.go:262] Caches are synced for ClusterRoleAggregator
	I1104 11:45:33.876209       1 shared_informer.go:262] Caches are synced for persistent volume
	I1104 11:45:33.878710       1 shared_informer.go:262] Caches are synced for attach detach
	I1104 11:45:33.894318       1 shared_informer.go:262] Caches are synced for PV protection
	I1104 11:45:33.928136       1 shared_informer.go:262] Caches are synced for expand
	I1104 11:45:33.933476       1 shared_informer.go:262] Caches are synced for resource quota
	I1104 11:45:33.943793       1 shared_informer.go:262] Caches are synced for disruption
	I1104 11:45:33.943808       1 disruption.go:371] Sending events to api server.
	I1104 11:45:33.977989       1 shared_informer.go:262] Caches are synced for resource quota
	I1104 11:45:33.991620       1 shared_informer.go:262] Caches are synced for deployment
	I1104 11:45:34.419088       1 shared_informer.go:262] Caches are synced for garbage collector
	I1104 11:45:34.419116       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1104 11:45:34.423743       1 shared_informer.go:262] Caches are synced for garbage collector
	
	
	==> kube-proxy [cb9e05d27c7f99c33131bb23973e6a018cb036fa78dd7b258ae383364f104569] <==
	I1104 11:45:21.905265       1 node.go:163] Successfully retrieved node IP: 192.168.39.248
	I1104 11:45:21.909713       1 server_others.go:138] "Detected node IP" address="192.168.39.248"
	I1104 11:45:21.911317       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I1104 11:45:21.972346       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I1104 11:45:21.972445       1 server_others.go:206] "Using iptables Proxier"
	I1104 11:45:21.973280       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I1104 11:45:21.974498       1 server.go:661] "Version info" version="v1.24.4"
	I1104 11:45:21.974545       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1104 11:45:21.976009       1 config.go:317] "Starting service config controller"
	I1104 11:45:21.976629       1 shared_informer.go:255] Waiting for caches to sync for service config
	I1104 11:45:21.976695       1 config.go:226] "Starting endpoint slice config controller"
	I1104 11:45:21.976713       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I1104 11:45:21.977631       1 config.go:444] "Starting node config controller"
	I1104 11:45:21.978040       1 shared_informer.go:255] Waiting for caches to sync for node config
	I1104 11:45:22.077749       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I1104 11:45:22.077795       1 shared_informer.go:262] Caches are synced for service config
	I1104 11:45:22.078255       1 shared_informer.go:262] Caches are synced for node config
	
	
	==> kube-scheduler [1b60b09a191f75bd8774dfc39ce30c39f6668ed8bfe26c9a53be7381b1c96ff0] <==
	I1104 11:45:17.429003       1 serving.go:348] Generated self-signed cert in-memory
	W1104 11:45:20.613219       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1104 11:45:20.613415       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1104 11:45:20.613444       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1104 11:45:20.613485       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1104 11:45:20.649897       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.4"
	I1104 11:45:20.650013       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1104 11:45:20.656803       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1104 11:45:20.657248       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1104 11:45:20.658389       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1104 11:45:20.657849       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1104 11:45:20.759407       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 04 11:45:20 test-preload-666574 kubelet[1117]: I1104 11:45:20.949620    1117 apiserver.go:52] "Watching apiserver"
	Nov 04 11:45:20 test-preload-666574 kubelet[1117]: I1104 11:45:20.958289    1117 topology_manager.go:200] "Topology Admit Handler"
	Nov 04 11:45:20 test-preload-666574 kubelet[1117]: I1104 11:45:20.958375    1117 topology_manager.go:200] "Topology Admit Handler"
	Nov 04 11:45:20 test-preload-666574 kubelet[1117]: I1104 11:45:20.958406    1117 topology_manager.go:200] "Topology Admit Handler"
	Nov 04 11:45:20 test-preload-666574 kubelet[1117]: E1104 11:45:20.961840    1117 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-7n9sb" podUID=e07948dd-2010-421c-aa97-9e41e3294264
	Nov 04 11:45:20 test-preload-666574 kubelet[1117]: E1104 11:45:20.984919    1117 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Nov 04 11:45:21 test-preload-666574 kubelet[1117]: I1104 11:45:21.084751    1117 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e07948dd-2010-421c-aa97-9e41e3294264-config-volume\") pod \"coredns-6d4b75cb6d-7n9sb\" (UID: \"e07948dd-2010-421c-aa97-9e41e3294264\") " pod="kube-system/coredns-6d4b75cb6d-7n9sb"
	Nov 04 11:45:21 test-preload-666574 kubelet[1117]: I1104 11:45:21.084878    1117 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7774b925-233c-4d77-a622-433cd96a582d-xtables-lock\") pod \"kube-proxy-rrdvr\" (UID: \"7774b925-233c-4d77-a622-433cd96a582d\") " pod="kube-system/kube-proxy-rrdvr"
	Nov 04 11:45:21 test-preload-666574 kubelet[1117]: I1104 11:45:21.084950    1117 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-br25w\" (UniqueName: \"kubernetes.io/projected/e07948dd-2010-421c-aa97-9e41e3294264-kube-api-access-br25w\") pod \"coredns-6d4b75cb6d-7n9sb\" (UID: \"e07948dd-2010-421c-aa97-9e41e3294264\") " pod="kube-system/coredns-6d4b75cb6d-7n9sb"
	Nov 04 11:45:21 test-preload-666574 kubelet[1117]: I1104 11:45:21.085007    1117 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7774b925-233c-4d77-a622-433cd96a582d-lib-modules\") pod \"kube-proxy-rrdvr\" (UID: \"7774b925-233c-4d77-a622-433cd96a582d\") " pod="kube-system/kube-proxy-rrdvr"
	Nov 04 11:45:21 test-preload-666574 kubelet[1117]: I1104 11:45:21.085060    1117 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d49ld\" (UniqueName: \"kubernetes.io/projected/644163da-80a6-4ae9-a7b8-64076353f07d-kube-api-access-d49ld\") pod \"storage-provisioner\" (UID: \"644163da-80a6-4ae9-a7b8-64076353f07d\") " pod="kube-system/storage-provisioner"
	Nov 04 11:45:21 test-preload-666574 kubelet[1117]: I1104 11:45:21.085282    1117 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7774b925-233c-4d77-a622-433cd96a582d-kube-proxy\") pod \"kube-proxy-rrdvr\" (UID: \"7774b925-233c-4d77-a622-433cd96a582d\") " pod="kube-system/kube-proxy-rrdvr"
	Nov 04 11:45:21 test-preload-666574 kubelet[1117]: I1104 11:45:21.085392    1117 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shw47\" (UniqueName: \"kubernetes.io/projected/7774b925-233c-4d77-a622-433cd96a582d-kube-api-access-shw47\") pod \"kube-proxy-rrdvr\" (UID: \"7774b925-233c-4d77-a622-433cd96a582d\") " pod="kube-system/kube-proxy-rrdvr"
	Nov 04 11:45:21 test-preload-666574 kubelet[1117]: I1104 11:45:21.085450    1117 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/644163da-80a6-4ae9-a7b8-64076353f07d-tmp\") pod \"storage-provisioner\" (UID: \"644163da-80a6-4ae9-a7b8-64076353f07d\") " pod="kube-system/storage-provisioner"
	Nov 04 11:45:21 test-preload-666574 kubelet[1117]: I1104 11:45:21.085477    1117 reconciler.go:159] "Reconciler: start to sync state"
	Nov 04 11:45:21 test-preload-666574 kubelet[1117]: E1104 11:45:21.189778    1117 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Nov 04 11:45:21 test-preload-666574 kubelet[1117]: E1104 11:45:21.189886    1117 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/e07948dd-2010-421c-aa97-9e41e3294264-config-volume podName:e07948dd-2010-421c-aa97-9e41e3294264 nodeName:}" failed. No retries permitted until 2024-11-04 11:45:21.689863472 +0000 UTC m=+5.846643384 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e07948dd-2010-421c-aa97-9e41e3294264-config-volume") pod "coredns-6d4b75cb6d-7n9sb" (UID: "e07948dd-2010-421c-aa97-9e41e3294264") : object "kube-system"/"coredns" not registered
	Nov 04 11:45:21 test-preload-666574 kubelet[1117]: E1104 11:45:21.694407    1117 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Nov 04 11:45:21 test-preload-666574 kubelet[1117]: E1104 11:45:21.694469    1117 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/e07948dd-2010-421c-aa97-9e41e3294264-config-volume podName:e07948dd-2010-421c-aa97-9e41e3294264 nodeName:}" failed. No retries permitted until 2024-11-04 11:45:22.694455999 +0000 UTC m=+6.851235897 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e07948dd-2010-421c-aa97-9e41e3294264-config-volume") pod "coredns-6d4b75cb6d-7n9sb" (UID: "e07948dd-2010-421c-aa97-9e41e3294264") : object "kube-system"/"coredns" not registered
	Nov 04 11:45:22 test-preload-666574 kubelet[1117]: E1104 11:45:22.701619    1117 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Nov 04 11:45:22 test-preload-666574 kubelet[1117]: E1104 11:45:22.702226    1117 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/e07948dd-2010-421c-aa97-9e41e3294264-config-volume podName:e07948dd-2010-421c-aa97-9e41e3294264 nodeName:}" failed. No retries permitted until 2024-11-04 11:45:24.702196956 +0000 UTC m=+8.858976878 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e07948dd-2010-421c-aa97-9e41e3294264-config-volume") pod "coredns-6d4b75cb6d-7n9sb" (UID: "e07948dd-2010-421c-aa97-9e41e3294264") : object "kube-system"/"coredns" not registered
	Nov 04 11:45:23 test-preload-666574 kubelet[1117]: E1104 11:45:23.038238    1117 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-7n9sb" podUID=e07948dd-2010-421c-aa97-9e41e3294264
	Nov 04 11:45:24 test-preload-666574 kubelet[1117]: E1104 11:45:24.721467    1117 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Nov 04 11:45:24 test-preload-666574 kubelet[1117]: E1104 11:45:24.721964    1117 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/e07948dd-2010-421c-aa97-9e41e3294264-config-volume podName:e07948dd-2010-421c-aa97-9e41e3294264 nodeName:}" failed. No retries permitted until 2024-11-04 11:45:28.721940321 +0000 UTC m=+12.878720220 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e07948dd-2010-421c-aa97-9e41e3294264-config-volume") pod "coredns-6d4b75cb6d-7n9sb" (UID: "e07948dd-2010-421c-aa97-9e41e3294264") : object "kube-system"/"coredns" not registered
	Nov 04 11:45:25 test-preload-666574 kubelet[1117]: E1104 11:45:25.038828    1117 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-7n9sb" podUID=e07948dd-2010-421c-aa97-9e41e3294264
	
	
	==> storage-provisioner [7d111ec953f3c5e21a231124bf500f4266a02c4799988fc2dbeb3c13bc89a796] <==
	I1104 11:45:22.045924       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-666574 -n test-preload-666574
helpers_test.go:261: (dbg) Run:  kubectl --context test-preload-666574 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-666574" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-666574
--- FAIL: TestPreload (168.80s)

                                                
                                    
TestKubernetesUpgrade (391.99s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-313751 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-313751 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (4m48.661204018s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-313751] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19906
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19906-19898/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19906-19898/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-313751" primary control-plane node in "kubernetes-upgrade-313751" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1104 11:47:31.510317   62682 out.go:345] Setting OutFile to fd 1 ...
	I1104 11:47:31.510461   62682 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1104 11:47:31.510473   62682 out.go:358] Setting ErrFile to fd 2...
	I1104 11:47:31.510479   62682 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1104 11:47:31.510803   62682 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19906-19898/.minikube/bin
	I1104 11:47:31.511726   62682 out.go:352] Setting JSON to false
	I1104 11:47:31.514038   62682 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":9002,"bootTime":1730711849,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1104 11:47:31.514144   62682 start.go:139] virtualization: kvm guest
	I1104 11:47:31.516354   62682 out.go:177] * [kubernetes-upgrade-313751] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1104 11:47:31.517970   62682 notify.go:220] Checking for updates...
	I1104 11:47:31.518777   62682 out.go:177]   - MINIKUBE_LOCATION=19906
	I1104 11:47:31.521519   62682 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1104 11:47:31.524169   62682 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19906-19898/kubeconfig
	I1104 11:47:31.528099   62682 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19906-19898/.minikube
	I1104 11:47:31.530892   62682 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1104 11:47:31.533637   62682 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1104 11:47:31.534998   62682 driver.go:394] Setting default libvirt URI to qemu:///system
	I1104 11:47:31.576157   62682 out.go:177] * Using the kvm2 driver based on user configuration
	I1104 11:47:31.577601   62682 start.go:297] selected driver: kvm2
	I1104 11:47:31.577624   62682 start.go:901] validating driver "kvm2" against <nil>
	I1104 11:47:31.577639   62682 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1104 11:47:31.578780   62682 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1104 11:47:31.597346   62682 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19906-19898/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1104 11:47:31.620622   62682 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1104 11:47:31.620690   62682 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1104 11:47:31.620954   62682 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1104 11:47:31.620982   62682 cni.go:84] Creating CNI manager for ""
	I1104 11:47:31.621020   62682 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1104 11:47:31.621028   62682 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1104 11:47:31.621085   62682 start.go:340] cluster config:
	{Name:kubernetes-upgrade-313751 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-313751 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.
local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1104 11:47:31.621255   62682 iso.go:125] acquiring lock: {Name:mk00c7dd6e02d348844f079f7574057e15cae010 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1104 11:47:31.623050   62682 out.go:177] * Starting "kubernetes-upgrade-313751" primary control-plane node in "kubernetes-upgrade-313751" cluster
	I1104 11:47:31.624552   62682 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1104 11:47:31.624598   62682 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1104 11:47:31.624608   62682 cache.go:56] Caching tarball of preloaded images
	I1104 11:47:31.624699   62682 preload.go:172] Found /home/jenkins/minikube-integration/19906-19898/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1104 11:47:31.624711   62682 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1104 11:47:31.625133   62682 profile.go:143] Saving config to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/kubernetes-upgrade-313751/config.json ...
	I1104 11:47:31.625165   62682 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/kubernetes-upgrade-313751/config.json: {Name:mk34763a01a5397dcdb014e7584f7fbdf21cb5de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 11:47:31.625359   62682 start.go:360] acquireMachinesLock for kubernetes-upgrade-313751: {Name:mka4ed965095cfb58c9b0f5916edff39b4236a2f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1104 11:47:52.837844   62682 start.go:364] duration metric: took 21.212438244s to acquireMachinesLock for "kubernetes-upgrade-313751"
	I1104 11:47:52.837912   62682 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-313751 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-313751 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1104 11:47:52.838029   62682 start.go:125] createHost starting for "" (driver="kvm2")
	I1104 11:47:52.840461   62682 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1104 11:47:52.840691   62682 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 11:47:52.840729   62682 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 11:47:52.857212   62682 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41659
	I1104 11:47:52.857599   62682 main.go:141] libmachine: () Calling .GetVersion
	I1104 11:47:52.858154   62682 main.go:141] libmachine: Using API Version  1
	I1104 11:47:52.858173   62682 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 11:47:52.858513   62682 main.go:141] libmachine: () Calling .GetMachineName
	I1104 11:47:52.858667   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) Calling .GetMachineName
	I1104 11:47:52.858836   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) Calling .DriverName
	I1104 11:47:52.859004   62682 start.go:159] libmachine.API.Create for "kubernetes-upgrade-313751" (driver="kvm2")
	I1104 11:47:52.859034   62682 client.go:168] LocalClient.Create starting
	I1104 11:47:52.859068   62682 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem
	I1104 11:47:52.859250   62682 main.go:141] libmachine: Decoding PEM data...
	I1104 11:47:52.859279   62682 main.go:141] libmachine: Parsing certificate...
	I1104 11:47:52.859350   62682 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem
	I1104 11:47:52.859385   62682 main.go:141] libmachine: Decoding PEM data...
	I1104 11:47:52.859401   62682 main.go:141] libmachine: Parsing certificate...
	I1104 11:47:52.859424   62682 main.go:141] libmachine: Running pre-create checks...
	I1104 11:47:52.859434   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) Calling .PreCreateCheck
	I1104 11:47:52.859748   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) Calling .GetConfigRaw
	I1104 11:47:52.860214   62682 main.go:141] libmachine: Creating machine...
	I1104 11:47:52.860232   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) Calling .Create
	I1104 11:47:52.860349   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) Creating KVM machine...
	I1104 11:47:52.861523   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) DBG | found existing default KVM network
	I1104 11:47:52.862300   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) DBG | I1104 11:47:52.862157   62984 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:b4:f6:dd} reservation:<nil>}
	I1104 11:47:52.862933   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) DBG | I1104 11:47:52.862862   62984 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002483e0}
	I1104 11:47:52.862977   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) DBG | created network xml: 
	I1104 11:47:52.862995   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) DBG | <network>
	I1104 11:47:52.863006   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) DBG |   <name>mk-kubernetes-upgrade-313751</name>
	I1104 11:47:52.863023   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) DBG |   <dns enable='no'/>
	I1104 11:47:52.863037   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) DBG |   
	I1104 11:47:52.863049   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I1104 11:47:52.863068   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) DBG |     <dhcp>
	I1104 11:47:52.863080   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I1104 11:47:52.863091   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) DBG |     </dhcp>
	I1104 11:47:52.863107   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) DBG |   </ip>
	I1104 11:47:52.863167   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) DBG |   
	I1104 11:47:52.863193   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) DBG | </network>
	I1104 11:47:52.863209   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) DBG | 
	I1104 11:47:52.868268   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) DBG | trying to create private KVM network mk-kubernetes-upgrade-313751 192.168.50.0/24...
	I1104 11:47:52.940066   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) DBG | private KVM network mk-kubernetes-upgrade-313751 192.168.50.0/24 created
	I1104 11:47:52.940107   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) DBG | I1104 11:47:52.940013   62984 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19906-19898/.minikube
	I1104 11:47:52.940119   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) Setting up store path in /home/jenkins/minikube-integration/19906-19898/.minikube/machines/kubernetes-upgrade-313751 ...
	I1104 11:47:52.940137   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) Building disk image from file:///home/jenkins/minikube-integration/19906-19898/.minikube/cache/iso/amd64/minikube-v1.34.0-1730282777-19883-amd64.iso
	I1104 11:47:52.940195   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) Downloading /home/jenkins/minikube-integration/19906-19898/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19906-19898/.minikube/cache/iso/amd64/minikube-v1.34.0-1730282777-19883-amd64.iso...
	I1104 11:47:53.179601   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) DBG | I1104 11:47:53.179481   62984 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/kubernetes-upgrade-313751/id_rsa...
	I1104 11:47:53.251226   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) DBG | I1104 11:47:53.251077   62984 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/kubernetes-upgrade-313751/kubernetes-upgrade-313751.rawdisk...
	I1104 11:47:53.251260   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) DBG | Writing magic tar header
	I1104 11:47:53.251273   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) DBG | Writing SSH key tar header
	I1104 11:47:53.251285   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) DBG | I1104 11:47:53.251197   62984 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19906-19898/.minikube/machines/kubernetes-upgrade-313751 ...
	I1104 11:47:53.251301   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/kubernetes-upgrade-313751
	I1104 11:47:53.251383   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) Setting executable bit set on /home/jenkins/minikube-integration/19906-19898/.minikube/machines/kubernetes-upgrade-313751 (perms=drwx------)
	I1104 11:47:53.251412   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) Setting executable bit set on /home/jenkins/minikube-integration/19906-19898/.minikube/machines (perms=drwxr-xr-x)
	I1104 11:47:53.251423   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19906-19898/.minikube/machines
	I1104 11:47:53.251437   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) Setting executable bit set on /home/jenkins/minikube-integration/19906-19898/.minikube (perms=drwxr-xr-x)
	I1104 11:47:53.251458   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) Setting executable bit set on /home/jenkins/minikube-integration/19906-19898 (perms=drwxrwxr-x)
	I1104 11:47:53.251491   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1104 11:47:53.251505   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19906-19898/.minikube
	I1104 11:47:53.251518   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1104 11:47:53.251536   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) Creating domain...
	I1104 11:47:53.251550   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19906-19898
	I1104 11:47:53.251562   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1104 11:47:53.251588   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) DBG | Checking permissions on dir: /home/jenkins
	I1104 11:47:53.251635   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) DBG | Checking permissions on dir: /home
	I1104 11:47:53.251656   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) DBG | Skipping /home - not owner
	I1104 11:47:53.252614   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) define libvirt domain using xml: 
	I1104 11:47:53.252633   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) <domain type='kvm'>
	I1104 11:47:53.252644   62682 main.go:141] libmachine: (kubernetes-upgrade-313751)   <name>kubernetes-upgrade-313751</name>
	I1104 11:47:53.252651   62682 main.go:141] libmachine: (kubernetes-upgrade-313751)   <memory unit='MiB'>2200</memory>
	I1104 11:47:53.252660   62682 main.go:141] libmachine: (kubernetes-upgrade-313751)   <vcpu>2</vcpu>
	I1104 11:47:53.252674   62682 main.go:141] libmachine: (kubernetes-upgrade-313751)   <features>
	I1104 11:47:53.252704   62682 main.go:141] libmachine: (kubernetes-upgrade-313751)     <acpi/>
	I1104 11:47:53.252721   62682 main.go:141] libmachine: (kubernetes-upgrade-313751)     <apic/>
	I1104 11:47:53.252735   62682 main.go:141] libmachine: (kubernetes-upgrade-313751)     <pae/>
	I1104 11:47:53.252745   62682 main.go:141] libmachine: (kubernetes-upgrade-313751)     
	I1104 11:47:53.252753   62682 main.go:141] libmachine: (kubernetes-upgrade-313751)   </features>
	I1104 11:47:53.252764   62682 main.go:141] libmachine: (kubernetes-upgrade-313751)   <cpu mode='host-passthrough'>
	I1104 11:47:53.252774   62682 main.go:141] libmachine: (kubernetes-upgrade-313751)   
	I1104 11:47:53.252783   62682 main.go:141] libmachine: (kubernetes-upgrade-313751)   </cpu>
	I1104 11:47:53.252792   62682 main.go:141] libmachine: (kubernetes-upgrade-313751)   <os>
	I1104 11:47:53.252805   62682 main.go:141] libmachine: (kubernetes-upgrade-313751)     <type>hvm</type>
	I1104 11:47:53.252818   62682 main.go:141] libmachine: (kubernetes-upgrade-313751)     <boot dev='cdrom'/>
	I1104 11:47:53.252828   62682 main.go:141] libmachine: (kubernetes-upgrade-313751)     <boot dev='hd'/>
	I1104 11:47:53.252837   62682 main.go:141] libmachine: (kubernetes-upgrade-313751)     <bootmenu enable='no'/>
	I1104 11:47:53.252846   62682 main.go:141] libmachine: (kubernetes-upgrade-313751)   </os>
	I1104 11:47:53.252854   62682 main.go:141] libmachine: (kubernetes-upgrade-313751)   <devices>
	I1104 11:47:53.252865   62682 main.go:141] libmachine: (kubernetes-upgrade-313751)     <disk type='file' device='cdrom'>
	I1104 11:47:53.252880   62682 main.go:141] libmachine: (kubernetes-upgrade-313751)       <source file='/home/jenkins/minikube-integration/19906-19898/.minikube/machines/kubernetes-upgrade-313751/boot2docker.iso'/>
	I1104 11:47:53.252895   62682 main.go:141] libmachine: (kubernetes-upgrade-313751)       <target dev='hdc' bus='scsi'/>
	I1104 11:47:53.252906   62682 main.go:141] libmachine: (kubernetes-upgrade-313751)       <readonly/>
	I1104 11:47:53.252921   62682 main.go:141] libmachine: (kubernetes-upgrade-313751)     </disk>
	I1104 11:47:53.252932   62682 main.go:141] libmachine: (kubernetes-upgrade-313751)     <disk type='file' device='disk'>
	I1104 11:47:53.252944   62682 main.go:141] libmachine: (kubernetes-upgrade-313751)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1104 11:47:53.252962   62682 main.go:141] libmachine: (kubernetes-upgrade-313751)       <source file='/home/jenkins/minikube-integration/19906-19898/.minikube/machines/kubernetes-upgrade-313751/kubernetes-upgrade-313751.rawdisk'/>
	I1104 11:47:53.252977   62682 main.go:141] libmachine: (kubernetes-upgrade-313751)       <target dev='hda' bus='virtio'/>
	I1104 11:47:53.252988   62682 main.go:141] libmachine: (kubernetes-upgrade-313751)     </disk>
	I1104 11:47:53.252999   62682 main.go:141] libmachine: (kubernetes-upgrade-313751)     <interface type='network'>
	I1104 11:47:53.253009   62682 main.go:141] libmachine: (kubernetes-upgrade-313751)       <source network='mk-kubernetes-upgrade-313751'/>
	I1104 11:47:53.253018   62682 main.go:141] libmachine: (kubernetes-upgrade-313751)       <model type='virtio'/>
	I1104 11:47:53.253023   62682 main.go:141] libmachine: (kubernetes-upgrade-313751)     </interface>
	I1104 11:47:53.253033   62682 main.go:141] libmachine: (kubernetes-upgrade-313751)     <interface type='network'>
	I1104 11:47:53.253058   62682 main.go:141] libmachine: (kubernetes-upgrade-313751)       <source network='default'/>
	I1104 11:47:53.253078   62682 main.go:141] libmachine: (kubernetes-upgrade-313751)       <model type='virtio'/>
	I1104 11:47:53.253088   62682 main.go:141] libmachine: (kubernetes-upgrade-313751)     </interface>
	I1104 11:47:53.253097   62682 main.go:141] libmachine: (kubernetes-upgrade-313751)     <serial type='pty'>
	I1104 11:47:53.253108   62682 main.go:141] libmachine: (kubernetes-upgrade-313751)       <target port='0'/>
	I1104 11:47:53.253118   62682 main.go:141] libmachine: (kubernetes-upgrade-313751)     </serial>
	I1104 11:47:53.253126   62682 main.go:141] libmachine: (kubernetes-upgrade-313751)     <console type='pty'>
	I1104 11:47:53.253137   62682 main.go:141] libmachine: (kubernetes-upgrade-313751)       <target type='serial' port='0'/>
	I1104 11:47:53.253160   62682 main.go:141] libmachine: (kubernetes-upgrade-313751)     </console>
	I1104 11:47:53.253177   62682 main.go:141] libmachine: (kubernetes-upgrade-313751)     <rng model='virtio'>
	I1104 11:47:53.253191   62682 main.go:141] libmachine: (kubernetes-upgrade-313751)       <backend model='random'>/dev/random</backend>
	I1104 11:47:53.253210   62682 main.go:141] libmachine: (kubernetes-upgrade-313751)     </rng>
	I1104 11:47:53.253221   62682 main.go:141] libmachine: (kubernetes-upgrade-313751)     
	I1104 11:47:53.253245   62682 main.go:141] libmachine: (kubernetes-upgrade-313751)     
	I1104 11:47:53.253253   62682 main.go:141] libmachine: (kubernetes-upgrade-313751)   </devices>
	I1104 11:47:53.253267   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) </domain>
	I1104 11:47:53.253280   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) 
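The block above is the complete libvirt domain XML that the kvm2 driver defines for the new machine (memory, vCPUs, boot ISO, raw disk, the two virtio NICs and the serial console). As a rough, illustrative sketch of the same step done by hand, assuming the XML has been saved to a local domain.xml and virsh is installed, the domain could be defined and booted like this:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// Illustrative only: register a libvirt domain from an XML file and start it,
	// mirroring the "define libvirt domain using xml" / "Creating domain..." steps above.
	// The file name "domain.xml" is an assumption of this sketch.
	func main() {
		// Register the domain definition with libvirtd.
		if out, err := exec.Command("virsh", "define", "domain.xml").CombinedOutput(); err != nil {
			panic(fmt.Sprintf("virsh define failed: %v\n%s", err, out))
		}
		// Boot the freshly defined domain.
		if out, err := exec.Command("virsh", "start", "kubernetes-upgrade-313751").CombinedOutput(); err != nil {
			panic(fmt.Sprintf("virsh start failed: %v\n%s", err, out))
		}
		fmt.Println("domain defined and started")
	}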
	I1104 11:47:53.259962   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) DBG | domain kubernetes-upgrade-313751 has defined MAC address 52:54:00:26:b6:f4 in network default
	I1104 11:47:53.260758   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) Ensuring networks are active...
	I1104 11:47:53.260786   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) DBG | domain kubernetes-upgrade-313751 has defined MAC address 52:54:00:3c:45:f8 in network mk-kubernetes-upgrade-313751
	I1104 11:47:53.261655   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) Ensuring network default is active
	I1104 11:47:53.262006   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) Ensuring network mk-kubernetes-upgrade-313751 is active
	I1104 11:47:53.262722   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) Getting domain xml...
	I1104 11:47:53.263523   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) Creating domain...
	I1104 11:47:54.533289   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) Waiting to get IP...
	I1104 11:47:54.534309   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) DBG | domain kubernetes-upgrade-313751 has defined MAC address 52:54:00:3c:45:f8 in network mk-kubernetes-upgrade-313751
	I1104 11:47:54.534984   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) DBG | unable to find current IP address of domain kubernetes-upgrade-313751 in network mk-kubernetes-upgrade-313751
	I1104 11:47:54.535005   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) DBG | I1104 11:47:54.534956   62984 retry.go:31] will retry after 236.951386ms: waiting for machine to come up
	I1104 11:47:54.773674   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) DBG | domain kubernetes-upgrade-313751 has defined MAC address 52:54:00:3c:45:f8 in network mk-kubernetes-upgrade-313751
	I1104 11:47:54.774280   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) DBG | unable to find current IP address of domain kubernetes-upgrade-313751 in network mk-kubernetes-upgrade-313751
	I1104 11:47:54.774317   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) DBG | I1104 11:47:54.774248   62984 retry.go:31] will retry after 327.46528ms: waiting for machine to come up
	I1104 11:47:55.103855   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) DBG | domain kubernetes-upgrade-313751 has defined MAC address 52:54:00:3c:45:f8 in network mk-kubernetes-upgrade-313751
	I1104 11:47:55.104465   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) DBG | unable to find current IP address of domain kubernetes-upgrade-313751 in network mk-kubernetes-upgrade-313751
	I1104 11:47:55.104491   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) DBG | I1104 11:47:55.104422   62984 retry.go:31] will retry after 369.169369ms: waiting for machine to come up
	I1104 11:47:55.475276   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) DBG | domain kubernetes-upgrade-313751 has defined MAC address 52:54:00:3c:45:f8 in network mk-kubernetes-upgrade-313751
	I1104 11:47:55.475691   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) DBG | unable to find current IP address of domain kubernetes-upgrade-313751 in network mk-kubernetes-upgrade-313751
	I1104 11:47:55.475743   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) DBG | I1104 11:47:55.475675   62984 retry.go:31] will retry after 423.737469ms: waiting for machine to come up
	I1104 11:47:55.901355   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) DBG | domain kubernetes-upgrade-313751 has defined MAC address 52:54:00:3c:45:f8 in network mk-kubernetes-upgrade-313751
	I1104 11:47:55.901766   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) DBG | unable to find current IP address of domain kubernetes-upgrade-313751 in network mk-kubernetes-upgrade-313751
	I1104 11:47:55.901819   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) DBG | I1104 11:47:55.901738   62984 retry.go:31] will retry after 634.716809ms: waiting for machine to come up
	I1104 11:47:56.538554   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) DBG | domain kubernetes-upgrade-313751 has defined MAC address 52:54:00:3c:45:f8 in network mk-kubernetes-upgrade-313751
	I1104 11:47:56.539035   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) DBG | unable to find current IP address of domain kubernetes-upgrade-313751 in network mk-kubernetes-upgrade-313751
	I1104 11:47:56.539064   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) DBG | I1104 11:47:56.538986   62984 retry.go:31] will retry after 764.435735ms: waiting for machine to come up
	I1104 11:47:57.305065   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) DBG | domain kubernetes-upgrade-313751 has defined MAC address 52:54:00:3c:45:f8 in network mk-kubernetes-upgrade-313751
	I1104 11:47:57.305491   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) DBG | unable to find current IP address of domain kubernetes-upgrade-313751 in network mk-kubernetes-upgrade-313751
	I1104 11:47:57.305519   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) DBG | I1104 11:47:57.305461   62984 retry.go:31] will retry after 961.767493ms: waiting for machine to come up
	I1104 11:47:58.269210   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) DBG | domain kubernetes-upgrade-313751 has defined MAC address 52:54:00:3c:45:f8 in network mk-kubernetes-upgrade-313751
	I1104 11:47:58.269632   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) DBG | unable to find current IP address of domain kubernetes-upgrade-313751 in network mk-kubernetes-upgrade-313751
	I1104 11:47:58.269660   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) DBG | I1104 11:47:58.269584   62984 retry.go:31] will retry after 1.296841966s: waiting for machine to come up
	I1104 11:47:59.568001   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) DBG | domain kubernetes-upgrade-313751 has defined MAC address 52:54:00:3c:45:f8 in network mk-kubernetes-upgrade-313751
	I1104 11:47:59.568470   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) DBG | unable to find current IP address of domain kubernetes-upgrade-313751 in network mk-kubernetes-upgrade-313751
	I1104 11:47:59.568498   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) DBG | I1104 11:47:59.568425   62984 retry.go:31] will retry after 1.627839665s: waiting for machine to come up
	I1104 11:48:01.198262   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) DBG | domain kubernetes-upgrade-313751 has defined MAC address 52:54:00:3c:45:f8 in network mk-kubernetes-upgrade-313751
	I1104 11:48:01.198848   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) DBG | unable to find current IP address of domain kubernetes-upgrade-313751 in network mk-kubernetes-upgrade-313751
	I1104 11:48:01.198872   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) DBG | I1104 11:48:01.198818   62984 retry.go:31] will retry after 1.599125716s: waiting for machine to come up
	I1104 11:48:02.800636   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) DBG | domain kubernetes-upgrade-313751 has defined MAC address 52:54:00:3c:45:f8 in network mk-kubernetes-upgrade-313751
	I1104 11:48:02.801110   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) DBG | unable to find current IP address of domain kubernetes-upgrade-313751 in network mk-kubernetes-upgrade-313751
	I1104 11:48:02.801135   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) DBG | I1104 11:48:02.801055   62984 retry.go:31] will retry after 2.365520794s: waiting for machine to come up
	I1104 11:48:05.168946   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) DBG | domain kubernetes-upgrade-313751 has defined MAC address 52:54:00:3c:45:f8 in network mk-kubernetes-upgrade-313751
	I1104 11:48:05.169392   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) DBG | unable to find current IP address of domain kubernetes-upgrade-313751 in network mk-kubernetes-upgrade-313751
	I1104 11:48:05.169421   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) DBG | I1104 11:48:05.169329   62984 retry.go:31] will retry after 3.029810881s: waiting for machine to come up
	I1104 11:48:08.200936   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) DBG | domain kubernetes-upgrade-313751 has defined MAC address 52:54:00:3c:45:f8 in network mk-kubernetes-upgrade-313751
	I1104 11:48:08.201423   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) DBG | unable to find current IP address of domain kubernetes-upgrade-313751 in network mk-kubernetes-upgrade-313751
	I1104 11:48:08.201450   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) DBG | I1104 11:48:08.201374   62984 retry.go:31] will retry after 4.321628531s: waiting for machine to come up
	I1104 11:48:12.526440   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) DBG | domain kubernetes-upgrade-313751 has defined MAC address 52:54:00:3c:45:f8 in network mk-kubernetes-upgrade-313751
	I1104 11:48:12.526868   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) Found IP for machine: 192.168.50.39
	I1104 11:48:12.526892   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) DBG | domain kubernetes-upgrade-313751 has current primary IP address 192.168.50.39 and MAC address 52:54:00:3c:45:f8 in network mk-kubernetes-upgrade-313751
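The repeated "will retry after ...: waiting for machine to come up" lines above are a simple retry loop with a growing, jittered delay wrapped around a DHCP lease lookup for the machine's MAC address. A minimal sketch of that pattern in Go (lookupIP, the delays and the timeout are placeholders for illustration, not minikube's actual retry.go):

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// lookupIP stands in for querying the libvirt network's DHCP leases for the
	// machine's MAC address; it is a placeholder for this sketch.
	func lookupIP() (string, error) {
		return "", errors.New("no lease for 52:54:00:3c:45:f8 yet")
	}

	// waitForIP retries lookupIP with a growing, jittered delay, roughly matching
	// the "will retry after ...: waiting for machine to come up" messages above.
	func waitForIP(timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		delay := 200 * time.Millisecond
		for time.Now().Before(deadline) {
			if ip, err := lookupIP(); err == nil {
				return ip, nil
			}
			sleep := delay + time.Duration(rand.Int63n(int64(delay)))
			fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
			time.Sleep(sleep)
			delay = delay * 3 / 2 // grow the base delay between attempts
		}
		return "", errors.New("timed out waiting for machine IP")
	}

	func main() {
		if ip, err := waitForIP(2 * time.Second); err != nil {
			fmt.Println(err)
		} else {
			fmt.Println("Found IP for machine:", ip)
		}
	}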
	I1104 11:48:12.526898   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) Reserving static IP address...
	I1104 11:48:12.527290   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-313751", mac: "52:54:00:3c:45:f8", ip: "192.168.50.39"} in network mk-kubernetes-upgrade-313751
	I1104 11:48:12.605925   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) Reserved static IP address: 192.168.50.39
	I1104 11:48:12.605952   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) DBG | Getting to WaitForSSH function...
	I1104 11:48:12.605961   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) Waiting for SSH to be available...
	I1104 11:48:12.609265   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) DBG | domain kubernetes-upgrade-313751 has defined MAC address 52:54:00:3c:45:f8 in network mk-kubernetes-upgrade-313751
	I1104 11:48:12.609643   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:45:f8", ip: ""} in network mk-kubernetes-upgrade-313751: {Iface:virbr2 ExpiryTime:2024-11-04 12:48:07 +0000 UTC Type:0 Mac:52:54:00:3c:45:f8 Iaid: IPaddr:192.168.50.39 Prefix:24 Hostname:minikube Clientid:01:52:54:00:3c:45:f8}
	I1104 11:48:12.609675   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) DBG | domain kubernetes-upgrade-313751 has defined IP address 192.168.50.39 and MAC address 52:54:00:3c:45:f8 in network mk-kubernetes-upgrade-313751
	I1104 11:48:12.609804   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) DBG | Using SSH client type: external
	I1104 11:48:12.609828   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) DBG | Using SSH private key: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/kubernetes-upgrade-313751/id_rsa (-rw-------)
	I1104 11:48:12.609859   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.39 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19906-19898/.minikube/machines/kubernetes-upgrade-313751/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1104 11:48:12.609873   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) DBG | About to run SSH command:
	I1104 11:48:12.609883   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) DBG | exit 0
	I1104 11:48:12.737259   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) DBG | SSH cmd err, output: <nil>: 
	I1104 11:48:12.737499   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) KVM machine creation complete!
	I1104 11:48:12.737800   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) Calling .GetConfigRaw
	I1104 11:48:12.738432   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) Calling .DriverName
	I1104 11:48:12.738636   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) Calling .DriverName
	I1104 11:48:12.738802   62682 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1104 11:48:12.738820   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) Calling .GetState
	I1104 11:48:12.740032   62682 main.go:141] libmachine: Detecting operating system of created instance...
	I1104 11:48:12.740043   62682 main.go:141] libmachine: Waiting for SSH to be available...
	I1104 11:48:12.740048   62682 main.go:141] libmachine: Getting to WaitForSSH function...
	I1104 11:48:12.740053   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) Calling .GetSSHHostname
	I1104 11:48:12.742468   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) DBG | domain kubernetes-upgrade-313751 has defined MAC address 52:54:00:3c:45:f8 in network mk-kubernetes-upgrade-313751
	I1104 11:48:12.742941   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:45:f8", ip: ""} in network mk-kubernetes-upgrade-313751: {Iface:virbr2 ExpiryTime:2024-11-04 12:48:07 +0000 UTC Type:0 Mac:52:54:00:3c:45:f8 Iaid: IPaddr:192.168.50.39 Prefix:24 Hostname:kubernetes-upgrade-313751 Clientid:01:52:54:00:3c:45:f8}
	I1104 11:48:12.742961   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) DBG | domain kubernetes-upgrade-313751 has defined IP address 192.168.50.39 and MAC address 52:54:00:3c:45:f8 in network mk-kubernetes-upgrade-313751
	I1104 11:48:12.743161   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) Calling .GetSSHPort
	I1104 11:48:12.743344   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) Calling .GetSSHKeyPath
	I1104 11:48:12.743499   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) Calling .GetSSHKeyPath
	I1104 11:48:12.743623   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) Calling .GetSSHUsername
	I1104 11:48:12.743758   62682 main.go:141] libmachine: Using SSH client type: native
	I1104 11:48:12.743927   62682 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.39 22 <nil> <nil>}
	I1104 11:48:12.743937   62682 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1104 11:48:12.848589   62682 main.go:141] libmachine: SSH cmd err, output: <nil>: 
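Each "About to run SSH command" / "SSH cmd err, output" pair above is a one-shot command executed over the machine's SSH endpoint; the first command is simply "exit 0" to confirm SSH is reachable. A minimal sketch of that round trip with golang.org/x/crypto/ssh (the key path is the per-machine id_rsa shown in the log, host-key checking is disabled to match the StrictHostKeyChecking=no options above, and error handling is reduced to panics):

	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		// In the log this key lives at .minikube/machines/kubernetes-upgrade-313751/id_rsa.
		keyBytes, err := os.ReadFile("id_rsa")
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(keyBytes)
		if err != nil {
			panic(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no above
		}
		client, err := ssh.Dial("tcp", "192.168.50.39:22", cfg)
		if err != nil {
			panic(err)
		}
		defer client.Close()
		sess, err := client.NewSession()
		if err != nil {
			panic(err)
		}
		defer sess.Close()
		// Run the same liveness probe the log shows: "exit 0".
		out, err := sess.CombinedOutput("exit 0")
		fmt.Printf("SSH cmd err, output: %v: %s\n", err, out)
	}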
	I1104 11:48:12.848614   62682 main.go:141] libmachine: Detecting the provisioner...
	I1104 11:48:12.848625   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) Calling .GetSSHHostname
	I1104 11:48:12.851159   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) DBG | domain kubernetes-upgrade-313751 has defined MAC address 52:54:00:3c:45:f8 in network mk-kubernetes-upgrade-313751
	I1104 11:48:12.851615   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:45:f8", ip: ""} in network mk-kubernetes-upgrade-313751: {Iface:virbr2 ExpiryTime:2024-11-04 12:48:07 +0000 UTC Type:0 Mac:52:54:00:3c:45:f8 Iaid: IPaddr:192.168.50.39 Prefix:24 Hostname:kubernetes-upgrade-313751 Clientid:01:52:54:00:3c:45:f8}
	I1104 11:48:12.851651   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) DBG | domain kubernetes-upgrade-313751 has defined IP address 192.168.50.39 and MAC address 52:54:00:3c:45:f8 in network mk-kubernetes-upgrade-313751
	I1104 11:48:12.851939   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) Calling .GetSSHPort
	I1104 11:48:12.852148   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) Calling .GetSSHKeyPath
	I1104 11:48:12.852326   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) Calling .GetSSHKeyPath
	I1104 11:48:12.852494   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) Calling .GetSSHUsername
	I1104 11:48:12.852690   62682 main.go:141] libmachine: Using SSH client type: native
	I1104 11:48:12.852909   62682 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.39 22 <nil> <nil>}
	I1104 11:48:12.852936   62682 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1104 11:48:12.957731   62682 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1104 11:48:12.957809   62682 main.go:141] libmachine: found compatible host: buildroot
	I1104 11:48:12.957820   62682 main.go:141] libmachine: Provisioning with buildroot...
	I1104 11:48:12.957827   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) Calling .GetMachineName
	I1104 11:48:12.958060   62682 buildroot.go:166] provisioning hostname "kubernetes-upgrade-313751"
	I1104 11:48:12.958083   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) Calling .GetMachineName
	I1104 11:48:12.958278   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) Calling .GetSSHHostname
	I1104 11:48:12.961562   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) DBG | domain kubernetes-upgrade-313751 has defined MAC address 52:54:00:3c:45:f8 in network mk-kubernetes-upgrade-313751
	I1104 11:48:12.961933   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:45:f8", ip: ""} in network mk-kubernetes-upgrade-313751: {Iface:virbr2 ExpiryTime:2024-11-04 12:48:07 +0000 UTC Type:0 Mac:52:54:00:3c:45:f8 Iaid: IPaddr:192.168.50.39 Prefix:24 Hostname:kubernetes-upgrade-313751 Clientid:01:52:54:00:3c:45:f8}
	I1104 11:48:12.961957   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) DBG | domain kubernetes-upgrade-313751 has defined IP address 192.168.50.39 and MAC address 52:54:00:3c:45:f8 in network mk-kubernetes-upgrade-313751
	I1104 11:48:12.962222   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) Calling .GetSSHPort
	I1104 11:48:12.962427   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) Calling .GetSSHKeyPath
	I1104 11:48:12.962586   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) Calling .GetSSHKeyPath
	I1104 11:48:12.962760   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) Calling .GetSSHUsername
	I1104 11:48:12.962900   62682 main.go:141] libmachine: Using SSH client type: native
	I1104 11:48:12.963065   62682 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.39 22 <nil> <nil>}
	I1104 11:48:12.963077   62682 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-313751 && echo "kubernetes-upgrade-313751" | sudo tee /etc/hostname
	I1104 11:48:13.083183   62682 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-313751
	
	I1104 11:48:13.083212   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) Calling .GetSSHHostname
	I1104 11:48:13.086990   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) DBG | domain kubernetes-upgrade-313751 has defined MAC address 52:54:00:3c:45:f8 in network mk-kubernetes-upgrade-313751
	I1104 11:48:13.087365   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:45:f8", ip: ""} in network mk-kubernetes-upgrade-313751: {Iface:virbr2 ExpiryTime:2024-11-04 12:48:07 +0000 UTC Type:0 Mac:52:54:00:3c:45:f8 Iaid: IPaddr:192.168.50.39 Prefix:24 Hostname:kubernetes-upgrade-313751 Clientid:01:52:54:00:3c:45:f8}
	I1104 11:48:13.087393   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) DBG | domain kubernetes-upgrade-313751 has defined IP address 192.168.50.39 and MAC address 52:54:00:3c:45:f8 in network mk-kubernetes-upgrade-313751
	I1104 11:48:13.087621   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) Calling .GetSSHPort
	I1104 11:48:13.087802   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) Calling .GetSSHKeyPath
	I1104 11:48:13.087969   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) Calling .GetSSHKeyPath
	I1104 11:48:13.088093   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) Calling .GetSSHUsername
	I1104 11:48:13.088254   62682 main.go:141] libmachine: Using SSH client type: native
	I1104 11:48:13.088485   62682 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.39 22 <nil> <nil>}
	I1104 11:48:13.088510   62682 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-313751' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-313751/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-313751' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1104 11:48:13.198043   62682 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1104 11:48:13.198073   62682 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19906-19898/.minikube CaCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19906-19898/.minikube}
	I1104 11:48:13.198109   62682 buildroot.go:174] setting up certificates
	I1104 11:48:13.198124   62682 provision.go:84] configureAuth start
	I1104 11:48:13.198142   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) Calling .GetMachineName
	I1104 11:48:13.198451   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) Calling .GetIP
	I1104 11:48:13.201292   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) DBG | domain kubernetes-upgrade-313751 has defined MAC address 52:54:00:3c:45:f8 in network mk-kubernetes-upgrade-313751
	I1104 11:48:13.201754   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:45:f8", ip: ""} in network mk-kubernetes-upgrade-313751: {Iface:virbr2 ExpiryTime:2024-11-04 12:48:07 +0000 UTC Type:0 Mac:52:54:00:3c:45:f8 Iaid: IPaddr:192.168.50.39 Prefix:24 Hostname:kubernetes-upgrade-313751 Clientid:01:52:54:00:3c:45:f8}
	I1104 11:48:13.201781   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) DBG | domain kubernetes-upgrade-313751 has defined IP address 192.168.50.39 and MAC address 52:54:00:3c:45:f8 in network mk-kubernetes-upgrade-313751
	I1104 11:48:13.201965   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) Calling .GetSSHHostname
	I1104 11:48:13.204147   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) DBG | domain kubernetes-upgrade-313751 has defined MAC address 52:54:00:3c:45:f8 in network mk-kubernetes-upgrade-313751
	I1104 11:48:13.204507   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:45:f8", ip: ""} in network mk-kubernetes-upgrade-313751: {Iface:virbr2 ExpiryTime:2024-11-04 12:48:07 +0000 UTC Type:0 Mac:52:54:00:3c:45:f8 Iaid: IPaddr:192.168.50.39 Prefix:24 Hostname:kubernetes-upgrade-313751 Clientid:01:52:54:00:3c:45:f8}
	I1104 11:48:13.204534   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) DBG | domain kubernetes-upgrade-313751 has defined IP address 192.168.50.39 and MAC address 52:54:00:3c:45:f8 in network mk-kubernetes-upgrade-313751
	I1104 11:48:13.204647   62682 provision.go:143] copyHostCerts
	I1104 11:48:13.204708   62682 exec_runner.go:144] found /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem, removing ...
	I1104 11:48:13.204727   62682 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem
	I1104 11:48:13.204795   62682 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem (1078 bytes)
	I1104 11:48:13.204898   62682 exec_runner.go:144] found /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem, removing ...
	I1104 11:48:13.204910   62682 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem
	I1104 11:48:13.204938   62682 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem (1123 bytes)
	I1104 11:48:13.205018   62682 exec_runner.go:144] found /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem, removing ...
	I1104 11:48:13.205028   62682 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem
	I1104 11:48:13.205057   62682 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem (1679 bytes)
	I1104 11:48:13.205118   62682 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-313751 san=[127.0.0.1 192.168.50.39 kubernetes-upgrade-313751 localhost minikube]
	I1104 11:48:13.329774   62682 provision.go:177] copyRemoteCerts
	I1104 11:48:13.329835   62682 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1104 11:48:13.329867   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) Calling .GetSSHHostname
	I1104 11:48:13.332906   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) DBG | domain kubernetes-upgrade-313751 has defined MAC address 52:54:00:3c:45:f8 in network mk-kubernetes-upgrade-313751
	I1104 11:48:13.333266   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:45:f8", ip: ""} in network mk-kubernetes-upgrade-313751: {Iface:virbr2 ExpiryTime:2024-11-04 12:48:07 +0000 UTC Type:0 Mac:52:54:00:3c:45:f8 Iaid: IPaddr:192.168.50.39 Prefix:24 Hostname:kubernetes-upgrade-313751 Clientid:01:52:54:00:3c:45:f8}
	I1104 11:48:13.333309   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) DBG | domain kubernetes-upgrade-313751 has defined IP address 192.168.50.39 and MAC address 52:54:00:3c:45:f8 in network mk-kubernetes-upgrade-313751
	I1104 11:48:13.333532   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) Calling .GetSSHPort
	I1104 11:48:13.333733   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) Calling .GetSSHKeyPath
	I1104 11:48:13.333886   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) Calling .GetSSHUsername
	I1104 11:48:13.333992   62682 sshutil.go:53] new ssh client: &{IP:192.168.50.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/kubernetes-upgrade-313751/id_rsa Username:docker}
	I1104 11:48:13.415427   62682 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1104 11:48:13.446829   62682 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1104 11:48:13.473647   62682 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1104 11:48:13.499304   62682 provision.go:87] duration metric: took 301.164463ms to configureAuth
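configureAuth above regenerates the machine's server certificate with the SANs listed in the log (127.0.0.1, 192.168.50.39, the hostname, localhost, minikube) and copies it to /etc/docker on the guest. A compressed sketch of issuing such a certificate with crypto/x509 follows; creating a throwaway CA in place of minikube's stored ca.pem/ca-key.pem, the 2048-bit key size and the validity periods are assumptions of the example, and errors are ignored for brevity:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Throwaway CA (stand-in for the persisted ca.pem / ca-key.pem).
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().AddDate(10, 0, 0),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		// Server certificate carrying the SANs seen in the log above.
		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{CommonName: "kubernetes-upgrade-313751"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(3, 0, 0),
			DNSNames:     []string{"kubernetes-upgrade-313751", "localhost", "minikube"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.39")},
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
	}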
	I1104 11:48:13.499329   62682 buildroot.go:189] setting minikube options for container-runtime
	I1104 11:48:13.499467   62682 config.go:182] Loaded profile config "kubernetes-upgrade-313751": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1104 11:48:13.499525   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) Calling .GetSSHHostname
	I1104 11:48:13.502509   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) DBG | domain kubernetes-upgrade-313751 has defined MAC address 52:54:00:3c:45:f8 in network mk-kubernetes-upgrade-313751
	I1104 11:48:13.502900   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:45:f8", ip: ""} in network mk-kubernetes-upgrade-313751: {Iface:virbr2 ExpiryTime:2024-11-04 12:48:07 +0000 UTC Type:0 Mac:52:54:00:3c:45:f8 Iaid: IPaddr:192.168.50.39 Prefix:24 Hostname:kubernetes-upgrade-313751 Clientid:01:52:54:00:3c:45:f8}
	I1104 11:48:13.502920   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) DBG | domain kubernetes-upgrade-313751 has defined IP address 192.168.50.39 and MAC address 52:54:00:3c:45:f8 in network mk-kubernetes-upgrade-313751
	I1104 11:48:13.503172   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) Calling .GetSSHPort
	I1104 11:48:13.503373   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) Calling .GetSSHKeyPath
	I1104 11:48:13.503568   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) Calling .GetSSHKeyPath
	I1104 11:48:13.503718   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) Calling .GetSSHUsername
	I1104 11:48:13.503894   62682 main.go:141] libmachine: Using SSH client type: native
	I1104 11:48:13.504062   62682 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.39 22 <nil> <nil>}
	I1104 11:48:13.504081   62682 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1104 11:48:13.742871   62682 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1104 11:48:13.742896   62682 main.go:141] libmachine: Checking connection to Docker...
	I1104 11:48:13.742908   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) Calling .GetURL
	I1104 11:48:13.744203   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) DBG | Using libvirt version 6000000
	I1104 11:48:13.746893   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) DBG | domain kubernetes-upgrade-313751 has defined MAC address 52:54:00:3c:45:f8 in network mk-kubernetes-upgrade-313751
	I1104 11:48:13.747269   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:45:f8", ip: ""} in network mk-kubernetes-upgrade-313751: {Iface:virbr2 ExpiryTime:2024-11-04 12:48:07 +0000 UTC Type:0 Mac:52:54:00:3c:45:f8 Iaid: IPaddr:192.168.50.39 Prefix:24 Hostname:kubernetes-upgrade-313751 Clientid:01:52:54:00:3c:45:f8}
	I1104 11:48:13.747298   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) DBG | domain kubernetes-upgrade-313751 has defined IP address 192.168.50.39 and MAC address 52:54:00:3c:45:f8 in network mk-kubernetes-upgrade-313751
	I1104 11:48:13.747449   62682 main.go:141] libmachine: Docker is up and running!
	I1104 11:48:13.747463   62682 main.go:141] libmachine: Reticulating splines...
	I1104 11:48:13.747471   62682 client.go:171] duration metric: took 20.888426818s to LocalClient.Create
	I1104 11:48:13.747496   62682 start.go:167] duration metric: took 20.888493345s to libmachine.API.Create "kubernetes-upgrade-313751"
	I1104 11:48:13.747509   62682 start.go:293] postStartSetup for "kubernetes-upgrade-313751" (driver="kvm2")
	I1104 11:48:13.747523   62682 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1104 11:48:13.747545   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) Calling .DriverName
	I1104 11:48:13.747775   62682 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1104 11:48:13.747799   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) Calling .GetSSHHostname
	I1104 11:48:13.750427   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) DBG | domain kubernetes-upgrade-313751 has defined MAC address 52:54:00:3c:45:f8 in network mk-kubernetes-upgrade-313751
	I1104 11:48:13.750833   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:45:f8", ip: ""} in network mk-kubernetes-upgrade-313751: {Iface:virbr2 ExpiryTime:2024-11-04 12:48:07 +0000 UTC Type:0 Mac:52:54:00:3c:45:f8 Iaid: IPaddr:192.168.50.39 Prefix:24 Hostname:kubernetes-upgrade-313751 Clientid:01:52:54:00:3c:45:f8}
	I1104 11:48:13.750867   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) DBG | domain kubernetes-upgrade-313751 has defined IP address 192.168.50.39 and MAC address 52:54:00:3c:45:f8 in network mk-kubernetes-upgrade-313751
	I1104 11:48:13.751019   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) Calling .GetSSHPort
	I1104 11:48:13.751184   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) Calling .GetSSHKeyPath
	I1104 11:48:13.751374   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) Calling .GetSSHUsername
	I1104 11:48:13.751520   62682 sshutil.go:53] new ssh client: &{IP:192.168.50.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/kubernetes-upgrade-313751/id_rsa Username:docker}
	I1104 11:48:13.831609   62682 ssh_runner.go:195] Run: cat /etc/os-release
	I1104 11:48:13.836170   62682 info.go:137] Remote host: Buildroot 2023.02.9
	I1104 11:48:13.836205   62682 filesync.go:126] Scanning /home/jenkins/minikube-integration/19906-19898/.minikube/addons for local assets ...
	I1104 11:48:13.836268   62682 filesync.go:126] Scanning /home/jenkins/minikube-integration/19906-19898/.minikube/files for local assets ...
	I1104 11:48:13.836346   62682 filesync.go:149] local asset: /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem -> 272182.pem in /etc/ssl/certs
	I1104 11:48:13.836437   62682 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1104 11:48:13.845903   62682 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem --> /etc/ssl/certs/272182.pem (1708 bytes)
	I1104 11:48:13.870284   62682 start.go:296] duration metric: took 122.762651ms for postStartSetup
	I1104 11:48:13.870335   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) Calling .GetConfigRaw
	I1104 11:48:13.870928   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) Calling .GetIP
	I1104 11:48:13.873947   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) DBG | domain kubernetes-upgrade-313751 has defined MAC address 52:54:00:3c:45:f8 in network mk-kubernetes-upgrade-313751
	I1104 11:48:13.874341   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:45:f8", ip: ""} in network mk-kubernetes-upgrade-313751: {Iface:virbr2 ExpiryTime:2024-11-04 12:48:07 +0000 UTC Type:0 Mac:52:54:00:3c:45:f8 Iaid: IPaddr:192.168.50.39 Prefix:24 Hostname:kubernetes-upgrade-313751 Clientid:01:52:54:00:3c:45:f8}
	I1104 11:48:13.874381   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) DBG | domain kubernetes-upgrade-313751 has defined IP address 192.168.50.39 and MAC address 52:54:00:3c:45:f8 in network mk-kubernetes-upgrade-313751
	I1104 11:48:13.874570   62682 profile.go:143] Saving config to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/kubernetes-upgrade-313751/config.json ...
	I1104 11:48:13.874737   62682 start.go:128] duration metric: took 21.036697283s to createHost
	I1104 11:48:13.874759   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) Calling .GetSSHHostname
	I1104 11:48:13.877029   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) DBG | domain kubernetes-upgrade-313751 has defined MAC address 52:54:00:3c:45:f8 in network mk-kubernetes-upgrade-313751
	I1104 11:48:13.877397   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:45:f8", ip: ""} in network mk-kubernetes-upgrade-313751: {Iface:virbr2 ExpiryTime:2024-11-04 12:48:07 +0000 UTC Type:0 Mac:52:54:00:3c:45:f8 Iaid: IPaddr:192.168.50.39 Prefix:24 Hostname:kubernetes-upgrade-313751 Clientid:01:52:54:00:3c:45:f8}
	I1104 11:48:13.877429   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) DBG | domain kubernetes-upgrade-313751 has defined IP address 192.168.50.39 and MAC address 52:54:00:3c:45:f8 in network mk-kubernetes-upgrade-313751
	I1104 11:48:13.877593   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) Calling .GetSSHPort
	I1104 11:48:13.877785   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) Calling .GetSSHKeyPath
	I1104 11:48:13.877927   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) Calling .GetSSHKeyPath
	I1104 11:48:13.878153   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) Calling .GetSSHUsername
	I1104 11:48:13.878319   62682 main.go:141] libmachine: Using SSH client type: native
	I1104 11:48:13.878506   62682 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.39 22 <nil> <nil>}
	I1104 11:48:13.878518   62682 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1104 11:48:13.985591   62682 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730720893.955609104
	
	I1104 11:48:13.985612   62682 fix.go:216] guest clock: 1730720893.955609104
	I1104 11:48:13.985621   62682 fix.go:229] Guest: 2024-11-04 11:48:13.955609104 +0000 UTC Remote: 2024-11-04 11:48:13.874749045 +0000 UTC m=+42.418390438 (delta=80.860059ms)
	I1104 11:48:13.985643   62682 fix.go:200] guest clock delta is within tolerance: 80.860059ms
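The fix.go lines above read the guest's `date +%s.%N`, compare it with the host clock, and accept the drift when the delta stays under a tolerance. A tiny sketch of that comparison (the one-second tolerance and the float-based parsing are illustrative simplifications):

	package main

	import (
		"fmt"
		"time"
	)

	// compareClocks mirrors the guest-clock check in the log: convert the guest's
	// epoch timestamp to a time.Time and check the delta against the host clock.
	func compareClocks(guestEpochSec float64, host time.Time, tolerance time.Duration) bool {
		guest := time.Unix(0, int64(guestEpochSec*float64(time.Second)))
		delta := host.Sub(guest)
		if delta < 0 {
			delta = -delta
		}
		fmt.Printf("Guest: %v Remote: %v (delta=%v)\n", guest, host, delta)
		return delta <= tolerance
	}

	func main() {
		// The timestamp echoes the log above; the 1s tolerance is an assumption.
		ok := compareClocks(1730720893.955609104, time.Now(), time.Second)
		fmt.Println("guest clock delta is within tolerance:", ok)
	}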
	I1104 11:48:13.985648   62682 start.go:83] releasing machines lock for "kubernetes-upgrade-313751", held for 21.147769079s
	I1104 11:48:13.985678   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) Calling .DriverName
	I1104 11:48:13.985942   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) Calling .GetIP
	I1104 11:48:13.988765   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) DBG | domain kubernetes-upgrade-313751 has defined MAC address 52:54:00:3c:45:f8 in network mk-kubernetes-upgrade-313751
	I1104 11:48:13.989270   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:45:f8", ip: ""} in network mk-kubernetes-upgrade-313751: {Iface:virbr2 ExpiryTime:2024-11-04 12:48:07 +0000 UTC Type:0 Mac:52:54:00:3c:45:f8 Iaid: IPaddr:192.168.50.39 Prefix:24 Hostname:kubernetes-upgrade-313751 Clientid:01:52:54:00:3c:45:f8}
	I1104 11:48:13.989305   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) DBG | domain kubernetes-upgrade-313751 has defined IP address 192.168.50.39 and MAC address 52:54:00:3c:45:f8 in network mk-kubernetes-upgrade-313751
	I1104 11:48:13.989424   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) Calling .DriverName
	I1104 11:48:13.989965   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) Calling .DriverName
	I1104 11:48:13.990159   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) Calling .DriverName
	I1104 11:48:13.990256   62682 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1104 11:48:13.990292   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) Calling .GetSSHHostname
	I1104 11:48:13.990364   62682 ssh_runner.go:195] Run: cat /version.json
	I1104 11:48:13.990381   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) Calling .GetSSHHostname
	I1104 11:48:13.994071   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) DBG | domain kubernetes-upgrade-313751 has defined MAC address 52:54:00:3c:45:f8 in network mk-kubernetes-upgrade-313751
	I1104 11:48:13.994324   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) DBG | domain kubernetes-upgrade-313751 has defined MAC address 52:54:00:3c:45:f8 in network mk-kubernetes-upgrade-313751
	I1104 11:48:13.994465   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:45:f8", ip: ""} in network mk-kubernetes-upgrade-313751: {Iface:virbr2 ExpiryTime:2024-11-04 12:48:07 +0000 UTC Type:0 Mac:52:54:00:3c:45:f8 Iaid: IPaddr:192.168.50.39 Prefix:24 Hostname:kubernetes-upgrade-313751 Clientid:01:52:54:00:3c:45:f8}
	I1104 11:48:13.994502   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) DBG | domain kubernetes-upgrade-313751 has defined IP address 192.168.50.39 and MAC address 52:54:00:3c:45:f8 in network mk-kubernetes-upgrade-313751
	I1104 11:48:13.994637   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:45:f8", ip: ""} in network mk-kubernetes-upgrade-313751: {Iface:virbr2 ExpiryTime:2024-11-04 12:48:07 +0000 UTC Type:0 Mac:52:54:00:3c:45:f8 Iaid: IPaddr:192.168.50.39 Prefix:24 Hostname:kubernetes-upgrade-313751 Clientid:01:52:54:00:3c:45:f8}
	I1104 11:48:13.994659   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) DBG | domain kubernetes-upgrade-313751 has defined IP address 192.168.50.39 and MAC address 52:54:00:3c:45:f8 in network mk-kubernetes-upgrade-313751
	I1104 11:48:13.994695   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) Calling .GetSSHPort
	I1104 11:48:13.994826   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) Calling .GetSSHPort
	I1104 11:48:13.994899   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) Calling .GetSSHKeyPath
	I1104 11:48:13.995020   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) Calling .GetSSHKeyPath
	I1104 11:48:13.995074   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) Calling .GetSSHUsername
	I1104 11:48:13.995176   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) Calling .GetSSHUsername
	I1104 11:48:13.995252   62682 sshutil.go:53] new ssh client: &{IP:192.168.50.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/kubernetes-upgrade-313751/id_rsa Username:docker}
	I1104 11:48:13.995309   62682 sshutil.go:53] new ssh client: &{IP:192.168.50.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/kubernetes-upgrade-313751/id_rsa Username:docker}
	I1104 11:48:14.105076   62682 ssh_runner.go:195] Run: systemctl --version
	I1104 11:48:14.112793   62682 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1104 11:48:14.284105   62682 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1104 11:48:14.289974   62682 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1104 11:48:14.290052   62682 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1104 11:48:14.306851   62682 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1104 11:48:14.306877   62682 start.go:495] detecting cgroup driver to use...
	I1104 11:48:14.306946   62682 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1104 11:48:14.327929   62682 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1104 11:48:14.345729   62682 docker.go:217] disabling cri-docker service (if available) ...
	I1104 11:48:14.345781   62682 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1104 11:48:14.361373   62682 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1104 11:48:14.376305   62682 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1104 11:48:14.507699   62682 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1104 11:48:14.686607   62682 docker.go:233] disabling docker service ...
	I1104 11:48:14.686669   62682 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1104 11:48:14.712271   62682 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1104 11:48:14.727410   62682 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1104 11:48:14.854856   62682 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1104 11:48:14.977836   62682 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1104 11:48:14.992883   62682 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1104 11:48:15.012127   62682 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1104 11:48:15.012197   62682 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 11:48:15.023525   62682 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1104 11:48:15.023587   62682 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 11:48:15.034899   62682 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 11:48:15.046124   62682 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 11:48:15.057346   62682 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1104 11:48:15.067402   62682 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1104 11:48:15.075929   62682 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1104 11:48:15.075985   62682 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1104 11:48:15.088574   62682 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1104 11:48:15.097542   62682 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1104 11:48:15.215713   62682 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1104 11:48:15.307032   62682 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1104 11:48:15.307093   62682 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1104 11:48:15.311775   62682 start.go:563] Will wait 60s for crictl version
	I1104 11:48:15.311831   62682 ssh_runner.go:195] Run: which crictl
	I1104 11:48:15.315479   62682 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1104 11:48:15.357731   62682 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1104 11:48:15.357817   62682 ssh_runner.go:195] Run: crio --version
	I1104 11:48:15.388123   62682 ssh_runner.go:195] Run: crio --version
	I1104 11:48:15.420119   62682 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1104 11:48:15.421670   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) Calling .GetIP
	I1104 11:48:15.424699   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) DBG | domain kubernetes-upgrade-313751 has defined MAC address 52:54:00:3c:45:f8 in network mk-kubernetes-upgrade-313751
	I1104 11:48:15.425019   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:45:f8", ip: ""} in network mk-kubernetes-upgrade-313751: {Iface:virbr2 ExpiryTime:2024-11-04 12:48:07 +0000 UTC Type:0 Mac:52:54:00:3c:45:f8 Iaid: IPaddr:192.168.50.39 Prefix:24 Hostname:kubernetes-upgrade-313751 Clientid:01:52:54:00:3c:45:f8}
	I1104 11:48:15.425078   62682 main.go:141] libmachine: (kubernetes-upgrade-313751) DBG | domain kubernetes-upgrade-313751 has defined IP address 192.168.50.39 and MAC address 52:54:00:3c:45:f8 in network mk-kubernetes-upgrade-313751
	I1104 11:48:15.425300   62682 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1104 11:48:15.429379   62682 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1104 11:48:15.442734   62682 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-313751 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-313751 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.39 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1104 11:48:15.442855   62682 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1104 11:48:15.442911   62682 ssh_runner.go:195] Run: sudo crictl images --output json
	I1104 11:48:15.477833   62682 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1104 11:48:15.477909   62682 ssh_runner.go:195] Run: which lz4
	I1104 11:48:15.481763   62682 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1104 11:48:15.485961   62682 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1104 11:48:15.485992   62682 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1104 11:48:16.923162   62682 crio.go:462] duration metric: took 1.441424892s to copy over tarball
	I1104 11:48:16.923253   62682 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1104 11:48:19.782084   62682 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.85879779s)
	I1104 11:48:19.782111   62682 crio.go:469] duration metric: took 2.858927295s to extract the tarball
	I1104 11:48:19.782120   62682 ssh_runner.go:146] rm: /preloaded.tar.lz4
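Because the stat check above found no /preloaded.tar.lz4 on the node, the ~473 MB preload tarball was copied over SSH and unpacked with tar's lz4 filter before being removed again. A local sketch of the same unpack step via os/exec (it assumes lz4 is installed and the tarball already sits at /preloaded.tar.lz4):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// Unpack a preload tarball the same way the log shows:
	// tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	func main() {
		cmd := exec.Command("tar",
			"--xattrs", "--xattrs-include", "security.capability",
			"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
		if out, err := cmd.CombinedOutput(); err != nil {
			panic(fmt.Sprintf("extract failed: %v\n%s", err, out))
		}
		fmt.Println("preload extracted")
	}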
	I1104 11:48:19.828062   62682 ssh_runner.go:195] Run: sudo crictl images --output json
	I1104 11:48:19.876951   62682 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1104 11:48:19.876975   62682 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1104 11:48:19.877049   62682 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1104 11:48:19.877070   62682 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1104 11:48:19.877328   62682 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1104 11:48:19.877352   62682 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1104 11:48:19.877405   62682 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1104 11:48:19.877442   62682 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1104 11:48:19.877544   62682 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1104 11:48:19.877329   62682 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1104 11:48:19.878372   62682 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1104 11:48:19.878832   62682 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1104 11:48:19.878903   62682 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1104 11:48:19.878955   62682 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1104 11:48:19.878837   62682 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1104 11:48:19.879173   62682 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1104 11:48:19.879364   62682 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1104 11:48:19.879568   62682 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1104 11:48:20.036967   62682 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1104 11:48:20.036967   62682 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1104 11:48:20.050945   62682 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1104 11:48:20.053950   62682 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1104 11:48:20.059241   62682 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1104 11:48:20.065105   62682 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1104 11:48:20.069245   62682 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1104 11:48:20.131573   62682 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1104 11:48:20.131622   62682 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1104 11:48:20.131669   62682 ssh_runner.go:195] Run: which crictl
	I1104 11:48:20.151295   62682 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1104 11:48:20.151340   62682 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1104 11:48:20.151409   62682 ssh_runner.go:195] Run: which crictl
	I1104 11:48:20.191599   62682 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1104 11:48:20.191642   62682 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1104 11:48:20.191686   62682 ssh_runner.go:195] Run: which crictl
	I1104 11:48:20.220379   62682 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1104 11:48:20.220431   62682 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1104 11:48:20.220479   62682 ssh_runner.go:195] Run: which crictl
	I1104 11:48:20.271619   62682 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1104 11:48:20.271635   62682 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1104 11:48:20.271658   62682 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1104 11:48:20.271664   62682 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1104 11:48:20.271706   62682 ssh_runner.go:195] Run: which crictl
	I1104 11:48:20.271756   62682 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1104 11:48:20.271707   62682 ssh_runner.go:195] Run: which crictl
	I1104 11:48:20.271774   62682 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1104 11:48:20.271794   62682 ssh_runner.go:195] Run: which crictl
	I1104 11:48:20.271855   62682 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1104 11:48:20.271876   62682 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1104 11:48:20.271900   62682 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1104 11:48:20.271958   62682 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1104 11:48:20.380620   62682 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1104 11:48:20.380633   62682 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1104 11:48:20.380696   62682 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1104 11:48:20.380738   62682 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1104 11:48:20.380756   62682 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1104 11:48:20.380776   62682 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1104 11:48:20.380809   62682 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1104 11:48:20.574501   62682 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1104 11:48:20.574544   62682 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1104 11:48:20.574587   62682 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1104 11:48:20.574627   62682 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1104 11:48:20.574646   62682 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1104 11:48:20.574700   62682 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1104 11:48:20.574743   62682 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1104 11:48:20.734063   62682 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1104 11:48:20.734145   62682 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1104 11:48:20.734221   62682 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1104 11:48:20.737211   62682 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1104 11:48:20.737277   62682 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1104 11:48:20.737278   62682 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1104 11:48:20.737349   62682 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1104 11:48:20.778538   62682 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1104 11:48:20.804315   62682 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1104 11:48:20.804368   62682 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1104 11:48:20.838499   62682 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1104 11:48:20.982567   62682 cache_images.go:92] duration metric: took 1.105573931s to LoadCachedImages
	W1104 11:48:20.982651   62682 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
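Even after the preload tarball is extracted, the v1.20.0 images are still not visible to CRI-O, so minikube falls back to loading individual images from its host-side cache; that fails too because nothing was ever cached under .minikube/cache/images for this version, and kubeadm ends up pulling the images itself during preflight. A minimal sketch of the per-image check the run performs on the node (image name taken from the list above; CRI-O and podman share the containers/storage backend here, which is why the run inspects via podman):

    # what does CRI-O already have?
    sudo crictl images --output json
    # inspect one image by ID through podman
    sudo podman image inspect --format '{{.Id}}' registry.k8s.io/kube-apiserver:v1.20.0
    # if the stored ID differs from the expected one, remove it so it can be re-transferred
    sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0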
	I1104 11:48:20.982665   62682 kubeadm.go:934] updating node { 192.168.50.39 8443 v1.20.0 crio true true} ...
	I1104 11:48:20.982784   62682 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-313751 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.39
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-313751 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
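The kubelet unit fragment above is installed a few steps further down as a systemd drop-in (10-kubeadm.conf) alongside kubelet.service. A quick way to confirm what the node's kubelet is actually configured to run, using standard systemd tooling rather than anything this run executes:

    # merged view of /lib/systemd/system/kubelet.service plus the
    # /etc/systemd/system/kubelet.service.d/10-kubeadm.conf drop-in
    systemctl cat kubelet
    # the effective ExecStart should match the flags shown above
    systemctl show kubelet -p ExecStart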
	I1104 11:48:20.982863   62682 ssh_runner.go:195] Run: crio config
	I1104 11:48:21.031964   62682 cni.go:84] Creating CNI manager for ""
	I1104 11:48:21.031994   62682 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1104 11:48:21.032005   62682 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1104 11:48:21.032029   62682 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.39 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-313751 NodeName:kubernetes-upgrade-313751 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.39"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.39 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt
StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1104 11:48:21.032200   62682 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.39
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-313751"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.39
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.39"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1104 11:48:21.032273   62682 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1104 11:48:21.044584   62682 binaries.go:44] Found k8s binaries, skipping transfer
	I1104 11:48:21.044658   62682 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1104 11:48:21.054634   62682 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (432 bytes)
	I1104 11:48:21.072486   62682 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1104 11:48:21.090530   62682 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
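The generated kubeadm config is staged as kubeadm.yaml.new and only promoted to kubeadm.yaml right before init runs (see the cp further down). A small sketch for eyeballing the staged file on the node; it should contain the four documents shown above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration):

    # list the document boundaries and object kinds in the staged config
    sudo grep -nE '^(---|apiVersion:|kind:)' /var/tmp/minikube/kubeadm.yaml.new
    # promotion step, as performed later in this run
    sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml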
	I1104 11:48:21.110598   62682 ssh_runner.go:195] Run: grep 192.168.50.39	control-plane.minikube.internal$ /etc/hosts
	I1104 11:48:21.114595   62682 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.39	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1104 11:48:21.128040   62682 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1104 11:48:21.267750   62682 ssh_runner.go:195] Run: sudo systemctl start kubelet
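The /etc/hosts edit just above is an idempotent rewrite: strip any line already ending in a tab plus control-plane.minikube.internal, append the current mapping, and copy the temp file back over /etc/hosts. The same sequence spelled out with comments (IP and hostname as in this run; printf is used here only to make the tab explicit):

    # remove any stale control-plane entry, then append the current one
    { grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts; \
      printf '192.168.50.39\tcontrol-plane.minikube.internal\n'; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts
    # pick up the freshly written unit files and start the kubelet
    sudo systemctl daemon-reload
    sudo systemctl start kubelet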
	I1104 11:48:21.288418   62682 certs.go:68] Setting up /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/kubernetes-upgrade-313751 for IP: 192.168.50.39
	I1104 11:48:21.288452   62682 certs.go:194] generating shared ca certs ...
	I1104 11:48:21.288471   62682 certs.go:226] acquiring lock for ca certs: {Name:mk4fb0469da6697654d0290fd25edddd463f47c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 11:48:21.288635   62682 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.key
	I1104 11:48:21.288690   62682 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.key
	I1104 11:48:21.288705   62682 certs.go:256] generating profile certs ...
	I1104 11:48:21.288779   62682 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/kubernetes-upgrade-313751/client.key
	I1104 11:48:21.288793   62682 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/kubernetes-upgrade-313751/client.crt with IP's: []
	I1104 11:48:21.394669   62682 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/kubernetes-upgrade-313751/client.crt ...
	I1104 11:48:21.394696   62682 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/kubernetes-upgrade-313751/client.crt: {Name:mkd823e7e5ebe551affa86fc227e79447aeade4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 11:48:21.488111   62682 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/kubernetes-upgrade-313751/client.key ...
	I1104 11:48:21.488144   62682 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/kubernetes-upgrade-313751/client.key: {Name:mk3678ce7370197cb3889c344a36384acb8d5f9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 11:48:21.488326   62682 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/kubernetes-upgrade-313751/apiserver.key.c5fb1e24
	I1104 11:48:21.488350   62682 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/kubernetes-upgrade-313751/apiserver.crt.c5fb1e24 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.39]
	I1104 11:48:21.999225   62682 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/kubernetes-upgrade-313751/apiserver.crt.c5fb1e24 ...
	I1104 11:48:21.999257   62682 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/kubernetes-upgrade-313751/apiserver.crt.c5fb1e24: {Name:mk5f9a2b100f81af254de17edc0247001d48d400 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 11:48:22.048788   62682 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/kubernetes-upgrade-313751/apiserver.key.c5fb1e24 ...
	I1104 11:48:22.048831   62682 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/kubernetes-upgrade-313751/apiserver.key.c5fb1e24: {Name:mk8001915921ead2b3b79f1f8972b48d4cbde3ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 11:48:22.048989   62682 certs.go:381] copying /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/kubernetes-upgrade-313751/apiserver.crt.c5fb1e24 -> /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/kubernetes-upgrade-313751/apiserver.crt
	I1104 11:48:22.049091   62682 certs.go:385] copying /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/kubernetes-upgrade-313751/apiserver.key.c5fb1e24 -> /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/kubernetes-upgrade-313751/apiserver.key
	I1104 11:48:22.049165   62682 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/kubernetes-upgrade-313751/proxy-client.key
	I1104 11:48:22.049188   62682 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/kubernetes-upgrade-313751/proxy-client.crt with IP's: []
	I1104 11:48:22.160525   62682 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/kubernetes-upgrade-313751/proxy-client.crt ...
	I1104 11:48:22.160556   62682 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/kubernetes-upgrade-313751/proxy-client.crt: {Name:mk086b786540d4dfe69e2e30978b2939a793d10e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 11:48:22.160723   62682 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/kubernetes-upgrade-313751/proxy-client.key ...
	I1104 11:48:22.160739   62682 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/kubernetes-upgrade-313751/proxy-client.key: {Name:mk44036c894bdf26919dba1453301dae554786b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
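At this point the profile certificates exist on the host: a client cert for "minikube-user", an apiserver serving cert whose SANs cover 10.96.0.1 (the kubernetes service IP from the 10.96.0.0/12 service CIDR), 127.0.0.1, 10.0.0.1 and the node IP 192.168.50.39, and an aggregator proxy-client cert. A quick sanity check of the SANs with plain openssl, run on the host against the path shown above (not something this run executes):

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/kubernetes-upgrade-313751/apiserver.crt \
      | grep -A1 'Subject Alternative Name'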
	I1104 11:48:22.160925   62682 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218.pem (1338 bytes)
	W1104 11:48:22.160972   62682 certs.go:480] ignoring /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218_empty.pem, impossibly tiny 0 bytes
	I1104 11:48:22.160987   62682 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem (1675 bytes)
	I1104 11:48:22.161019   62682 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem (1078 bytes)
	I1104 11:48:22.161055   62682 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem (1123 bytes)
	I1104 11:48:22.161088   62682 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem (1679 bytes)
	I1104 11:48:22.161151   62682 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem (1708 bytes)
	I1104 11:48:22.161775   62682 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1104 11:48:22.195903   62682 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1104 11:48:22.221261   62682 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1104 11:48:22.247323   62682 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1104 11:48:22.275508   62682 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/kubernetes-upgrade-313751/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1104 11:48:22.303812   62682 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/kubernetes-upgrade-313751/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1104 11:48:22.422747   62682 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/kubernetes-upgrade-313751/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1104 11:48:22.445538   62682 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/kubernetes-upgrade-313751/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1104 11:48:22.467745   62682 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218.pem --> /usr/share/ca-certificates/27218.pem (1338 bytes)
	I1104 11:48:22.490780   62682 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem --> /usr/share/ca-certificates/272182.pem (1708 bytes)
	I1104 11:48:22.513853   62682 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1104 11:48:22.536782   62682 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1104 11:48:22.553422   62682 ssh_runner.go:195] Run: openssl version
	I1104 11:48:22.560769   62682 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/272182.pem && ln -fs /usr/share/ca-certificates/272182.pem /etc/ssl/certs/272182.pem"
	I1104 11:48:22.572657   62682 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/272182.pem
	I1104 11:48:22.577648   62682 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  4 10:49 /usr/share/ca-certificates/272182.pem
	I1104 11:48:22.577706   62682 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/272182.pem
	I1104 11:48:22.584102   62682 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/272182.pem /etc/ssl/certs/3ec20f2e.0"
	I1104 11:48:22.598943   62682 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1104 11:48:22.610306   62682 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1104 11:48:22.614794   62682 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  4 10:38 /usr/share/ca-certificates/minikubeCA.pem
	I1104 11:48:22.614841   62682 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1104 11:48:22.620630   62682 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1104 11:48:22.631176   62682 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/27218.pem && ln -fs /usr/share/ca-certificates/27218.pem /etc/ssl/certs/27218.pem"
	I1104 11:48:22.641989   62682 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/27218.pem
	I1104 11:48:22.646455   62682 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  4 10:49 /usr/share/ca-certificates/27218.pem
	I1104 11:48:22.646520   62682 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/27218.pem
	I1104 11:48:22.652486   62682 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/27218.pem /etc/ssl/certs/51391683.0"
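The ln -fs commands above build OpenSSL-style hash symlinks so the extra CAs are trusted system-wide: each PEM is linked into /etc/ssl/certs, then hashed with openssl x509 -hash and linked again as <hash>.0 (b5213941.0 for minikubeCA.pem in this run). The same pattern done by hand for one certificate:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # prints b5213941 here
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"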
	I1104 11:48:22.663494   62682 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1104 11:48:22.667715   62682 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1104 11:48:22.667775   62682 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-313751 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersi
on:v1.20.0 ClusterName:kubernetes-upgrade-313751 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.39 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizati
ons:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1104 11:48:22.667851   62682 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1104 11:48:22.667894   62682 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1104 11:48:22.708004   62682 cri.go:89] found id: ""
	I1104 11:48:22.708076   62682 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1104 11:48:22.717693   62682 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1104 11:48:22.727594   62682 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1104 11:48:22.737510   62682 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1104 11:48:22.737531   62682 kubeadm.go:157] found existing configuration files:
	
	I1104 11:48:22.737583   62682 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1104 11:48:22.747880   62682 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1104 11:48:22.747938   62682 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1104 11:48:22.757441   62682 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1104 11:48:22.766554   62682 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1104 11:48:22.766604   62682 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1104 11:48:22.776961   62682 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1104 11:48:22.785854   62682 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1104 11:48:22.785918   62682 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1104 11:48:22.796022   62682 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1104 11:48:22.805217   62682 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1104 11:48:22.805292   62682 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
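The block above is minikube's stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already points at the expected control-plane endpoint, and is otherwise removed so kubeadm can regenerate it (here none of the files exist yet, so every grep fails and each rm is a no-op). The same logic condensed into a single loop, with the endpoint used in this run:

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/$f" \
        || sudo rm -f "/etc/kubernetes/$f"
    done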
	I1104 11:48:22.814417   62682 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1104 11:48:22.918677   62682 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1104 11:48:22.918772   62682 kubeadm.go:310] [preflight] Running pre-flight checks
	I1104 11:48:23.071477   62682 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1104 11:48:23.071577   62682 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1104 11:48:23.071735   62682 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1104 11:48:23.241164   62682 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1104 11:48:23.351092   62682 out.go:235]   - Generating certificates and keys ...
	I1104 11:48:23.351215   62682 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1104 11:48:23.351301   62682 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1104 11:48:23.477618   62682 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1104 11:48:23.800218   62682 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1104 11:48:24.037506   62682 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1104 11:48:24.148156   62682 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1104 11:48:24.454264   62682 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1104 11:48:24.454591   62682 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-313751 localhost] and IPs [192.168.50.39 127.0.0.1 ::1]
	I1104 11:48:24.656842   62682 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1104 11:48:24.657037   62682 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-313751 localhost] and IPs [192.168.50.39 127.0.0.1 ::1]
	I1104 11:48:24.756783   62682 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1104 11:48:24.930743   62682 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1104 11:48:25.179129   62682 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1104 11:48:25.179395   62682 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1104 11:48:25.435693   62682 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1104 11:48:25.709058   62682 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1104 11:48:25.878479   62682 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1104 11:48:26.465813   62682 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1104 11:48:26.480949   62682 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1104 11:48:26.483725   62682 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1104 11:48:26.483798   62682 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1104 11:48:26.608887   62682 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1104 11:48:26.610808   62682 out.go:235]   - Booting up control plane ...
	I1104 11:48:26.610937   62682 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1104 11:48:26.611902   62682 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1104 11:48:26.621882   62682 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1104 11:48:26.622934   62682 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1104 11:48:26.628235   62682 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1104 11:49:06.618822   62682 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1104 11:49:06.619386   62682 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1104 11:49:06.619567   62682 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1104 11:49:11.620299   62682 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1104 11:49:11.620476   62682 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1104 11:49:21.620221   62682 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1104 11:49:21.620397   62682 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1104 11:49:41.620476   62682 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1104 11:49:41.620768   62682 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1104 11:50:21.622159   62682 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1104 11:50:21.622454   62682 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1104 11:50:21.622486   62682 kubeadm.go:310] 
	I1104 11:50:21.622522   62682 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1104 11:50:21.622572   62682 kubeadm.go:310] 		timed out waiting for the condition
	I1104 11:50:21.622588   62682 kubeadm.go:310] 
	I1104 11:50:21.622634   62682 kubeadm.go:310] 	This error is likely caused by:
	I1104 11:50:21.622685   62682 kubeadm.go:310] 		- The kubelet is not running
	I1104 11:50:21.622834   62682 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1104 11:50:21.622843   62682 kubeadm.go:310] 
	I1104 11:50:21.622928   62682 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1104 11:50:21.622958   62682 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1104 11:50:21.622990   62682 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1104 11:50:21.622996   62682 kubeadm.go:310] 
	I1104 11:50:21.623101   62682 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1104 11:50:21.623180   62682 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1104 11:50:21.623201   62682 kubeadm.go:310] 
	I1104 11:50:21.623285   62682 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1104 11:50:21.623365   62682 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1104 11:50:21.623433   62682 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1104 11:50:21.623494   62682 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1104 11:50:21.623501   62682 kubeadm.go:310] 
	I1104 11:50:21.624133   62682 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1104 11:50:21.624263   62682 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1104 11:50:21.624393   62682 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W1104 11:50:21.624508   62682 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-313751 localhost] and IPs [192.168.50.39 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-313751 localhost] and IPs [192.168.50.39 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1104 11:50:21.624547   62682 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
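After the wait-control-plane timeout, minikube runs kubeadm reset and retries init with the same config. The failure itself points at the kubelet never answering its health endpoint; the triage steps suggested in the output above, collected as commands to run on the node (CONTAINERID is a placeholder taken from the crictl ps output):

    # is the kubelet serving its health endpoint at all?
    curl -sSL http://localhost:10248/healthz
    systemctl status kubelet
    journalctl -xeu kubelet
    # look for crashed control-plane containers in CRI-O
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID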
	I1104 11:50:22.060335   62682 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1104 11:50:22.074364   62682 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1104 11:50:22.084051   62682 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1104 11:50:22.084074   62682 kubeadm.go:157] found existing configuration files:
	
	I1104 11:50:22.084133   62682 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1104 11:50:22.093899   62682 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1104 11:50:22.093967   62682 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1104 11:50:22.103757   62682 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1104 11:50:22.113214   62682 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1104 11:50:22.113300   62682 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1104 11:50:22.122708   62682 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1104 11:50:22.131331   62682 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1104 11:50:22.131379   62682 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1104 11:50:22.141018   62682 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1104 11:50:22.150104   62682 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1104 11:50:22.150152   62682 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1104 11:50:22.159002   62682 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1104 11:50:22.232644   62682 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1104 11:50:22.232735   62682 kubeadm.go:310] [preflight] Running pre-flight checks
	I1104 11:50:22.360901   62682 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1104 11:50:22.360995   62682 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1104 11:50:22.361102   62682 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1104 11:50:22.522007   62682 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1104 11:50:22.524799   62682 out.go:235]   - Generating certificates and keys ...
	I1104 11:50:22.524899   62682 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1104 11:50:22.524988   62682 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1104 11:50:22.525068   62682 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1104 11:50:22.525119   62682 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1104 11:50:22.525198   62682 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1104 11:50:22.525276   62682 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1104 11:50:22.525375   62682 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1104 11:50:22.525476   62682 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1104 11:50:22.525598   62682 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1104 11:50:22.525716   62682 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1104 11:50:22.525777   62682 kubeadm.go:310] [certs] Using the existing "sa" key
	I1104 11:50:22.525850   62682 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1104 11:50:22.663858   62682 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1104 11:50:23.035590   62682 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1104 11:50:23.375596   62682 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1104 11:50:23.524557   62682 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1104 11:50:23.543811   62682 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1104 11:50:23.544627   62682 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1104 11:50:23.544682   62682 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1104 11:50:23.682921   62682 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1104 11:50:23.684929   62682 out.go:235]   - Booting up control plane ...
	I1104 11:50:23.685072   62682 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1104 11:50:23.698233   62682 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1104 11:50:23.699443   62682 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1104 11:50:23.700497   62682 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1104 11:50:23.704038   62682 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1104 11:51:03.707072   62682 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1104 11:51:03.707176   62682 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1104 11:51:03.707391   62682 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1104 11:51:08.707935   62682 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1104 11:51:08.708180   62682 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1104 11:51:18.709135   62682 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1104 11:51:18.709430   62682 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1104 11:51:38.707669   62682 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1104 11:51:38.707845   62682 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1104 11:52:18.706554   62682 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1104 11:52:18.706840   62682 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1104 11:52:18.706851   62682 kubeadm.go:310] 
	I1104 11:52:18.706902   62682 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1104 11:52:18.706982   62682 kubeadm.go:310] 		timed out waiting for the condition
	I1104 11:52:18.707009   62682 kubeadm.go:310] 
	I1104 11:52:18.707067   62682 kubeadm.go:310] 	This error is likely caused by:
	I1104 11:52:18.707109   62682 kubeadm.go:310] 		- The kubelet is not running
	I1104 11:52:18.707209   62682 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1104 11:52:18.707217   62682 kubeadm.go:310] 
	I1104 11:52:18.707309   62682 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1104 11:52:18.707339   62682 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1104 11:52:18.707368   62682 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1104 11:52:18.707373   62682 kubeadm.go:310] 
	I1104 11:52:18.707468   62682 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1104 11:52:18.707540   62682 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1104 11:52:18.707545   62682 kubeadm.go:310] 
	I1104 11:52:18.707644   62682 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1104 11:52:18.707734   62682 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1104 11:52:18.707811   62682 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1104 11:52:18.707915   62682 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1104 11:52:18.707943   62682 kubeadm.go:310] 
	I1104 11:52:18.708653   62682 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1104 11:52:18.708771   62682 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1104 11:52:18.708860   62682 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1104 11:52:18.709000   62682 kubeadm.go:394] duration metric: took 3m56.041227444s to StartCluster
	I1104 11:52:18.709051   62682 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 11:52:18.709114   62682 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 11:52:18.752499   62682 cri.go:89] found id: ""
	I1104 11:52:18.752529   62682 logs.go:282] 0 containers: []
	W1104 11:52:18.752541   62682 logs.go:284] No container was found matching "kube-apiserver"
	I1104 11:52:18.752549   62682 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 11:52:18.752618   62682 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 11:52:18.793024   62682 cri.go:89] found id: ""
	I1104 11:52:18.793055   62682 logs.go:282] 0 containers: []
	W1104 11:52:18.793067   62682 logs.go:284] No container was found matching "etcd"
	I1104 11:52:18.793081   62682 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 11:52:18.793147   62682 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 11:52:18.829211   62682 cri.go:89] found id: ""
	I1104 11:52:18.829255   62682 logs.go:282] 0 containers: []
	W1104 11:52:18.829267   62682 logs.go:284] No container was found matching "coredns"
	I1104 11:52:18.829274   62682 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 11:52:18.829336   62682 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 11:52:18.868783   62682 cri.go:89] found id: ""
	I1104 11:52:18.868812   62682 logs.go:282] 0 containers: []
	W1104 11:52:18.868823   62682 logs.go:284] No container was found matching "kube-scheduler"
	I1104 11:52:18.868846   62682 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 11:52:18.868912   62682 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 11:52:18.905483   62682 cri.go:89] found id: ""
	I1104 11:52:18.905512   62682 logs.go:282] 0 containers: []
	W1104 11:52:18.905524   62682 logs.go:284] No container was found matching "kube-proxy"
	I1104 11:52:18.905532   62682 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 11:52:18.905599   62682 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 11:52:18.946490   62682 cri.go:89] found id: ""
	I1104 11:52:18.946520   62682 logs.go:282] 0 containers: []
	W1104 11:52:18.946538   62682 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 11:52:18.946546   62682 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 11:52:18.946603   62682 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 11:52:18.981976   62682 cri.go:89] found id: ""
	I1104 11:52:18.982012   62682 logs.go:282] 0 containers: []
	W1104 11:52:18.982024   62682 logs.go:284] No container was found matching "kindnet"
	I1104 11:52:18.982036   62682 logs.go:123] Gathering logs for describe nodes ...
	I1104 11:52:18.982051   62682 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 11:52:19.120540   62682 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 11:52:19.120569   62682 logs.go:123] Gathering logs for CRI-O ...
	I1104 11:52:19.120584   62682 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 11:52:19.268213   62682 logs.go:123] Gathering logs for container status ...
	I1104 11:52:19.268263   62682 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 11:52:19.315189   62682 logs.go:123] Gathering logs for kubelet ...
	I1104 11:52:19.315218   62682 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 11:52:19.373871   62682 logs.go:123] Gathering logs for dmesg ...
	I1104 11:52:19.373903   62682 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1104 11:52:19.390299   62682 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1104 11:52:19.390379   62682 out.go:270] * 
	W1104 11:52:19.390465   62682 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1104 11:52:19.390545   62682 out.go:270] * 
	W1104 11:52:19.391806   62682 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1104 11:52:19.589962   62682 out.go:201] 
	W1104 11:52:19.753612   62682 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1104 11:52:19.753737   62682 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1104 11:52:19.753769   62682 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1104 11:52:19.890559   62682 out.go:201] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-313751 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
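The captured failure ends with minikube's own hint (K8S_KUBELET_NOT_RUNNING): inspect the kubelet journal and retry with the systemd cgroup driver. A minimal follow-up sketch, not part of the test, reusing the profile and flags from the failing command above and only adding the flag named in the Suggestion line of the log; the `minikube ssh` invocations are assumed shortcuts for running the diagnostics the kubeadm output recommends:

    # Look at kubelet and container state on the node (diagnostics named in the kubeadm output)
    out/minikube-linux-amd64 ssh -p kubernetes-upgrade-313751 -- sudo journalctl -xeu kubelet
    out/minikube-linux-amd64 ssh -p kubernetes-upgrade-313751 -- sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a

    # Retry the same start, adding the cgroup-driver override from the Suggestion line
    out/minikube-linux-amd64 start -p kubernetes-upgrade-313751 --memory=2200 \
      --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2 \
      --container-runtime=crio --extra-config=kubelet.cgroup-driver=systemd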
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-313751
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-313751: (5.609459786s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-313751 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-313751 status --format={{.Host}}: exit status 7 (77.415348ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
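As a side note, `minikube status` exits non-zero whenever the profile is not fully running; straight after `minikube stop` the host is reported as Stopped and the command returned exit status 7 here, which the test treats as acceptable. A small shell sketch (a hypothetical wrapper, not part of the suite) of the same tolerance:

    # Query only the host state, as the test does
    out/minikube-linux-amd64 -p kubernetes-upgrade-313751 status --format={{.Host}}
    rc=$?
    # Non-zero with "Stopped" output is expected right after a stop, so don't fail on it
    if [ "$rc" -ne 0 ]; then
      echo "status exited with $rc (profile stopped - acceptable after stop)"
    fi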
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-313751 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-313751 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (38.398008366s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-313751 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-313751 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-313751 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (85.89911ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-313751] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19906
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19906-19898/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19906-19898/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.2 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-313751
	    minikube start -p kubernetes-upgrade-313751 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-3137512 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.2, by running:
	    
	    minikube start -p kubernetes-upgrade-313751 --kubernetes-version=v1.31.2
	    

                                                
                                                
** /stderr **
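The K8S_DOWNGRADE_UNSUPPORTED error above already lists the recovery paths. For reference, option 1 (recreate the cluster at the older version) would look like the following; the commands come from the suggestion block, with the test binary path and the driver/runtime flags of the failing command assumed:

    # Option 1 from the suggestion: recreate the cluster at v1.20.0
    out/minikube-linux-amd64 delete -p kubernetes-upgrade-313751
    out/minikube-linux-amd64 start -p kubernetes-upgrade-313751 --kubernetes-version=v1.20.0 \
      --driver=kvm2 --container-runtime=crio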
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-313751 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-313751 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (55.321359762s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-11-04 11:53:59.723242811 +0000 UTC m=+4631.078301593
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-313751 -n kubernetes-upgrade-313751
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-313751 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-313751 logs -n 25: (1.753128779s)
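The post-mortem below only inlines the last 25 log lines. For attaching logs to an upstream issue, the advice box earlier in the capture points at writing the full log to a file instead; a sketch of that, reusing the same profile:

    # Dump the complete minikube log for this profile to a file (per the advice box above)
    out/minikube-linux-amd64 -p kubernetes-upgrade-313751 logs --file=logs.txt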
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p cert-options-530572                | cert-options-530572       | jenkins | v1.34.0 | 04 Nov 24 11:50 UTC | 04 Nov 24 11:51 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-278038 sudo           | NoKubernetes-278038       | jenkins | v1.34.0 | 04 Nov 24 11:50 UTC |                     |
	|         | systemctl is-active --quiet           |                           |         |         |                     |                     |
	|         | service kubelet                       |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-278038                | NoKubernetes-278038       | jenkins | v1.34.0 | 04 Nov 24 11:50 UTC | 04 Nov 24 11:50 UTC |
	| start   | -p NoKubernetes-278038                | NoKubernetes-278038       | jenkins | v1.34.0 | 04 Nov 24 11:50 UTC | 04 Nov 24 11:51 UTC |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | cert-options-530572 ssh               | cert-options-530572       | jenkins | v1.34.0 | 04 Nov 24 11:51 UTC | 04 Nov 24 11:51 UTC |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-530572 -- sudo        | cert-options-530572       | jenkins | v1.34.0 | 04 Nov 24 11:51 UTC | 04 Nov 24 11:51 UTC |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-530572                | cert-options-530572       | jenkins | v1.34.0 | 04 Nov 24 11:51 UTC | 04 Nov 24 11:51 UTC |
	| start   | -p running-upgrade-975889             | minikube                  | jenkins | v1.26.0 | 04 Nov 24 11:51 UTC | 04 Nov 24 11:52 UTC |
	|         | --memory=2200 --vm-driver=kvm2        |                           |         |         |                     |                     |
	|         |  --container-runtime=crio             |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-278038 sudo           | NoKubernetes-278038       | jenkins | v1.34.0 | 04 Nov 24 11:51 UTC |                     |
	|         | systemctl is-active --quiet           |                           |         |         |                     |                     |
	|         | service kubelet                       |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-278038                | NoKubernetes-278038       | jenkins | v1.34.0 | 04 Nov 24 11:51 UTC | 04 Nov 24 11:51 UTC |
	| start   | -p stopped-upgrade-894910             | minikube                  | jenkins | v1.26.0 | 04 Nov 24 11:51 UTC | 04 Nov 24 11:52 UTC |
	|         | --memory=2200 --vm-driver=kvm2        |                           |         |         |                     |                     |
	|         |  --container-runtime=crio             |                           |         |         |                     |                     |
	| start   | -p running-upgrade-975889             | running-upgrade-975889    | jenkins | v1.34.0 | 04 Nov 24 11:52 UTC | 04 Nov 24 11:53 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-313751          | kubernetes-upgrade-313751 | jenkins | v1.34.0 | 04 Nov 24 11:52 UTC | 04 Nov 24 11:52 UTC |
	| start   | -p kubernetes-upgrade-313751          | kubernetes-upgrade-313751 | jenkins | v1.34.0 | 04 Nov 24 11:52 UTC | 04 Nov 24 11:53 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-894910 stop           | minikube                  | jenkins | v1.26.0 | 04 Nov 24 11:52 UTC | 04 Nov 24 11:52 UTC |
	| start   | -p stopped-upgrade-894910             | stopped-upgrade-894910    | jenkins | v1.34.0 | 04 Nov 24 11:52 UTC | 04 Nov 24 11:53 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p cert-expiration-292397             | cert-expiration-292397    | jenkins | v1.34.0 | 04 Nov 24 11:52 UTC | 04 Nov 24 11:53 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-313751          | kubernetes-upgrade-313751 | jenkins | v1.34.0 | 04 Nov 24 11:53 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-313751          | kubernetes-upgrade-313751 | jenkins | v1.34.0 | 04 Nov 24 11:53 UTC | 04 Nov 24 11:53 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-975889             | running-upgrade-975889    | jenkins | v1.34.0 | 04 Nov 24 11:53 UTC | 04 Nov 24 11:53 UTC |
	| start   | -p pause-706038 --memory=2048         | pause-706038              | jenkins | v1.34.0 | 04 Nov 24 11:53 UTC |                     |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2              |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-292397             | cert-expiration-292397    | jenkins | v1.34.0 | 04 Nov 24 11:53 UTC | 04 Nov 24 11:53 UTC |
	| start   | -p auto-528108 --memory=3072          | auto-528108               | jenkins | v1.34.0 | 04 Nov 24 11:53 UTC |                     |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                    |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-894910             | stopped-upgrade-894910    | jenkins | v1.34.0 | 04 Nov 24 11:53 UTC | 04 Nov 24 11:53 UTC |
	| start   | -p kindnet-528108                     | kindnet-528108            | jenkins | v1.34.0 | 04 Nov 24 11:53 UTC |                     |
	|         | --memory=3072                         |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                    |                           |         |         |                     |                     |
	|         | --cni=kindnet --driver=kvm2           |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
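	
	(Each multi-row entry in the table above is a single CLI invocation split across cells. Reconstructed from its rows, the final kindnet entry corresponds roughly to the command below; the binary name "minikube" is used generically here.)
	
	  minikube start -p kindnet-528108 \
	    --memory=3072 \
	    --alsologtostderr --wait=true \
	    --wait-timeout=15m \
	    --cni=kindnet --driver=kvm2 \
	    --container-runtime=crio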
	
	
	==> Last Start <==
	Log file created at: 2024/11/04 11:53:34
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1104 11:53:34.542216   70276 out.go:345] Setting OutFile to fd 1 ...
	I1104 11:53:34.542518   70276 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1104 11:53:34.542539   70276 out.go:358] Setting ErrFile to fd 2...
	I1104 11:53:34.542545   70276 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1104 11:53:34.542916   70276 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19906-19898/.minikube/bin
	I1104 11:53:34.543698   70276 out.go:352] Setting JSON to false
	I1104 11:53:34.544643   70276 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":9366,"bootTime":1730711849,"procs":213,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1104 11:53:34.544748   70276 start.go:139] virtualization: kvm guest
	I1104 11:53:34.546866   70276 out.go:177] * [kindnet-528108] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1104 11:53:34.548301   70276 out.go:177]   - MINIKUBE_LOCATION=19906
	I1104 11:53:34.548338   70276 notify.go:220] Checking for updates...
	I1104 11:53:34.550883   70276 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1104 11:53:34.552199   70276 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19906-19898/kubeconfig
	I1104 11:53:34.553553   70276 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19906-19898/.minikube
	I1104 11:53:34.554854   70276 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1104 11:53:34.556089   70276 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1104 11:53:34.558089   70276 config.go:182] Loaded profile config "auto-528108": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1104 11:53:34.558257   70276 config.go:182] Loaded profile config "kubernetes-upgrade-313751": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1104 11:53:34.558407   70276 config.go:182] Loaded profile config "pause-706038": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1104 11:53:34.558536   70276 driver.go:394] Setting default libvirt URI to qemu:///system
	I1104 11:53:34.595613   70276 out.go:177] * Using the kvm2 driver based on user configuration
	I1104 11:53:34.597060   70276 start.go:297] selected driver: kvm2
	I1104 11:53:34.597078   70276 start.go:901] validating driver "kvm2" against <nil>
	I1104 11:53:34.597109   70276 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1104 11:53:34.598239   70276 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1104 11:53:34.598354   70276 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19906-19898/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1104 11:53:34.615688   70276 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1104 11:53:34.615762   70276 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1104 11:53:34.616124   70276 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1104 11:53:34.616176   70276 cni.go:84] Creating CNI manager for "kindnet"
	I1104 11:53:34.616186   70276 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1104 11:53:34.616258   70276 start.go:340] cluster config:
	{Name:kindnet-528108 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:kindnet-528108 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1104 11:53:34.616400   70276 iso.go:125] acquiring lock: {Name:mk00c7dd6e02d348844f079f7574057e15cae010 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1104 11:53:34.618456   70276 out.go:177] * Starting "kindnet-528108" primary control-plane node in "kindnet-528108" cluster
	I1104 11:53:31.860340   70141 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1104 11:53:31.860412   70141 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1104 11:53:31.860424   70141 cache.go:56] Caching tarball of preloaded images
	I1104 11:53:31.860510   70141 preload.go:172] Found /home/jenkins/minikube-integration/19906-19898/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1104 11:53:31.860520   70141 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1104 11:53:31.860619   70141 profile.go:143] Saving config to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/auto-528108/config.json ...
	I1104 11:53:31.860635   70141 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/auto-528108/config.json: {Name:mkf967e3c72f57da93c0a4edc4429f8ec3f18f5a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 11:53:31.860862   70141 start.go:360] acquireMachinesLock for auto-528108: {Name:mka4ed965095cfb58c9b0f5916edff39b4236a2f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1104 11:53:34.290335   69851 main.go:141] libmachine: (pause-706038) DBG | domain pause-706038 has defined MAC address 52:54:00:d3:b4:4d in network mk-pause-706038
	I1104 11:53:34.290726   69851 main.go:141] libmachine: (pause-706038) DBG | unable to find current IP address of domain pause-706038 in network mk-pause-706038
	I1104 11:53:34.290738   69851 main.go:141] libmachine: (pause-706038) DBG | I1104 11:53:34.290704   69873 retry.go:31] will retry after 1.99967869s: waiting for machine to come up
	I1104 11:53:36.293045   69851 main.go:141] libmachine: (pause-706038) DBG | domain pause-706038 has defined MAC address 52:54:00:d3:b4:4d in network mk-pause-706038
	I1104 11:53:36.293734   69851 main.go:141] libmachine: (pause-706038) DBG | unable to find current IP address of domain pause-706038 in network mk-pause-706038
	I1104 11:53:36.293757   69851 main.go:141] libmachine: (pause-706038) DBG | I1104 11:53:36.293687   69873 retry.go:31] will retry after 3.452607285s: waiting for machine to come up
	I1104 11:53:34.619884   70276 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1104 11:53:34.619920   70276 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1104 11:53:34.619926   70276 cache.go:56] Caching tarball of preloaded images
	I1104 11:53:34.619996   70276 preload.go:172] Found /home/jenkins/minikube-integration/19906-19898/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1104 11:53:34.620006   70276 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1104 11:53:34.620088   70276 profile.go:143] Saving config to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/kindnet-528108/config.json ...
	I1104 11:53:34.620111   70276 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/kindnet-528108/config.json: {Name:mk150774bb2c8fd0328f657a247d2cba13e71d7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 11:53:34.620244   70276 start.go:360] acquireMachinesLock for kindnet-528108: {Name:mka4ed965095cfb58c9b0f5916edff39b4236a2f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1104 11:53:39.747865   69851 main.go:141] libmachine: (pause-706038) DBG | domain pause-706038 has defined MAC address 52:54:00:d3:b4:4d in network mk-pause-706038
	I1104 11:53:39.748292   69851 main.go:141] libmachine: (pause-706038) DBG | unable to find current IP address of domain pause-706038 in network mk-pause-706038
	I1104 11:53:39.748312   69851 main.go:141] libmachine: (pause-706038) DBG | I1104 11:53:39.748249   69873 retry.go:31] will retry after 4.314099268s: waiting for machine to come up
	I1104 11:53:44.063511   69851 main.go:141] libmachine: (pause-706038) DBG | domain pause-706038 has defined MAC address 52:54:00:d3:b4:4d in network mk-pause-706038
	I1104 11:53:44.063968   69851 main.go:141] libmachine: (pause-706038) DBG | unable to find current IP address of domain pause-706038 in network mk-pause-706038
	I1104 11:53:44.063983   69851 main.go:141] libmachine: (pause-706038) DBG | I1104 11:53:44.063914   69873 retry.go:31] will retry after 3.724109881s: waiting for machine to come up
	I1104 11:53:47.792705   69851 main.go:141] libmachine: (pause-706038) DBG | domain pause-706038 has defined MAC address 52:54:00:d3:b4:4d in network mk-pause-706038
	I1104 11:53:47.793261   69851 main.go:141] libmachine: (pause-706038) DBG | domain pause-706038 has current primary IP address 192.168.39.132 and MAC address 52:54:00:d3:b4:4d in network mk-pause-706038
	I1104 11:53:47.793329   69851 main.go:141] libmachine: (pause-706038) Found IP for machine: 192.168.39.132
	I1104 11:53:47.793347   69851 main.go:141] libmachine: (pause-706038) Reserving static IP address...
	I1104 11:53:47.793671   69851 main.go:141] libmachine: (pause-706038) DBG | unable to find host DHCP lease matching {name: "pause-706038", mac: "52:54:00:d3:b4:4d", ip: "192.168.39.132"} in network mk-pause-706038
	I1104 11:53:47.869360   69851 main.go:141] libmachine: (pause-706038) DBG | Getting to WaitForSSH function...
	I1104 11:53:47.869385   69851 main.go:141] libmachine: (pause-706038) Reserved static IP address: 192.168.39.132
	I1104 11:53:47.869429   69851 main.go:141] libmachine: (pause-706038) Waiting for SSH to be available...
	I1104 11:53:47.872217   69851 main.go:141] libmachine: (pause-706038) DBG | domain pause-706038 has defined MAC address 52:54:00:d3:b4:4d in network mk-pause-706038
	I1104 11:53:47.872671   69851 main.go:141] libmachine: (pause-706038) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:b4:4d", ip: ""} in network mk-pause-706038: {Iface:virbr1 ExpiryTime:2024-11-04 12:53:39 +0000 UTC Type:0 Mac:52:54:00:d3:b4:4d Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:minikube Clientid:01:52:54:00:d3:b4:4d}
	I1104 11:53:47.872693   69851 main.go:141] libmachine: (pause-706038) DBG | domain pause-706038 has defined IP address 192.168.39.132 and MAC address 52:54:00:d3:b4:4d in network mk-pause-706038
	I1104 11:53:47.872831   69851 main.go:141] libmachine: (pause-706038) DBG | Using SSH client type: external
	I1104 11:53:47.872864   69851 main.go:141] libmachine: (pause-706038) DBG | Using SSH private key: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/pause-706038/id_rsa (-rw-------)
	I1104 11:53:47.872910   69851 main.go:141] libmachine: (pause-706038) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.132 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19906-19898/.minikube/machines/pause-706038/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1104 11:53:47.872918   69851 main.go:141] libmachine: (pause-706038) DBG | About to run SSH command:
	I1104 11:53:47.872932   69851 main.go:141] libmachine: (pause-706038) DBG | exit 0
	I1104 11:53:47.997513   69851 main.go:141] libmachine: (pause-706038) DBG | SSH cmd err, output: <nil>: 
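	(Pieced together from the DBG lines above, the SSH reachability probe that produced this result is roughly the single command below; "exit 0" is the command executed on the guest, and the key path is taken verbatim from the log.)
	
	  /usr/bin/ssh -F /dev/null \
	    -o ConnectionAttempts=3 -o ConnectTimeout=10 \
	    -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet \
	    -o PasswordAuthentication=no -o ServerAliveInterval=60 \
	    -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
	    docker@192.168.39.132 -o IdentitiesOnly=yes \
	    -i /home/jenkins/minikube-integration/19906-19898/.minikube/machines/pause-706038/id_rsa \
	    -p 22 "exit 0"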
	I1104 11:53:47.997776   69851 main.go:141] libmachine: (pause-706038) KVM machine creation complete!
	I1104 11:53:47.998151   69851 main.go:141] libmachine: (pause-706038) Calling .GetConfigRaw
	I1104 11:53:47.999376   69851 main.go:141] libmachine: (pause-706038) Calling .DriverName
	I1104 11:53:47.999572   69851 main.go:141] libmachine: (pause-706038) Calling .DriverName
	I1104 11:53:47.999722   69851 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1104 11:53:47.999737   69851 main.go:141] libmachine: (pause-706038) Calling .GetState
	I1104 11:53:48.000980   69851 main.go:141] libmachine: Detecting operating system of created instance...
	I1104 11:53:48.000988   69851 main.go:141] libmachine: Waiting for SSH to be available...
	I1104 11:53:48.000994   69851 main.go:141] libmachine: Getting to WaitForSSH function...
	I1104 11:53:48.001001   69851 main.go:141] libmachine: (pause-706038) Calling .GetSSHHostname
	I1104 11:53:48.003541   69851 main.go:141] libmachine: (pause-706038) DBG | domain pause-706038 has defined MAC address 52:54:00:d3:b4:4d in network mk-pause-706038
	I1104 11:53:48.003890   69851 main.go:141] libmachine: (pause-706038) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:b4:4d", ip: ""} in network mk-pause-706038: {Iface:virbr1 ExpiryTime:2024-11-04 12:53:39 +0000 UTC Type:0 Mac:52:54:00:d3:b4:4d Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:pause-706038 Clientid:01:52:54:00:d3:b4:4d}
	I1104 11:53:48.004003   69851 main.go:141] libmachine: (pause-706038) DBG | domain pause-706038 has defined IP address 192.168.39.132 and MAC address 52:54:00:d3:b4:4d in network mk-pause-706038
	I1104 11:53:48.004066   69851 main.go:141] libmachine: (pause-706038) Calling .GetSSHPort
	I1104 11:53:48.004310   69851 main.go:141] libmachine: (pause-706038) Calling .GetSSHKeyPath
	I1104 11:53:48.004466   69851 main.go:141] libmachine: (pause-706038) Calling .GetSSHKeyPath
	I1104 11:53:48.004572   69851 main.go:141] libmachine: (pause-706038) Calling .GetSSHUsername
	I1104 11:53:48.004696   69851 main.go:141] libmachine: Using SSH client type: native
	I1104 11:53:48.004930   69851 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.132 22 <nil> <nil>}
	I1104 11:53:48.004942   69851 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1104 11:53:48.100606   69851 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1104 11:53:48.100618   69851 main.go:141] libmachine: Detecting the provisioner...
	I1104 11:53:48.100625   69851 main.go:141] libmachine: (pause-706038) Calling .GetSSHHostname
	I1104 11:53:48.103480   69851 main.go:141] libmachine: (pause-706038) DBG | domain pause-706038 has defined MAC address 52:54:00:d3:b4:4d in network mk-pause-706038
	I1104 11:53:48.103821   69851 main.go:141] libmachine: (pause-706038) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:b4:4d", ip: ""} in network mk-pause-706038: {Iface:virbr1 ExpiryTime:2024-11-04 12:53:39 +0000 UTC Type:0 Mac:52:54:00:d3:b4:4d Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:pause-706038 Clientid:01:52:54:00:d3:b4:4d}
	I1104 11:53:48.103857   69851 main.go:141] libmachine: (pause-706038) DBG | domain pause-706038 has defined IP address 192.168.39.132 and MAC address 52:54:00:d3:b4:4d in network mk-pause-706038
	I1104 11:53:48.103965   69851 main.go:141] libmachine: (pause-706038) Calling .GetSSHPort
	I1104 11:53:48.104167   69851 main.go:141] libmachine: (pause-706038) Calling .GetSSHKeyPath
	I1104 11:53:48.104328   69851 main.go:141] libmachine: (pause-706038) Calling .GetSSHKeyPath
	I1104 11:53:48.104419   69851 main.go:141] libmachine: (pause-706038) Calling .GetSSHUsername
	I1104 11:53:48.104575   69851 main.go:141] libmachine: Using SSH client type: native
	I1104 11:53:48.104800   69851 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.132 22 <nil> <nil>}
	I1104 11:53:48.104807   69851 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1104 11:53:48.205684   69851 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1104 11:53:48.205747   69851 main.go:141] libmachine: found compatible host: buildroot
	I1104 11:53:48.205753   69851 main.go:141] libmachine: Provisioning with buildroot...
	I1104 11:53:48.205763   69851 main.go:141] libmachine: (pause-706038) Calling .GetMachineName
	I1104 11:53:48.206045   69851 buildroot.go:166] provisioning hostname "pause-706038"
	I1104 11:53:48.206068   69851 main.go:141] libmachine: (pause-706038) Calling .GetMachineName
	I1104 11:53:48.206236   69851 main.go:141] libmachine: (pause-706038) Calling .GetSSHHostname
	I1104 11:53:48.209266   69851 main.go:141] libmachine: (pause-706038) DBG | domain pause-706038 has defined MAC address 52:54:00:d3:b4:4d in network mk-pause-706038
	I1104 11:53:48.209617   69851 main.go:141] libmachine: (pause-706038) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:b4:4d", ip: ""} in network mk-pause-706038: {Iface:virbr1 ExpiryTime:2024-11-04 12:53:39 +0000 UTC Type:0 Mac:52:54:00:d3:b4:4d Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:pause-706038 Clientid:01:52:54:00:d3:b4:4d}
	I1104 11:53:48.209637   69851 main.go:141] libmachine: (pause-706038) DBG | domain pause-706038 has defined IP address 192.168.39.132 and MAC address 52:54:00:d3:b4:4d in network mk-pause-706038
	I1104 11:53:48.209846   69851 main.go:141] libmachine: (pause-706038) Calling .GetSSHPort
	I1104 11:53:48.209999   69851 main.go:141] libmachine: (pause-706038) Calling .GetSSHKeyPath
	I1104 11:53:48.210158   69851 main.go:141] libmachine: (pause-706038) Calling .GetSSHKeyPath
	I1104 11:53:48.210262   69851 main.go:141] libmachine: (pause-706038) Calling .GetSSHUsername
	I1104 11:53:48.210362   69851 main.go:141] libmachine: Using SSH client type: native
	I1104 11:53:48.210561   69851 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.132 22 <nil> <nil>}
	I1104 11:53:48.210568   69851 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-706038 && echo "pause-706038" | sudo tee /etc/hostname
	I1104 11:53:49.306211   70141 start.go:364] duration metric: took 17.445318256s to acquireMachinesLock for "auto-528108"
	I1104 11:53:49.306281   70141 start.go:93] Provisioning new machine with config: &{Name:auto-528108 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:auto-528108 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1104 11:53:49.306421   70141 start.go:125] createHost starting for "" (driver="kvm2")
	I1104 11:53:47.598195   69560 ssh_runner.go:235] Completed: sudo /usr/bin/crictl stop --timeout=10 d5e8a9a02997599c165a4d750ea354fc7081bd8cb06b911f3614127d8d5a001e fe16d79f8e9a59a72cfccb454819e81fb9e91e1b815f87fcc9032562406ffc3a b3e0186f4da61a26333b28aed0c0ed3285ac3b2b3747d3fbeee3c49b3e780106 2159621d32bb25b5ea4cab2c224fe2dc826744c822ffad58c41ec3377b6afffa 76a056aa61aec8f1ad3b3012fef3ecadd0184bb75e9779a28b9c3426ef037639 d584ec5b35f710435f0f73582627188c4e11ea1ea8bf0d94fadc936a95351666 588b1583e7bb24e8ef0e4e06757a7396cec919aad8e77d9622bfd21710639f93 9502fccbcd0a43582eccd1b02585af54c5cfff88d18504df3d6f1ca6fb99abf0 adc9bdd21f7c5f46f09fef01bfc6a3b04b3761a62f2a3122003fd6b5b80d4edf 354e879c6899d9ddc0ebdb1aac8689cd5e41d2f484abed8609d6bfaf28c1559c 852b4499b0e696a7cd92d6be06afbfd09a2f9e72ec911f2bcd878888b00e6034 4c466b69cb5fe70e4c64106cc663327a0dda4ff7d51b2837113a54bb9ab28ca8 3c786a824e48082bce78fed0d6633eedb20c044132d31fca20269909e8df024a 55ce33081566b8e7e42426357a6b469bd750de7a4a1d4f21a5dd292224293e12 b8f3b38e875394685350eabea55b34477177aca1ac8c1426adf0f6cb14c616b7 385e93e469015f6ca7fda9d0a65e1f695f28641d2917c518cdecd1064f929ee9: (20.701581851s)
	W1104 11:53:47.598302   69560 kubeadm.go:644] Failed to stop kube-system containers, port conflicts may arise: stop: crictl: sudo /usr/bin/crictl stop --timeout=10 d5e8a9a02997599c165a4d750ea354fc7081bd8cb06b911f3614127d8d5a001e fe16d79f8e9a59a72cfccb454819e81fb9e91e1b815f87fcc9032562406ffc3a b3e0186f4da61a26333b28aed0c0ed3285ac3b2b3747d3fbeee3c49b3e780106 2159621d32bb25b5ea4cab2c224fe2dc826744c822ffad58c41ec3377b6afffa 76a056aa61aec8f1ad3b3012fef3ecadd0184bb75e9779a28b9c3426ef037639 d584ec5b35f710435f0f73582627188c4e11ea1ea8bf0d94fadc936a95351666 588b1583e7bb24e8ef0e4e06757a7396cec919aad8e77d9622bfd21710639f93 9502fccbcd0a43582eccd1b02585af54c5cfff88d18504df3d6f1ca6fb99abf0 adc9bdd21f7c5f46f09fef01bfc6a3b04b3761a62f2a3122003fd6b5b80d4edf 354e879c6899d9ddc0ebdb1aac8689cd5e41d2f484abed8609d6bfaf28c1559c 852b4499b0e696a7cd92d6be06afbfd09a2f9e72ec911f2bcd878888b00e6034 4c466b69cb5fe70e4c64106cc663327a0dda4ff7d51b2837113a54bb9ab28ca8 3c786a824e48082bce78fed0d6633eedb20c044132d31fca20269909e8df024a 55ce33081566b8e7e42426357a6b469bd750de7a4a1d4f21a5dd292224293e12 b8f3b38e875394685350eabea55b34477177aca1ac8c1426adf0f6cb14c616b7 385e93e469015f6ca7fda9d0a65e1f695f28641d2917c518cdecd1064f929ee9: Process exited with status 1
	stdout:
	d5e8a9a02997599c165a4d750ea354fc7081bd8cb06b911f3614127d8d5a001e
	fe16d79f8e9a59a72cfccb454819e81fb9e91e1b815f87fcc9032562406ffc3a
	b3e0186f4da61a26333b28aed0c0ed3285ac3b2b3747d3fbeee3c49b3e780106
	2159621d32bb25b5ea4cab2c224fe2dc826744c822ffad58c41ec3377b6afffa
	76a056aa61aec8f1ad3b3012fef3ecadd0184bb75e9779a28b9c3426ef037639
	d584ec5b35f710435f0f73582627188c4e11ea1ea8bf0d94fadc936a95351666
	588b1583e7bb24e8ef0e4e06757a7396cec919aad8e77d9622bfd21710639f93
	9502fccbcd0a43582eccd1b02585af54c5cfff88d18504df3d6f1ca6fb99abf0
	adc9bdd21f7c5f46f09fef01bfc6a3b04b3761a62f2a3122003fd6b5b80d4edf
	
	stderr:
	E1104 11:53:47.582403    3288 remote_runtime.go:366] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"354e879c6899d9ddc0ebdb1aac8689cd5e41d2f484abed8609d6bfaf28c1559c\": container with ID starting with 354e879c6899d9ddc0ebdb1aac8689cd5e41d2f484abed8609d6bfaf28c1559c not found: ID does not exist" containerID="354e879c6899d9ddc0ebdb1aac8689cd5e41d2f484abed8609d6bfaf28c1559c"
	time="2024-11-04T11:53:47Z" level=fatal msg="stopping the container \"354e879c6899d9ddc0ebdb1aac8689cd5e41d2f484abed8609d6bfaf28c1559c\": rpc error: code = NotFound desc = could not find container \"354e879c6899d9ddc0ebdb1aac8689cd5e41d2f484abed8609d6bfaf28c1559c\": container with ID starting with 354e879c6899d9ddc0ebdb1aac8689cd5e41d2f484abed8609d6bfaf28c1559c not found: ID does not exist"
	I1104 11:53:47.598359   69560 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1104 11:53:47.643304   69560 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1104 11:53:47.652819   69560 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5647 Nov  4 11:52 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5657 Nov  4 11:52 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5759 Nov  4 11:52 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5601 Nov  4 11:52 /etc/kubernetes/scheduler.conf
	
	I1104 11:53:47.652870   69560 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1104 11:53:47.661319   69560 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1104 11:53:47.669616   69560 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1104 11:53:47.677665   69560 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1104 11:53:47.677716   69560 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1104 11:53:47.686101   69560 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1104 11:53:47.694330   69560 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1104 11:53:47.694383   69560 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1104 11:53:47.702656   69560 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1104 11:53:47.711365   69560 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1104 11:53:47.756064   69560 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1104 11:53:49.025357   69560 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.269251294s)
	I1104 11:53:49.025393   69560 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1104 11:53:49.276343   69560 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1104 11:53:49.378360   69560 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
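	(The five Run lines above re-execute kubeadm's init phases individually against the regenerated config instead of a full "kubeadm init"; collected into one sequence they amount roughly to the following, with PATH pointing at the cached v1.31.2 binaries.)
	
	  sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml
	  sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml
	  sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml
	  sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml
	  sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml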
	I1104 11:53:48.324023   69851 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-706038
	
	I1104 11:53:48.324046   69851 main.go:141] libmachine: (pause-706038) Calling .GetSSHHostname
	I1104 11:53:48.327276   69851 main.go:141] libmachine: (pause-706038) DBG | domain pause-706038 has defined MAC address 52:54:00:d3:b4:4d in network mk-pause-706038
	I1104 11:53:48.327598   69851 main.go:141] libmachine: (pause-706038) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:b4:4d", ip: ""} in network mk-pause-706038: {Iface:virbr1 ExpiryTime:2024-11-04 12:53:39 +0000 UTC Type:0 Mac:52:54:00:d3:b4:4d Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:pause-706038 Clientid:01:52:54:00:d3:b4:4d}
	I1104 11:53:48.327618   69851 main.go:141] libmachine: (pause-706038) DBG | domain pause-706038 has defined IP address 192.168.39.132 and MAC address 52:54:00:d3:b4:4d in network mk-pause-706038
	I1104 11:53:48.327842   69851 main.go:141] libmachine: (pause-706038) Calling .GetSSHPort
	I1104 11:53:48.328074   69851 main.go:141] libmachine: (pause-706038) Calling .GetSSHKeyPath
	I1104 11:53:48.328228   69851 main.go:141] libmachine: (pause-706038) Calling .GetSSHKeyPath
	I1104 11:53:48.328360   69851 main.go:141] libmachine: (pause-706038) Calling .GetSSHUsername
	I1104 11:53:48.328489   69851 main.go:141] libmachine: Using SSH client type: native
	I1104 11:53:48.328674   69851 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.132 22 <nil> <nil>}
	I1104 11:53:48.328684   69851 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-706038' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-706038/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-706038' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1104 11:53:48.433912   69851 main.go:141] libmachine: SSH cmd err, output: <nil>: 
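	(The shell fragment above is idempotent: if no /etc/hosts line already ends in the hostname, it either rewrites an existing 127.0.1.1 entry or appends a new one, so afterwards the guest's /etc/hosts should contain a line like the following.)
	
	  127.0.1.1 pause-706038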
	I1104 11:53:48.433937   69851 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19906-19898/.minikube CaCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19906-19898/.minikube}
	I1104 11:53:48.433962   69851 buildroot.go:174] setting up certificates
	I1104 11:53:48.433971   69851 provision.go:84] configureAuth start
	I1104 11:53:48.433978   69851 main.go:141] libmachine: (pause-706038) Calling .GetMachineName
	I1104 11:53:48.434253   69851 main.go:141] libmachine: (pause-706038) Calling .GetIP
	I1104 11:53:48.436702   69851 main.go:141] libmachine: (pause-706038) DBG | domain pause-706038 has defined MAC address 52:54:00:d3:b4:4d in network mk-pause-706038
	I1104 11:53:48.436960   69851 main.go:141] libmachine: (pause-706038) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:b4:4d", ip: ""} in network mk-pause-706038: {Iface:virbr1 ExpiryTime:2024-11-04 12:53:39 +0000 UTC Type:0 Mac:52:54:00:d3:b4:4d Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:pause-706038 Clientid:01:52:54:00:d3:b4:4d}
	I1104 11:53:48.436973   69851 main.go:141] libmachine: (pause-706038) DBG | domain pause-706038 has defined IP address 192.168.39.132 and MAC address 52:54:00:d3:b4:4d in network mk-pause-706038
	I1104 11:53:48.437146   69851 main.go:141] libmachine: (pause-706038) Calling .GetSSHHostname
	I1104 11:53:48.439401   69851 main.go:141] libmachine: (pause-706038) DBG | domain pause-706038 has defined MAC address 52:54:00:d3:b4:4d in network mk-pause-706038
	I1104 11:53:48.439725   69851 main.go:141] libmachine: (pause-706038) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:b4:4d", ip: ""} in network mk-pause-706038: {Iface:virbr1 ExpiryTime:2024-11-04 12:53:39 +0000 UTC Type:0 Mac:52:54:00:d3:b4:4d Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:pause-706038 Clientid:01:52:54:00:d3:b4:4d}
	I1104 11:53:48.439741   69851 main.go:141] libmachine: (pause-706038) DBG | domain pause-706038 has defined IP address 192.168.39.132 and MAC address 52:54:00:d3:b4:4d in network mk-pause-706038
	I1104 11:53:48.439879   69851 provision.go:143] copyHostCerts
	I1104 11:53:48.439930   69851 exec_runner.go:144] found /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem, removing ...
	I1104 11:53:48.439943   69851 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem
	I1104 11:53:48.440005   69851 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem (1078 bytes)
	I1104 11:53:48.440089   69851 exec_runner.go:144] found /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem, removing ...
	I1104 11:53:48.440093   69851 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem
	I1104 11:53:48.440113   69851 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem (1123 bytes)
	I1104 11:53:48.440162   69851 exec_runner.go:144] found /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem, removing ...
	I1104 11:53:48.440164   69851 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem
	I1104 11:53:48.440183   69851 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem (1679 bytes)
	I1104 11:53:48.440225   69851 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem org=jenkins.pause-706038 san=[127.0.0.1 192.168.39.132 localhost minikube pause-706038]
	I1104 11:53:48.679928   69851 provision.go:177] copyRemoteCerts
	I1104 11:53:48.679974   69851 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1104 11:53:48.679994   69851 main.go:141] libmachine: (pause-706038) Calling .GetSSHHostname
	I1104 11:53:48.682745   69851 main.go:141] libmachine: (pause-706038) DBG | domain pause-706038 has defined MAC address 52:54:00:d3:b4:4d in network mk-pause-706038
	I1104 11:53:48.683093   69851 main.go:141] libmachine: (pause-706038) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:b4:4d", ip: ""} in network mk-pause-706038: {Iface:virbr1 ExpiryTime:2024-11-04 12:53:39 +0000 UTC Type:0 Mac:52:54:00:d3:b4:4d Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:pause-706038 Clientid:01:52:54:00:d3:b4:4d}
	I1104 11:53:48.683111   69851 main.go:141] libmachine: (pause-706038) DBG | domain pause-706038 has defined IP address 192.168.39.132 and MAC address 52:54:00:d3:b4:4d in network mk-pause-706038
	I1104 11:53:48.683270   69851 main.go:141] libmachine: (pause-706038) Calling .GetSSHPort
	I1104 11:53:48.683431   69851 main.go:141] libmachine: (pause-706038) Calling .GetSSHKeyPath
	I1104 11:53:48.683564   69851 main.go:141] libmachine: (pause-706038) Calling .GetSSHUsername
	I1104 11:53:48.683663   69851 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/pause-706038/id_rsa Username:docker}
	I1104 11:53:48.764474   69851 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1104 11:53:48.788695   69851 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1104 11:53:48.812302   69851 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1104 11:53:48.835991   69851 provision.go:87] duration metric: took 402.010103ms to configureAuth
	I1104 11:53:48.836012   69851 buildroot.go:189] setting minikube options for container-runtime
	I1104 11:53:48.836214   69851 config.go:182] Loaded profile config "pause-706038": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1104 11:53:48.836272   69851 main.go:141] libmachine: (pause-706038) Calling .GetSSHHostname
	I1104 11:53:48.838876   69851 main.go:141] libmachine: (pause-706038) DBG | domain pause-706038 has defined MAC address 52:54:00:d3:b4:4d in network mk-pause-706038
	I1104 11:53:48.839233   69851 main.go:141] libmachine: (pause-706038) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:b4:4d", ip: ""} in network mk-pause-706038: {Iface:virbr1 ExpiryTime:2024-11-04 12:53:39 +0000 UTC Type:0 Mac:52:54:00:d3:b4:4d Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:pause-706038 Clientid:01:52:54:00:d3:b4:4d}
	I1104 11:53:48.839253   69851 main.go:141] libmachine: (pause-706038) DBG | domain pause-706038 has defined IP address 192.168.39.132 and MAC address 52:54:00:d3:b4:4d in network mk-pause-706038
	I1104 11:53:48.839378   69851 main.go:141] libmachine: (pause-706038) Calling .GetSSHPort
	I1104 11:53:48.839550   69851 main.go:141] libmachine: (pause-706038) Calling .GetSSHKeyPath
	I1104 11:53:48.839686   69851 main.go:141] libmachine: (pause-706038) Calling .GetSSHKeyPath
	I1104 11:53:48.839771   69851 main.go:141] libmachine: (pause-706038) Calling .GetSSHUsername
	I1104 11:53:48.839875   69851 main.go:141] libmachine: Using SSH client type: native
	I1104 11:53:48.840068   69851 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.132 22 <nil> <nil>}
	I1104 11:53:48.840080   69851 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1104 11:53:49.063373   69851 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1104 11:53:49.063386   69851 main.go:141] libmachine: Checking connection to Docker...
	I1104 11:53:49.063394   69851 main.go:141] libmachine: (pause-706038) Calling .GetURL
	I1104 11:53:49.064609   69851 main.go:141] libmachine: (pause-706038) DBG | Using libvirt version 6000000
	I1104 11:53:49.067090   69851 main.go:141] libmachine: (pause-706038) DBG | domain pause-706038 has defined MAC address 52:54:00:d3:b4:4d in network mk-pause-706038
	I1104 11:53:49.067495   69851 main.go:141] libmachine: (pause-706038) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:b4:4d", ip: ""} in network mk-pause-706038: {Iface:virbr1 ExpiryTime:2024-11-04 12:53:39 +0000 UTC Type:0 Mac:52:54:00:d3:b4:4d Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:pause-706038 Clientid:01:52:54:00:d3:b4:4d}
	I1104 11:53:49.067515   69851 main.go:141] libmachine: (pause-706038) DBG | domain pause-706038 has defined IP address 192.168.39.132 and MAC address 52:54:00:d3:b4:4d in network mk-pause-706038
	I1104 11:53:49.067692   69851 main.go:141] libmachine: Docker is up and running!
	I1104 11:53:49.067700   69851 main.go:141] libmachine: Reticulating splines...
	I1104 11:53:49.067709   69851 client.go:171] duration metric: took 25.634235062s to LocalClient.Create
	I1104 11:53:49.067728   69851 start.go:167] duration metric: took 25.634296498s to libmachine.API.Create "pause-706038"
	I1104 11:53:49.067733   69851 start.go:293] postStartSetup for "pause-706038" (driver="kvm2")
	I1104 11:53:49.067742   69851 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1104 11:53:49.067757   69851 main.go:141] libmachine: (pause-706038) Calling .DriverName
	I1104 11:53:49.067985   69851 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1104 11:53:49.068014   69851 main.go:141] libmachine: (pause-706038) Calling .GetSSHHostname
	I1104 11:53:49.070230   69851 main.go:141] libmachine: (pause-706038) DBG | domain pause-706038 has defined MAC address 52:54:00:d3:b4:4d in network mk-pause-706038
	I1104 11:53:49.070566   69851 main.go:141] libmachine: (pause-706038) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:b4:4d", ip: ""} in network mk-pause-706038: {Iface:virbr1 ExpiryTime:2024-11-04 12:53:39 +0000 UTC Type:0 Mac:52:54:00:d3:b4:4d Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:pause-706038 Clientid:01:52:54:00:d3:b4:4d}
	I1104 11:53:49.070582   69851 main.go:141] libmachine: (pause-706038) DBG | domain pause-706038 has defined IP address 192.168.39.132 and MAC address 52:54:00:d3:b4:4d in network mk-pause-706038
	I1104 11:53:49.070725   69851 main.go:141] libmachine: (pause-706038) Calling .GetSSHPort
	I1104 11:53:49.070860   69851 main.go:141] libmachine: (pause-706038) Calling .GetSSHKeyPath
	I1104 11:53:49.070978   69851 main.go:141] libmachine: (pause-706038) Calling .GetSSHUsername
	I1104 11:53:49.071081   69851 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/pause-706038/id_rsa Username:docker}
	I1104 11:53:49.154662   69851 ssh_runner.go:195] Run: cat /etc/os-release
	I1104 11:53:49.159818   69851 info.go:137] Remote host: Buildroot 2023.02.9
	I1104 11:53:49.159834   69851 filesync.go:126] Scanning /home/jenkins/minikube-integration/19906-19898/.minikube/addons for local assets ...
	I1104 11:53:49.159918   69851 filesync.go:126] Scanning /home/jenkins/minikube-integration/19906-19898/.minikube/files for local assets ...
	I1104 11:53:49.160003   69851 filesync.go:149] local asset: /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem -> 272182.pem in /etc/ssl/certs
	I1104 11:53:49.160128   69851 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1104 11:53:49.172354   69851 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem --> /etc/ssl/certs/272182.pem (1708 bytes)
	I1104 11:53:49.197630   69851 start.go:296] duration metric: took 129.887374ms for postStartSetup
	I1104 11:53:49.197672   69851 main.go:141] libmachine: (pause-706038) Calling .GetConfigRaw
	I1104 11:53:49.198258   69851 main.go:141] libmachine: (pause-706038) Calling .GetIP
	I1104 11:53:49.200731   69851 main.go:141] libmachine: (pause-706038) DBG | domain pause-706038 has defined MAC address 52:54:00:d3:b4:4d in network mk-pause-706038
	I1104 11:53:49.201056   69851 main.go:141] libmachine: (pause-706038) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:b4:4d", ip: ""} in network mk-pause-706038: {Iface:virbr1 ExpiryTime:2024-11-04 12:53:39 +0000 UTC Type:0 Mac:52:54:00:d3:b4:4d Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:pause-706038 Clientid:01:52:54:00:d3:b4:4d}
	I1104 11:53:49.201078   69851 main.go:141] libmachine: (pause-706038) DBG | domain pause-706038 has defined IP address 192.168.39.132 and MAC address 52:54:00:d3:b4:4d in network mk-pause-706038
	I1104 11:53:49.201382   69851 profile.go:143] Saving config to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/pause-706038/config.json ...
	I1104 11:53:49.201598   69851 start.go:128] duration metric: took 25.794083101s to createHost
	I1104 11:53:49.201632   69851 main.go:141] libmachine: (pause-706038) Calling .GetSSHHostname
	I1104 11:53:49.203946   69851 main.go:141] libmachine: (pause-706038) DBG | domain pause-706038 has defined MAC address 52:54:00:d3:b4:4d in network mk-pause-706038
	I1104 11:53:49.204258   69851 main.go:141] libmachine: (pause-706038) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:b4:4d", ip: ""} in network mk-pause-706038: {Iface:virbr1 ExpiryTime:2024-11-04 12:53:39 +0000 UTC Type:0 Mac:52:54:00:d3:b4:4d Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:pause-706038 Clientid:01:52:54:00:d3:b4:4d}
	I1104 11:53:49.204269   69851 main.go:141] libmachine: (pause-706038) DBG | domain pause-706038 has defined IP address 192.168.39.132 and MAC address 52:54:00:d3:b4:4d in network mk-pause-706038
	I1104 11:53:49.204380   69851 main.go:141] libmachine: (pause-706038) Calling .GetSSHPort
	I1104 11:53:49.204539   69851 main.go:141] libmachine: (pause-706038) Calling .GetSSHKeyPath
	I1104 11:53:49.204643   69851 main.go:141] libmachine: (pause-706038) Calling .GetSSHKeyPath
	I1104 11:53:49.204743   69851 main.go:141] libmachine: (pause-706038) Calling .GetSSHUsername
	I1104 11:53:49.204909   69851 main.go:141] libmachine: Using SSH client type: native
	I1104 11:53:49.205096   69851 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.132 22 <nil> <nil>}
	I1104 11:53:49.205107   69851 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1104 11:53:49.306046   69851 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730721229.289196094
	
	I1104 11:53:49.306060   69851 fix.go:216] guest clock: 1730721229.289196094
	I1104 11:53:49.306068   69851 fix.go:229] Guest: 2024-11-04 11:53:49.289196094 +0000 UTC Remote: 2024-11-04 11:53:49.201605593 +0000 UTC m=+25.939231786 (delta=87.590501ms)
	I1104 11:53:49.306116   69851 fix.go:200] guest clock delta is within tolerance: 87.590501ms
	I1104 11:53:49.306122   69851 start.go:83] releasing machines lock for "pause-706038", held for 25.898715485s
	I1104 11:53:49.306151   69851 main.go:141] libmachine: (pause-706038) Calling .DriverName
	I1104 11:53:49.306438   69851 main.go:141] libmachine: (pause-706038) Calling .GetIP
	I1104 11:53:49.309791   69851 main.go:141] libmachine: (pause-706038) DBG | domain pause-706038 has defined MAC address 52:54:00:d3:b4:4d in network mk-pause-706038
	I1104 11:53:49.310256   69851 main.go:141] libmachine: (pause-706038) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:b4:4d", ip: ""} in network mk-pause-706038: {Iface:virbr1 ExpiryTime:2024-11-04 12:53:39 +0000 UTC Type:0 Mac:52:54:00:d3:b4:4d Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:pause-706038 Clientid:01:52:54:00:d3:b4:4d}
	I1104 11:53:49.310276   69851 main.go:141] libmachine: (pause-706038) DBG | domain pause-706038 has defined IP address 192.168.39.132 and MAC address 52:54:00:d3:b4:4d in network mk-pause-706038
	I1104 11:53:49.310495   69851 main.go:141] libmachine: (pause-706038) Calling .DriverName
	I1104 11:53:49.311118   69851 main.go:141] libmachine: (pause-706038) Calling .DriverName
	I1104 11:53:49.311300   69851 main.go:141] libmachine: (pause-706038) Calling .DriverName
	I1104 11:53:49.311387   69851 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1104 11:53:49.311432   69851 main.go:141] libmachine: (pause-706038) Calling .GetSSHHostname
	I1104 11:53:49.311501   69851 ssh_runner.go:195] Run: cat /version.json
	I1104 11:53:49.311516   69851 main.go:141] libmachine: (pause-706038) Calling .GetSSHHostname
	I1104 11:53:49.314479   69851 main.go:141] libmachine: (pause-706038) DBG | domain pause-706038 has defined MAC address 52:54:00:d3:b4:4d in network mk-pause-706038
	I1104 11:53:49.314735   69851 main.go:141] libmachine: (pause-706038) DBG | domain pause-706038 has defined MAC address 52:54:00:d3:b4:4d in network mk-pause-706038
	I1104 11:53:49.314881   69851 main.go:141] libmachine: (pause-706038) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:b4:4d", ip: ""} in network mk-pause-706038: {Iface:virbr1 ExpiryTime:2024-11-04 12:53:39 +0000 UTC Type:0 Mac:52:54:00:d3:b4:4d Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:pause-706038 Clientid:01:52:54:00:d3:b4:4d}
	I1104 11:53:49.314900   69851 main.go:141] libmachine: (pause-706038) DBG | domain pause-706038 has defined IP address 192.168.39.132 and MAC address 52:54:00:d3:b4:4d in network mk-pause-706038
	I1104 11:53:49.315011   69851 main.go:141] libmachine: (pause-706038) Calling .GetSSHPort
	I1104 11:53:49.315132   69851 main.go:141] libmachine: (pause-706038) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:b4:4d", ip: ""} in network mk-pause-706038: {Iface:virbr1 ExpiryTime:2024-11-04 12:53:39 +0000 UTC Type:0 Mac:52:54:00:d3:b4:4d Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:pause-706038 Clientid:01:52:54:00:d3:b4:4d}
	I1104 11:53:49.315154   69851 main.go:141] libmachine: (pause-706038) DBG | domain pause-706038 has defined IP address 192.168.39.132 and MAC address 52:54:00:d3:b4:4d in network mk-pause-706038
	I1104 11:53:49.315158   69851 main.go:141] libmachine: (pause-706038) Calling .GetSSHKeyPath
	I1104 11:53:49.315315   69851 main.go:141] libmachine: (pause-706038) Calling .GetSSHUsername
	I1104 11:53:49.315328   69851 main.go:141] libmachine: (pause-706038) Calling .GetSSHPort
	I1104 11:53:49.315502   69851 main.go:141] libmachine: (pause-706038) Calling .GetSSHKeyPath
	I1104 11:53:49.315499   69851 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/pause-706038/id_rsa Username:docker}
	I1104 11:53:49.315637   69851 main.go:141] libmachine: (pause-706038) Calling .GetSSHUsername
	I1104 11:53:49.315794   69851 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/pause-706038/id_rsa Username:docker}
	I1104 11:53:49.398719   69851 ssh_runner.go:195] Run: systemctl --version
	I1104 11:53:49.428280   69851 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1104 11:53:49.600397   69851 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1104 11:53:49.607075   69851 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1104 11:53:49.607132   69851 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1104 11:53:49.628299   69851 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1104 11:53:49.628322   69851 start.go:495] detecting cgroup driver to use...
	I1104 11:53:49.628439   69851 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1104 11:53:49.646312   69851 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1104 11:53:49.661599   69851 docker.go:217] disabling cri-docker service (if available) ...
	I1104 11:53:49.661657   69851 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1104 11:53:49.675439   69851 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1104 11:53:49.689398   69851 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1104 11:53:49.800929   69851 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1104 11:53:49.928943   69851 docker.go:233] disabling docker service ...
	I1104 11:53:49.929009   69851 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1104 11:53:49.942863   69851 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1104 11:53:49.955262   69851 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1104 11:53:50.098085   69851 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1104 11:53:50.245799   69851 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1104 11:53:50.268584   69851 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1104 11:53:50.289188   69851 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1104 11:53:50.289261   69851 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 11:53:50.300008   69851 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1104 11:53:50.300063   69851 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 11:53:50.312536   69851 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 11:53:50.327219   69851 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 11:53:50.341502   69851 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1104 11:53:50.352174   69851 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 11:53:50.365815   69851 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 11:53:50.388800   69851 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
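The sed invocations above rewrite individual keys in the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf, most importantly pause_image and cgroup_manager. A rough Go equivalent of that line-oriented replacement (path and key names taken from the log; this is an illustrative sketch, not minikube's code):

package main

import (
	"fmt"
	"os"
	"regexp"
)

// setTOMLKey replaces every line assigning key with `key = "value"`,
// mirroring the sed 's|^.*key = .*$|key = "value"|' pattern in the log.
func setTOMLKey(path, key, value string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	out := re.ReplaceAll(data, []byte(fmt.Sprintf("%s = %q", key, value)))
	return os.WriteFile(path, out, 0o644)
}

func main() {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	if err := setTOMLKey(conf, "pause_image", "registry.k8s.io/pause:3.10"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if err := setTOMLKey(conf, "cgroup_manager", "cgroupfs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}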
	I1104 11:53:50.402061   69851 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1104 11:53:50.413702   69851 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1104 11:53:50.413752   69851 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1104 11:53:50.429332   69851 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
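The sysctl probe fails because br_netfilter is not yet loaded, so the bridge keys do not exist under /proc/sys; the runner therefore falls back to modprobe and then enables IPv4 forwarding. A hedged Go sketch of that check-then-load-then-enable sequence (commands as logged, run via os/exec; needs root):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// If the bridge netfilter sysctl is missing, the br_netfilter module
	// is not loaded yet; load it instead of treating the probe as fatal.
	if err := exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		if err := exec.Command("modprobe", "br_netfilter").Run(); err != nil {
			fmt.Fprintln(os.Stderr, "modprobe br_netfilter:", err)
			os.Exit(1)
		}
	}
	// Enable IPv4 forwarding, as done above with `echo 1 > /proc/sys/net/ipv4/ip_forward`.
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644); err != nil {
		fmt.Fprintln(os.Stderr, "enable ip_forward:", err)
		os.Exit(1)
	}
}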
	I1104 11:53:50.441603   69851 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1104 11:53:50.578034   69851 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1104 11:53:50.693750   69851 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1104 11:53:50.693809   69851 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1104 11:53:50.698346   69851 start.go:563] Will wait 60s for crictl version
	I1104 11:53:50.698399   69851 ssh_runner.go:195] Run: which crictl
	I1104 11:53:50.702722   69851 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1104 11:53:50.745315   69851 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
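Before querying crictl, the runner waits up to 60s for /var/run/crio/crio.sock to appear ("Will wait 60s for socket path" above). A minimal polling sketch in Go; the interval is illustrative:

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until path exists or the timeout elapses.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("crio socket is ready")
}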
	I1104 11:53:50.745390   69851 ssh_runner.go:195] Run: crio --version
	I1104 11:53:50.774215   69851 ssh_runner.go:195] Run: crio --version
	I1104 11:53:50.814887   69851 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1104 11:53:49.308913   70141 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1104 11:53:49.309075   70141 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 11:53:49.309113   70141 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 11:53:49.325861   70141 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42967
	I1104 11:53:49.326339   70141 main.go:141] libmachine: () Calling .GetVersion
	I1104 11:53:49.326870   70141 main.go:141] libmachine: Using API Version  1
	I1104 11:53:49.326895   70141 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 11:53:49.327324   70141 main.go:141] libmachine: () Calling .GetMachineName
	I1104 11:53:49.327512   70141 main.go:141] libmachine: (auto-528108) Calling .GetMachineName
	I1104 11:53:49.327668   70141 main.go:141] libmachine: (auto-528108) Calling .DriverName
	I1104 11:53:49.327843   70141 start.go:159] libmachine.API.Create for "auto-528108" (driver="kvm2")
	I1104 11:53:49.327869   70141 client.go:168] LocalClient.Create starting
	I1104 11:53:49.327908   70141 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem
	I1104 11:53:49.327944   70141 main.go:141] libmachine: Decoding PEM data...
	I1104 11:53:49.327963   70141 main.go:141] libmachine: Parsing certificate...
	I1104 11:53:49.328034   70141 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem
	I1104 11:53:49.328062   70141 main.go:141] libmachine: Decoding PEM data...
	I1104 11:53:49.328079   70141 main.go:141] libmachine: Parsing certificate...
	I1104 11:53:49.328102   70141 main.go:141] libmachine: Running pre-create checks...
	I1104 11:53:49.328113   70141 main.go:141] libmachine: (auto-528108) Calling .PreCreateCheck
	I1104 11:53:49.328448   70141 main.go:141] libmachine: (auto-528108) Calling .GetConfigRaw
	I1104 11:53:49.328889   70141 main.go:141] libmachine: Creating machine...
	I1104 11:53:49.328906   70141 main.go:141] libmachine: (auto-528108) Calling .Create
	I1104 11:53:49.329017   70141 main.go:141] libmachine: (auto-528108) Creating KVM machine...
	I1104 11:53:49.330215   70141 main.go:141] libmachine: (auto-528108) DBG | found existing default KVM network
	I1104 11:53:49.331215   70141 main.go:141] libmachine: (auto-528108) DBG | I1104 11:53:49.331082   70417 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:ec:49:5a} reservation:<nil>}
	I1104 11:53:49.331855   70141 main.go:141] libmachine: (auto-528108) DBG | I1104 11:53:49.331778   70417 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:cd:b7:7f} reservation:<nil>}
	I1104 11:53:49.332825   70141 main.go:141] libmachine: (auto-528108) DBG | I1104 11:53:49.332744   70417 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002dcea0}
	I1104 11:53:49.332958   70141 main.go:141] libmachine: (auto-528108) DBG | created network xml: 
	I1104 11:53:49.332984   70141 main.go:141] libmachine: (auto-528108) DBG | <network>
	I1104 11:53:49.333000   70141 main.go:141] libmachine: (auto-528108) DBG |   <name>mk-auto-528108</name>
	I1104 11:53:49.333015   70141 main.go:141] libmachine: (auto-528108) DBG |   <dns enable='no'/>
	I1104 11:53:49.333034   70141 main.go:141] libmachine: (auto-528108) DBG |   
	I1104 11:53:49.333046   70141 main.go:141] libmachine: (auto-528108) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I1104 11:53:49.333064   70141 main.go:141] libmachine: (auto-528108) DBG |     <dhcp>
	I1104 11:53:49.333076   70141 main.go:141] libmachine: (auto-528108) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I1104 11:53:49.333084   70141 main.go:141] libmachine: (auto-528108) DBG |     </dhcp>
	I1104 11:53:49.333091   70141 main.go:141] libmachine: (auto-528108) DBG |   </ip>
	I1104 11:53:49.333098   70141 main.go:141] libmachine: (auto-528108) DBG |   
	I1104 11:53:49.333108   70141 main.go:141] libmachine: (auto-528108) DBG | </network>
	I1104 11:53:49.333118   70141 main.go:141] libmachine: (auto-528108) DBG | 
	I1104 11:53:49.339017   70141 main.go:141] libmachine: (auto-528108) DBG | trying to create private KVM network mk-auto-528108 192.168.61.0/24...
	I1104 11:53:49.420046   70141 main.go:141] libmachine: (auto-528108) DBG | private KVM network mk-auto-528108 192.168.61.0/24 created
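network.go above walks candidate private /24 subnets, skips 192.168.39.0/24 and 192.168.50.0/24 because existing libvirt networks already use them, and settles on 192.168.61.0/24 for the new mk-auto-528108 network. A simplified sketch of that skip-taken-subnets loop (the taken set is hard-coded here for illustration; minikube derives it from host interfaces and reservations):

package main

import "fmt"

// firstFreeSubnet returns the first candidate /24 that is not already taken.
func firstFreeSubnet(candidates []string, taken map[string]bool) (string, bool) {
	for _, cidr := range candidates {
		if taken[cidr] {
			fmt.Printf("skipping subnet %s that is taken\n", cidr)
			continue
		}
		return cidr, true
	}
	return "", false
}

func main() {
	candidates := []string{"192.168.39.0/24", "192.168.50.0/24", "192.168.61.0/24"}
	taken := map[string]bool{"192.168.39.0/24": true, "192.168.50.0/24": true}
	if cidr, ok := firstFreeSubnet(candidates, taken); ok {
		fmt.Println("using free private subnet", cidr)
	}
}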
	I1104 11:53:49.420090   70141 main.go:141] libmachine: (auto-528108) DBG | I1104 11:53:49.420016   70417 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19906-19898/.minikube
	I1104 11:53:49.420108   70141 main.go:141] libmachine: (auto-528108) Setting up store path in /home/jenkins/minikube-integration/19906-19898/.minikube/machines/auto-528108 ...
	I1104 11:53:49.420122   70141 main.go:141] libmachine: (auto-528108) Building disk image from file:///home/jenkins/minikube-integration/19906-19898/.minikube/cache/iso/amd64/minikube-v1.34.0-1730282777-19883-amd64.iso
	I1104 11:53:49.420143   70141 main.go:141] libmachine: (auto-528108) Downloading /home/jenkins/minikube-integration/19906-19898/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19906-19898/.minikube/cache/iso/amd64/minikube-v1.34.0-1730282777-19883-amd64.iso...
	I1104 11:53:49.687126   70141 main.go:141] libmachine: (auto-528108) DBG | I1104 11:53:49.686959   70417 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/auto-528108/id_rsa...
	I1104 11:53:49.824276   70141 main.go:141] libmachine: (auto-528108) DBG | I1104 11:53:49.824129   70417 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/auto-528108/auto-528108.rawdisk...
	I1104 11:53:49.824311   70141 main.go:141] libmachine: (auto-528108) DBG | Writing magic tar header
	I1104 11:53:49.824359   70141 main.go:141] libmachine: (auto-528108) DBG | Writing SSH key tar header
	I1104 11:53:49.824412   70141 main.go:141] libmachine: (auto-528108) DBG | I1104 11:53:49.824265   70417 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19906-19898/.minikube/machines/auto-528108 ...
	I1104 11:53:49.824443   70141 main.go:141] libmachine: (auto-528108) Setting executable bit set on /home/jenkins/minikube-integration/19906-19898/.minikube/machines/auto-528108 (perms=drwx------)
	I1104 11:53:49.824468   70141 main.go:141] libmachine: (auto-528108) Setting executable bit set on /home/jenkins/minikube-integration/19906-19898/.minikube/machines (perms=drwxr-xr-x)
	I1104 11:53:49.824479   70141 main.go:141] libmachine: (auto-528108) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/auto-528108
	I1104 11:53:49.824493   70141 main.go:141] libmachine: (auto-528108) Setting executable bit set on /home/jenkins/minikube-integration/19906-19898/.minikube (perms=drwxr-xr-x)
	I1104 11:53:49.824506   70141 main.go:141] libmachine: (auto-528108) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19906-19898/.minikube/machines
	I1104 11:53:49.824521   70141 main.go:141] libmachine: (auto-528108) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19906-19898/.minikube
	I1104 11:53:49.824530   70141 main.go:141] libmachine: (auto-528108) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19906-19898
	I1104 11:53:49.824581   70141 main.go:141] libmachine: (auto-528108) Setting executable bit set on /home/jenkins/minikube-integration/19906-19898 (perms=drwxrwxr-x)
	I1104 11:53:49.824605   70141 main.go:141] libmachine: (auto-528108) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1104 11:53:49.824616   70141 main.go:141] libmachine: (auto-528108) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1104 11:53:49.824625   70141 main.go:141] libmachine: (auto-528108) DBG | Checking permissions on dir: /home/jenkins
	I1104 11:53:49.824636   70141 main.go:141] libmachine: (auto-528108) DBG | Checking permissions on dir: /home
	I1104 11:53:49.824650   70141 main.go:141] libmachine: (auto-528108) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1104 11:53:49.824660   70141 main.go:141] libmachine: (auto-528108) DBG | Skipping /home - not owner
	I1104 11:53:49.824673   70141 main.go:141] libmachine: (auto-528108) Creating domain...
	I1104 11:53:49.825806   70141 main.go:141] libmachine: (auto-528108) define libvirt domain using xml: 
	I1104 11:53:49.825826   70141 main.go:141] libmachine: (auto-528108) <domain type='kvm'>
	I1104 11:53:49.825853   70141 main.go:141] libmachine: (auto-528108)   <name>auto-528108</name>
	I1104 11:53:49.825886   70141 main.go:141] libmachine: (auto-528108)   <memory unit='MiB'>3072</memory>
	I1104 11:53:49.825898   70141 main.go:141] libmachine: (auto-528108)   <vcpu>2</vcpu>
	I1104 11:53:49.825908   70141 main.go:141] libmachine: (auto-528108)   <features>
	I1104 11:53:49.825918   70141 main.go:141] libmachine: (auto-528108)     <acpi/>
	I1104 11:53:49.825930   70141 main.go:141] libmachine: (auto-528108)     <apic/>
	I1104 11:53:49.825951   70141 main.go:141] libmachine: (auto-528108)     <pae/>
	I1104 11:53:49.825970   70141 main.go:141] libmachine: (auto-528108)     
	I1104 11:53:49.825991   70141 main.go:141] libmachine: (auto-528108)   </features>
	I1104 11:53:49.826011   70141 main.go:141] libmachine: (auto-528108)   <cpu mode='host-passthrough'>
	I1104 11:53:49.826022   70141 main.go:141] libmachine: (auto-528108)   
	I1104 11:53:49.826031   70141 main.go:141] libmachine: (auto-528108)   </cpu>
	I1104 11:53:49.826042   70141 main.go:141] libmachine: (auto-528108)   <os>
	I1104 11:53:49.826052   70141 main.go:141] libmachine: (auto-528108)     <type>hvm</type>
	I1104 11:53:49.826063   70141 main.go:141] libmachine: (auto-528108)     <boot dev='cdrom'/>
	I1104 11:53:49.826073   70141 main.go:141] libmachine: (auto-528108)     <boot dev='hd'/>
	I1104 11:53:49.826085   70141 main.go:141] libmachine: (auto-528108)     <bootmenu enable='no'/>
	I1104 11:53:49.826094   70141 main.go:141] libmachine: (auto-528108)   </os>
	I1104 11:53:49.826104   70141 main.go:141] libmachine: (auto-528108)   <devices>
	I1104 11:53:49.826116   70141 main.go:141] libmachine: (auto-528108)     <disk type='file' device='cdrom'>
	I1104 11:53:49.826133   70141 main.go:141] libmachine: (auto-528108)       <source file='/home/jenkins/minikube-integration/19906-19898/.minikube/machines/auto-528108/boot2docker.iso'/>
	I1104 11:53:49.826149   70141 main.go:141] libmachine: (auto-528108)       <target dev='hdc' bus='scsi'/>
	I1104 11:53:49.826161   70141 main.go:141] libmachine: (auto-528108)       <readonly/>
	I1104 11:53:49.826168   70141 main.go:141] libmachine: (auto-528108)     </disk>
	I1104 11:53:49.826177   70141 main.go:141] libmachine: (auto-528108)     <disk type='file' device='disk'>
	I1104 11:53:49.826189   70141 main.go:141] libmachine: (auto-528108)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1104 11:53:49.826203   70141 main.go:141] libmachine: (auto-528108)       <source file='/home/jenkins/minikube-integration/19906-19898/.minikube/machines/auto-528108/auto-528108.rawdisk'/>
	I1104 11:53:49.826215   70141 main.go:141] libmachine: (auto-528108)       <target dev='hda' bus='virtio'/>
	I1104 11:53:49.826227   70141 main.go:141] libmachine: (auto-528108)     </disk>
	I1104 11:53:49.826239   70141 main.go:141] libmachine: (auto-528108)     <interface type='network'>
	I1104 11:53:49.826260   70141 main.go:141] libmachine: (auto-528108)       <source network='mk-auto-528108'/>
	I1104 11:53:49.826271   70141 main.go:141] libmachine: (auto-528108)       <model type='virtio'/>
	I1104 11:53:49.826279   70141 main.go:141] libmachine: (auto-528108)     </interface>
	I1104 11:53:49.826289   70141 main.go:141] libmachine: (auto-528108)     <interface type='network'>
	I1104 11:53:49.826297   70141 main.go:141] libmachine: (auto-528108)       <source network='default'/>
	I1104 11:53:49.826310   70141 main.go:141] libmachine: (auto-528108)       <model type='virtio'/>
	I1104 11:53:49.826321   70141 main.go:141] libmachine: (auto-528108)     </interface>
	I1104 11:53:49.826331   70141 main.go:141] libmachine: (auto-528108)     <serial type='pty'>
	I1104 11:53:49.826339   70141 main.go:141] libmachine: (auto-528108)       <target port='0'/>
	I1104 11:53:49.826349   70141 main.go:141] libmachine: (auto-528108)     </serial>
	I1104 11:53:49.826357   70141 main.go:141] libmachine: (auto-528108)     <console type='pty'>
	I1104 11:53:49.826368   70141 main.go:141] libmachine: (auto-528108)       <target type='serial' port='0'/>
	I1104 11:53:49.826376   70141 main.go:141] libmachine: (auto-528108)     </console>
	I1104 11:53:49.826385   70141 main.go:141] libmachine: (auto-528108)     <rng model='virtio'>
	I1104 11:53:49.826394   70141 main.go:141] libmachine: (auto-528108)       <backend model='random'>/dev/random</backend>
	I1104 11:53:49.826403   70141 main.go:141] libmachine: (auto-528108)     </rng>
	I1104 11:53:49.826410   70141 main.go:141] libmachine: (auto-528108)     
	I1104 11:53:49.826416   70141 main.go:141] libmachine: (auto-528108)     
	I1104 11:53:49.826435   70141 main.go:141] libmachine: (auto-528108)   </devices>
	I1104 11:53:49.826444   70141 main.go:141] libmachine: (auto-528108) </domain>
	I1104 11:53:49.826455   70141 main.go:141] libmachine: (auto-528108) 
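The domain XML printed above is what the kvm2 driver hands to libvirt: 2 vCPUs, 3072 MiB of RAM, the boot2docker ISO as a cdrom, the raw disk, and virtio NICs on both the machine network and the default network. A cut-down text/template sketch that renders the same shape (only a subset of the elements; the paths are placeholders, not the real store paths):

package main

import (
	"os"
	"text/template"
)

const domainTmpl = `<domain type='kvm'>
  <name>{{.Name}}</name>
  <memory unit='MiB'>{{.MemoryMiB}}</memory>
  <vcpu>{{.CPUs}}</vcpu>
  <os>
    <type>hvm</type>
    <boot dev='cdrom'/>
    <boot dev='hd'/>
  </os>
  <devices>
    <disk type='file' device='cdrom'>
      <source file='{{.ISOPath}}'/>
      <target dev='hdc' bus='scsi'/>
      <readonly/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='default' io='threads'/>
      <source file='{{.DiskPath}}'/>
      <target dev='hda' bus='virtio'/>
    </disk>
    <interface type='network'>
      <source network='{{.Network}}'/>
      <model type='virtio'/>
    </interface>
  </devices>
</domain>
`

type machine struct {
	Name      string
	MemoryMiB int
	CPUs      int
	ISOPath   string
	DiskPath  string
	Network   string
}

func main() {
	t := template.Must(template.New("domain").Parse(domainTmpl))
	m := machine{
		Name:      "auto-528108",
		MemoryMiB: 3072,
		CPUs:      2,
		ISOPath:   "/path/to/boot2docker.iso",
		DiskPath:  "/path/to/auto-528108.rawdisk",
		Network:   "mk-auto-528108",
	}
	if err := t.Execute(os.Stdout, m); err != nil {
		panic(err)
	}
}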
	I1104 11:53:49.831185   70141 main.go:141] libmachine: (auto-528108) DBG | domain auto-528108 has defined MAC address 52:54:00:06:ff:05 in network default
	I1104 11:53:49.831799   70141 main.go:141] libmachine: (auto-528108) Ensuring networks are active...
	I1104 11:53:49.831819   70141 main.go:141] libmachine: (auto-528108) DBG | domain auto-528108 has defined MAC address 52:54:00:01:5c:27 in network mk-auto-528108
	I1104 11:53:49.832553   70141 main.go:141] libmachine: (auto-528108) Ensuring network default is active
	I1104 11:53:49.832925   70141 main.go:141] libmachine: (auto-528108) Ensuring network mk-auto-528108 is active
	I1104 11:53:49.833485   70141 main.go:141] libmachine: (auto-528108) Getting domain xml...
	I1104 11:53:49.834353   70141 main.go:141] libmachine: (auto-528108) Creating domain...
	I1104 11:53:51.252542   70141 main.go:141] libmachine: (auto-528108) Waiting to get IP...
	I1104 11:53:51.253570   70141 main.go:141] libmachine: (auto-528108) DBG | domain auto-528108 has defined MAC address 52:54:00:01:5c:27 in network mk-auto-528108
	I1104 11:53:51.254049   70141 main.go:141] libmachine: (auto-528108) DBG | unable to find current IP address of domain auto-528108 in network mk-auto-528108
	I1104 11:53:51.254075   70141 main.go:141] libmachine: (auto-528108) DBG | I1104 11:53:51.254026   70417 retry.go:31] will retry after 207.220592ms: waiting for machine to come up
	I1104 11:53:51.462936   70141 main.go:141] libmachine: (auto-528108) DBG | domain auto-528108 has defined MAC address 52:54:00:01:5c:27 in network mk-auto-528108
	I1104 11:53:51.463467   70141 main.go:141] libmachine: (auto-528108) DBG | unable to find current IP address of domain auto-528108 in network mk-auto-528108
	I1104 11:53:51.463497   70141 main.go:141] libmachine: (auto-528108) DBG | I1104 11:53:51.463392   70417 retry.go:31] will retry after 280.166661ms: waiting for machine to come up
	I1104 11:53:51.745156   70141 main.go:141] libmachine: (auto-528108) DBG | domain auto-528108 has defined MAC address 52:54:00:01:5c:27 in network mk-auto-528108
	I1104 11:53:51.745819   70141 main.go:141] libmachine: (auto-528108) DBG | unable to find current IP address of domain auto-528108 in network mk-auto-528108
	I1104 11:53:51.745876   70141 main.go:141] libmachine: (auto-528108) DBG | I1104 11:53:51.745713   70417 retry.go:31] will retry after 442.292094ms: waiting for machine to come up
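While the freshly defined domain boots, the driver polls for a DHCP lease and backs off with a growing, jittered delay (207ms, 280ms, 442ms, ... in the retries above). A hedged sketch of that retry loop; lookupIP is a hypothetical stand-in for the driver's lease lookup, not a real minikube function:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP stands in for querying the libvirt DHCP leases for the domain's
// MAC address; it fails until the guest has obtained an address.
func lookupIP(domain string) (string, error) {
	return "", errors.New("unable to find current IP address")
}

func waitForIP(domain string, deadline time.Duration) (string, error) {
	stop := time.Now().Add(deadline)
	delay := 200 * time.Millisecond
	for time.Now().Before(stop) {
		if ip, err := lookupIP(domain); err == nil {
			return ip, nil
		}
		// Grow the delay and add a little jitter, as in the retry.go lines above.
		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		delay = delay * 3 / 2
	}
	return "", fmt.Errorf("timed out waiting for %s to get an IP", domain)
}

func main() {
	ip, err := waitForIP("auto-528108", 3*time.Second)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("machine IP:", ip)
}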
	I1104 11:53:50.816427   69851 main.go:141] libmachine: (pause-706038) Calling .GetIP
	I1104 11:53:50.819952   69851 main.go:141] libmachine: (pause-706038) DBG | domain pause-706038 has defined MAC address 52:54:00:d3:b4:4d in network mk-pause-706038
	I1104 11:53:50.820387   69851 main.go:141] libmachine: (pause-706038) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:b4:4d", ip: ""} in network mk-pause-706038: {Iface:virbr1 ExpiryTime:2024-11-04 12:53:39 +0000 UTC Type:0 Mac:52:54:00:d3:b4:4d Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:pause-706038 Clientid:01:52:54:00:d3:b4:4d}
	I1104 11:53:50.820408   69851 main.go:141] libmachine: (pause-706038) DBG | domain pause-706038 has defined IP address 192.168.39.132 and MAC address 52:54:00:d3:b4:4d in network mk-pause-706038
	I1104 11:53:50.820629   69851 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1104 11:53:50.825858   69851 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
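The bash one-liner above keeps the host.minikube.internal entry idempotent: drop any previous line for that name, append the current mapping, and copy the result back over /etc/hosts. The same idea in Go (path and hostname as in the log; this sketch writes the file directly rather than going through a temp file and sudo cp):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry removes any existing line ending in "\t<name>" and
// appends "<ip>\t<name>", so repeated runs leave exactly one entry.
func ensureHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.39.1", "host.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}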
	I1104 11:53:50.840972   69851 kubeadm.go:883] updating cluster {Name:pause-706038 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:pause-706038 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.132 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1104 11:53:50.841063   69851 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1104 11:53:50.841103   69851 ssh_runner.go:195] Run: sudo crictl images --output json
	I1104 11:53:50.876473   69851 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1104 11:53:50.876525   69851 ssh_runner.go:195] Run: which lz4
	I1104 11:53:50.881277   69851 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1104 11:53:50.886560   69851 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1104 11:53:50.886584   69851 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1104 11:53:52.187385   69851 crio.go:462] duration metric: took 1.306176643s to copy over tarball
	I1104 11:53:52.187451   69851 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1104 11:53:49.503216   69560 api_server.go:52] waiting for apiserver process to appear ...
	I1104 11:53:49.503299   69560 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 11:53:50.003598   69560 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 11:53:50.503361   69560 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 11:53:50.537563   69560 api_server.go:72] duration metric: took 1.034347866s to wait for apiserver process to appear ...
	I1104 11:53:50.537595   69560 api_server.go:88] waiting for apiserver healthz status ...
	I1104 11:53:50.537617   69560 api_server.go:253] Checking apiserver healthz at https://192.168.50.39:8443/healthz ...
	I1104 11:53:53.543390   69560 api_server.go:279] https://192.168.50.39:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1104 11:53:53.543425   69560 api_server.go:103] status: https://192.168.50.39:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1104 11:53:53.543438   69560 api_server.go:253] Checking apiserver healthz at https://192.168.50.39:8443/healthz ...
	I1104 11:53:53.556437   69560 api_server.go:279] https://192.168.50.39:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1104 11:53:53.556473   69560 api_server.go:103] status: https://192.168.50.39:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1104 11:53:54.038472   69560 api_server.go:253] Checking apiserver healthz at https://192.168.50.39:8443/healthz ...
	I1104 11:53:54.046086   69560 api_server.go:279] https://192.168.50.39:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1104 11:53:54.046123   69560 api_server.go:103] status: https://192.168.50.39:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
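The 403 ("system:anonymous cannot get path /healthz") and 500 ("rbac/bootstrap-roles failed") responses above are both expected while the apiserver is still bootstrapping its RBAC roles and priority classes; the wait loop simply keeps polling until /healthz returns 200. A sketch of such a poller in Go (endpoint from the log; TLS verification is skipped only because this illustrative client has no CA bundle):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Skipping verification keeps the sketch self-contained; a real
		// client would trust the cluster CA instead.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			// 403 before RBAC bootstrap and 500 while poststarthooks run
			// both mean "not ready yet", so keep polling.
			fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	if err := waitForHealthz("https://192.168.50.39:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}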
	I1104 11:53:54.558648   69851 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.371169341s)
	I1104 11:53:54.558667   69851 crio.go:469] duration metric: took 2.371263358s to extract the tarball
	I1104 11:53:54.558675   69851 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1104 11:53:54.594951   69851 ssh_runner.go:195] Run: sudo crictl images --output json
	I1104 11:53:54.643128   69851 crio.go:514] all images are preloaded for cri-o runtime.
	I1104 11:53:54.643142   69851 cache_images.go:84] Images are preloaded, skipping loading
	I1104 11:53:54.643149   69851 kubeadm.go:934] updating node { 192.168.39.132 8443 v1.31.2 crio true true} ...
	I1104 11:53:54.643274   69851 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-706038 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.132
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:pause-706038 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1104 11:53:54.643355   69851 ssh_runner.go:195] Run: crio config
	I1104 11:53:54.700490   69851 cni.go:84] Creating CNI manager for ""
	I1104 11:53:54.700504   69851 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1104 11:53:54.700514   69851 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1104 11:53:54.700545   69851 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.132 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-706038 NodeName:pause-706038 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.132"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.132 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1104 11:53:54.700669   69851 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.132
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-706038"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.132"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.132"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1104 11:53:54.700723   69851 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1104 11:53:54.710904   69851 binaries.go:44] Found k8s binaries, skipping transfer
	I1104 11:53:54.710957   69851 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1104 11:53:54.720524   69851 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1104 11:53:54.742889   69851 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1104 11:53:54.763996   69851 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2292 bytes)
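The rendered kubeadm config shown above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that gets copied to /var/tmp/minikube/kubeadm.yaml.new here. A small stdlib-only sketch that splits such a stream on "---" and reports each document's kind, which can be handy when sanity-checking a generated config:

package main

import (
	"fmt"
	"strings"
)

// kinds lists the `kind:` of each document in a multi-document YAML blob.
func kinds(multiDoc string) []string {
	var out []string
	for _, doc := range strings.Split(multiDoc, "\n---\n") {
		for _, line := range strings.Split(doc, "\n") {
			trimmed := strings.TrimSpace(line)
			if strings.HasPrefix(trimmed, "kind:") {
				out = append(out, strings.TrimSpace(strings.TrimPrefix(trimmed, "kind:")))
				break
			}
		}
	}
	return out
}

func main() {
	sample := "apiVersion: kubeadm.k8s.io/v1beta4\nkind: InitConfiguration\n---\napiVersion: kubeadm.k8s.io/v1beta4\nkind: ClusterConfiguration\n"
	fmt.Println(kinds(sample)) // [InitConfiguration ClusterConfiguration]
}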
	I1104 11:53:54.785487   69851 ssh_runner.go:195] Run: grep 192.168.39.132	control-plane.minikube.internal$ /etc/hosts
	I1104 11:53:54.789537   69851 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.132	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1104 11:53:54.806387   69851 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1104 11:53:54.943708   69851 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1104 11:53:54.961493   69851 certs.go:68] Setting up /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/pause-706038 for IP: 192.168.39.132
	I1104 11:53:54.961515   69851 certs.go:194] generating shared ca certs ...
	I1104 11:53:54.961533   69851 certs.go:226] acquiring lock for ca certs: {Name:mk4fb0469da6697654d0290fd25edddd463f47c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 11:53:54.961706   69851 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.key
	I1104 11:53:54.961751   69851 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.key
	I1104 11:53:54.961757   69851 certs.go:256] generating profile certs ...
	I1104 11:53:54.961830   69851 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/pause-706038/client.key
	I1104 11:53:54.961842   69851 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/pause-706038/client.crt with IP's: []
	I1104 11:53:55.195203   69851 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/pause-706038/client.crt ...
	I1104 11:53:55.195225   69851 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/pause-706038/client.crt: {Name:mk410108b16bbc8063d459ecc077f7f5728905d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 11:53:55.195464   69851 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/pause-706038/client.key ...
	I1104 11:53:55.195477   69851 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/pause-706038/client.key: {Name:mk53459c7c05c5fb15ad45592c3b3be2a4840395 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 11:53:55.195589   69851 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/pause-706038/apiserver.key.7fb4308f
	I1104 11:53:55.195603   69851 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/pause-706038/apiserver.crt.7fb4308f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.132]
	I1104 11:53:55.346621   69851 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/pause-706038/apiserver.crt.7fb4308f ...
	I1104 11:53:55.346634   69851 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/pause-706038/apiserver.crt.7fb4308f: {Name:mk639466e6c82eee8e96e3db47d4e1041dde1255 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 11:53:55.361987   69851 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/pause-706038/apiserver.key.7fb4308f ...
	I1104 11:53:55.362012   69851 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/pause-706038/apiserver.key.7fb4308f: {Name:mke072c472478a7729d32f6576c039843b169f4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 11:53:55.362151   69851 certs.go:381] copying /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/pause-706038/apiserver.crt.7fb4308f -> /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/pause-706038/apiserver.crt
	I1104 11:53:55.362239   69851 certs.go:385] copying /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/pause-706038/apiserver.key.7fb4308f -> /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/pause-706038/apiserver.key
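The "minikube" profile cert generated above is the apiserver serving certificate, issued with the service IP (10.96.0.1), loopback, and the node IP (192.168.39.132) as SANs. A self-contained crypto/x509 sketch that produces a certificate with the same SAN list (self-signed here for brevity; minikube signs it with the cluster CA and writes it under the profile directory):

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// Same IP SANs as in the log above.
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"),
			net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.39.132"),
		},
	}
	// Self-signed for the sketch; the real cert is signed by the minikube CA.
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
		panic(err)
	}
	fmt.Fprintln(os.Stderr, "wrote apiserver-style cert with IP SANs")
}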
	I1104 11:53:55.362288   69851 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/pause-706038/proxy-client.key
	I1104 11:53:55.362298   69851 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/pause-706038/proxy-client.crt with IP's: []
	I1104 11:53:55.443427   69851 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/pause-706038/proxy-client.crt ...
	I1104 11:53:55.443440   69851 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/pause-706038/proxy-client.crt: {Name:mk03d73964b20a0a27cb7c4912ed884db7f28a66 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 11:53:55.508138   69851 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/pause-706038/proxy-client.key ...
	I1104 11:53:55.508170   69851 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/pause-706038/proxy-client.key: {Name:mkee9f6d6efa9bae22ddf02c2855fe11a35abc11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 11:53:55.508523   69851 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218.pem (1338 bytes)
	W1104 11:53:55.508568   69851 certs.go:480] ignoring /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218_empty.pem, impossibly tiny 0 bytes
	I1104 11:53:55.508577   69851 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem (1675 bytes)
	I1104 11:53:55.508605   69851 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem (1078 bytes)
	I1104 11:53:55.508630   69851 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem (1123 bytes)
	I1104 11:53:55.508657   69851 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem (1679 bytes)
	I1104 11:53:55.508708   69851 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem (1708 bytes)
	I1104 11:53:55.509514   69851 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1104 11:53:55.543218   69851 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1104 11:53:55.570701   69851 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1104 11:53:55.598264   69851 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1104 11:53:55.623987   69851 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/pause-706038/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1104 11:53:55.650515   69851 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/pause-706038/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1104 11:53:55.680570   69851 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/pause-706038/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1104 11:53:55.712082   69851 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/pause-706038/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1104 11:53:55.781452   69851 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem --> /usr/share/ca-certificates/272182.pem (1708 bytes)
	I1104 11:53:55.818857   69851 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1104 11:53:55.840829   69851 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218.pem --> /usr/share/ca-certificates/27218.pem (1338 bytes)
	I1104 11:53:55.862612   69851 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1104 11:53:55.879331   69851 ssh_runner.go:195] Run: openssl version
	I1104 11:53:55.884916   69851 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1104 11:53:55.895298   69851 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1104 11:53:55.899532   69851 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  4 10:38 /usr/share/ca-certificates/minikubeCA.pem
	I1104 11:53:55.899579   69851 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1104 11:53:55.904977   69851 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1104 11:53:55.915141   69851 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/27218.pem && ln -fs /usr/share/ca-certificates/27218.pem /etc/ssl/certs/27218.pem"
	I1104 11:53:55.926408   69851 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/27218.pem
	I1104 11:53:55.930607   69851 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  4 10:49 /usr/share/ca-certificates/27218.pem
	I1104 11:53:55.930660   69851 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/27218.pem
	I1104 11:53:55.936137   69851 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/27218.pem /etc/ssl/certs/51391683.0"
	I1104 11:53:55.949444   69851 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/272182.pem && ln -fs /usr/share/ca-certificates/272182.pem /etc/ssl/certs/272182.pem"
	I1104 11:53:55.959761   69851 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/272182.pem
	I1104 11:53:55.964236   69851 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  4 10:49 /usr/share/ca-certificates/272182.pem
	I1104 11:53:55.964283   69851 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/272182.pem
	I1104 11:53:55.969739   69851 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/272182.pem /etc/ssl/certs/3ec20f2e.0"
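Each CA file placed under /usr/share/ca-certificates is also linked into /etc/ssl/certs under its OpenSSL subject hash (b5213941.0, 51391683.0, 3ec20f2e.0 above) so openssl-based clients can find it. A sketch that builds such a link by asking openssl for the hash, much like the logged commands do (requires root to create the symlink; the exec call mirrors the `openssl x509 -hash -noout` invocation above):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkByOpenSSLHash creates /etc/ssl/certs/<hash>.0 -> certPath, where <hash>
// comes from `openssl x509 -hash -noout -in certPath`.
func linkByOpenSSLHash(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// Remove a stale link first so the symlink call is idempotent.
	_ = os.Remove(link)
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkByOpenSSLHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}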
	I1104 11:53:55.979980   69851 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1104 11:53:55.984992   69851 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1104 11:53:55.985040   69851 kubeadm.go:392] StartCluster: {Name:pause-706038 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:pause-706038 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.132 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1104 11:53:55.985125   69851 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1104 11:53:55.985203   69851 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1104 11:53:56.034063   69851 cri.go:89] found id: ""
	I1104 11:53:56.034127   69851 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1104 11:53:56.044656   69851 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1104 11:53:56.057527   69851 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1104 11:53:56.070373   69851 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1104 11:53:56.070383   69851 kubeadm.go:157] found existing configuration files:
	
	I1104 11:53:56.070442   69851 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1104 11:53:56.081179   69851 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1104 11:53:56.081235   69851 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1104 11:53:56.091432   69851 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1104 11:53:56.100922   69851 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1104 11:53:56.100971   69851 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1104 11:53:56.110581   69851 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1104 11:53:56.120292   69851 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1104 11:53:56.120343   69851 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1104 11:53:56.129820   69851 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1104 11:53:56.138952   69851 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1104 11:53:56.139002   69851 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
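The steps above grep each existing kubeconfig for the expected control-plane endpoint and remove the file when the endpoint is absent (here the files simply do not exist yet, so every grep fails and every rm is a no-op). A compact Go sketch of that keep-or-remove decision (paths and endpoint as in the log; not minikube's actual cleanup code):

package main

import (
	"fmt"
	"os"
	"strings"
)

// removeIfMissingEndpoint deletes path unless it already points at the
// expected control-plane endpoint, mirroring the grep-then-rm steps above.
func removeIfMissingEndpoint(path, endpoint string) error {
	data, err := os.ReadFile(path)
	if err == nil && strings.Contains(string(data), endpoint) {
		return nil // config already targets the right endpoint, keep it
	}
	fmt.Printf("%q may not be in %s - will remove\n", endpoint, path)
	err = os.Remove(path)
	if os.IsNotExist(err) {
		return nil
	}
	return err
}

func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	for _, f := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		if err := removeIfMissingEndpoint(f, endpoint); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
}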
	I1104 11:53:56.148449   69851 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1104 11:53:56.242282   69851 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1104 11:53:56.242348   69851 kubeadm.go:310] [preflight] Running pre-flight checks
	I1104 11:53:56.344336   69851 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1104 11:53:56.344477   69851 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1104 11:53:56.344625   69851 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1104 11:53:56.352994   69851 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1104 11:53:52.189320   70141 main.go:141] libmachine: (auto-528108) DBG | domain auto-528108 has defined MAC address 52:54:00:01:5c:27 in network mk-auto-528108
	I1104 11:53:52.189851   70141 main.go:141] libmachine: (auto-528108) DBG | unable to find current IP address of domain auto-528108 in network mk-auto-528108
	I1104 11:53:52.189878   70141 main.go:141] libmachine: (auto-528108) DBG | I1104 11:53:52.189817   70417 retry.go:31] will retry after 457.867011ms: waiting for machine to come up
	I1104 11:53:52.649424   70141 main.go:141] libmachine: (auto-528108) DBG | domain auto-528108 has defined MAC address 52:54:00:01:5c:27 in network mk-auto-528108
	I1104 11:53:52.649945   70141 main.go:141] libmachine: (auto-528108) DBG | unable to find current IP address of domain auto-528108 in network mk-auto-528108
	I1104 11:53:52.649975   70141 main.go:141] libmachine: (auto-528108) DBG | I1104 11:53:52.649888   70417 retry.go:31] will retry after 483.560128ms: waiting for machine to come up
	I1104 11:53:53.135564   70141 main.go:141] libmachine: (auto-528108) DBG | domain auto-528108 has defined MAC address 52:54:00:01:5c:27 in network mk-auto-528108
	I1104 11:53:53.136129   70141 main.go:141] libmachine: (auto-528108) DBG | unable to find current IP address of domain auto-528108 in network mk-auto-528108
	I1104 11:53:53.136175   70141 main.go:141] libmachine: (auto-528108) DBG | I1104 11:53:53.136089   70417 retry.go:31] will retry after 778.409759ms: waiting for machine to come up
	I1104 11:53:53.915989   70141 main.go:141] libmachine: (auto-528108) DBG | domain auto-528108 has defined MAC address 52:54:00:01:5c:27 in network mk-auto-528108
	I1104 11:53:53.916556   70141 main.go:141] libmachine: (auto-528108) DBG | unable to find current IP address of domain auto-528108 in network mk-auto-528108
	I1104 11:53:53.916578   70141 main.go:141] libmachine: (auto-528108) DBG | I1104 11:53:53.916519   70417 retry.go:31] will retry after 1.008264357s: waiting for machine to come up
	I1104 11:53:54.926408   70141 main.go:141] libmachine: (auto-528108) DBG | domain auto-528108 has defined MAC address 52:54:00:01:5c:27 in network mk-auto-528108
	I1104 11:53:54.927095   70141 main.go:141] libmachine: (auto-528108) DBG | unable to find current IP address of domain auto-528108 in network mk-auto-528108
	I1104 11:53:54.927124   70141 main.go:141] libmachine: (auto-528108) DBG | I1104 11:53:54.927054   70417 retry.go:31] will retry after 1.278954956s: waiting for machine to come up
	I1104 11:53:56.207711   70141 main.go:141] libmachine: (auto-528108) DBG | domain auto-528108 has defined MAC address 52:54:00:01:5c:27 in network mk-auto-528108
	I1104 11:53:56.208205   70141 main.go:141] libmachine: (auto-528108) DBG | unable to find current IP address of domain auto-528108 in network mk-auto-528108
	I1104 11:53:56.208233   70141 main.go:141] libmachine: (auto-528108) DBG | I1104 11:53:56.208153   70417 retry.go:31] will retry after 1.739508903s: waiting for machine to come up
	I1104 11:53:54.537980   69560 api_server.go:253] Checking apiserver healthz at https://192.168.50.39:8443/healthz ...
	I1104 11:53:54.544633   69560 api_server.go:279] https://192.168.50.39:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1104 11:53:54.544662   69560 api_server.go:103] status: https://192.168.50.39:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1104 11:53:55.038172   69560 api_server.go:253] Checking apiserver healthz at https://192.168.50.39:8443/healthz ...
	I1104 11:53:55.045701   69560 api_server.go:279] https://192.168.50.39:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1104 11:53:55.045736   69560 api_server.go:103] status: https://192.168.50.39:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1104 11:53:55.538310   69560 api_server.go:253] Checking apiserver healthz at https://192.168.50.39:8443/healthz ...
	I1104 11:53:55.544016   69560 api_server.go:279] https://192.168.50.39:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1104 11:53:55.544041   69560 api_server.go:103] status: https://192.168.50.39:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1104 11:53:56.038373   69560 api_server.go:253] Checking apiserver healthz at https://192.168.50.39:8443/healthz ...
	I1104 11:53:56.043363   69560 api_server.go:279] https://192.168.50.39:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1104 11:53:56.043392   69560 api_server.go:103] status: https://192.168.50.39:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1104 11:53:56.537757   69560 api_server.go:253] Checking apiserver healthz at https://192.168.50.39:8443/healthz ...
	I1104 11:53:56.543487   69560 api_server.go:279] https://192.168.50.39:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1104 11:53:56.543512   69560 api_server.go:103] status: https://192.168.50.39:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1104 11:53:57.038196   69560 api_server.go:253] Checking apiserver healthz at https://192.168.50.39:8443/healthz ...
	I1104 11:53:57.045747   69560 api_server.go:279] https://192.168.50.39:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1104 11:53:57.045771   69560 api_server.go:103] status: https://192.168.50.39:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1104 11:53:57.538416   69560 api_server.go:253] Checking apiserver healthz at https://192.168.50.39:8443/healthz ...
	I1104 11:53:57.544364   69560 api_server.go:279] https://192.168.50.39:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1104 11:53:57.544396   69560 api_server.go:103] status: https://192.168.50.39:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1104 11:53:58.037766   69560 api_server.go:253] Checking apiserver healthz at https://192.168.50.39:8443/healthz ...
	I1104 11:53:58.044068   69560 api_server.go:279] https://192.168.50.39:8443/healthz returned 200:
	ok
	I1104 11:53:58.052093   69560 api_server.go:141] control plane version: v1.31.2
	I1104 11:53:58.052121   69560 api_server.go:131] duration metric: took 7.514518097s to wait for apiserver health ...
	I1104 11:53:58.052132   69560 cni.go:84] Creating CNI manager for ""
	I1104 11:53:58.052141   69560 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1104 11:53:58.054056   69560 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1104 11:53:56.533386   69851 out.go:235]   - Generating certificates and keys ...
	I1104 11:53:56.533543   69851 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1104 11:53:56.533658   69851 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1104 11:53:56.533760   69851 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1104 11:53:56.583945   69851 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1104 11:53:56.839897   69851 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1104 11:53:56.944029   69851 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1104 11:53:57.013247   69851 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1104 11:53:57.013488   69851 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost pause-706038] and IPs [192.168.39.132 127.0.0.1 ::1]
	I1104 11:53:57.442619   69851 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1104 11:53:57.442880   69851 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost pause-706038] and IPs [192.168.39.132 127.0.0.1 ::1]
	I1104 11:53:57.664149   69851 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1104 11:53:57.825725   69851 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1104 11:53:57.887315   69851 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1104 11:53:57.887564   69851 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1104 11:53:58.191687   69851 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1104 11:53:58.055465   69560 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1104 11:53:58.065578   69560 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1104 11:53:58.084217   69560 system_pods.go:43] waiting for kube-system pods to appear ...
	I1104 11:53:58.084319   69560 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1104 11:53:58.084338   69560 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1104 11:53:58.097342   69560 system_pods.go:59] 8 kube-system pods found
	I1104 11:53:58.097388   69560 system_pods.go:61] "coredns-7c65d6cfc9-5dknx" [5c7e71a5-6666-42d3-91f0-5f56e1babf37] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1104 11:53:58.097398   69560 system_pods.go:61] "coredns-7c65d6cfc9-crm9f" [281b3bf8-840e-4fef-8862-6b460d1b2d15] Running
	I1104 11:53:58.097407   69560 system_pods.go:61] "etcd-kubernetes-upgrade-313751" [89a89bc4-35e4-484a-888e-663054f5f94a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1104 11:53:58.097417   69560 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-313751" [2f9d578b-9ee8-4056-82ee-4d4b0f1a67a7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1104 11:53:58.097431   69560 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-313751" [4d914711-08da-47b2-9e57-48e7a8640ef8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1104 11:53:58.097441   69560 system_pods.go:61] "kube-proxy-bkl6l" [35ff4334-3c18-4554-b985-cb63e3ef42af] Running
	I1104 11:53:58.097450   69560 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-313751" [d1135396-7c8a-4796-a45e-a464bf6ec04c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1104 11:53:58.097455   69560 system_pods.go:61] "storage-provisioner" [18850a76-5d7a-4b4b-af7b-2dd143625fe2] Running
	I1104 11:53:58.097466   69560 system_pods.go:74] duration metric: took 13.224772ms to wait for pod list to return data ...
	I1104 11:53:58.097475   69560 node_conditions.go:102] verifying NodePressure condition ...
	I1104 11:53:58.101009   69560 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1104 11:53:58.101038   69560 node_conditions.go:123] node cpu capacity is 2
	I1104 11:53:58.101051   69560 node_conditions.go:105] duration metric: took 3.568733ms to run NodePressure ...
	I1104 11:53:58.101072   69560 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1104 11:53:58.391777   69560 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1104 11:53:58.406618   69560 ops.go:34] apiserver oom_adj: -16
	I1104 11:53:58.406641   69560 kubeadm.go:597] duration metric: took 31.658092457s to restartPrimaryControlPlane
	I1104 11:53:58.406651   69560 kubeadm.go:394] duration metric: took 31.925678656s to StartCluster
	I1104 11:53:58.406667   69560 settings.go:142] acquiring lock: {Name:mk5550833c1f1a4ab4fbb2bf42ff8bd7f6341220 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 11:53:58.406745   69560 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19906-19898/kubeconfig
	I1104 11:53:58.407370   69560 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19906-19898/kubeconfig: {Name:mk164657c178e6abe086acb5c1cadb969cb2b0cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 11:53:58.407618   69560 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.39 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1104 11:53:58.407680   69560 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1104 11:53:58.407777   69560 addons.go:69] Setting storage-provisioner=true in profile "kubernetes-upgrade-313751"
	I1104 11:53:58.407794   69560 addons.go:234] Setting addon storage-provisioner=true in "kubernetes-upgrade-313751"
	W1104 11:53:58.407802   69560 addons.go:243] addon storage-provisioner should already be in state true
	I1104 11:53:58.407799   69560 addons.go:69] Setting default-storageclass=true in profile "kubernetes-upgrade-313751"
	I1104 11:53:58.407829   69560 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kubernetes-upgrade-313751"
	I1104 11:53:58.407834   69560 host.go:66] Checking if "kubernetes-upgrade-313751" exists ...
	I1104 11:53:58.407885   69560 config.go:182] Loaded profile config "kubernetes-upgrade-313751": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1104 11:53:58.408242   69560 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 11:53:58.408283   69560 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 11:53:58.408321   69560 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 11:53:58.408288   69560 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 11:53:58.411508   69560 out.go:177] * Verifying Kubernetes components...
	I1104 11:53:58.412930   69560 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1104 11:53:58.424140   69560 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35883
	I1104 11:53:58.424906   69560 main.go:141] libmachine: () Calling .GetVersion
	I1104 11:53:58.425046   69560 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38771
	I1104 11:53:58.425458   69560 main.go:141] libmachine: Using API Version  1
	I1104 11:53:58.425482   69560 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 11:53:58.425498   69560 main.go:141] libmachine: () Calling .GetVersion
	I1104 11:53:58.425889   69560 main.go:141] libmachine: () Calling .GetMachineName
	I1104 11:53:58.426010   69560 main.go:141] libmachine: Using API Version  1
	I1104 11:53:58.426037   69560 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 11:53:58.426362   69560 main.go:141] libmachine: () Calling .GetMachineName
	I1104 11:53:58.426417   69560 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 11:53:58.426462   69560 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 11:53:58.426511   69560 main.go:141] libmachine: (kubernetes-upgrade-313751) Calling .GetState
	I1104 11:53:58.429003   69560 kapi.go:59] client config for kubernetes-upgrade-313751: &rest.Config{Host:"https://192.168.50.39:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19906-19898/.minikube/profiles/kubernetes-upgrade-313751/client.crt", KeyFile:"/home/jenkins/minikube-integration/19906-19898/.minikube/profiles/kubernetes-upgrade-313751/client.key", CAFile:"/home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil)
, CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2437da0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1104 11:53:58.429329   69560 addons.go:234] Setting addon default-storageclass=true in "kubernetes-upgrade-313751"
	W1104 11:53:58.429347   69560 addons.go:243] addon default-storageclass should already be in state true
	I1104 11:53:58.429373   69560 host.go:66] Checking if "kubernetes-upgrade-313751" exists ...
	I1104 11:53:58.429745   69560 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 11:53:58.429785   69560 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 11:53:58.445679   69560 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36501
	I1104 11:53:58.446092   69560 main.go:141] libmachine: () Calling .GetVersion
	I1104 11:53:58.446602   69560 main.go:141] libmachine: Using API Version  1
	I1104 11:53:58.446626   69560 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 11:53:58.446994   69560 main.go:141] libmachine: () Calling .GetMachineName
	I1104 11:53:58.448417   69560 main.go:141] libmachine: (kubernetes-upgrade-313751) Calling .GetState
	I1104 11:53:58.450131   69560 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39615
	I1104 11:53:58.450734   69560 main.go:141] libmachine: () Calling .GetVersion
	I1104 11:53:58.451021   69560 main.go:141] libmachine: (kubernetes-upgrade-313751) Calling .DriverName
	I1104 11:53:58.451185   69560 main.go:141] libmachine: Using API Version  1
	I1104 11:53:58.451201   69560 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 11:53:58.451584   69560 main.go:141] libmachine: () Calling .GetMachineName
	I1104 11:53:58.452146   69560 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 11:53:58.452187   69560 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 11:53:58.453003   69560 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1104 11:53:58.312771   69851 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1104 11:53:58.391882   69851 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1104 11:53:58.642546   69851 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1104 11:53:58.729293   69851 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1104 11:53:58.729930   69851 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1104 11:53:58.735093   69851 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1104 11:53:58.458315   69560 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1104 11:53:58.458330   69560 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1104 11:53:58.458344   69560 main.go:141] libmachine: (kubernetes-upgrade-313751) Calling .GetSSHHostname
	I1104 11:53:58.462038   69560 main.go:141] libmachine: (kubernetes-upgrade-313751) DBG | domain kubernetes-upgrade-313751 has defined MAC address 52:54:00:3c:45:f8 in network mk-kubernetes-upgrade-313751
	I1104 11:53:58.462410   69560 main.go:141] libmachine: (kubernetes-upgrade-313751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:45:f8", ip: ""} in network mk-kubernetes-upgrade-313751: {Iface:virbr2 ExpiryTime:2024-11-04 12:52:37 +0000 UTC Type:0 Mac:52:54:00:3c:45:f8 Iaid: IPaddr:192.168.50.39 Prefix:24 Hostname:kubernetes-upgrade-313751 Clientid:01:52:54:00:3c:45:f8}
	I1104 11:53:58.462436   69560 main.go:141] libmachine: (kubernetes-upgrade-313751) DBG | domain kubernetes-upgrade-313751 has defined IP address 192.168.50.39 and MAC address 52:54:00:3c:45:f8 in network mk-kubernetes-upgrade-313751
	I1104 11:53:58.462660   69560 main.go:141] libmachine: (kubernetes-upgrade-313751) Calling .GetSSHPort
	I1104 11:53:58.462843   69560 main.go:141] libmachine: (kubernetes-upgrade-313751) Calling .GetSSHKeyPath
	I1104 11:53:58.462985   69560 main.go:141] libmachine: (kubernetes-upgrade-313751) Calling .GetSSHUsername
	I1104 11:53:58.463129   69560 sshutil.go:53] new ssh client: &{IP:192.168.50.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/kubernetes-upgrade-313751/id_rsa Username:docker}
	I1104 11:53:58.472452   69560 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45875
	I1104 11:53:58.472940   69560 main.go:141] libmachine: () Calling .GetVersion
	I1104 11:53:58.473475   69560 main.go:141] libmachine: Using API Version  1
	I1104 11:53:58.473497   69560 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 11:53:58.473864   69560 main.go:141] libmachine: () Calling .GetMachineName
	I1104 11:53:58.474047   69560 main.go:141] libmachine: (kubernetes-upgrade-313751) Calling .GetState
	I1104 11:53:58.476042   69560 main.go:141] libmachine: (kubernetes-upgrade-313751) Calling .DriverName
	I1104 11:53:58.476265   69560 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1104 11:53:58.476283   69560 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1104 11:53:58.476302   69560 main.go:141] libmachine: (kubernetes-upgrade-313751) Calling .GetSSHHostname
	I1104 11:53:58.479707   69560 main.go:141] libmachine: (kubernetes-upgrade-313751) DBG | domain kubernetes-upgrade-313751 has defined MAC address 52:54:00:3c:45:f8 in network mk-kubernetes-upgrade-313751
	I1104 11:53:58.480112   69560 main.go:141] libmachine: (kubernetes-upgrade-313751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:45:f8", ip: ""} in network mk-kubernetes-upgrade-313751: {Iface:virbr2 ExpiryTime:2024-11-04 12:52:37 +0000 UTC Type:0 Mac:52:54:00:3c:45:f8 Iaid: IPaddr:192.168.50.39 Prefix:24 Hostname:kubernetes-upgrade-313751 Clientid:01:52:54:00:3c:45:f8}
	I1104 11:53:58.480135   69560 main.go:141] libmachine: (kubernetes-upgrade-313751) DBG | domain kubernetes-upgrade-313751 has defined IP address 192.168.50.39 and MAC address 52:54:00:3c:45:f8 in network mk-kubernetes-upgrade-313751
	I1104 11:53:58.480391   69560 main.go:141] libmachine: (kubernetes-upgrade-313751) Calling .GetSSHPort
	I1104 11:53:58.480589   69560 main.go:141] libmachine: (kubernetes-upgrade-313751) Calling .GetSSHKeyPath
	I1104 11:53:58.480748   69560 main.go:141] libmachine: (kubernetes-upgrade-313751) Calling .GetSSHUsername
	I1104 11:53:58.480869   69560 sshutil.go:53] new ssh client: &{IP:192.168.50.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/kubernetes-upgrade-313751/id_rsa Username:docker}
	I1104 11:53:58.622031   69560 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1104 11:53:58.639528   69560 api_server.go:52] waiting for apiserver process to appear ...
	I1104 11:53:58.639620   69560 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 11:53:58.654367   69560 api_server.go:72] duration metric: took 246.714178ms to wait for apiserver process to appear ...
	I1104 11:53:58.654391   69560 api_server.go:88] waiting for apiserver healthz status ...
	I1104 11:53:58.654413   69560 api_server.go:253] Checking apiserver healthz at https://192.168.50.39:8443/healthz ...
	I1104 11:53:58.659589   69560 api_server.go:279] https://192.168.50.39:8443/healthz returned 200:
	ok
	I1104 11:53:58.660803   69560 api_server.go:141] control plane version: v1.31.2
	I1104 11:53:58.660828   69560 api_server.go:131] duration metric: took 6.429726ms to wait for apiserver health ...
	I1104 11:53:58.660839   69560 system_pods.go:43] waiting for kube-system pods to appear ...
	I1104 11:53:58.668503   69560 system_pods.go:59] 8 kube-system pods found
	I1104 11:53:58.668536   69560 system_pods.go:61] "coredns-7c65d6cfc9-5dknx" [5c7e71a5-6666-42d3-91f0-5f56e1babf37] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1104 11:53:58.668545   69560 system_pods.go:61] "coredns-7c65d6cfc9-crm9f" [281b3bf8-840e-4fef-8862-6b460d1b2d15] Running
	I1104 11:53:58.668556   69560 system_pods.go:61] "etcd-kubernetes-upgrade-313751" [89a89bc4-35e4-484a-888e-663054f5f94a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1104 11:53:58.668565   69560 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-313751" [2f9d578b-9ee8-4056-82ee-4d4b0f1a67a7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1104 11:53:58.668578   69560 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-313751" [4d914711-08da-47b2-9e57-48e7a8640ef8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1104 11:53:58.668593   69560 system_pods.go:61] "kube-proxy-bkl6l" [35ff4334-3c18-4554-b985-cb63e3ef42af] Running
	I1104 11:53:58.668601   69560 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-313751" [d1135396-7c8a-4796-a45e-a464bf6ec04c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1104 11:53:58.668606   69560 system_pods.go:61] "storage-provisioner" [18850a76-5d7a-4b4b-af7b-2dd143625fe2] Running
	I1104 11:53:58.668617   69560 system_pods.go:74] duration metric: took 7.770805ms to wait for pod list to return data ...
	I1104 11:53:58.668630   69560 kubeadm.go:582] duration metric: took 260.980249ms to wait for: map[apiserver:true system_pods:true]
	I1104 11:53:58.668646   69560 node_conditions.go:102] verifying NodePressure condition ...
	I1104 11:53:58.671738   69560 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1104 11:53:58.671757   69560 node_conditions.go:123] node cpu capacity is 2
	I1104 11:53:58.671768   69560 node_conditions.go:105] duration metric: took 3.116639ms to run NodePressure ...
	I1104 11:53:58.671782   69560 start.go:241] waiting for startup goroutines ...
	I1104 11:53:58.755523   69560 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1104 11:53:58.768165   69560 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1104 11:53:59.629590   69560 main.go:141] libmachine: Making call to close driver server
	I1104 11:53:59.629814   69560 main.go:141] libmachine: (kubernetes-upgrade-313751) Calling .Close
	I1104 11:53:59.629784   69560 main.go:141] libmachine: Making call to close driver server
	I1104 11:53:59.629948   69560 main.go:141] libmachine: (kubernetes-upgrade-313751) Calling .Close
	I1104 11:53:59.632228   69560 main.go:141] libmachine: (kubernetes-upgrade-313751) DBG | Closing plugin on server side
	I1104 11:53:59.632249   69560 main.go:141] libmachine: (kubernetes-upgrade-313751) DBG | Closing plugin on server side
	I1104 11:53:59.632275   69560 main.go:141] libmachine: Successfully made call to close driver server
	I1104 11:53:59.632284   69560 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 11:53:59.632293   69560 main.go:141] libmachine: Making call to close driver server
	I1104 11:53:59.632302   69560 main.go:141] libmachine: (kubernetes-upgrade-313751) Calling .Close
	I1104 11:53:59.632302   69560 main.go:141] libmachine: Successfully made call to close driver server
	I1104 11:53:59.632318   69560 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 11:53:59.632331   69560 main.go:141] libmachine: Making call to close driver server
	I1104 11:53:59.632341   69560 main.go:141] libmachine: (kubernetes-upgrade-313751) Calling .Close
	I1104 11:53:59.632585   69560 main.go:141] libmachine: (kubernetes-upgrade-313751) DBG | Closing plugin on server side
	I1104 11:53:59.632614   69560 main.go:141] libmachine: Successfully made call to close driver server
	I1104 11:53:59.632620   69560 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 11:53:59.632709   69560 main.go:141] libmachine: Successfully made call to close driver server
	I1104 11:53:59.632721   69560 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 11:53:59.645910   69560 main.go:141] libmachine: Making call to close driver server
	I1104 11:53:59.645929   69560 main.go:141] libmachine: (kubernetes-upgrade-313751) Calling .Close
	I1104 11:53:59.646243   69560 main.go:141] libmachine: Successfully made call to close driver server
	I1104 11:53:59.646257   69560 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 11:53:59.648991   69560 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1104 11:53:59.650200   69560 addons.go:510] duration metric: took 1.24252781s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1104 11:53:59.650231   69560 start.go:246] waiting for cluster config update ...
	I1104 11:53:59.650241   69560 start.go:255] writing updated cluster config ...
	I1104 11:53:59.650436   69560 ssh_runner.go:195] Run: rm -f paused
	I1104 11:53:59.705646   69560 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1104 11:53:59.707341   69560 out.go:177] * Done! kubectl is now configured to use "kubernetes-upgrade-313751" cluster and "default" namespace by default
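
Most of the 69560 lines above are minikube polling the apiserver's /healthz endpoint: it returns 500 while the poststarthook/rbac/bootstrap-roles hook is still pending, then flips to 200 once bootstrap completes and the restart of the primary control plane is declared done. A minimal Go sketch of that polling loop is shown below; the host/port are taken from the log, but skipping TLS verification is an assumption made only to keep the example self-contained (the real check in api_server.go authenticates with the profile's CA and client certificates).

```go
// Illustrative sketch, not minikube's actual api_server.go: poll the
// apiserver /healthz endpoint until it reports 200 OK, the way the
// "Checking apiserver healthz at ..." lines above do.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	url := "https://192.168.50.39:8443/healthz" // endpoint taken from the log above
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Assumption: skip certificate verification to keep the sketch short;
			// the real check uses the cluster CA and client cert/key.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	for {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("healthz ok")
				return
			}
			// A 500 response lists which post-start hooks (for example
			// poststarthook/rbac/bootstrap-roles) have not finished yet.
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond) // the log shows roughly 500ms between polls
	}
}
```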
	
	
	==> CRI-O <==
	Nov 04 11:54:00 kubernetes-upgrade-313751 crio[2266]: time="2024-11-04 11:54:00.528950920Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c9ad216b-6cfa-4321-8d98-f2fce9091b6b name=/runtime.v1.RuntimeService/Version
	Nov 04 11:54:00 kubernetes-upgrade-313751 crio[2266]: time="2024-11-04 11:54:00.530160023Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b929ec86-7ea9-44dd-9167-2d8589f5ce8e name=/runtime.v1.ImageService/ImageFsInfo
	Nov 04 11:54:00 kubernetes-upgrade-313751 crio[2266]: time="2024-11-04 11:54:00.530634713Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730721240530608782,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125701,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b929ec86-7ea9-44dd-9167-2d8589f5ce8e name=/runtime.v1.ImageService/ImageFsInfo
	Nov 04 11:54:00 kubernetes-upgrade-313751 crio[2266]: time="2024-11-04 11:54:00.531302960Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b5c76da5-a92d-481a-8c00-cdd190283447 name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 11:54:00 kubernetes-upgrade-313751 crio[2266]: time="2024-11-04 11:54:00.531405955Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b5c76da5-a92d-481a-8c00-cdd190283447 name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 11:54:00 kubernetes-upgrade-313751 crio[2266]: time="2024-11-04 11:54:00.531838718Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ed578c95e613806836d19b30a9125ae5951b24ea2c2c1e66feb979ec6906714f,PodSandboxId:ba7e3bc80a115538cf885ae67adefbe8fb6ecd479e155e752fa962c27a6a2b41,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730721234718652329,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-5dknx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c7e71a5-6666-42d3-91f0-5f56e1babf37,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:201c300eef4af8bbe228eae89c4fe3319cc7d824c3a6fa70de4d194b2bf455b6,PodSandboxId:d096d2c62106c6dd87bfbcc76a68376cb478b578ef7eeb796bcba25db146036c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730721234743492287,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bkl6l,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: 35ff4334-3c18-4554-b985-cb63e3ef42af,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0fd7ba124ceeee47f3975f9e6e00645774e255117549ceb6cdbd618dc58defd,PodSandboxId:9cbee4a3af7d9eb4071035b4549b496818cd194b58b840fdc3c5ceea04701b35,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730721234726289678,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-crm9f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 281b3bf8-840e-
4fef-8862-6b460d1b2d15,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:571c632ea46b651b194b72a0743f3218dadc2aeef7fc644e48ccb7b85130070d,PodSandboxId:8ed58d61f82643077ee2f025218b4162f81b29145e45fb27626d56a61179fbd6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt
:1730721230079006452,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-313751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c674394d0118e0e97c762327e9a72066,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bea7910e66079e673c47fae7b5e8410e63d6b4490076e6e0ba5c054bf5bd86b1,PodSandboxId:104401e0ac7420fc134fb49460c2f5bf6e1ea1796d1de46061b089a59d6c6678,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNN
ING,CreatedAt:1730721230060881064,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-313751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c210134ad5d53c3965705a849b3a40a3,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d59bcb8b846e5fe394cf9b208bf2da4138a039d791b48e53800cf88b7496108,PodSandboxId:28c244e43512c058e2e8307e50b18d00eb2cbbfe7d02b13e1a73a0636a2f3164,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,C
reatedAt:1730721230065827861,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-313751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0cda3b26aca9e8065703cbb495327d7,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59cb3946f70f64505b0cd135b709363fcc73aaff4ad5abf56e772aca68141f2d,PodSandboxId:dc51cac7e4f77c6387ec49a32ae67f87699d8fe56caf5b84505965df6c706ab0,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:173072
1230041684229,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-313751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0962ceb42598928ff34e4204ceb1e987,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d121202acd2ec01e8a3b75dfcb778815509cb4f9d5e3716bbfdd6fef4610f7e,PodSandboxId:9edf4a992c36269a022e7b25d424e35c782da99c942015e319f236336360701f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730721218462337341,Labels
:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18850a76-5d7a-4b4b-af7b-2dd143625fe2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5e8a9a02997599c165a4d750ea354fc7081bd8cb06b911f3614127d8d5a001e,PodSandboxId:9cbee4a3af7d9eb4071035b4549b496818cd194b58b840fdc3c5ceea04701b35,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1730721206484654482,Labels:map[string]string{io.kub
ernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-crm9f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 281b3bf8-840e-4fef-8862-6b460d1b2d15,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe16d79f8e9a59a72cfccb454819e81fb9e91e1b815f87fcc9032562406ffc3a,PodSandboxId:ba7e3bc80a115538cf885ae67adefbe8fb6ecd479e155e752fa962c27a6a2b41,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},Us
erSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1730721206295800317,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-5dknx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c7e71a5-6666-42d3-91f0-5f56e1babf37,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3e0186f4da61a26333b28aed0c0ed3285ac3b2b3747d3fbeee3c49b3e780106,PodSandboxId:8ed58d61f82643077ee2f025218b4162f81b29145e45fb
27626d56a61179fbd6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_EXITED,CreatedAt:1730721205316132716,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-313751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c674394d0118e0e97c762327e9a72066,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2159621d32bb25b5ea4cab2c224fe2dc826744c822ffad58c41ec3377b6afffa,PodSandboxId:dc51cac7e4f77c6387ec49a32
ae67f87699d8fe56caf5b84505965df6c706ab0,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1730721205262436416,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-313751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0962ceb42598928ff34e4204ceb1e987,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d584ec5b35f710435f0f73582627188c4e11ea1ea8bf0d94fadc936a95351666,PodSandboxId:9edf4a992c36269a022e7b25d424e35c782da99c942015e319f2363363607
01f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1730721205101581445,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18850a76-5d7a-4b4b-af7b-2dd143625fe2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76a056aa61aec8f1ad3b3012fef3ecadd0184bb75e9779a28b9c3426ef037639,PodSandboxId:104401e0ac7420fc134fb49460c2f5bf6e1ea1796d1de46061b089a59d6c6678,Metadata:
&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_EXITED,CreatedAt:1730721205159457883,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-313751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c210134ad5d53c3965705a849b3a40a3,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:588b1583e7bb24e8ef0e4e06757a7396cec919aad8e77d9622bfd21710639f93,PodSandboxId:28c244e43512c058e2e8307e50b18d00eb2cbbfe7d02b13e1a73a0636a2f3164,Metadata:&Conta
inerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1730721205046867664,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-313751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0cda3b26aca9e8065703cbb495327d7,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9502fccbcd0a43582eccd1b02585af54c5cfff88d18504df3d6f1ca6fb99abf0,PodSandboxId:d096d2c62106c6dd87bfbcc76a68376cb478b578ef7eeb796bcba25db146036c,Metadata:&ContainerMe
tadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_EXITED,CreatedAt:1730721204910573346,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bkl6l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35ff4334-3c18-4554-b985-cb63e3ef42af,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b5c76da5-a92d-481a-8c00-cdd190283447 name=/runtime.v1.RuntimeService/ListContainers
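	Note: the crio debug entries above record the kubelet's periodic CRI polling (Version, ImageFsInfo, ListContainers) against CRI-O on kubernetes-upgrade-313751. A minimal sketch for reproducing the same queries by hand, assuming the default CRI-O socket path and that crictl is available inside the minikube guest (both assumptions, not taken from this log):

		# open a shell on the node for this profile
		minikube ssh -p kubernetes-upgrade-313751

		# point crictl at the CRI-O socket (default path assumed)
		sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version        # Version RPC
		sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock imagefsinfo    # ImageFsInfo RPC
		sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a          # ListContainers RPC, running and exited attempts

	This only reissues the read-only RPCs seen in the dump; it does not change node state.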
	Nov 04 11:54:00 kubernetes-upgrade-313751 crio[2266]: time="2024-11-04 11:54:00.585550744Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1e5d0b8e-5215-439c-a0f7-6897d02df384 name=/runtime.v1.RuntimeService/Version
	Nov 04 11:54:00 kubernetes-upgrade-313751 crio[2266]: time="2024-11-04 11:54:00.585645795Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1e5d0b8e-5215-439c-a0f7-6897d02df384 name=/runtime.v1.RuntimeService/Version
	Nov 04 11:54:00 kubernetes-upgrade-313751 crio[2266]: time="2024-11-04 11:54:00.586807013Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ca32685b-b961-49f3-a40f-b5accc34d313 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 04 11:54:00 kubernetes-upgrade-313751 crio[2266]: time="2024-11-04 11:54:00.587438608Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730721240587402502,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125701,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ca32685b-b961-49f3-a40f-b5accc34d313 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 04 11:54:00 kubernetes-upgrade-313751 crio[2266]: time="2024-11-04 11:54:00.588286892Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a6bae008-15e3-4fbd-8da7-e70c7332d72c name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 11:54:00 kubernetes-upgrade-313751 crio[2266]: time="2024-11-04 11:54:00.588500437Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a6bae008-15e3-4fbd-8da7-e70c7332d72c name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 11:54:00 kubernetes-upgrade-313751 crio[2266]: time="2024-11-04 11:54:00.588983171Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ed578c95e613806836d19b30a9125ae5951b24ea2c2c1e66feb979ec6906714f,PodSandboxId:ba7e3bc80a115538cf885ae67adefbe8fb6ecd479e155e752fa962c27a6a2b41,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730721234718652329,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-5dknx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c7e71a5-6666-42d3-91f0-5f56e1babf37,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:201c300eef4af8bbe228eae89c4fe3319cc7d824c3a6fa70de4d194b2bf455b6,PodSandboxId:d096d2c62106c6dd87bfbcc76a68376cb478b578ef7eeb796bcba25db146036c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730721234743492287,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bkl6l,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: 35ff4334-3c18-4554-b985-cb63e3ef42af,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0fd7ba124ceeee47f3975f9e6e00645774e255117549ceb6cdbd618dc58defd,PodSandboxId:9cbee4a3af7d9eb4071035b4549b496818cd194b58b840fdc3c5ceea04701b35,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730721234726289678,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-crm9f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 281b3bf8-840e-
4fef-8862-6b460d1b2d15,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:571c632ea46b651b194b72a0743f3218dadc2aeef7fc644e48ccb7b85130070d,PodSandboxId:8ed58d61f82643077ee2f025218b4162f81b29145e45fb27626d56a61179fbd6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt
:1730721230079006452,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-313751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c674394d0118e0e97c762327e9a72066,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bea7910e66079e673c47fae7b5e8410e63d6b4490076e6e0ba5c054bf5bd86b1,PodSandboxId:104401e0ac7420fc134fb49460c2f5bf6e1ea1796d1de46061b089a59d6c6678,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNN
ING,CreatedAt:1730721230060881064,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-313751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c210134ad5d53c3965705a849b3a40a3,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d59bcb8b846e5fe394cf9b208bf2da4138a039d791b48e53800cf88b7496108,PodSandboxId:28c244e43512c058e2e8307e50b18d00eb2cbbfe7d02b13e1a73a0636a2f3164,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,C
reatedAt:1730721230065827861,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-313751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0cda3b26aca9e8065703cbb495327d7,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59cb3946f70f64505b0cd135b709363fcc73aaff4ad5abf56e772aca68141f2d,PodSandboxId:dc51cac7e4f77c6387ec49a32ae67f87699d8fe56caf5b84505965df6c706ab0,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:173072
1230041684229,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-313751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0962ceb42598928ff34e4204ceb1e987,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d121202acd2ec01e8a3b75dfcb778815509cb4f9d5e3716bbfdd6fef4610f7e,PodSandboxId:9edf4a992c36269a022e7b25d424e35c782da99c942015e319f236336360701f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730721218462337341,Labels
:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18850a76-5d7a-4b4b-af7b-2dd143625fe2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5e8a9a02997599c165a4d750ea354fc7081bd8cb06b911f3614127d8d5a001e,PodSandboxId:9cbee4a3af7d9eb4071035b4549b496818cd194b58b840fdc3c5ceea04701b35,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1730721206484654482,Labels:map[string]string{io.kub
ernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-crm9f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 281b3bf8-840e-4fef-8862-6b460d1b2d15,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe16d79f8e9a59a72cfccb454819e81fb9e91e1b815f87fcc9032562406ffc3a,PodSandboxId:ba7e3bc80a115538cf885ae67adefbe8fb6ecd479e155e752fa962c27a6a2b41,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},Us
erSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1730721206295800317,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-5dknx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c7e71a5-6666-42d3-91f0-5f56e1babf37,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3e0186f4da61a26333b28aed0c0ed3285ac3b2b3747d3fbeee3c49b3e780106,PodSandboxId:8ed58d61f82643077ee2f025218b4162f81b29145e45fb
27626d56a61179fbd6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_EXITED,CreatedAt:1730721205316132716,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-313751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c674394d0118e0e97c762327e9a72066,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2159621d32bb25b5ea4cab2c224fe2dc826744c822ffad58c41ec3377b6afffa,PodSandboxId:dc51cac7e4f77c6387ec49a32
ae67f87699d8fe56caf5b84505965df6c706ab0,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1730721205262436416,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-313751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0962ceb42598928ff34e4204ceb1e987,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d584ec5b35f710435f0f73582627188c4e11ea1ea8bf0d94fadc936a95351666,PodSandboxId:9edf4a992c36269a022e7b25d424e35c782da99c942015e319f2363363607
01f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1730721205101581445,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18850a76-5d7a-4b4b-af7b-2dd143625fe2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76a056aa61aec8f1ad3b3012fef3ecadd0184bb75e9779a28b9c3426ef037639,PodSandboxId:104401e0ac7420fc134fb49460c2f5bf6e1ea1796d1de46061b089a59d6c6678,Metadata:
&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_EXITED,CreatedAt:1730721205159457883,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-313751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c210134ad5d53c3965705a849b3a40a3,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:588b1583e7bb24e8ef0e4e06757a7396cec919aad8e77d9622bfd21710639f93,PodSandboxId:28c244e43512c058e2e8307e50b18d00eb2cbbfe7d02b13e1a73a0636a2f3164,Metadata:&Conta
inerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1730721205046867664,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-313751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0cda3b26aca9e8065703cbb495327d7,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9502fccbcd0a43582eccd1b02585af54c5cfff88d18504df3d6f1ca6fb99abf0,PodSandboxId:d096d2c62106c6dd87bfbcc76a68376cb478b578ef7eeb796bcba25db146036c,Metadata:&ContainerMe
tadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_EXITED,CreatedAt:1730721204910573346,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bkl6l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35ff4334-3c18-4554-b985-cb63e3ef42af,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a6bae008-15e3-4fbd-8da7-e70c7332d72c name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 11:54:00 kubernetes-upgrade-313751 crio[2266]: time="2024-11-04 11:54:00.596811309Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=09c4c5c0-f3b2-41e8-b28d-e1af6e795625 name=/runtime.v1.RuntimeService/ListPodSandbox
	Nov 04 11:54:00 kubernetes-upgrade-313751 crio[2266]: time="2024-11-04 11:54:00.597114030Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:9cbee4a3af7d9eb4071035b4549b496818cd194b58b840fdc3c5ceea04701b35,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-crm9f,Uid:281b3bf8-840e-4fef-8862-6b460d1b2d15,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1730721205142280901,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-crm9f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 281b3bf8-840e-4fef-8862-6b460d1b2d15,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-11-04T11:53:05.043821524Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ba7e3bc80a115538cf885ae67adefbe8fb6ecd479e155e752fa962c27a6a2b41,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-5dknx,Uid:5c7e71a5-6666-42d3-91f0-5f56e1babf37,Namespac
e:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1730721204998103183,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-5dknx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c7e71a5-6666-42d3-91f0-5f56e1babf37,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-11-04T11:53:05.062516037Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8ed58d61f82643077ee2f025218b4162f81b29145e45fb27626d56a61179fbd6,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-kubernetes-upgrade-313751,Uid:c674394d0118e0e97c762327e9a72066,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1730721204862937731,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-313751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c674394d0118e0e97c762327e9a7206
6,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: c674394d0118e0e97c762327e9a72066,kubernetes.io/config.seen: 2024-11-04T11:52:54.379592043Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:dc51cac7e4f77c6387ec49a32ae67f87699d8fe56caf5b84505965df6c706ab0,Metadata:&PodSandboxMetadata{Name:etcd-kubernetes-upgrade-313751,Uid:0962ceb42598928ff34e4204ceb1e987,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1730721204820728892,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-kubernetes-upgrade-313751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0962ceb42598928ff34e4204ceb1e987,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.50.39:2379,kubernetes.io/config.hash: 0962ceb42598928ff34e4204ceb1e987,kubernetes.io/config.seen: 2024-11-04T11:52:54.448945111Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&Pod
Sandbox{Id:104401e0ac7420fc134fb49460c2f5bf6e1ea1796d1de46061b089a59d6c6678,Metadata:&PodSandboxMetadata{Name:kube-scheduler-kubernetes-upgrade-313751,Uid:c210134ad5d53c3965705a849b3a40a3,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1730721204775694445,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-313751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c210134ad5d53c3965705a849b3a40a3,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: c210134ad5d53c3965705a849b3a40a3,kubernetes.io/config.seen: 2024-11-04T11:52:54.379593170Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:28c244e43512c058e2e8307e50b18d00eb2cbbfe7d02b13e1a73a0636a2f3164,Metadata:&PodSandboxMetadata{Name:kube-apiserver-kubernetes-upgrade-313751,Uid:a0cda3b26aca9e8065703cbb495327d7,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1730721204735349994,Label
s:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-313751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0cda3b26aca9e8065703cbb495327d7,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.50.39:8443,kubernetes.io/config.hash: a0cda3b26aca9e8065703cbb495327d7,kubernetes.io/config.seen: 2024-11-04T11:52:54.379588306Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:9edf4a992c36269a022e7b25d424e35c782da99c942015e319f236336360701f,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:18850a76-5d7a-4b4b-af7b-2dd143625fe2,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1730721204651465759,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: 18850a76-5d7a-4b4b-af7b-2dd143625fe2,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-11-04T11:53:05.804160470Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d096d2c62106c6dd87bfbcc76a68376cb478b578ef7eeb796bcba25db146036c,Metadata:&PodSandbo
xMetadata{Name:kube-proxy-bkl6l,Uid:35ff4334-3c18-4554-b985-cb63e3ef42af,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1730721204614528349,Labels:map[string]string{controller-revision-hash: 77987969cc,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-bkl6l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35ff4334-3c18-4554-b985-cb63e3ef42af,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-11-04T11:53:04.518973906Z,kubernetes.io/config.source: api,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=09c4c5c0-f3b2-41e8-b28d-e1af6e795625 name=/runtime.v1.RuntimeService/ListPodSandbox
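	The ListPodSandbox response above enumerates the kube-system sandboxes (Attempt:1) recreated after the upgrade restart. A hedged follow-up, assuming the same crictl setup as in the sketch above, for narrowing the listing to a single pod and dumping one sandbox's status (the pod name and sandbox ID prefix are taken from the listing above):

		# list only the matching coredns sandbox by pod name
		sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock pods --name coredns-7c65d6cfc9-crm9f

		# inspect a specific sandbox by ID prefix
		sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock inspectp 9cbee4a3af7d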
	Nov 04 11:54:00 kubernetes-upgrade-313751 crio[2266]: time="2024-11-04 11:54:00.597975473Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=62cec722-a608-4703-8419-e63f0f51823d name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 11:54:00 kubernetes-upgrade-313751 crio[2266]: time="2024-11-04 11:54:00.598059183Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=62cec722-a608-4703-8419-e63f0f51823d name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 11:54:00 kubernetes-upgrade-313751 crio[2266]: time="2024-11-04 11:54:00.598634022Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ed578c95e613806836d19b30a9125ae5951b24ea2c2c1e66feb979ec6906714f,PodSandboxId:ba7e3bc80a115538cf885ae67adefbe8fb6ecd479e155e752fa962c27a6a2b41,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730721234718652329,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-5dknx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c7e71a5-6666-42d3-91f0-5f56e1babf37,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:201c300eef4af8bbe228eae89c4fe3319cc7d824c3a6fa70de4d194b2bf455b6,PodSandboxId:d096d2c62106c6dd87bfbcc76a68376cb478b578ef7eeb796bcba25db146036c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730721234743492287,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bkl6l,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: 35ff4334-3c18-4554-b985-cb63e3ef42af,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0fd7ba124ceeee47f3975f9e6e00645774e255117549ceb6cdbd618dc58defd,PodSandboxId:9cbee4a3af7d9eb4071035b4549b496818cd194b58b840fdc3c5ceea04701b35,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730721234726289678,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-crm9f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 281b3bf8-840e-
4fef-8862-6b460d1b2d15,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:571c632ea46b651b194b72a0743f3218dadc2aeef7fc644e48ccb7b85130070d,PodSandboxId:8ed58d61f82643077ee2f025218b4162f81b29145e45fb27626d56a61179fbd6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt
:1730721230079006452,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-313751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c674394d0118e0e97c762327e9a72066,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bea7910e66079e673c47fae7b5e8410e63d6b4490076e6e0ba5c054bf5bd86b1,PodSandboxId:104401e0ac7420fc134fb49460c2f5bf6e1ea1796d1de46061b089a59d6c6678,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNN
ING,CreatedAt:1730721230060881064,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-313751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c210134ad5d53c3965705a849b3a40a3,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d59bcb8b846e5fe394cf9b208bf2da4138a039d791b48e53800cf88b7496108,PodSandboxId:28c244e43512c058e2e8307e50b18d00eb2cbbfe7d02b13e1a73a0636a2f3164,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,C
reatedAt:1730721230065827861,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-313751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0cda3b26aca9e8065703cbb495327d7,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59cb3946f70f64505b0cd135b709363fcc73aaff4ad5abf56e772aca68141f2d,PodSandboxId:dc51cac7e4f77c6387ec49a32ae67f87699d8fe56caf5b84505965df6c706ab0,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:173072
1230041684229,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-313751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0962ceb42598928ff34e4204ceb1e987,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d121202acd2ec01e8a3b75dfcb778815509cb4f9d5e3716bbfdd6fef4610f7e,PodSandboxId:9edf4a992c36269a022e7b25d424e35c782da99c942015e319f236336360701f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730721218462337341,Labels
:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18850a76-5d7a-4b4b-af7b-2dd143625fe2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5e8a9a02997599c165a4d750ea354fc7081bd8cb06b911f3614127d8d5a001e,PodSandboxId:9cbee4a3af7d9eb4071035b4549b496818cd194b58b840fdc3c5ceea04701b35,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1730721206484654482,Labels:map[string]string{io.kub
ernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-crm9f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 281b3bf8-840e-4fef-8862-6b460d1b2d15,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe16d79f8e9a59a72cfccb454819e81fb9e91e1b815f87fcc9032562406ffc3a,PodSandboxId:ba7e3bc80a115538cf885ae67adefbe8fb6ecd479e155e752fa962c27a6a2b41,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},Us
erSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1730721206295800317,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-5dknx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c7e71a5-6666-42d3-91f0-5f56e1babf37,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3e0186f4da61a26333b28aed0c0ed3285ac3b2b3747d3fbeee3c49b3e780106,PodSandboxId:8ed58d61f82643077ee2f025218b4162f81b29145e45fb
27626d56a61179fbd6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_EXITED,CreatedAt:1730721205316132716,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-313751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c674394d0118e0e97c762327e9a72066,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2159621d32bb25b5ea4cab2c224fe2dc826744c822ffad58c41ec3377b6afffa,PodSandboxId:dc51cac7e4f77c6387ec49a32
ae67f87699d8fe56caf5b84505965df6c706ab0,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1730721205262436416,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-313751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0962ceb42598928ff34e4204ceb1e987,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d584ec5b35f710435f0f73582627188c4e11ea1ea8bf0d94fadc936a95351666,PodSandboxId:9edf4a992c36269a022e7b25d424e35c782da99c942015e319f2363363607
01f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1730721205101581445,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18850a76-5d7a-4b4b-af7b-2dd143625fe2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76a056aa61aec8f1ad3b3012fef3ecadd0184bb75e9779a28b9c3426ef037639,PodSandboxId:104401e0ac7420fc134fb49460c2f5bf6e1ea1796d1de46061b089a59d6c6678,Metadata:
&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_EXITED,CreatedAt:1730721205159457883,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-313751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c210134ad5d53c3965705a849b3a40a3,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:588b1583e7bb24e8ef0e4e06757a7396cec919aad8e77d9622bfd21710639f93,PodSandboxId:28c244e43512c058e2e8307e50b18d00eb2cbbfe7d02b13e1a73a0636a2f3164,Metadata:&Conta
inerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1730721205046867664,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-313751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0cda3b26aca9e8065703cbb495327d7,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9502fccbcd0a43582eccd1b02585af54c5cfff88d18504df3d6f1ca6fb99abf0,PodSandboxId:d096d2c62106c6dd87bfbcc76a68376cb478b578ef7eeb796bcba25db146036c,Metadata:&ContainerMe
tadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_EXITED,CreatedAt:1730721204910573346,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bkl6l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35ff4334-3c18-4554-b985-cb63e3ef42af,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=62cec722-a608-4703-8419-e63f0f51823d name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 11:54:00 kubernetes-upgrade-313751 crio[2266]: time="2024-11-04 11:54:00.643037252Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2cf6214f-92b2-43f3-9ec9-ff6c85998066 name=/runtime.v1.RuntimeService/Version
	Nov 04 11:54:00 kubernetes-upgrade-313751 crio[2266]: time="2024-11-04 11:54:00.643221245Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2cf6214f-92b2-43f3-9ec9-ff6c85998066 name=/runtime.v1.RuntimeService/Version
	Nov 04 11:54:00 kubernetes-upgrade-313751 crio[2266]: time="2024-11-04 11:54:00.650996769Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=df8f1ed0-c592-448b-bf9e-c1a4a3f1dfe1 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 04 11:54:00 kubernetes-upgrade-313751 crio[2266]: time="2024-11-04 11:54:00.651577148Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730721240651538585,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125701,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=df8f1ed0-c592-448b-bf9e-c1a4a3f1dfe1 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 04 11:54:00 kubernetes-upgrade-313751 crio[2266]: time="2024-11-04 11:54:00.653432514Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7426a252-ae75-43ae-8f1f-c549b9a75abe name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 11:54:00 kubernetes-upgrade-313751 crio[2266]: time="2024-11-04 11:54:00.653564084Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7426a252-ae75-43ae-8f1f-c549b9a75abe name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 11:54:00 kubernetes-upgrade-313751 crio[2266]: time="2024-11-04 11:54:00.656720839Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ed578c95e613806836d19b30a9125ae5951b24ea2c2c1e66feb979ec6906714f,PodSandboxId:ba7e3bc80a115538cf885ae67adefbe8fb6ecd479e155e752fa962c27a6a2b41,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730721234718652329,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-5dknx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c7e71a5-6666-42d3-91f0-5f56e1babf37,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:201c300eef4af8bbe228eae89c4fe3319cc7d824c3a6fa70de4d194b2bf455b6,PodSandboxId:d096d2c62106c6dd87bfbcc76a68376cb478b578ef7eeb796bcba25db146036c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730721234743492287,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bkl6l,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: 35ff4334-3c18-4554-b985-cb63e3ef42af,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0fd7ba124ceeee47f3975f9e6e00645774e255117549ceb6cdbd618dc58defd,PodSandboxId:9cbee4a3af7d9eb4071035b4549b496818cd194b58b840fdc3c5ceea04701b35,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730721234726289678,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-crm9f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 281b3bf8-840e-
4fef-8862-6b460d1b2d15,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:571c632ea46b651b194b72a0743f3218dadc2aeef7fc644e48ccb7b85130070d,PodSandboxId:8ed58d61f82643077ee2f025218b4162f81b29145e45fb27626d56a61179fbd6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt
:1730721230079006452,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-313751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c674394d0118e0e97c762327e9a72066,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bea7910e66079e673c47fae7b5e8410e63d6b4490076e6e0ba5c054bf5bd86b1,PodSandboxId:104401e0ac7420fc134fb49460c2f5bf6e1ea1796d1de46061b089a59d6c6678,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNN
ING,CreatedAt:1730721230060881064,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-313751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c210134ad5d53c3965705a849b3a40a3,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d59bcb8b846e5fe394cf9b208bf2da4138a039d791b48e53800cf88b7496108,PodSandboxId:28c244e43512c058e2e8307e50b18d00eb2cbbfe7d02b13e1a73a0636a2f3164,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,C
reatedAt:1730721230065827861,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-313751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0cda3b26aca9e8065703cbb495327d7,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59cb3946f70f64505b0cd135b709363fcc73aaff4ad5abf56e772aca68141f2d,PodSandboxId:dc51cac7e4f77c6387ec49a32ae67f87699d8fe56caf5b84505965df6c706ab0,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:173072
1230041684229,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-313751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0962ceb42598928ff34e4204ceb1e987,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d121202acd2ec01e8a3b75dfcb778815509cb4f9d5e3716bbfdd6fef4610f7e,PodSandboxId:9edf4a992c36269a022e7b25d424e35c782da99c942015e319f236336360701f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730721218462337341,Labels
:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18850a76-5d7a-4b4b-af7b-2dd143625fe2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5e8a9a02997599c165a4d750ea354fc7081bd8cb06b911f3614127d8d5a001e,PodSandboxId:9cbee4a3af7d9eb4071035b4549b496818cd194b58b840fdc3c5ceea04701b35,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1730721206484654482,Labels:map[string]string{io.kub
ernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-crm9f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 281b3bf8-840e-4fef-8862-6b460d1b2d15,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe16d79f8e9a59a72cfccb454819e81fb9e91e1b815f87fcc9032562406ffc3a,PodSandboxId:ba7e3bc80a115538cf885ae67adefbe8fb6ecd479e155e752fa962c27a6a2b41,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},Us
erSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1730721206295800317,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-5dknx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c7e71a5-6666-42d3-91f0-5f56e1babf37,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3e0186f4da61a26333b28aed0c0ed3285ac3b2b3747d3fbeee3c49b3e780106,PodSandboxId:8ed58d61f82643077ee2f025218b4162f81b29145e45fb
27626d56a61179fbd6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_EXITED,CreatedAt:1730721205316132716,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-313751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c674394d0118e0e97c762327e9a72066,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2159621d32bb25b5ea4cab2c224fe2dc826744c822ffad58c41ec3377b6afffa,PodSandboxId:dc51cac7e4f77c6387ec49a32
ae67f87699d8fe56caf5b84505965df6c706ab0,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1730721205262436416,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-313751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0962ceb42598928ff34e4204ceb1e987,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d584ec5b35f710435f0f73582627188c4e11ea1ea8bf0d94fadc936a95351666,PodSandboxId:9edf4a992c36269a022e7b25d424e35c782da99c942015e319f2363363607
01f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1730721205101581445,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18850a76-5d7a-4b4b-af7b-2dd143625fe2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76a056aa61aec8f1ad3b3012fef3ecadd0184bb75e9779a28b9c3426ef037639,PodSandboxId:104401e0ac7420fc134fb49460c2f5bf6e1ea1796d1de46061b089a59d6c6678,Metadata:
&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_EXITED,CreatedAt:1730721205159457883,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-313751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c210134ad5d53c3965705a849b3a40a3,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:588b1583e7bb24e8ef0e4e06757a7396cec919aad8e77d9622bfd21710639f93,PodSandboxId:28c244e43512c058e2e8307e50b18d00eb2cbbfe7d02b13e1a73a0636a2f3164,Metadata:&Conta
inerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1730721205046867664,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-313751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0cda3b26aca9e8065703cbb495327d7,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9502fccbcd0a43582eccd1b02585af54c5cfff88d18504df3d6f1ca6fb99abf0,PodSandboxId:d096d2c62106c6dd87bfbcc76a68376cb478b578ef7eeb796bcba25db146036c,Metadata:&ContainerMe
tadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_EXITED,CreatedAt:1730721204910573346,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bkl6l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35ff4334-3c18-4554-b985-cb63e3ef42af,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7426a252-ae75-43ae-8f1f-c549b9a75abe name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	201c300eef4af       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38   6 seconds ago       Running             kube-proxy                2                   d096d2c62106c       kube-proxy-bkl6l
	c0fd7ba124cee       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   6 seconds ago       Running             coredns                   2                   9cbee4a3af7d9       coredns-7c65d6cfc9-crm9f
	ed578c95e6138       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   6 seconds ago       Running             coredns                   2                   ba7e3bc80a115       coredns-7c65d6cfc9-5dknx
	571c632ea46b6       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503   10 seconds ago      Running             kube-controller-manager   2                   8ed58d61f8264       kube-controller-manager-kubernetes-upgrade-313751
	2d59bcb8b846e       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173   10 seconds ago      Running             kube-apiserver            2                   28c244e43512c       kube-apiserver-kubernetes-upgrade-313751
	bea7910e66079       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856   10 seconds ago      Running             kube-scheduler            2                   104401e0ac742       kube-scheduler-kubernetes-upgrade-313751
	59cb3946f70f6       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   10 seconds ago      Running             etcd                      2                   dc51cac7e4f77       etcd-kubernetes-upgrade-313751
	5d121202acd2e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   22 seconds ago      Running             storage-provisioner       2                   9edf4a992c362       storage-provisioner
	d5e8a9a029975       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   34 seconds ago      Exited              coredns                   1                   9cbee4a3af7d9       coredns-7c65d6cfc9-crm9f
	fe16d79f8e9a5       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   34 seconds ago      Exited              coredns                   1                   ba7e3bc80a115       coredns-7c65d6cfc9-5dknx
	b3e0186f4da61       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503   35 seconds ago      Exited              kube-controller-manager   1                   8ed58d61f8264       kube-controller-manager-kubernetes-upgrade-313751
	2159621d32bb2       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   35 seconds ago      Exited              etcd                      1                   dc51cac7e4f77       etcd-kubernetes-upgrade-313751
	76a056aa61aec       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856   35 seconds ago      Exited              kube-scheduler            1                   104401e0ac742       kube-scheduler-kubernetes-upgrade-313751
	d584ec5b35f71       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   35 seconds ago      Exited              storage-provisioner       1                   9edf4a992c362       storage-provisioner
	588b1583e7bb2       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173   35 seconds ago      Exited              kube-apiserver            1                   28c244e43512c       kube-apiserver-kubernetes-upgrade-313751
	9502fccbcd0a4       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38   35 seconds ago      Exited              kube-proxy                1                   d096d2c62106c       kube-proxy-bkl6l
	
	
	==> coredns [c0fd7ba124ceeee47f3975f9e6e00645774e255117549ceb6cdbd618dc58defd] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [d5e8a9a02997599c165a4d750ea354fc7081bd8cb06b911f3614127d8d5a001e] <==
	
	
	==> coredns [ed578c95e613806836d19b30a9125ae5951b24ea2c2c1e66feb979ec6906714f] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [fe16d79f8e9a59a72cfccb454819e81fb9e91e1b815f87fcc9032562406ffc3a] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-313751
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-313751
	                    kubernetes.io/os=linux
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 04 Nov 2024 11:52:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-313751
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 04 Nov 2024 11:53:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 04 Nov 2024 11:53:53 +0000   Mon, 04 Nov 2024 11:52:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 04 Nov 2024 11:53:53 +0000   Mon, 04 Nov 2024 11:52:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 04 Nov 2024 11:53:53 +0000   Mon, 04 Nov 2024 11:52:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 04 Nov 2024 11:53:53 +0000   Mon, 04 Nov 2024 11:52:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.39
	  Hostname:    kubernetes-upgrade-313751
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a4696bcb64c3420faa9e5dcc246e0fc9
	  System UUID:                a4696bcb-64c3-420f-aa9e-5dcc246e0fc9
	  Boot ID:                    3bdb6370-1015-4ec4-a9ab-adce1d6405be
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-5dknx                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     56s
	  kube-system                 coredns-7c65d6cfc9-crm9f                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     56s
	  kube-system                 etcd-kubernetes-upgrade-313751                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         57s
	  kube-system                 kube-apiserver-kubernetes-upgrade-313751             250m (12%)    0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-313751    200m (10%)    0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 kube-proxy-bkl6l                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	  kube-system                 kube-scheduler-kubernetes-upgrade-313751             100m (5%)     0 (0%)      0 (0%)           0 (0%)         56s
	  kube-system                 storage-provisioner                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 54s                kube-proxy       
	  Normal  Starting                 5s                 kube-proxy       
	  Normal  Starting                 31s                kube-proxy       
	  Normal  NodeHasSufficientMemory  67s (x8 over 67s)  kubelet          Node kubernetes-upgrade-313751 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    67s (x8 over 67s)  kubelet          Node kubernetes-upgrade-313751 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     67s (x7 over 67s)  kubelet          Node kubernetes-upgrade-313751 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  67s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 67s                kubelet          Starting kubelet.
	  Normal  RegisteredNode           57s                node-controller  Node kubernetes-upgrade-313751 event: Registered Node kubernetes-upgrade-313751 in Controller
	  Normal  RegisteredNode           29s                node-controller  Node kubernetes-upgrade-313751 event: Registered Node kubernetes-upgrade-313751 in Controller
	  Normal  Starting                 12s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  12s (x8 over 12s)  kubelet          Node kubernetes-upgrade-313751 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12s (x8 over 12s)  kubelet          Node kubernetes-upgrade-313751 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12s (x7 over 12s)  kubelet          Node kubernetes-upgrade-313751 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2s                 node-controller  Node kubernetes-upgrade-313751 event: Registered Node kubernetes-upgrade-313751 in Controller
	
	
	==> dmesg <==
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.870280] systemd-fstab-generator[554]: Ignoring "noauto" option for root device
	[  +0.063222] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.062735] systemd-fstab-generator[566]: Ignoring "noauto" option for root device
	[  +0.166302] systemd-fstab-generator[580]: Ignoring "noauto" option for root device
	[  +0.147963] systemd-fstab-generator[592]: Ignoring "noauto" option for root device
	[  +0.280085] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +4.180142] systemd-fstab-generator[714]: Ignoring "noauto" option for root device
	[  +2.213839] systemd-fstab-generator[836]: Ignoring "noauto" option for root device
	[  +0.064229] kauditd_printk_skb: 158 callbacks suppressed
	[Nov 4 11:53] systemd-fstab-generator[1218]: Ignoring "noauto" option for root device
	[  +0.097743] kauditd_printk_skb: 69 callbacks suppressed
	[ +18.245373] systemd-fstab-generator[2192]: Ignoring "noauto" option for root device
	[  +0.082811] kauditd_printk_skb: 105 callbacks suppressed
	[  +0.068382] systemd-fstab-generator[2204]: Ignoring "noauto" option for root device
	[  +0.176841] systemd-fstab-generator[2219]: Ignoring "noauto" option for root device
	[  +0.150838] systemd-fstab-generator[2230]: Ignoring "noauto" option for root device
	[  +0.309859] systemd-fstab-generator[2258]: Ignoring "noauto" option for root device
	[  +1.864114] systemd-fstab-generator[2408]: Ignoring "noauto" option for root device
	[  +2.595705] kauditd_printk_skb: 228 callbacks suppressed
	[ +22.134698] systemd-fstab-generator[3622]: Ignoring "noauto" option for root device
	[  +5.694135] kauditd_printk_skb: 42 callbacks suppressed
	[  +3.673664] systemd-fstab-generator[4130]: Ignoring "noauto" option for root device
	[Nov 4 11:54] kauditd_printk_skb: 31 callbacks suppressed
	
	
	==> etcd [2159621d32bb25b5ea4cab2c224fe2dc826744c822ffad58c41ec3377b6afffa] <==
	{"level":"info","ts":"2024-11-04T11:53:27.808451Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ec29e853f5cd425a became pre-candidate at term 2"}
	{"level":"info","ts":"2024-11-04T11:53:27.808527Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ec29e853f5cd425a received MsgPreVoteResp from ec29e853f5cd425a at term 2"}
	{"level":"info","ts":"2024-11-04T11:53:27.808573Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ec29e853f5cd425a became candidate at term 3"}
	{"level":"info","ts":"2024-11-04T11:53:27.808612Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ec29e853f5cd425a received MsgVoteResp from ec29e853f5cd425a at term 3"}
	{"level":"info","ts":"2024-11-04T11:53:27.808650Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ec29e853f5cd425a became leader at term 3"}
	{"level":"info","ts":"2024-11-04T11:53:27.808681Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ec29e853f5cd425a elected leader ec29e853f5cd425a at term 3"}
	{"level":"info","ts":"2024-11-04T11:53:27.813906Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"ec29e853f5cd425a","local-member-attributes":"{Name:kubernetes-upgrade-313751 ClientURLs:[https://192.168.50.39:2379]}","request-path":"/0/members/ec29e853f5cd425a/attributes","cluster-id":"16343206fca1ffcb","publish-timeout":"7s"}
	{"level":"info","ts":"2024-11-04T11:53:27.814032Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-11-04T11:53:27.814129Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-11-04T11:53:27.815649Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-11-04T11:53:27.816806Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.39:2379"}
	{"level":"info","ts":"2024-11-04T11:53:27.816908Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-11-04T11:53:27.818896Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-11-04T11:53:27.817968Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-11-04T11:53:27.820221Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-11-04T11:53:37.094960Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-11-04T11:53:37.095020Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"kubernetes-upgrade-313751","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.39:2380"],"advertise-client-urls":["https://192.168.50.39:2379"]}
	{"level":"warn","ts":"2024-11-04T11:53:37.095090Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-11-04T11:53:37.095186Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-11-04T11:53:37.114503Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.50.39:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-11-04T11:53:37.114603Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.50.39:2379: use of closed network connection"}
	{"level":"info","ts":"2024-11-04T11:53:37.114657Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ec29e853f5cd425a","current-leader-member-id":"ec29e853f5cd425a"}
	{"level":"info","ts":"2024-11-04T11:53:37.117872Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.50.39:2380"}
	{"level":"info","ts":"2024-11-04T11:53:37.118045Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.50.39:2380"}
	{"level":"info","ts":"2024-11-04T11:53:37.118074Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"kubernetes-upgrade-313751","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.39:2380"],"advertise-client-urls":["https://192.168.50.39:2379"]}
	
	
	==> etcd [59cb3946f70f64505b0cd135b709363fcc73aaff4ad5abf56e772aca68141f2d] <==
	{"level":"warn","ts":"2024-11-04T11:53:56.292020Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"361.448369ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-7c65d6cfc9-5dknx\" ","response":"range_response_count:1 size:5148"}
	{"level":"info","ts":"2024-11-04T11:53:56.292076Z","caller":"traceutil/trace.go:171","msg":"trace[338891446] range","detail":"{range_begin:/registry/pods/kube-system/coredns-7c65d6cfc9-5dknx; range_end:; response_count:1; response_revision:544; }","duration":"361.498336ms","start":"2024-11-04T11:53:55.930564Z","end":"2024-11-04T11:53:56.292062Z","steps":["trace[338891446] 'agreement among raft nodes before linearized reading'  (duration: 361.420536ms)"],"step_count":1}
	{"level":"warn","ts":"2024-11-04T11:53:56.292103Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-11-04T11:53:55.930507Z","time spent":"361.587842ms","remote":"127.0.0.1:56902","response type":"/etcdserverpb.KV/Range","request count":0,"request size":53,"response count":1,"response size":5171,"request content":"key:\"/registry/pods/kube-system/coredns-7c65d6cfc9-5dknx\" "}
	{"level":"warn","ts":"2024-11-04T11:53:56.292262Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"361.646011ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/roles/kube-system/system:controller:cloud-provider\" ","response":"range_response_count:1 size:625"}
	{"level":"info","ts":"2024-11-04T11:53:56.292307Z","caller":"traceutil/trace.go:171","msg":"trace[436455787] range","detail":"{range_begin:/registry/roles/kube-system/system:controller:cloud-provider; range_end:; response_count:1; response_revision:544; }","duration":"361.69127ms","start":"2024-11-04T11:53:55.930608Z","end":"2024-11-04T11:53:56.292299Z","steps":["trace[436455787] 'agreement among raft nodes before linearized reading'  (duration: 361.621474ms)"],"step_count":1}
	{"level":"warn","ts":"2024-11-04T11:53:56.292331Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-11-04T11:53:55.930584Z","time spent":"361.740536ms","remote":"127.0.0.1:43956","response type":"/etcdserverpb.KV/Range","request count":0,"request size":62,"response count":1,"response size":648,"request content":"key:\"/registry/roles/kube-system/system:controller:cloud-provider\" "}
	{"level":"warn","ts":"2024-11-04T11:53:56.992683Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"326.378754ms","expected-duration":"100ms","prefix":"","request":"header:<ID:4781295544065487982 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/coredns-7c65d6cfc9-5dknx\" mod_revision:531 > success:<request_put:<key:\"/registry/pods/kube-system/coredns-7c65d6cfc9-5dknx\" value_size:5086 >> failure:<request_range:<key:\"/registry/pods/kube-system/coredns-7c65d6cfc9-5dknx\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-11-04T11:53:56.992858Z","caller":"traceutil/trace.go:171","msg":"trace[1664675370] linearizableReadLoop","detail":"{readStateIndex:565; appliedIndex:564; }","duration":"687.961565ms","start":"2024-11-04T11:53:56.304881Z","end":"2024-11-04T11:53:56.992843Z","steps":["trace[1664675370] 'read index received'  (duration: 361.284064ms)","trace[1664675370] 'applied index is now lower than readState.Index'  (duration: 326.676079ms)"],"step_count":2}
	{"level":"info","ts":"2024-11-04T11:53:56.992967Z","caller":"traceutil/trace.go:171","msg":"trace[707319959] transaction","detail":"{read_only:false; response_revision:545; number_of_response:1; }","duration":"689.967859ms","start":"2024-11-04T11:53:56.302986Z","end":"2024-11-04T11:53:56.992954Z","steps":["trace[707319959] 'process raft request'  (duration: 363.234268ms)","trace[707319959] 'compare'  (duration: 326.240122ms)"],"step_count":2}
	{"level":"warn","ts":"2024-11-04T11:53:56.993001Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"688.110734ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/roles/kube-public/system:controller:bootstrap-signer\" ","response":"range_response_count:1 size:709"}
	{"level":"info","ts":"2024-11-04T11:53:56.993102Z","caller":"traceutil/trace.go:171","msg":"trace[94263151] range","detail":"{range_begin:/registry/roles/kube-public/system:controller:bootstrap-signer; range_end:; response_count:1; response_revision:545; }","duration":"688.214494ms","start":"2024-11-04T11:53:56.304878Z","end":"2024-11-04T11:53:56.993093Z","steps":["trace[94263151] 'agreement among raft nodes before linearized reading'  (duration: 688.081558ms)"],"step_count":1}
	{"level":"warn","ts":"2024-11-04T11:53:56.993150Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-11-04T11:53:56.304854Z","time spent":"688.289317ms","remote":"127.0.0.1:43956","response type":"/etcdserverpb.KV/Range","request count":0,"request size":64,"response count":1,"response size":732,"request content":"key:\"/registry/roles/kube-public/system:controller:bootstrap-signer\" "}
	{"level":"warn","ts":"2024-11-04T11:53:56.993061Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-11-04T11:53:56.302969Z","time spent":"690.052106ms","remote":"127.0.0.1:56902","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":5145,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/coredns-7c65d6cfc9-5dknx\" mod_revision:531 > success:<request_put:<key:\"/registry/pods/kube-system/coredns-7c65d6cfc9-5dknx\" value_size:5086 >> failure:<request_range:<key:\"/registry/pods/kube-system/coredns-7c65d6cfc9-5dknx\" > >"}
	{"level":"warn","ts":"2024-11-04T11:53:57.537400Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"303.071914ms","expected-duration":"100ms","prefix":"","request":"header:<ID:4781295544065487990 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/coredns-7c65d6cfc9-crm9f\" mod_revision:532 > success:<request_put:<key:\"/registry/pods/kube-system/coredns-7c65d6cfc9-crm9f\" value_size:4904 >> failure:<request_range:<key:\"/registry/pods/kube-system/coredns-7c65d6cfc9-crm9f\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-11-04T11:53:57.537488Z","caller":"traceutil/trace.go:171","msg":"trace[1285896009] linearizableReadLoop","detail":"{readStateIndex:566; appliedIndex:565; }","duration":"533.531892ms","start":"2024-11-04T11:53:57.003943Z","end":"2024-11-04T11:53:57.537475Z","steps":["trace[1285896009] 'read index received'  (duration: 230.110233ms)","trace[1285896009] 'applied index is now lower than readState.Index'  (duration: 303.420496ms)"],"step_count":2}
	{"level":"warn","ts":"2024-11-04T11:53:57.537606Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"533.669885ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/rolebindings/kube-system/system::leader-locking-kube-scheduler\" ","response":"range_response_count:1 size:808"}
	{"level":"info","ts":"2024-11-04T11:53:57.537657Z","caller":"traceutil/trace.go:171","msg":"trace[2144540200] range","detail":"{range_begin:/registry/rolebindings/kube-system/system::leader-locking-kube-scheduler; range_end:; response_count:1; response_revision:546; }","duration":"533.721017ms","start":"2024-11-04T11:53:57.003927Z","end":"2024-11-04T11:53:57.537648Z","steps":["trace[2144540200] 'agreement among raft nodes before linearized reading'  (duration: 533.592024ms)"],"step_count":1}
	{"level":"warn","ts":"2024-11-04T11:53:57.537686Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-11-04T11:53:57.003872Z","time spent":"533.805766ms","remote":"127.0.0.1:43966","response type":"/etcdserverpb.KV/Range","request count":0,"request size":74,"response count":1,"response size":831,"request content":"key:\"/registry/rolebindings/kube-system/system::leader-locking-kube-scheduler\" "}
	{"level":"info","ts":"2024-11-04T11:53:57.537869Z","caller":"traceutil/trace.go:171","msg":"trace[1601543326] transaction","detail":"{read_only:false; response_revision:546; number_of_response:1; }","duration":"535.144723ms","start":"2024-11-04T11:53:57.002706Z","end":"2024-11-04T11:53:57.537851Z","steps":["trace[1601543326] 'process raft request'  (duration: 231.513468ms)","trace[1601543326] 'compare'  (duration: 302.833079ms)"],"step_count":2}
	{"level":"warn","ts":"2024-11-04T11:53:57.537979Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-11-04T11:53:57.002694Z","time spent":"535.220071ms","remote":"127.0.0.1:56902","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4963,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/coredns-7c65d6cfc9-crm9f\" mod_revision:532 > success:<request_put:<key:\"/registry/pods/kube-system/coredns-7c65d6cfc9-crm9f\" value_size:4904 >> failure:<request_range:<key:\"/registry/pods/kube-system/coredns-7c65d6cfc9-crm9f\" > >"}
	{"level":"warn","ts":"2024-11-04T11:53:57.776648Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"108.054525ms","expected-duration":"100ms","prefix":"","request":"header:<ID:4781295544065487999 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.50.39\" mod_revision:0 > success:<request_put:<key:\"/registry/masterleases/192.168.50.39\" value_size:66 lease:4781295544065487996 >> failure:<request_range:<key:\"/registry/masterleases/192.168.50.39\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-11-04T11:53:57.777289Z","caller":"traceutil/trace.go:171","msg":"trace[45000912] linearizableReadLoop","detail":"{readStateIndex:568; appliedIndex:567; }","duration":"170.020328ms","start":"2024-11-04T11:53:57.607254Z","end":"2024-11-04T11:53:57.777275Z","steps":["trace[45000912] 'read index received'  (duration: 61.261165ms)","trace[45000912] 'applied index is now lower than readState.Index'  (duration: 108.757507ms)"],"step_count":2}
	{"level":"info","ts":"2024-11-04T11:53:57.777345Z","caller":"traceutil/trace.go:171","msg":"trace[865911811] transaction","detail":"{read_only:false; response_revision:547; number_of_response:1; }","duration":"173.136377ms","start":"2024-11-04T11:53:57.604165Z","end":"2024-11-04T11:53:57.777301Z","steps":["trace[865911811] 'process raft request'  (duration: 64.376848ms)","trace[865911811] 'compare'  (duration: 107.9479ms)"],"step_count":2}
	{"level":"warn","ts":"2024-11-04T11:53:57.777548Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"170.281884ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/pvc-protection-controller\" ","response":"range_response_count:1 size:216"}
	{"level":"info","ts":"2024-11-04T11:53:57.777609Z","caller":"traceutil/trace.go:171","msg":"trace[1418860981] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/pvc-protection-controller; range_end:; response_count:1; response_revision:547; }","duration":"170.349797ms","start":"2024-11-04T11:53:57.607252Z","end":"2024-11-04T11:53:57.777602Z","steps":["trace[1418860981] 'agreement among raft nodes before linearized reading'  (duration: 170.151962ms)"],"step_count":1}
	
	
	==> kernel <==
	 11:54:01 up 1 min,  0 users,  load average: 1.91, 0.52, 0.17
	Linux kubernetes-upgrade-313751 5.10.207 #1 SMP Wed Oct 30 13:38:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [2d59bcb8b846e5fe394cf9b208bf2da4138a039d791b48e53800cf88b7496108] <==
	I1104 11:53:53.591782       1 policy_source.go:224] refreshing policies
	I1104 11:53:53.600011       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I1104 11:53:53.603112       1 shared_informer.go:320] Caches are synced for configmaps
	I1104 11:53:53.603629       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I1104 11:53:53.604063       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1104 11:53:53.605702       1 aggregator.go:171] initial CRD sync complete...
	I1104 11:53:53.605742       1 autoregister_controller.go:144] Starting autoregister controller
	I1104 11:53:53.605761       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1104 11:53:53.605769       1 cache.go:39] Caches are synced for autoregister controller
	I1104 11:53:53.617453       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1104 11:53:53.644168       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I1104 11:53:53.651538       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1104 11:53:53.674200       1 shared_informer.go:320] Caches are synced for node_authorizer
	I1104 11:53:53.686809       1 controller.go:615] quota admission added evaluator for: endpoints
	I1104 11:53:53.696752       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1104 11:53:53.696787       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1104 11:53:53.696831       1 cache.go:39] Caches are synced for RemoteAvailability controller
	E1104 11:53:53.763874       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1104 11:53:54.520663       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1104 11:53:58.177311       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1104 11:53:58.190473       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1104 11:53:58.235157       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1104 11:53:58.363394       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1104 11:53:58.370010       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1104 11:53:59.469701       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [588b1583e7bb24e8ef0e4e06757a7396cec919aad8e77d9622bfd21710639f93] <==
	W1104 11:53:46.511385       1 logging.go:55] [core] [Channel #76 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1104 11:53:46.530083       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1104 11:53:46.552227       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1104 11:53:46.588648       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1104 11:53:46.609809       1 logging.go:55] [core] [Channel #94 SubChannel #95]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1104 11:53:46.683203       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1104 11:53:46.695162       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1104 11:53:46.716909       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1104 11:53:46.722615       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1104 11:53:46.722951       1 logging.go:55] [core] [Channel #18 SubChannel #19]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1104 11:53:46.723024       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1104 11:53:46.723137       1 logging.go:55] [core] [Channel #40 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1104 11:53:46.745887       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1104 11:53:46.759532       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1104 11:53:46.794662       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1104 11:53:46.830025       1 logging.go:55] [core] [Channel #181 SubChannel #182]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1104 11:53:46.871282       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1104 11:53:46.954647       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1104 11:53:46.992684       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1104 11:53:47.004064       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1104 11:53:47.015071       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1104 11:53:47.079988       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1104 11:53:47.121910       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1104 11:53:47.126574       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1104 11:53:47.178661       1 logging.go:55] [core] [Channel #142 SubChannel #143]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [571c632ea46b651b194b72a0743f3218dadc2aeef7fc644e48ccb7b85130070d] <==
	I1104 11:53:59.082488       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="kubernetes-upgrade-313751"
	I1104 11:53:59.082518       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1104 11:53:59.082592       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I1104 11:53:59.083917       1 shared_informer.go:320] Caches are synced for persistent volume
	I1104 11:53:59.085247       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I1104 11:53:59.086357       1 shared_informer.go:320] Caches are synced for deployment
	I1104 11:53:59.086521       1 shared_informer.go:320] Caches are synced for daemon sets
	I1104 11:53:59.088411       1 shared_informer.go:320] Caches are synced for ephemeral
	I1104 11:53:59.090685       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I1104 11:53:59.090793       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="kubernetes-upgrade-313751"
	I1104 11:53:59.134256       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I1104 11:53:59.134294       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I1104 11:53:59.134258       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I1104 11:53:59.135525       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I1104 11:53:59.172340       1 shared_informer.go:320] Caches are synced for resource quota
	I1104 11:53:59.183488       1 shared_informer.go:320] Caches are synced for disruption
	I1104 11:53:59.231699       1 shared_informer.go:320] Caches are synced for resource quota
	I1104 11:53:59.257037       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I1104 11:53:59.355670       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="295.860335ms"
	I1104 11:53:59.355924       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="102.226µs"
	I1104 11:53:59.700783       1 shared_informer.go:320] Caches are synced for garbage collector
	I1104 11:53:59.700900       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I1104 11:53:59.716137       1 shared_informer.go:320] Caches are synced for garbage collector
	I1104 11:54:00.666712       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="18.697762ms"
	I1104 11:54:00.667479       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="142.606µs"
	
	
	==> kube-controller-manager [b3e0186f4da61a26333b28aed0c0ed3285ac3b2b3747d3fbeee3c49b3e780106] <==
	I1104 11:53:32.743495       1 shared_informer.go:320] Caches are synced for node
	I1104 11:53:32.743597       1 range_allocator.go:171] "Sending events to api server" logger="node-ipam-controller"
	I1104 11:53:32.743642       1 range_allocator.go:177] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1104 11:53:32.743657       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I1104 11:53:32.743663       1 shared_informer.go:320] Caches are synced for cidrallocator
	I1104 11:53:32.743729       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="kubernetes-upgrade-313751"
	I1104 11:53:32.747711       1 shared_informer.go:320] Caches are synced for namespace
	I1104 11:53:32.747821       1 shared_informer.go:320] Caches are synced for PV protection
	I1104 11:53:32.751165       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I1104 11:53:32.753417       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I1104 11:53:32.756683       1 shared_informer.go:320] Caches are synced for ReplicationController
	I1104 11:53:32.759071       1 shared_informer.go:320] Caches are synced for GC
	I1104 11:53:32.847204       1 shared_informer.go:320] Caches are synced for resource quota
	I1104 11:53:32.866510       1 shared_informer.go:320] Caches are synced for disruption
	I1104 11:53:32.870729       1 shared_informer.go:320] Caches are synced for taint
	I1104 11:53:32.870909       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1104 11:53:32.871133       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="kubernetes-upgrade-313751"
	I1104 11:53:32.871188       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1104 11:53:32.889879       1 shared_informer.go:320] Caches are synced for deployment
	I1104 11:53:32.921090       1 shared_informer.go:320] Caches are synced for resource quota
	I1104 11:53:32.923470       1 shared_informer.go:320] Caches are synced for daemon sets
	I1104 11:53:32.987751       1 shared_informer.go:320] Caches are synced for persistent volume
	I1104 11:53:33.375114       1 shared_informer.go:320] Caches are synced for garbage collector
	I1104 11:53:33.390141       1 shared_informer.go:320] Caches are synced for garbage collector
	I1104 11:53:33.390230       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [201c300eef4af8bbe228eae89c4fe3319cc7d824c3a6fa70de4d194b2bf455b6] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1104 11:53:55.334172       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1104 11:53:55.499447       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.39"]
	E1104 11:53:55.499591       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1104 11:53:55.537662       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1104 11:53:55.537701       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1104 11:53:55.537723       1 server_linux.go:169] "Using iptables Proxier"
	I1104 11:53:55.544075       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1104 11:53:55.545796       1 server.go:483] "Version info" version="v1.31.2"
	I1104 11:53:55.545836       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1104 11:53:55.557626       1 config.go:199] "Starting service config controller"
	I1104 11:53:55.557666       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1104 11:53:55.557693       1 config.go:328] "Starting node config controller"
	I1104 11:53:55.557698       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1104 11:53:55.557881       1 config.go:105] "Starting endpoint slice config controller"
	I1104 11:53:55.557890       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1104 11:53:55.658070       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1104 11:53:55.658082       1 shared_informer.go:320] Caches are synced for node config
	I1104 11:53:55.658095       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-proxy [9502fccbcd0a43582eccd1b02585af54c5cfff88d18504df3d6f1ca6fb99abf0] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1104 11:53:26.124451       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1104 11:53:29.546877       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.39"]
	E1104 11:53:29.552517       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1104 11:53:29.633694       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1104 11:53:29.633757       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1104 11:53:29.633793       1 server_linux.go:169] "Using iptables Proxier"
	I1104 11:53:29.640818       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1104 11:53:29.641052       1 server.go:483] "Version info" version="v1.31.2"
	I1104 11:53:29.641084       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1104 11:53:29.646681       1 config.go:199] "Starting service config controller"
	I1104 11:53:29.646730       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1104 11:53:29.646765       1 config.go:105] "Starting endpoint slice config controller"
	I1104 11:53:29.646771       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1104 11:53:29.648286       1 config.go:328] "Starting node config controller"
	I1104 11:53:29.648346       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1104 11:53:29.747250       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1104 11:53:29.747345       1 shared_informer.go:320] Caches are synced for service config
	I1104 11:53:29.748834       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [76a056aa61aec8f1ad3b3012fef3ecadd0184bb75e9779a28b9c3426ef037639] <==
	E1104 11:53:29.488003       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1104 11:53:29.488220       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1104 11:53:29.488324       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1104 11:53:29.488469       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1104 11:53:29.488521       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1104 11:53:29.488528       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1104 11:53:29.488631       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1104 11:53:29.497555       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1104 11:53:29.497601       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1104 11:53:29.497739       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	E1104 11:53:29.497699       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1104 11:53:29.498042       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1104 11:53:29.498123       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1104 11:53:29.498317       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1104 11:53:29.498402       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1104 11:53:29.498554       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1104 11:53:29.498635       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1104 11:53:29.498835       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1104 11:53:29.498915       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1104 11:53:29.499111       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1104 11:53:29.499149       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1104 11:53:29.506549       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1104 11:53:29.506684       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1104 11:53:30.559828       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E1104 11:53:37.229683       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [bea7910e66079e673c47fae7b5e8410e63d6b4490076e6e0ba5c054bf5bd86b1] <==
	I1104 11:53:50.966262       1 serving.go:386] Generated self-signed cert in-memory
	W1104 11:53:53.586665       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1104 11:53:53.586797       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1104 11:53:53.586830       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1104 11:53:53.586863       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1104 11:53:53.649447       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.2"
	I1104 11:53:53.651442       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1104 11:53:53.658487       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1104 11:53:53.660843       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1104 11:53:53.660946       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1104 11:53:53.661004       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1104 11:53:53.761145       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 04 11:53:50 kubernetes-upgrade-313751 kubelet[3629]: E1104 11:53:50.003169    3629 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-313751?timeout=10s\": dial tcp 192.168.50.39:8443: connect: connection refused" interval="800ms"
	Nov 04 11:53:50 kubernetes-upgrade-313751 kubelet[3629]: I1104 11:53:50.018831    3629 scope.go:117] "RemoveContainer" containerID="2159621d32bb25b5ea4cab2c224fe2dc826744c822ffad58c41ec3377b6afffa"
	Nov 04 11:53:50 kubernetes-upgrade-313751 kubelet[3629]: I1104 11:53:50.019357    3629 scope.go:117] "RemoveContainer" containerID="588b1583e7bb24e8ef0e4e06757a7396cec919aad8e77d9622bfd21710639f93"
	Nov 04 11:53:50 kubernetes-upgrade-313751 kubelet[3629]: I1104 11:53:50.020674    3629 scope.go:117] "RemoveContainer" containerID="b3e0186f4da61a26333b28aed0c0ed3285ac3b2b3747d3fbeee3c49b3e780106"
	Nov 04 11:53:50 kubernetes-upgrade-313751 kubelet[3629]: I1104 11:53:50.022174    3629 scope.go:117] "RemoveContainer" containerID="76a056aa61aec8f1ad3b3012fef3ecadd0184bb75e9779a28b9c3426ef037639"
	Nov 04 11:53:50 kubernetes-upgrade-313751 kubelet[3629]: I1104 11:53:50.237320    3629 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-313751"
	Nov 04 11:53:50 kubernetes-upgrade-313751 kubelet[3629]: E1104 11:53:50.240637    3629 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.39:8443: connect: connection refused" node="kubernetes-upgrade-313751"
	Nov 04 11:53:50 kubernetes-upgrade-313751 kubelet[3629]: W1104 11:53:50.240721    3629 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes-upgrade-313751&limit=500&resourceVersion=0": dial tcp 192.168.50.39:8443: connect: connection refused
	Nov 04 11:53:50 kubernetes-upgrade-313751 kubelet[3629]: E1104 11:53:50.240807    3629 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes-upgrade-313751&limit=500&resourceVersion=0\": dial tcp 192.168.50.39:8443: connect: connection refused" logger="UnhandledError"
	Nov 04 11:53:51 kubernetes-upgrade-313751 kubelet[3629]: I1104 11:53:51.042962    3629 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-313751"
	Nov 04 11:53:53 kubernetes-upgrade-313751 kubelet[3629]: I1104 11:53:53.713037    3629 kubelet_node_status.go:111] "Node was previously registered" node="kubernetes-upgrade-313751"
	Nov 04 11:53:53 kubernetes-upgrade-313751 kubelet[3629]: I1104 11:53:53.713260    3629 kubelet_node_status.go:75] "Successfully registered node" node="kubernetes-upgrade-313751"
	Nov 04 11:53:53 kubernetes-upgrade-313751 kubelet[3629]: I1104 11:53:53.713321    3629 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 04 11:53:53 kubernetes-upgrade-313751 kubelet[3629]: I1104 11:53:53.714418    3629 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 04 11:53:54 kubernetes-upgrade-313751 kubelet[3629]: I1104 11:53:54.372454    3629 apiserver.go:52] "Watching apiserver"
	Nov 04 11:53:54 kubernetes-upgrade-313751 kubelet[3629]: I1104 11:53:54.392227    3629 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 04 11:53:54 kubernetes-upgrade-313751 kubelet[3629]: I1104 11:53:54.490010    3629 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/18850a76-5d7a-4b4b-af7b-2dd143625fe2-tmp\") pod \"storage-provisioner\" (UID: \"18850a76-5d7a-4b4b-af7b-2dd143625fe2\") " pod="kube-system/storage-provisioner"
	Nov 04 11:53:54 kubernetes-upgrade-313751 kubelet[3629]: I1104 11:53:54.490103    3629 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/35ff4334-3c18-4554-b985-cb63e3ef42af-lib-modules\") pod \"kube-proxy-bkl6l\" (UID: \"35ff4334-3c18-4554-b985-cb63e3ef42af\") " pod="kube-system/kube-proxy-bkl6l"
	Nov 04 11:53:54 kubernetes-upgrade-313751 kubelet[3629]: I1104 11:53:54.490149    3629 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/35ff4334-3c18-4554-b985-cb63e3ef42af-xtables-lock\") pod \"kube-proxy-bkl6l\" (UID: \"35ff4334-3c18-4554-b985-cb63e3ef42af\") " pod="kube-system/kube-proxy-bkl6l"
	Nov 04 11:53:54 kubernetes-upgrade-313751 kubelet[3629]: I1104 11:53:54.679327    3629 scope.go:117] "RemoveContainer" containerID="9502fccbcd0a43582eccd1b02585af54c5cfff88d18504df3d6f1ca6fb99abf0"
	Nov 04 11:53:54 kubernetes-upgrade-313751 kubelet[3629]: I1104 11:53:54.679749    3629 scope.go:117] "RemoveContainer" containerID="fe16d79f8e9a59a72cfccb454819e81fb9e91e1b815f87fcc9032562406ffc3a"
	Nov 04 11:53:54 kubernetes-upgrade-313751 kubelet[3629]: I1104 11:53:54.680051    3629 scope.go:117] "RemoveContainer" containerID="d5e8a9a02997599c165a4d750ea354fc7081bd8cb06b911f3614127d8d5a001e"
	Nov 04 11:53:59 kubernetes-upgrade-313751 kubelet[3629]: E1104 11:53:59.535174    3629 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730721239533899879,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125701,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 04 11:53:59 kubernetes-upgrade-313751 kubelet[3629]: E1104 11:53:59.535227    3629 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730721239533899879,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125701,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 04 11:54:00 kubernetes-upgrade-313751 kubelet[3629]: I1104 11:54:00.629234    3629 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	
	
	==> storage-provisioner [5d121202acd2ec01e8a3b75dfcb778815509cb4f9d5e3716bbfdd6fef4610f7e] <==
	I1104 11:53:38.526846       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1104 11:53:38.536074       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1104 11:53:38.536129       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	E1104 11:53:48.403346       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	I1104 11:53:53.741730       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1104 11:53:53.742008       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-313751_9e92ff9f-c5af-44c1-b5e4-e70d4c1e27df!
	I1104 11:53:53.742093       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"abd1d72e-ec0d-4227-a0a0-075d0249d110", APIVersion:"v1", ResourceVersion:"485", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' kubernetes-upgrade-313751_9e92ff9f-c5af-44c1-b5e4-e70d4c1e27df became leader
	I1104 11:53:53.842695       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-313751_9e92ff9f-c5af-44c1-b5e4-e70d4c1e27df!
	
	
	==> storage-provisioner [d584ec5b35f710435f0f73582627188c4e11ea1ea8bf0d94fadc936a95351666] <==
	I1104 11:53:25.672434       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1104 11:53:25.690660       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-313751 -n kubernetes-upgrade-313751
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-313751 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "kubernetes-upgrade-313751" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-313751
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-313751: (1.154201961s)
--- FAIL: TestKubernetesUpgrade (391.99s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (267.62s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-589257 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-589257 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (4m27.346045716s)

                                                
                                                
-- stdout --
	* [old-k8s-version-589257] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19906
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19906-19898/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19906-19898/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "old-k8s-version-589257" primary control-plane node in "old-k8s-version-589257" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1104 11:58:21.636277   79643 out.go:345] Setting OutFile to fd 1 ...
	I1104 11:58:21.636396   79643 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1104 11:58:21.636407   79643 out.go:358] Setting ErrFile to fd 2...
	I1104 11:58:21.636413   79643 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1104 11:58:21.636665   79643 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19906-19898/.minikube/bin
	I1104 11:58:21.637469   79643 out.go:352] Setting JSON to false
	I1104 11:58:21.639041   79643 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":9653,"bootTime":1730711849,"procs":346,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1104 11:58:21.639180   79643 start.go:139] virtualization: kvm guest
	I1104 11:58:21.641475   79643 out.go:177] * [old-k8s-version-589257] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1104 11:58:21.642828   79643 out.go:177]   - MINIKUBE_LOCATION=19906
	I1104 11:58:21.642905   79643 notify.go:220] Checking for updates...
	I1104 11:58:21.645356   79643 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1104 11:58:21.646559   79643 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19906-19898/kubeconfig
	I1104 11:58:21.647884   79643 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19906-19898/.minikube
	I1104 11:58:21.649148   79643 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1104 11:58:21.650395   79643 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1104 11:58:21.652017   79643 config.go:182] Loaded profile config "bridge-528108": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1104 11:58:21.652103   79643 config.go:182] Loaded profile config "calico-528108": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1104 11:58:21.652178   79643 config.go:182] Loaded profile config "custom-flannel-528108": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1104 11:58:21.652270   79643 driver.go:394] Setting default libvirt URI to qemu:///system
	I1104 11:58:21.696635   79643 out.go:177] * Using the kvm2 driver based on user configuration
	I1104 11:58:21.698111   79643 start.go:297] selected driver: kvm2
	I1104 11:58:21.698127   79643 start.go:901] validating driver "kvm2" against <nil>
	I1104 11:58:21.698141   79643 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1104 11:58:21.699353   79643 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1104 11:58:21.699435   79643 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19906-19898/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1104 11:58:21.717256   79643 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1104 11:58:21.717318   79643 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1104 11:58:21.717546   79643 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1104 11:58:21.717576   79643 cni.go:84] Creating CNI manager for ""
	I1104 11:58:21.717635   79643 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1104 11:58:21.717646   79643 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1104 11:58:21.717703   79643 start.go:340] cluster config:
	{Name:old-k8s-version-589257 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-589257 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1104 11:58:21.717811   79643 iso.go:125] acquiring lock: {Name:mk00c7dd6e02d348844f079f7574057e15cae010 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1104 11:58:21.719641   79643 out.go:177] * Starting "old-k8s-version-589257" primary control-plane node in "old-k8s-version-589257" cluster
	I1104 11:58:21.721156   79643 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1104 11:58:21.721202   79643 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1104 11:58:21.721240   79643 cache.go:56] Caching tarball of preloaded images
	I1104 11:58:21.721329   79643 preload.go:172] Found /home/jenkins/minikube-integration/19906-19898/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1104 11:58:21.721349   79643 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1104 11:58:21.721469   79643 profile.go:143] Saving config to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/old-k8s-version-589257/config.json ...
	I1104 11:58:21.721495   79643 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/old-k8s-version-589257/config.json: {Name:mka4c17b04154a38de57780198c3d5c8047796f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 11:58:21.721655   79643 start.go:360] acquireMachinesLock for old-k8s-version-589257: {Name:mka4ed965095cfb58c9b0f5916edff39b4236a2f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1104 11:58:21.721699   79643 start.go:364] duration metric: took 24.117µs to acquireMachinesLock for "old-k8s-version-589257"
	I1104 11:58:21.721722   79643 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-589257 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-589257 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1104 11:58:21.721803   79643 start.go:125] createHost starting for "" (driver="kvm2")
	I1104 11:58:21.723438   79643 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1104 11:58:21.723605   79643 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 11:58:21.723641   79643 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 11:58:21.743134   79643 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41983
	I1104 11:58:21.743735   79643 main.go:141] libmachine: () Calling .GetVersion
	I1104 11:58:21.744470   79643 main.go:141] libmachine: Using API Version  1
	I1104 11:58:21.744502   79643 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 11:58:21.744817   79643 main.go:141] libmachine: () Calling .GetMachineName
	I1104 11:58:21.745055   79643 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetMachineName
	I1104 11:58:21.745261   79643 main.go:141] libmachine: (old-k8s-version-589257) Calling .DriverName
	I1104 11:58:21.745404   79643 start.go:159] libmachine.API.Create for "old-k8s-version-589257" (driver="kvm2")
	I1104 11:58:21.745431   79643 client.go:168] LocalClient.Create starting
	I1104 11:58:21.745471   79643 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem
	I1104 11:58:21.745510   79643 main.go:141] libmachine: Decoding PEM data...
	I1104 11:58:21.745528   79643 main.go:141] libmachine: Parsing certificate...
	I1104 11:58:21.745592   79643 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem
	I1104 11:58:21.745623   79643 main.go:141] libmachine: Decoding PEM data...
	I1104 11:58:21.745649   79643 main.go:141] libmachine: Parsing certificate...
	I1104 11:58:21.745674   79643 main.go:141] libmachine: Running pre-create checks...
	I1104 11:58:21.745684   79643 main.go:141] libmachine: (old-k8s-version-589257) Calling .PreCreateCheck
	I1104 11:58:21.746057   79643 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetConfigRaw
	I1104 11:58:21.746528   79643 main.go:141] libmachine: Creating machine...
	I1104 11:58:21.746548   79643 main.go:141] libmachine: (old-k8s-version-589257) Calling .Create
	I1104 11:58:21.746719   79643 main.go:141] libmachine: (old-k8s-version-589257) Creating KVM machine...
	I1104 11:58:21.748319   79643 main.go:141] libmachine: (old-k8s-version-589257) DBG | found existing default KVM network
	I1104 11:58:21.752102   79643 main.go:141] libmachine: (old-k8s-version-589257) DBG | I1104 11:58:21.750202   79680 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:76:4c:62} reservation:<nil>}
	I1104 11:58:21.752143   79643 main.go:141] libmachine: (old-k8s-version-589257) DBG | I1104 11:58:21.751894   79680 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0003621d0}
	I1104 11:58:21.752163   79643 main.go:141] libmachine: (old-k8s-version-589257) DBG | created network xml: 
	I1104 11:58:21.752176   79643 main.go:141] libmachine: (old-k8s-version-589257) DBG | <network>
	I1104 11:58:21.752187   79643 main.go:141] libmachine: (old-k8s-version-589257) DBG |   <name>mk-old-k8s-version-589257</name>
	I1104 11:58:21.752214   79643 main.go:141] libmachine: (old-k8s-version-589257) DBG |   <dns enable='no'/>
	I1104 11:58:21.752254   79643 main.go:141] libmachine: (old-k8s-version-589257) DBG |   
	I1104 11:58:21.752271   79643 main.go:141] libmachine: (old-k8s-version-589257) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I1104 11:58:21.752282   79643 main.go:141] libmachine: (old-k8s-version-589257) DBG |     <dhcp>
	I1104 11:58:21.752301   79643 main.go:141] libmachine: (old-k8s-version-589257) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I1104 11:58:21.752313   79643 main.go:141] libmachine: (old-k8s-version-589257) DBG |     </dhcp>
	I1104 11:58:21.752324   79643 main.go:141] libmachine: (old-k8s-version-589257) DBG |   </ip>
	I1104 11:58:21.752338   79643 main.go:141] libmachine: (old-k8s-version-589257) DBG |   
	I1104 11:58:21.752350   79643 main.go:141] libmachine: (old-k8s-version-589257) DBG | </network>
	I1104 11:58:21.752358   79643 main.go:141] libmachine: (old-k8s-version-589257) DBG | 
	I1104 11:58:21.757410   79643 main.go:141] libmachine: (old-k8s-version-589257) DBG | trying to create private KVM network mk-old-k8s-version-589257 192.168.50.0/24...
	I1104 11:58:21.854682   79643 main.go:141] libmachine: (old-k8s-version-589257) DBG | private KVM network mk-old-k8s-version-589257 192.168.50.0/24 created
	I1104 11:58:21.854709   79643 main.go:141] libmachine: (old-k8s-version-589257) Setting up store path in /home/jenkins/minikube-integration/19906-19898/.minikube/machines/old-k8s-version-589257 ...
	I1104 11:58:21.854723   79643 main.go:141] libmachine: (old-k8s-version-589257) DBG | I1104 11:58:21.854654   79680 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19906-19898/.minikube
	I1104 11:58:21.854758   79643 main.go:141] libmachine: (old-k8s-version-589257) Building disk image from file:///home/jenkins/minikube-integration/19906-19898/.minikube/cache/iso/amd64/minikube-v1.34.0-1730282777-19883-amd64.iso
	I1104 11:58:21.854777   79643 main.go:141] libmachine: (old-k8s-version-589257) Downloading /home/jenkins/minikube-integration/19906-19898/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19906-19898/.minikube/cache/iso/amd64/minikube-v1.34.0-1730282777-19883-amd64.iso...
	I1104 11:58:22.164966   79643 main.go:141] libmachine: (old-k8s-version-589257) DBG | I1104 11:58:22.164849   79680 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/old-k8s-version-589257/id_rsa...
	I1104 11:58:22.582427   79643 main.go:141] libmachine: (old-k8s-version-589257) DBG | I1104 11:58:22.582349   79680 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/old-k8s-version-589257/old-k8s-version-589257.rawdisk...
	I1104 11:58:22.582607   79643 main.go:141] libmachine: (old-k8s-version-589257) DBG | Writing magic tar header
	I1104 11:58:22.582632   79643 main.go:141] libmachine: (old-k8s-version-589257) DBG | Writing SSH key tar header
	I1104 11:58:22.582653   79643 main.go:141] libmachine: (old-k8s-version-589257) DBG | I1104 11:58:22.582581   79680 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19906-19898/.minikube/machines/old-k8s-version-589257 ...
	I1104 11:58:22.582703   79643 main.go:141] libmachine: (old-k8s-version-589257) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/old-k8s-version-589257
	I1104 11:58:22.582749   79643 main.go:141] libmachine: (old-k8s-version-589257) Setting executable bit set on /home/jenkins/minikube-integration/19906-19898/.minikube/machines/old-k8s-version-589257 (perms=drwx------)
	I1104 11:58:22.582773   79643 main.go:141] libmachine: (old-k8s-version-589257) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19906-19898/.minikube/machines
	I1104 11:58:22.582784   79643 main.go:141] libmachine: (old-k8s-version-589257) Setting executable bit set on /home/jenkins/minikube-integration/19906-19898/.minikube/machines (perms=drwxr-xr-x)
	I1104 11:58:22.582794   79643 main.go:141] libmachine: (old-k8s-version-589257) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19906-19898/.minikube
	I1104 11:58:22.582805   79643 main.go:141] libmachine: (old-k8s-version-589257) Setting executable bit set on /home/jenkins/minikube-integration/19906-19898/.minikube (perms=drwxr-xr-x)
	I1104 11:58:22.582818   79643 main.go:141] libmachine: (old-k8s-version-589257) Setting executable bit set on /home/jenkins/minikube-integration/19906-19898 (perms=drwxrwxr-x)
	I1104 11:58:22.582833   79643 main.go:141] libmachine: (old-k8s-version-589257) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1104 11:58:22.582846   79643 main.go:141] libmachine: (old-k8s-version-589257) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1104 11:58:22.582857   79643 main.go:141] libmachine: (old-k8s-version-589257) Creating domain...
	I1104 11:58:22.582867   79643 main.go:141] libmachine: (old-k8s-version-589257) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19906-19898
	I1104 11:58:22.582876   79643 main.go:141] libmachine: (old-k8s-version-589257) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1104 11:58:22.582889   79643 main.go:141] libmachine: (old-k8s-version-589257) DBG | Checking permissions on dir: /home/jenkins
	I1104 11:58:22.582896   79643 main.go:141] libmachine: (old-k8s-version-589257) DBG | Checking permissions on dir: /home
	I1104 11:58:22.582906   79643 main.go:141] libmachine: (old-k8s-version-589257) DBG | Skipping /home - not owner
	I1104 11:58:22.583708   79643 main.go:141] libmachine: (old-k8s-version-589257) define libvirt domain using xml: 
	I1104 11:58:22.583726   79643 main.go:141] libmachine: (old-k8s-version-589257) <domain type='kvm'>
	I1104 11:58:22.583737   79643 main.go:141] libmachine: (old-k8s-version-589257)   <name>old-k8s-version-589257</name>
	I1104 11:58:22.583745   79643 main.go:141] libmachine: (old-k8s-version-589257)   <memory unit='MiB'>2200</memory>
	I1104 11:58:22.583754   79643 main.go:141] libmachine: (old-k8s-version-589257)   <vcpu>2</vcpu>
	I1104 11:58:22.583764   79643 main.go:141] libmachine: (old-k8s-version-589257)   <features>
	I1104 11:58:22.583771   79643 main.go:141] libmachine: (old-k8s-version-589257)     <acpi/>
	I1104 11:58:22.583776   79643 main.go:141] libmachine: (old-k8s-version-589257)     <apic/>
	I1104 11:58:22.583786   79643 main.go:141] libmachine: (old-k8s-version-589257)     <pae/>
	I1104 11:58:22.583792   79643 main.go:141] libmachine: (old-k8s-version-589257)     
	I1104 11:58:22.583798   79643 main.go:141] libmachine: (old-k8s-version-589257)   </features>
	I1104 11:58:22.583804   79643 main.go:141] libmachine: (old-k8s-version-589257)   <cpu mode='host-passthrough'>
	I1104 11:58:22.583811   79643 main.go:141] libmachine: (old-k8s-version-589257)   
	I1104 11:58:22.583817   79643 main.go:141] libmachine: (old-k8s-version-589257)   </cpu>
	I1104 11:58:22.583825   79643 main.go:141] libmachine: (old-k8s-version-589257)   <os>
	I1104 11:58:22.583832   79643 main.go:141] libmachine: (old-k8s-version-589257)     <type>hvm</type>
	I1104 11:58:22.583840   79643 main.go:141] libmachine: (old-k8s-version-589257)     <boot dev='cdrom'/>
	I1104 11:58:22.583847   79643 main.go:141] libmachine: (old-k8s-version-589257)     <boot dev='hd'/>
	I1104 11:58:22.583868   79643 main.go:141] libmachine: (old-k8s-version-589257)     <bootmenu enable='no'/>
	I1104 11:58:22.583873   79643 main.go:141] libmachine: (old-k8s-version-589257)   </os>
	I1104 11:58:22.583880   79643 main.go:141] libmachine: (old-k8s-version-589257)   <devices>
	I1104 11:58:22.583886   79643 main.go:141] libmachine: (old-k8s-version-589257)     <disk type='file' device='cdrom'>
	I1104 11:58:22.583899   79643 main.go:141] libmachine: (old-k8s-version-589257)       <source file='/home/jenkins/minikube-integration/19906-19898/.minikube/machines/old-k8s-version-589257/boot2docker.iso'/>
	I1104 11:58:22.583907   79643 main.go:141] libmachine: (old-k8s-version-589257)       <target dev='hdc' bus='scsi'/>
	I1104 11:58:22.583915   79643 main.go:141] libmachine: (old-k8s-version-589257)       <readonly/>
	I1104 11:58:22.583920   79643 main.go:141] libmachine: (old-k8s-version-589257)     </disk>
	I1104 11:58:22.583928   79643 main.go:141] libmachine: (old-k8s-version-589257)     <disk type='file' device='disk'>
	I1104 11:58:22.583937   79643 main.go:141] libmachine: (old-k8s-version-589257)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1104 11:58:22.583954   79643 main.go:141] libmachine: (old-k8s-version-589257)       <source file='/home/jenkins/minikube-integration/19906-19898/.minikube/machines/old-k8s-version-589257/old-k8s-version-589257.rawdisk'/>
	I1104 11:58:22.583961   79643 main.go:141] libmachine: (old-k8s-version-589257)       <target dev='hda' bus='virtio'/>
	I1104 11:58:22.583969   79643 main.go:141] libmachine: (old-k8s-version-589257)     </disk>
	I1104 11:58:22.583977   79643 main.go:141] libmachine: (old-k8s-version-589257)     <interface type='network'>
	I1104 11:58:22.583986   79643 main.go:141] libmachine: (old-k8s-version-589257)       <source network='mk-old-k8s-version-589257'/>
	I1104 11:58:22.583993   79643 main.go:141] libmachine: (old-k8s-version-589257)       <model type='virtio'/>
	I1104 11:58:22.584001   79643 main.go:141] libmachine: (old-k8s-version-589257)     </interface>
	I1104 11:58:22.584008   79643 main.go:141] libmachine: (old-k8s-version-589257)     <interface type='network'>
	I1104 11:58:22.584017   79643 main.go:141] libmachine: (old-k8s-version-589257)       <source network='default'/>
	I1104 11:58:22.584025   79643 main.go:141] libmachine: (old-k8s-version-589257)       <model type='virtio'/>
	I1104 11:58:22.584034   79643 main.go:141] libmachine: (old-k8s-version-589257)     </interface>
	I1104 11:58:22.584041   79643 main.go:141] libmachine: (old-k8s-version-589257)     <serial type='pty'>
	I1104 11:58:22.584049   79643 main.go:141] libmachine: (old-k8s-version-589257)       <target port='0'/>
	I1104 11:58:22.584055   79643 main.go:141] libmachine: (old-k8s-version-589257)     </serial>
	I1104 11:58:22.584063   79643 main.go:141] libmachine: (old-k8s-version-589257)     <console type='pty'>
	I1104 11:58:22.584071   79643 main.go:141] libmachine: (old-k8s-version-589257)       <target type='serial' port='0'/>
	I1104 11:58:22.584079   79643 main.go:141] libmachine: (old-k8s-version-589257)     </console>
	I1104 11:58:22.584086   79643 main.go:141] libmachine: (old-k8s-version-589257)     <rng model='virtio'>
	I1104 11:58:22.584096   79643 main.go:141] libmachine: (old-k8s-version-589257)       <backend model='random'>/dev/random</backend>
	I1104 11:58:22.584102   79643 main.go:141] libmachine: (old-k8s-version-589257)     </rng>
	I1104 11:58:22.584110   79643 main.go:141] libmachine: (old-k8s-version-589257)     
	I1104 11:58:22.584115   79643 main.go:141] libmachine: (old-k8s-version-589257)     
	I1104 11:58:22.584122   79643 main.go:141] libmachine: (old-k8s-version-589257)   </devices>
	I1104 11:58:22.584137   79643 main.go:141] libmachine: (old-k8s-version-589257) </domain>
	I1104 11:58:22.584148   79643 main.go:141] libmachine: (old-k8s-version-589257) 
	I1104 11:58:22.588333   79643 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:54:fc:cf in network default
	I1104 11:58:22.588950   79643 main.go:141] libmachine: (old-k8s-version-589257) Ensuring networks are active...
	I1104 11:58:22.588969   79643 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 11:58:22.589606   79643 main.go:141] libmachine: (old-k8s-version-589257) Ensuring network default is active
	I1104 11:58:22.589967   79643 main.go:141] libmachine: (old-k8s-version-589257) Ensuring network mk-old-k8s-version-589257 is active
	I1104 11:58:22.590520   79643 main.go:141] libmachine: (old-k8s-version-589257) Getting domain xml...
	I1104 11:58:22.591260   79643 main.go:141] libmachine: (old-k8s-version-589257) Creating domain...
	I1104 11:58:24.067053   79643 main.go:141] libmachine: (old-k8s-version-589257) Waiting to get IP...
	I1104 11:58:24.068045   79643 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 11:58:24.068531   79643 main.go:141] libmachine: (old-k8s-version-589257) DBG | unable to find current IP address of domain old-k8s-version-589257 in network mk-old-k8s-version-589257
	I1104 11:58:24.068588   79643 main.go:141] libmachine: (old-k8s-version-589257) DBG | I1104 11:58:24.068509   79680 retry.go:31] will retry after 214.752408ms: waiting for machine to come up
	I1104 11:58:24.285255   79643 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 11:58:24.285850   79643 main.go:141] libmachine: (old-k8s-version-589257) DBG | unable to find current IP address of domain old-k8s-version-589257 in network mk-old-k8s-version-589257
	I1104 11:58:24.285874   79643 main.go:141] libmachine: (old-k8s-version-589257) DBG | I1104 11:58:24.285804   79680 retry.go:31] will retry after 269.946034ms: waiting for machine to come up
	I1104 11:58:24.557457   79643 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 11:58:24.558024   79643 main.go:141] libmachine: (old-k8s-version-589257) DBG | unable to find current IP address of domain old-k8s-version-589257 in network mk-old-k8s-version-589257
	I1104 11:58:24.558048   79643 main.go:141] libmachine: (old-k8s-version-589257) DBG | I1104 11:58:24.557998   79680 retry.go:31] will retry after 310.056875ms: waiting for machine to come up
	I1104 11:58:24.869404   79643 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 11:58:24.869859   79643 main.go:141] libmachine: (old-k8s-version-589257) DBG | unable to find current IP address of domain old-k8s-version-589257 in network mk-old-k8s-version-589257
	I1104 11:58:24.869879   79643 main.go:141] libmachine: (old-k8s-version-589257) DBG | I1104 11:58:24.869825   79680 retry.go:31] will retry after 560.947817ms: waiting for machine to come up
	I1104 11:58:25.432640   79643 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 11:58:25.433112   79643 main.go:141] libmachine: (old-k8s-version-589257) DBG | unable to find current IP address of domain old-k8s-version-589257 in network mk-old-k8s-version-589257
	I1104 11:58:25.433133   79643 main.go:141] libmachine: (old-k8s-version-589257) DBG | I1104 11:58:25.433038   79680 retry.go:31] will retry after 723.947043ms: waiting for machine to come up
	I1104 11:58:26.159173   79643 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 11:58:26.159805   79643 main.go:141] libmachine: (old-k8s-version-589257) DBG | unable to find current IP address of domain old-k8s-version-589257 in network mk-old-k8s-version-589257
	I1104 11:58:26.159832   79643 main.go:141] libmachine: (old-k8s-version-589257) DBG | I1104 11:58:26.159762   79680 retry.go:31] will retry after 854.634323ms: waiting for machine to come up
	I1104 11:58:27.016379   79643 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 11:58:27.016933   79643 main.go:141] libmachine: (old-k8s-version-589257) DBG | unable to find current IP address of domain old-k8s-version-589257 in network mk-old-k8s-version-589257
	I1104 11:58:27.016958   79643 main.go:141] libmachine: (old-k8s-version-589257) DBG | I1104 11:58:27.016828   79680 retry.go:31] will retry after 1.163375104s: waiting for machine to come up
	I1104 11:58:28.389902   79643 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 11:58:28.390496   79643 main.go:141] libmachine: (old-k8s-version-589257) DBG | unable to find current IP address of domain old-k8s-version-589257 in network mk-old-k8s-version-589257
	I1104 11:58:28.390521   79643 main.go:141] libmachine: (old-k8s-version-589257) DBG | I1104 11:58:28.390438   79680 retry.go:31] will retry after 1.026815054s: waiting for machine to come up
	I1104 11:58:29.418364   79643 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 11:58:29.418853   79643 main.go:141] libmachine: (old-k8s-version-589257) DBG | unable to find current IP address of domain old-k8s-version-589257 in network mk-old-k8s-version-589257
	I1104 11:58:29.418874   79643 main.go:141] libmachine: (old-k8s-version-589257) DBG | I1104 11:58:29.418823   79680 retry.go:31] will retry after 1.832875368s: waiting for machine to come up
	I1104 11:58:31.252935   79643 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 11:58:31.253648   79643 main.go:141] libmachine: (old-k8s-version-589257) DBG | unable to find current IP address of domain old-k8s-version-589257 in network mk-old-k8s-version-589257
	I1104 11:58:31.253673   79643 main.go:141] libmachine: (old-k8s-version-589257) DBG | I1104 11:58:31.253605   79680 retry.go:31] will retry after 2.212784842s: waiting for machine to come up
	I1104 11:58:33.468762   79643 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 11:58:33.469237   79643 main.go:141] libmachine: (old-k8s-version-589257) DBG | unable to find current IP address of domain old-k8s-version-589257 in network mk-old-k8s-version-589257
	I1104 11:58:33.469259   79643 main.go:141] libmachine: (old-k8s-version-589257) DBG | I1104 11:58:33.469201   79680 retry.go:31] will retry after 2.840658715s: waiting for machine to come up
	I1104 11:58:36.311025   79643 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 11:58:36.311575   79643 main.go:141] libmachine: (old-k8s-version-589257) DBG | unable to find current IP address of domain old-k8s-version-589257 in network mk-old-k8s-version-589257
	I1104 11:58:36.311601   79643 main.go:141] libmachine: (old-k8s-version-589257) DBG | I1104 11:58:36.311526   79680 retry.go:31] will retry after 3.493718361s: waiting for machine to come up
	I1104 11:58:39.807161   79643 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 11:58:39.807533   79643 main.go:141] libmachine: (old-k8s-version-589257) DBG | unable to find current IP address of domain old-k8s-version-589257 in network mk-old-k8s-version-589257
	I1104 11:58:39.807554   79643 main.go:141] libmachine: (old-k8s-version-589257) DBG | I1104 11:58:39.807513   79680 retry.go:31] will retry after 2.88243505s: waiting for machine to come up
	I1104 11:58:42.691469   79643 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 11:58:42.692039   79643 main.go:141] libmachine: (old-k8s-version-589257) Found IP for machine: 192.168.50.180
	I1104 11:58:42.692064   79643 main.go:141] libmachine: (old-k8s-version-589257) Reserving static IP address...
	I1104 11:58:42.692227   79643 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has current primary IP address 192.168.50.180 and MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 11:58:42.692511   79643 main.go:141] libmachine: (old-k8s-version-589257) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-589257", mac: "52:54:00:6b:6c:11", ip: "192.168.50.180"} in network mk-old-k8s-version-589257
	I1104 11:58:42.769342   79643 main.go:141] libmachine: (old-k8s-version-589257) Reserved static IP address: 192.168.50.180
	I1104 11:58:42.769365   79643 main.go:141] libmachine: (old-k8s-version-589257) Waiting for SSH to be available...
	I1104 11:58:42.769514   79643 main.go:141] libmachine: (old-k8s-version-589257) DBG | Getting to WaitForSSH function...
	I1104 11:58:42.772395   79643 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 11:58:42.772894   79643 main.go:141] libmachine: (old-k8s-version-589257) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:6c:11", ip: ""} in network mk-old-k8s-version-589257: {Iface:virbr2 ExpiryTime:2024-11-04 12:58:37 +0000 UTC Type:0 Mac:52:54:00:6b:6c:11 Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:minikube Clientid:01:52:54:00:6b:6c:11}
	I1104 11:58:42.772933   79643 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined IP address 192.168.50.180 and MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 11:58:42.773208   79643 main.go:141] libmachine: (old-k8s-version-589257) DBG | Using SSH client type: external
	I1104 11:58:42.773299   79643 main.go:141] libmachine: (old-k8s-version-589257) DBG | Using SSH private key: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/old-k8s-version-589257/id_rsa (-rw-------)
	I1104 11:58:42.773332   79643 main.go:141] libmachine: (old-k8s-version-589257) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.180 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19906-19898/.minikube/machines/old-k8s-version-589257/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1104 11:58:42.773350   79643 main.go:141] libmachine: (old-k8s-version-589257) DBG | About to run SSH command:
	I1104 11:58:42.773373   79643 main.go:141] libmachine: (old-k8s-version-589257) DBG | exit 0
	I1104 11:58:42.897022   79643 main.go:141] libmachine: (old-k8s-version-589257) DBG | SSH cmd err, output: <nil>: 
	I1104 11:58:42.897296   79643 main.go:141] libmachine: (old-k8s-version-589257) KVM machine creation complete!
	I1104 11:58:42.897591   79643 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetConfigRaw
	I1104 11:58:42.898151   79643 main.go:141] libmachine: (old-k8s-version-589257) Calling .DriverName
	I1104 11:58:42.898348   79643 main.go:141] libmachine: (old-k8s-version-589257) Calling .DriverName
	I1104 11:58:42.898514   79643 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1104 11:58:42.898528   79643 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetState
	I1104 11:58:42.900036   79643 main.go:141] libmachine: Detecting operating system of created instance...
	I1104 11:58:42.900058   79643 main.go:141] libmachine: Waiting for SSH to be available...
	I1104 11:58:42.900065   79643 main.go:141] libmachine: Getting to WaitForSSH function...
	I1104 11:58:42.900071   79643 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHHostname
	I1104 11:58:42.902596   79643 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 11:58:42.903010   79643 main.go:141] libmachine: (old-k8s-version-589257) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:6c:11", ip: ""} in network mk-old-k8s-version-589257: {Iface:virbr2 ExpiryTime:2024-11-04 12:58:37 +0000 UTC Type:0 Mac:52:54:00:6b:6c:11 Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:old-k8s-version-589257 Clientid:01:52:54:00:6b:6c:11}
	I1104 11:58:42.903039   79643 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined IP address 192.168.50.180 and MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 11:58:42.903150   79643 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHPort
	I1104 11:58:42.903319   79643 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHKeyPath
	I1104 11:58:42.903441   79643 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHKeyPath
	I1104 11:58:42.903555   79643 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHUsername
	I1104 11:58:42.903741   79643 main.go:141] libmachine: Using SSH client type: native
	I1104 11:58:42.903969   79643 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.180 22 <nil> <nil>}
	I1104 11:58:42.903984   79643 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1104 11:58:43.013189   79643 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1104 11:58:43.013219   79643 main.go:141] libmachine: Detecting the provisioner...
	I1104 11:58:43.013243   79643 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHHostname
	I1104 11:58:43.016416   79643 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 11:58:43.016758   79643 main.go:141] libmachine: (old-k8s-version-589257) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:6c:11", ip: ""} in network mk-old-k8s-version-589257: {Iface:virbr2 ExpiryTime:2024-11-04 12:58:37 +0000 UTC Type:0 Mac:52:54:00:6b:6c:11 Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:old-k8s-version-589257 Clientid:01:52:54:00:6b:6c:11}
	I1104 11:58:43.016789   79643 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined IP address 192.168.50.180 and MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 11:58:43.016992   79643 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHPort
	I1104 11:58:43.017179   79643 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHKeyPath
	I1104 11:58:43.017404   79643 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHKeyPath
	I1104 11:58:43.017589   79643 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHUsername
	I1104 11:58:43.017746   79643 main.go:141] libmachine: Using SSH client type: native
	I1104 11:58:43.017994   79643 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.180 22 <nil> <nil>}
	I1104 11:58:43.018011   79643 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1104 11:58:43.121695   79643 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1104 11:58:43.121792   79643 main.go:141] libmachine: found compatible host: buildroot
	I1104 11:58:43.121808   79643 main.go:141] libmachine: Provisioning with buildroot...
	I1104 11:58:43.121818   79643 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetMachineName
	I1104 11:58:43.122094   79643 buildroot.go:166] provisioning hostname "old-k8s-version-589257"
	I1104 11:58:43.122122   79643 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetMachineName
	I1104 11:58:43.122352   79643 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHHostname
	I1104 11:58:43.125459   79643 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 11:58:43.125868   79643 main.go:141] libmachine: (old-k8s-version-589257) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:6c:11", ip: ""} in network mk-old-k8s-version-589257: {Iface:virbr2 ExpiryTime:2024-11-04 12:58:37 +0000 UTC Type:0 Mac:52:54:00:6b:6c:11 Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:old-k8s-version-589257 Clientid:01:52:54:00:6b:6c:11}
	I1104 11:58:43.125896   79643 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined IP address 192.168.50.180 and MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 11:58:43.126061   79643 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHPort
	I1104 11:58:43.126241   79643 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHKeyPath
	I1104 11:58:43.126423   79643 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHKeyPath
	I1104 11:58:43.126570   79643 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHUsername
	I1104 11:58:43.126735   79643 main.go:141] libmachine: Using SSH client type: native
	I1104 11:58:43.126896   79643 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.180 22 <nil> <nil>}
	I1104 11:58:43.126907   79643 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-589257 && echo "old-k8s-version-589257" | sudo tee /etc/hostname
	I1104 11:58:43.243391   79643 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-589257
	
	I1104 11:58:43.243420   79643 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHHostname
	I1104 11:58:43.245813   79643 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 11:58:43.246094   79643 main.go:141] libmachine: (old-k8s-version-589257) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:6c:11", ip: ""} in network mk-old-k8s-version-589257: {Iface:virbr2 ExpiryTime:2024-11-04 12:58:37 +0000 UTC Type:0 Mac:52:54:00:6b:6c:11 Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:old-k8s-version-589257 Clientid:01:52:54:00:6b:6c:11}
	I1104 11:58:43.246118   79643 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined IP address 192.168.50.180 and MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 11:58:43.246341   79643 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHPort
	I1104 11:58:43.246521   79643 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHKeyPath
	I1104 11:58:43.246672   79643 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHKeyPath
	I1104 11:58:43.246797   79643 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHUsername
	I1104 11:58:43.246949   79643 main.go:141] libmachine: Using SSH client type: native
	I1104 11:58:43.247173   79643 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.180 22 <nil> <nil>}
	I1104 11:58:43.247204   79643 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-589257' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-589257/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-589257' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1104 11:58:43.357853   79643 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1104 11:58:43.357895   79643 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19906-19898/.minikube CaCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19906-19898/.minikube}
	I1104 11:58:43.357936   79643 buildroot.go:174] setting up certificates
	I1104 11:58:43.357948   79643 provision.go:84] configureAuth start
	I1104 11:58:43.357965   79643 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetMachineName
	I1104 11:58:43.358239   79643 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetIP
	I1104 11:58:43.361095   79643 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 11:58:43.361494   79643 main.go:141] libmachine: (old-k8s-version-589257) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:6c:11", ip: ""} in network mk-old-k8s-version-589257: {Iface:virbr2 ExpiryTime:2024-11-04 12:58:37 +0000 UTC Type:0 Mac:52:54:00:6b:6c:11 Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:old-k8s-version-589257 Clientid:01:52:54:00:6b:6c:11}
	I1104 11:58:43.361519   79643 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined IP address 192.168.50.180 and MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 11:58:43.361721   79643 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHHostname
	I1104 11:58:43.364946   79643 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 11:58:43.365356   79643 main.go:141] libmachine: (old-k8s-version-589257) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:6c:11", ip: ""} in network mk-old-k8s-version-589257: {Iface:virbr2 ExpiryTime:2024-11-04 12:58:37 +0000 UTC Type:0 Mac:52:54:00:6b:6c:11 Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:old-k8s-version-589257 Clientid:01:52:54:00:6b:6c:11}
	I1104 11:58:43.365391   79643 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined IP address 192.168.50.180 and MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 11:58:43.365508   79643 provision.go:143] copyHostCerts
	I1104 11:58:43.365549   79643 exec_runner.go:144] found /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem, removing ...
	I1104 11:58:43.365573   79643 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem
	I1104 11:58:43.365646   79643 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem (1123 bytes)
	I1104 11:58:43.365791   79643 exec_runner.go:144] found /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem, removing ...
	I1104 11:58:43.365810   79643 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem
	I1104 11:58:43.365845   79643 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem (1679 bytes)
	I1104 11:58:43.365902   79643 exec_runner.go:144] found /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem, removing ...
	I1104 11:58:43.365910   79643 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem
	I1104 11:58:43.365940   79643 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem (1078 bytes)
	I1104 11:58:43.366003   79643 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-589257 san=[127.0.0.1 192.168.50.180 localhost minikube old-k8s-version-589257]
	I1104 11:58:43.828018   79643 provision.go:177] copyRemoteCerts
	I1104 11:58:43.828070   79643 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1104 11:58:43.828090   79643 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHHostname
	I1104 11:58:43.831243   79643 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 11:58:43.831591   79643 main.go:141] libmachine: (old-k8s-version-589257) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:6c:11", ip: ""} in network mk-old-k8s-version-589257: {Iface:virbr2 ExpiryTime:2024-11-04 12:58:37 +0000 UTC Type:0 Mac:52:54:00:6b:6c:11 Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:old-k8s-version-589257 Clientid:01:52:54:00:6b:6c:11}
	I1104 11:58:43.831627   79643 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined IP address 192.168.50.180 and MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 11:58:43.831814   79643 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHPort
	I1104 11:58:43.832011   79643 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHKeyPath
	I1104 11:58:43.832165   79643 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHUsername
	I1104 11:58:43.832278   79643 sshutil.go:53] new ssh client: &{IP:192.168.50.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/old-k8s-version-589257/id_rsa Username:docker}
	I1104 11:58:43.924972   79643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1104 11:58:43.947726   79643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1104 11:58:43.970494   79643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1104 11:58:43.992932   79643 provision.go:87] duration metric: took 634.970206ms to configureAuth
	I1104 11:58:43.992960   79643 buildroot.go:189] setting minikube options for container-runtime
	I1104 11:58:43.993137   79643 config.go:182] Loaded profile config "old-k8s-version-589257": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1104 11:58:43.993204   79643 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHHostname
	I1104 11:58:43.995765   79643 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 11:58:43.996224   79643 main.go:141] libmachine: (old-k8s-version-589257) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:6c:11", ip: ""} in network mk-old-k8s-version-589257: {Iface:virbr2 ExpiryTime:2024-11-04 12:58:37 +0000 UTC Type:0 Mac:52:54:00:6b:6c:11 Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:old-k8s-version-589257 Clientid:01:52:54:00:6b:6c:11}
	I1104 11:58:43.996253   79643 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined IP address 192.168.50.180 and MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 11:58:43.996403   79643 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHPort
	I1104 11:58:43.996599   79643 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHKeyPath
	I1104 11:58:43.996751   79643 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHKeyPath
	I1104 11:58:43.996921   79643 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHUsername
	I1104 11:58:43.997086   79643 main.go:141] libmachine: Using SSH client type: native
	I1104 11:58:43.997272   79643 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.180 22 <nil> <nil>}
	I1104 11:58:43.997304   79643 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1104 11:58:44.218756   79643 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1104 11:58:44.218788   79643 main.go:141] libmachine: Checking connection to Docker...
	I1104 11:58:44.218800   79643 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetURL
	I1104 11:58:44.220091   79643 main.go:141] libmachine: (old-k8s-version-589257) DBG | Using libvirt version 6000000
	I1104 11:58:44.222591   79643 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 11:58:44.222982   79643 main.go:141] libmachine: (old-k8s-version-589257) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:6c:11", ip: ""} in network mk-old-k8s-version-589257: {Iface:virbr2 ExpiryTime:2024-11-04 12:58:37 +0000 UTC Type:0 Mac:52:54:00:6b:6c:11 Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:old-k8s-version-589257 Clientid:01:52:54:00:6b:6c:11}
	I1104 11:58:44.223006   79643 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined IP address 192.168.50.180 and MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 11:58:44.223239   79643 main.go:141] libmachine: Docker is up and running!
	I1104 11:58:44.223256   79643 main.go:141] libmachine: Reticulating splines...
	I1104 11:58:44.223264   79643 client.go:171] duration metric: took 22.477823265s to LocalClient.Create
	I1104 11:58:44.223282   79643 start.go:167] duration metric: took 22.477880547s to libmachine.API.Create "old-k8s-version-589257"
	I1104 11:58:44.223294   79643 start.go:293] postStartSetup for "old-k8s-version-589257" (driver="kvm2")
	I1104 11:58:44.223307   79643 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1104 11:58:44.223331   79643 main.go:141] libmachine: (old-k8s-version-589257) Calling .DriverName
	I1104 11:58:44.223532   79643 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1104 11:58:44.223556   79643 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHHostname
	I1104 11:58:44.225831   79643 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 11:58:44.226189   79643 main.go:141] libmachine: (old-k8s-version-589257) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:6c:11", ip: ""} in network mk-old-k8s-version-589257: {Iface:virbr2 ExpiryTime:2024-11-04 12:58:37 +0000 UTC Type:0 Mac:52:54:00:6b:6c:11 Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:old-k8s-version-589257 Clientid:01:52:54:00:6b:6c:11}
	I1104 11:58:44.226222   79643 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined IP address 192.168.50.180 and MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 11:58:44.226374   79643 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHPort
	I1104 11:58:44.226561   79643 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHKeyPath
	I1104 11:58:44.226720   79643 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHUsername
	I1104 11:58:44.226849   79643 sshutil.go:53] new ssh client: &{IP:192.168.50.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/old-k8s-version-589257/id_rsa Username:docker}
	I1104 11:58:44.307961   79643 ssh_runner.go:195] Run: cat /etc/os-release
	I1104 11:58:44.312638   79643 info.go:137] Remote host: Buildroot 2023.02.9
	I1104 11:58:44.312667   79643 filesync.go:126] Scanning /home/jenkins/minikube-integration/19906-19898/.minikube/addons for local assets ...
	I1104 11:58:44.312741   79643 filesync.go:126] Scanning /home/jenkins/minikube-integration/19906-19898/.minikube/files for local assets ...
	I1104 11:58:44.312842   79643 filesync.go:149] local asset: /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem -> 272182.pem in /etc/ssl/certs
	I1104 11:58:44.312984   79643 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1104 11:58:44.324977   79643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem --> /etc/ssl/certs/272182.pem (1708 bytes)
	I1104 11:58:44.349882   79643 start.go:296] duration metric: took 126.576717ms for postStartSetup
	I1104 11:58:44.349932   79643 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetConfigRaw
	I1104 11:58:44.350433   79643 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetIP
	I1104 11:58:44.352754   79643 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 11:58:44.353134   79643 main.go:141] libmachine: (old-k8s-version-589257) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:6c:11", ip: ""} in network mk-old-k8s-version-589257: {Iface:virbr2 ExpiryTime:2024-11-04 12:58:37 +0000 UTC Type:0 Mac:52:54:00:6b:6c:11 Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:old-k8s-version-589257 Clientid:01:52:54:00:6b:6c:11}
	I1104 11:58:44.353162   79643 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined IP address 192.168.50.180 and MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 11:58:44.353456   79643 profile.go:143] Saving config to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/old-k8s-version-589257/config.json ...
	I1104 11:58:44.353654   79643 start.go:128] duration metric: took 22.631840409s to createHost
	I1104 11:58:44.353681   79643 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHHostname
	I1104 11:58:44.356048   79643 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 11:58:44.356319   79643 main.go:141] libmachine: (old-k8s-version-589257) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:6c:11", ip: ""} in network mk-old-k8s-version-589257: {Iface:virbr2 ExpiryTime:2024-11-04 12:58:37 +0000 UTC Type:0 Mac:52:54:00:6b:6c:11 Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:old-k8s-version-589257 Clientid:01:52:54:00:6b:6c:11}
	I1104 11:58:44.356369   79643 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined IP address 192.168.50.180 and MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 11:58:44.356473   79643 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHPort
	I1104 11:58:44.356652   79643 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHKeyPath
	I1104 11:58:44.356805   79643 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHKeyPath
	I1104 11:58:44.356951   79643 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHUsername
	I1104 11:58:44.357108   79643 main.go:141] libmachine: Using SSH client type: native
	I1104 11:58:44.357289   79643 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.180 22 <nil> <nil>}
	I1104 11:58:44.357309   79643 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1104 11:58:44.465502   79643 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730721524.438543628
	
	I1104 11:58:44.465529   79643 fix.go:216] guest clock: 1730721524.438543628
	I1104 11:58:44.465541   79643 fix.go:229] Guest: 2024-11-04 11:58:44.438543628 +0000 UTC Remote: 2024-11-04 11:58:44.353667466 +0000 UTC m=+22.773020681 (delta=84.876162ms)
	I1104 11:58:44.465572   79643 fix.go:200] guest clock delta is within tolerance: 84.876162ms
	I1104 11:58:44.465580   79643 start.go:83] releasing machines lock for "old-k8s-version-589257", held for 22.743869914s
	I1104 11:58:44.465623   79643 main.go:141] libmachine: (old-k8s-version-589257) Calling .DriverName
	I1104 11:58:44.465919   79643 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetIP
	I1104 11:58:44.469077   79643 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 11:58:44.469457   79643 main.go:141] libmachine: (old-k8s-version-589257) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:6c:11", ip: ""} in network mk-old-k8s-version-589257: {Iface:virbr2 ExpiryTime:2024-11-04 12:58:37 +0000 UTC Type:0 Mac:52:54:00:6b:6c:11 Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:old-k8s-version-589257 Clientid:01:52:54:00:6b:6c:11}
	I1104 11:58:44.469506   79643 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined IP address 192.168.50.180 and MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 11:58:44.469589   79643 main.go:141] libmachine: (old-k8s-version-589257) Calling .DriverName
	I1104 11:58:44.470058   79643 main.go:141] libmachine: (old-k8s-version-589257) Calling .DriverName
	I1104 11:58:44.470260   79643 main.go:141] libmachine: (old-k8s-version-589257) Calling .DriverName
	I1104 11:58:44.470336   79643 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1104 11:58:44.470386   79643 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHHostname
	I1104 11:58:44.470462   79643 ssh_runner.go:195] Run: cat /version.json
	I1104 11:58:44.470481   79643 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHHostname
	I1104 11:58:44.473291   79643 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 11:58:44.473544   79643 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 11:58:44.473712   79643 main.go:141] libmachine: (old-k8s-version-589257) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:6c:11", ip: ""} in network mk-old-k8s-version-589257: {Iface:virbr2 ExpiryTime:2024-11-04 12:58:37 +0000 UTC Type:0 Mac:52:54:00:6b:6c:11 Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:old-k8s-version-589257 Clientid:01:52:54:00:6b:6c:11}
	I1104 11:58:44.473751   79643 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined IP address 192.168.50.180 and MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 11:58:44.473913   79643 main.go:141] libmachine: (old-k8s-version-589257) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:6c:11", ip: ""} in network mk-old-k8s-version-589257: {Iface:virbr2 ExpiryTime:2024-11-04 12:58:37 +0000 UTC Type:0 Mac:52:54:00:6b:6c:11 Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:old-k8s-version-589257 Clientid:01:52:54:00:6b:6c:11}
	I1104 11:58:44.473936   79643 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined IP address 192.168.50.180 and MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 11:58:44.474085   79643 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHPort
	I1104 11:58:44.474180   79643 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHPort
	I1104 11:58:44.474247   79643 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHKeyPath
	I1104 11:58:44.474342   79643 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHKeyPath
	I1104 11:58:44.474402   79643 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHUsername
	I1104 11:58:44.474484   79643 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHUsername
	I1104 11:58:44.474571   79643 sshutil.go:53] new ssh client: &{IP:192.168.50.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/old-k8s-version-589257/id_rsa Username:docker}
	I1104 11:58:44.474890   79643 sshutil.go:53] new ssh client: &{IP:192.168.50.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/old-k8s-version-589257/id_rsa Username:docker}
	I1104 11:58:44.576110   79643 ssh_runner.go:195] Run: systemctl --version
	I1104 11:58:44.583560   79643 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1104 11:58:44.753663   79643 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1104 11:58:44.760554   79643 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1104 11:58:44.760620   79643 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1104 11:58:44.784201   79643 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1104 11:58:44.784222   79643 start.go:495] detecting cgroup driver to use...
	I1104 11:58:44.784282   79643 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1104 11:58:44.806779   79643 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1104 11:58:44.820902   79643 docker.go:217] disabling cri-docker service (if available) ...
	I1104 11:58:44.820965   79643 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1104 11:58:44.840367   79643 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1104 11:58:44.859683   79643 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1104 11:58:44.999950   79643 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1104 11:58:45.182865   79643 docker.go:233] disabling docker service ...
	I1104 11:58:45.182922   79643 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1104 11:58:45.200439   79643 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1104 11:58:45.214564   79643 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1104 11:58:45.356228   79643 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1104 11:58:45.486859   79643 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1104 11:58:45.500538   79643 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1104 11:58:45.517431   79643 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1104 11:58:45.517506   79643 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 11:58:45.527242   79643 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1104 11:58:45.527311   79643 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 11:58:45.536882   79643 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 11:58:45.546160   79643 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 11:58:45.556168   79643 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1104 11:58:45.566468   79643 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1104 11:58:45.575392   79643 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1104 11:58:45.575454   79643 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1104 11:58:45.587527   79643 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1104 11:58:45.596601   79643 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1104 11:58:45.708424   79643 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1104 11:58:45.826403   79643 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1104 11:58:45.826482   79643 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1104 11:58:45.832407   79643 start.go:563] Will wait 60s for crictl version
	I1104 11:58:45.832465   79643 ssh_runner.go:195] Run: which crictl
	I1104 11:58:45.836760   79643 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1104 11:58:45.885153   79643 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1104 11:58:45.885260   79643 ssh_runner.go:195] Run: crio --version
	I1104 11:58:45.920054   79643 ssh_runner.go:195] Run: crio --version
	I1104 11:58:45.950718   79643 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1104 11:58:45.951894   79643 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetIP
	I1104 11:58:45.955065   79643 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 11:58:45.955493   79643 main.go:141] libmachine: (old-k8s-version-589257) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:6c:11", ip: ""} in network mk-old-k8s-version-589257: {Iface:virbr2 ExpiryTime:2024-11-04 12:58:37 +0000 UTC Type:0 Mac:52:54:00:6b:6c:11 Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:old-k8s-version-589257 Clientid:01:52:54:00:6b:6c:11}
	I1104 11:58:45.955515   79643 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined IP address 192.168.50.180 and MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 11:58:45.955745   79643 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1104 11:58:45.960074   79643 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1104 11:58:45.972430   79643 kubeadm.go:883] updating cluster {Name:old-k8s-version-589257 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-589257 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.180 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1104 11:58:45.972560   79643 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1104 11:58:45.972619   79643 ssh_runner.go:195] Run: sudo crictl images --output json
	I1104 11:58:46.002655   79643 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1104 11:58:46.002730   79643 ssh_runner.go:195] Run: which lz4
	I1104 11:58:46.006651   79643 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1104 11:58:46.010495   79643 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1104 11:58:46.010533   79643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1104 11:58:47.484718   79643 crio.go:462] duration metric: took 1.478088337s to copy over tarball
	I1104 11:58:47.484793   79643 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1104 11:58:50.055595   79643 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.570770479s)
	I1104 11:58:50.055626   79643 crio.go:469] duration metric: took 2.570881555s to extract the tarball
	I1104 11:58:50.055636   79643 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1104 11:58:50.097022   79643 ssh_runner.go:195] Run: sudo crictl images --output json
	I1104 11:58:50.142607   79643 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1104 11:58:50.142628   79643 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1104 11:58:50.142669   79643 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1104 11:58:50.142720   79643 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1104 11:58:50.142747   79643 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1104 11:58:50.142770   79643 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1104 11:58:50.142794   79643 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1104 11:58:50.142843   79643 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1104 11:58:50.142849   79643 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1104 11:58:50.142850   79643 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1104 11:58:50.144061   79643 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1104 11:58:50.144066   79643 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1104 11:58:50.144073   79643 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1104 11:58:50.144077   79643 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1104 11:58:50.144072   79643 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1104 11:58:50.144060   79643 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1104 11:58:50.144107   79643 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1104 11:58:50.144247   79643 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1104 11:58:50.302067   79643 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1104 11:58:50.302280   79643 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1104 11:58:50.306234   79643 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1104 11:58:50.320213   79643 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1104 11:58:50.321668   79643 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1104 11:58:50.322556   79643 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1104 11:58:50.328097   79643 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1104 11:58:50.423848   79643 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1104 11:58:50.423879   79643 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1104 11:58:50.423900   79643 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1104 11:58:50.423909   79643 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1104 11:58:50.423954   79643 ssh_runner.go:195] Run: which crictl
	I1104 11:58:50.423954   79643 ssh_runner.go:195] Run: which crictl
	I1104 11:58:50.452920   79643 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1104 11:58:50.452965   79643 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1104 11:58:50.453014   79643 ssh_runner.go:195] Run: which crictl
	I1104 11:58:50.465888   79643 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1104 11:58:50.465934   79643 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1104 11:58:50.465980   79643 ssh_runner.go:195] Run: which crictl
	I1104 11:58:50.466032   79643 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1104 11:58:50.466068   79643 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1104 11:58:50.466108   79643 ssh_runner.go:195] Run: which crictl
	I1104 11:58:50.474446   79643 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1104 11:58:50.474488   79643 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1104 11:58:50.474502   79643 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1104 11:58:50.474525   79643 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1104 11:58:50.474541   79643 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1104 11:58:50.474568   79643 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1104 11:58:50.474581   79643 ssh_runner.go:195] Run: which crictl
	I1104 11:58:50.474584   79643 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1104 11:58:50.474528   79643 ssh_runner.go:195] Run: which crictl
	I1104 11:58:50.474636   79643 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1104 11:58:50.474680   79643 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1104 11:58:50.582532   79643 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1104 11:58:50.582551   79643 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1104 11:58:50.582540   79643 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1104 11:58:50.597815   79643 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1104 11:58:50.597851   79643 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1104 11:58:50.597815   79643 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1104 11:58:50.599250   79643 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1104 11:58:50.700657   79643 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1104 11:58:50.700700   79643 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1104 11:58:50.704107   79643 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1104 11:58:50.741018   79643 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1104 11:58:50.741122   79643 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1104 11:58:50.741203   79643 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1104 11:58:50.741273   79643 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1104 11:58:50.872391   79643 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1104 11:58:50.872393   79643 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1104 11:58:50.872464   79643 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1104 11:58:50.872486   79643 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1104 11:58:50.892369   79643 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1104 11:58:50.892393   79643 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1104 11:58:50.894042   79643 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1104 11:58:50.933335   79643 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1104 11:58:50.943464   79643 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1104 11:58:51.208829   79643 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1104 11:58:51.351010   79643 cache_images.go:92] duration metric: took 1.208367049s to LoadCachedImages
	W1104 11:58:51.351090   79643 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	I1104 11:58:51.351103   79643 kubeadm.go:934] updating node { 192.168.50.180 8443 v1.20.0 crio true true} ...
	I1104 11:58:51.351249   79643 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-589257 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.180
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-589257 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1104 11:58:51.351317   79643 ssh_runner.go:195] Run: crio config
	I1104 11:58:51.405903   79643 cni.go:84] Creating CNI manager for ""
	I1104 11:58:51.405929   79643 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1104 11:58:51.405942   79643 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1104 11:58:51.405967   79643 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.180 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-589257 NodeName:old-k8s-version-589257 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.180"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.180 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1104 11:58:51.406114   79643 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.180
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-589257"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.180
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.180"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
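	A quick sanity check of the rendered configuration on the node, before kubeadm runs, can be done over SSH (a minimal sketch; the paths are the ones minikube writes earlier in this log, and the CRI-O cgroup check is an extra diagnostic assumption, not something the test itself performs):
	
	  sudo cat /var/tmp/minikube/kubeadm.yaml                                  # the kubeadm config shown above
	  sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf           # kubelet drop-in written by minikube
	  sudo crio config | grep -i cgroup_manager                                # should agree with cgroupDriver: cgroupfs above
	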
	I1104 11:58:51.406198   79643 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1104 11:58:51.416923   79643 binaries.go:44] Found k8s binaries, skipping transfer
	I1104 11:58:51.416992   79643 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1104 11:58:51.426623   79643 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1104 11:58:51.446414   79643 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1104 11:58:51.463270   79643 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1104 11:58:51.482323   79643 ssh_runner.go:195] Run: grep 192.168.50.180	control-plane.minikube.internal$ /etc/hosts
	I1104 11:58:51.486032   79643 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.180	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1104 11:58:51.498234   79643 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1104 11:58:51.622122   79643 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1104 11:58:51.640450   79643 certs.go:68] Setting up /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/old-k8s-version-589257 for IP: 192.168.50.180
	I1104 11:58:51.640476   79643 certs.go:194] generating shared ca certs ...
	I1104 11:58:51.640499   79643 certs.go:226] acquiring lock for ca certs: {Name:mk4fb0469da6697654d0290fd25edddd463f47c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 11:58:51.640692   79643 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.key
	I1104 11:58:51.640764   79643 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.key
	I1104 11:58:51.640780   79643 certs.go:256] generating profile certs ...
	I1104 11:58:51.640880   79643 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/old-k8s-version-589257/client.key
	I1104 11:58:51.640901   79643 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/old-k8s-version-589257/client.crt with IP's: []
	I1104 11:58:52.131469   79643 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/old-k8s-version-589257/client.crt ...
	I1104 11:58:52.131502   79643 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/old-k8s-version-589257/client.crt: {Name:mkf018c5b76870ebf49284756eba9bceee67b549 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 11:58:52.131695   79643 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/old-k8s-version-589257/client.key ...
	I1104 11:58:52.131711   79643 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/old-k8s-version-589257/client.key: {Name:mkef8bc070c8a823828e2ffdb74343be48300f0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 11:58:52.131817   79643 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/old-k8s-version-589257/apiserver.key.b78bafdb
	I1104 11:58:52.131836   79643 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/old-k8s-version-589257/apiserver.crt.b78bafdb with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.180]
	I1104 11:58:52.371539   79643 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/old-k8s-version-589257/apiserver.crt.b78bafdb ...
	I1104 11:58:52.371570   79643 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/old-k8s-version-589257/apiserver.crt.b78bafdb: {Name:mk205b320f554d40cc005fb60f655821e47e6dc7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 11:58:52.371747   79643 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/old-k8s-version-589257/apiserver.key.b78bafdb ...
	I1104 11:58:52.371765   79643 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/old-k8s-version-589257/apiserver.key.b78bafdb: {Name:mkd7e03d70f0ed4431376e63fb7aab4df9115c7d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 11:58:52.371865   79643 certs.go:381] copying /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/old-k8s-version-589257/apiserver.crt.b78bafdb -> /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/old-k8s-version-589257/apiserver.crt
	I1104 11:58:52.371953   79643 certs.go:385] copying /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/old-k8s-version-589257/apiserver.key.b78bafdb -> /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/old-k8s-version-589257/apiserver.key
	I1104 11:58:52.372029   79643 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/old-k8s-version-589257/proxy-client.key
	I1104 11:58:52.372056   79643 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/old-k8s-version-589257/proxy-client.crt with IP's: []
	I1104 11:58:52.619495   79643 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/old-k8s-version-589257/proxy-client.crt ...
	I1104 11:58:52.619532   79643 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/old-k8s-version-589257/proxy-client.crt: {Name:mk24b91500b71b22738ea45af68ed7cb0d5f9c48 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 11:58:52.619728   79643 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/old-k8s-version-589257/proxy-client.key ...
	I1104 11:58:52.619748   79643 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/old-k8s-version-589257/proxy-client.key: {Name:mkc7017d24f5e65ffb0ce246272bf9e30b2bc91d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 11:58:52.620008   79643 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218.pem (1338 bytes)
	W1104 11:58:52.620060   79643 certs.go:480] ignoring /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218_empty.pem, impossibly tiny 0 bytes
	I1104 11:58:52.620075   79643 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem (1675 bytes)
	I1104 11:58:52.620107   79643 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem (1078 bytes)
	I1104 11:58:52.620147   79643 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem (1123 bytes)
	I1104 11:58:52.620178   79643 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem (1679 bytes)
	I1104 11:58:52.620231   79643 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem (1708 bytes)
	I1104 11:58:52.621011   79643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1104 11:58:52.660753   79643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1104 11:58:52.686047   79643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1104 11:58:52.708852   79643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1104 11:58:52.731684   79643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/old-k8s-version-589257/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1104 11:58:52.754415   79643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/old-k8s-version-589257/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1104 11:58:52.781679   79643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/old-k8s-version-589257/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1104 11:58:52.809167   79643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/old-k8s-version-589257/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1104 11:58:52.833293   79643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1104 11:58:52.856970   79643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218.pem --> /usr/share/ca-certificates/27218.pem (1338 bytes)
	I1104 11:58:52.879467   79643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem --> /usr/share/ca-certificates/272182.pem (1708 bytes)
	I1104 11:58:52.902449   79643 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1104 11:58:52.918444   79643 ssh_runner.go:195] Run: openssl version
	I1104 11:58:52.923911   79643 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1104 11:58:52.934225   79643 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1104 11:58:52.938791   79643 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  4 10:38 /usr/share/ca-certificates/minikubeCA.pem
	I1104 11:58:52.938845   79643 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1104 11:58:52.944637   79643 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1104 11:58:52.956611   79643 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/27218.pem && ln -fs /usr/share/ca-certificates/27218.pem /etc/ssl/certs/27218.pem"
	I1104 11:58:52.968182   79643 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/27218.pem
	I1104 11:58:52.972828   79643 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  4 10:49 /usr/share/ca-certificates/27218.pem
	I1104 11:58:52.972888   79643 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/27218.pem
	I1104 11:58:52.978595   79643 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/27218.pem /etc/ssl/certs/51391683.0"
	I1104 11:58:52.990863   79643 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/272182.pem && ln -fs /usr/share/ca-certificates/272182.pem /etc/ssl/certs/272182.pem"
	I1104 11:58:53.002587   79643 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/272182.pem
	I1104 11:58:53.008274   79643 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  4 10:49 /usr/share/ca-certificates/272182.pem
	I1104 11:58:53.008329   79643 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/272182.pem
	I1104 11:58:53.015525   79643 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/272182.pem /etc/ssl/certs/3ec20f2e.0"
	I1104 11:58:53.029421   79643 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1104 11:58:53.034129   79643 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1104 11:58:53.034176   79643 kubeadm.go:392] StartCluster: {Name:old-k8s-version-589257 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-589257 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.180 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1104 11:58:53.034246   79643 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1104 11:58:53.034280   79643 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1104 11:58:53.075696   79643 cri.go:89] found id: ""
	I1104 11:58:53.075770   79643 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1104 11:58:53.088823   79643 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1104 11:58:53.100965   79643 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1104 11:58:53.110593   79643 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1104 11:58:53.110611   79643 kubeadm.go:157] found existing configuration files:
	
	I1104 11:58:53.110651   79643 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1104 11:58:53.122334   79643 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1104 11:58:53.122386   79643 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1104 11:58:53.134266   79643 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1104 11:58:53.143620   79643 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1104 11:58:53.143669   79643 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1104 11:58:53.152864   79643 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1104 11:58:53.161403   79643 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1104 11:58:53.161462   79643 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1104 11:58:53.170582   79643 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1104 11:58:53.179067   79643 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1104 11:58:53.179126   79643 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1104 11:58:53.187879   79643 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1104 11:58:53.301819   79643 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1104 11:58:53.301929   79643 kubeadm.go:310] [preflight] Running pre-flight checks
	I1104 11:58:53.453634   79643 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1104 11:58:53.453784   79643 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1104 11:58:53.453927   79643 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1104 11:58:53.629015   79643 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1104 11:58:53.698589   79643 out.go:235]   - Generating certificates and keys ...
	I1104 11:58:53.698706   79643 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1104 11:58:53.698790   79643 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1104 11:58:53.832360   79643 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1104 11:58:53.919789   79643 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1104 11:58:54.113864   79643 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1104 11:58:54.504357   79643 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1104 11:58:54.595087   79643 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1104 11:58:54.595612   79643 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-589257] and IPs [192.168.50.180 127.0.0.1 ::1]
	I1104 11:58:54.713313   79643 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1104 11:58:54.713550   79643 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-589257] and IPs [192.168.50.180 127.0.0.1 ::1]
	I1104 11:58:54.887621   79643 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1104 11:58:55.096421   79643 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1104 11:58:55.366212   79643 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1104 11:58:55.366340   79643 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1104 11:58:55.606612   79643 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1104 11:58:56.061260   79643 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1104 11:58:56.145657   79643 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1104 11:58:56.374317   79643 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1104 11:58:56.394017   79643 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1104 11:58:56.394158   79643 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1104 11:58:56.394224   79643 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1104 11:58:56.532792   79643 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1104 11:58:56.534902   79643 out.go:235]   - Booting up control plane ...
	I1104 11:58:56.535032   79643 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1104 11:58:56.539619   79643 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1104 11:58:56.540501   79643 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1104 11:58:56.541293   79643 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1104 11:58:56.545077   79643 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1104 11:59:36.537189   79643 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1104 11:59:36.538109   79643 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1104 11:59:36.538398   79643 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1104 11:59:41.538816   79643 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1104 11:59:41.539048   79643 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1104 11:59:51.537911   79643 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1104 11:59:51.538158   79643 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1104 12:00:11.537014   79643 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1104 12:00:11.537338   79643 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1104 12:00:51.537599   79643 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1104 12:00:51.537884   79643 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1104 12:00:51.537914   79643 kubeadm.go:310] 
	I1104 12:00:51.537992   79643 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1104 12:00:51.538052   79643 kubeadm.go:310] 		timed out waiting for the condition
	I1104 12:00:51.538091   79643 kubeadm.go:310] 
	I1104 12:00:51.538143   79643 kubeadm.go:310] 	This error is likely caused by:
	I1104 12:00:51.538190   79643 kubeadm.go:310] 		- The kubelet is not running
	I1104 12:00:51.538311   79643 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1104 12:00:51.538320   79643 kubeadm.go:310] 
	I1104 12:00:51.538457   79643 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1104 12:00:51.538501   79643 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1104 12:00:51.538546   79643 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1104 12:00:51.538554   79643 kubeadm.go:310] 
	I1104 12:00:51.538702   79643 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1104 12:00:51.538835   79643 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1104 12:00:51.538849   79643 kubeadm.go:310] 
	I1104 12:00:51.538975   79643 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1104 12:00:51.539109   79643 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1104 12:00:51.539210   79643 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1104 12:00:51.539305   79643 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1104 12:00:51.539318   79643 kubeadm.go:310] 
	I1104 12:00:51.539616   79643 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1104 12:00:51.539744   79643 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1104 12:00:51.539846   79643 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W1104 12:00:51.539987   79643 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-589257] and IPs [192.168.50.180 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-589257] and IPs [192.168.50.180 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
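	The kubeadm output above already lists the triage commands for a kubelet that never reports healthy; a minimal sketch of running them against this profile (assuming SSH access to the VM, for example via 'minikube ssh -p old-k8s-version-589257'):
	
	  systemctl status kubelet
	  journalctl -xeu kubelet | tail -n 50
	  curl -sSL http://localhost:10248/healthz
	  sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	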
	I1104 12:00:51.540033   79643 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1104 12:00:51.975707   79643 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1104 12:00:51.991040   79643 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1104 12:00:52.000351   79643 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1104 12:00:52.000373   79643 kubeadm.go:157] found existing configuration files:
	
	I1104 12:00:52.000428   79643 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1104 12:00:52.009366   79643 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1104 12:00:52.009434   79643 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1104 12:00:52.018467   79643 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1104 12:00:52.027196   79643 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1104 12:00:52.027268   79643 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1104 12:00:52.036285   79643 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1104 12:00:52.044967   79643 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1104 12:00:52.045030   79643 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1104 12:00:52.056045   79643 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1104 12:00:52.064739   79643 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1104 12:00:52.064786   79643 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1104 12:00:52.073887   79643 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1104 12:00:52.271015   79643 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1104 12:02:48.361822   79643 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1104 12:02:48.361895   79643 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1104 12:02:48.363459   79643 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1104 12:02:48.363527   79643 kubeadm.go:310] [preflight] Running pre-flight checks
	I1104 12:02:48.363618   79643 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1104 12:02:48.363728   79643 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1104 12:02:48.363844   79643 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1104 12:02:48.363934   79643 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1104 12:02:48.366551   79643 out.go:235]   - Generating certificates and keys ...
	I1104 12:02:48.366648   79643 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1104 12:02:48.366724   79643 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1104 12:02:48.366819   79643 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1104 12:02:48.366911   79643 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1104 12:02:48.367003   79643 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1104 12:02:48.367062   79643 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1104 12:02:48.367122   79643 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1104 12:02:48.367178   79643 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1104 12:02:48.367241   79643 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1104 12:02:48.367329   79643 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1104 12:02:48.367388   79643 kubeadm.go:310] [certs] Using the existing "sa" key
	I1104 12:02:48.367481   79643 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1104 12:02:48.367553   79643 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1104 12:02:48.367613   79643 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1104 12:02:48.367673   79643 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1104 12:02:48.367724   79643 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1104 12:02:48.367816   79643 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1104 12:02:48.367888   79643 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1104 12:02:48.367925   79643 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1104 12:02:48.367980   79643 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1104 12:02:48.369267   79643 out.go:235]   - Booting up control plane ...
	I1104 12:02:48.369368   79643 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1104 12:02:48.369437   79643 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1104 12:02:48.369498   79643 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1104 12:02:48.369567   79643 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1104 12:02:48.369706   79643 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1104 12:02:48.369771   79643 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1104 12:02:48.369838   79643 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1104 12:02:48.370037   79643 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1104 12:02:48.370111   79643 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1104 12:02:48.370274   79643 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1104 12:02:48.370333   79643 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1104 12:02:48.370561   79643 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1104 12:02:48.370642   79643 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1104 12:02:48.370864   79643 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1104 12:02:48.370967   79643 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1104 12:02:48.371234   79643 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1104 12:02:48.371248   79643 kubeadm.go:310] 
	I1104 12:02:48.371314   79643 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1104 12:02:48.371351   79643 kubeadm.go:310] 		timed out waiting for the condition
	I1104 12:02:48.371358   79643 kubeadm.go:310] 
	I1104 12:02:48.371387   79643 kubeadm.go:310] 	This error is likely caused by:
	I1104 12:02:48.371419   79643 kubeadm.go:310] 		- The kubelet is not running
	I1104 12:02:48.371509   79643 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1104 12:02:48.371518   79643 kubeadm.go:310] 
	I1104 12:02:48.371596   79643 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1104 12:02:48.371629   79643 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1104 12:02:48.371656   79643 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1104 12:02:48.371662   79643 kubeadm.go:310] 
	I1104 12:02:48.371739   79643 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1104 12:02:48.371813   79643 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1104 12:02:48.371824   79643 kubeadm.go:310] 
	I1104 12:02:48.371930   79643 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1104 12:02:48.372001   79643 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1104 12:02:48.372083   79643 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1104 12:02:48.372186   79643 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1104 12:02:48.372236   79643 kubeadm.go:310] 
	I1104 12:02:48.372260   79643 kubeadm.go:394] duration metric: took 3m55.338086411s to StartCluster
	I1104 12:02:48.372298   79643 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:02:48.372348   79643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:02:48.404797   79643 cri.go:89] found id: ""
	I1104 12:02:48.404826   79643 logs.go:282] 0 containers: []
	W1104 12:02:48.404837   79643 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:02:48.404844   79643 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:02:48.404911   79643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:02:48.436242   79643 cri.go:89] found id: ""
	I1104 12:02:48.436267   79643 logs.go:282] 0 containers: []
	W1104 12:02:48.436274   79643 logs.go:284] No container was found matching "etcd"
	I1104 12:02:48.436279   79643 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:02:48.436323   79643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:02:48.470662   79643 cri.go:89] found id: ""
	I1104 12:02:48.470687   79643 logs.go:282] 0 containers: []
	W1104 12:02:48.470694   79643 logs.go:284] No container was found matching "coredns"
	I1104 12:02:48.470700   79643 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:02:48.470742   79643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:02:48.507795   79643 cri.go:89] found id: ""
	I1104 12:02:48.507822   79643 logs.go:282] 0 containers: []
	W1104 12:02:48.507833   79643 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:02:48.507863   79643 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:02:48.507921   79643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:02:48.539567   79643 cri.go:89] found id: ""
	I1104 12:02:48.539592   79643 logs.go:282] 0 containers: []
	W1104 12:02:48.539602   79643 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:02:48.539609   79643 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:02:48.539669   79643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:02:48.571544   79643 cri.go:89] found id: ""
	I1104 12:02:48.571572   79643 logs.go:282] 0 containers: []
	W1104 12:02:48.571580   79643 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:02:48.571586   79643 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:02:48.571631   79643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:02:48.602779   79643 cri.go:89] found id: ""
	I1104 12:02:48.602802   79643 logs.go:282] 0 containers: []
	W1104 12:02:48.602809   79643 logs.go:284] No container was found matching "kindnet"
	I1104 12:02:48.602817   79643 logs.go:123] Gathering logs for kubelet ...
	I1104 12:02:48.602830   79643 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:02:48.652331   79643 logs.go:123] Gathering logs for dmesg ...
	I1104 12:02:48.652370   79643 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:02:48.665591   79643 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:02:48.665624   79643 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:02:48.766119   79643 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:02:48.766138   79643 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:02:48.766151   79643 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:02:48.875323   79643 logs.go:123] Gathering logs for container status ...
	I1104 12:02:48.875362   79643 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1104 12:02:48.910468   79643 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1104 12:02:48.910519   79643 out.go:270] * 
	* 
	W1104 12:02:48.910575   79643 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1104 12:02:48.910592   79643 out.go:270] * 
	* 
	W1104 12:02:48.911437   79643 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1104 12:02:48.914546   79643 out.go:201] 
	W1104 12:02:48.915780   79643 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1104 12:02:48.915818   79643 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1104 12:02:48.915844   79643 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1104 12:02:48.917384   79643 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-589257 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-589257 -n old-k8s-version-589257
E1104 12:02:49.070157   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/flannel-528108/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-589257 -n old-k8s-version-589257: exit status 6 (229.130343ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1104 12:02:49.185644   85615 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-589257" does not appear in /home/jenkins/minikube-integration/19906-19898/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-589257" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (267.62s)
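A minimal sketch of the manual follow-up suggested by the output above, assuming the old-k8s-version-589257 profile still exists on the host; the commands, the profile name, and the --extra-config=kubelet.cgroup-driver=systemd flag are the ones quoted in the failure message and the status warning, and were not run as part of this report:

	# Inspect the kubelet and CRI-O containers on the node (commands quoted in the kubeadm output):
	minikube -p old-k8s-version-589257 ssh "sudo systemctl status kubelet"
	minikube -p old-k8s-version-589257 ssh "sudo journalctl -xeu kubelet"
	minikube -p old-k8s-version-589257 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"
	# Repoint kubectl if the context is stale, as the post-mortem status warns:
	minikube -p old-k8s-version-589257 update-context
	# Retry the start with the cgroup driver suggested in the failure message:
	minikube start -p old-k8s-version-589257 --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd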

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (139.49s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-908370 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-908370 --alsologtostderr -v=3: exit status 82 (2m0.922663297s)

                                                
                                                
-- stdout --
	* Stopping node "no-preload-908370"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1104 12:00:08.329578   82817 out.go:345] Setting OutFile to fd 1 ...
	I1104 12:00:08.329805   82817 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1104 12:00:08.329817   82817 out.go:358] Setting ErrFile to fd 2...
	I1104 12:00:08.329827   82817 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1104 12:00:08.330107   82817 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19906-19898/.minikube/bin
	I1104 12:00:08.330395   82817 out.go:352] Setting JSON to false
	I1104 12:00:08.330489   82817 mustload.go:65] Loading cluster: no-preload-908370
	I1104 12:00:08.330882   82817 config.go:182] Loaded profile config "no-preload-908370": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1104 12:00:08.330951   82817 profile.go:143] Saving config to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/no-preload-908370/config.json ...
	I1104 12:00:08.331116   82817 mustload.go:65] Loading cluster: no-preload-908370
	I1104 12:00:08.331270   82817 config.go:182] Loaded profile config "no-preload-908370": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1104 12:00:08.331298   82817 stop.go:39] StopHost: no-preload-908370
	I1104 12:00:08.331675   82817 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:00:08.331713   82817 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:00:08.349522   82817 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39693
	I1104 12:00:08.350110   82817 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:00:08.350789   82817 main.go:141] libmachine: Using API Version  1
	I1104 12:00:08.350815   82817 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:00:08.351175   82817 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:00:08.353302   82817 out.go:177] * Stopping node "no-preload-908370"  ...
	I1104 12:00:08.354492   82817 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1104 12:00:08.354519   82817 main.go:141] libmachine: (no-preload-908370) Calling .DriverName
	I1104 12:00:08.354836   82817 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1104 12:00:08.354862   82817 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHHostname
	I1104 12:00:08.357996   82817 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:00:08.358394   82817 main.go:141] libmachine: (no-preload-908370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:66:d5", ip: ""} in network mk-no-preload-908370: {Iface:virbr3 ExpiryTime:2024-11-04 12:59:00 +0000 UTC Type:0 Mac:52:54:00:f8:66:d5 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:no-preload-908370 Clientid:01:52:54:00:f8:66:d5}
	I1104 12:00:08.358423   82817 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined IP address 192.168.61.91 and MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:00:08.358592   82817 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHPort
	I1104 12:00:08.358751   82817 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHKeyPath
	I1104 12:00:08.358892   82817 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHUsername
	I1104 12:00:08.359023   82817 sshutil.go:53] new ssh client: &{IP:192.168.61.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/no-preload-908370/id_rsa Username:docker}
	I1104 12:00:08.483504   82817 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1104 12:00:08.543748   82817 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1104 12:00:08.601869   82817 main.go:141] libmachine: Stopping "no-preload-908370"...
	I1104 12:00:08.601894   82817 main.go:141] libmachine: (no-preload-908370) Calling .GetState
	I1104 12:00:08.603632   82817 main.go:141] libmachine: (no-preload-908370) Calling .Stop
	I1104 12:00:08.607134   82817 main.go:141] libmachine: (no-preload-908370) Waiting for machine to stop 0/120
	I1104 12:00:09.608680   82817 main.go:141] libmachine: (no-preload-908370) Waiting for machine to stop 1/120
	I1104 12:00:10.610294   82817 main.go:141] libmachine: (no-preload-908370) Waiting for machine to stop 2/120
	I1104 12:00:11.611657   82817 main.go:141] libmachine: (no-preload-908370) Waiting for machine to stop 3/120
	I1104 12:00:12.613098   82817 main.go:141] libmachine: (no-preload-908370) Waiting for machine to stop 4/120
	I1104 12:00:13.614464   82817 main.go:141] libmachine: (no-preload-908370) Waiting for machine to stop 5/120
	I1104 12:00:14.615648   82817 main.go:141] libmachine: (no-preload-908370) Waiting for machine to stop 6/120
	I1104 12:00:15.617914   82817 main.go:141] libmachine: (no-preload-908370) Waiting for machine to stop 7/120
	I1104 12:00:16.619892   82817 main.go:141] libmachine: (no-preload-908370) Waiting for machine to stop 8/120
	I1104 12:00:17.621497   82817 main.go:141] libmachine: (no-preload-908370) Waiting for machine to stop 9/120
	I1104 12:00:18.623592   82817 main.go:141] libmachine: (no-preload-908370) Waiting for machine to stop 10/120
	I1104 12:00:19.625516   82817 main.go:141] libmachine: (no-preload-908370) Waiting for machine to stop 11/120
	I1104 12:00:20.626919   82817 main.go:141] libmachine: (no-preload-908370) Waiting for machine to stop 12/120
	I1104 12:00:21.629202   82817 main.go:141] libmachine: (no-preload-908370) Waiting for machine to stop 13/120
	I1104 12:00:22.630627   82817 main.go:141] libmachine: (no-preload-908370) Waiting for machine to stop 14/120
	I1104 12:00:23.632467   82817 main.go:141] libmachine: (no-preload-908370) Waiting for machine to stop 15/120
	I1104 12:00:24.633913   82817 main.go:141] libmachine: (no-preload-908370) Waiting for machine to stop 16/120
	I1104 12:00:25.635295   82817 main.go:141] libmachine: (no-preload-908370) Waiting for machine to stop 17/120
	I1104 12:00:26.636898   82817 main.go:141] libmachine: (no-preload-908370) Waiting for machine to stop 18/120
	I1104 12:00:27.638369   82817 main.go:141] libmachine: (no-preload-908370) Waiting for machine to stop 19/120
	I1104 12:00:28.640684   82817 main.go:141] libmachine: (no-preload-908370) Waiting for machine to stop 20/120
	I1104 12:00:29.641811   82817 main.go:141] libmachine: (no-preload-908370) Waiting for machine to stop 21/120
	I1104 12:00:30.643844   82817 main.go:141] libmachine: (no-preload-908370) Waiting for machine to stop 22/120
	I1104 12:00:31.645208   82817 main.go:141] libmachine: (no-preload-908370) Waiting for machine to stop 23/120
	I1104 12:00:32.646580   82817 main.go:141] libmachine: (no-preload-908370) Waiting for machine to stop 24/120
	I1104 12:00:33.648569   82817 main.go:141] libmachine: (no-preload-908370) Waiting for machine to stop 25/120
	I1104 12:00:34.650104   82817 main.go:141] libmachine: (no-preload-908370) Waiting for machine to stop 26/120
	I1104 12:00:35.651506   82817 main.go:141] libmachine: (no-preload-908370) Waiting for machine to stop 27/120
	I1104 12:00:36.652977   82817 main.go:141] libmachine: (no-preload-908370) Waiting for machine to stop 28/120
	I1104 12:00:37.654500   82817 main.go:141] libmachine: (no-preload-908370) Waiting for machine to stop 29/120
	I1104 12:00:38.656453   82817 main.go:141] libmachine: (no-preload-908370) Waiting for machine to stop 30/120
	I1104 12:00:39.658604   82817 main.go:141] libmachine: (no-preload-908370) Waiting for machine to stop 31/120
	I1104 12:00:40.659869   82817 main.go:141] libmachine: (no-preload-908370) Waiting for machine to stop 32/120
	I1104 12:00:41.661377   82817 main.go:141] libmachine: (no-preload-908370) Waiting for machine to stop 33/120
	I1104 12:00:42.662713   82817 main.go:141] libmachine: (no-preload-908370) Waiting for machine to stop 34/120
	I1104 12:00:43.664389   82817 main.go:141] libmachine: (no-preload-908370) Waiting for machine to stop 35/120
	I1104 12:00:44.665781   82817 main.go:141] libmachine: (no-preload-908370) Waiting for machine to stop 36/120
	I1104 12:00:45.667386   82817 main.go:141] libmachine: (no-preload-908370) Waiting for machine to stop 37/120
	I1104 12:00:46.668667   82817 main.go:141] libmachine: (no-preload-908370) Waiting for machine to stop 38/120
	I1104 12:00:48.043542   82817 main.go:141] libmachine: (no-preload-908370) Waiting for machine to stop 39/120
	I1104 12:00:49.045509   82817 main.go:141] libmachine: (no-preload-908370) Waiting for machine to stop 40/120
	I1104 12:00:50.047805   82817 main.go:141] libmachine: (no-preload-908370) Waiting for machine to stop 41/120
	I1104 12:00:51.049811   82817 main.go:141] libmachine: (no-preload-908370) Waiting for machine to stop 42/120
	I1104 12:00:52.051469   82817 main.go:141] libmachine: (no-preload-908370) Waiting for machine to stop 43/120
	I1104 12:00:53.052742   82817 main.go:141] libmachine: (no-preload-908370) Waiting for machine to stop 44/120
	I1104 12:00:54.054340   82817 main.go:141] libmachine: (no-preload-908370) Waiting for machine to stop 45/120
	I1104 12:00:55.055623   82817 main.go:141] libmachine: (no-preload-908370) Waiting for machine to stop 46/120
	I1104 12:00:56.056861   82817 main.go:141] libmachine: (no-preload-908370) Waiting for machine to stop 47/120
	I1104 12:00:57.058226   82817 main.go:141] libmachine: (no-preload-908370) Waiting for machine to stop 48/120
	I1104 12:00:58.060138   82817 main.go:141] libmachine: (no-preload-908370) Waiting for machine to stop 49/120
	I1104 12:00:59.061504   82817 main.go:141] libmachine: (no-preload-908370) Waiting for machine to stop 50/120
	I1104 12:01:00.062808   82817 main.go:141] libmachine: (no-preload-908370) Waiting for machine to stop 51/120
	I1104 12:01:01.064173   82817 main.go:141] libmachine: (no-preload-908370) Waiting for machine to stop 52/120
	I1104 12:01:02.065940   82817 main.go:141] libmachine: (no-preload-908370) Waiting for machine to stop 53/120
	I1104 12:01:03.067550   82817 main.go:141] libmachine: (no-preload-908370) Waiting for machine to stop 54/120
	I1104 12:01:04.069159   82817 main.go:141] libmachine: (no-preload-908370) Waiting for machine to stop 55/120
	I1104 12:01:05.070823   82817 main.go:141] libmachine: (no-preload-908370) Waiting for machine to stop 56/120
	I1104 12:01:06.072373   82817 main.go:141] libmachine: (no-preload-908370) Waiting for machine to stop 57/120
	I1104 12:01:07.073884   82817 main.go:141] libmachine: (no-preload-908370) Waiting for machine to stop 58/120
	I1104 12:01:08.076998   82817 main.go:141] libmachine: (no-preload-908370) Waiting for machine to stop 59/120
	I1104 12:01:09.078721   82817 main.go:141] libmachine: (no-preload-908370) Waiting for machine to stop 60/120
	I1104 12:01:10.080311   82817 main.go:141] libmachine: (no-preload-908370) Waiting for machine to stop 61/120
	I1104 12:01:11.082801   82817 main.go:141] libmachine: (no-preload-908370) Waiting for machine to stop 62/120
	I1104 12:01:12.084065   82817 main.go:141] libmachine: (no-preload-908370) Waiting for machine to stop 63/120
	I1104 12:01:13.085508   82817 main.go:141] libmachine: (no-preload-908370) Waiting for machine to stop 64/120
	I1104 12:01:14.088181   82817 main.go:141] libmachine: (no-preload-908370) Waiting for machine to stop 65/120
	I1104 12:01:15.089905   82817 main.go:141] libmachine: (no-preload-908370) Waiting for machine to stop 66/120
	I1104 12:01:16.091914   82817 main.go:141] libmachine: (no-preload-908370) Waiting for machine to stop 67/120
	I1104 12:01:17.093484   82817 main.go:141] libmachine: (no-preload-908370) Waiting for machine to stop 68/120
	I1104 12:01:18.095348   82817 main.go:141] libmachine: (no-preload-908370) Waiting for machine to stop 69/120
	I1104 12:01:19.096848   82817 main.go:141] libmachine: (no-preload-908370) Waiting for machine to stop 70/120
	I1104 12:01:20.098415   82817 main.go:141] libmachine: (no-preload-908370) Waiting for machine to stop 71/120
	I1104 12:01:21.100083   82817 main.go:141] libmachine: (no-preload-908370) Waiting for machine to stop 72/120
	I1104 12:01:22.101570   82817 main.go:141] libmachine: (no-preload-908370) Waiting for machine to stop 73/120
	I1104 12:01:23.103949   82817 main.go:141] libmachine: (no-preload-908370) Waiting for machine to stop 74/120
	I1104 12:01:24.105703   82817 main.go:141] libmachine: (no-preload-908370) Waiting for machine to stop 75/120
	I1104 12:01:25.107412   82817 main.go:141] libmachine: (no-preload-908370) Waiting for machine to stop 76/120
	I1104 12:01:26.109081   82817 main.go:141] libmachine: (no-preload-908370) Waiting for machine to stop 77/120
	I1104 12:01:27.110617   82817 main.go:141] libmachine: (no-preload-908370) Waiting for machine to stop 78/120
	I1104 12:01:28.112645   82817 main.go:141] libmachine: (no-preload-908370) Waiting for machine to stop 79/120
	I1104 12:01:29.114701   82817 main.go:141] libmachine: (no-preload-908370) Waiting for machine to stop 80/120
	I1104 12:01:30.116346   82817 main.go:141] libmachine: (no-preload-908370) Waiting for machine to stop 81/120
	I1104 12:01:31.117812   82817 main.go:141] libmachine: (no-preload-908370) Waiting for machine to stop 82/120
	I1104 12:01:32.119778   82817 main.go:141] libmachine: (no-preload-908370) Waiting for machine to stop 83/120
	I1104 12:01:33.121651   82817 main.go:141] libmachine: (no-preload-908370) Waiting for machine to stop 84/120
	I1104 12:01:34.123247   82817 main.go:141] libmachine: (no-preload-908370) Waiting for machine to stop 85/120
	I1104 12:01:35.124797   82817 main.go:141] libmachine: (no-preload-908370) Waiting for machine to stop 86/120
	I1104 12:01:36.126281   82817 main.go:141] libmachine: (no-preload-908370) Waiting for machine to stop 87/120
	I1104 12:01:37.127935   82817 main.go:141] libmachine: (no-preload-908370) Waiting for machine to stop 88/120
	I1104 12:01:38.129580   82817 main.go:141] libmachine: (no-preload-908370) Waiting for machine to stop 89/120
	I1104 12:01:39.132032   82817 main.go:141] libmachine: (no-preload-908370) Waiting for machine to stop 90/120
	I1104 12:01:40.133736   82817 main.go:141] libmachine: (no-preload-908370) Waiting for machine to stop 91/120
	I1104 12:01:41.135208   82817 main.go:141] libmachine: (no-preload-908370) Waiting for machine to stop 92/120
	I1104 12:01:42.136677   82817 main.go:141] libmachine: (no-preload-908370) Waiting for machine to stop 93/120
	I1104 12:01:43.138117   82817 main.go:141] libmachine: (no-preload-908370) Waiting for machine to stop 94/120
	I1104 12:01:44.139835   82817 main.go:141] libmachine: (no-preload-908370) Waiting for machine to stop 95/120
	I1104 12:01:45.142096   82817 main.go:141] libmachine: (no-preload-908370) Waiting for machine to stop 96/120
	I1104 12:01:46.143413   82817 main.go:141] libmachine: (no-preload-908370) Waiting for machine to stop 97/120
	I1104 12:01:47.144769   82817 main.go:141] libmachine: (no-preload-908370) Waiting for machine to stop 98/120
	I1104 12:01:48.146382   82817 main.go:141] libmachine: (no-preload-908370) Waiting for machine to stop 99/120
	I1104 12:01:49.148661   82817 main.go:141] libmachine: (no-preload-908370) Waiting for machine to stop 100/120
	I1104 12:01:50.150021   82817 main.go:141] libmachine: (no-preload-908370) Waiting for machine to stop 101/120
	I1104 12:01:51.151379   82817 main.go:141] libmachine: (no-preload-908370) Waiting for machine to stop 102/120
	I1104 12:01:52.152869   82817 main.go:141] libmachine: (no-preload-908370) Waiting for machine to stop 103/120
	I1104 12:01:53.154263   82817 main.go:141] libmachine: (no-preload-908370) Waiting for machine to stop 104/120
	I1104 12:01:54.156362   82817 main.go:141] libmachine: (no-preload-908370) Waiting for machine to stop 105/120
	I1104 12:01:55.157840   82817 main.go:141] libmachine: (no-preload-908370) Waiting for machine to stop 106/120
	I1104 12:01:56.159200   82817 main.go:141] libmachine: (no-preload-908370) Waiting for machine to stop 107/120
	I1104 12:01:57.160702   82817 main.go:141] libmachine: (no-preload-908370) Waiting for machine to stop 108/120
	I1104 12:01:58.162190   82817 main.go:141] libmachine: (no-preload-908370) Waiting for machine to stop 109/120
	I1104 12:01:59.164258   82817 main.go:141] libmachine: (no-preload-908370) Waiting for machine to stop 110/120
	I1104 12:02:00.166495   82817 main.go:141] libmachine: (no-preload-908370) Waiting for machine to stop 111/120
	I1104 12:02:01.168079   82817 main.go:141] libmachine: (no-preload-908370) Waiting for machine to stop 112/120
	I1104 12:02:02.169665   82817 main.go:141] libmachine: (no-preload-908370) Waiting for machine to stop 113/120
	I1104 12:02:03.171824   82817 main.go:141] libmachine: (no-preload-908370) Waiting for machine to stop 114/120
	I1104 12:02:04.173798   82817 main.go:141] libmachine: (no-preload-908370) Waiting for machine to stop 115/120
	I1104 12:02:05.175537   82817 main.go:141] libmachine: (no-preload-908370) Waiting for machine to stop 116/120
	I1104 12:02:06.176863   82817 main.go:141] libmachine: (no-preload-908370) Waiting for machine to stop 117/120
	I1104 12:02:07.178682   82817 main.go:141] libmachine: (no-preload-908370) Waiting for machine to stop 118/120
	I1104 12:02:08.180231   82817 main.go:141] libmachine: (no-preload-908370) Waiting for machine to stop 119/120
	I1104 12:02:09.181449   82817 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1104 12:02:09.181510   82817 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1104 12:02:09.183408   82817 out.go:201] 
	W1104 12:02:09.184802   82817 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1104 12:02:09.184817   82817 out.go:270] * 
	* 
	W1104 12:02:09.187517   82817 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1104 12:02:09.189219   82817 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p no-preload-908370 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-908370 -n no-preload-908370
E1104 12:02:12.766268   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/kindnet-528108/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-908370 -n no-preload-908370: exit status 3 (18.571424301s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1104 12:02:27.761570   85213 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.91:22: connect: no route to host
	E1104 12:02:27.761594   85213 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.61.91:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-908370" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (139.49s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (139.2s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-325116 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-325116 --alsologtostderr -v=3: exit status 82 (2m0.631776647s)

                                                
                                                
-- stdout --
	* Stopping node "embed-certs-325116"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1104 12:00:19.621788   82976 out.go:345] Setting OutFile to fd 1 ...
	I1104 12:00:19.622368   82976 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1104 12:00:19.622387   82976 out.go:358] Setting ErrFile to fd 2...
	I1104 12:00:19.622395   82976 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1104 12:00:19.622827   82976 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19906-19898/.minikube/bin
	I1104 12:00:19.623298   82976 out.go:352] Setting JSON to false
	I1104 12:00:19.623451   82976 mustload.go:65] Loading cluster: embed-certs-325116
	I1104 12:00:19.624119   82976 config.go:182] Loaded profile config "embed-certs-325116": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1104 12:00:19.624223   82976 profile.go:143] Saving config to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/embed-certs-325116/config.json ...
	I1104 12:00:19.624440   82976 mustload.go:65] Loading cluster: embed-certs-325116
	I1104 12:00:19.624568   82976 config.go:182] Loaded profile config "embed-certs-325116": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1104 12:00:19.624603   82976 stop.go:39] StopHost: embed-certs-325116
	I1104 12:00:19.625123   82976 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:00:19.625181   82976 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:00:19.640468   82976 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45947
	I1104 12:00:19.641016   82976 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:00:19.641586   82976 main.go:141] libmachine: Using API Version  1
	I1104 12:00:19.641607   82976 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:00:19.642077   82976 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:00:19.644733   82976 out.go:177] * Stopping node "embed-certs-325116"  ...
	I1104 12:00:19.646378   82976 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1104 12:00:19.646413   82976 main.go:141] libmachine: (embed-certs-325116) Calling .DriverName
	I1104 12:00:19.646715   82976 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1104 12:00:19.646745   82976 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHHostname
	I1104 12:00:19.649442   82976 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:00:19.649804   82976 main.go:141] libmachine: (embed-certs-325116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:ab:49", ip: ""} in network mk-embed-certs-325116: {Iface:virbr1 ExpiryTime:2024-11-04 12:59:28 +0000 UTC Type:0 Mac:52:54:00:bd:ab:49 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:embed-certs-325116 Clientid:01:52:54:00:bd:ab:49}
	I1104 12:00:19.649836   82976 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined IP address 192.168.39.47 and MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:00:19.649977   82976 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHPort
	I1104 12:00:19.650143   82976 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHKeyPath
	I1104 12:00:19.650309   82976 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHUsername
	I1104 12:00:19.650437   82976 sshutil.go:53] new ssh client: &{IP:192.168.39.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/embed-certs-325116/id_rsa Username:docker}
	I1104 12:00:19.746142   82976 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1104 12:00:19.782534   82976 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1104 12:00:19.841897   82976 main.go:141] libmachine: Stopping "embed-certs-325116"...
	I1104 12:00:19.841924   82976 main.go:141] libmachine: (embed-certs-325116) Calling .GetState
	I1104 12:00:19.843726   82976 main.go:141] libmachine: (embed-certs-325116) Calling .Stop
	I1104 12:00:19.847021   82976 main.go:141] libmachine: (embed-certs-325116) Waiting for machine to stop 0/120
	I1104 12:00:20.848619   82976 main.go:141] libmachine: (embed-certs-325116) Waiting for machine to stop 1/120
	I1104 12:00:21.850143   82976 main.go:141] libmachine: (embed-certs-325116) Waiting for machine to stop 2/120
	I1104 12:00:22.851645   82976 main.go:141] libmachine: (embed-certs-325116) Waiting for machine to stop 3/120
	I1104 12:00:23.853002   82976 main.go:141] libmachine: (embed-certs-325116) Waiting for machine to stop 4/120
	I1104 12:00:24.854624   82976 main.go:141] libmachine: (embed-certs-325116) Waiting for machine to stop 5/120
	I1104 12:00:25.856037   82976 main.go:141] libmachine: (embed-certs-325116) Waiting for machine to stop 6/120
	I1104 12:00:26.857766   82976 main.go:141] libmachine: (embed-certs-325116) Waiting for machine to stop 7/120
	I1104 12:00:27.859454   82976 main.go:141] libmachine: (embed-certs-325116) Waiting for machine to stop 8/120
	I1104 12:00:28.861123   82976 main.go:141] libmachine: (embed-certs-325116) Waiting for machine to stop 9/120
	I1104 12:00:29.863511   82976 main.go:141] libmachine: (embed-certs-325116) Waiting for machine to stop 10/120
	I1104 12:00:30.864955   82976 main.go:141] libmachine: (embed-certs-325116) Waiting for machine to stop 11/120
	I1104 12:00:31.866405   82976 main.go:141] libmachine: (embed-certs-325116) Waiting for machine to stop 12/120
	I1104 12:00:32.867960   82976 main.go:141] libmachine: (embed-certs-325116) Waiting for machine to stop 13/120
	I1104 12:00:33.869643   82976 main.go:141] libmachine: (embed-certs-325116) Waiting for machine to stop 14/120
	I1104 12:00:34.871662   82976 main.go:141] libmachine: (embed-certs-325116) Waiting for machine to stop 15/120
	I1104 12:00:35.873277   82976 main.go:141] libmachine: (embed-certs-325116) Waiting for machine to stop 16/120
	I1104 12:00:36.875028   82976 main.go:141] libmachine: (embed-certs-325116) Waiting for machine to stop 17/120
	I1104 12:00:37.876366   82976 main.go:141] libmachine: (embed-certs-325116) Waiting for machine to stop 18/120
	I1104 12:00:38.877672   82976 main.go:141] libmachine: (embed-certs-325116) Waiting for machine to stop 19/120
	I1104 12:00:39.879592   82976 main.go:141] libmachine: (embed-certs-325116) Waiting for machine to stop 20/120
	I1104 12:00:40.880936   82976 main.go:141] libmachine: (embed-certs-325116) Waiting for machine to stop 21/120
	I1104 12:00:41.882268   82976 main.go:141] libmachine: (embed-certs-325116) Waiting for machine to stop 22/120
	I1104 12:00:42.883660   82976 main.go:141] libmachine: (embed-certs-325116) Waiting for machine to stop 23/120
	I1104 12:00:43.885774   82976 main.go:141] libmachine: (embed-certs-325116) Waiting for machine to stop 24/120
	I1104 12:00:44.887564   82976 main.go:141] libmachine: (embed-certs-325116) Waiting for machine to stop 25/120
	I1104 12:00:45.888737   82976 main.go:141] libmachine: (embed-certs-325116) Waiting for machine to stop 26/120
	I1104 12:00:46.890108   82976 main.go:141] libmachine: (embed-certs-325116) Waiting for machine to stop 27/120
	I1104 12:00:48.043697   82976 main.go:141] libmachine: (embed-certs-325116) Waiting for machine to stop 28/120
	I1104 12:00:49.045201   82976 main.go:141] libmachine: (embed-certs-325116) Waiting for machine to stop 29/120
	I1104 12:00:50.047342   82976 main.go:141] libmachine: (embed-certs-325116) Waiting for machine to stop 30/120
	I1104 12:00:51.048779   82976 main.go:141] libmachine: (embed-certs-325116) Waiting for machine to stop 31/120
	I1104 12:00:52.050234   82976 main.go:141] libmachine: (embed-certs-325116) Waiting for machine to stop 32/120
	I1104 12:00:53.051924   82976 main.go:141] libmachine: (embed-certs-325116) Waiting for machine to stop 33/120
	I1104 12:00:54.053205   82976 main.go:141] libmachine: (embed-certs-325116) Waiting for machine to stop 34/120
	I1104 12:00:55.054626   82976 main.go:141] libmachine: (embed-certs-325116) Waiting for machine to stop 35/120
	I1104 12:00:56.056118   82976 main.go:141] libmachine: (embed-certs-325116) Waiting for machine to stop 36/120
	I1104 12:00:57.057751   82976 main.go:141] libmachine: (embed-certs-325116) Waiting for machine to stop 37/120
	I1104 12:00:58.059093   82976 main.go:141] libmachine: (embed-certs-325116) Waiting for machine to stop 38/120
	I1104 12:00:59.060820   82976 main.go:141] libmachine: (embed-certs-325116) Waiting for machine to stop 39/120
	I1104 12:01:00.062958   82976 main.go:141] libmachine: (embed-certs-325116) Waiting for machine to stop 40/120
	I1104 12:01:01.064393   82976 main.go:141] libmachine: (embed-certs-325116) Waiting for machine to stop 41/120
	I1104 12:01:02.066026   82976 main.go:141] libmachine: (embed-certs-325116) Waiting for machine to stop 42/120
	I1104 12:01:03.067767   82976 main.go:141] libmachine: (embed-certs-325116) Waiting for machine to stop 43/120
	I1104 12:01:04.069022   82976 main.go:141] libmachine: (embed-certs-325116) Waiting for machine to stop 44/120
	I1104 12:01:05.071206   82976 main.go:141] libmachine: (embed-certs-325116) Waiting for machine to stop 45/120
	I1104 12:01:06.072726   82976 main.go:141] libmachine: (embed-certs-325116) Waiting for machine to stop 46/120
	I1104 12:01:07.074181   82976 main.go:141] libmachine: (embed-certs-325116) Waiting for machine to stop 47/120
	I1104 12:01:08.077172   82976 main.go:141] libmachine: (embed-certs-325116) Waiting for machine to stop 48/120
	I1104 12:01:09.078809   82976 main.go:141] libmachine: (embed-certs-325116) Waiting for machine to stop 49/120
	I1104 12:01:10.080811   82976 main.go:141] libmachine: (embed-certs-325116) Waiting for machine to stop 50/120
	I1104 12:01:11.082621   82976 main.go:141] libmachine: (embed-certs-325116) Waiting for machine to stop 51/120
	I1104 12:01:12.084291   82976 main.go:141] libmachine: (embed-certs-325116) Waiting for machine to stop 52/120
	I1104 12:01:13.085800   82976 main.go:141] libmachine: (embed-certs-325116) Waiting for machine to stop 53/120
	I1104 12:01:14.088428   82976 main.go:141] libmachine: (embed-certs-325116) Waiting for machine to stop 54/120
	I1104 12:01:15.090195   82976 main.go:141] libmachine: (embed-certs-325116) Waiting for machine to stop 55/120
	I1104 12:01:16.092139   82976 main.go:141] libmachine: (embed-certs-325116) Waiting for machine to stop 56/120
	I1104 12:01:17.093521   82976 main.go:141] libmachine: (embed-certs-325116) Waiting for machine to stop 57/120
	I1104 12:01:18.095723   82976 main.go:141] libmachine: (embed-certs-325116) Waiting for machine to stop 58/120
	I1104 12:01:19.097115   82976 main.go:141] libmachine: (embed-certs-325116) Waiting for machine to stop 59/120
	I1104 12:01:20.098998   82976 main.go:141] libmachine: (embed-certs-325116) Waiting for machine to stop 60/120
	I1104 12:01:21.100463   82976 main.go:141] libmachine: (embed-certs-325116) Waiting for machine to stop 61/120
	I1104 12:01:22.102417   82976 main.go:141] libmachine: (embed-certs-325116) Waiting for machine to stop 62/120
	I1104 12:01:23.103776   82976 main.go:141] libmachine: (embed-certs-325116) Waiting for machine to stop 63/120
	I1104 12:01:24.105133   82976 main.go:141] libmachine: (embed-certs-325116) Waiting for machine to stop 64/120
	I1104 12:01:25.107575   82976 main.go:141] libmachine: (embed-certs-325116) Waiting for machine to stop 65/120
	I1104 12:01:26.109087   82976 main.go:141] libmachine: (embed-certs-325116) Waiting for machine to stop 66/120
	I1104 12:01:27.110728   82976 main.go:141] libmachine: (embed-certs-325116) Waiting for machine to stop 67/120
	I1104 12:01:28.112469   82976 main.go:141] libmachine: (embed-certs-325116) Waiting for machine to stop 68/120
	I1104 12:01:29.114241   82976 main.go:141] libmachine: (embed-certs-325116) Waiting for machine to stop 69/120
	I1104 12:01:30.115963   82976 main.go:141] libmachine: (embed-certs-325116) Waiting for machine to stop 70/120
	I1104 12:01:31.117596   82976 main.go:141] libmachine: (embed-certs-325116) Waiting for machine to stop 71/120
	I1104 12:01:32.119341   82976 main.go:141] libmachine: (embed-certs-325116) Waiting for machine to stop 72/120
	I1104 12:01:33.121029   82976 main.go:141] libmachine: (embed-certs-325116) Waiting for machine to stop 73/120
	I1104 12:01:34.122728   82976 main.go:141] libmachine: (embed-certs-325116) Waiting for machine to stop 74/120
	I1104 12:01:35.124704   82976 main.go:141] libmachine: (embed-certs-325116) Waiting for machine to stop 75/120
	I1104 12:01:36.126291   82976 main.go:141] libmachine: (embed-certs-325116) Waiting for machine to stop 76/120
	I1104 12:01:37.128160   82976 main.go:141] libmachine: (embed-certs-325116) Waiting for machine to stop 77/120
	I1104 12:01:38.129720   82976 main.go:141] libmachine: (embed-certs-325116) Waiting for machine to stop 78/120
	I1104 12:01:39.131932   82976 main.go:141] libmachine: (embed-certs-325116) Waiting for machine to stop 79/120
	I1104 12:01:40.134225   82976 main.go:141] libmachine: (embed-certs-325116) Waiting for machine to stop 80/120
	I1104 12:01:41.135627   82976 main.go:141] libmachine: (embed-certs-325116) Waiting for machine to stop 81/120
	I1104 12:01:42.136839   82976 main.go:141] libmachine: (embed-certs-325116) Waiting for machine to stop 82/120
	I1104 12:01:43.138503   82976 main.go:141] libmachine: (embed-certs-325116) Waiting for machine to stop 83/120
	I1104 12:01:44.139957   82976 main.go:141] libmachine: (embed-certs-325116) Waiting for machine to stop 84/120
	I1104 12:01:45.141795   82976 main.go:141] libmachine: (embed-certs-325116) Waiting for machine to stop 85/120
	I1104 12:01:46.143264   82976 main.go:141] libmachine: (embed-certs-325116) Waiting for machine to stop 86/120
	I1104 12:01:47.144661   82976 main.go:141] libmachine: (embed-certs-325116) Waiting for machine to stop 87/120
	I1104 12:01:48.146108   82976 main.go:141] libmachine: (embed-certs-325116) Waiting for machine to stop 88/120
	I1104 12:01:49.148068   82976 main.go:141] libmachine: (embed-certs-325116) Waiting for machine to stop 89/120
	I1104 12:01:50.150157   82976 main.go:141] libmachine: (embed-certs-325116) Waiting for machine to stop 90/120
	I1104 12:01:51.151376   82976 main.go:141] libmachine: (embed-certs-325116) Waiting for machine to stop 91/120
	I1104 12:01:52.153012   82976 main.go:141] libmachine: (embed-certs-325116) Waiting for machine to stop 92/120
	I1104 12:01:53.154533   82976 main.go:141] libmachine: (embed-certs-325116) Waiting for machine to stop 93/120
	I1104 12:01:54.156129   82976 main.go:141] libmachine: (embed-certs-325116) Waiting for machine to stop 94/120
	I1104 12:01:55.157840   82976 main.go:141] libmachine: (embed-certs-325116) Waiting for machine to stop 95/120
	I1104 12:01:56.159209   82976 main.go:141] libmachine: (embed-certs-325116) Waiting for machine to stop 96/120
	I1104 12:01:57.160880   82976 main.go:141] libmachine: (embed-certs-325116) Waiting for machine to stop 97/120
	I1104 12:01:58.162315   82976 main.go:141] libmachine: (embed-certs-325116) Waiting for machine to stop 98/120
	I1104 12:01:59.163913   82976 main.go:141] libmachine: (embed-certs-325116) Waiting for machine to stop 99/120
	I1104 12:02:00.166490   82976 main.go:141] libmachine: (embed-certs-325116) Waiting for machine to stop 100/120
	I1104 12:02:01.167959   82976 main.go:141] libmachine: (embed-certs-325116) Waiting for machine to stop 101/120
	I1104 12:02:02.169549   82976 main.go:141] libmachine: (embed-certs-325116) Waiting for machine to stop 102/120
	I1104 12:02:03.171631   82976 main.go:141] libmachine: (embed-certs-325116) Waiting for machine to stop 103/120
	I1104 12:02:04.173126   82976 main.go:141] libmachine: (embed-certs-325116) Waiting for machine to stop 104/120
	I1104 12:02:05.174852   82976 main.go:141] libmachine: (embed-certs-325116) Waiting for machine to stop 105/120
	I1104 12:02:06.176545   82976 main.go:141] libmachine: (embed-certs-325116) Waiting for machine to stop 106/120
	I1104 12:02:07.177967   82976 main.go:141] libmachine: (embed-certs-325116) Waiting for machine to stop 107/120
	I1104 12:02:08.179745   82976 main.go:141] libmachine: (embed-certs-325116) Waiting for machine to stop 108/120
	I1104 12:02:09.181247   82976 main.go:141] libmachine: (embed-certs-325116) Waiting for machine to stop 109/120
	I1104 12:02:10.183567   82976 main.go:141] libmachine: (embed-certs-325116) Waiting for machine to stop 110/120
	I1104 12:02:11.184949   82976 main.go:141] libmachine: (embed-certs-325116) Waiting for machine to stop 111/120
	I1104 12:02:12.186314   82976 main.go:141] libmachine: (embed-certs-325116) Waiting for machine to stop 112/120
	I1104 12:02:13.187660   82976 main.go:141] libmachine: (embed-certs-325116) Waiting for machine to stop 113/120
	I1104 12:02:14.189068   82976 main.go:141] libmachine: (embed-certs-325116) Waiting for machine to stop 114/120
	I1104 12:02:15.191035   82976 main.go:141] libmachine: (embed-certs-325116) Waiting for machine to stop 115/120
	I1104 12:02:16.192524   82976 main.go:141] libmachine: (embed-certs-325116) Waiting for machine to stop 116/120
	I1104 12:02:17.193997   82976 main.go:141] libmachine: (embed-certs-325116) Waiting for machine to stop 117/120
	I1104 12:02:18.195451   82976 main.go:141] libmachine: (embed-certs-325116) Waiting for machine to stop 118/120
	I1104 12:02:19.196790   82976 main.go:141] libmachine: (embed-certs-325116) Waiting for machine to stop 119/120
	I1104 12:02:20.197661   82976 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1104 12:02:20.197724   82976 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1104 12:02:20.199711   82976 out.go:201] 
	W1104 12:02:20.201079   82976 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1104 12:02:20.201099   82976 out.go:270] * 
	* 
	W1104 12:02:20.203633   82976 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1104 12:02:20.206358   82976 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p embed-certs-325116 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-325116 -n embed-certs-325116
E1104 12:02:24.748070   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/enable-default-cni-528108/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-325116 -n embed-certs-325116: exit status 3 (18.562730751s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1104 12:02:38.769574   85293 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.47:22: connect: no route to host
	E1104 12:02:38.769595   85293 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.39.47:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-325116" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (139.20s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (139.04s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-036892 --alsologtostderr -v=3
E1104 12:01:48.901762   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/enable-default-cni-528108/client.crt: no such file or directory" logger="UnhandledError"
E1104 12:01:54.024051   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/enable-default-cni-528108/client.crt: no such file or directory" logger="UnhandledError"
E1104 12:02:04.266148   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/enable-default-cni-528108/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-diff-port-036892 --alsologtostderr -v=3: exit status 82 (2m0.458826661s)

                                                
                                                
-- stdout --
	* Stopping node "default-k8s-diff-port-036892"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1104 12:01:48.863374   85129 out.go:345] Setting OutFile to fd 1 ...
	I1104 12:01:48.863490   85129 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1104 12:01:48.863500   85129 out.go:358] Setting ErrFile to fd 2...
	I1104 12:01:48.863504   85129 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1104 12:01:48.863681   85129 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19906-19898/.minikube/bin
	I1104 12:01:48.863885   85129 out.go:352] Setting JSON to false
	I1104 12:01:48.863958   85129 mustload.go:65] Loading cluster: default-k8s-diff-port-036892
	I1104 12:01:48.864318   85129 config.go:182] Loaded profile config "default-k8s-diff-port-036892": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1104 12:01:48.864380   85129 profile.go:143] Saving config to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/default-k8s-diff-port-036892/config.json ...
	I1104 12:01:48.864541   85129 mustload.go:65] Loading cluster: default-k8s-diff-port-036892
	I1104 12:01:48.864640   85129 config.go:182] Loaded profile config "default-k8s-diff-port-036892": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1104 12:01:48.864664   85129 stop.go:39] StopHost: default-k8s-diff-port-036892
	I1104 12:01:48.865030   85129 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:01:48.865066   85129 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:01:48.880375   85129 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44451
	I1104 12:01:48.880885   85129 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:01:48.881467   85129 main.go:141] libmachine: Using API Version  1
	I1104 12:01:48.881503   85129 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:01:48.881834   85129 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:01:48.884180   85129 out.go:177] * Stopping node "default-k8s-diff-port-036892"  ...
	I1104 12:01:48.885472   85129 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1104 12:01:48.885508   85129 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .DriverName
	I1104 12:01:48.885710   85129 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1104 12:01:48.885735   85129 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHHostname
	I1104 12:01:48.888436   85129 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:01:48.888818   85129 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:02:d6", ip: ""} in network mk-default-k8s-diff-port-036892: {Iface:virbr4 ExpiryTime:2024-11-04 13:01:02 +0000 UTC Type:0 Mac:52:54:00:da:02:d6 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:default-k8s-diff-port-036892 Clientid:01:52:54:00:da:02:d6}
	I1104 12:01:48.888843   85129 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined IP address 192.168.72.130 and MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:01:48.888973   85129 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHPort
	I1104 12:01:48.889148   85129 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHKeyPath
	I1104 12:01:48.889337   85129 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHUsername
	I1104 12:01:48.889480   85129 sshutil.go:53] new ssh client: &{IP:192.168.72.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/default-k8s-diff-port-036892/id_rsa Username:docker}
	I1104 12:01:48.977534   85129 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1104 12:01:49.032327   85129 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1104 12:01:49.067538   85129 main.go:141] libmachine: Stopping "default-k8s-diff-port-036892"...
	I1104 12:01:49.067566   85129 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetState
	I1104 12:01:49.069212   85129 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .Stop
	I1104 12:01:49.072882   85129 main.go:141] libmachine: (default-k8s-diff-port-036892) Waiting for machine to stop 0/120
	I1104 12:01:50.074180   85129 main.go:141] libmachine: (default-k8s-diff-port-036892) Waiting for machine to stop 1/120
	I1104 12:01:51.075635   85129 main.go:141] libmachine: (default-k8s-diff-port-036892) Waiting for machine to stop 2/120
	I1104 12:01:52.077287   85129 main.go:141] libmachine: (default-k8s-diff-port-036892) Waiting for machine to stop 3/120
	I1104 12:01:53.078808   85129 main.go:141] libmachine: (default-k8s-diff-port-036892) Waiting for machine to stop 4/120
	I1104 12:01:54.080793   85129 main.go:141] libmachine: (default-k8s-diff-port-036892) Waiting for machine to stop 5/120
	I1104 12:01:55.082326   85129 main.go:141] libmachine: (default-k8s-diff-port-036892) Waiting for machine to stop 6/120
	I1104 12:01:56.083938   85129 main.go:141] libmachine: (default-k8s-diff-port-036892) Waiting for machine to stop 7/120
	I1104 12:01:57.085459   85129 main.go:141] libmachine: (default-k8s-diff-port-036892) Waiting for machine to stop 8/120
	I1104 12:01:58.086941   85129 main.go:141] libmachine: (default-k8s-diff-port-036892) Waiting for machine to stop 9/120
	I1104 12:01:59.089301   85129 main.go:141] libmachine: (default-k8s-diff-port-036892) Waiting for machine to stop 10/120
	I1104 12:02:00.090815   85129 main.go:141] libmachine: (default-k8s-diff-port-036892) Waiting for machine to stop 11/120
	I1104 12:02:01.092231   85129 main.go:141] libmachine: (default-k8s-diff-port-036892) Waiting for machine to stop 12/120
	I1104 12:02:02.093777   85129 main.go:141] libmachine: (default-k8s-diff-port-036892) Waiting for machine to stop 13/120
	I1104 12:02:03.095195   85129 main.go:141] libmachine: (default-k8s-diff-port-036892) Waiting for machine to stop 14/120
	I1104 12:02:04.097346   85129 main.go:141] libmachine: (default-k8s-diff-port-036892) Waiting for machine to stop 15/120
	I1104 12:02:05.098736   85129 main.go:141] libmachine: (default-k8s-diff-port-036892) Waiting for machine to stop 16/120
	I1104 12:02:06.100233   85129 main.go:141] libmachine: (default-k8s-diff-port-036892) Waiting for machine to stop 17/120
	I1104 12:02:07.101767   85129 main.go:141] libmachine: (default-k8s-diff-port-036892) Waiting for machine to stop 18/120
	I1104 12:02:08.103731   85129 main.go:141] libmachine: (default-k8s-diff-port-036892) Waiting for machine to stop 19/120
	I1104 12:02:09.106060   85129 main.go:141] libmachine: (default-k8s-diff-port-036892) Waiting for machine to stop 20/120
	I1104 12:02:10.107583   85129 main.go:141] libmachine: (default-k8s-diff-port-036892) Waiting for machine to stop 21/120
	I1104 12:02:11.108807   85129 main.go:141] libmachine: (default-k8s-diff-port-036892) Waiting for machine to stop 22/120
	I1104 12:02:12.110240   85129 main.go:141] libmachine: (default-k8s-diff-port-036892) Waiting for machine to stop 23/120
	I1104 12:02:13.112425   85129 main.go:141] libmachine: (default-k8s-diff-port-036892) Waiting for machine to stop 24/120
	I1104 12:02:14.114379   85129 main.go:141] libmachine: (default-k8s-diff-port-036892) Waiting for machine to stop 25/120
	I1104 12:02:15.116029   85129 main.go:141] libmachine: (default-k8s-diff-port-036892) Waiting for machine to stop 26/120
	I1104 12:02:16.117553   85129 main.go:141] libmachine: (default-k8s-diff-port-036892) Waiting for machine to stop 27/120
	I1104 12:02:17.118986   85129 main.go:141] libmachine: (default-k8s-diff-port-036892) Waiting for machine to stop 28/120
	I1104 12:02:18.120508   85129 main.go:141] libmachine: (default-k8s-diff-port-036892) Waiting for machine to stop 29/120
	I1104 12:02:19.123062   85129 main.go:141] libmachine: (default-k8s-diff-port-036892) Waiting for machine to stop 30/120
	I1104 12:02:20.124410   85129 main.go:141] libmachine: (default-k8s-diff-port-036892) Waiting for machine to stop 31/120
	I1104 12:02:21.126126   85129 main.go:141] libmachine: (default-k8s-diff-port-036892) Waiting for machine to stop 32/120
	I1104 12:02:22.127562   85129 main.go:141] libmachine: (default-k8s-diff-port-036892) Waiting for machine to stop 33/120
	I1104 12:02:23.129239   85129 main.go:141] libmachine: (default-k8s-diff-port-036892) Waiting for machine to stop 34/120
	I1104 12:02:24.131208   85129 main.go:141] libmachine: (default-k8s-diff-port-036892) Waiting for machine to stop 35/120
	I1104 12:02:25.132597   85129 main.go:141] libmachine: (default-k8s-diff-port-036892) Waiting for machine to stop 36/120
	I1104 12:02:26.134051   85129 main.go:141] libmachine: (default-k8s-diff-port-036892) Waiting for machine to stop 37/120
	I1104 12:02:27.135471   85129 main.go:141] libmachine: (default-k8s-diff-port-036892) Waiting for machine to stop 38/120
	I1104 12:02:28.137059   85129 main.go:141] libmachine: (default-k8s-diff-port-036892) Waiting for machine to stop 39/120
	I1104 12:02:29.139465   85129 main.go:141] libmachine: (default-k8s-diff-port-036892) Waiting for machine to stop 40/120
	I1104 12:02:30.140712   85129 main.go:141] libmachine: (default-k8s-diff-port-036892) Waiting for machine to stop 41/120
	I1104 12:02:31.142664   85129 main.go:141] libmachine: (default-k8s-diff-port-036892) Waiting for machine to stop 42/120
	I1104 12:02:32.144093   85129 main.go:141] libmachine: (default-k8s-diff-port-036892) Waiting for machine to stop 43/120
	I1104 12:02:33.145653   85129 main.go:141] libmachine: (default-k8s-diff-port-036892) Waiting for machine to stop 44/120
	I1104 12:02:34.147770   85129 main.go:141] libmachine: (default-k8s-diff-port-036892) Waiting for machine to stop 45/120
	I1104 12:02:35.149143   85129 main.go:141] libmachine: (default-k8s-diff-port-036892) Waiting for machine to stop 46/120
	I1104 12:02:36.151438   85129 main.go:141] libmachine: (default-k8s-diff-port-036892) Waiting for machine to stop 47/120
	I1104 12:02:37.152812   85129 main.go:141] libmachine: (default-k8s-diff-port-036892) Waiting for machine to stop 48/120
	I1104 12:02:38.154306   85129 main.go:141] libmachine: (default-k8s-diff-port-036892) Waiting for machine to stop 49/120
	I1104 12:02:39.156482   85129 main.go:141] libmachine: (default-k8s-diff-port-036892) Waiting for machine to stop 50/120
	I1104 12:02:40.157805   85129 main.go:141] libmachine: (default-k8s-diff-port-036892) Waiting for machine to stop 51/120
	I1104 12:02:41.159565   85129 main.go:141] libmachine: (default-k8s-diff-port-036892) Waiting for machine to stop 52/120
	I1104 12:02:42.161021   85129 main.go:141] libmachine: (default-k8s-diff-port-036892) Waiting for machine to stop 53/120
	I1104 12:02:43.162479   85129 main.go:141] libmachine: (default-k8s-diff-port-036892) Waiting for machine to stop 54/120
	I1104 12:02:44.164711   85129 main.go:141] libmachine: (default-k8s-diff-port-036892) Waiting for machine to stop 55/120
	I1104 12:02:45.166096   85129 main.go:141] libmachine: (default-k8s-diff-port-036892) Waiting for machine to stop 56/120
	I1104 12:02:46.167531   85129 main.go:141] libmachine: (default-k8s-diff-port-036892) Waiting for machine to stop 57/120
	I1104 12:02:47.168695   85129 main.go:141] libmachine: (default-k8s-diff-port-036892) Waiting for machine to stop 58/120
	I1104 12:02:48.170236   85129 main.go:141] libmachine: (default-k8s-diff-port-036892) Waiting for machine to stop 59/120
	I1104 12:02:49.172174   85129 main.go:141] libmachine: (default-k8s-diff-port-036892) Waiting for machine to stop 60/120
	I1104 12:02:50.173715   85129 main.go:141] libmachine: (default-k8s-diff-port-036892) Waiting for machine to stop 61/120
	I1104 12:02:51.175795   85129 main.go:141] libmachine: (default-k8s-diff-port-036892) Waiting for machine to stop 62/120
	I1104 12:02:52.177117   85129 main.go:141] libmachine: (default-k8s-diff-port-036892) Waiting for machine to stop 63/120
	I1104 12:02:53.178448   85129 main.go:141] libmachine: (default-k8s-diff-port-036892) Waiting for machine to stop 64/120
	I1104 12:02:54.180403   85129 main.go:141] libmachine: (default-k8s-diff-port-036892) Waiting for machine to stop 65/120
	I1104 12:02:55.181734   85129 main.go:141] libmachine: (default-k8s-diff-port-036892) Waiting for machine to stop 66/120
	I1104 12:02:56.183075   85129 main.go:141] libmachine: (default-k8s-diff-port-036892) Waiting for machine to stop 67/120
	I1104 12:02:57.184482   85129 main.go:141] libmachine: (default-k8s-diff-port-036892) Waiting for machine to stop 68/120
	I1104 12:02:58.185563   85129 main.go:141] libmachine: (default-k8s-diff-port-036892) Waiting for machine to stop 69/120
	I1104 12:02:59.187780   85129 main.go:141] libmachine: (default-k8s-diff-port-036892) Waiting for machine to stop 70/120
	I1104 12:03:00.189205   85129 main.go:141] libmachine: (default-k8s-diff-port-036892) Waiting for machine to stop 71/120
	I1104 12:03:01.190591   85129 main.go:141] libmachine: (default-k8s-diff-port-036892) Waiting for machine to stop 72/120
	I1104 12:03:02.191955   85129 main.go:141] libmachine: (default-k8s-diff-port-036892) Waiting for machine to stop 73/120
	I1104 12:03:03.193747   85129 main.go:141] libmachine: (default-k8s-diff-port-036892) Waiting for machine to stop 74/120
	I1104 12:03:04.195917   85129 main.go:141] libmachine: (default-k8s-diff-port-036892) Waiting for machine to stop 75/120
	I1104 12:03:05.197384   85129 main.go:141] libmachine: (default-k8s-diff-port-036892) Waiting for machine to stop 76/120
	I1104 12:03:06.198561   85129 main.go:141] libmachine: (default-k8s-diff-port-036892) Waiting for machine to stop 77/120
	I1104 12:03:07.200394   85129 main.go:141] libmachine: (default-k8s-diff-port-036892) Waiting for machine to stop 78/120
	I1104 12:03:08.201879   85129 main.go:141] libmachine: (default-k8s-diff-port-036892) Waiting for machine to stop 79/120
	I1104 12:03:09.204194   85129 main.go:141] libmachine: (default-k8s-diff-port-036892) Waiting for machine to stop 80/120
	I1104 12:03:10.205692   85129 main.go:141] libmachine: (default-k8s-diff-port-036892) Waiting for machine to stop 81/120
	I1104 12:03:11.207512   85129 main.go:141] libmachine: (default-k8s-diff-port-036892) Waiting for machine to stop 82/120
	I1104 12:03:12.209168   85129 main.go:141] libmachine: (default-k8s-diff-port-036892) Waiting for machine to stop 83/120
	I1104 12:03:13.210640   85129 main.go:141] libmachine: (default-k8s-diff-port-036892) Waiting for machine to stop 84/120
	I1104 12:03:14.212813   85129 main.go:141] libmachine: (default-k8s-diff-port-036892) Waiting for machine to stop 85/120
	I1104 12:03:15.214269   85129 main.go:141] libmachine: (default-k8s-diff-port-036892) Waiting for machine to stop 86/120
	I1104 12:03:16.215639   85129 main.go:141] libmachine: (default-k8s-diff-port-036892) Waiting for machine to stop 87/120
	I1104 12:03:17.217124   85129 main.go:141] libmachine: (default-k8s-diff-port-036892) Waiting for machine to stop 88/120
	I1104 12:03:18.218605   85129 main.go:141] libmachine: (default-k8s-diff-port-036892) Waiting for machine to stop 89/120
	I1104 12:03:19.220821   85129 main.go:141] libmachine: (default-k8s-diff-port-036892) Waiting for machine to stop 90/120
	I1104 12:03:20.222369   85129 main.go:141] libmachine: (default-k8s-diff-port-036892) Waiting for machine to stop 91/120
	I1104 12:03:21.223598   85129 main.go:141] libmachine: (default-k8s-diff-port-036892) Waiting for machine to stop 92/120
	I1104 12:03:22.225176   85129 main.go:141] libmachine: (default-k8s-diff-port-036892) Waiting for machine to stop 93/120
	I1104 12:03:23.226766   85129 main.go:141] libmachine: (default-k8s-diff-port-036892) Waiting for machine to stop 94/120
	I1104 12:03:24.229106   85129 main.go:141] libmachine: (default-k8s-diff-port-036892) Waiting for machine to stop 95/120
	I1104 12:03:25.230705   85129 main.go:141] libmachine: (default-k8s-diff-port-036892) Waiting for machine to stop 96/120
	I1104 12:03:26.232440   85129 main.go:141] libmachine: (default-k8s-diff-port-036892) Waiting for machine to stop 97/120
	I1104 12:03:27.234110   85129 main.go:141] libmachine: (default-k8s-diff-port-036892) Waiting for machine to stop 98/120
	I1104 12:03:28.235553   85129 main.go:141] libmachine: (default-k8s-diff-port-036892) Waiting for machine to stop 99/120
	I1104 12:03:29.237853   85129 main.go:141] libmachine: (default-k8s-diff-port-036892) Waiting for machine to stop 100/120
	I1104 12:03:30.239537   85129 main.go:141] libmachine: (default-k8s-diff-port-036892) Waiting for machine to stop 101/120
	I1104 12:03:31.241060   85129 main.go:141] libmachine: (default-k8s-diff-port-036892) Waiting for machine to stop 102/120
	I1104 12:03:32.242651   85129 main.go:141] libmachine: (default-k8s-diff-port-036892) Waiting for machine to stop 103/120
	I1104 12:03:33.244125   85129 main.go:141] libmachine: (default-k8s-diff-port-036892) Waiting for machine to stop 104/120
	I1104 12:03:34.246286   85129 main.go:141] libmachine: (default-k8s-diff-port-036892) Waiting for machine to stop 105/120
	I1104 12:03:35.247521   85129 main.go:141] libmachine: (default-k8s-diff-port-036892) Waiting for machine to stop 106/120
	I1104 12:03:36.248937   85129 main.go:141] libmachine: (default-k8s-diff-port-036892) Waiting for machine to stop 107/120
	I1104 12:03:37.250387   85129 main.go:141] libmachine: (default-k8s-diff-port-036892) Waiting for machine to stop 108/120
	I1104 12:03:38.252092   85129 main.go:141] libmachine: (default-k8s-diff-port-036892) Waiting for machine to stop 109/120
	I1104 12:03:39.254390   85129 main.go:141] libmachine: (default-k8s-diff-port-036892) Waiting for machine to stop 110/120
	I1104 12:03:40.255820   85129 main.go:141] libmachine: (default-k8s-diff-port-036892) Waiting for machine to stop 111/120
	I1104 12:03:41.257424   85129 main.go:141] libmachine: (default-k8s-diff-port-036892) Waiting for machine to stop 112/120
	I1104 12:03:42.258777   85129 main.go:141] libmachine: (default-k8s-diff-port-036892) Waiting for machine to stop 113/120
	I1104 12:03:43.260315   85129 main.go:141] libmachine: (default-k8s-diff-port-036892) Waiting for machine to stop 114/120
	I1104 12:03:44.262243   85129 main.go:141] libmachine: (default-k8s-diff-port-036892) Waiting for machine to stop 115/120
	I1104 12:03:45.264057   85129 main.go:141] libmachine: (default-k8s-diff-port-036892) Waiting for machine to stop 116/120
	I1104 12:03:46.265590   85129 main.go:141] libmachine: (default-k8s-diff-port-036892) Waiting for machine to stop 117/120
	I1104 12:03:47.267372   85129 main.go:141] libmachine: (default-k8s-diff-port-036892) Waiting for machine to stop 118/120
	I1104 12:03:48.268872   85129 main.go:141] libmachine: (default-k8s-diff-port-036892) Waiting for machine to stop 119/120
	I1104 12:03:49.269987   85129 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1104 12:03:49.270049   85129 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1104 12:03:49.272162   85129 out.go:201] 
	W1104 12:03:49.273415   85129 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1104 12:03:49.273431   85129 out.go:270] * 
	* 
	W1104 12:03:49.275935   85129 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1104 12:03:49.277017   85129 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p default-k8s-diff-port-036892 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-036892 -n default-k8s-diff-port-036892
E1104 12:04:00.995654   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/bridge-528108/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-036892 -n default-k8s-diff-port-036892: exit status 3 (18.578953706s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1104 12:04:07.857598   86015 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.130:22: connect: no route to host
	E1104 12:04:07.857617   86015 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.72.130:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-036892" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (139.04s)
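
A minimal local-reproduction sketch for this failure, assuming the default-k8s-diff-port-036892 profile and the KVM host are still available, is to rerun the exact commands captured above and inspect their exit codes:

	# rerun the stop that timed out; the run above exited with status 82 (GUEST_STOP_TIMEOUT)
	out/minikube-linux-amd64 stop -p default-k8s-diff-port-036892 --alsologtostderr -v=3
	echo "stop exit status: $?"
	# the same host-state check the harness performs in its post-mortem
	out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-036892 -n default-k8s-diff-port-036892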

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-908370 -n no-preload-908370
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-908370 -n no-preload-908370: exit status 3 (3.167854979s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1104 12:02:30.929604   85341 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.91:22: connect: no route to host
	E1104 12:02:30.929624   85341 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.61.91:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-908370 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E1104 12:02:31.397375   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/auto-528108/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-908370 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.152921089s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.61.91:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-908370 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-908370 -n no-preload-908370
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-908370 -n no-preload-908370: exit status 3 (3.062829497s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1104 12:02:40.145622   85421 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.91:22: connect: no route to host
	E1104 12:02:40.145645   85421 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.61.91:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-908370" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-325116 -n embed-certs-325116
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-325116 -n embed-certs-325116: exit status 3 (3.167224624s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1104 12:02:41.937584   85453 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.47:22: connect: no route to host
	E1104 12:02:41.937608   85453 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.39.47:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-325116 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E1104 12:02:46.501075   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/flannel-528108/client.crt: no such file or directory" logger="UnhandledError"
E1104 12:02:46.507467   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/flannel-528108/client.crt: no such file or directory" logger="UnhandledError"
E1104 12:02:46.518786   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/flannel-528108/client.crt: no such file or directory" logger="UnhandledError"
E1104 12:02:46.540334   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/flannel-528108/client.crt: no such file or directory" logger="UnhandledError"
E1104 12:02:46.581762   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/flannel-528108/client.crt: no such file or directory" logger="UnhandledError"
E1104 12:02:46.663230   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/flannel-528108/client.crt: no such file or directory" logger="UnhandledError"
E1104 12:02:46.824811   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/flannel-528108/client.crt: no such file or directory" logger="UnhandledError"
E1104 12:02:47.146524   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/flannel-528108/client.crt: no such file or directory" logger="UnhandledError"
E1104 12:02:47.788051   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/flannel-528108/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-325116 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.15254736s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.47:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-325116 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-325116 -n embed-certs-325116
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-325116 -n embed-certs-325116: exit status 3 (3.06314684s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1104 12:02:51.153625   85583 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.47:22: connect: no route to host
	E1104 12:02:51.153649   85583 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.39.47:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-325116" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (0.48s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-589257 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-589257 create -f testdata/busybox.yaml: exit status 1 (43.300803ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-589257" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-589257 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-589257 -n old-k8s-version-589257
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-589257 -n old-k8s-version-589257: exit status 6 (222.652101ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1104 12:02:49.452034   85654 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-589257" does not appear in /home/jenkins/minikube-integration/19906-19898/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-589257" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-589257 -n old-k8s-version-589257
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-589257 -n old-k8s-version-589257: exit status 6 (212.088342ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1104 12:02:49.663625   85684 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-589257" does not appear in /home/jenkins/minikube-integration/19906-19898/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-589257" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.48s)
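
The status output above shows the likely cause: the "old-k8s-version-589257" entry is missing from the kubeconfig, so every kubectl --context call fails before reaching the cluster. A minimal recovery sketch, following the warning's own suggestion (hypothetical, not part of this test run):

	# restore the kubeconfig entry for the profile, as the status warning suggests
	out/minikube-linux-amd64 update-context -p old-k8s-version-589257
	# then retry the deployment the test attempted
	kubectl --context old-k8s-version-589257 create -f testdata/busybox.yaml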

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (85.5s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-589257 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-589257 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m25.226490853s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-589257 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-589257 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-589257 describe deploy/metrics-server -n kube-system: exit status 1 (42.758898ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-589257" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-589257 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-589257 -n old-k8s-version-589257
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-589257 -n old-k8s-version-589257: exit status 6 (225.70526ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1104 12:04:15.159813   86203 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-589257" does not appear in /home/jenkins/minikube-integration/19906-19898/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-589257" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (85.50s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-036892 -n default-k8s-diff-port-036892
E1104 12:04:08.439650   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/flannel-528108/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-036892 -n default-k8s-diff-port-036892: exit status 3 (3.167810623s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1104 12:04:11.025615   86110 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.130:22: connect: no route to host
	E1104 12:04:11.025632   86110 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.72.130:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-036892 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-036892 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.152331518s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.72.130:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-036892 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-036892 -n default-k8s-diff-port-036892
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-036892 -n default-k8s-diff-port-036892: exit status 3 (3.063332614s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1104 12:04:20.241547   86255 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.130:22: connect: no route to host
	E1104 12:04:20.241565   86255 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.72.130:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-036892" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (724.94s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-589257 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
E1104 12:04:23.206806   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/custom-flannel-528108/client.crt: no such file or directory" logger="UnhandledError"
E1104 12:04:27.632296   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/enable-default-cni-528108/client.crt: no such file or directory" logger="UnhandledError"
E1104 12:04:41.957005   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/bridge-528108/client.crt: no such file or directory" logger="UnhandledError"
E1104 12:04:47.409366   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/addons-746456/client.crt: no such file or directory" logger="UnhandledError"
E1104 12:04:47.536883   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/auto-528108/client.crt: no such file or directory" logger="UnhandledError"
E1104 12:05:14.953441   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/calico-528108/client.crt: no such file or directory" logger="UnhandledError"
E1104 12:05:14.959815   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/calico-528108/client.crt: no such file or directory" logger="UnhandledError"
E1104 12:05:14.971164   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/calico-528108/client.crt: no such file or directory" logger="UnhandledError"
E1104 12:05:14.992503   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/calico-528108/client.crt: no such file or directory" logger="UnhandledError"
E1104 12:05:15.033883   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/calico-528108/client.crt: no such file or directory" logger="UnhandledError"
E1104 12:05:15.115350   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/calico-528108/client.crt: no such file or directory" logger="UnhandledError"
E1104 12:05:15.238766   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/auto-528108/client.crt: no such file or directory" logger="UnhandledError"
E1104 12:05:15.277178   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/calico-528108/client.crt: no such file or directory" logger="UnhandledError"
E1104 12:05:15.598911   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/calico-528108/client.crt: no such file or directory" logger="UnhandledError"
E1104 12:05:16.240273   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/calico-528108/client.crt: no such file or directory" logger="UnhandledError"
E1104 12:05:17.522521   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/calico-528108/client.crt: no such file or directory" logger="UnhandledError"
E1104 12:05:20.084306   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/calico-528108/client.crt: no such file or directory" logger="UnhandledError"
E1104 12:05:25.206429   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/calico-528108/client.crt: no such file or directory" logger="UnhandledError"
E1104 12:05:30.361074   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/flannel-528108/client.crt: no such file or directory" logger="UnhandledError"
E1104 12:05:35.448389   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/calico-528108/client.crt: no such file or directory" logger="UnhandledError"
E1104 12:05:45.128819   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/custom-flannel-528108/client.crt: no such file or directory" logger="UnhandledError"
E1104 12:05:50.828463   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/kindnet-528108/client.crt: no such file or directory" logger="UnhandledError"
E1104 12:05:55.929850   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/calico-528108/client.crt: no such file or directory" logger="UnhandledError"
E1104 12:06:03.878408   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/bridge-528108/client.crt: no such file or directory" logger="UnhandledError"
E1104 12:06:18.530024   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/kindnet-528108/client.crt: no such file or directory" logger="UnhandledError"
E1104 12:06:33.165589   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/functional-762465/client.crt: no such file or directory" logger="UnhandledError"
E1104 12:06:36.891771   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/calico-528108/client.crt: no such file or directory" logger="UnhandledError"
E1104 12:06:43.768901   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/enable-default-cni-528108/client.crt: no such file or directory" logger="UnhandledError"
E1104 12:07:11.474462   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/enable-default-cni-528108/client.crt: no such file or directory" logger="UnhandledError"
E1104 12:07:46.500921   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/flannel-528108/client.crt: no such file or directory" logger="UnhandledError"
E1104 12:07:58.813575   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/calico-528108/client.crt: no such file or directory" logger="UnhandledError"
E1104 12:08:01.267926   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/custom-flannel-528108/client.crt: no such file or directory" logger="UnhandledError"
E1104 12:08:14.203492   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/flannel-528108/client.crt: no such file or directory" logger="UnhandledError"
E1104 12:08:20.019278   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/bridge-528108/client.crt: no such file or directory" logger="UnhandledError"
E1104 12:08:28.970356   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/custom-flannel-528108/client.crt: no such file or directory" logger="UnhandledError"
E1104 12:08:47.720085   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/bridge-528108/client.crt: no such file or directory" logger="UnhandledError"
E1104 12:09:47.409675   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/addons-746456/client.crt: no such file or directory" logger="UnhandledError"
E1104 12:09:47.537358   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/auto-528108/client.crt: no such file or directory" logger="UnhandledError"
E1104 12:10:14.953001   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/calico-528108/client.crt: no such file or directory" logger="UnhandledError"
E1104 12:10:42.654859   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/calico-528108/client.crt: no such file or directory" logger="UnhandledError"
E1104 12:10:50.828320   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/kindnet-528108/client.crt: no such file or directory" logger="UnhandledError"
E1104 12:11:33.165015   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/functional-762465/client.crt: no such file or directory" logger="UnhandledError"
E1104 12:11:43.769863   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/enable-default-cni-528108/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-589257 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (12m1.327757069s)

                                                
                                                
-- stdout --
	* [old-k8s-version-589257] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19906
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19906-19898/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19906-19898/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	* Using the kvm2 driver based on existing profile
	* Starting "old-k8s-version-589257" primary control-plane node in "old-k8s-version-589257" cluster
	* Restarting existing kvm2 VM for "old-k8s-version-589257" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1104 12:04:21.684777   86402 out.go:345] Setting OutFile to fd 1 ...
	I1104 12:04:21.684885   86402 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1104 12:04:21.684893   86402 out.go:358] Setting ErrFile to fd 2...
	I1104 12:04:21.684897   86402 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1104 12:04:21.685085   86402 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19906-19898/.minikube/bin
	I1104 12:04:21.685618   86402 out.go:352] Setting JSON to false
	I1104 12:04:21.686501   86402 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":10013,"bootTime":1730711849,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1104 12:04:21.686603   86402 start.go:139] virtualization: kvm guest
	I1104 12:04:21.688652   86402 out.go:177] * [old-k8s-version-589257] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1104 12:04:21.690121   86402 notify.go:220] Checking for updates...
	I1104 12:04:21.690173   86402 out.go:177]   - MINIKUBE_LOCATION=19906
	I1104 12:04:21.691712   86402 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1104 12:04:21.693100   86402 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19906-19898/kubeconfig
	I1104 12:04:21.694334   86402 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19906-19898/.minikube
	I1104 12:04:21.695431   86402 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1104 12:04:21.696680   86402 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1104 12:04:21.698271   86402 config.go:182] Loaded profile config "old-k8s-version-589257": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1104 12:04:21.698697   86402 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:04:21.698738   86402 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:04:21.713382   86402 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46731
	I1104 12:04:21.713861   86402 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:04:21.714357   86402 main.go:141] libmachine: Using API Version  1
	I1104 12:04:21.714378   86402 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:04:21.714696   86402 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:04:21.714872   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .DriverName
	I1104 12:04:21.716711   86402 out.go:177] * Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	I1104 12:04:21.718136   86402 driver.go:394] Setting default libvirt URI to qemu:///system
	I1104 12:04:21.718573   86402 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:04:21.718617   86402 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:04:21.733074   86402 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45605
	I1104 12:04:21.733525   86402 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:04:21.733939   86402 main.go:141] libmachine: Using API Version  1
	I1104 12:04:21.733955   86402 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:04:21.734252   86402 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:04:21.734410   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .DriverName
	I1104 12:04:21.770049   86402 out.go:177] * Using the kvm2 driver based on existing profile
	I1104 12:04:21.771735   86402 start.go:297] selected driver: kvm2
	I1104 12:04:21.771748   86402 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-589257 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-589257 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.180 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1104 12:04:21.771851   86402 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1104 12:04:21.772615   86402 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1104 12:04:21.772709   86402 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19906-19898/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1104 12:04:21.787662   86402 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1104 12:04:21.788158   86402 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1104 12:04:21.788201   86402 cni.go:84] Creating CNI manager for ""
	I1104 12:04:21.788238   86402 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1104 12:04:21.788282   86402 start.go:340] cluster config:
	{Name:old-k8s-version-589257 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-589257 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.180 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1104 12:04:21.788422   86402 iso.go:125] acquiring lock: {Name:mk00c7dd6e02d348844f079f7574057e15cae010 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1104 12:04:21.790364   86402 out.go:177] * Starting "old-k8s-version-589257" primary control-plane node in "old-k8s-version-589257" cluster
	I1104 12:04:21.791568   86402 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1104 12:04:21.791599   86402 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1104 12:04:21.791608   86402 cache.go:56] Caching tarball of preloaded images
	I1104 12:04:21.791668   86402 preload.go:172] Found /home/jenkins/minikube-integration/19906-19898/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1104 12:04:21.791678   86402 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1104 12:04:21.791755   86402 profile.go:143] Saving config to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/old-k8s-version-589257/config.json ...
	I1104 12:04:21.791918   86402 start.go:360] acquireMachinesLock for old-k8s-version-589257: {Name:mka4ed965095cfb58c9b0f5916edff39b4236a2f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1104 12:07:52.573915   86402 start.go:364] duration metric: took 3m30.781955626s to acquireMachinesLock for "old-k8s-version-589257"
	I1104 12:07:52.573984   86402 start.go:96] Skipping create...Using existing machine configuration
	I1104 12:07:52.573996   86402 fix.go:54] fixHost starting: 
	I1104 12:07:52.574443   86402 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:07:52.574500   86402 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:07:52.594310   86402 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33975
	I1104 12:07:52.594822   86402 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:07:52.595317   86402 main.go:141] libmachine: Using API Version  1
	I1104 12:07:52.595347   86402 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:07:52.595727   86402 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:07:52.595924   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .DriverName
	I1104 12:07:52.596093   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetState
	I1104 12:07:52.597578   86402 fix.go:112] recreateIfNeeded on old-k8s-version-589257: state=Stopped err=<nil>
	I1104 12:07:52.597615   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .DriverName
	W1104 12:07:52.597752   86402 fix.go:138] unexpected machine state, will restart: <nil>
	I1104 12:07:52.599659   86402 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-589257" ...
	I1104 12:07:52.600997   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .Start
	I1104 12:07:52.601180   86402 main.go:141] libmachine: (old-k8s-version-589257) Ensuring networks are active...
	I1104 12:07:52.602131   86402 main.go:141] libmachine: (old-k8s-version-589257) Ensuring network default is active
	I1104 12:07:52.602560   86402 main.go:141] libmachine: (old-k8s-version-589257) Ensuring network mk-old-k8s-version-589257 is active
	I1104 12:07:52.603030   86402 main.go:141] libmachine: (old-k8s-version-589257) Getting domain xml...
	I1104 12:07:52.603859   86402 main.go:141] libmachine: (old-k8s-version-589257) Creating domain...
	I1104 12:07:53.855214   86402 main.go:141] libmachine: (old-k8s-version-589257) Waiting to get IP...
	I1104 12:07:53.856063   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:07:53.856539   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | unable to find current IP address of domain old-k8s-version-589257 in network mk-old-k8s-version-589257
	I1104 12:07:53.856594   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | I1104 12:07:53.856513   87367 retry.go:31] will retry after 268.725451ms: waiting for machine to come up
	I1104 12:07:54.127094   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:07:54.127584   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | unable to find current IP address of domain old-k8s-version-589257 in network mk-old-k8s-version-589257
	I1104 12:07:54.127612   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | I1104 12:07:54.127560   87367 retry.go:31] will retry after 239.665225ms: waiting for machine to come up
	I1104 12:07:54.369139   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:07:54.369777   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | unable to find current IP address of domain old-k8s-version-589257 in network mk-old-k8s-version-589257
	I1104 12:07:54.369798   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | I1104 12:07:54.369710   87367 retry.go:31] will retry after 386.228261ms: waiting for machine to come up
	I1104 12:07:54.757191   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:07:54.757637   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | unable to find current IP address of domain old-k8s-version-589257 in network mk-old-k8s-version-589257
	I1104 12:07:54.757665   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | I1104 12:07:54.757591   87367 retry.go:31] will retry after 571.244573ms: waiting for machine to come up
	I1104 12:07:55.330439   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:07:55.331187   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | unable to find current IP address of domain old-k8s-version-589257 in network mk-old-k8s-version-589257
	I1104 12:07:55.331216   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | I1104 12:07:55.331144   87367 retry.go:31] will retry after 539.328185ms: waiting for machine to come up
	I1104 12:07:55.871869   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:07:55.872373   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | unable to find current IP address of domain old-k8s-version-589257 in network mk-old-k8s-version-589257
	I1104 12:07:55.872403   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | I1104 12:07:55.872335   87367 retry.go:31] will retry after 879.285089ms: waiting for machine to come up
	I1104 12:07:56.752983   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:07:56.753577   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | unable to find current IP address of domain old-k8s-version-589257 in network mk-old-k8s-version-589257
	I1104 12:07:56.753613   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | I1104 12:07:56.753542   87367 retry.go:31] will retry after 1.081359862s: waiting for machine to come up
	I1104 12:07:57.836518   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:07:57.836963   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | unable to find current IP address of domain old-k8s-version-589257 in network mk-old-k8s-version-589257
	I1104 12:07:57.836990   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | I1104 12:07:57.836914   87367 retry.go:31] will retry after 1.149571097s: waiting for machine to come up
	I1104 12:07:58.987694   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:07:58.988125   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | unable to find current IP address of domain old-k8s-version-589257 in network mk-old-k8s-version-589257
	I1104 12:07:58.988152   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | I1104 12:07:58.988077   87367 retry.go:31] will retry after 1.247311806s: waiting for machine to come up
	I1104 12:08:00.237634   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:00.238147   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | unable to find current IP address of domain old-k8s-version-589257 in network mk-old-k8s-version-589257
	I1104 12:08:00.238217   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | I1104 12:08:00.238109   87367 retry.go:31] will retry after 2.058125339s: waiting for machine to come up
	I1104 12:08:02.298631   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:02.299046   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | unable to find current IP address of domain old-k8s-version-589257 in network mk-old-k8s-version-589257
	I1104 12:08:02.299079   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | I1104 12:08:02.298978   87367 retry.go:31] will retry after 2.664667046s: waiting for machine to come up
	I1104 12:08:04.964700   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:04.965185   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | unable to find current IP address of domain old-k8s-version-589257 in network mk-old-k8s-version-589257
	I1104 12:08:04.965209   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | I1104 12:08:04.965135   87367 retry.go:31] will retry after 2.716802395s: waiting for machine to come up
	I1104 12:08:07.683582   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:07.684143   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | unable to find current IP address of domain old-k8s-version-589257 in network mk-old-k8s-version-589257
	I1104 12:08:07.684172   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | I1104 12:08:07.684093   87367 retry.go:31] will retry after 2.880856513s: waiting for machine to come up
	I1104 12:08:10.566197   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:10.566657   86402 main.go:141] libmachine: (old-k8s-version-589257) Found IP for machine: 192.168.50.180
	I1104 12:08:10.566675   86402 main.go:141] libmachine: (old-k8s-version-589257) Reserving static IP address...
	I1104 12:08:10.566687   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has current primary IP address 192.168.50.180 and MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:10.567139   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | found host DHCP lease matching {name: "old-k8s-version-589257", mac: "52:54:00:6b:6c:11", ip: "192.168.50.180"} in network mk-old-k8s-version-589257: {Iface:virbr2 ExpiryTime:2024-11-04 13:08:03 +0000 UTC Type:0 Mac:52:54:00:6b:6c:11 Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:old-k8s-version-589257 Clientid:01:52:54:00:6b:6c:11}
	I1104 12:08:10.567166   86402 main.go:141] libmachine: (old-k8s-version-589257) Reserved static IP address: 192.168.50.180
	I1104 12:08:10.567186   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | skip adding static IP to network mk-old-k8s-version-589257 - found existing host DHCP lease matching {name: "old-k8s-version-589257", mac: "52:54:00:6b:6c:11", ip: "192.168.50.180"}
	I1104 12:08:10.567199   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | Getting to WaitForSSH function...
	I1104 12:08:10.567213   86402 main.go:141] libmachine: (old-k8s-version-589257) Waiting for SSH to be available...
	I1104 12:08:10.569500   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:10.569816   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:6c:11", ip: ""} in network mk-old-k8s-version-589257: {Iface:virbr2 ExpiryTime:2024-11-04 13:08:03 +0000 UTC Type:0 Mac:52:54:00:6b:6c:11 Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:old-k8s-version-589257 Clientid:01:52:54:00:6b:6c:11}
	I1104 12:08:10.569846   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined IP address 192.168.50.180 and MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:10.569982   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | Using SSH client type: external
	I1104 12:08:10.570004   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | Using SSH private key: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/old-k8s-version-589257/id_rsa (-rw-------)
	I1104 12:08:10.570025   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.180 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19906-19898/.minikube/machines/old-k8s-version-589257/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1104 12:08:10.570033   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | About to run SSH command:
	I1104 12:08:10.570041   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | exit 0
	I1104 12:08:10.697114   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | SSH cmd err, output: <nil>: 
	I1104 12:08:10.697552   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetConfigRaw
	I1104 12:08:10.698196   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetIP
	I1104 12:08:10.700982   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:10.701369   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:6c:11", ip: ""} in network mk-old-k8s-version-589257: {Iface:virbr2 ExpiryTime:2024-11-04 13:08:03 +0000 UTC Type:0 Mac:52:54:00:6b:6c:11 Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:old-k8s-version-589257 Clientid:01:52:54:00:6b:6c:11}
	I1104 12:08:10.701403   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined IP address 192.168.50.180 and MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:10.701649   86402 profile.go:143] Saving config to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/old-k8s-version-589257/config.json ...
	I1104 12:08:10.701875   86402 machine.go:93] provisionDockerMachine start ...
	I1104 12:08:10.701898   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .DriverName
	I1104 12:08:10.702099   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHHostname
	I1104 12:08:10.704605   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:10.704977   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:6c:11", ip: ""} in network mk-old-k8s-version-589257: {Iface:virbr2 ExpiryTime:2024-11-04 13:08:03 +0000 UTC Type:0 Mac:52:54:00:6b:6c:11 Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:old-k8s-version-589257 Clientid:01:52:54:00:6b:6c:11}
	I1104 12:08:10.705006   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined IP address 192.168.50.180 and MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:10.705151   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHPort
	I1104 12:08:10.705342   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHKeyPath
	I1104 12:08:10.705486   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHKeyPath
	I1104 12:08:10.705602   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHUsername
	I1104 12:08:10.705703   86402 main.go:141] libmachine: Using SSH client type: native
	I1104 12:08:10.705907   86402 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.180 22 <nil> <nil>}
	I1104 12:08:10.705918   86402 main.go:141] libmachine: About to run SSH command:
	hostname
	I1104 12:08:10.813494   86402 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1104 12:08:10.813544   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetMachineName
	I1104 12:08:10.813816   86402 buildroot.go:166] provisioning hostname "old-k8s-version-589257"
	I1104 12:08:10.813847   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetMachineName
	I1104 12:08:10.814034   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHHostname
	I1104 12:08:10.816782   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:10.817186   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:6c:11", ip: ""} in network mk-old-k8s-version-589257: {Iface:virbr2 ExpiryTime:2024-11-04 13:08:03 +0000 UTC Type:0 Mac:52:54:00:6b:6c:11 Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:old-k8s-version-589257 Clientid:01:52:54:00:6b:6c:11}
	I1104 12:08:10.817245   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined IP address 192.168.50.180 and MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:10.817394   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHPort
	I1104 12:08:10.817589   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHKeyPath
	I1104 12:08:10.817760   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHKeyPath
	I1104 12:08:10.817882   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHUsername
	I1104 12:08:10.818027   86402 main.go:141] libmachine: Using SSH client type: native
	I1104 12:08:10.818227   86402 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.180 22 <nil> <nil>}
	I1104 12:08:10.818245   86402 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-589257 && echo "old-k8s-version-589257" | sudo tee /etc/hostname
	I1104 12:08:10.940779   86402 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-589257
	
	I1104 12:08:10.940803   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHHostname
	I1104 12:08:10.943694   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:10.944062   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:6c:11", ip: ""} in network mk-old-k8s-version-589257: {Iface:virbr2 ExpiryTime:2024-11-04 13:08:03 +0000 UTC Type:0 Mac:52:54:00:6b:6c:11 Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:old-k8s-version-589257 Clientid:01:52:54:00:6b:6c:11}
	I1104 12:08:10.944090   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined IP address 192.168.50.180 and MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:10.944263   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHPort
	I1104 12:08:10.944452   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHKeyPath
	I1104 12:08:10.944627   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHKeyPath
	I1104 12:08:10.944767   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHUsername
	I1104 12:08:10.944910   86402 main.go:141] libmachine: Using SSH client type: native
	I1104 12:08:10.945093   86402 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.180 22 <nil> <nil>}
	I1104 12:08:10.945110   86402 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-589257' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-589257/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-589257' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1104 12:08:11.061924   86402 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1104 12:08:11.061966   86402 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19906-19898/.minikube CaCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19906-19898/.minikube}
	I1104 12:08:11.062007   86402 buildroot.go:174] setting up certificates
	I1104 12:08:11.062021   86402 provision.go:84] configureAuth start
	I1104 12:08:11.062033   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetMachineName
	I1104 12:08:11.062293   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetIP
	I1104 12:08:11.065165   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:11.065559   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:6c:11", ip: ""} in network mk-old-k8s-version-589257: {Iface:virbr2 ExpiryTime:2024-11-04 13:08:03 +0000 UTC Type:0 Mac:52:54:00:6b:6c:11 Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:old-k8s-version-589257 Clientid:01:52:54:00:6b:6c:11}
	I1104 12:08:11.065594   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined IP address 192.168.50.180 and MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:11.065834   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHHostname
	I1104 12:08:11.068257   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:11.068620   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:6c:11", ip: ""} in network mk-old-k8s-version-589257: {Iface:virbr2 ExpiryTime:2024-11-04 13:08:03 +0000 UTC Type:0 Mac:52:54:00:6b:6c:11 Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:old-k8s-version-589257 Clientid:01:52:54:00:6b:6c:11}
	I1104 12:08:11.068646   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined IP address 192.168.50.180 and MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:11.068787   86402 provision.go:143] copyHostCerts
	I1104 12:08:11.068842   86402 exec_runner.go:144] found /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem, removing ...
	I1104 12:08:11.068854   86402 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem
	I1104 12:08:11.068904   86402 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem (1078 bytes)
	I1104 12:08:11.068993   86402 exec_runner.go:144] found /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem, removing ...
	I1104 12:08:11.069000   86402 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem
	I1104 12:08:11.069019   86402 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem (1123 bytes)
	I1104 12:08:11.069072   86402 exec_runner.go:144] found /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem, removing ...
	I1104 12:08:11.069079   86402 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem
	I1104 12:08:11.069097   86402 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem (1679 bytes)
	I1104 12:08:11.069191   86402 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-589257 san=[127.0.0.1 192.168.50.180 localhost minikube old-k8s-version-589257]
	I1104 12:08:11.271880   86402 provision.go:177] copyRemoteCerts
	I1104 12:08:11.271946   86402 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1104 12:08:11.271988   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHHostname
	I1104 12:08:11.275023   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:11.275396   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:6c:11", ip: ""} in network mk-old-k8s-version-589257: {Iface:virbr2 ExpiryTime:2024-11-04 13:08:03 +0000 UTC Type:0 Mac:52:54:00:6b:6c:11 Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:old-k8s-version-589257 Clientid:01:52:54:00:6b:6c:11}
	I1104 12:08:11.275428   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined IP address 192.168.50.180 and MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:11.275701   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHPort
	I1104 12:08:11.275905   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHKeyPath
	I1104 12:08:11.276048   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHUsername
	I1104 12:08:11.276182   86402 sshutil.go:53] new ssh client: &{IP:192.168.50.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/old-k8s-version-589257/id_rsa Username:docker}
	I1104 12:08:11.362968   86402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1104 12:08:11.388401   86402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1104 12:08:11.417180   86402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1104 12:08:11.439810   86402 provision.go:87] duration metric: took 377.778325ms to configureAuth
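	The three scp steps above place the CA plus the freshly generated server certificate and key under /etc/docker inside the guest, completing configureAuth. A minimal way to double-check the SANs that configureAuth requested (127.0.0.1, 192.168.50.180, localhost, minikube, old-k8s-version-589257) is to inspect that certificate from inside the VM; a sketch, assuming openssl is available in the guest image:

	    # print the Subject Alternative Name extension of the provisioned server cert
	    sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'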
	I1104 12:08:11.439841   86402 buildroot.go:189] setting minikube options for container-runtime
	I1104 12:08:11.440043   86402 config.go:182] Loaded profile config "old-k8s-version-589257": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1104 12:08:11.440110   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHHostname
	I1104 12:08:11.442476   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:11.442783   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:6c:11", ip: ""} in network mk-old-k8s-version-589257: {Iface:virbr2 ExpiryTime:2024-11-04 13:08:03 +0000 UTC Type:0 Mac:52:54:00:6b:6c:11 Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:old-k8s-version-589257 Clientid:01:52:54:00:6b:6c:11}
	I1104 12:08:11.442818   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined IP address 192.168.50.180 and MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:11.443005   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHPort
	I1104 12:08:11.443204   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHKeyPath
	I1104 12:08:11.443329   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHKeyPath
	I1104 12:08:11.443492   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHUsername
	I1104 12:08:11.443665   86402 main.go:141] libmachine: Using SSH client type: native
	I1104 12:08:11.443822   86402 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.180 22 <nil> <nil>}
	I1104 12:08:11.443837   86402 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1104 12:08:11.662212   86402 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1104 12:08:11.662241   86402 machine.go:96] duration metric: took 960.351823ms to provisionDockerMachine
	I1104 12:08:11.662256   86402 start.go:293] postStartSetup for "old-k8s-version-589257" (driver="kvm2")
	I1104 12:08:11.662269   86402 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1104 12:08:11.662289   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .DriverName
	I1104 12:08:11.662613   86402 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1104 12:08:11.662642   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHHostname
	I1104 12:08:11.665028   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:11.665391   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:6c:11", ip: ""} in network mk-old-k8s-version-589257: {Iface:virbr2 ExpiryTime:2024-11-04 13:08:03 +0000 UTC Type:0 Mac:52:54:00:6b:6c:11 Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:old-k8s-version-589257 Clientid:01:52:54:00:6b:6c:11}
	I1104 12:08:11.665420   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined IP address 192.168.50.180 and MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:11.665598   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHPort
	I1104 12:08:11.665776   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHKeyPath
	I1104 12:08:11.665942   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHUsername
	I1104 12:08:11.666064   86402 sshutil.go:53] new ssh client: &{IP:192.168.50.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/old-k8s-version-589257/id_rsa Username:docker}
	I1104 12:08:11.747199   86402 ssh_runner.go:195] Run: cat /etc/os-release
	I1104 12:08:11.751253   86402 info.go:137] Remote host: Buildroot 2023.02.9
	I1104 12:08:11.751279   86402 filesync.go:126] Scanning /home/jenkins/minikube-integration/19906-19898/.minikube/addons for local assets ...
	I1104 12:08:11.751356   86402 filesync.go:126] Scanning /home/jenkins/minikube-integration/19906-19898/.minikube/files for local assets ...
	I1104 12:08:11.751465   86402 filesync.go:149] local asset: /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem -> 272182.pem in /etc/ssl/certs
	I1104 12:08:11.751591   86402 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1104 12:08:11.760409   86402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem --> /etc/ssl/certs/272182.pem (1708 bytes)
	I1104 12:08:11.781890   86402 start.go:296] duration metric: took 119.620604ms for postStartSetup
	I1104 12:08:11.781934   86402 fix.go:56] duration metric: took 19.207938878s for fixHost
	I1104 12:08:11.781960   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHHostname
	I1104 12:08:11.784767   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:11.785058   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:6c:11", ip: ""} in network mk-old-k8s-version-589257: {Iface:virbr2 ExpiryTime:2024-11-04 13:08:03 +0000 UTC Type:0 Mac:52:54:00:6b:6c:11 Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:old-k8s-version-589257 Clientid:01:52:54:00:6b:6c:11}
	I1104 12:08:11.785084   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined IP address 192.168.50.180 and MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:11.785300   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHPort
	I1104 12:08:11.785500   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHKeyPath
	I1104 12:08:11.785644   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHKeyPath
	I1104 12:08:11.785750   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHUsername
	I1104 12:08:11.785877   86402 main.go:141] libmachine: Using SSH client type: native
	I1104 12:08:11.786047   86402 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.180 22 <nil> <nil>}
	I1104 12:08:11.786059   86402 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1104 12:08:11.889540   86402 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730722091.863405264
	
	I1104 12:08:11.889568   86402 fix.go:216] guest clock: 1730722091.863405264
	I1104 12:08:11.889578   86402 fix.go:229] Guest: 2024-11-04 12:08:11.863405264 +0000 UTC Remote: 2024-11-04 12:08:11.781939603 +0000 UTC m=+230.132769870 (delta=81.465661ms)
	I1104 12:08:11.889631   86402 fix.go:200] guest clock delta is within tolerance: 81.465661ms
	I1104 12:08:11.889641   86402 start.go:83] releasing machines lock for "old-k8s-version-589257", held for 19.315682928s
	I1104 12:08:11.889677   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .DriverName
	I1104 12:08:11.889975   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetIP
	I1104 12:08:11.892654   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:11.892982   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:6c:11", ip: ""} in network mk-old-k8s-version-589257: {Iface:virbr2 ExpiryTime:2024-11-04 13:08:03 +0000 UTC Type:0 Mac:52:54:00:6b:6c:11 Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:old-k8s-version-589257 Clientid:01:52:54:00:6b:6c:11}
	I1104 12:08:11.893012   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined IP address 192.168.50.180 and MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:11.893212   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .DriverName
	I1104 12:08:11.893706   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .DriverName
	I1104 12:08:11.893888   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .DriverName
	I1104 12:08:11.893989   86402 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1104 12:08:11.894031   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHHostname
	I1104 12:08:11.894074   86402 ssh_runner.go:195] Run: cat /version.json
	I1104 12:08:11.894094   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHHostname
	I1104 12:08:11.896812   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:11.897020   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:11.897192   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:6c:11", ip: ""} in network mk-old-k8s-version-589257: {Iface:virbr2 ExpiryTime:2024-11-04 13:08:03 +0000 UTC Type:0 Mac:52:54:00:6b:6c:11 Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:old-k8s-version-589257 Clientid:01:52:54:00:6b:6c:11}
	I1104 12:08:11.897217   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined IP address 192.168.50.180 and MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:11.897454   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:6c:11", ip: ""} in network mk-old-k8s-version-589257: {Iface:virbr2 ExpiryTime:2024-11-04 13:08:03 +0000 UTC Type:0 Mac:52:54:00:6b:6c:11 Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:old-k8s-version-589257 Clientid:01:52:54:00:6b:6c:11}
	I1104 12:08:11.897478   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined IP address 192.168.50.180 and MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:11.897492   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHPort
	I1104 12:08:11.897631   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHPort
	I1104 12:08:11.897646   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHKeyPath
	I1104 12:08:11.897778   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHKeyPath
	I1104 12:08:11.897911   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHUsername
	I1104 12:08:11.897989   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHUsername
	I1104 12:08:11.898083   86402 sshutil.go:53] new ssh client: &{IP:192.168.50.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/old-k8s-version-589257/id_rsa Username:docker}
	I1104 12:08:11.898120   86402 sshutil.go:53] new ssh client: &{IP:192.168.50.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/old-k8s-version-589257/id_rsa Username:docker}
	I1104 12:08:11.998704   86402 ssh_runner.go:195] Run: systemctl --version
	I1104 12:08:12.004820   86402 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1104 12:08:12.148742   86402 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1104 12:08:12.155015   86402 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1104 12:08:12.155089   86402 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1104 12:08:12.171054   86402 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1104 12:08:12.171085   86402 start.go:495] detecting cgroup driver to use...
	I1104 12:08:12.171154   86402 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1104 12:08:12.189977   86402 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1104 12:08:12.204622   86402 docker.go:217] disabling cri-docker service (if available) ...
	I1104 12:08:12.204679   86402 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1104 12:08:12.218808   86402 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1104 12:08:12.232276   86402 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1104 12:08:12.341220   86402 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1104 12:08:12.512813   86402 docker.go:233] disabling docker service ...
	I1104 12:08:12.512893   86402 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1104 12:08:12.526784   86402 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1104 12:08:12.539774   86402 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1104 12:08:12.666162   86402 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1104 12:08:12.788317   86402 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1104 12:08:12.802703   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1104 12:08:12.820915   86402 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1104 12:08:12.820985   86402 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:08:12.831311   86402 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1104 12:08:12.831400   86402 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:08:12.841625   86402 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:08:12.852548   86402 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
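	Taken together, the commands above configure the container runtime for this profile: crictl is pointed at the CRI-O socket, and the 02-crio.conf drop-in gets the v1.20-era pause image plus the cgroupfs driver. A quick way to confirm the result by hand inside the guest (a sketch; the paths and keys are the ones the commands above write to):

	    # crictl endpoint written by the tee above
	    cat /etc/crictl.yaml
	    # keys rewritten by the sed commands above; expected values:
	    #   pause_image = "registry.k8s.io/pause:3.2"
	    #   cgroup_manager = "cgroupfs"
	    #   conmon_cgroup = "pod"
	    grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf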
	I1104 12:08:12.864683   86402 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1104 12:08:12.876794   86402 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1104 12:08:12.886878   86402 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1104 12:08:12.886943   86402 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1104 12:08:12.902476   86402 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
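	The failed sysctl above is expected on a freshly booted guest: /proc/sys/net/bridge/ only appears once the br_netfilter module is loaded, which is why minikube follows up with modprobe and then enables IPv4 forwarding. The equivalent manual host preparation, as a sketch (the bridge-nf-call-iptables setting is the usual Kubernetes prerequisite rather than something shown being set in this log):

	    # load the module so the bridge sysctls exist, then enable forwarding
	    sudo modprobe br_netfilter
	    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
	    # typically also wanted for kube-proxy/CNI bridged traffic
	    sudo sysctl -w net.bridge.bridge-nf-call-iptables=1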
	I1104 12:08:12.914565   86402 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1104 12:08:13.044125   86402 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1104 12:08:13.149816   86402 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1104 12:08:13.149893   86402 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1104 12:08:13.154639   86402 start.go:563] Will wait 60s for crictl version
	I1104 12:08:13.154706   86402 ssh_runner.go:195] Run: which crictl
	I1104 12:08:13.158788   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1104 12:08:13.200038   86402 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1104 12:08:13.200117   86402 ssh_runner.go:195] Run: crio --version
	I1104 12:08:13.233501   86402 ssh_runner.go:195] Run: crio --version
	I1104 12:08:13.264558   86402 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1104 12:08:13.266087   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetIP
	I1104 12:08:13.269660   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:13.270200   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:6c:11", ip: ""} in network mk-old-k8s-version-589257: {Iface:virbr2 ExpiryTime:2024-11-04 13:08:03 +0000 UTC Type:0 Mac:52:54:00:6b:6c:11 Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:old-k8s-version-589257 Clientid:01:52:54:00:6b:6c:11}
	I1104 12:08:13.270233   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined IP address 192.168.50.180 and MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:13.270520   86402 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1104 12:08:13.274751   86402 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1104 12:08:13.290348   86402 kubeadm.go:883] updating cluster {Name:old-k8s-version-589257 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-589257 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.180 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1104 12:08:13.290483   86402 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1104 12:08:13.290547   86402 ssh_runner.go:195] Run: sudo crictl images --output json
	I1104 12:08:13.340338   86402 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1104 12:08:13.340426   86402 ssh_runner.go:195] Run: which lz4
	I1104 12:08:13.345147   86402 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1104 12:08:13.349792   86402 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1104 12:08:13.349872   86402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1104 12:08:14.842720   86402 crio.go:462] duration metric: took 1.497615031s to copy over tarball
	I1104 12:08:14.842791   86402 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1104 12:08:17.837381   86402 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.994557811s)
	I1104 12:08:17.837410   86402 crio.go:469] duration metric: took 2.994665886s to extract the tarball
	I1104 12:08:17.837420   86402 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1104 12:08:17.882418   86402 ssh_runner.go:195] Run: sudo crictl images --output json
	I1104 12:08:17.917035   86402 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1104 12:08:17.917064   86402 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1104 12:08:17.917195   86402 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1104 12:08:17.917277   86402 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1104 12:08:17.917169   86402 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1104 12:08:17.917164   86402 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1104 12:08:17.917150   86402 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1104 12:08:17.917277   86402 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1104 12:08:17.917283   86402 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1104 12:08:17.917254   86402 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1104 12:08:17.918929   86402 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1104 12:08:17.918943   86402 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1104 12:08:17.918929   86402 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1104 12:08:17.918929   86402 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1104 12:08:17.918930   86402 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1104 12:08:17.918930   86402 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1104 12:08:17.919014   86402 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1104 12:08:17.919025   86402 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1104 12:08:18.070119   86402 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1104 12:08:18.076604   86402 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1104 12:08:18.078712   86402 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1104 12:08:18.083777   86402 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1104 12:08:18.087827   86402 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1104 12:08:18.092838   86402 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1104 12:08:18.110359   86402 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1104 12:08:18.165523   86402 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1104 12:08:18.165569   86402 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1104 12:08:18.165617   86402 ssh_runner.go:195] Run: which crictl
	I1104 12:08:18.213723   86402 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1104 12:08:18.213784   86402 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1104 12:08:18.213833   86402 ssh_runner.go:195] Run: which crictl
	I1104 12:08:18.252171   86402 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1104 12:08:18.252221   86402 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1104 12:08:18.252270   86402 ssh_runner.go:195] Run: which crictl
	I1104 12:08:18.256482   86402 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1104 12:08:18.256522   86402 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1104 12:08:18.256567   86402 ssh_runner.go:195] Run: which crictl
	I1104 12:08:18.256606   86402 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1104 12:08:18.256564   86402 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1104 12:08:18.256631   86402 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1104 12:08:18.256632   86402 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1104 12:08:18.256632   86402 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1104 12:08:18.256690   86402 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1104 12:08:18.256657   86402 ssh_runner.go:195] Run: which crictl
	I1104 12:08:18.256703   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1104 12:08:18.256691   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1104 12:08:18.256738   86402 ssh_runner.go:195] Run: which crictl
	I1104 12:08:18.256658   86402 ssh_runner.go:195] Run: which crictl
	I1104 12:08:18.264837   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1104 12:08:18.265836   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1104 12:08:18.349896   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1104 12:08:18.349935   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1104 12:08:18.350014   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1104 12:08:18.350077   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1104 12:08:18.368533   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1104 12:08:18.371302   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1104 12:08:18.371393   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1104 12:08:18.496042   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1104 12:08:18.496121   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1104 12:08:18.509196   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1104 12:08:18.509339   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1104 12:08:18.509247   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1104 12:08:18.509348   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1104 12:08:18.513943   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1104 12:08:18.645867   86402 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1104 12:08:18.649173   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1104 12:08:18.649276   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1104 12:08:18.656159   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1104 12:08:18.656193   86402 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1104 12:08:18.660309   86402 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1104 12:08:18.660384   86402 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1104 12:08:18.719995   86402 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1104 12:08:18.720033   86402 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1104 12:08:18.728304   86402 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1104 12:08:18.867880   86402 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1104 12:08:19.009342   86402 cache_images.go:92] duration metric: took 1.092257593s to LoadCachedImages
	W1104 12:08:19.009448   86402 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	I1104 12:08:19.009469   86402 kubeadm.go:934] updating node { 192.168.50.180 8443 v1.20.0 crio true true} ...
	I1104 12:08:19.009590   86402 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-589257 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.180
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-589257 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1104 12:08:19.009671   86402 ssh_runner.go:195] Run: crio config
	I1104 12:08:19.054831   86402 cni.go:84] Creating CNI manager for ""
	I1104 12:08:19.054850   86402 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1104 12:08:19.054863   86402 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1104 12:08:19.054880   86402 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.180 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-589257 NodeName:old-k8s-version-589257 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.180"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.180 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1104 12:08:19.055049   86402 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.180
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-589257"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.180
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.180"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1104 12:08:19.055125   86402 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1104 12:08:19.065804   86402 binaries.go:44] Found k8s binaries, skipping transfer
	I1104 12:08:19.065888   86402 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1104 12:08:19.075491   86402 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1104 12:08:19.092371   86402 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1104 12:08:19.108896   86402 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
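(The kubeadm.yaml written above is later used to drive a phased restart of the control plane rather than a full `kubeadm init`. A minimal sketch of the equivalent manual invocation is below; the binary path is inferred from the binaries directory in this log, and the phases mirror the commands executed further down.)
	# Sketch of the phased kubeadm restart driven by the config written above.
	# Paths and version are taken from the surrounding log lines; the exact
	# kubeadm binary location is an assumption based on the binaries dir shown.
	KUBEADM=/var/lib/minikube/binaries/v1.20.0/kubeadm
	CFG=/var/tmp/minikube/kubeadm.yaml
	sudo "$KUBEADM" init phase certs all --config "$CFG"
	sudo "$KUBEADM" init phase kubeconfig all --config "$CFG"
	sudo "$KUBEADM" init phase kubelet-start --config "$CFG"
	sudo "$KUBEADM" init phase control-plane all --config "$CFG"
	sudo "$KUBEADM" init phase etcd local --config "$CFG"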
	I1104 12:08:19.127622   86402 ssh_runner.go:195] Run: grep 192.168.50.180	control-plane.minikube.internal$ /etc/hosts
	I1104 12:08:19.131597   86402 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.180	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1104 12:08:19.145142   86402 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1104 12:08:19.284780   86402 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1104 12:08:19.303843   86402 certs.go:68] Setting up /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/old-k8s-version-589257 for IP: 192.168.50.180
	I1104 12:08:19.303872   86402 certs.go:194] generating shared ca certs ...
	I1104 12:08:19.303894   86402 certs.go:226] acquiring lock for ca certs: {Name:mk4fb0469da6697654d0290fd25edddd463f47c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 12:08:19.304084   86402 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.key
	I1104 12:08:19.304148   86402 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.key
	I1104 12:08:19.304161   86402 certs.go:256] generating profile certs ...
	I1104 12:08:19.304280   86402 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/old-k8s-version-589257/client.key
	I1104 12:08:19.304347   86402 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/old-k8s-version-589257/apiserver.key.b78bafdb
	I1104 12:08:19.304401   86402 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/old-k8s-version-589257/proxy-client.key
	I1104 12:08:19.304549   86402 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218.pem (1338 bytes)
	W1104 12:08:19.304590   86402 certs.go:480] ignoring /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218_empty.pem, impossibly tiny 0 bytes
	I1104 12:08:19.304608   86402 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem (1675 bytes)
	I1104 12:08:19.304659   86402 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem (1078 bytes)
	I1104 12:08:19.304702   86402 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem (1123 bytes)
	I1104 12:08:19.304729   86402 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem (1679 bytes)
	I1104 12:08:19.304794   86402 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem (1708 bytes)
	I1104 12:08:19.305479   86402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1104 12:08:19.341333   86402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1104 12:08:19.375179   86402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1104 12:08:19.410128   86402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1104 12:08:19.452565   86402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/old-k8s-version-589257/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1104 12:08:19.493404   86402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/old-k8s-version-589257/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1104 12:08:19.521178   86402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/old-k8s-version-589257/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1104 12:08:19.550524   86402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/old-k8s-version-589257/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1104 12:08:19.574903   86402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1104 12:08:19.599308   86402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218.pem --> /usr/share/ca-certificates/27218.pem (1338 bytes)
	I1104 12:08:19.627107   86402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem --> /usr/share/ca-certificates/272182.pem (1708 bytes)
	I1104 12:08:19.657121   86402 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1104 12:08:19.679087   86402 ssh_runner.go:195] Run: openssl version
	I1104 12:08:19.687115   86402 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1104 12:08:19.702537   86402 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1104 12:08:19.707340   86402 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  4 10:38 /usr/share/ca-certificates/minikubeCA.pem
	I1104 12:08:19.707408   86402 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1104 12:08:19.714955   86402 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1104 12:08:19.727883   86402 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/27218.pem && ln -fs /usr/share/ca-certificates/27218.pem /etc/ssl/certs/27218.pem"
	I1104 12:08:19.739690   86402 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/27218.pem
	I1104 12:08:19.744600   86402 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  4 10:49 /usr/share/ca-certificates/27218.pem
	I1104 12:08:19.744656   86402 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/27218.pem
	I1104 12:08:19.750324   86402 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/27218.pem /etc/ssl/certs/51391683.0"
	I1104 12:08:19.760988   86402 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/272182.pem && ln -fs /usr/share/ca-certificates/272182.pem /etc/ssl/certs/272182.pem"
	I1104 12:08:19.772634   86402 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/272182.pem
	I1104 12:08:19.777504   86402 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  4 10:49 /usr/share/ca-certificates/272182.pem
	I1104 12:08:19.777580   86402 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/272182.pem
	I1104 12:08:19.783660   86402 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/272182.pem /etc/ssl/certs/3ec20f2e.0"
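(The three test/ln/openssl sequences above install each CA bundle under /etc/ssl/certs and add a symlink named after its OpenSSL subject hash, e.g. b5213941.0 for minikubeCA.pem. A minimal sketch of that pattern for one certificate, using only commands visible in the log:)
	# Sketch of the CA-install pattern above: link the cert into /etc/ssl/certs,
	# then add the subject-hash symlink that OpenSSL uses for CA lookups.
	CERT=/usr/share/ca-certificates/minikubeCA.pem
	sudo ln -fs "$CERT" /etc/ssl/certs/minikubeCA.pem
	HASH=$(openssl x509 -hash -noout -in "$CERT")   # prints e.g. b5213941
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/$HASH.0"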
	I1104 12:08:19.795483   86402 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1104 12:08:19.800327   86402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1104 12:08:19.806346   86402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1104 12:08:19.813920   86402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1104 12:08:19.820358   86402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1104 12:08:19.826359   86402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1104 12:08:19.832467   86402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1104 12:08:19.838902   86402 kubeadm.go:392] StartCluster: {Name:old-k8s-version-589257 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-589257 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.180 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1104 12:08:19.839018   86402 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1104 12:08:19.839075   86402 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1104 12:08:19.880407   86402 cri.go:89] found id: ""
	I1104 12:08:19.880486   86402 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1104 12:08:19.891135   86402 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1104 12:08:19.891156   86402 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1104 12:08:19.891219   86402 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1104 12:08:19.901437   86402 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1104 12:08:19.902325   86402 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-589257" does not appear in /home/jenkins/minikube-integration/19906-19898/kubeconfig
	I1104 12:08:19.902941   86402 kubeconfig.go:62] /home/jenkins/minikube-integration/19906-19898/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-589257" cluster setting kubeconfig missing "old-k8s-version-589257" context setting]
	I1104 12:08:19.903879   86402 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19906-19898/kubeconfig: {Name:mk164657c178e6abe086acb5c1cadb969cb2b0cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 12:08:19.937877   86402 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1104 12:08:19.948669   86402 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.180
	I1104 12:08:19.948701   86402 kubeadm.go:1160] stopping kube-system containers ...
	I1104 12:08:19.948711   86402 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1104 12:08:19.948752   86402 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1104 12:08:19.988249   86402 cri.go:89] found id: ""
	I1104 12:08:19.988344   86402 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1104 12:08:20.006949   86402 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1104 12:08:20.020677   86402 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1104 12:08:20.020700   86402 kubeadm.go:157] found existing configuration files:
	
	I1104 12:08:20.020747   86402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1104 12:08:20.031509   86402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1104 12:08:20.031566   86402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1104 12:08:20.042229   86402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1104 12:08:20.054695   86402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1104 12:08:20.054810   86402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1104 12:08:20.067410   86402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1104 12:08:20.078639   86402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1104 12:08:20.078711   86402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1104 12:08:20.091357   86402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1104 12:08:20.100986   86402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1104 12:08:20.101071   86402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1104 12:08:20.110345   86402 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1104 12:08:20.119778   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1104 12:08:20.281637   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1104 12:08:21.006838   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1104 12:08:21.234671   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1104 12:08:21.335720   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1104 12:08:21.437522   86402 api_server.go:52] waiting for apiserver process to appear ...
	I1104 12:08:21.437615   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:21.938086   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:22.438198   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:22.938624   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:23.438021   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:23.938119   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:24.438470   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:24.937687   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:25.438045   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:25.937696   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:26.438585   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:26.937831   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:27.438442   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:27.938240   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:28.438463   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:28.937958   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:29.437676   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:29.938298   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:30.438423   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:30.937953   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:31.438075   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:31.938577   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:32.438561   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:32.938188   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:33.437856   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:33.938433   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:34.438381   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:34.938164   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:35.438120   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:35.937802   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:36.438365   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:36.938295   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:37.437646   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:37.937807   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:38.438623   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:38.938662   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:39.438288   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:39.938048   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:40.438404   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:40.938494   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:41.437875   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:41.938001   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:42.438702   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:42.938239   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:43.438469   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:43.938465   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:44.437744   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:44.938478   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:45.437757   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:45.938035   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:46.438173   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:46.938016   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:47.438229   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:47.938447   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:48.437950   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:48.938450   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:49.437785   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:49.938444   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:50.438413   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:50.938514   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:51.438658   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:51.938323   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:52.438464   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:52.937754   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:53.438442   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:53.938586   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:54.438288   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:54.938444   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:55.438391   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:55.938546   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:56.438433   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:56.938312   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:57.437920   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:57.937779   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:58.438511   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:58.938464   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:59.438423   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:59.938450   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:00.438108   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:00.938053   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:01.438356   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:01.938447   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:02.438441   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:02.938694   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:03.438467   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:03.938445   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:04.438137   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:04.937941   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:05.438441   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:05.937760   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:06.438704   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:06.937956   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:07.438323   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:07.938465   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:08.438437   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:08.937675   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:09.437868   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:09.938053   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:10.438467   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:10.938703   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:11.438436   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:11.938465   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:12.437963   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:12.938515   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:13.437754   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:13.937856   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:14.438729   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:14.938439   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:15.438421   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:15.938044   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:16.438456   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:16.937807   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:17.438266   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:17.938153   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:18.437829   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:18.938469   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:19.438336   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:19.938284   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:20.438073   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:20.937894   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
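(The block of pgrep runs above is the api_server.go wait loop: after kubelet-start, minikube polls for a kube-apiserver process roughly twice per second until it appears or the wait window, about a minute here, expires. No process ever shows up in this run, so it falls through to the container listing and log gathering below. A minimal shell sketch of that loop, with the timeout value chosen only for illustration:)
	# Sketch of the apiserver wait loop logged above; the pgrep pattern is
	# copied from the log, and the 60s timeout is an illustrative assumption.
	deadline=$((SECONDS + 60))
	until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	  (( SECONDS >= deadline )) && { echo "kube-apiserver did not appear" >&2; break; }
	  sleep 0.5
	done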
	I1104 12:09:21.438135   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:09:21.438238   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:09:21.471463   86402 cri.go:89] found id: ""
	I1104 12:09:21.471495   86402 logs.go:282] 0 containers: []
	W1104 12:09:21.471507   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:09:21.471515   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:09:21.471568   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:09:21.509336   86402 cri.go:89] found id: ""
	I1104 12:09:21.509363   86402 logs.go:282] 0 containers: []
	W1104 12:09:21.509373   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:09:21.509381   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:09:21.509441   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:09:21.545963   86402 cri.go:89] found id: ""
	I1104 12:09:21.545987   86402 logs.go:282] 0 containers: []
	W1104 12:09:21.545995   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:09:21.546000   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:09:21.546059   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:09:21.580707   86402 cri.go:89] found id: ""
	I1104 12:09:21.580737   86402 logs.go:282] 0 containers: []
	W1104 12:09:21.580748   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:09:21.580755   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:09:21.580820   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:09:21.613763   86402 cri.go:89] found id: ""
	I1104 12:09:21.613791   86402 logs.go:282] 0 containers: []
	W1104 12:09:21.613801   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:09:21.613809   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:09:21.613872   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:09:21.646559   86402 cri.go:89] found id: ""
	I1104 12:09:21.646583   86402 logs.go:282] 0 containers: []
	W1104 12:09:21.646591   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:09:21.646597   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:09:21.646643   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:09:21.681439   86402 cri.go:89] found id: ""
	I1104 12:09:21.681467   86402 logs.go:282] 0 containers: []
	W1104 12:09:21.681479   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:09:21.681486   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:09:21.681554   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:09:21.713875   86402 cri.go:89] found id: ""
	I1104 12:09:21.713899   86402 logs.go:282] 0 containers: []
	W1104 12:09:21.713907   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:09:21.713915   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:09:21.713925   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:09:21.763882   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:09:21.763918   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:09:21.778590   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:09:21.778615   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:09:21.892208   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:09:21.892235   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:09:21.892250   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:09:21.965946   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:09:21.965984   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:09:24.502992   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:24.514899   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:09:24.514960   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:09:24.554466   86402 cri.go:89] found id: ""
	I1104 12:09:24.554491   86402 logs.go:282] 0 containers: []
	W1104 12:09:24.554501   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:09:24.554510   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:09:24.554567   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:09:24.591532   86402 cri.go:89] found id: ""
	I1104 12:09:24.591560   86402 logs.go:282] 0 containers: []
	W1104 12:09:24.591572   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:09:24.591580   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:09:24.591638   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:09:24.625436   86402 cri.go:89] found id: ""
	I1104 12:09:24.625467   86402 logs.go:282] 0 containers: []
	W1104 12:09:24.625478   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:09:24.625485   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:09:24.625544   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:09:24.658317   86402 cri.go:89] found id: ""
	I1104 12:09:24.658346   86402 logs.go:282] 0 containers: []
	W1104 12:09:24.658357   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:09:24.658364   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:09:24.658426   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:09:24.692811   86402 cri.go:89] found id: ""
	I1104 12:09:24.692839   86402 logs.go:282] 0 containers: []
	W1104 12:09:24.692850   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:09:24.692857   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:09:24.692917   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:09:24.729677   86402 cri.go:89] found id: ""
	I1104 12:09:24.729708   86402 logs.go:282] 0 containers: []
	W1104 12:09:24.729719   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:09:24.729726   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:09:24.729773   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:09:24.768575   86402 cri.go:89] found id: ""
	I1104 12:09:24.768598   86402 logs.go:282] 0 containers: []
	W1104 12:09:24.768608   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:09:24.768615   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:09:24.768681   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:09:24.802344   86402 cri.go:89] found id: ""
	I1104 12:09:24.802368   86402 logs.go:282] 0 containers: []
	W1104 12:09:24.802375   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:09:24.802383   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:09:24.802394   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:09:24.855882   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:09:24.855915   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:09:24.869199   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:09:24.869243   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:09:24.940720   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:09:24.940744   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:09:24.940758   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:09:25.016139   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:09:25.016177   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:09:27.553297   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:27.566857   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:09:27.566913   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:09:27.599606   86402 cri.go:89] found id: ""
	I1104 12:09:27.599641   86402 logs.go:282] 0 containers: []
	W1104 12:09:27.599653   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:09:27.599661   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:09:27.599721   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:09:27.633818   86402 cri.go:89] found id: ""
	I1104 12:09:27.633841   86402 logs.go:282] 0 containers: []
	W1104 12:09:27.633849   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:09:27.633854   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:09:27.633907   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:09:27.668088   86402 cri.go:89] found id: ""
	I1104 12:09:27.668120   86402 logs.go:282] 0 containers: []
	W1104 12:09:27.668129   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:09:27.668135   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:09:27.668185   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:09:27.699401   86402 cri.go:89] found id: ""
	I1104 12:09:27.699433   86402 logs.go:282] 0 containers: []
	W1104 12:09:27.699445   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:09:27.699453   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:09:27.699511   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:09:27.731422   86402 cri.go:89] found id: ""
	I1104 12:09:27.731448   86402 logs.go:282] 0 containers: []
	W1104 12:09:27.731459   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:09:27.731466   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:09:27.731528   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:09:27.762808   86402 cri.go:89] found id: ""
	I1104 12:09:27.762839   86402 logs.go:282] 0 containers: []
	W1104 12:09:27.762850   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:09:27.762857   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:09:27.762917   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:09:27.794729   86402 cri.go:89] found id: ""
	I1104 12:09:27.794757   86402 logs.go:282] 0 containers: []
	W1104 12:09:27.794765   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:09:27.794771   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:09:27.794826   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:09:27.825694   86402 cri.go:89] found id: ""
	I1104 12:09:27.825716   86402 logs.go:282] 0 containers: []
	W1104 12:09:27.825724   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:09:27.825731   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:09:27.825742   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:09:27.862111   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:09:27.862140   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:09:27.911169   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:09:27.911204   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:09:27.924207   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:09:27.924232   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:09:27.995123   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:09:27.995153   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:09:27.995167   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:09:30.580831   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:30.594901   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:09:30.594959   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:09:30.630936   86402 cri.go:89] found id: ""
	I1104 12:09:30.630961   86402 logs.go:282] 0 containers: []
	W1104 12:09:30.630971   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:09:30.630979   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:09:30.631034   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:09:30.669288   86402 cri.go:89] found id: ""
	I1104 12:09:30.669311   86402 logs.go:282] 0 containers: []
	W1104 12:09:30.669320   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:09:30.669328   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:09:30.669388   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:09:30.706288   86402 cri.go:89] found id: ""
	I1104 12:09:30.706312   86402 logs.go:282] 0 containers: []
	W1104 12:09:30.706319   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:09:30.706325   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:09:30.706384   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:09:30.739027   86402 cri.go:89] found id: ""
	I1104 12:09:30.739057   86402 logs.go:282] 0 containers: []
	W1104 12:09:30.739069   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:09:30.739078   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:09:30.739137   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:09:30.772247   86402 cri.go:89] found id: ""
	I1104 12:09:30.772272   86402 logs.go:282] 0 containers: []
	W1104 12:09:30.772280   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:09:30.772286   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:09:30.772338   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:09:30.810327   86402 cri.go:89] found id: ""
	I1104 12:09:30.810360   86402 logs.go:282] 0 containers: []
	W1104 12:09:30.810370   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:09:30.810375   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:09:30.810426   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:09:30.842241   86402 cri.go:89] found id: ""
	I1104 12:09:30.842271   86402 logs.go:282] 0 containers: []
	W1104 12:09:30.842279   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:09:30.842285   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:09:30.842332   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:09:30.877003   86402 cri.go:89] found id: ""
	I1104 12:09:30.877032   86402 logs.go:282] 0 containers: []
	W1104 12:09:30.877043   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:09:30.877052   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:09:30.877077   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:09:30.925783   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:09:30.925816   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:09:30.939651   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:09:30.939680   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:09:31.029176   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:09:31.029210   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:09:31.029244   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:09:31.116311   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:09:31.116348   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:09:33.653267   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:33.665813   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:09:33.665878   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:09:33.701812   86402 cri.go:89] found id: ""
	I1104 12:09:33.701839   86402 logs.go:282] 0 containers: []
	W1104 12:09:33.701852   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:09:33.701860   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:09:33.701922   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:09:33.738816   86402 cri.go:89] found id: ""
	I1104 12:09:33.738850   86402 logs.go:282] 0 containers: []
	W1104 12:09:33.738861   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:09:33.738868   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:09:33.738928   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:09:33.773936   86402 cri.go:89] found id: ""
	I1104 12:09:33.773960   86402 logs.go:282] 0 containers: []
	W1104 12:09:33.773968   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:09:33.773976   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:09:33.774031   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:09:33.808049   86402 cri.go:89] found id: ""
	I1104 12:09:33.808079   86402 logs.go:282] 0 containers: []
	W1104 12:09:33.808091   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:09:33.808098   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:09:33.808154   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:09:33.844276   86402 cri.go:89] found id: ""
	I1104 12:09:33.844303   86402 logs.go:282] 0 containers: []
	W1104 12:09:33.844314   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:09:33.844322   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:09:33.844443   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:09:33.879736   86402 cri.go:89] found id: ""
	I1104 12:09:33.879772   86402 logs.go:282] 0 containers: []
	W1104 12:09:33.879782   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:09:33.879788   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:09:33.879843   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:09:33.913717   86402 cri.go:89] found id: ""
	I1104 12:09:33.913750   86402 logs.go:282] 0 containers: []
	W1104 12:09:33.913761   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:09:33.913769   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:09:33.913832   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:09:33.949632   86402 cri.go:89] found id: ""
	I1104 12:09:33.949658   86402 logs.go:282] 0 containers: []
	W1104 12:09:33.949667   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:09:33.949677   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:09:33.949691   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:09:34.019770   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:09:34.019790   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:09:34.019806   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:09:34.101493   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:09:34.101524   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:09:34.146723   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:09:34.146751   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:09:34.196295   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:09:34.196338   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:09:36.709951   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:36.724723   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:09:36.724782   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:09:36.777406   86402 cri.go:89] found id: ""
	I1104 12:09:36.777440   86402 logs.go:282] 0 containers: []
	W1104 12:09:36.777451   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:09:36.777459   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:09:36.777520   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:09:36.834486   86402 cri.go:89] found id: ""
	I1104 12:09:36.834516   86402 logs.go:282] 0 containers: []
	W1104 12:09:36.834527   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:09:36.834535   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:09:36.834641   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:09:36.868828   86402 cri.go:89] found id: ""
	I1104 12:09:36.868853   86402 logs.go:282] 0 containers: []
	W1104 12:09:36.868861   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:09:36.868867   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:09:36.868912   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:09:36.900942   86402 cri.go:89] found id: ""
	I1104 12:09:36.900972   86402 logs.go:282] 0 containers: []
	W1104 12:09:36.900980   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:09:36.900986   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:09:36.901043   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:09:36.933215   86402 cri.go:89] found id: ""
	I1104 12:09:36.933265   86402 logs.go:282] 0 containers: []
	W1104 12:09:36.933276   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:09:36.933282   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:09:36.933330   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:09:36.966753   86402 cri.go:89] found id: ""
	I1104 12:09:36.966776   86402 logs.go:282] 0 containers: []
	W1104 12:09:36.966784   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:09:36.966789   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:09:36.966850   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:09:37.000050   86402 cri.go:89] found id: ""
	I1104 12:09:37.000074   86402 logs.go:282] 0 containers: []
	W1104 12:09:37.000082   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:09:37.000087   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:09:37.000144   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:09:37.033252   86402 cri.go:89] found id: ""
	I1104 12:09:37.033283   86402 logs.go:282] 0 containers: []
	W1104 12:09:37.033295   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:09:37.033305   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:09:37.033328   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:09:37.085351   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:09:37.085383   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:09:37.098556   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:09:37.098582   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:09:37.167489   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:09:37.167512   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:09:37.167525   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:09:37.243292   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:09:37.243325   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:09:39.781468   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:39.795630   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:09:39.795756   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:09:39.833745   86402 cri.go:89] found id: ""
	I1104 12:09:39.833779   86402 logs.go:282] 0 containers: []
	W1104 12:09:39.833791   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:09:39.833798   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:09:39.833862   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:09:39.870075   86402 cri.go:89] found id: ""
	I1104 12:09:39.870096   86402 logs.go:282] 0 containers: []
	W1104 12:09:39.870106   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:09:39.870119   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:09:39.870173   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:09:39.905807   86402 cri.go:89] found id: ""
	I1104 12:09:39.905836   86402 logs.go:282] 0 containers: []
	W1104 12:09:39.905846   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:09:39.905854   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:09:39.905916   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:09:39.941890   86402 cri.go:89] found id: ""
	I1104 12:09:39.941914   86402 logs.go:282] 0 containers: []
	W1104 12:09:39.941922   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:09:39.941932   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:09:39.941978   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:09:39.979123   86402 cri.go:89] found id: ""
	I1104 12:09:39.979150   86402 logs.go:282] 0 containers: []
	W1104 12:09:39.979159   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:09:39.979165   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:09:39.979220   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:09:40.014748   86402 cri.go:89] found id: ""
	I1104 12:09:40.014777   86402 logs.go:282] 0 containers: []
	W1104 12:09:40.014785   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:09:40.014791   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:09:40.014882   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:09:40.049977   86402 cri.go:89] found id: ""
	I1104 12:09:40.050004   86402 logs.go:282] 0 containers: []
	W1104 12:09:40.050014   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:09:40.050021   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:09:40.050100   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:09:40.085630   86402 cri.go:89] found id: ""
	I1104 12:09:40.085663   86402 logs.go:282] 0 containers: []
	W1104 12:09:40.085674   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:09:40.085685   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:09:40.085701   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:09:40.166611   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:09:40.166650   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:09:40.203117   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:09:40.203155   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:09:40.256233   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:09:40.256267   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:09:40.270009   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:09:40.270042   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:09:40.338672   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:09:42.839402   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:42.852881   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:09:42.852947   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:09:42.884587   86402 cri.go:89] found id: ""
	I1104 12:09:42.884614   86402 logs.go:282] 0 containers: []
	W1104 12:09:42.884624   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:09:42.884631   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:09:42.884690   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:09:42.915286   86402 cri.go:89] found id: ""
	I1104 12:09:42.915316   86402 logs.go:282] 0 containers: []
	W1104 12:09:42.915327   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:09:42.915337   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:09:42.915399   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:09:42.945827   86402 cri.go:89] found id: ""
	I1104 12:09:42.945857   86402 logs.go:282] 0 containers: []
	W1104 12:09:42.945868   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:09:42.945875   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:09:42.945934   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:09:42.982662   86402 cri.go:89] found id: ""
	I1104 12:09:42.982693   86402 logs.go:282] 0 containers: []
	W1104 12:09:42.982703   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:09:42.982712   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:09:42.982788   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:09:43.015337   86402 cri.go:89] found id: ""
	I1104 12:09:43.015371   86402 logs.go:282] 0 containers: []
	W1104 12:09:43.015382   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:09:43.015390   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:09:43.015453   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:09:43.048235   86402 cri.go:89] found id: ""
	I1104 12:09:43.048262   86402 logs.go:282] 0 containers: []
	W1104 12:09:43.048270   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:09:43.048276   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:09:43.048351   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:09:43.080636   86402 cri.go:89] found id: ""
	I1104 12:09:43.080668   86402 logs.go:282] 0 containers: []
	W1104 12:09:43.080679   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:09:43.080687   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:09:43.080746   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:09:43.113986   86402 cri.go:89] found id: ""
	I1104 12:09:43.114011   86402 logs.go:282] 0 containers: []
	W1104 12:09:43.114019   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:09:43.114027   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:09:43.114038   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:09:43.165356   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:09:43.165390   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:09:43.179167   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:09:43.179200   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:09:43.250054   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:09:43.250083   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:09:43.250098   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:09:43.328970   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:09:43.329002   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:09:45.869879   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:45.883262   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:09:45.883359   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:09:45.921978   86402 cri.go:89] found id: ""
	I1104 12:09:45.922003   86402 logs.go:282] 0 containers: []
	W1104 12:09:45.922011   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:09:45.922016   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:09:45.922076   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:09:45.954668   86402 cri.go:89] found id: ""
	I1104 12:09:45.954697   86402 logs.go:282] 0 containers: []
	W1104 12:09:45.954710   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:09:45.954717   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:09:45.954787   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:09:45.987793   86402 cri.go:89] found id: ""
	I1104 12:09:45.987826   86402 logs.go:282] 0 containers: []
	W1104 12:09:45.987837   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:09:45.987845   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:09:45.987906   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:09:46.028517   86402 cri.go:89] found id: ""
	I1104 12:09:46.028550   86402 logs.go:282] 0 containers: []
	W1104 12:09:46.028558   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:09:46.028563   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:09:46.028621   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:09:46.063832   86402 cri.go:89] found id: ""
	I1104 12:09:46.063859   86402 logs.go:282] 0 containers: []
	W1104 12:09:46.063870   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:09:46.063878   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:09:46.063942   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:09:46.099981   86402 cri.go:89] found id: ""
	I1104 12:09:46.100011   86402 logs.go:282] 0 containers: []
	W1104 12:09:46.100027   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:09:46.100036   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:09:46.100169   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:09:46.133060   86402 cri.go:89] found id: ""
	I1104 12:09:46.133083   86402 logs.go:282] 0 containers: []
	W1104 12:09:46.133092   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:09:46.133099   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:09:46.133165   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:09:46.170559   86402 cri.go:89] found id: ""
	I1104 12:09:46.170583   86402 logs.go:282] 0 containers: []
	W1104 12:09:46.170591   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:09:46.170599   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:09:46.170610   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:09:46.253202   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:09:46.253253   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:09:46.288468   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:09:46.288498   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:09:46.339322   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:09:46.339354   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:09:46.353020   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:09:46.353049   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:09:46.420328   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:09:48.920709   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:48.933443   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:09:48.933507   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:09:48.964736   86402 cri.go:89] found id: ""
	I1104 12:09:48.964759   86402 logs.go:282] 0 containers: []
	W1104 12:09:48.964770   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:09:48.964777   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:09:48.964837   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:09:48.996646   86402 cri.go:89] found id: ""
	I1104 12:09:48.996670   86402 logs.go:282] 0 containers: []
	W1104 12:09:48.996679   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:09:48.996684   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:09:48.996734   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:09:49.028899   86402 cri.go:89] found id: ""
	I1104 12:09:49.028942   86402 logs.go:282] 0 containers: []
	W1104 12:09:49.028951   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:09:49.028957   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:09:49.029015   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:09:49.065032   86402 cri.go:89] found id: ""
	I1104 12:09:49.065056   86402 logs.go:282] 0 containers: []
	W1104 12:09:49.065064   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:09:49.065075   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:09:49.065120   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:09:49.097159   86402 cri.go:89] found id: ""
	I1104 12:09:49.097183   86402 logs.go:282] 0 containers: []
	W1104 12:09:49.097191   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:09:49.097196   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:09:49.097269   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:09:49.131578   86402 cri.go:89] found id: ""
	I1104 12:09:49.131608   86402 logs.go:282] 0 containers: []
	W1104 12:09:49.131619   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:09:49.131626   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:09:49.131684   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:09:49.164307   86402 cri.go:89] found id: ""
	I1104 12:09:49.164339   86402 logs.go:282] 0 containers: []
	W1104 12:09:49.164358   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:09:49.164367   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:09:49.164430   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:09:49.197171   86402 cri.go:89] found id: ""
	I1104 12:09:49.197199   86402 logs.go:282] 0 containers: []
	W1104 12:09:49.197210   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:09:49.197220   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:09:49.197251   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:09:49.210327   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:09:49.210355   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:09:49.280226   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:09:49.280251   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:09:49.280262   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:09:49.367655   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:09:49.367691   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:09:49.408424   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:09:49.408452   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:09:51.958148   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:51.970451   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:09:51.970521   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:09:52.000896   86402 cri.go:89] found id: ""
	I1104 12:09:52.000929   86402 logs.go:282] 0 containers: []
	W1104 12:09:52.000940   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:09:52.000948   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:09:52.001023   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:09:52.034122   86402 cri.go:89] found id: ""
	I1104 12:09:52.034150   86402 logs.go:282] 0 containers: []
	W1104 12:09:52.034161   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:09:52.034168   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:09:52.034227   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:09:52.070834   86402 cri.go:89] found id: ""
	I1104 12:09:52.070872   86402 logs.go:282] 0 containers: []
	W1104 12:09:52.070884   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:09:52.070891   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:09:52.070950   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:09:52.103730   86402 cri.go:89] found id: ""
	I1104 12:09:52.103758   86402 logs.go:282] 0 containers: []
	W1104 12:09:52.103766   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:09:52.103772   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:09:52.103832   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:09:52.135980   86402 cri.go:89] found id: ""
	I1104 12:09:52.136006   86402 logs.go:282] 0 containers: []
	W1104 12:09:52.136014   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:09:52.136020   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:09:52.136081   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:09:52.168903   86402 cri.go:89] found id: ""
	I1104 12:09:52.168928   86402 logs.go:282] 0 containers: []
	W1104 12:09:52.168936   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:09:52.168942   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:09:52.169001   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:09:52.199499   86402 cri.go:89] found id: ""
	I1104 12:09:52.199529   86402 logs.go:282] 0 containers: []
	W1104 12:09:52.199539   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:09:52.199546   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:09:52.199610   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:09:52.232566   86402 cri.go:89] found id: ""
	I1104 12:09:52.232603   86402 logs.go:282] 0 containers: []
	W1104 12:09:52.232615   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:09:52.232626   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:09:52.232640   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:09:52.282140   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:09:52.282180   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:09:52.295079   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:09:52.295110   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:09:52.364061   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:09:52.364087   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:09:52.364102   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:09:52.437868   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:09:52.437901   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:09:54.978182   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:54.991002   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:09:54.991068   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:09:55.023628   86402 cri.go:89] found id: ""
	I1104 12:09:55.023656   86402 logs.go:282] 0 containers: []
	W1104 12:09:55.023663   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:09:55.023669   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:09:55.023715   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:09:55.058524   86402 cri.go:89] found id: ""
	I1104 12:09:55.058548   86402 logs.go:282] 0 containers: []
	W1104 12:09:55.058557   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:09:55.058564   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:09:55.058634   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:09:55.095730   86402 cri.go:89] found id: ""
	I1104 12:09:55.095760   86402 logs.go:282] 0 containers: []
	W1104 12:09:55.095772   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:09:55.095779   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:09:55.095837   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:09:55.128341   86402 cri.go:89] found id: ""
	I1104 12:09:55.128365   86402 logs.go:282] 0 containers: []
	W1104 12:09:55.128373   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:09:55.128379   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:09:55.128438   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:09:55.160655   86402 cri.go:89] found id: ""
	I1104 12:09:55.160681   86402 logs.go:282] 0 containers: []
	W1104 12:09:55.160693   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:09:55.160700   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:09:55.160754   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:09:55.194050   86402 cri.go:89] found id: ""
	I1104 12:09:55.194077   86402 logs.go:282] 0 containers: []
	W1104 12:09:55.194086   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:09:55.194091   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:09:55.194138   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:09:55.227655   86402 cri.go:89] found id: ""
	I1104 12:09:55.227694   86402 logs.go:282] 0 containers: []
	W1104 12:09:55.227705   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:09:55.227712   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:09:55.227810   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:09:55.261106   86402 cri.go:89] found id: ""
	I1104 12:09:55.261137   86402 logs.go:282] 0 containers: []
	W1104 12:09:55.261147   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:09:55.261157   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:09:55.261171   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:09:55.335577   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:09:55.335598   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:09:55.335610   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:09:55.421339   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:09:55.421375   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:09:55.459936   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:09:55.459967   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:09:55.509346   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:09:55.509382   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:09:58.023608   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:58.036540   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:09:58.036599   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:09:58.075104   86402 cri.go:89] found id: ""
	I1104 12:09:58.075182   86402 logs.go:282] 0 containers: []
	W1104 12:09:58.075198   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:09:58.075207   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:09:58.075271   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:09:58.109910   86402 cri.go:89] found id: ""
	I1104 12:09:58.109949   86402 logs.go:282] 0 containers: []
	W1104 12:09:58.109961   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:09:58.109968   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:09:58.110038   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:09:58.142829   86402 cri.go:89] found id: ""
	I1104 12:09:58.142854   86402 logs.go:282] 0 containers: []
	W1104 12:09:58.142865   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:09:58.142873   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:09:58.142924   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:09:58.178125   86402 cri.go:89] found id: ""
	I1104 12:09:58.178153   86402 logs.go:282] 0 containers: []
	W1104 12:09:58.178161   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:09:58.178168   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:09:58.178239   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:09:58.214117   86402 cri.go:89] found id: ""
	I1104 12:09:58.214146   86402 logs.go:282] 0 containers: []
	W1104 12:09:58.214156   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:09:58.214162   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:09:58.214213   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:09:58.244728   86402 cri.go:89] found id: ""
	I1104 12:09:58.244751   86402 logs.go:282] 0 containers: []
	W1104 12:09:58.244759   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:09:58.244765   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:09:58.244809   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:09:58.275542   86402 cri.go:89] found id: ""
	I1104 12:09:58.275568   86402 logs.go:282] 0 containers: []
	W1104 12:09:58.275576   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:09:58.275582   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:09:58.275630   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:09:58.314909   86402 cri.go:89] found id: ""
	I1104 12:09:58.314935   86402 logs.go:282] 0 containers: []
	W1104 12:09:58.314943   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:09:58.314952   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:09:58.314962   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:09:58.364361   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:09:58.364390   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:09:58.378483   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:09:58.378517   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:09:58.442012   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:09:58.442033   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:09:58.442045   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:09:58.517260   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:09:58.517298   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:10:01.057203   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:10:01.069937   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:10:01.070008   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:10:01.101672   86402 cri.go:89] found id: ""
	I1104 12:10:01.101698   86402 logs.go:282] 0 containers: []
	W1104 12:10:01.101709   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:10:01.101716   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:10:01.101779   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:10:01.134672   86402 cri.go:89] found id: ""
	I1104 12:10:01.134701   86402 logs.go:282] 0 containers: []
	W1104 12:10:01.134712   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:10:01.134719   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:10:01.134789   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:10:01.167784   86402 cri.go:89] found id: ""
	I1104 12:10:01.167833   86402 logs.go:282] 0 containers: []
	W1104 12:10:01.167845   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:10:01.167853   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:10:01.167945   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:10:01.201218   86402 cri.go:89] found id: ""
	I1104 12:10:01.201260   86402 logs.go:282] 0 containers: []
	W1104 12:10:01.201271   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:10:01.201281   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:10:01.201338   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:10:01.234964   86402 cri.go:89] found id: ""
	I1104 12:10:01.234991   86402 logs.go:282] 0 containers: []
	W1104 12:10:01.235000   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:10:01.235007   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:10:01.235069   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:10:01.267809   86402 cri.go:89] found id: ""
	I1104 12:10:01.267848   86402 logs.go:282] 0 containers: []
	W1104 12:10:01.267881   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:10:01.267890   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:10:01.267942   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:10:01.303567   86402 cri.go:89] found id: ""
	I1104 12:10:01.303590   86402 logs.go:282] 0 containers: []
	W1104 12:10:01.303598   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:10:01.303604   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:10:01.303648   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:10:01.342059   86402 cri.go:89] found id: ""
	I1104 12:10:01.342088   86402 logs.go:282] 0 containers: []
	W1104 12:10:01.342099   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:10:01.342109   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:10:01.342142   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:10:01.354845   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:10:01.354867   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:10:01.423426   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:10:01.423447   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:10:01.423459   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:10:01.498979   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:10:01.499018   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:10:01.537658   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:10:01.537691   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:10:04.088653   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:10:04.103506   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:10:04.103576   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:10:04.137574   86402 cri.go:89] found id: ""
	I1104 12:10:04.137602   86402 logs.go:282] 0 containers: []
	W1104 12:10:04.137612   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:10:04.137620   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:10:04.137684   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:10:04.177624   86402 cri.go:89] found id: ""
	I1104 12:10:04.177662   86402 logs.go:282] 0 containers: []
	W1104 12:10:04.177673   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:10:04.177681   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:10:04.177750   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:10:04.213829   86402 cri.go:89] found id: ""
	I1104 12:10:04.213850   86402 logs.go:282] 0 containers: []
	W1104 12:10:04.213862   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:10:04.213870   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:10:04.213929   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:10:04.251112   86402 cri.go:89] found id: ""
	I1104 12:10:04.251143   86402 logs.go:282] 0 containers: []
	W1104 12:10:04.251154   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:10:04.251162   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:10:04.251227   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:10:04.286005   86402 cri.go:89] found id: ""
	I1104 12:10:04.286036   86402 logs.go:282] 0 containers: []
	W1104 12:10:04.286046   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:10:04.286053   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:10:04.286118   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:10:04.317628   86402 cri.go:89] found id: ""
	I1104 12:10:04.317656   86402 logs.go:282] 0 containers: []
	W1104 12:10:04.317667   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:10:04.317674   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:10:04.317742   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:10:04.351663   86402 cri.go:89] found id: ""
	I1104 12:10:04.351687   86402 logs.go:282] 0 containers: []
	W1104 12:10:04.351695   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:10:04.351700   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:10:04.351755   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:10:04.385818   86402 cri.go:89] found id: ""
	I1104 12:10:04.385842   86402 logs.go:282] 0 containers: []
	W1104 12:10:04.385850   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:10:04.385858   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:10:04.385880   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:10:04.467141   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:10:04.467179   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:10:04.503669   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:10:04.503700   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:10:04.557237   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:10:04.557303   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:10:04.570484   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:10:04.570520   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:10:04.635099   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:10:07.135741   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:10:07.148039   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:10:07.148132   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:10:07.185171   86402 cri.go:89] found id: ""
	I1104 12:10:07.185196   86402 logs.go:282] 0 containers: []
	W1104 12:10:07.185205   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:10:07.185211   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:10:07.185280   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:10:07.217097   86402 cri.go:89] found id: ""
	I1104 12:10:07.217126   86402 logs.go:282] 0 containers: []
	W1104 12:10:07.217137   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:10:07.217144   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:10:07.217204   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:10:07.250079   86402 cri.go:89] found id: ""
	I1104 12:10:07.250108   86402 logs.go:282] 0 containers: []
	W1104 12:10:07.250116   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:10:07.250121   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:10:07.250169   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:10:07.283423   86402 cri.go:89] found id: ""
	I1104 12:10:07.283463   86402 logs.go:282] 0 containers: []
	W1104 12:10:07.283475   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:10:07.283482   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:10:07.283554   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:10:07.316461   86402 cri.go:89] found id: ""
	I1104 12:10:07.316490   86402 logs.go:282] 0 containers: []
	W1104 12:10:07.316507   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:10:07.316513   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:10:07.316569   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:10:07.361981   86402 cri.go:89] found id: ""
	I1104 12:10:07.362010   86402 logs.go:282] 0 containers: []
	W1104 12:10:07.362018   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:10:07.362024   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:10:07.362087   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:10:07.397834   86402 cri.go:89] found id: ""
	I1104 12:10:07.397867   86402 logs.go:282] 0 containers: []
	W1104 12:10:07.397878   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:10:07.397886   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:10:07.397948   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:10:07.429379   86402 cri.go:89] found id: ""
	I1104 12:10:07.429407   86402 logs.go:282] 0 containers: []
	W1104 12:10:07.429416   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:10:07.429425   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:10:07.429438   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:10:07.495294   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:10:07.495322   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:10:07.495334   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:10:07.578504   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:10:07.578546   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:10:07.617172   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:10:07.617201   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:10:07.667168   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:10:07.667204   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:10:10.181802   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:10:10.196017   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:10:10.196084   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:10:10.228243   86402 cri.go:89] found id: ""
	I1104 12:10:10.228272   86402 logs.go:282] 0 containers: []
	W1104 12:10:10.228282   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:10:10.228289   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:10:10.228347   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:10:10.262110   86402 cri.go:89] found id: ""
	I1104 12:10:10.262143   86402 logs.go:282] 0 containers: []
	W1104 12:10:10.262152   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:10:10.262161   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:10:10.262218   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:10:10.297776   86402 cri.go:89] found id: ""
	I1104 12:10:10.297812   86402 logs.go:282] 0 containers: []
	W1104 12:10:10.297823   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:10:10.297830   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:10:10.297877   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:10:10.332645   86402 cri.go:89] found id: ""
	I1104 12:10:10.332672   86402 logs.go:282] 0 containers: []
	W1104 12:10:10.332680   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:10:10.332685   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:10:10.332730   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:10:10.366703   86402 cri.go:89] found id: ""
	I1104 12:10:10.366735   86402 logs.go:282] 0 containers: []
	W1104 12:10:10.366746   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:10:10.366754   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:10:10.366809   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:10:10.399500   86402 cri.go:89] found id: ""
	I1104 12:10:10.399526   86402 logs.go:282] 0 containers: []
	W1104 12:10:10.399534   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:10:10.399539   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:10:10.399634   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:10:10.434898   86402 cri.go:89] found id: ""
	I1104 12:10:10.434932   86402 logs.go:282] 0 containers: []
	W1104 12:10:10.434943   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:10:10.434951   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:10:10.435022   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:10:10.472159   86402 cri.go:89] found id: ""
	I1104 12:10:10.472189   86402 logs.go:282] 0 containers: []
	W1104 12:10:10.472201   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:10:10.472225   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:10:10.472246   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:10:10.528710   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:10:10.528769   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:10:10.541943   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:10:10.541973   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:10:10.621819   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:10:10.621843   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:10:10.621855   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:10:10.698301   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:10:10.698335   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:10:13.235151   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:10:13.247511   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:10:13.247585   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:10:13.278546   86402 cri.go:89] found id: ""
	I1104 12:10:13.278576   86402 logs.go:282] 0 containers: []
	W1104 12:10:13.278586   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:10:13.278592   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:10:13.278655   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:10:13.310297   86402 cri.go:89] found id: ""
	I1104 12:10:13.310325   86402 logs.go:282] 0 containers: []
	W1104 12:10:13.310334   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:10:13.310340   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:10:13.310394   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:10:13.344110   86402 cri.go:89] found id: ""
	I1104 12:10:13.344139   86402 logs.go:282] 0 containers: []
	W1104 12:10:13.344150   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:10:13.344158   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:10:13.344210   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:10:13.379778   86402 cri.go:89] found id: ""
	I1104 12:10:13.379806   86402 logs.go:282] 0 containers: []
	W1104 12:10:13.379817   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:10:13.379824   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:10:13.379872   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:10:13.411763   86402 cri.go:89] found id: ""
	I1104 12:10:13.411795   86402 logs.go:282] 0 containers: []
	W1104 12:10:13.411806   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:10:13.411813   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:10:13.411872   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:10:13.445192   86402 cri.go:89] found id: ""
	I1104 12:10:13.445217   86402 logs.go:282] 0 containers: []
	W1104 12:10:13.445235   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:10:13.445243   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:10:13.445297   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:10:13.478518   86402 cri.go:89] found id: ""
	I1104 12:10:13.478549   86402 logs.go:282] 0 containers: []
	W1104 12:10:13.478561   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:10:13.478569   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:10:13.478710   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:10:13.513852   86402 cri.go:89] found id: ""
	I1104 12:10:13.513878   86402 logs.go:282] 0 containers: []
	W1104 12:10:13.513886   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:10:13.513895   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:10:13.513909   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:10:13.590413   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:10:13.590439   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:10:13.590454   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:10:13.664575   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:10:13.664608   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:10:13.700616   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:10:13.700644   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:10:13.751113   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:10:13.751147   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:10:16.264311   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:10:16.277443   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:10:16.277508   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:10:16.309983   86402 cri.go:89] found id: ""
	I1104 12:10:16.310010   86402 logs.go:282] 0 containers: []
	W1104 12:10:16.310020   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:10:16.310025   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:10:16.310073   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:10:16.358281   86402 cri.go:89] found id: ""
	I1104 12:10:16.358305   86402 logs.go:282] 0 containers: []
	W1104 12:10:16.358312   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:10:16.358317   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:10:16.358376   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:10:16.394455   86402 cri.go:89] found id: ""
	I1104 12:10:16.394485   86402 logs.go:282] 0 containers: []
	W1104 12:10:16.394497   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:10:16.394503   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:10:16.394571   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:10:16.430606   86402 cri.go:89] found id: ""
	I1104 12:10:16.430638   86402 logs.go:282] 0 containers: []
	W1104 12:10:16.430648   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:10:16.430655   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:10:16.430716   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:10:16.464402   86402 cri.go:89] found id: ""
	I1104 12:10:16.464439   86402 logs.go:282] 0 containers: []
	W1104 12:10:16.464450   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:10:16.464458   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:10:16.464517   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:10:16.497985   86402 cri.go:89] found id: ""
	I1104 12:10:16.498009   86402 logs.go:282] 0 containers: []
	W1104 12:10:16.498017   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:10:16.498022   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:10:16.498076   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:10:16.531255   86402 cri.go:89] found id: ""
	I1104 12:10:16.531289   86402 logs.go:282] 0 containers: []
	W1104 12:10:16.531301   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:10:16.531309   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:10:16.531372   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:10:16.566176   86402 cri.go:89] found id: ""
	I1104 12:10:16.566204   86402 logs.go:282] 0 containers: []
	W1104 12:10:16.566213   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:10:16.566228   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:10:16.566243   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:10:16.634157   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:10:16.634196   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:10:16.634218   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:10:16.710518   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:10:16.710550   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:10:16.746572   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:10:16.746608   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:10:16.797146   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:10:16.797179   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:10:19.310286   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:10:19.323409   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:10:19.323473   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:10:19.360864   86402 cri.go:89] found id: ""
	I1104 12:10:19.360893   86402 logs.go:282] 0 containers: []
	W1104 12:10:19.360902   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:10:19.360907   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:10:19.360962   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:10:19.400127   86402 cri.go:89] found id: ""
	I1104 12:10:19.400155   86402 logs.go:282] 0 containers: []
	W1104 12:10:19.400167   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:10:19.400174   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:10:19.400230   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:10:19.433023   86402 cri.go:89] found id: ""
	I1104 12:10:19.433049   86402 logs.go:282] 0 containers: []
	W1104 12:10:19.433057   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:10:19.433062   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:10:19.433123   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:10:19.467786   86402 cri.go:89] found id: ""
	I1104 12:10:19.467810   86402 logs.go:282] 0 containers: []
	W1104 12:10:19.467819   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:10:19.467825   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:10:19.467875   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:10:19.498411   86402 cri.go:89] found id: ""
	I1104 12:10:19.498436   86402 logs.go:282] 0 containers: []
	W1104 12:10:19.498444   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:10:19.498455   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:10:19.498502   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:10:19.532146   86402 cri.go:89] found id: ""
	I1104 12:10:19.532171   86402 logs.go:282] 0 containers: []
	W1104 12:10:19.532179   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:10:19.532184   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:10:19.532234   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:10:19.567271   86402 cri.go:89] found id: ""
	I1104 12:10:19.567294   86402 logs.go:282] 0 containers: []
	W1104 12:10:19.567302   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:10:19.567308   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:10:19.567369   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:10:19.608233   86402 cri.go:89] found id: ""
	I1104 12:10:19.608265   86402 logs.go:282] 0 containers: []
	W1104 12:10:19.608279   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:10:19.608289   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:10:19.608304   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:10:19.649039   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:10:19.649071   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:10:19.702129   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:10:19.702168   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:10:19.716749   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:10:19.716776   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:10:19.787538   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:10:19.787560   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:10:19.787572   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:10:22.368982   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:10:22.382889   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:10:22.382962   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:10:22.418672   86402 cri.go:89] found id: ""
	I1104 12:10:22.418698   86402 logs.go:282] 0 containers: []
	W1104 12:10:22.418709   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:10:22.418716   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:10:22.418782   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:10:22.451675   86402 cri.go:89] found id: ""
	I1104 12:10:22.451704   86402 logs.go:282] 0 containers: []
	W1104 12:10:22.451715   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:10:22.451723   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:10:22.451785   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:10:22.488520   86402 cri.go:89] found id: ""
	I1104 12:10:22.488549   86402 logs.go:282] 0 containers: []
	W1104 12:10:22.488561   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:10:22.488567   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:10:22.488631   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:10:22.530288   86402 cri.go:89] found id: ""
	I1104 12:10:22.530312   86402 logs.go:282] 0 containers: []
	W1104 12:10:22.530321   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:10:22.530326   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:10:22.530382   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:10:22.564929   86402 cri.go:89] found id: ""
	I1104 12:10:22.564958   86402 logs.go:282] 0 containers: []
	W1104 12:10:22.564970   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:10:22.564977   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:10:22.565036   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:10:22.598015   86402 cri.go:89] found id: ""
	I1104 12:10:22.598042   86402 logs.go:282] 0 containers: []
	W1104 12:10:22.598051   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:10:22.598056   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:10:22.598160   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:10:22.632894   86402 cri.go:89] found id: ""
	I1104 12:10:22.632921   86402 logs.go:282] 0 containers: []
	W1104 12:10:22.632930   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:10:22.632935   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:10:22.633001   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:10:22.665194   86402 cri.go:89] found id: ""
	I1104 12:10:22.665218   86402 logs.go:282] 0 containers: []
	W1104 12:10:22.665245   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:10:22.665257   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:10:22.665272   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:10:22.717731   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:10:22.717763   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:10:22.732671   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:10:22.732698   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:10:22.823908   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:10:22.823946   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:10:22.823963   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:10:22.907812   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:10:22.907848   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:10:25.449308   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:10:25.461694   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:10:25.461751   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:10:25.493036   86402 cri.go:89] found id: ""
	I1104 12:10:25.493061   86402 logs.go:282] 0 containers: []
	W1104 12:10:25.493068   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:10:25.493075   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:10:25.493122   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:10:25.525084   86402 cri.go:89] found id: ""
	I1104 12:10:25.525116   86402 logs.go:282] 0 containers: []
	W1104 12:10:25.525128   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:10:25.525135   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:10:25.525196   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:10:25.561380   86402 cri.go:89] found id: ""
	I1104 12:10:25.561424   86402 logs.go:282] 0 containers: []
	W1104 12:10:25.561436   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:10:25.561444   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:10:25.561499   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:10:25.595429   86402 cri.go:89] found id: ""
	I1104 12:10:25.595453   86402 logs.go:282] 0 containers: []
	W1104 12:10:25.595468   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:10:25.595474   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:10:25.595521   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:10:25.627409   86402 cri.go:89] found id: ""
	I1104 12:10:25.627436   86402 logs.go:282] 0 containers: []
	W1104 12:10:25.627445   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:10:25.627450   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:10:25.627497   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:10:25.661048   86402 cri.go:89] found id: ""
	I1104 12:10:25.661073   86402 logs.go:282] 0 containers: []
	W1104 12:10:25.661082   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:10:25.661088   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:10:25.661135   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:10:25.698882   86402 cri.go:89] found id: ""
	I1104 12:10:25.698912   86402 logs.go:282] 0 containers: []
	W1104 12:10:25.698920   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:10:25.698926   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:10:25.698978   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:10:25.733355   86402 cri.go:89] found id: ""
	I1104 12:10:25.733397   86402 logs.go:282] 0 containers: []
	W1104 12:10:25.733409   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:10:25.733420   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:10:25.733435   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:10:25.784871   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:10:25.784908   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:10:25.798715   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:10:25.798740   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:10:25.870362   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:10:25.870383   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:10:25.870397   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:10:25.950565   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:10:25.950598   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:10:28.488258   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:10:28.506058   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:10:28.506114   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:10:28.566325   86402 cri.go:89] found id: ""
	I1104 12:10:28.566351   86402 logs.go:282] 0 containers: []
	W1104 12:10:28.566358   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:10:28.566364   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:10:28.566413   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:10:28.612753   86402 cri.go:89] found id: ""
	I1104 12:10:28.612781   86402 logs.go:282] 0 containers: []
	W1104 12:10:28.612790   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:10:28.612796   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:10:28.612854   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:10:28.647082   86402 cri.go:89] found id: ""
	I1104 12:10:28.647109   86402 logs.go:282] 0 containers: []
	W1104 12:10:28.647120   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:10:28.647128   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:10:28.647205   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:10:28.683197   86402 cri.go:89] found id: ""
	I1104 12:10:28.683227   86402 logs.go:282] 0 containers: []
	W1104 12:10:28.683239   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:10:28.683247   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:10:28.683299   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:10:28.718139   86402 cri.go:89] found id: ""
	I1104 12:10:28.718175   86402 logs.go:282] 0 containers: []
	W1104 12:10:28.718186   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:10:28.718194   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:10:28.718253   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:10:28.749689   86402 cri.go:89] found id: ""
	I1104 12:10:28.749721   86402 logs.go:282] 0 containers: []
	W1104 12:10:28.749732   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:10:28.749739   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:10:28.749803   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:10:28.786824   86402 cri.go:89] found id: ""
	I1104 12:10:28.786851   86402 logs.go:282] 0 containers: []
	W1104 12:10:28.786859   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:10:28.786864   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:10:28.786925   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:10:28.822833   86402 cri.go:89] found id: ""
	I1104 12:10:28.822856   86402 logs.go:282] 0 containers: []
	W1104 12:10:28.822865   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:10:28.822872   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:10:28.822884   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:10:28.835267   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:10:28.835298   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:10:28.900051   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:10:28.900076   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:10:28.900089   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:10:28.979867   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:10:28.979912   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:10:29.017294   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:10:29.017327   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:10:31.569559   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:10:31.582065   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:10:31.582136   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:10:31.614924   86402 cri.go:89] found id: ""
	I1104 12:10:31.614952   86402 logs.go:282] 0 containers: []
	W1104 12:10:31.614960   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:10:31.614966   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:10:31.615029   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:10:31.647178   86402 cri.go:89] found id: ""
	I1104 12:10:31.647204   86402 logs.go:282] 0 containers: []
	W1104 12:10:31.647212   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:10:31.647218   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:10:31.647277   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:10:31.678723   86402 cri.go:89] found id: ""
	I1104 12:10:31.678749   86402 logs.go:282] 0 containers: []
	W1104 12:10:31.678761   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:10:31.678769   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:10:31.678819   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:10:31.713013   86402 cri.go:89] found id: ""
	I1104 12:10:31.713036   86402 logs.go:282] 0 containers: []
	W1104 12:10:31.713043   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:10:31.713048   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:10:31.713092   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:10:31.746564   86402 cri.go:89] found id: ""
	I1104 12:10:31.746591   86402 logs.go:282] 0 containers: []
	W1104 12:10:31.746600   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:10:31.746605   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:10:31.746658   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:10:31.779559   86402 cri.go:89] found id: ""
	I1104 12:10:31.779586   86402 logs.go:282] 0 containers: []
	W1104 12:10:31.779594   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:10:31.779601   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:10:31.779652   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:10:31.812047   86402 cri.go:89] found id: ""
	I1104 12:10:31.812076   86402 logs.go:282] 0 containers: []
	W1104 12:10:31.812087   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:10:31.812094   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:10:31.812163   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:10:31.845479   86402 cri.go:89] found id: ""
	I1104 12:10:31.845510   86402 logs.go:282] 0 containers: []
	W1104 12:10:31.845522   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:10:31.845532   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:10:31.845551   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:10:31.909399   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:10:31.909423   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:10:31.909434   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:10:31.985994   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:10:31.986031   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:10:32.023222   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:10:32.023255   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:10:32.074429   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:10:32.074467   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:10:34.588202   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:10:34.600925   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:10:34.600994   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:10:34.632718   86402 cri.go:89] found id: ""
	I1104 12:10:34.632743   86402 logs.go:282] 0 containers: []
	W1104 12:10:34.632754   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:10:34.632763   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:10:34.632813   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:10:34.665553   86402 cri.go:89] found id: ""
	I1104 12:10:34.665576   86402 logs.go:282] 0 containers: []
	W1104 12:10:34.665585   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:10:34.665590   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:10:34.665641   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:10:34.700059   86402 cri.go:89] found id: ""
	I1104 12:10:34.700081   86402 logs.go:282] 0 containers: []
	W1104 12:10:34.700089   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:10:34.700094   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:10:34.700141   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:10:34.732940   86402 cri.go:89] found id: ""
	I1104 12:10:34.732962   86402 logs.go:282] 0 containers: []
	W1104 12:10:34.732970   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:10:34.732978   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:10:34.733023   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:10:34.764580   86402 cri.go:89] found id: ""
	I1104 12:10:34.764610   86402 logs.go:282] 0 containers: []
	W1104 12:10:34.764618   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:10:34.764624   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:10:34.764680   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:10:34.798030   86402 cri.go:89] found id: ""
	I1104 12:10:34.798053   86402 logs.go:282] 0 containers: []
	W1104 12:10:34.798061   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:10:34.798067   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:10:34.798115   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:10:34.829847   86402 cri.go:89] found id: ""
	I1104 12:10:34.829876   86402 logs.go:282] 0 containers: []
	W1104 12:10:34.829884   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:10:34.829889   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:10:34.829946   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:10:34.862764   86402 cri.go:89] found id: ""
	I1104 12:10:34.862792   86402 logs.go:282] 0 containers: []
	W1104 12:10:34.862804   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:10:34.862815   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:10:34.862828   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:10:34.912367   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:10:34.912397   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:10:34.925347   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:10:34.925383   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:10:34.990459   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:10:34.990486   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:10:34.990502   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:10:35.066765   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:10:35.066796   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:10:37.602696   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:10:37.615041   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:10:37.615115   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:10:37.646872   86402 cri.go:89] found id: ""
	I1104 12:10:37.646900   86402 logs.go:282] 0 containers: []
	W1104 12:10:37.646911   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:10:37.646918   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:10:37.646977   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:10:37.679770   86402 cri.go:89] found id: ""
	I1104 12:10:37.679797   86402 logs.go:282] 0 containers: []
	W1104 12:10:37.679805   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:10:37.679810   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:10:37.679867   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:10:37.711693   86402 cri.go:89] found id: ""
	I1104 12:10:37.711720   86402 logs.go:282] 0 containers: []
	W1104 12:10:37.711733   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:10:37.711743   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:10:37.711803   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:10:37.746605   86402 cri.go:89] found id: ""
	I1104 12:10:37.746636   86402 logs.go:282] 0 containers: []
	W1104 12:10:37.746648   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:10:37.746656   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:10:37.746716   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:10:37.778983   86402 cri.go:89] found id: ""
	I1104 12:10:37.779010   86402 logs.go:282] 0 containers: []
	W1104 12:10:37.779020   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:10:37.779026   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:10:37.779086   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:10:37.813293   86402 cri.go:89] found id: ""
	I1104 12:10:37.813321   86402 logs.go:282] 0 containers: []
	W1104 12:10:37.813330   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:10:37.813335   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:10:37.813387   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:10:37.846181   86402 cri.go:89] found id: ""
	I1104 12:10:37.846209   86402 logs.go:282] 0 containers: []
	W1104 12:10:37.846219   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:10:37.846226   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:10:37.846287   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:10:37.877485   86402 cri.go:89] found id: ""
	I1104 12:10:37.877520   86402 logs.go:282] 0 containers: []
	W1104 12:10:37.877531   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:10:37.877541   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:10:37.877558   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:10:37.926704   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:10:37.926733   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:10:37.939771   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:10:37.939796   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:10:38.003762   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:10:38.003783   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:10:38.003800   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:10:38.085419   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:10:38.085456   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:10:40.625351   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:10:40.637380   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:10:40.637459   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:10:40.670274   86402 cri.go:89] found id: ""
	I1104 12:10:40.670303   86402 logs.go:282] 0 containers: []
	W1104 12:10:40.670315   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:10:40.670322   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:10:40.670382   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:10:40.703383   86402 cri.go:89] found id: ""
	I1104 12:10:40.703414   86402 logs.go:282] 0 containers: []
	W1104 12:10:40.703427   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:10:40.703434   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:10:40.703481   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:10:40.739549   86402 cri.go:89] found id: ""
	I1104 12:10:40.739576   86402 logs.go:282] 0 containers: []
	W1104 12:10:40.739586   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:10:40.739594   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:10:40.739651   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:10:40.775466   86402 cri.go:89] found id: ""
	I1104 12:10:40.775492   86402 logs.go:282] 0 containers: []
	W1104 12:10:40.775502   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:10:40.775513   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:10:40.775567   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:10:40.810486   86402 cri.go:89] found id: ""
	I1104 12:10:40.810515   86402 logs.go:282] 0 containers: []
	W1104 12:10:40.810525   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:10:40.810533   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:10:40.810593   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:10:40.844277   86402 cri.go:89] found id: ""
	I1104 12:10:40.844309   86402 logs.go:282] 0 containers: []
	W1104 12:10:40.844321   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:10:40.844329   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:10:40.844391   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:10:40.878699   86402 cri.go:89] found id: ""
	I1104 12:10:40.878728   86402 logs.go:282] 0 containers: []
	W1104 12:10:40.878739   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:10:40.878746   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:10:40.878804   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:10:40.913888   86402 cri.go:89] found id: ""
	I1104 12:10:40.913913   86402 logs.go:282] 0 containers: []
	W1104 12:10:40.913921   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:10:40.913929   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:10:40.913939   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:10:40.966854   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:10:40.966892   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:10:40.980483   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:10:40.980510   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:10:41.046059   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:10:41.046085   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:10:41.046100   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:10:41.129746   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:10:41.129779   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:10:43.667029   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:10:43.680024   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:10:43.680092   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:10:43.714185   86402 cri.go:89] found id: ""
	I1104 12:10:43.714218   86402 logs.go:282] 0 containers: []
	W1104 12:10:43.714227   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:10:43.714235   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:10:43.714294   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:10:43.749493   86402 cri.go:89] found id: ""
	I1104 12:10:43.749515   86402 logs.go:282] 0 containers: []
	W1104 12:10:43.749523   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:10:43.749529   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:10:43.749588   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:10:43.785400   86402 cri.go:89] found id: ""
	I1104 12:10:43.785426   86402 logs.go:282] 0 containers: []
	W1104 12:10:43.785437   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:10:43.785444   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:10:43.785507   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:10:43.818465   86402 cri.go:89] found id: ""
	I1104 12:10:43.818505   86402 logs.go:282] 0 containers: []
	W1104 12:10:43.818517   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:10:43.818524   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:10:43.818573   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:10:43.850232   86402 cri.go:89] found id: ""
	I1104 12:10:43.850262   86402 logs.go:282] 0 containers: []
	W1104 12:10:43.850272   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:10:43.850279   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:10:43.850337   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:10:43.882806   86402 cri.go:89] found id: ""
	I1104 12:10:43.882840   86402 logs.go:282] 0 containers: []
	W1104 12:10:43.882851   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:10:43.882859   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:10:43.882920   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:10:43.919449   86402 cri.go:89] found id: ""
	I1104 12:10:43.919476   86402 logs.go:282] 0 containers: []
	W1104 12:10:43.919486   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:10:43.919493   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:10:43.919556   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:10:43.953761   86402 cri.go:89] found id: ""
	I1104 12:10:43.953791   86402 logs.go:282] 0 containers: []
	W1104 12:10:43.953801   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:10:43.953812   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:10:43.953825   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:10:44.005559   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:10:44.005594   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:10:44.019431   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:10:44.019456   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:10:44.094436   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:10:44.094457   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:10:44.094470   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:10:44.174026   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:10:44.174061   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:10:46.712021   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:10:46.724258   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:10:46.724318   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:10:46.754472   86402 cri.go:89] found id: ""
	I1104 12:10:46.754501   86402 logs.go:282] 0 containers: []
	W1104 12:10:46.754510   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:10:46.754515   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:10:46.754563   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:10:46.790184   86402 cri.go:89] found id: ""
	I1104 12:10:46.790209   86402 logs.go:282] 0 containers: []
	W1104 12:10:46.790219   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:10:46.790226   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:10:46.790284   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:10:46.824840   86402 cri.go:89] found id: ""
	I1104 12:10:46.824865   86402 logs.go:282] 0 containers: []
	W1104 12:10:46.824875   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:10:46.824882   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:10:46.824952   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:10:46.857295   86402 cri.go:89] found id: ""
	I1104 12:10:46.857329   86402 logs.go:282] 0 containers: []
	W1104 12:10:46.857360   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:10:46.857369   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:10:46.857430   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:10:46.889540   86402 cri.go:89] found id: ""
	I1104 12:10:46.889571   86402 logs.go:282] 0 containers: []
	W1104 12:10:46.889582   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:10:46.889588   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:10:46.889652   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:10:46.930165   86402 cri.go:89] found id: ""
	I1104 12:10:46.930195   86402 logs.go:282] 0 containers: []
	W1104 12:10:46.930204   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:10:46.930210   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:10:46.930266   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:10:46.965964   86402 cri.go:89] found id: ""
	I1104 12:10:46.965994   86402 logs.go:282] 0 containers: []
	W1104 12:10:46.966006   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:10:46.966013   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:10:46.966060   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:10:47.002700   86402 cri.go:89] found id: ""
	I1104 12:10:47.002732   86402 logs.go:282] 0 containers: []
	W1104 12:10:47.002741   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:10:47.002749   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:10:47.002760   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:10:47.056362   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:10:47.056392   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:10:47.070447   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:10:47.070472   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:10:47.143207   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:10:47.143240   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:10:47.143256   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:10:47.223985   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:10:47.224015   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:10:49.765870   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:10:49.778288   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:10:49.778352   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:10:49.812012   86402 cri.go:89] found id: ""
	I1104 12:10:49.812044   86402 logs.go:282] 0 containers: []
	W1104 12:10:49.812054   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:10:49.812064   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:10:49.812115   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:10:49.847260   86402 cri.go:89] found id: ""
	I1104 12:10:49.847290   86402 logs.go:282] 0 containers: []
	W1104 12:10:49.847301   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:10:49.847308   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:10:49.847361   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:10:49.877397   86402 cri.go:89] found id: ""
	I1104 12:10:49.877419   86402 logs.go:282] 0 containers: []
	W1104 12:10:49.877427   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:10:49.877432   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:10:49.877486   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:10:49.912453   86402 cri.go:89] found id: ""
	I1104 12:10:49.912484   86402 logs.go:282] 0 containers: []
	W1104 12:10:49.912499   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:10:49.912506   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:10:49.912572   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:10:49.948374   86402 cri.go:89] found id: ""
	I1104 12:10:49.948404   86402 logs.go:282] 0 containers: []
	W1104 12:10:49.948416   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:10:49.948422   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:10:49.948488   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:10:49.982190   86402 cri.go:89] found id: ""
	I1104 12:10:49.982216   86402 logs.go:282] 0 containers: []
	W1104 12:10:49.982228   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:10:49.982236   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:10:49.982294   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:10:50.014396   86402 cri.go:89] found id: ""
	I1104 12:10:50.014426   86402 logs.go:282] 0 containers: []
	W1104 12:10:50.014437   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:10:50.014445   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:10:50.014507   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:10:50.051770   86402 cri.go:89] found id: ""
	I1104 12:10:50.051793   86402 logs.go:282] 0 containers: []
	W1104 12:10:50.051801   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:10:50.051809   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:10:50.051820   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:10:50.116158   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:10:50.116185   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:10:50.116202   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:10:50.194382   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:10:50.194431   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:10:50.235957   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:10:50.235983   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:10:50.290720   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:10:50.290750   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:10:52.805144   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:10:52.817686   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:10:52.817753   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:10:52.852470   86402 cri.go:89] found id: ""
	I1104 12:10:52.852492   86402 logs.go:282] 0 containers: []
	W1104 12:10:52.852546   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:10:52.852559   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:10:52.852603   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:10:52.889682   86402 cri.go:89] found id: ""
	I1104 12:10:52.889705   86402 logs.go:282] 0 containers: []
	W1104 12:10:52.889714   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:10:52.889720   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:10:52.889773   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:10:52.924490   86402 cri.go:89] found id: ""
	I1104 12:10:52.924525   86402 logs.go:282] 0 containers: []
	W1104 12:10:52.924537   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:10:52.924544   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:10:52.924604   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:10:52.957055   86402 cri.go:89] found id: ""
	I1104 12:10:52.957085   86402 logs.go:282] 0 containers: []
	W1104 12:10:52.957094   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:10:52.957099   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:10:52.957143   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:10:52.993379   86402 cri.go:89] found id: ""
	I1104 12:10:52.993411   86402 logs.go:282] 0 containers: []
	W1104 12:10:52.993423   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:10:52.993430   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:10:52.993493   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:10:53.027365   86402 cri.go:89] found id: ""
	I1104 12:10:53.027398   86402 logs.go:282] 0 containers: []
	W1104 12:10:53.027407   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:10:53.027412   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:10:53.027488   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:10:53.061048   86402 cri.go:89] found id: ""
	I1104 12:10:53.061074   86402 logs.go:282] 0 containers: []
	W1104 12:10:53.061082   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:10:53.061089   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:10:53.061163   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:10:53.101867   86402 cri.go:89] found id: ""
	I1104 12:10:53.101894   86402 logs.go:282] 0 containers: []
	W1104 12:10:53.101904   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:10:53.101915   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:10:53.101927   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:10:53.152314   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:10:53.152351   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:10:53.165630   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:10:53.165657   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:10:53.239717   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:10:53.239739   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:10:53.239753   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:10:53.318140   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:10:53.318186   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:10:55.857443   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:10:55.869524   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:10:55.869608   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:10:55.900719   86402 cri.go:89] found id: ""
	I1104 12:10:55.900743   86402 logs.go:282] 0 containers: []
	W1104 12:10:55.900753   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:10:55.900761   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:10:55.900821   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:10:55.932699   86402 cri.go:89] found id: ""
	I1104 12:10:55.932724   86402 logs.go:282] 0 containers: []
	W1104 12:10:55.932734   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:10:55.932741   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:10:55.932798   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:10:55.964729   86402 cri.go:89] found id: ""
	I1104 12:10:55.964758   86402 logs.go:282] 0 containers: []
	W1104 12:10:55.964767   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:10:55.964775   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:10:55.964823   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:10:55.997870   86402 cri.go:89] found id: ""
	I1104 12:10:55.997897   86402 logs.go:282] 0 containers: []
	W1104 12:10:55.997907   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:10:55.997915   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:10:55.997977   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:10:56.031707   86402 cri.go:89] found id: ""
	I1104 12:10:56.031736   86402 logs.go:282] 0 containers: []
	W1104 12:10:56.031744   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:10:56.031749   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:10:56.031805   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:10:56.070839   86402 cri.go:89] found id: ""
	I1104 12:10:56.070863   86402 logs.go:282] 0 containers: []
	W1104 12:10:56.070871   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:10:56.070877   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:10:56.070922   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:10:56.109364   86402 cri.go:89] found id: ""
	I1104 12:10:56.109393   86402 logs.go:282] 0 containers: []
	W1104 12:10:56.109404   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:10:56.109412   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:10:56.109474   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:10:56.143369   86402 cri.go:89] found id: ""
	I1104 12:10:56.143402   86402 logs.go:282] 0 containers: []
	W1104 12:10:56.143414   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:10:56.143424   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:10:56.143437   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:10:56.156924   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:10:56.156952   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:10:56.223624   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:10:56.223647   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:10:56.223659   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:10:56.302040   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:10:56.302082   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:10:56.343102   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:10:56.343150   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:10:58.896551   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:10:58.909034   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:10:58.909110   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:10:58.944520   86402 cri.go:89] found id: ""
	I1104 12:10:58.944550   86402 logs.go:282] 0 containers: []
	W1104 12:10:58.944559   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:10:58.944565   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:10:58.944612   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:10:58.980137   86402 cri.go:89] found id: ""
	I1104 12:10:58.980167   86402 logs.go:282] 0 containers: []
	W1104 12:10:58.980176   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:10:58.980181   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:10:58.980231   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:10:59.014505   86402 cri.go:89] found id: ""
	I1104 12:10:59.014536   86402 logs.go:282] 0 containers: []
	W1104 12:10:59.014545   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:10:59.014551   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:10:59.014602   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:10:59.050616   86402 cri.go:89] found id: ""
	I1104 12:10:59.050642   86402 logs.go:282] 0 containers: []
	W1104 12:10:59.050652   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:10:59.050659   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:10:59.050718   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:10:59.084328   86402 cri.go:89] found id: ""
	I1104 12:10:59.084358   86402 logs.go:282] 0 containers: []
	W1104 12:10:59.084369   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:10:59.084376   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:10:59.084449   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:10:59.116607   86402 cri.go:89] found id: ""
	I1104 12:10:59.116633   86402 logs.go:282] 0 containers: []
	W1104 12:10:59.116642   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:10:59.116649   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:10:59.116711   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:10:59.149727   86402 cri.go:89] found id: ""
	I1104 12:10:59.149754   86402 logs.go:282] 0 containers: []
	W1104 12:10:59.149765   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:10:59.149773   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:10:59.149832   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:10:59.182992   86402 cri.go:89] found id: ""
	I1104 12:10:59.183023   86402 logs.go:282] 0 containers: []
	W1104 12:10:59.183035   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:10:59.183045   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:10:59.183059   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:10:59.234826   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:10:59.234862   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:10:59.248401   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:10:59.248427   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:10:59.317143   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:10:59.317171   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:10:59.317186   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:10:59.397294   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:10:59.397336   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:11:01.933617   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:11:01.946458   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:11:01.946537   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:11:01.981652   86402 cri.go:89] found id: ""
	I1104 12:11:01.981682   86402 logs.go:282] 0 containers: []
	W1104 12:11:01.981693   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:11:01.981701   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:11:01.981757   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:11:02.014245   86402 cri.go:89] found id: ""
	I1104 12:11:02.014273   86402 logs.go:282] 0 containers: []
	W1104 12:11:02.014282   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:11:02.014287   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:11:02.014350   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:11:02.047386   86402 cri.go:89] found id: ""
	I1104 12:11:02.047409   86402 logs.go:282] 0 containers: []
	W1104 12:11:02.047420   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:11:02.047427   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:11:02.047488   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:11:02.086427   86402 cri.go:89] found id: ""
	I1104 12:11:02.086464   86402 logs.go:282] 0 containers: []
	W1104 12:11:02.086475   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:11:02.086483   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:11:02.086544   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:11:02.120219   86402 cri.go:89] found id: ""
	I1104 12:11:02.120246   86402 logs.go:282] 0 containers: []
	W1104 12:11:02.120255   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:11:02.120260   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:11:02.120318   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:11:02.153832   86402 cri.go:89] found id: ""
	I1104 12:11:02.153864   86402 logs.go:282] 0 containers: []
	W1104 12:11:02.153876   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:11:02.153884   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:11:02.153950   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:11:02.186237   86402 cri.go:89] found id: ""
	I1104 12:11:02.186266   86402 logs.go:282] 0 containers: []
	W1104 12:11:02.186278   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:11:02.186285   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:11:02.186351   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:11:02.219238   86402 cri.go:89] found id: ""
	I1104 12:11:02.219269   86402 logs.go:282] 0 containers: []
	W1104 12:11:02.219280   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:11:02.219290   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:11:02.219301   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:11:02.301062   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:11:02.301099   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:11:02.358585   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:11:02.358617   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:11:02.414153   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:11:02.414200   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:11:02.428429   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:11:02.428456   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:11:02.497040   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:11:04.998089   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:11:05.010890   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:11:05.010947   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:11:05.046483   86402 cri.go:89] found id: ""
	I1104 12:11:05.046513   86402 logs.go:282] 0 containers: []
	W1104 12:11:05.046523   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:11:05.046534   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:11:05.046594   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:11:05.079487   86402 cri.go:89] found id: ""
	I1104 12:11:05.079516   86402 logs.go:282] 0 containers: []
	W1104 12:11:05.079527   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:11:05.079535   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:11:05.079595   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:11:05.110968   86402 cri.go:89] found id: ""
	I1104 12:11:05.110997   86402 logs.go:282] 0 containers: []
	W1104 12:11:05.111004   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:11:05.111010   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:11:05.111057   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:11:05.143372   86402 cri.go:89] found id: ""
	I1104 12:11:05.143398   86402 logs.go:282] 0 containers: []
	W1104 12:11:05.143408   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:11:05.143415   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:11:05.143484   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:11:05.174691   86402 cri.go:89] found id: ""
	I1104 12:11:05.174717   86402 logs.go:282] 0 containers: []
	W1104 12:11:05.174730   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:11:05.174737   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:11:05.174802   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:11:05.210005   86402 cri.go:89] found id: ""
	I1104 12:11:05.210025   86402 logs.go:282] 0 containers: []
	W1104 12:11:05.210033   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:11:05.210041   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:11:05.210085   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:11:05.244874   86402 cri.go:89] found id: ""
	I1104 12:11:05.244899   86402 logs.go:282] 0 containers: []
	W1104 12:11:05.244908   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:11:05.244913   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:11:05.244956   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:11:05.276517   86402 cri.go:89] found id: ""
	I1104 12:11:05.276547   86402 logs.go:282] 0 containers: []
	W1104 12:11:05.276557   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:11:05.276568   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:11:05.276581   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:11:05.354057   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:11:05.354087   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:11:05.390848   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:11:05.390887   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:11:05.442659   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:11:05.442692   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:11:05.456290   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:11:05.456315   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:11:05.530310   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:11:08.030545   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:11:08.043598   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:11:08.043654   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:11:08.081604   86402 cri.go:89] found id: ""
	I1104 12:11:08.081634   86402 logs.go:282] 0 containers: []
	W1104 12:11:08.081644   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:11:08.081652   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:11:08.081712   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:11:08.135357   86402 cri.go:89] found id: ""
	I1104 12:11:08.135388   86402 logs.go:282] 0 containers: []
	W1104 12:11:08.135398   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:11:08.135405   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:11:08.135470   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:11:08.173275   86402 cri.go:89] found id: ""
	I1104 12:11:08.173298   86402 logs.go:282] 0 containers: []
	W1104 12:11:08.173306   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:11:08.173311   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:11:08.173371   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:11:08.213415   86402 cri.go:89] found id: ""
	I1104 12:11:08.213439   86402 logs.go:282] 0 containers: []
	W1104 12:11:08.213448   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:11:08.213454   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:11:08.213507   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:11:08.244759   86402 cri.go:89] found id: ""
	I1104 12:11:08.244791   86402 logs.go:282] 0 containers: []
	W1104 12:11:08.244802   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:11:08.244809   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:11:08.244870   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:11:08.276643   86402 cri.go:89] found id: ""
	I1104 12:11:08.276666   86402 logs.go:282] 0 containers: []
	W1104 12:11:08.276675   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:11:08.276682   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:11:08.276751   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:11:08.308425   86402 cri.go:89] found id: ""
	I1104 12:11:08.308451   86402 logs.go:282] 0 containers: []
	W1104 12:11:08.308462   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:11:08.308469   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:11:08.308527   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:11:08.340645   86402 cri.go:89] found id: ""
	I1104 12:11:08.340675   86402 logs.go:282] 0 containers: []
	W1104 12:11:08.340687   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:11:08.340698   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:11:08.340712   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:11:08.413171   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:11:08.413196   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:11:08.413214   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:11:08.496208   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:11:08.496246   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:11:08.534527   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:11:08.534560   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:11:08.583515   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:11:08.583550   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:11:11.099000   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:11:11.112158   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:11:11.112236   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:11:11.145718   86402 cri.go:89] found id: ""
	I1104 12:11:11.145748   86402 logs.go:282] 0 containers: []
	W1104 12:11:11.145758   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:11:11.145765   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:11:11.145958   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:11:11.177270   86402 cri.go:89] found id: ""
	I1104 12:11:11.177301   86402 logs.go:282] 0 containers: []
	W1104 12:11:11.177317   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:11:11.177325   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:11:11.177396   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:11:11.209696   86402 cri.go:89] found id: ""
	I1104 12:11:11.209722   86402 logs.go:282] 0 containers: []
	W1104 12:11:11.209737   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:11:11.209742   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:11:11.209789   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:11:11.244034   86402 cri.go:89] found id: ""
	I1104 12:11:11.244061   86402 logs.go:282] 0 containers: []
	W1104 12:11:11.244069   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:11:11.244078   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:11:11.244135   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:11:11.276437   86402 cri.go:89] found id: ""
	I1104 12:11:11.276462   86402 logs.go:282] 0 containers: []
	W1104 12:11:11.276470   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:11:11.276476   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:11:11.276530   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:11:11.308954   86402 cri.go:89] found id: ""
	I1104 12:11:11.308980   86402 logs.go:282] 0 containers: []
	W1104 12:11:11.308988   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:11:11.308994   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:11:11.309057   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:11:11.342175   86402 cri.go:89] found id: ""
	I1104 12:11:11.342199   86402 logs.go:282] 0 containers: []
	W1104 12:11:11.342207   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:11:11.342211   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:11:11.342266   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:11:11.374810   86402 cri.go:89] found id: ""
	I1104 12:11:11.374839   86402 logs.go:282] 0 containers: []
	W1104 12:11:11.374851   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:11:11.374860   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:11:11.374875   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:11:11.443638   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:11:11.443667   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:11:11.443681   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:11:11.526996   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:11:11.527031   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:11:11.568297   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:11:11.568325   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:11:11.616229   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:11:11.616264   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:11:14.130707   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:11:14.143045   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:11:14.143116   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:11:14.185422   86402 cri.go:89] found id: ""
	I1104 12:11:14.185461   86402 logs.go:282] 0 containers: []
	W1104 12:11:14.185471   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:11:14.185477   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:11:14.185525   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:11:14.219890   86402 cri.go:89] found id: ""
	I1104 12:11:14.219918   86402 logs.go:282] 0 containers: []
	W1104 12:11:14.219928   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:11:14.219938   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:11:14.219985   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:11:14.253256   86402 cri.go:89] found id: ""
	I1104 12:11:14.253286   86402 logs.go:282] 0 containers: []
	W1104 12:11:14.253296   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:11:14.253304   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:11:14.253364   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:11:14.286228   86402 cri.go:89] found id: ""
	I1104 12:11:14.286259   86402 logs.go:282] 0 containers: []
	W1104 12:11:14.286271   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:11:14.286279   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:11:14.286342   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:11:14.317065   86402 cri.go:89] found id: ""
	I1104 12:11:14.317091   86402 logs.go:282] 0 containers: []
	W1104 12:11:14.317101   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:11:14.317106   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:11:14.317168   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:11:14.348540   86402 cri.go:89] found id: ""
	I1104 12:11:14.348575   86402 logs.go:282] 0 containers: []
	W1104 12:11:14.348583   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:11:14.348589   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:11:14.348647   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:11:14.380824   86402 cri.go:89] found id: ""
	I1104 12:11:14.380849   86402 logs.go:282] 0 containers: []
	W1104 12:11:14.380858   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:11:14.380863   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:11:14.380924   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:11:14.413757   86402 cri.go:89] found id: ""
	I1104 12:11:14.413785   86402 logs.go:282] 0 containers: []
	W1104 12:11:14.413796   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:11:14.413806   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:11:14.413822   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:11:14.479311   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:11:14.479336   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:11:14.479349   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:11:14.572923   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:11:14.572959   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:11:14.620277   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:11:14.620359   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:11:14.674276   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:11:14.674310   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:11:17.187062   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:11:17.200179   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:11:17.200260   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:11:17.232208   86402 cri.go:89] found id: ""
	I1104 12:11:17.232231   86402 logs.go:282] 0 containers: []
	W1104 12:11:17.232238   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:11:17.232244   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:11:17.232298   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:11:17.266224   86402 cri.go:89] found id: ""
	I1104 12:11:17.266248   86402 logs.go:282] 0 containers: []
	W1104 12:11:17.266257   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:11:17.266262   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:11:17.266320   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:11:17.301909   86402 cri.go:89] found id: ""
	I1104 12:11:17.301940   86402 logs.go:282] 0 containers: []
	W1104 12:11:17.301948   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:11:17.301953   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:11:17.302005   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:11:17.339493   86402 cri.go:89] found id: ""
	I1104 12:11:17.339517   86402 logs.go:282] 0 containers: []
	W1104 12:11:17.339530   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:11:17.339537   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:11:17.339600   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:11:17.373879   86402 cri.go:89] found id: ""
	I1104 12:11:17.373927   86402 logs.go:282] 0 containers: []
	W1104 12:11:17.373938   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:11:17.373945   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:11:17.373996   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:11:17.405533   86402 cri.go:89] found id: ""
	I1104 12:11:17.405562   86402 logs.go:282] 0 containers: []
	W1104 12:11:17.405573   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:11:17.405583   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:11:17.405645   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:11:17.439421   86402 cri.go:89] found id: ""
	I1104 12:11:17.439451   86402 logs.go:282] 0 containers: []
	W1104 12:11:17.439460   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:11:17.439468   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:11:17.439532   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:11:17.474573   86402 cri.go:89] found id: ""
	I1104 12:11:17.474602   86402 logs.go:282] 0 containers: []
	W1104 12:11:17.474613   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:11:17.474623   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:11:17.474636   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:11:17.524497   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:11:17.524536   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:11:17.538421   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:11:17.538460   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:11:17.607299   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:11:17.607323   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:11:17.607337   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:11:17.684181   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:11:17.684224   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:11:20.223600   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:11:20.237793   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:11:20.237865   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:11:20.279656   86402 cri.go:89] found id: ""
	I1104 12:11:20.279682   86402 logs.go:282] 0 containers: []
	W1104 12:11:20.279693   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:11:20.279700   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:11:20.279767   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:11:20.337980   86402 cri.go:89] found id: ""
	I1104 12:11:20.338009   86402 logs.go:282] 0 containers: []
	W1104 12:11:20.338020   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:11:20.338027   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:11:20.338087   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:11:20.383183   86402 cri.go:89] found id: ""
	I1104 12:11:20.383217   86402 logs.go:282] 0 containers: []
	W1104 12:11:20.383226   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:11:20.383231   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:11:20.383282   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:11:20.416470   86402 cri.go:89] found id: ""
	I1104 12:11:20.416495   86402 logs.go:282] 0 containers: []
	W1104 12:11:20.416505   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:11:20.416512   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:11:20.416570   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:11:20.451968   86402 cri.go:89] found id: ""
	I1104 12:11:20.452000   86402 logs.go:282] 0 containers: []
	W1104 12:11:20.452011   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:11:20.452017   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:11:20.452074   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:11:20.484800   86402 cri.go:89] found id: ""
	I1104 12:11:20.484823   86402 logs.go:282] 0 containers: []
	W1104 12:11:20.484831   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:11:20.484837   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:11:20.484893   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:11:20.516263   86402 cri.go:89] found id: ""
	I1104 12:11:20.516292   86402 logs.go:282] 0 containers: []
	W1104 12:11:20.516300   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:11:20.516306   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:11:20.516364   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:11:20.548616   86402 cri.go:89] found id: ""
	I1104 12:11:20.548640   86402 logs.go:282] 0 containers: []
	W1104 12:11:20.548651   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:11:20.548661   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:11:20.548674   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:11:20.599338   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:11:20.599368   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:11:20.613116   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:11:20.613148   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:11:20.678898   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:11:20.678924   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:11:20.678936   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:11:20.757570   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:11:20.757606   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:11:23.293912   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:11:23.307037   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:11:23.307110   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:11:23.341161   86402 cri.go:89] found id: ""
	I1104 12:11:23.341186   86402 logs.go:282] 0 containers: []
	W1104 12:11:23.341195   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:11:23.341200   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:11:23.341277   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:11:23.373462   86402 cri.go:89] found id: ""
	I1104 12:11:23.373491   86402 logs.go:282] 0 containers: []
	W1104 12:11:23.373503   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:11:23.373510   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:11:23.373568   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:11:23.404439   86402 cri.go:89] found id: ""
	I1104 12:11:23.404471   86402 logs.go:282] 0 containers: []
	W1104 12:11:23.404482   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:11:23.404489   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:11:23.404548   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:11:23.435224   86402 cri.go:89] found id: ""
	I1104 12:11:23.435256   86402 logs.go:282] 0 containers: []
	W1104 12:11:23.435267   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:11:23.435274   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:11:23.435336   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:11:23.472593   86402 cri.go:89] found id: ""
	I1104 12:11:23.472622   86402 logs.go:282] 0 containers: []
	W1104 12:11:23.472633   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:11:23.472641   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:11:23.472693   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:11:23.503413   86402 cri.go:89] found id: ""
	I1104 12:11:23.503438   86402 logs.go:282] 0 containers: []
	W1104 12:11:23.503447   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:11:23.503454   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:11:23.503516   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:11:23.537582   86402 cri.go:89] found id: ""
	I1104 12:11:23.537610   86402 logs.go:282] 0 containers: []
	W1104 12:11:23.537621   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:11:23.537628   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:11:23.537689   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:11:23.573799   86402 cri.go:89] found id: ""
	I1104 12:11:23.573824   86402 logs.go:282] 0 containers: []
	W1104 12:11:23.573831   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:11:23.573838   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:11:23.573851   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:11:23.649239   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:11:23.649273   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:11:23.686518   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:11:23.686548   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:11:23.738955   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:11:23.738987   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:11:23.751909   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:11:23.751935   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:11:23.827244   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:11:26.327902   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:11:26.339708   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:11:26.339784   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:11:26.369615   86402 cri.go:89] found id: ""
	I1104 12:11:26.369644   86402 logs.go:282] 0 containers: []
	W1104 12:11:26.369653   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:11:26.369659   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:11:26.369715   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:11:26.402027   86402 cri.go:89] found id: ""
	I1104 12:11:26.402056   86402 logs.go:282] 0 containers: []
	W1104 12:11:26.402065   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:11:26.402070   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:11:26.402123   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:11:26.433483   86402 cri.go:89] found id: ""
	I1104 12:11:26.433512   86402 logs.go:282] 0 containers: []
	W1104 12:11:26.433523   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:11:26.433529   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:11:26.433637   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:11:26.466403   86402 cri.go:89] found id: ""
	I1104 12:11:26.466442   86402 logs.go:282] 0 containers: []
	W1104 12:11:26.466453   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:11:26.466468   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:11:26.466524   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:11:26.499818   86402 cri.go:89] found id: ""
	I1104 12:11:26.499853   86402 logs.go:282] 0 containers: []
	W1104 12:11:26.499864   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:11:26.499871   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:11:26.499930   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:11:26.537782   86402 cri.go:89] found id: ""
	I1104 12:11:26.537809   86402 logs.go:282] 0 containers: []
	W1104 12:11:26.537822   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:11:26.537830   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:11:26.537890   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:11:26.574091   86402 cri.go:89] found id: ""
	I1104 12:11:26.574120   86402 logs.go:282] 0 containers: []
	W1104 12:11:26.574131   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:11:26.574138   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:11:26.574199   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:11:26.607554   86402 cri.go:89] found id: ""
	I1104 12:11:26.607584   86402 logs.go:282] 0 containers: []
	W1104 12:11:26.607596   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:11:26.607606   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:11:26.607620   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:11:26.657405   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:11:26.657443   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:11:26.670022   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:11:26.670046   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:11:26.736238   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:11:26.736266   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:11:26.736278   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:11:26.816277   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:11:26.816309   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:11:29.357639   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:11:29.371116   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:11:29.371204   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:11:29.405569   86402 cri.go:89] found id: ""
	I1104 12:11:29.405595   86402 logs.go:282] 0 containers: []
	W1104 12:11:29.405604   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:11:29.405611   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:11:29.405668   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:11:29.435669   86402 cri.go:89] found id: ""
	I1104 12:11:29.435697   86402 logs.go:282] 0 containers: []
	W1104 12:11:29.435709   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:11:29.435716   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:11:29.435781   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:11:29.476208   86402 cri.go:89] found id: ""
	I1104 12:11:29.476236   86402 logs.go:282] 0 containers: []
	W1104 12:11:29.476245   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:11:29.476251   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:11:29.476305   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:11:29.511446   86402 cri.go:89] found id: ""
	I1104 12:11:29.511474   86402 logs.go:282] 0 containers: []
	W1104 12:11:29.511483   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:11:29.511489   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:11:29.511541   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:11:29.543714   86402 cri.go:89] found id: ""
	I1104 12:11:29.543742   86402 logs.go:282] 0 containers: []
	W1104 12:11:29.543754   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:11:29.543761   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:11:29.543840   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:11:29.577429   86402 cri.go:89] found id: ""
	I1104 12:11:29.577456   86402 logs.go:282] 0 containers: []
	W1104 12:11:29.577466   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:11:29.577473   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:11:29.577534   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:11:29.608430   86402 cri.go:89] found id: ""
	I1104 12:11:29.608457   86402 logs.go:282] 0 containers: []
	W1104 12:11:29.608475   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:11:29.608483   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:11:29.608539   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:11:29.640029   86402 cri.go:89] found id: ""
	I1104 12:11:29.640057   86402 logs.go:282] 0 containers: []
	W1104 12:11:29.640068   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:11:29.640078   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:11:29.640092   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:11:29.691170   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:11:29.691202   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:11:29.704949   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:11:29.704987   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:11:29.766856   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:11:29.766884   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:11:29.766898   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:11:29.847487   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:11:29.847525   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:11:32.382925   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:11:32.395889   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:11:32.395943   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:11:32.428711   86402 cri.go:89] found id: ""
	I1104 12:11:32.428736   86402 logs.go:282] 0 containers: []
	W1104 12:11:32.428749   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:11:32.428755   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:11:32.428810   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:11:32.463269   86402 cri.go:89] found id: ""
	I1104 12:11:32.463295   86402 logs.go:282] 0 containers: []
	W1104 12:11:32.463307   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:11:32.463313   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:11:32.463372   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:11:32.496098   86402 cri.go:89] found id: ""
	I1104 12:11:32.496125   86402 logs.go:282] 0 containers: []
	W1104 12:11:32.496135   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:11:32.496142   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:11:32.496213   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:11:32.528729   86402 cri.go:89] found id: ""
	I1104 12:11:32.528760   86402 logs.go:282] 0 containers: []
	W1104 12:11:32.528771   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:11:32.528778   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:11:32.528860   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:11:32.567290   86402 cri.go:89] found id: ""
	I1104 12:11:32.567321   86402 logs.go:282] 0 containers: []
	W1104 12:11:32.567332   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:11:32.567338   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:11:32.567397   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:11:32.608932   86402 cri.go:89] found id: ""
	I1104 12:11:32.608962   86402 logs.go:282] 0 containers: []
	W1104 12:11:32.608973   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:11:32.608980   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:11:32.609037   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:11:32.641128   86402 cri.go:89] found id: ""
	I1104 12:11:32.641155   86402 logs.go:282] 0 containers: []
	W1104 12:11:32.641164   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:11:32.641171   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:11:32.641239   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:11:32.675651   86402 cri.go:89] found id: ""
	I1104 12:11:32.675682   86402 logs.go:282] 0 containers: []
	W1104 12:11:32.675694   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:11:32.675704   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:11:32.675719   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:11:32.742369   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:11:32.742406   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:11:32.742419   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:11:32.823371   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:11:32.823412   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:11:32.862243   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:11:32.862270   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:11:32.910961   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:11:32.910987   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:11:35.425742   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:11:35.438553   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:11:35.438615   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:11:35.475160   86402 cri.go:89] found id: ""
	I1104 12:11:35.475189   86402 logs.go:282] 0 containers: []
	W1104 12:11:35.475201   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:11:35.475209   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:11:35.475267   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:11:35.517193   86402 cri.go:89] found id: ""
	I1104 12:11:35.517239   86402 logs.go:282] 0 containers: []
	W1104 12:11:35.517252   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:11:35.517260   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:11:35.517329   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:11:35.552941   86402 cri.go:89] found id: ""
	I1104 12:11:35.552967   86402 logs.go:282] 0 containers: []
	W1104 12:11:35.552978   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:11:35.552985   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:11:35.553056   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:11:35.589960   86402 cri.go:89] found id: ""
	I1104 12:11:35.589983   86402 logs.go:282] 0 containers: []
	W1104 12:11:35.589994   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:11:35.590001   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:11:35.590063   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:11:35.624546   86402 cri.go:89] found id: ""
	I1104 12:11:35.624575   86402 logs.go:282] 0 containers: []
	W1104 12:11:35.624587   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:11:35.624595   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:11:35.624655   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:11:35.657855   86402 cri.go:89] found id: ""
	I1104 12:11:35.657885   86402 logs.go:282] 0 containers: []
	W1104 12:11:35.657896   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:11:35.657903   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:11:35.657957   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:11:35.691465   86402 cri.go:89] found id: ""
	I1104 12:11:35.691498   86402 logs.go:282] 0 containers: []
	W1104 12:11:35.691509   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:11:35.691516   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:11:35.691587   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:11:35.727520   86402 cri.go:89] found id: ""
	I1104 12:11:35.727548   86402 logs.go:282] 0 containers: []
	W1104 12:11:35.727558   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:11:35.727569   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:11:35.727584   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:11:35.777876   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:11:35.777912   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:11:35.790790   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:11:35.790817   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:11:35.856780   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:11:35.856805   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:11:35.856819   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:11:35.936769   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:11:35.936812   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:11:38.474827   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:11:38.488151   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:11:38.488221   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:11:38.523010   86402 cri.go:89] found id: ""
	I1104 12:11:38.523042   86402 logs.go:282] 0 containers: []
	W1104 12:11:38.523053   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:11:38.523061   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:11:38.523117   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:11:38.558065   86402 cri.go:89] found id: ""
	I1104 12:11:38.558093   86402 logs.go:282] 0 containers: []
	W1104 12:11:38.558102   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:11:38.558107   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:11:38.558153   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:11:38.590676   86402 cri.go:89] found id: ""
	I1104 12:11:38.590704   86402 logs.go:282] 0 containers: []
	W1104 12:11:38.590715   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:11:38.590723   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:11:38.590780   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:11:38.623762   86402 cri.go:89] found id: ""
	I1104 12:11:38.623793   86402 logs.go:282] 0 containers: []
	W1104 12:11:38.623804   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:11:38.623811   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:11:38.623870   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:11:38.655918   86402 cri.go:89] found id: ""
	I1104 12:11:38.655947   86402 logs.go:282] 0 containers: []
	W1104 12:11:38.655958   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:11:38.655966   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:11:38.656028   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:11:38.691200   86402 cri.go:89] found id: ""
	I1104 12:11:38.691228   86402 logs.go:282] 0 containers: []
	W1104 12:11:38.691238   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:11:38.691245   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:11:38.691302   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:11:38.724725   86402 cri.go:89] found id: ""
	I1104 12:11:38.724748   86402 logs.go:282] 0 containers: []
	W1104 12:11:38.724756   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:11:38.724761   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:11:38.724819   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:11:38.756333   86402 cri.go:89] found id: ""
	I1104 12:11:38.756360   86402 logs.go:282] 0 containers: []
	W1104 12:11:38.756370   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:11:38.756381   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:11:38.756395   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:11:38.807722   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:11:38.807756   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:11:38.821055   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:11:38.821079   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:11:38.886629   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:11:38.886656   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:11:38.886671   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:11:38.960958   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:11:38.960999   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:11:41.503471   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:11:41.515994   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:11:41.516065   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:11:41.549936   86402 cri.go:89] found id: ""
	I1104 12:11:41.549960   86402 logs.go:282] 0 containers: []
	W1104 12:11:41.549968   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:11:41.549975   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:11:41.550033   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:11:41.584565   86402 cri.go:89] found id: ""
	I1104 12:11:41.584590   86402 logs.go:282] 0 containers: []
	W1104 12:11:41.584602   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:11:41.584610   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:11:41.584660   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:11:41.616427   86402 cri.go:89] found id: ""
	I1104 12:11:41.616450   86402 logs.go:282] 0 containers: []
	W1104 12:11:41.616458   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:11:41.616463   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:11:41.616510   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:11:41.650835   86402 cri.go:89] found id: ""
	I1104 12:11:41.650864   86402 logs.go:282] 0 containers: []
	W1104 12:11:41.650875   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:11:41.650882   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:11:41.650946   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:11:41.685899   86402 cri.go:89] found id: ""
	I1104 12:11:41.685921   86402 logs.go:282] 0 containers: []
	W1104 12:11:41.685928   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:11:41.685934   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:11:41.685979   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:11:41.718730   86402 cri.go:89] found id: ""
	I1104 12:11:41.718757   86402 logs.go:282] 0 containers: []
	W1104 12:11:41.718773   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:11:41.718782   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:11:41.718837   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:11:41.748843   86402 cri.go:89] found id: ""
	I1104 12:11:41.748875   86402 logs.go:282] 0 containers: []
	W1104 12:11:41.748887   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:11:41.748895   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:11:41.748963   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:11:41.780225   86402 cri.go:89] found id: ""
	I1104 12:11:41.780251   86402 logs.go:282] 0 containers: []
	W1104 12:11:41.780260   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:11:41.780268   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:11:41.780285   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:11:41.830864   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:11:41.830893   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:11:41.844252   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:11:41.844279   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:11:41.908514   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:11:41.908542   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:11:41.908554   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:11:41.988545   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:11:41.988582   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:11:44.527641   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:11:44.540026   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:11:44.540108   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:11:44.574530   86402 cri.go:89] found id: ""
	I1104 12:11:44.574559   86402 logs.go:282] 0 containers: []
	W1104 12:11:44.574570   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:11:44.574577   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:11:44.574638   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:11:44.606073   86402 cri.go:89] found id: ""
	I1104 12:11:44.606103   86402 logs.go:282] 0 containers: []
	W1104 12:11:44.606114   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:11:44.606121   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:11:44.606185   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:11:44.639750   86402 cri.go:89] found id: ""
	I1104 12:11:44.639775   86402 logs.go:282] 0 containers: []
	W1104 12:11:44.639784   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:11:44.639792   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:11:44.639850   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:11:44.673528   86402 cri.go:89] found id: ""
	I1104 12:11:44.673557   86402 logs.go:282] 0 containers: []
	W1104 12:11:44.673565   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:11:44.673573   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:11:44.673625   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:11:44.705928   86402 cri.go:89] found id: ""
	I1104 12:11:44.705956   86402 logs.go:282] 0 containers: []
	W1104 12:11:44.705966   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:11:44.705973   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:11:44.706032   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:11:44.736779   86402 cri.go:89] found id: ""
	I1104 12:11:44.736811   86402 logs.go:282] 0 containers: []
	W1104 12:11:44.736822   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:11:44.736830   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:11:44.736886   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:11:44.769929   86402 cri.go:89] found id: ""
	I1104 12:11:44.769956   86402 logs.go:282] 0 containers: []
	W1104 12:11:44.769964   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:11:44.769970   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:11:44.770015   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:11:44.800818   86402 cri.go:89] found id: ""
	I1104 12:11:44.800846   86402 logs.go:282] 0 containers: []
	W1104 12:11:44.800855   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:11:44.800863   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:11:44.800873   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:11:44.853610   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:11:44.853641   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:11:44.866656   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:11:44.866683   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:11:44.936386   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:11:44.936412   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:11:44.936425   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:11:45.011789   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:11:45.011823   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:11:47.548672   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:11:47.563082   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:11:47.563157   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:11:47.598722   86402 cri.go:89] found id: ""
	I1104 12:11:47.598748   86402 logs.go:282] 0 containers: []
	W1104 12:11:47.598756   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:11:47.598762   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:11:47.598809   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:11:47.633376   86402 cri.go:89] found id: ""
	I1104 12:11:47.633412   86402 logs.go:282] 0 containers: []
	W1104 12:11:47.633421   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:11:47.633428   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:11:47.633486   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:11:47.666059   86402 cri.go:89] found id: ""
	I1104 12:11:47.666087   86402 logs.go:282] 0 containers: []
	W1104 12:11:47.666095   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:11:47.666101   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:11:47.666147   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:11:47.700659   86402 cri.go:89] found id: ""
	I1104 12:11:47.700690   86402 logs.go:282] 0 containers: []
	W1104 12:11:47.700704   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:11:47.700711   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:11:47.700771   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:11:47.732901   86402 cri.go:89] found id: ""
	I1104 12:11:47.732927   86402 logs.go:282] 0 containers: []
	W1104 12:11:47.732934   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:11:47.732940   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:11:47.732984   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:11:47.765371   86402 cri.go:89] found id: ""
	I1104 12:11:47.765398   86402 logs.go:282] 0 containers: []
	W1104 12:11:47.765418   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:11:47.765425   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:11:47.765487   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:11:47.797043   86402 cri.go:89] found id: ""
	I1104 12:11:47.797077   86402 logs.go:282] 0 containers: []
	W1104 12:11:47.797089   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:11:47.797096   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:11:47.797159   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:11:47.828140   86402 cri.go:89] found id: ""
	I1104 12:11:47.828172   86402 logs.go:282] 0 containers: []
	W1104 12:11:47.828184   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:11:47.828194   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:11:47.828208   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:11:47.911398   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:11:47.911434   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:11:47.948042   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:11:47.948071   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:11:47.999603   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:11:47.999638   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:11:48.013818   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:11:48.013856   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:11:48.082679   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:11:50.583325   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:11:50.595272   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:11:50.595346   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:11:50.630857   86402 cri.go:89] found id: ""
	I1104 12:11:50.630883   86402 logs.go:282] 0 containers: []
	W1104 12:11:50.630892   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:11:50.630899   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:11:50.630965   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:11:50.663025   86402 cri.go:89] found id: ""
	I1104 12:11:50.663049   86402 logs.go:282] 0 containers: []
	W1104 12:11:50.663058   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:11:50.663063   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:11:50.663109   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:11:50.695371   86402 cri.go:89] found id: ""
	I1104 12:11:50.695402   86402 logs.go:282] 0 containers: []
	W1104 12:11:50.695413   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:11:50.695421   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:11:50.695480   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:11:50.728805   86402 cri.go:89] found id: ""
	I1104 12:11:50.728827   86402 logs.go:282] 0 containers: []
	W1104 12:11:50.728836   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:11:50.728841   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:11:50.728902   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:11:50.762837   86402 cri.go:89] found id: ""
	I1104 12:11:50.762868   86402 logs.go:282] 0 containers: []
	W1104 12:11:50.762878   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:11:50.762885   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:11:50.762941   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:11:50.802531   86402 cri.go:89] found id: ""
	I1104 12:11:50.802556   86402 logs.go:282] 0 containers: []
	W1104 12:11:50.802564   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:11:50.802569   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:11:50.802613   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:11:50.835124   86402 cri.go:89] found id: ""
	I1104 12:11:50.835161   86402 logs.go:282] 0 containers: []
	W1104 12:11:50.835173   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:11:50.835180   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:11:50.835234   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:11:50.869265   86402 cri.go:89] found id: ""
	I1104 12:11:50.869295   86402 logs.go:282] 0 containers: []
	W1104 12:11:50.869308   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:11:50.869318   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:11:50.869330   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:11:50.919371   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:11:50.919405   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:11:50.932165   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:11:50.932195   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:11:50.993935   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:11:50.993959   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:11:50.993972   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:11:51.071816   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:11:51.071848   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:11:53.608347   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:11:53.620842   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:11:53.620902   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:11:53.652870   86402 cri.go:89] found id: ""
	I1104 12:11:53.652896   86402 logs.go:282] 0 containers: []
	W1104 12:11:53.652909   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:11:53.652917   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:11:53.652980   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:11:53.684842   86402 cri.go:89] found id: ""
	I1104 12:11:53.684878   86402 logs.go:282] 0 containers: []
	W1104 12:11:53.684889   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:11:53.684897   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:11:53.684956   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:11:53.722505   86402 cri.go:89] found id: ""
	I1104 12:11:53.722531   86402 logs.go:282] 0 containers: []
	W1104 12:11:53.722539   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:11:53.722544   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:11:53.722603   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:11:53.753831   86402 cri.go:89] found id: ""
	I1104 12:11:53.753858   86402 logs.go:282] 0 containers: []
	W1104 12:11:53.753866   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:11:53.753872   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:11:53.753918   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:11:53.786112   86402 cri.go:89] found id: ""
	I1104 12:11:53.786139   86402 logs.go:282] 0 containers: []
	W1104 12:11:53.786150   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:11:53.786157   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:11:53.786218   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:11:53.820446   86402 cri.go:89] found id: ""
	I1104 12:11:53.820472   86402 logs.go:282] 0 containers: []
	W1104 12:11:53.820487   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:11:53.820493   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:11:53.820552   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:11:53.855631   86402 cri.go:89] found id: ""
	I1104 12:11:53.855655   86402 logs.go:282] 0 containers: []
	W1104 12:11:53.855665   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:11:53.855673   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:11:53.855727   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:11:53.887953   86402 cri.go:89] found id: ""
	I1104 12:11:53.887983   86402 logs.go:282] 0 containers: []
	W1104 12:11:53.887994   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:11:53.888004   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:11:53.888023   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:11:53.954408   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:11:53.954430   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:11:53.954442   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:11:54.028549   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:11:54.028584   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:11:54.070869   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:11:54.070895   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:11:54.123676   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:11:54.123715   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:11:56.639480   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:11:56.652651   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:11:56.652709   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:11:56.689397   86402 cri.go:89] found id: ""
	I1104 12:11:56.689425   86402 logs.go:282] 0 containers: []
	W1104 12:11:56.689443   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:11:56.689452   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:11:56.689517   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:11:56.725197   86402 cri.go:89] found id: ""
	I1104 12:11:56.725234   86402 logs.go:282] 0 containers: []
	W1104 12:11:56.725246   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:11:56.725254   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:11:56.725308   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:11:56.759043   86402 cri.go:89] found id: ""
	I1104 12:11:56.759073   86402 logs.go:282] 0 containers: []
	W1104 12:11:56.759084   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:11:56.759090   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:11:56.759141   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:11:56.792268   86402 cri.go:89] found id: ""
	I1104 12:11:56.792296   86402 logs.go:282] 0 containers: []
	W1104 12:11:56.792307   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:11:56.792314   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:11:56.792375   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:11:56.823668   86402 cri.go:89] found id: ""
	I1104 12:11:56.823692   86402 logs.go:282] 0 containers: []
	W1104 12:11:56.823702   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:11:56.823709   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:11:56.823769   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:11:56.861812   86402 cri.go:89] found id: ""
	I1104 12:11:56.861837   86402 logs.go:282] 0 containers: []
	W1104 12:11:56.861845   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:11:56.861851   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:11:56.861902   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:11:56.894037   86402 cri.go:89] found id: ""
	I1104 12:11:56.894067   86402 logs.go:282] 0 containers: []
	W1104 12:11:56.894075   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:11:56.894080   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:11:56.894133   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:11:56.925603   86402 cri.go:89] found id: ""
	I1104 12:11:56.925634   86402 logs.go:282] 0 containers: []
	W1104 12:11:56.925646   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:11:56.925656   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:11:56.925669   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:11:56.961504   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:11:56.961530   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:11:57.012666   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:11:57.012700   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:11:57.025887   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:11:57.025921   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:11:57.097219   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:11:57.097257   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:11:57.097272   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:11:59.671179   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:11:59.684642   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:11:59.684718   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:11:59.721599   86402 cri.go:89] found id: ""
	I1104 12:11:59.721622   86402 logs.go:282] 0 containers: []
	W1104 12:11:59.721631   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:11:59.721640   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:11:59.721693   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:11:59.757423   86402 cri.go:89] found id: ""
	I1104 12:11:59.757453   86402 logs.go:282] 0 containers: []
	W1104 12:11:59.757461   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:11:59.757466   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:11:59.757525   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:11:59.794036   86402 cri.go:89] found id: ""
	I1104 12:11:59.794071   86402 logs.go:282] 0 containers: []
	W1104 12:11:59.794081   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:11:59.794089   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:11:59.794148   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:11:59.830098   86402 cri.go:89] found id: ""
	I1104 12:11:59.830123   86402 logs.go:282] 0 containers: []
	W1104 12:11:59.830134   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:11:59.830142   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:11:59.830207   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:11:59.867791   86402 cri.go:89] found id: ""
	I1104 12:11:59.867815   86402 logs.go:282] 0 containers: []
	W1104 12:11:59.867823   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:11:59.867828   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:11:59.867879   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:11:59.903579   86402 cri.go:89] found id: ""
	I1104 12:11:59.903607   86402 logs.go:282] 0 containers: []
	W1104 12:11:59.903614   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:11:59.903620   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:11:59.903667   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:11:59.940955   86402 cri.go:89] found id: ""
	I1104 12:11:59.940977   86402 logs.go:282] 0 containers: []
	W1104 12:11:59.940984   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:11:59.940989   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:11:59.941034   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:11:59.977626   86402 cri.go:89] found id: ""
	I1104 12:11:59.977653   86402 logs.go:282] 0 containers: []
	W1104 12:11:59.977663   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:11:59.977674   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:11:59.977687   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:12:00.032280   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:12:00.032312   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:12:00.045965   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:12:00.045991   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:12:00.123578   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:12:00.123608   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:12:00.123625   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:12:00.208309   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:12:00.208340   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:12:02.746303   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:12:02.758892   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:12:02.758967   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:12:02.792775   86402 cri.go:89] found id: ""
	I1104 12:12:02.792803   86402 logs.go:282] 0 containers: []
	W1104 12:12:02.792815   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:12:02.792822   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:12:02.792878   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:12:02.831073   86402 cri.go:89] found id: ""
	I1104 12:12:02.831097   86402 logs.go:282] 0 containers: []
	W1104 12:12:02.831108   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:12:02.831115   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:12:02.831174   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:12:02.863530   86402 cri.go:89] found id: ""
	I1104 12:12:02.863557   86402 logs.go:282] 0 containers: []
	W1104 12:12:02.863568   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:12:02.863574   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:12:02.863641   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:12:02.894894   86402 cri.go:89] found id: ""
	I1104 12:12:02.894924   86402 logs.go:282] 0 containers: []
	W1104 12:12:02.894934   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:12:02.894942   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:12:02.894996   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:12:02.930052   86402 cri.go:89] found id: ""
	I1104 12:12:02.930081   86402 logs.go:282] 0 containers: []
	W1104 12:12:02.930092   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:12:02.930100   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:12:02.930160   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:12:02.964503   86402 cri.go:89] found id: ""
	I1104 12:12:02.964532   86402 logs.go:282] 0 containers: []
	W1104 12:12:02.964544   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:12:02.964551   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:12:02.964610   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:12:02.998065   86402 cri.go:89] found id: ""
	I1104 12:12:02.998088   86402 logs.go:282] 0 containers: []
	W1104 12:12:02.998096   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:12:02.998102   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:12:02.998148   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:12:03.033579   86402 cri.go:89] found id: ""
	I1104 12:12:03.033604   86402 logs.go:282] 0 containers: []
	W1104 12:12:03.033613   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:12:03.033621   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:12:03.033630   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:12:03.086215   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:12:03.086249   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:12:03.100100   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:12:03.100136   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:12:03.168116   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:12:03.168150   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:12:03.168165   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:12:03.253608   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:12:03.253642   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:12:05.792913   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:12:05.806494   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:12:05.806568   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:12:05.854379   86402 cri.go:89] found id: ""
	I1104 12:12:05.854406   86402 logs.go:282] 0 containers: []
	W1104 12:12:05.854417   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:12:05.854425   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:12:05.854503   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:12:05.886144   86402 cri.go:89] found id: ""
	I1104 12:12:05.886169   86402 logs.go:282] 0 containers: []
	W1104 12:12:05.886179   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:12:05.886186   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:12:05.886248   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:12:05.917462   86402 cri.go:89] found id: ""
	I1104 12:12:05.917482   86402 logs.go:282] 0 containers: []
	W1104 12:12:05.917492   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:12:05.917499   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:12:05.917550   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:12:05.954065   86402 cri.go:89] found id: ""
	I1104 12:12:05.954099   86402 logs.go:282] 0 containers: []
	W1104 12:12:05.954110   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:12:05.954120   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:12:05.954194   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:12:05.990935   86402 cri.go:89] found id: ""
	I1104 12:12:05.990966   86402 logs.go:282] 0 containers: []
	W1104 12:12:05.990977   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:12:05.990984   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:12:05.991050   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:12:06.032175   86402 cri.go:89] found id: ""
	I1104 12:12:06.032198   86402 logs.go:282] 0 containers: []
	W1104 12:12:06.032206   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:12:06.032211   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:12:06.032269   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:12:06.069215   86402 cri.go:89] found id: ""
	I1104 12:12:06.069262   86402 logs.go:282] 0 containers: []
	W1104 12:12:06.069275   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:12:06.069282   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:12:06.069340   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:12:06.103065   86402 cri.go:89] found id: ""
	I1104 12:12:06.103106   86402 logs.go:282] 0 containers: []
	W1104 12:12:06.103117   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:12:06.103127   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:12:06.103145   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:12:06.184111   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:12:06.184135   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:12:06.184149   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:12:06.272720   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:12:06.272760   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:12:06.315596   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:12:06.315636   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:12:06.376054   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:12:06.376110   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:12:08.890463   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:12:08.904272   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:12:08.904354   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:12:08.935677   86402 cri.go:89] found id: ""
	I1104 12:12:08.935701   86402 logs.go:282] 0 containers: []
	W1104 12:12:08.935710   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:12:08.935715   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:12:08.935761   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:12:08.966969   86402 cri.go:89] found id: ""
	I1104 12:12:08.966993   86402 logs.go:282] 0 containers: []
	W1104 12:12:08.967004   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:12:08.967011   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:12:08.967072   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:12:08.998753   86402 cri.go:89] found id: ""
	I1104 12:12:08.998778   86402 logs.go:282] 0 containers: []
	W1104 12:12:08.998786   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:12:08.998790   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:12:08.998852   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:12:09.031901   86402 cri.go:89] found id: ""
	I1104 12:12:09.031925   86402 logs.go:282] 0 containers: []
	W1104 12:12:09.031934   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:12:09.031940   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:12:09.032000   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:12:09.071478   86402 cri.go:89] found id: ""
	I1104 12:12:09.071500   86402 logs.go:282] 0 containers: []
	W1104 12:12:09.071508   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:12:09.071513   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:12:09.071564   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:12:09.107593   86402 cri.go:89] found id: ""
	I1104 12:12:09.107621   86402 logs.go:282] 0 containers: []
	W1104 12:12:09.107629   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:12:09.107635   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:12:09.107693   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:12:09.140899   86402 cri.go:89] found id: ""
	I1104 12:12:09.140923   86402 logs.go:282] 0 containers: []
	W1104 12:12:09.140934   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:12:09.140942   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:12:09.141000   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:12:09.174279   86402 cri.go:89] found id: ""
	I1104 12:12:09.174307   86402 logs.go:282] 0 containers: []
	W1104 12:12:09.174318   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:12:09.174330   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:12:09.174405   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:12:09.226340   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:12:09.226371   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:12:09.239573   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:12:09.239600   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:12:09.306180   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:12:09.306201   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:12:09.306212   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:12:09.385039   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:12:09.385072   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:12:11.924105   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:12:11.936623   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:12:11.936685   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:12:11.968026   86402 cri.go:89] found id: ""
	I1104 12:12:11.968056   86402 logs.go:282] 0 containers: []
	W1104 12:12:11.968067   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:12:11.968074   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:12:11.968139   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:12:12.001193   86402 cri.go:89] found id: ""
	I1104 12:12:12.001218   86402 logs.go:282] 0 containers: []
	W1104 12:12:12.001245   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:12:12.001252   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:12:12.001311   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:12:12.035167   86402 cri.go:89] found id: ""
	I1104 12:12:12.035190   86402 logs.go:282] 0 containers: []
	W1104 12:12:12.035199   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:12:12.035204   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:12:12.035250   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:12:12.068412   86402 cri.go:89] found id: ""
	I1104 12:12:12.068440   86402 logs.go:282] 0 containers: []
	W1104 12:12:12.068450   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:12:12.068458   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:12:12.068515   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:12:12.099965   86402 cri.go:89] found id: ""
	I1104 12:12:12.099991   86402 logs.go:282] 0 containers: []
	W1104 12:12:12.100002   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:12:12.100009   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:12:12.100066   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:12:12.133413   86402 cri.go:89] found id: ""
	I1104 12:12:12.133442   86402 logs.go:282] 0 containers: []
	W1104 12:12:12.133453   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:12:12.133460   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:12:12.133520   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:12:12.169007   86402 cri.go:89] found id: ""
	I1104 12:12:12.169036   86402 logs.go:282] 0 containers: []
	W1104 12:12:12.169046   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:12:12.169053   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:12:12.169112   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:12:12.200592   86402 cri.go:89] found id: ""
	I1104 12:12:12.200621   86402 logs.go:282] 0 containers: []
	W1104 12:12:12.200635   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:12:12.200643   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:12:12.200657   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:12:12.244609   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:12:12.244644   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:12:12.299770   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:12:12.299804   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:12:12.324354   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:12:12.324395   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:12:12.385605   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:12:12.385632   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:12:12.385661   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:12:14.964867   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:12:14.977918   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:12:14.977991   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:12:15.012865   86402 cri.go:89] found id: ""
	I1104 12:12:15.012894   86402 logs.go:282] 0 containers: []
	W1104 12:12:15.012906   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:12:15.012913   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:12:15.012977   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:12:15.046548   86402 cri.go:89] found id: ""
	I1104 12:12:15.046574   86402 logs.go:282] 0 containers: []
	W1104 12:12:15.046583   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:12:15.046589   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:12:15.046636   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:12:15.079310   86402 cri.go:89] found id: ""
	I1104 12:12:15.079336   86402 logs.go:282] 0 containers: []
	W1104 12:12:15.079347   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:12:15.079353   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:12:15.079412   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:12:15.110595   86402 cri.go:89] found id: ""
	I1104 12:12:15.110625   86402 logs.go:282] 0 containers: []
	W1104 12:12:15.110636   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:12:15.110648   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:12:15.110716   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:12:15.143362   86402 cri.go:89] found id: ""
	I1104 12:12:15.143391   86402 logs.go:282] 0 containers: []
	W1104 12:12:15.143403   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:12:15.143410   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:12:15.143533   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:12:15.173973   86402 cri.go:89] found id: ""
	I1104 12:12:15.174000   86402 logs.go:282] 0 containers: []
	W1104 12:12:15.174009   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:12:15.174017   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:12:15.174081   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:12:15.205021   86402 cri.go:89] found id: ""
	I1104 12:12:15.205049   86402 logs.go:282] 0 containers: []
	W1104 12:12:15.205060   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:12:15.205067   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:12:15.205113   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:12:15.240190   86402 cri.go:89] found id: ""
	I1104 12:12:15.240220   86402 logs.go:282] 0 containers: []
	W1104 12:12:15.240231   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:12:15.240249   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:12:15.240263   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:12:15.290208   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:12:15.290241   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:12:15.305216   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:12:15.305258   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:12:15.375713   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:12:15.375735   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:12:15.375746   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:12:15.456517   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:12:15.456552   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:12:17.992855   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:12:18.011370   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:12:18.011446   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:12:18.054937   86402 cri.go:89] found id: ""
	I1104 12:12:18.054961   86402 logs.go:282] 0 containers: []
	W1104 12:12:18.054968   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:12:18.054974   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:12:18.055026   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:12:18.107769   86402 cri.go:89] found id: ""
	I1104 12:12:18.107802   86402 logs.go:282] 0 containers: []
	W1104 12:12:18.107814   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:12:18.107821   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:12:18.107887   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:12:18.141932   86402 cri.go:89] found id: ""
	I1104 12:12:18.141959   86402 logs.go:282] 0 containers: []
	W1104 12:12:18.141968   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:12:18.141974   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:12:18.142021   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:12:18.174322   86402 cri.go:89] found id: ""
	I1104 12:12:18.174345   86402 logs.go:282] 0 containers: []
	W1104 12:12:18.174353   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:12:18.174361   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:12:18.174514   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:12:18.206742   86402 cri.go:89] found id: ""
	I1104 12:12:18.206766   86402 logs.go:282] 0 containers: []
	W1104 12:12:18.206776   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:12:18.206782   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:12:18.206840   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:12:18.240322   86402 cri.go:89] found id: ""
	I1104 12:12:18.240345   86402 logs.go:282] 0 containers: []
	W1104 12:12:18.240358   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:12:18.240363   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:12:18.240420   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:12:18.272081   86402 cri.go:89] found id: ""
	I1104 12:12:18.272110   86402 logs.go:282] 0 containers: []
	W1104 12:12:18.272121   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:12:18.272128   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:12:18.272211   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:12:18.308604   86402 cri.go:89] found id: ""
	I1104 12:12:18.308629   86402 logs.go:282] 0 containers: []
	W1104 12:12:18.308637   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:12:18.308646   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:12:18.308655   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:12:18.392854   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:12:18.392892   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:12:18.429632   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:12:18.429665   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:12:18.481082   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:12:18.481120   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:12:18.494730   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:12:18.494758   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:12:18.562098   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
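Note: the "connection to the server localhost:8443 was refused" failure above means the describe-nodes step cannot reach the API server, so only host-level logs (kubelet, dmesg, CRI-O, container status) are collected. A quick manual check for that state, assuming curl is available on the node and using the endpoint quoted in the error, would be:

    # "connection refused" here corresponds to the describe-nodes failure above
    curl -sk https://localhost:8443/healthz || echo "kube-apiserver not reachable on localhost:8443"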
	I1104 12:12:21.063223   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:12:21.075655   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:12:21.075714   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:12:21.117762   86402 cri.go:89] found id: ""
	I1104 12:12:21.117794   86402 logs.go:282] 0 containers: []
	W1104 12:12:21.117807   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:12:21.117817   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:12:21.117881   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:12:21.153256   86402 cri.go:89] found id: ""
	I1104 12:12:21.153281   86402 logs.go:282] 0 containers: []
	W1104 12:12:21.153289   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:12:21.153295   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:12:21.153355   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:12:21.191477   86402 cri.go:89] found id: ""
	I1104 12:12:21.191519   86402 logs.go:282] 0 containers: []
	W1104 12:12:21.191539   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:12:21.191547   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:12:21.191618   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:12:21.228378   86402 cri.go:89] found id: ""
	I1104 12:12:21.228411   86402 logs.go:282] 0 containers: []
	W1104 12:12:21.228424   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:12:21.228431   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:12:21.228495   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:12:21.265452   86402 cri.go:89] found id: ""
	I1104 12:12:21.265483   86402 logs.go:282] 0 containers: []
	W1104 12:12:21.265493   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:12:21.265501   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:12:21.265561   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:12:21.301073   86402 cri.go:89] found id: ""
	I1104 12:12:21.301099   86402 logs.go:282] 0 containers: []
	W1104 12:12:21.301108   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:12:21.301114   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:12:21.301182   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:12:21.337952   86402 cri.go:89] found id: ""
	I1104 12:12:21.337977   86402 logs.go:282] 0 containers: []
	W1104 12:12:21.337986   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:12:21.337996   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:12:21.338053   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:12:21.371895   86402 cri.go:89] found id: ""
	I1104 12:12:21.371920   86402 logs.go:282] 0 containers: []
	W1104 12:12:21.371929   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:12:21.371937   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:12:21.371950   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:12:21.429757   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:12:21.429789   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:12:21.444365   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:12:21.444418   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:12:21.510971   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:12:21.510990   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:12:21.511002   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:12:21.593605   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:12:21.593639   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:12:24.130961   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:12:24.143387   86402 kubeadm.go:597] duration metric: took 4m4.25221988s to restartPrimaryControlPlane
	W1104 12:12:24.143472   86402 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1104 12:12:24.143499   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1104 12:12:28.876306   86402 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.732783523s)
	I1104 12:12:28.876377   86402 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1104 12:12:28.890455   86402 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1104 12:12:28.899660   86402 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1104 12:12:28.908658   86402 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1104 12:12:28.908675   86402 kubeadm.go:157] found existing configuration files:
	
	I1104 12:12:28.908715   86402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1104 12:12:28.916955   86402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1104 12:12:28.917013   86402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1104 12:12:28.927198   86402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1104 12:12:28.936868   86402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1104 12:12:28.936924   86402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1104 12:12:28.947246   86402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1104 12:12:28.956962   86402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1104 12:12:28.957015   86402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1104 12:12:28.967293   86402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1104 12:12:28.976975   86402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1104 12:12:28.977030   86402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
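Note: the grep/rm pairs above are minikube's stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443, and is removed otherwise (here all four are already missing). A minimal shell sketch of the same check, assembled from the commands in the log rather than from minikube source:

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
        || sudo rm -f "/etc/kubernetes/$f"
    done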
	I1104 12:12:28.988547   86402 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1104 12:12:29.198333   86402 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1104 12:14:25.090113   86402 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1104 12:14:25.090254   86402 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1104 12:14:25.091997   86402 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1104 12:14:25.092065   86402 kubeadm.go:310] [preflight] Running pre-flight checks
	I1104 12:14:25.092204   86402 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1104 12:14:25.092341   86402 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1104 12:14:25.092480   86402 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1104 12:14:25.092569   86402 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1104 12:14:25.094485   86402 out.go:235]   - Generating certificates and keys ...
	I1104 12:14:25.094582   86402 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1104 12:14:25.094664   86402 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1104 12:14:25.094799   86402 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1104 12:14:25.094891   86402 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1104 12:14:25.095003   86402 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1104 12:14:25.095086   86402 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1104 12:14:25.095186   86402 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1104 12:14:25.095240   86402 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1104 12:14:25.095319   86402 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1104 12:14:25.095403   86402 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1104 12:14:25.095481   86402 kubeadm.go:310] [certs] Using the existing "sa" key
	I1104 12:14:25.095554   86402 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1104 12:14:25.095614   86402 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1104 12:14:25.095676   86402 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1104 12:14:25.095752   86402 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1104 12:14:25.095828   86402 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1104 12:14:25.095970   86402 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1104 12:14:25.096102   86402 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1104 12:14:25.096169   86402 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1104 12:14:25.096262   86402 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1104 12:14:25.097799   86402 out.go:235]   - Booting up control plane ...
	I1104 12:14:25.097920   86402 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1104 12:14:25.098018   86402 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1104 12:14:25.098126   86402 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1104 12:14:25.098211   86402 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1104 12:14:25.098333   86402 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1104 12:14:25.098393   86402 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1104 12:14:25.098487   86402 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1104 12:14:25.098633   86402 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1104 12:14:25.098690   86402 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1104 12:14:25.098940   86402 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1104 12:14:25.099074   86402 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1104 12:14:25.099307   86402 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1104 12:14:25.099370   86402 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1104 12:14:25.099528   86402 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1104 12:14:25.099582   86402 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1104 12:14:25.099740   86402 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1104 12:14:25.099758   86402 kubeadm.go:310] 
	I1104 12:14:25.099815   86402 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1104 12:14:25.099880   86402 kubeadm.go:310] 		timed out waiting for the condition
	I1104 12:14:25.099889   86402 kubeadm.go:310] 
	I1104 12:14:25.099923   86402 kubeadm.go:310] 	This error is likely caused by:
	I1104 12:14:25.099952   86402 kubeadm.go:310] 		- The kubelet is not running
	I1104 12:14:25.100036   86402 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1104 12:14:25.100044   86402 kubeadm.go:310] 
	I1104 12:14:25.100197   86402 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1104 12:14:25.100237   86402 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1104 12:14:25.100267   86402 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1104 12:14:25.100273   86402 kubeadm.go:310] 
	I1104 12:14:25.100367   86402 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1104 12:14:25.100454   86402 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1104 12:14:25.100468   86402 kubeadm.go:310] 
	I1104 12:14:25.100600   86402 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1104 12:14:25.100718   86402 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1104 12:14:25.100821   86402 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1104 12:14:25.100903   86402 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1104 12:14:25.100970   86402 kubeadm.go:310] 
	W1104 12:14:25.101033   86402 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
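Note: kubeadm's error text above already names the next diagnostic steps. Collected into one snippet for convenience (commands copied from the message; CONTAINERID is kubeadm's placeholder, to be filled in from the ps output):

    sudo systemctl status kubelet
    sudo journalctl -xeu kubelet
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
    # once a failing container id is known:
    # sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID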
	
	I1104 12:14:25.101071   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1104 12:14:25.536184   86402 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1104 12:14:25.550453   86402 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1104 12:14:25.560308   86402 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1104 12:14:25.560327   86402 kubeadm.go:157] found existing configuration files:
	
	I1104 12:14:25.560368   86402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1104 12:14:25.569106   86402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1104 12:14:25.569189   86402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1104 12:14:25.578395   86402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1104 12:14:25.587402   86402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1104 12:14:25.587473   86402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1104 12:14:25.596827   86402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1104 12:14:25.605359   86402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1104 12:14:25.605420   86402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1104 12:14:25.614266   86402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1104 12:14:25.622522   86402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1104 12:14:25.622582   86402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1104 12:14:25.631876   86402 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1104 12:14:25.701080   86402 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1104 12:14:25.701168   86402 kubeadm.go:310] [preflight] Running pre-flight checks
	I1104 12:14:25.833997   86402 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1104 12:14:25.834138   86402 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1104 12:14:25.834258   86402 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1104 12:14:26.009165   86402 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1104 12:14:26.011976   86402 out.go:235]   - Generating certificates and keys ...
	I1104 12:14:26.012090   86402 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1104 12:14:26.012183   86402 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1104 12:14:26.012333   86402 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1104 12:14:26.012422   86402 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1104 12:14:26.012532   86402 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1104 12:14:26.012619   86402 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1104 12:14:26.012689   86402 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1104 12:14:26.012748   86402 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1104 12:14:26.012851   86402 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1104 12:14:26.012978   86402 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1104 12:14:26.013025   86402 kubeadm.go:310] [certs] Using the existing "sa" key
	I1104 12:14:26.013102   86402 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1104 12:14:26.399153   86402 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1104 12:14:26.470449   86402 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1104 12:14:27.078991   86402 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1104 12:14:27.181622   86402 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1104 12:14:27.205149   86402 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1104 12:14:27.205300   86402 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1104 12:14:27.205383   86402 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1104 12:14:27.355614   86402 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1104 12:14:27.357678   86402 out.go:235]   - Booting up control plane ...
	I1104 12:14:27.357840   86402 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1104 12:14:27.363942   86402 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1104 12:14:27.365004   86402 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1104 12:14:27.367237   86402 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1104 12:14:27.368087   86402 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1104 12:15:07.369845   86402 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1104 12:15:07.370222   86402 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1104 12:15:07.370464   86402 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1104 12:15:12.370802   86402 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1104 12:15:12.371041   86402 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1104 12:15:22.371417   86402 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1104 12:15:22.371584   86402 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1104 12:15:42.371725   86402 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1104 12:15:42.371932   86402 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1104 12:16:22.370871   86402 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1104 12:16:22.371150   86402 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1104 12:16:22.371181   86402 kubeadm.go:310] 
	I1104 12:16:22.371222   86402 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1104 12:16:22.371297   86402 kubeadm.go:310] 		timed out waiting for the condition
	I1104 12:16:22.371309   86402 kubeadm.go:310] 
	I1104 12:16:22.371371   86402 kubeadm.go:310] 	This error is likely caused by:
	I1104 12:16:22.371435   86402 kubeadm.go:310] 		- The kubelet is not running
	I1104 12:16:22.371576   86402 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1104 12:16:22.371588   86402 kubeadm.go:310] 
	I1104 12:16:22.371726   86402 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1104 12:16:22.371784   86402 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1104 12:16:22.371863   86402 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1104 12:16:22.371879   86402 kubeadm.go:310] 
	I1104 12:16:22.372004   86402 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1104 12:16:22.372155   86402 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1104 12:16:22.372172   86402 kubeadm.go:310] 
	I1104 12:16:22.372338   86402 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1104 12:16:22.372435   86402 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1104 12:16:22.372566   86402 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1104 12:16:22.372680   86402 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1104 12:16:22.372718   86402 kubeadm.go:310] 
	I1104 12:16:22.372948   86402 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1104 12:16:22.373110   86402 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1104 12:16:22.373289   86402 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1104 12:16:22.373328   86402 kubeadm.go:394] duration metric: took 8m2.53443537s to StartCluster
	I1104 12:16:22.373379   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:16:22.373431   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:16:22.410373   86402 cri.go:89] found id: ""
	I1104 12:16:22.410409   86402 logs.go:282] 0 containers: []
	W1104 12:16:22.410418   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:16:22.410424   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:16:22.410485   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:16:22.447939   86402 cri.go:89] found id: ""
	I1104 12:16:22.447963   86402 logs.go:282] 0 containers: []
	W1104 12:16:22.447971   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:16:22.447977   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:16:22.448021   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:16:22.479234   86402 cri.go:89] found id: ""
	I1104 12:16:22.479263   86402 logs.go:282] 0 containers: []
	W1104 12:16:22.479274   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:16:22.479280   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:16:22.479341   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:16:22.512783   86402 cri.go:89] found id: ""
	I1104 12:16:22.512814   86402 logs.go:282] 0 containers: []
	W1104 12:16:22.512825   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:16:22.512832   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:16:22.512895   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:16:22.549483   86402 cri.go:89] found id: ""
	I1104 12:16:22.549510   86402 logs.go:282] 0 containers: []
	W1104 12:16:22.549520   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:16:22.549527   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:16:22.549593   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:16:22.582339   86402 cri.go:89] found id: ""
	I1104 12:16:22.582382   86402 logs.go:282] 0 containers: []
	W1104 12:16:22.582393   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:16:22.582402   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:16:22.582471   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:16:22.613545   86402 cri.go:89] found id: ""
	I1104 12:16:22.613574   86402 logs.go:282] 0 containers: []
	W1104 12:16:22.613585   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:16:22.613593   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:16:22.613656   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:16:22.644488   86402 cri.go:89] found id: ""
	I1104 12:16:22.644517   86402 logs.go:282] 0 containers: []
	W1104 12:16:22.644528   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:16:22.644539   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:16:22.644551   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:16:22.681138   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:16:22.681169   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:16:22.734551   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:16:22.734586   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:16:22.750140   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:16:22.750178   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:16:22.837631   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:16:22.837657   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:16:22.837673   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
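Note: after 8m2s the second kubeadm init attempt has also timed out; every control-plane container query is still empty and the API server is unreachable, which is consistent with the kubelet never passing its health check. The probe kubeadm keeps retrying, and the log it points to, can be reproduced directly on the node (both commands appear verbatim in the output above):

    # the health probe from the [kubelet-check] lines
    curl -sSL http://localhost:10248/healthz
    # if it is refused, the kubelet journal is the place to look (same command minikube runs)
    sudo journalctl -u kubelet -n 400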
	W1104 12:16:22.961154   86402 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1104 12:16:22.961221   86402 out.go:270] * 
	* 
	W1104 12:16:22.961295   86402 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1104 12:16:22.961310   86402 out.go:270] * 
	* 
	W1104 12:16:22.962053   86402 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1104 12:16:22.965021   86402 out.go:201] 
	W1104 12:16:22.966262   86402 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1104 12:16:22.966326   86402 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1104 12:16:22.966377   86402 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1104 12:16:22.967953   86402 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-589257 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
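The captured kubeadm output above shows the kubelet on old-k8s-version-589257 never answering on localhost:10248, and the run's own diagnostics point at two follow-ups: the stderr warning asks for the kubelet unit to be enabled, and the closing suggestion is to retry with the systemd cgroup driver. A minimal manual retry along those lines (a sketch only, reusing the flags of the failed invocation verbatim; neither step is a verified fix for this failure) would be:

	out/minikube-linux-amd64 -p old-k8s-version-589257 ssh -- sudo systemctl enable kubelet.service
	out/minikube-linux-amd64 start -p old-k8s-version-589257 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd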
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-589257 -n old-k8s-version-589257
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-589257 -n old-k8s-version-589257: exit status 2 (243.546882ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-589257 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-589257 logs -n 25: (1.537771077s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p calico-528108 sudo                                  | calico-528108                | jenkins | v1.34.0 | 04 Nov 24 12:00 UTC | 04 Nov 24 12:00 UTC |
	|         | cri-dockerd --version                                  |                              |         |         |                     |                     |
	| ssh     | -p calico-528108 sudo                                  | calico-528108                | jenkins | v1.34.0 | 04 Nov 24 12:00 UTC |                     |
	|         | systemctl status containerd                            |                              |         |         |                     |                     |
	|         | --all --full --no-pager                                |                              |         |         |                     |                     |
	| ssh     | -p calico-528108 sudo                                  | calico-528108                | jenkins | v1.34.0 | 04 Nov 24 12:00 UTC | 04 Nov 24 12:00 UTC |
	|         | systemctl cat containerd                               |                              |         |         |                     |                     |
	|         | --no-pager                                             |                              |         |         |                     |                     |
	| ssh     | -p calico-528108 sudo cat                              | calico-528108                | jenkins | v1.34.0 | 04 Nov 24 12:00 UTC | 04 Nov 24 12:00 UTC |
	|         | /lib/systemd/system/containerd.service                 |                              |         |         |                     |                     |
	| ssh     | -p calico-528108 sudo cat                              | calico-528108                | jenkins | v1.34.0 | 04 Nov 24 12:00 UTC | 04 Nov 24 12:00 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p calico-528108 sudo                                  | calico-528108                | jenkins | v1.34.0 | 04 Nov 24 12:00 UTC | 04 Nov 24 12:00 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p calico-528108 sudo                                  | calico-528108                | jenkins | v1.34.0 | 04 Nov 24 12:00 UTC | 04 Nov 24 12:00 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p calico-528108 sudo                                  | calico-528108                | jenkins | v1.34.0 | 04 Nov 24 12:00 UTC | 04 Nov 24 12:00 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p calico-528108 sudo find                             | calico-528108                | jenkins | v1.34.0 | 04 Nov 24 12:00 UTC | 04 Nov 24 12:00 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p calico-528108 sudo crio                             | calico-528108                | jenkins | v1.34.0 | 04 Nov 24 12:00 UTC | 04 Nov 24 12:00 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p calico-528108                                       | calico-528108                | jenkins | v1.34.0 | 04 Nov 24 12:00 UTC | 04 Nov 24 12:00 UTC |
	| delete  | -p                                                     | disable-driver-mounts-457408 | jenkins | v1.34.0 | 04 Nov 24 12:00 UTC | 04 Nov 24 12:00 UTC |
	|         | disable-driver-mounts-457408                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-036892 | jenkins | v1.34.0 | 04 Nov 24 12:00 UTC | 04 Nov 24 12:01 UTC |
	|         | default-k8s-diff-port-036892                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-036892  | default-k8s-diff-port-036892 | jenkins | v1.34.0 | 04 Nov 24 12:01 UTC | 04 Nov 24 12:01 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-036892 | jenkins | v1.34.0 | 04 Nov 24 12:01 UTC |                     |
	|         | default-k8s-diff-port-036892                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-908370                  | no-preload-908370            | jenkins | v1.34.0 | 04 Nov 24 12:02 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-908370                                   | no-preload-908370            | jenkins | v1.34.0 | 04 Nov 24 12:02 UTC | 04 Nov 24 12:13 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-325116                 | embed-certs-325116           | jenkins | v1.34.0 | 04 Nov 24 12:02 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-589257        | old-k8s-version-589257       | jenkins | v1.34.0 | 04 Nov 24 12:02 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p embed-certs-325116                                  | embed-certs-325116           | jenkins | v1.34.0 | 04 Nov 24 12:02 UTC | 04 Nov 24 12:12 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-036892       | default-k8s-diff-port-036892 | jenkins | v1.34.0 | 04 Nov 24 12:04 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-589257                              | old-k8s-version-589257       | jenkins | v1.34.0 | 04 Nov 24 12:04 UTC | 04 Nov 24 12:04 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-036892 | jenkins | v1.34.0 | 04 Nov 24 12:04 UTC | 04 Nov 24 12:12 UTC |
	|         | default-k8s-diff-port-036892                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-589257             | old-k8s-version-589257       | jenkins | v1.34.0 | 04 Nov 24 12:04 UTC | 04 Nov 24 12:04 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-589257                              | old-k8s-version-589257       | jenkins | v1.34.0 | 04 Nov 24 12:04 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/11/04 12:04:21
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1104 12:04:21.684777   86402 out.go:345] Setting OutFile to fd 1 ...
	I1104 12:04:21.684885   86402 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1104 12:04:21.684893   86402 out.go:358] Setting ErrFile to fd 2...
	I1104 12:04:21.684897   86402 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1104 12:04:21.685085   86402 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19906-19898/.minikube/bin
	I1104 12:04:21.685618   86402 out.go:352] Setting JSON to false
	I1104 12:04:21.686501   86402 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":10013,"bootTime":1730711849,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1104 12:04:21.686603   86402 start.go:139] virtualization: kvm guest
	I1104 12:04:21.688652   86402 out.go:177] * [old-k8s-version-589257] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1104 12:04:21.690121   86402 notify.go:220] Checking for updates...
	I1104 12:04:21.690173   86402 out.go:177]   - MINIKUBE_LOCATION=19906
	I1104 12:04:21.691712   86402 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1104 12:04:21.693100   86402 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19906-19898/kubeconfig
	I1104 12:04:21.694334   86402 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19906-19898/.minikube
	I1104 12:04:21.695431   86402 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1104 12:04:21.696680   86402 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1104 12:04:21.698271   86402 config.go:182] Loaded profile config "old-k8s-version-589257": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1104 12:04:21.698697   86402 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:04:21.698738   86402 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:04:21.713382   86402 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46731
	I1104 12:04:21.713861   86402 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:04:21.714357   86402 main.go:141] libmachine: Using API Version  1
	I1104 12:04:21.714378   86402 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:04:21.714696   86402 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:04:21.714872   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .DriverName
	I1104 12:04:21.716711   86402 out.go:177] * Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	I1104 12:04:21.718136   86402 driver.go:394] Setting default libvirt URI to qemu:///system
	I1104 12:04:21.718573   86402 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:04:21.718617   86402 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:04:21.733074   86402 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45605
	I1104 12:04:21.733525   86402 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:04:21.733939   86402 main.go:141] libmachine: Using API Version  1
	I1104 12:04:21.733955   86402 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:04:21.734252   86402 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:04:21.734410   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .DriverName
	I1104 12:04:21.770049   86402 out.go:177] * Using the kvm2 driver based on existing profile
	I1104 12:04:21.771735   86402 start.go:297] selected driver: kvm2
	I1104 12:04:21.771748   86402 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-589257 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-589257 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.180 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1104 12:04:21.771851   86402 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1104 12:04:21.772615   86402 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1104 12:04:21.772709   86402 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19906-19898/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1104 12:04:21.787662   86402 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1104 12:04:21.788158   86402 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1104 12:04:21.788201   86402 cni.go:84] Creating CNI manager for ""
	I1104 12:04:21.788238   86402 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1104 12:04:21.788282   86402 start.go:340] cluster config:
	{Name:old-k8s-version-589257 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-589257 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.180 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1104 12:04:21.788422   86402 iso.go:125] acquiring lock: {Name:mk00c7dd6e02d348844f079f7574057e15cae010 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1104 12:04:21.790364   86402 out.go:177] * Starting "old-k8s-version-589257" primary control-plane node in "old-k8s-version-589257" cluster
	I1104 12:04:20.849476   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:04:20.393451   86301 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1104 12:04:20.393484   86301 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1104 12:04:20.393492   86301 cache.go:56] Caching tarball of preloaded images
	I1104 12:04:20.393580   86301 preload.go:172] Found /home/jenkins/minikube-integration/19906-19898/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1104 12:04:20.393594   86301 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1104 12:04:20.393670   86301 profile.go:143] Saving config to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/default-k8s-diff-port-036892/config.json ...
	I1104 12:04:20.393863   86301 start.go:360] acquireMachinesLock for default-k8s-diff-port-036892: {Name:mka4ed965095cfb58c9b0f5916edff39b4236a2f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1104 12:04:21.791568   86402 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1104 12:04:21.791599   86402 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1104 12:04:21.791608   86402 cache.go:56] Caching tarball of preloaded images
	I1104 12:04:21.791668   86402 preload.go:172] Found /home/jenkins/minikube-integration/19906-19898/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1104 12:04:21.791678   86402 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1104 12:04:21.791755   86402 profile.go:143] Saving config to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/old-k8s-version-589257/config.json ...
	I1104 12:04:21.791918   86402 start.go:360] acquireMachinesLock for old-k8s-version-589257: {Name:mka4ed965095cfb58c9b0f5916edff39b4236a2f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1104 12:04:26.929512   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:04:30.001546   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:04:36.081486   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:04:39.153496   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:04:45.233535   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:04:48.305510   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:04:54.385555   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:04:57.457513   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:05:03.537513   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:05:06.609487   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:05:12.689475   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:05:15.761508   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:05:21.841502   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:05:24.913609   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:05:30.993499   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:05:34.065502   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:05:40.145511   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:05:43.217478   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:05:49.297518   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:05:52.369526   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:05:58.449509   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:06:01.521498   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:06:07.601506   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:06:10.673509   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:06:16.753487   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:06:19.825549   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:06:25.905526   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:06:28.977526   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:06:35.057466   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:06:38.129670   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:06:44.209517   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:06:47.281541   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:06:53.361542   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:06:56.433564   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:07:02.513462   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:07:05.585513   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:07:11.665480   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:07:14.737542   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:07:17.742001   85759 start.go:364] duration metric: took 4m26.438155925s to acquireMachinesLock for "embed-certs-325116"
	I1104 12:07:17.742060   85759 start.go:96] Skipping create...Using existing machine configuration
	I1104 12:07:17.742068   85759 fix.go:54] fixHost starting: 
	I1104 12:07:17.742418   85759 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:07:17.742470   85759 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:07:17.758611   85759 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43597
	I1104 12:07:17.759173   85759 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:07:17.759750   85759 main.go:141] libmachine: Using API Version  1
	I1104 12:07:17.759774   85759 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:07:17.760116   85759 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:07:17.760326   85759 main.go:141] libmachine: (embed-certs-325116) Calling .DriverName
	I1104 12:07:17.760498   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetState
	I1104 12:07:17.762313   85759 fix.go:112] recreateIfNeeded on embed-certs-325116: state=Stopped err=<nil>
	I1104 12:07:17.762335   85759 main.go:141] libmachine: (embed-certs-325116) Calling .DriverName
	W1104 12:07:17.762469   85759 fix.go:138] unexpected machine state, will restart: <nil>
	I1104 12:07:17.764411   85759 out.go:177] * Restarting existing kvm2 VM for "embed-certs-325116" ...
	I1104 12:07:17.739255   85500 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1104 12:07:17.739306   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetMachineName
	I1104 12:07:17.739691   85500 buildroot.go:166] provisioning hostname "no-preload-908370"
	I1104 12:07:17.739718   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetMachineName
	I1104 12:07:17.739888   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHHostname
	I1104 12:07:17.741864   85500 machine.go:96] duration metric: took 4m37.421766695s to provisionDockerMachine
	I1104 12:07:17.741908   85500 fix.go:56] duration metric: took 4m37.442993443s for fixHost
	I1104 12:07:17.741918   85500 start.go:83] releasing machines lock for "no-preload-908370", held for 4m37.443015642s
	W1104 12:07:17.741938   85500 start.go:714] error starting host: provision: host is not running
	W1104 12:07:17.742034   85500 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I1104 12:07:17.742044   85500 start.go:729] Will try again in 5 seconds ...
	I1104 12:07:17.765958   85759 main.go:141] libmachine: (embed-certs-325116) Calling .Start
	I1104 12:07:17.766220   85759 main.go:141] libmachine: (embed-certs-325116) Ensuring networks are active...
	I1104 12:07:17.767191   85759 main.go:141] libmachine: (embed-certs-325116) Ensuring network default is active
	I1104 12:07:17.767589   85759 main.go:141] libmachine: (embed-certs-325116) Ensuring network mk-embed-certs-325116 is active
	I1104 12:07:17.767984   85759 main.go:141] libmachine: (embed-certs-325116) Getting domain xml...
	I1104 12:07:17.768804   85759 main.go:141] libmachine: (embed-certs-325116) Creating domain...
	I1104 12:07:18.996135   85759 main.go:141] libmachine: (embed-certs-325116) Waiting to get IP...
	I1104 12:07:18.997002   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:18.997542   85759 main.go:141] libmachine: (embed-certs-325116) DBG | unable to find current IP address of domain embed-certs-325116 in network mk-embed-certs-325116
	I1104 12:07:18.997615   85759 main.go:141] libmachine: (embed-certs-325116) DBG | I1104 12:07:18.997513   87021 retry.go:31] will retry after 239.606839ms: waiting for machine to come up
	I1104 12:07:19.239054   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:19.239579   85759 main.go:141] libmachine: (embed-certs-325116) DBG | unable to find current IP address of domain embed-certs-325116 in network mk-embed-certs-325116
	I1104 12:07:19.239602   85759 main.go:141] libmachine: (embed-certs-325116) DBG | I1104 12:07:19.239528   87021 retry.go:31] will retry after 303.459257ms: waiting for machine to come up
	I1104 12:07:19.545134   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:19.545597   85759 main.go:141] libmachine: (embed-certs-325116) DBG | unable to find current IP address of domain embed-certs-325116 in network mk-embed-certs-325116
	I1104 12:07:19.545633   85759 main.go:141] libmachine: (embed-certs-325116) DBG | I1104 12:07:19.545544   87021 retry.go:31] will retry after 394.511523ms: waiting for machine to come up
	I1104 12:07:19.942226   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:19.942607   85759 main.go:141] libmachine: (embed-certs-325116) DBG | unable to find current IP address of domain embed-certs-325116 in network mk-embed-certs-325116
	I1104 12:07:19.942630   85759 main.go:141] libmachine: (embed-certs-325116) DBG | I1104 12:07:19.942576   87021 retry.go:31] will retry after 381.618515ms: waiting for machine to come up
	I1104 12:07:20.326265   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:20.326707   85759 main.go:141] libmachine: (embed-certs-325116) DBG | unable to find current IP address of domain embed-certs-325116 in network mk-embed-certs-325116
	I1104 12:07:20.326738   85759 main.go:141] libmachine: (embed-certs-325116) DBG | I1104 12:07:20.326651   87021 retry.go:31] will retry after 584.226748ms: waiting for machine to come up
	I1104 12:07:20.912117   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:20.912575   85759 main.go:141] libmachine: (embed-certs-325116) DBG | unable to find current IP address of domain embed-certs-325116 in network mk-embed-certs-325116
	I1104 12:07:20.912607   85759 main.go:141] libmachine: (embed-certs-325116) DBG | I1104 12:07:20.912524   87021 retry.go:31] will retry after 770.080519ms: waiting for machine to come up
	I1104 12:07:22.742250   85500 start.go:360] acquireMachinesLock for no-preload-908370: {Name:mka4ed965095cfb58c9b0f5916edff39b4236a2f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1104 12:07:21.684620   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:21.685074   85759 main.go:141] libmachine: (embed-certs-325116) DBG | unable to find current IP address of domain embed-certs-325116 in network mk-embed-certs-325116
	I1104 12:07:21.685103   85759 main.go:141] libmachine: (embed-certs-325116) DBG | I1104 12:07:21.685026   87021 retry.go:31] will retry after 1.170018806s: waiting for machine to come up
	I1104 12:07:22.856736   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:22.857104   85759 main.go:141] libmachine: (embed-certs-325116) DBG | unable to find current IP address of domain embed-certs-325116 in network mk-embed-certs-325116
	I1104 12:07:22.857132   85759 main.go:141] libmachine: (embed-certs-325116) DBG | I1104 12:07:22.857048   87021 retry.go:31] will retry after 1.467304538s: waiting for machine to come up
	I1104 12:07:24.326735   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:24.327197   85759 main.go:141] libmachine: (embed-certs-325116) DBG | unable to find current IP address of domain embed-certs-325116 in network mk-embed-certs-325116
	I1104 12:07:24.327220   85759 main.go:141] libmachine: (embed-certs-325116) DBG | I1104 12:07:24.327148   87021 retry.go:31] will retry after 1.676202737s: waiting for machine to come up
	I1104 12:07:26.006035   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:26.006515   85759 main.go:141] libmachine: (embed-certs-325116) DBG | unable to find current IP address of domain embed-certs-325116 in network mk-embed-certs-325116
	I1104 12:07:26.006538   85759 main.go:141] libmachine: (embed-certs-325116) DBG | I1104 12:07:26.006460   87021 retry.go:31] will retry after 1.8778328s: waiting for machine to come up
	I1104 12:07:27.886226   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:27.886634   85759 main.go:141] libmachine: (embed-certs-325116) DBG | unable to find current IP address of domain embed-certs-325116 in network mk-embed-certs-325116
	I1104 12:07:27.886656   85759 main.go:141] libmachine: (embed-certs-325116) DBG | I1104 12:07:27.886579   87021 retry.go:31] will retry after 2.886548821s: waiting for machine to come up
	I1104 12:07:30.776677   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:30.777080   85759 main.go:141] libmachine: (embed-certs-325116) DBG | unable to find current IP address of domain embed-certs-325116 in network mk-embed-certs-325116
	I1104 12:07:30.777102   85759 main.go:141] libmachine: (embed-certs-325116) DBG | I1104 12:07:30.777039   87021 retry.go:31] will retry after 3.108966144s: waiting for machine to come up
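The repeated `retry.go:31` lines above show the kvm2 driver polling libvirt for the domain's DHCP lease with a delay that grows (and is jittered) between attempts. A minimal sketch of that wait-for-IP pattern, assuming a hypothetical `lookupIP` in place of the real lease query (illustrative only, not minikube's actual retry helper):

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP is a hypothetical stand-in for "ask libvirt for the domain's DHCP
// lease"; in the log it keeps failing until the guest has booted far enough.
func lookupIP(domain string) (string, error) {
	return "", errors.New("unable to find current IP address of domain " + domain)
}

func main() {
	domain := "embed-certs-325116"
	backoff := 200 * time.Millisecond
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(domain); err == nil {
			fmt.Println("Found IP for machine:", ip)
			return
		}
		// Jittered, growing delay, like the "will retry after Xms" lines above.
		wait := backoff + time.Duration(rand.Int63n(int64(backoff)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		backoff = backoff * 3 / 2
	}
	fmt.Println("timed out waiting for", domain)
}
```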
	I1104 12:07:35.049920   86301 start.go:364] duration metric: took 3m14.656022924s to acquireMachinesLock for "default-k8s-diff-port-036892"
	I1104 12:07:35.050007   86301 start.go:96] Skipping create...Using existing machine configuration
	I1104 12:07:35.050019   86301 fix.go:54] fixHost starting: 
	I1104 12:07:35.050381   86301 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:07:35.050436   86301 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:07:35.067928   86301 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38865
	I1104 12:07:35.068445   86301 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:07:35.068953   86301 main.go:141] libmachine: Using API Version  1
	I1104 12:07:35.068976   86301 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:07:35.069353   86301 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:07:35.069560   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .DriverName
	I1104 12:07:35.069692   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetState
	I1104 12:07:35.071231   86301 fix.go:112] recreateIfNeeded on default-k8s-diff-port-036892: state=Stopped err=<nil>
	I1104 12:07:35.071252   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .DriverName
	W1104 12:07:35.071401   86301 fix.go:138] unexpected machine state, will restart: <nil>
	I1104 12:07:35.073762   86301 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-036892" ...
	I1104 12:07:35.075114   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .Start
	I1104 12:07:35.075311   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Ensuring networks are active...
	I1104 12:07:35.076105   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Ensuring network default is active
	I1104 12:07:35.076534   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Ensuring network mk-default-k8s-diff-port-036892 is active
	I1104 12:07:35.076946   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Getting domain xml...
	I1104 12:07:35.077641   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Creating domain...
	I1104 12:07:33.887738   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:33.888147   85759 main.go:141] libmachine: (embed-certs-325116) Found IP for machine: 192.168.39.47
	I1104 12:07:33.888176   85759 main.go:141] libmachine: (embed-certs-325116) Reserving static IP address...
	I1104 12:07:33.888206   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has current primary IP address 192.168.39.47 and MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:33.888737   85759 main.go:141] libmachine: (embed-certs-325116) DBG | found host DHCP lease matching {name: "embed-certs-325116", mac: "52:54:00:bd:ab:49", ip: "192.168.39.47"} in network mk-embed-certs-325116: {Iface:virbr1 ExpiryTime:2024-11-04 13:07:28 +0000 UTC Type:0 Mac:52:54:00:bd:ab:49 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:embed-certs-325116 Clientid:01:52:54:00:bd:ab:49}
	I1104 12:07:33.888769   85759 main.go:141] libmachine: (embed-certs-325116) DBG | skip adding static IP to network mk-embed-certs-325116 - found existing host DHCP lease matching {name: "embed-certs-325116", mac: "52:54:00:bd:ab:49", ip: "192.168.39.47"}
	I1104 12:07:33.888783   85759 main.go:141] libmachine: (embed-certs-325116) Reserved static IP address: 192.168.39.47
	I1104 12:07:33.888795   85759 main.go:141] libmachine: (embed-certs-325116) Waiting for SSH to be available...
	I1104 12:07:33.888812   85759 main.go:141] libmachine: (embed-certs-325116) DBG | Getting to WaitForSSH function...
	I1104 12:07:33.891130   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:33.891493   85759 main.go:141] libmachine: (embed-certs-325116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:ab:49", ip: ""} in network mk-embed-certs-325116: {Iface:virbr1 ExpiryTime:2024-11-04 13:07:28 +0000 UTC Type:0 Mac:52:54:00:bd:ab:49 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:embed-certs-325116 Clientid:01:52:54:00:bd:ab:49}
	I1104 12:07:33.891520   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined IP address 192.168.39.47 and MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:33.891670   85759 main.go:141] libmachine: (embed-certs-325116) DBG | Using SSH client type: external
	I1104 12:07:33.891693   85759 main.go:141] libmachine: (embed-certs-325116) DBG | Using SSH private key: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/embed-certs-325116/id_rsa (-rw-------)
	I1104 12:07:33.891732   85759 main.go:141] libmachine: (embed-certs-325116) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.47 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19906-19898/.minikube/machines/embed-certs-325116/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1104 12:07:33.891748   85759 main.go:141] libmachine: (embed-certs-325116) DBG | About to run SSH command:
	I1104 12:07:33.891773   85759 main.go:141] libmachine: (embed-certs-325116) DBG | exit 0
	I1104 12:07:34.012989   85759 main.go:141] libmachine: (embed-certs-325116) DBG | SSH cmd err, output: <nil>: 
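The WaitForSSH exchange just above shells out to the system ssh binary with a throwaway known-hosts file and the machine's generated key, and simply runs `exit 0` until the guest's sshd answers. A rough equivalent of that probe, with the address, key path, and argument order copied from the log line; the loop and attempt cap are assumptions added for the sketch:

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	key := "/home/jenkins/minikube-integration/19906-19898/.minikube/machines/embed-certs-325116/id_rsa"
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3", "-o", "ConnectTimeout=10",
		"-o", "ControlMaster=no", "-o", "ControlPath=none",
		"-o", "LogLevel=quiet", "-o", "PasswordAuthentication=no",
		"-o", "ServerAliveInterval=60", "-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"docker@192.168.39.47",
		"-o", "IdentitiesOnly=yes", "-i", key, "-p", "22",
		"exit 0", // the probe command: succeeds as soon as a shell can be reached
	}
	for attempt := 1; attempt <= 60; attempt++ {
		if err := exec.Command("/usr/bin/ssh", args...).Run(); err == nil {
			fmt.Println("SSH is available")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("gave up waiting for SSH")
}
```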
	I1104 12:07:34.013457   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetConfigRaw
	I1104 12:07:34.014162   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetIP
	I1104 12:07:34.016645   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:34.017028   85759 main.go:141] libmachine: (embed-certs-325116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:ab:49", ip: ""} in network mk-embed-certs-325116: {Iface:virbr1 ExpiryTime:2024-11-04 13:07:28 +0000 UTC Type:0 Mac:52:54:00:bd:ab:49 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:embed-certs-325116 Clientid:01:52:54:00:bd:ab:49}
	I1104 12:07:34.017062   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined IP address 192.168.39.47 and MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:34.017347   85759 profile.go:143] Saving config to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/embed-certs-325116/config.json ...
	I1104 12:07:34.017577   85759 machine.go:93] provisionDockerMachine start ...
	I1104 12:07:34.017596   85759 main.go:141] libmachine: (embed-certs-325116) Calling .DriverName
	I1104 12:07:34.017824   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHHostname
	I1104 12:07:34.020134   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:34.020416   85759 main.go:141] libmachine: (embed-certs-325116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:ab:49", ip: ""} in network mk-embed-certs-325116: {Iface:virbr1 ExpiryTime:2024-11-04 13:07:28 +0000 UTC Type:0 Mac:52:54:00:bd:ab:49 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:embed-certs-325116 Clientid:01:52:54:00:bd:ab:49}
	I1104 12:07:34.020449   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined IP address 192.168.39.47 and MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:34.020570   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHPort
	I1104 12:07:34.020745   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHKeyPath
	I1104 12:07:34.020905   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHKeyPath
	I1104 12:07:34.021059   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHUsername
	I1104 12:07:34.021313   85759 main.go:141] libmachine: Using SSH client type: native
	I1104 12:07:34.021505   85759 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.47 22 <nil> <nil>}
	I1104 12:07:34.021515   85759 main.go:141] libmachine: About to run SSH command:
	hostname
	I1104 12:07:34.125266   85759 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1104 12:07:34.125305   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetMachineName
	I1104 12:07:34.125556   85759 buildroot.go:166] provisioning hostname "embed-certs-325116"
	I1104 12:07:34.125583   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetMachineName
	I1104 12:07:34.125781   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHHostname
	I1104 12:07:34.128180   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:34.128486   85759 main.go:141] libmachine: (embed-certs-325116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:ab:49", ip: ""} in network mk-embed-certs-325116: {Iface:virbr1 ExpiryTime:2024-11-04 13:07:28 +0000 UTC Type:0 Mac:52:54:00:bd:ab:49 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:embed-certs-325116 Clientid:01:52:54:00:bd:ab:49}
	I1104 12:07:34.128514   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined IP address 192.168.39.47 and MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:34.128603   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHPort
	I1104 12:07:34.128758   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHKeyPath
	I1104 12:07:34.128890   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHKeyPath
	I1104 12:07:34.128996   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHUsername
	I1104 12:07:34.129166   85759 main.go:141] libmachine: Using SSH client type: native
	I1104 12:07:34.129371   85759 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.47 22 <nil> <nil>}
	I1104 12:07:34.129394   85759 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-325116 && echo "embed-certs-325116" | sudo tee /etc/hostname
	I1104 12:07:34.242027   85759 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-325116
	
	I1104 12:07:34.242054   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHHostname
	I1104 12:07:34.244671   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:34.244984   85759 main.go:141] libmachine: (embed-certs-325116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:ab:49", ip: ""} in network mk-embed-certs-325116: {Iface:virbr1 ExpiryTime:2024-11-04 13:07:28 +0000 UTC Type:0 Mac:52:54:00:bd:ab:49 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:embed-certs-325116 Clientid:01:52:54:00:bd:ab:49}
	I1104 12:07:34.245019   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined IP address 192.168.39.47 and MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:34.245159   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHPort
	I1104 12:07:34.245337   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHKeyPath
	I1104 12:07:34.245514   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHKeyPath
	I1104 12:07:34.245661   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHUsername
	I1104 12:07:34.245810   85759 main.go:141] libmachine: Using SSH client type: native
	I1104 12:07:34.245971   85759 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.47 22 <nil> <nil>}
	I1104 12:07:34.245986   85759 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-325116' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-325116/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-325116' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1104 12:07:34.357178   85759 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1104 12:07:34.357204   85759 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19906-19898/.minikube CaCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19906-19898/.minikube}
	I1104 12:07:34.357220   85759 buildroot.go:174] setting up certificates
	I1104 12:07:34.357241   85759 provision.go:84] configureAuth start
	I1104 12:07:34.357250   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetMachineName
	I1104 12:07:34.357533   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetIP
	I1104 12:07:34.359993   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:34.360308   85759 main.go:141] libmachine: (embed-certs-325116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:ab:49", ip: ""} in network mk-embed-certs-325116: {Iface:virbr1 ExpiryTime:2024-11-04 13:07:28 +0000 UTC Type:0 Mac:52:54:00:bd:ab:49 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:embed-certs-325116 Clientid:01:52:54:00:bd:ab:49}
	I1104 12:07:34.360327   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined IP address 192.168.39.47 and MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:34.360533   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHHostname
	I1104 12:07:34.362459   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:34.362750   85759 main.go:141] libmachine: (embed-certs-325116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:ab:49", ip: ""} in network mk-embed-certs-325116: {Iface:virbr1 ExpiryTime:2024-11-04 13:07:28 +0000 UTC Type:0 Mac:52:54:00:bd:ab:49 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:embed-certs-325116 Clientid:01:52:54:00:bd:ab:49}
	I1104 12:07:34.362786   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined IP address 192.168.39.47 and MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:34.362932   85759 provision.go:143] copyHostCerts
	I1104 12:07:34.362986   85759 exec_runner.go:144] found /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem, removing ...
	I1104 12:07:34.363022   85759 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem
	I1104 12:07:34.363109   85759 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem (1123 bytes)
	I1104 12:07:34.363231   85759 exec_runner.go:144] found /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem, removing ...
	I1104 12:07:34.363242   85759 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem
	I1104 12:07:34.363282   85759 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem (1679 bytes)
	I1104 12:07:34.363357   85759 exec_runner.go:144] found /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem, removing ...
	I1104 12:07:34.363368   85759 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem
	I1104 12:07:34.363399   85759 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem (1078 bytes)
	I1104 12:07:34.363503   85759 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem org=jenkins.embed-certs-325116 san=[127.0.0.1 192.168.39.47 embed-certs-325116 localhost minikube]
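configureAuth above regenerates a server certificate whose subject alternative names cover the VM's IP, its hostname, and the usual localhost aliases, signed by the profile's CA. A compressed crypto/x509 sketch of issuing such a SAN certificate; the in-memory CA here is a placeholder for the sketch, whereas minikube loads its existing ca.pem/ca-key.pem:

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Placeholder CA generated on the fly; the real flow reuses an existing CA key pair.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{SerialNumber: big.NewInt(1), Subject: pkix.Name{CommonName: "minikubeCA"},
		NotBefore: time.Now(), NotAfter: time.Now().Add(24 * time.Hour), IsCA: true,
		KeyUsage: x509.KeyUsageCertSign, BasicConstraintsValid: true}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with the SANs seen in the log: IPs plus hostname aliases.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-325116"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.47")},
		DNSNames:     []string{"embed-certs-325116", "localhost", "minikube"},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
```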
	I1104 12:07:34.453223   85759 provision.go:177] copyRemoteCerts
	I1104 12:07:34.453295   85759 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1104 12:07:34.453317   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHHostname
	I1104 12:07:34.455736   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:34.456022   85759 main.go:141] libmachine: (embed-certs-325116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:ab:49", ip: ""} in network mk-embed-certs-325116: {Iface:virbr1 ExpiryTime:2024-11-04 13:07:28 +0000 UTC Type:0 Mac:52:54:00:bd:ab:49 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:embed-certs-325116 Clientid:01:52:54:00:bd:ab:49}
	I1104 12:07:34.456054   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined IP address 192.168.39.47 and MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:34.456230   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHPort
	I1104 12:07:34.456406   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHKeyPath
	I1104 12:07:34.456539   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHUsername
	I1104 12:07:34.456631   85759 sshutil.go:53] new ssh client: &{IP:192.168.39.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/embed-certs-325116/id_rsa Username:docker}
	I1104 12:07:34.539172   85759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1104 12:07:34.561889   85759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1104 12:07:34.585111   85759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1104 12:07:34.607449   85759 provision.go:87] duration metric: took 250.195255ms to configureAuth
	I1104 12:07:34.607495   85759 buildroot.go:189] setting minikube options for container-runtime
	I1104 12:07:34.607809   85759 config.go:182] Loaded profile config "embed-certs-325116": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1104 12:07:34.607952   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHHostname
	I1104 12:07:34.610672   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:34.611009   85759 main.go:141] libmachine: (embed-certs-325116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:ab:49", ip: ""} in network mk-embed-certs-325116: {Iface:virbr1 ExpiryTime:2024-11-04 13:07:28 +0000 UTC Type:0 Mac:52:54:00:bd:ab:49 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:embed-certs-325116 Clientid:01:52:54:00:bd:ab:49}
	I1104 12:07:34.611032   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined IP address 192.168.39.47 and MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:34.611253   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHPort
	I1104 12:07:34.611444   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHKeyPath
	I1104 12:07:34.611600   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHKeyPath
	I1104 12:07:34.611739   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHUsername
	I1104 12:07:34.611917   85759 main.go:141] libmachine: Using SSH client type: native
	I1104 12:07:34.612086   85759 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.47 22 <nil> <nil>}
	I1104 12:07:34.612101   85759 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1104 12:07:34.823086   85759 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1104 12:07:34.823114   85759 machine.go:96] duration metric: took 805.522353ms to provisionDockerMachine
	I1104 12:07:34.823128   85759 start.go:293] postStartSetup for "embed-certs-325116" (driver="kvm2")
	I1104 12:07:34.823138   85759 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1104 12:07:34.823174   85759 main.go:141] libmachine: (embed-certs-325116) Calling .DriverName
	I1104 12:07:34.823451   85759 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1104 12:07:34.823489   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHHostname
	I1104 12:07:34.826112   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:34.826453   85759 main.go:141] libmachine: (embed-certs-325116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:ab:49", ip: ""} in network mk-embed-certs-325116: {Iface:virbr1 ExpiryTime:2024-11-04 13:07:28 +0000 UTC Type:0 Mac:52:54:00:bd:ab:49 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:embed-certs-325116 Clientid:01:52:54:00:bd:ab:49}
	I1104 12:07:34.826482   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined IP address 192.168.39.47 and MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:34.826581   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHPort
	I1104 12:07:34.826756   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHKeyPath
	I1104 12:07:34.826886   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHUsername
	I1104 12:07:34.826998   85759 sshutil.go:53] new ssh client: &{IP:192.168.39.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/embed-certs-325116/id_rsa Username:docker}
	I1104 12:07:34.907354   85759 ssh_runner.go:195] Run: cat /etc/os-release
	I1104 12:07:34.911229   85759 info.go:137] Remote host: Buildroot 2023.02.9
	I1104 12:07:34.911246   85759 filesync.go:126] Scanning /home/jenkins/minikube-integration/19906-19898/.minikube/addons for local assets ...
	I1104 12:07:34.911316   85759 filesync.go:126] Scanning /home/jenkins/minikube-integration/19906-19898/.minikube/files for local assets ...
	I1104 12:07:34.911402   85759 filesync.go:149] local asset: /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem -> 272182.pem in /etc/ssl/certs
	I1104 12:07:34.911516   85759 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1104 12:07:34.920149   85759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem --> /etc/ssl/certs/272182.pem (1708 bytes)
	I1104 12:07:34.942468   85759 start.go:296] duration metric: took 119.32654ms for postStartSetup
	I1104 12:07:34.942517   85759 fix.go:56] duration metric: took 17.200448721s for fixHost
	I1104 12:07:34.942540   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHHostname
	I1104 12:07:34.945295   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:34.945659   85759 main.go:141] libmachine: (embed-certs-325116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:ab:49", ip: ""} in network mk-embed-certs-325116: {Iface:virbr1 ExpiryTime:2024-11-04 13:07:28 +0000 UTC Type:0 Mac:52:54:00:bd:ab:49 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:embed-certs-325116 Clientid:01:52:54:00:bd:ab:49}
	I1104 12:07:34.945685   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined IP address 192.168.39.47 and MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:34.945847   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHPort
	I1104 12:07:34.946006   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHKeyPath
	I1104 12:07:34.946173   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHKeyPath
	I1104 12:07:34.946311   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHUsername
	I1104 12:07:34.946442   85759 main.go:141] libmachine: Using SSH client type: native
	I1104 12:07:34.946583   85759 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.47 22 <nil> <nil>}
	I1104 12:07:34.946592   85759 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1104 12:07:35.049767   85759 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730722055.017047529
	
	I1104 12:07:35.049790   85759 fix.go:216] guest clock: 1730722055.017047529
	I1104 12:07:35.049797   85759 fix.go:229] Guest: 2024-11-04 12:07:35.017047529 +0000 UTC Remote: 2024-11-04 12:07:34.942522008 +0000 UTC m=+283.781167350 (delta=74.525521ms)
	I1104 12:07:35.049829   85759 fix.go:200] guest clock delta is within tolerance: 74.525521ms
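The fix.go lines above parse the guest's `date +%s.%N` output, compare it against the host timestamp taken just before the SSH call, and only force a resync when the delta exceeds a tolerance; here the ~74ms skew was accepted. A small sketch of that comparison, where the 2s tolerance is an assumption for illustration rather than minikube's constant:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

func main() {
	// Output of `date +%s.%N` on the guest, as captured in the log above.
	guestRaw := "1730722055.017047529"
	parts := strings.SplitN(strings.TrimSpace(guestRaw), ".", 2)
	sec, _ := strconv.ParseInt(parts[0], 10, 64)
	nsec, _ := strconv.ParseInt(parts[1], 10, 64)
	guest := time.Unix(sec, nsec)

	host := time.Now() // the log uses the host timestamp recorded before the SSH call
	delta := host.Sub(guest)
	if delta < 0 {
		delta = -delta
	}

	const tolerance = 2 * time.Second // assumed threshold, for illustration only
	if delta <= tolerance {
		fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
	} else {
		fmt.Printf("guest clock is off by %v, would resync\n", delta)
	}
}
```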
	I1104 12:07:35.049834   85759 start.go:83] releasing machines lock for "embed-certs-325116", held for 17.307794416s
	I1104 12:07:35.049859   85759 main.go:141] libmachine: (embed-certs-325116) Calling .DriverName
	I1104 12:07:35.050137   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetIP
	I1104 12:07:35.052845   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:35.053238   85759 main.go:141] libmachine: (embed-certs-325116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:ab:49", ip: ""} in network mk-embed-certs-325116: {Iface:virbr1 ExpiryTime:2024-11-04 13:07:28 +0000 UTC Type:0 Mac:52:54:00:bd:ab:49 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:embed-certs-325116 Clientid:01:52:54:00:bd:ab:49}
	I1104 12:07:35.053269   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined IP address 192.168.39.47 and MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:35.053454   85759 main.go:141] libmachine: (embed-certs-325116) Calling .DriverName
	I1104 12:07:35.054050   85759 main.go:141] libmachine: (embed-certs-325116) Calling .DriverName
	I1104 12:07:35.054239   85759 main.go:141] libmachine: (embed-certs-325116) Calling .DriverName
	I1104 12:07:35.054337   85759 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1104 12:07:35.054383   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHHostname
	I1104 12:07:35.054502   85759 ssh_runner.go:195] Run: cat /version.json
	I1104 12:07:35.054539   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHHostname
	I1104 12:07:35.057289   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:35.057391   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:35.057733   85759 main.go:141] libmachine: (embed-certs-325116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:ab:49", ip: ""} in network mk-embed-certs-325116: {Iface:virbr1 ExpiryTime:2024-11-04 13:07:28 +0000 UTC Type:0 Mac:52:54:00:bd:ab:49 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:embed-certs-325116 Clientid:01:52:54:00:bd:ab:49}
	I1104 12:07:35.057778   85759 main.go:141] libmachine: (embed-certs-325116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:ab:49", ip: ""} in network mk-embed-certs-325116: {Iface:virbr1 ExpiryTime:2024-11-04 13:07:28 +0000 UTC Type:0 Mac:52:54:00:bd:ab:49 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:embed-certs-325116 Clientid:01:52:54:00:bd:ab:49}
	I1104 12:07:35.057802   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined IP address 192.168.39.47 and MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:35.057820   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined IP address 192.168.39.47 and MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:35.057959   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHPort
	I1104 12:07:35.057996   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHPort
	I1104 12:07:35.058110   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHKeyPath
	I1104 12:07:35.058296   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHKeyPath
	I1104 12:07:35.058316   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHUsername
	I1104 12:07:35.058485   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHUsername
	I1104 12:07:35.058485   85759 sshutil.go:53] new ssh client: &{IP:192.168.39.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/embed-certs-325116/id_rsa Username:docker}
	I1104 12:07:35.058658   85759 sshutil.go:53] new ssh client: &{IP:192.168.39.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/embed-certs-325116/id_rsa Username:docker}
	I1104 12:07:35.134602   85759 ssh_runner.go:195] Run: systemctl --version
	I1104 12:07:35.158961   85759 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1104 12:07:35.303038   85759 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1104 12:07:35.309611   85759 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1104 12:07:35.309674   85759 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1104 12:07:35.325082   85759 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1104 12:07:35.325142   85759 start.go:495] detecting cgroup driver to use...
	I1104 12:07:35.325211   85759 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1104 12:07:35.341673   85759 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1104 12:07:35.355506   85759 docker.go:217] disabling cri-docker service (if available) ...
	I1104 12:07:35.355569   85759 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1104 12:07:35.369017   85759 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1104 12:07:35.382745   85759 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1104 12:07:35.498985   85759 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1104 12:07:35.648628   85759 docker.go:233] disabling docker service ...
	I1104 12:07:35.648702   85759 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1104 12:07:35.666912   85759 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1104 12:07:35.679786   85759 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1104 12:07:35.799284   85759 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1104 12:07:35.931842   85759 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1104 12:07:35.945707   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1104 12:07:35.965183   85759 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1104 12:07:35.965269   85759 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:07:35.975446   85759 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1104 12:07:35.975514   85759 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:07:35.985968   85759 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:07:35.996462   85759 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:07:36.006840   85759 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1104 12:07:36.017174   85759 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:07:36.027013   85759 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:07:36.044572   85759 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:07:36.054046   85759 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1104 12:07:36.063355   85759 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1104 12:07:36.063399   85759 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1104 12:07:36.075157   85759 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1104 12:07:36.084713   85759 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1104 12:07:36.205088   85759 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1104 12:07:36.299330   85759 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1104 12:07:36.299423   85759 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1104 12:07:36.304194   85759 start.go:563] Will wait 60s for crictl version
	I1104 12:07:36.304248   85759 ssh_runner.go:195] Run: which crictl
	I1104 12:07:36.308041   85759 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1104 12:07:36.349114   85759 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1104 12:07:36.349264   85759 ssh_runner.go:195] Run: crio --version
	I1104 12:07:36.378677   85759 ssh_runner.go:195] Run: crio --version
	I1104 12:07:36.406751   85759 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1104 12:07:36.335603   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Waiting to get IP...
	I1104 12:07:36.336431   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:36.336921   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | unable to find current IP address of domain default-k8s-diff-port-036892 in network mk-default-k8s-diff-port-036892
	I1104 12:07:36.337007   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | I1104 12:07:36.336911   87142 retry.go:31] will retry after 289.750795ms: waiting for machine to come up
	I1104 12:07:36.628712   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:36.629301   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | unable to find current IP address of domain default-k8s-diff-port-036892 in network mk-default-k8s-diff-port-036892
	I1104 12:07:36.629419   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | I1104 12:07:36.629345   87142 retry.go:31] will retry after 356.596321ms: waiting for machine to come up
	I1104 12:07:36.988173   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:36.988663   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | unable to find current IP address of domain default-k8s-diff-port-036892 in network mk-default-k8s-diff-port-036892
	I1104 12:07:36.988713   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | I1104 12:07:36.988626   87142 retry.go:31] will retry after 446.62367ms: waiting for machine to come up
	I1104 12:07:37.437529   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:37.438120   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | unable to find current IP address of domain default-k8s-diff-port-036892 in network mk-default-k8s-diff-port-036892
	I1104 12:07:37.438174   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | I1104 12:07:37.438023   87142 retry.go:31] will retry after 482.072639ms: waiting for machine to come up
	I1104 12:07:37.921514   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:37.922025   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | unable to find current IP address of domain default-k8s-diff-port-036892 in network mk-default-k8s-diff-port-036892
	I1104 12:07:37.922056   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | I1104 12:07:37.921983   87142 retry.go:31] will retry after 645.10615ms: waiting for machine to come up
	I1104 12:07:38.569009   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:38.569524   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | unable to find current IP address of domain default-k8s-diff-port-036892 in network mk-default-k8s-diff-port-036892
	I1104 12:07:38.569566   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | I1104 12:07:38.569432   87142 retry.go:31] will retry after 841.352802ms: waiting for machine to come up
	I1104 12:07:39.412662   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:39.413091   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | unable to find current IP address of domain default-k8s-diff-port-036892 in network mk-default-k8s-diff-port-036892
	I1104 12:07:39.413112   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | I1104 12:07:39.413047   87142 retry.go:31] will retry after 878.218722ms: waiting for machine to come up
	I1104 12:07:36.407939   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetIP
	I1104 12:07:36.411021   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:36.411378   85759 main.go:141] libmachine: (embed-certs-325116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:ab:49", ip: ""} in network mk-embed-certs-325116: {Iface:virbr1 ExpiryTime:2024-11-04 13:07:28 +0000 UTC Type:0 Mac:52:54:00:bd:ab:49 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:embed-certs-325116 Clientid:01:52:54:00:bd:ab:49}
	I1104 12:07:36.411408   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined IP address 192.168.39.47 and MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:36.411599   85759 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1104 12:07:36.415528   85759 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1104 12:07:36.427484   85759 kubeadm.go:883] updating cluster {Name:embed-certs-325116 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-325116 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.47 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1104 12:07:36.427616   85759 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1104 12:07:36.427684   85759 ssh_runner.go:195] Run: sudo crictl images --output json
	I1104 12:07:36.460332   85759 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1104 12:07:36.460406   85759 ssh_runner.go:195] Run: which lz4
	I1104 12:07:36.464187   85759 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1104 12:07:36.468140   85759 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1104 12:07:36.468177   85759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1104 12:07:37.703067   85759 crio.go:462] duration metric: took 1.238901186s to copy over tarball
	I1104 12:07:37.703136   85759 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1104 12:07:39.803761   85759 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.100578378s)
	I1104 12:07:39.803795   85759 crio.go:469] duration metric: took 2.100697698s to extract the tarball
	I1104 12:07:39.803805   85759 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1104 12:07:39.840536   85759 ssh_runner.go:195] Run: sudo crictl images --output json
	I1104 12:07:39.883410   85759 crio.go:514] all images are preloaded for cri-o runtime.
	I1104 12:07:39.883431   85759 cache_images.go:84] Images are preloaded, skipping loading
	I1104 12:07:39.883438   85759 kubeadm.go:934] updating node { 192.168.39.47 8443 v1.31.2 crio true true} ...
	I1104 12:07:39.883531   85759 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-325116 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.47
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:embed-certs-325116 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1104 12:07:39.883608   85759 ssh_runner.go:195] Run: crio config
	I1104 12:07:39.928280   85759 cni.go:84] Creating CNI manager for ""
	I1104 12:07:39.928303   85759 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1104 12:07:39.928313   85759 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1104 12:07:39.928333   85759 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.47 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-325116 NodeName:embed-certs-325116 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.47"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.47 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1104 12:07:39.928440   85759 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.47
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-325116"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.47"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.47"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
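
The block above is the multi-document config (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that gets written to /var/tmp/minikube/kubeadm.yaml.new a few lines below. For reference, such a file can be sanity-checked offline by decoding each YAML document and printing its apiVersion/kind; the following Go sketch is illustrative only (it is not minikube code) and assumes gopkg.in/yaml.v3 is available.

// Illustrative sketch: decode each YAML document in the generated kubeadm config
// and print its apiVersion/kind (e.g. "kubeadm.k8s.io/v1beta4 / InitConfiguration").
package main

import (
	"bytes"
	"errors"
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3" // assumed dependency
)

func main() {
	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		panic(err)
	}
	dec := yaml.NewDecoder(bytes.NewReader(data)) // handles the "---"-separated stream
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err != nil {
			if errors.Is(err, io.EOF) {
				break
			}
			panic(err)
		}
		fmt.Printf("%s / %s\n", doc.APIVersion, doc.Kind)
	}
}
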
	
	I1104 12:07:39.928495   85759 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1104 12:07:39.938496   85759 binaries.go:44] Found k8s binaries, skipping transfer
	I1104 12:07:39.938568   85759 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1104 12:07:39.947809   85759 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1104 12:07:39.963319   85759 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1104 12:07:39.978789   85759 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2295 bytes)
	I1104 12:07:39.994910   85759 ssh_runner.go:195] Run: grep 192.168.39.47	control-plane.minikube.internal$ /etc/hosts
	I1104 12:07:39.998355   85759 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.47	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
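
The bash one-liner above makes the control-plane.minikube.internal entry in /etc/hosts idempotent: it drops any existing line ending in the hostname, appends the current mapping, and copies the result back. A rough Go equivalent of the same steps (a sketch only; minikube runs the bash version over SSH):

// Sketch of the idempotent /etc/hosts update shown above (illustrative, not minikube code).
package main

import (
	"os"
	"strings"
)

func ensureHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// grep -v $'\t<name>$': drop any existing mapping for the hostname.
		if strings.HasSuffix(line, "\t"+name) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.39.47", "control-plane.minikube.internal"); err != nil {
		panic(err)
	}
}
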
	I1104 12:07:40.010097   85759 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1104 12:07:40.118679   85759 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1104 12:07:40.134369   85759 certs.go:68] Setting up /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/embed-certs-325116 for IP: 192.168.39.47
	I1104 12:07:40.134391   85759 certs.go:194] generating shared ca certs ...
	I1104 12:07:40.134429   85759 certs.go:226] acquiring lock for ca certs: {Name:mk4fb0469da6697654d0290fd25edddd463f47c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 12:07:40.134612   85759 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.key
	I1104 12:07:40.134666   85759 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.key
	I1104 12:07:40.134680   85759 certs.go:256] generating profile certs ...
	I1104 12:07:40.134782   85759 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/embed-certs-325116/client.key
	I1104 12:07:40.134880   85759 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/embed-certs-325116/apiserver.key.36f6fb66
	I1104 12:07:40.134929   85759 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/embed-certs-325116/proxy-client.key
	I1104 12:07:40.135083   85759 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218.pem (1338 bytes)
	W1104 12:07:40.135124   85759 certs.go:480] ignoring /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218_empty.pem, impossibly tiny 0 bytes
	I1104 12:07:40.135140   85759 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem (1675 bytes)
	I1104 12:07:40.135225   85759 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem (1078 bytes)
	I1104 12:07:40.135281   85759 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem (1123 bytes)
	I1104 12:07:40.135315   85759 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem (1679 bytes)
	I1104 12:07:40.135380   85759 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem (1708 bytes)
	I1104 12:07:40.136240   85759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1104 12:07:40.179608   85759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1104 12:07:40.227851   85759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1104 12:07:40.255791   85759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1104 12:07:40.281672   85759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/embed-certs-325116/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1104 12:07:40.305960   85759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/embed-certs-325116/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1104 12:07:40.332465   85759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/embed-certs-325116/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1104 12:07:40.354950   85759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/embed-certs-325116/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1104 12:07:40.377476   85759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218.pem --> /usr/share/ca-certificates/27218.pem (1338 bytes)
	I1104 12:07:40.399291   85759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem --> /usr/share/ca-certificates/272182.pem (1708 bytes)
	I1104 12:07:40.420689   85759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1104 12:07:40.443610   85759 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1104 12:07:40.459706   85759 ssh_runner.go:195] Run: openssl version
	I1104 12:07:40.465244   85759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/27218.pem && ln -fs /usr/share/ca-certificates/27218.pem /etc/ssl/certs/27218.pem"
	I1104 12:07:40.475375   85759 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/27218.pem
	I1104 12:07:40.479676   85759 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  4 10:49 /usr/share/ca-certificates/27218.pem
	I1104 12:07:40.479748   85759 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/27218.pem
	I1104 12:07:40.485523   85759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/27218.pem /etc/ssl/certs/51391683.0"
	I1104 12:07:40.497163   85759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/272182.pem && ln -fs /usr/share/ca-certificates/272182.pem /etc/ssl/certs/272182.pem"
	I1104 12:07:40.509090   85759 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/272182.pem
	I1104 12:07:40.513617   85759 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  4 10:49 /usr/share/ca-certificates/272182.pem
	I1104 12:07:40.513685   85759 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/272182.pem
	I1104 12:07:40.519372   85759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/272182.pem /etc/ssl/certs/3ec20f2e.0"
	I1104 12:07:40.530944   85759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1104 12:07:40.542569   85759 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1104 12:07:40.546965   85759 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  4 10:38 /usr/share/ca-certificates/minikubeCA.pem
	I1104 12:07:40.547019   85759 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1104 12:07:40.552470   85759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
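
The openssl/ln pairs above make the extra CA certificates visible to OpenSSL-based clients in the guest: each PEM is placed under /usr/share/ca-certificates, its subject hash is computed with `openssl x509 -hash -noout`, and a `<hash>.0` symlink (51391683.0, 3ec20f2e.0, b5213941.0 here) is created in /etc/ssl/certs. A minimal sketch of those two steps, shelling out to openssl just as the log does (illustrative only):

// Illustrative sketch: recreate the <subject-hash>.0 symlink produced by
// `openssl x509 -hash` + `ln -fs` in the log above.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func linkCert(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))     // e.g. "b5213941"
	link := filepath.Join(certsDir, hash+".0") // OpenSSL looks certificates up by <subject-hash>.N
	_ = os.Remove(link)                        // mimic the force flag of `ln -fs`
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		panic(err)
	}
	fmt.Println("symlink created")
}
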
	I1104 12:07:40.562456   85759 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1104 12:07:40.566967   85759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1104 12:07:40.572778   85759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1104 12:07:40.578409   85759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1104 12:07:40.584134   85759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1104 12:07:40.589880   85759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1104 12:07:40.595604   85759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
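
The `-checkend 86400` runs above verify that each control-plane certificate remains valid for at least another 24 hours before the cluster is restarted. The same check with the Go standard library (a sketch, assuming the certificate is PEM-encoded):

// Sketch of `openssl x509 -checkend 86400` using crypto/x509.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return cert.NotAfter.Before(time.Now().Add(d)), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		panic(err)
	}
	fmt.Println("expires within 24h:", soon) // openssl exits non-zero when the cert will expire
}
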
	I1104 12:07:40.601191   85759 kubeadm.go:392] StartCluster: {Name:embed-certs-325116 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-325116 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.47 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1104 12:07:40.601329   85759 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1104 12:07:40.601385   85759 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1104 12:07:40.642970   85759 cri.go:89] found id: ""
	I1104 12:07:40.643034   85759 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1104 12:07:40.653420   85759 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1104 12:07:40.653446   85759 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1104 12:07:40.653496   85759 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1104 12:07:40.663023   85759 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1104 12:07:40.664008   85759 kubeconfig.go:125] found "embed-certs-325116" server: "https://192.168.39.47:8443"
	I1104 12:07:40.665967   85759 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1104 12:07:40.675296   85759 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.47
	I1104 12:07:40.675324   85759 kubeadm.go:1160] stopping kube-system containers ...
	I1104 12:07:40.675336   85759 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1104 12:07:40.675384   85759 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1104 12:07:40.718457   85759 cri.go:89] found id: ""
	I1104 12:07:40.718543   85759 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1104 12:07:40.733875   85759 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1104 12:07:40.743811   85759 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1104 12:07:40.743835   85759 kubeadm.go:157] found existing configuration files:
	
	I1104 12:07:40.743889   85759 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1104 12:07:40.752987   85759 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1104 12:07:40.753048   85759 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1104 12:07:40.762296   85759 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1104 12:07:40.771048   85759 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1104 12:07:40.771112   85759 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1104 12:07:40.780163   85759 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1104 12:07:40.789500   85759 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1104 12:07:40.789566   85759 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1104 12:07:40.799200   85759 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1104 12:07:40.808061   85759 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1104 12:07:40.808121   85759 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1104 12:07:40.817445   85759 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1104 12:07:40.826803   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1104 12:07:40.934345   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1104 12:07:40.292591   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:40.293050   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | unable to find current IP address of domain default-k8s-diff-port-036892 in network mk-default-k8s-diff-port-036892
	I1104 12:07:40.293084   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | I1104 12:07:40.292988   87142 retry.go:31] will retry after 1.110341741s: waiting for machine to come up
	I1104 12:07:41.405407   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:41.405858   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | unable to find current IP address of domain default-k8s-diff-port-036892 in network mk-default-k8s-diff-port-036892
	I1104 12:07:41.405885   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | I1104 12:07:41.405800   87142 retry.go:31] will retry after 1.311587036s: waiting for machine to come up
	I1104 12:07:42.719157   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:42.719540   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | unable to find current IP address of domain default-k8s-diff-port-036892 in network mk-default-k8s-diff-port-036892
	I1104 12:07:42.719591   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | I1104 12:07:42.719530   87142 retry.go:31] will retry after 1.999866716s: waiting for machine to come up
	I1104 12:07:44.721872   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:44.722324   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | unable to find current IP address of domain default-k8s-diff-port-036892 in network mk-default-k8s-diff-port-036892
	I1104 12:07:44.722351   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | I1104 12:07:44.722278   87142 retry.go:31] will retry after 2.895241769s: waiting for machine to come up
	I1104 12:07:41.512710   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1104 12:07:41.729355   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1104 12:07:41.807064   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1104 12:07:41.888493   85759 api_server.go:52] waiting for apiserver process to appear ...
	I1104 12:07:41.888593   85759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:07:42.389674   85759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:07:42.889373   85759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:07:43.389705   85759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:07:43.889548   85759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:07:43.924248   85759 api_server.go:72] duration metric: took 2.035753888s to wait for apiserver process to appear ...
	I1104 12:07:43.924277   85759 api_server.go:88] waiting for apiserver healthz status ...
	I1104 12:07:43.924320   85759 api_server.go:253] Checking apiserver healthz at https://192.168.39.47:8443/healthz ...
	I1104 12:07:43.924831   85759 api_server.go:269] stopped: https://192.168.39.47:8443/healthz: Get "https://192.168.39.47:8443/healthz": dial tcp 192.168.39.47:8443: connect: connection refused
	I1104 12:07:44.424651   85759 api_server.go:253] Checking apiserver healthz at https://192.168.39.47:8443/healthz ...
	I1104 12:07:47.043002   85759 api_server.go:279] https://192.168.39.47:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1104 12:07:47.043037   85759 api_server.go:103] status: https://192.168.39.47:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1104 12:07:47.043054   85759 api_server.go:253] Checking apiserver healthz at https://192.168.39.47:8443/healthz ...
	I1104 12:07:47.104246   85759 api_server.go:279] https://192.168.39.47:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1104 12:07:47.104276   85759 api_server.go:103] status: https://192.168.39.47:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1104 12:07:47.424506   85759 api_server.go:253] Checking apiserver healthz at https://192.168.39.47:8443/healthz ...
	I1104 12:07:47.430506   85759 api_server.go:279] https://192.168.39.47:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1104 12:07:47.430544   85759 api_server.go:103] status: https://192.168.39.47:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1104 12:07:47.924409   85759 api_server.go:253] Checking apiserver healthz at https://192.168.39.47:8443/healthz ...
	I1104 12:07:47.937055   85759 api_server.go:279] https://192.168.39.47:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1104 12:07:47.937083   85759 api_server.go:103] status: https://192.168.39.47:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1104 12:07:48.424568   85759 api_server.go:253] Checking apiserver healthz at https://192.168.39.47:8443/healthz ...
	I1104 12:07:48.428850   85759 api_server.go:279] https://192.168.39.47:8443/healthz returned 200:
	ok
	I1104 12:07:48.436388   85759 api_server.go:141] control plane version: v1.31.2
	I1104 12:07:48.436411   85759 api_server.go:131] duration metric: took 4.512127349s to wait for apiserver health ...
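
The healthz sequence above is typical for a restarted apiserver: first a connection refused while the static pod comes up, then 403 because the anonymous probe is rejected before the bootstrap RBAC roles exist, then 500 while the rbac/bootstrap-roles and scheduling post-start hooks finish, and finally 200. A minimal polling loop in the same spirit (a sketch, not minikube's api_server.go; it skips TLS verification for the probe only):

// Sketch: poll the apiserver /healthz endpoint until it returns 200 or the deadline expires.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Anonymous probe: skip verification of the apiserver cert (probe only, never for real clients).
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			// 403 (anonymous user) and 500 (post-start hooks still running) are expected while starting.
			fmt.Printf("healthz %d: %s\n", resp.StatusCode, string(body))
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitHealthz("https://192.168.39.47:8443/healthz", 4*time.Minute); err != nil {
		panic(err)
	}
}
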
	I1104 12:07:48.436420   85759 cni.go:84] Creating CNI manager for ""
	I1104 12:07:48.436427   85759 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1104 12:07:48.438220   85759 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1104 12:07:48.439495   85759 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1104 12:07:48.449650   85759 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1104 12:07:48.467313   85759 system_pods.go:43] waiting for kube-system pods to appear ...
	I1104 12:07:48.480777   85759 system_pods.go:59] 8 kube-system pods found
	I1104 12:07:48.480823   85759 system_pods.go:61] "coredns-7c65d6cfc9-mf8xg" [c0162005-7971-4161-9575-9f36c13d54f2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1104 12:07:48.480834   85759 system_pods.go:61] "etcd-embed-certs-325116" [4cfeeefb-d7e4-48b6-bea0-e9d967750770] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1104 12:07:48.480845   85759 system_pods.go:61] "kube-apiserver-embed-certs-325116" [69ad8209-af11-4479-b4f7-9991f98d74b9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1104 12:07:48.480859   85759 system_pods.go:61] "kube-controller-manager-embed-certs-325116" [1ba1fbaf-e1e2-4ca7-aef5-84c4410143c4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1104 12:07:48.480876   85759 system_pods.go:61] "kube-proxy-phzgx" [4ea64f2c-7568-486d-9941-f89ed4221f35] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1104 12:07:48.480893   85759 system_pods.go:61] "kube-scheduler-embed-certs-325116" [168359e4-eda1-4fb6-ab45-03e888466702] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1104 12:07:48.480907   85759 system_pods.go:61] "metrics-server-6867b74b74-knfd4" [5b3ef856-5b69-44b1-ae29-4a58bf235e41] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1104 12:07:48.480916   85759 system_pods.go:61] "storage-provisioner" [0dabcf5a-028b-4ab6-8af4-be25abaeb9b5] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1104 12:07:48.480928   85759 system_pods.go:74] duration metric: took 13.592864ms to wait for pod list to return data ...
	I1104 12:07:48.480947   85759 node_conditions.go:102] verifying NodePressure condition ...
	I1104 12:07:48.487234   85759 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1104 12:07:48.487271   85759 node_conditions.go:123] node cpu capacity is 2
	I1104 12:07:48.487284   85759 node_conditions.go:105] duration metric: took 6.331259ms to run NodePressure ...
	I1104 12:07:48.487313   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1104 12:07:48.756654   85759 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1104 12:07:48.764840   85759 kubeadm.go:739] kubelet initialised
	I1104 12:07:48.764863   85759 kubeadm.go:740] duration metric: took 8.175857ms waiting for restarted kubelet to initialise ...
	I1104 12:07:48.764871   85759 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1104 12:07:48.772653   85759 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-mf8xg" in "kube-system" namespace to be "Ready" ...
	I1104 12:07:48.784158   85759 pod_ready.go:98] node "embed-certs-325116" hosting pod "coredns-7c65d6cfc9-mf8xg" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-325116" has status "Ready":"False"
	I1104 12:07:48.784198   85759 pod_ready.go:82] duration metric: took 11.515605ms for pod "coredns-7c65d6cfc9-mf8xg" in "kube-system" namespace to be "Ready" ...
	E1104 12:07:48.784211   85759 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-325116" hosting pod "coredns-7c65d6cfc9-mf8xg" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-325116" has status "Ready":"False"
	I1104 12:07:48.784220   85759 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-325116" in "kube-system" namespace to be "Ready" ...
	I1104 12:07:48.791264   85759 pod_ready.go:98] node "embed-certs-325116" hosting pod "etcd-embed-certs-325116" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-325116" has status "Ready":"False"
	I1104 12:07:48.791297   85759 pod_ready.go:82] duration metric: took 7.066247ms for pod "etcd-embed-certs-325116" in "kube-system" namespace to be "Ready" ...
	E1104 12:07:48.791310   85759 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-325116" hosting pod "etcd-embed-certs-325116" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-325116" has status "Ready":"False"
	I1104 12:07:48.791326   85759 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-325116" in "kube-system" namespace to be "Ready" ...
	I1104 12:07:48.798259   85759 pod_ready.go:98] node "embed-certs-325116" hosting pod "kube-apiserver-embed-certs-325116" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-325116" has status "Ready":"False"
	I1104 12:07:48.798294   85759 pod_ready.go:82] duration metric: took 6.954559ms for pod "kube-apiserver-embed-certs-325116" in "kube-system" namespace to be "Ready" ...
	E1104 12:07:48.798304   85759 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-325116" hosting pod "kube-apiserver-embed-certs-325116" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-325116" has status "Ready":"False"
	I1104 12:07:48.798312   85759 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-325116" in "kube-system" namespace to be "Ready" ...
	I1104 12:07:48.872019   85759 pod_ready.go:98] node "embed-certs-325116" hosting pod "kube-controller-manager-embed-certs-325116" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-325116" has status "Ready":"False"
	I1104 12:07:48.872058   85759 pod_ready.go:82] duration metric: took 73.723761ms for pod "kube-controller-manager-embed-certs-325116" in "kube-system" namespace to be "Ready" ...
	E1104 12:07:48.872069   85759 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-325116" hosting pod "kube-controller-manager-embed-certs-325116" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-325116" has status "Ready":"False"
	I1104 12:07:48.872075   85759 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-phzgx" in "kube-system" namespace to be "Ready" ...
	I1104 12:07:49.271210   85759 pod_ready.go:98] node "embed-certs-325116" hosting pod "kube-proxy-phzgx" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-325116" has status "Ready":"False"
	I1104 12:07:49.271252   85759 pod_ready.go:82] duration metric: took 399.167509ms for pod "kube-proxy-phzgx" in "kube-system" namespace to be "Ready" ...
	E1104 12:07:49.271264   85759 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-325116" hosting pod "kube-proxy-phzgx" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-325116" has status "Ready":"False"
	I1104 12:07:49.271272   85759 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-325116" in "kube-system" namespace to be "Ready" ...
	I1104 12:07:49.671430   85759 pod_ready.go:98] node "embed-certs-325116" hosting pod "kube-scheduler-embed-certs-325116" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-325116" has status "Ready":"False"
	I1104 12:07:49.671453   85759 pod_ready.go:82] duration metric: took 400.174495ms for pod "kube-scheduler-embed-certs-325116" in "kube-system" namespace to be "Ready" ...
	E1104 12:07:49.671469   85759 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-325116" hosting pod "kube-scheduler-embed-certs-325116" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-325116" has status "Ready":"False"
	I1104 12:07:49.671475   85759 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace to be "Ready" ...
	I1104 12:07:50.070546   85759 pod_ready.go:98] node "embed-certs-325116" hosting pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-325116" has status "Ready":"False"
	I1104 12:07:50.070576   85759 pod_ready.go:82] duration metric: took 399.092108ms for pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace to be "Ready" ...
	E1104 12:07:50.070587   85759 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-325116" hosting pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-325116" has status "Ready":"False"
	I1104 12:07:50.070596   85759 pod_ready.go:39] duration metric: took 1.305717298s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
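
Each pod_ready check above is skipped immediately because the node itself still reports Ready=False right after the kubelet restart; only once the node condition flips will the per-pod Ready conditions actually be waited on. Checking both conditions with client-go looks roughly like this (a sketch under the assumption that the in-VM kubeconfig path is usable; not minikube's pod_ready.go):

// Sketch: check node Ready and pod Ready conditions with client-go.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // assumed kubeconfig path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()

	node, err := cs.CoreV1().Nodes().Get(ctx, "embed-certs-325116", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			fmt.Println("node Ready:", c.Status) // "False" right after the kubelet restart above
		}
	}

	pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "etcd-embed-certs-325116", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			fmt.Println("pod Ready:", c.Status)
		}
	}
}
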
	I1104 12:07:50.070615   85759 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1104 12:07:50.082815   85759 ops.go:34] apiserver oom_adj: -16
	I1104 12:07:50.082839   85759 kubeadm.go:597] duration metric: took 9.429385589s to restartPrimaryControlPlane
	I1104 12:07:50.082850   85759 kubeadm.go:394] duration metric: took 9.481667011s to StartCluster
	I1104 12:07:50.082871   85759 settings.go:142] acquiring lock: {Name:mk5550833c1f1a4ab4fbb2bf42ff8bd7f6341220 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 12:07:50.082952   85759 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19906-19898/kubeconfig
	I1104 12:07:50.086014   85759 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19906-19898/kubeconfig: {Name:mk164657c178e6abe086acb5c1cadb969cb2b0cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 12:07:50.086562   85759 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.47 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1104 12:07:50.086628   85759 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1104 12:07:50.086740   85759 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-325116"
	I1104 12:07:50.086763   85759 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-325116"
	I1104 12:07:50.086765   85759 config.go:182] Loaded profile config "embed-certs-325116": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	W1104 12:07:50.086776   85759 addons.go:243] addon storage-provisioner should already be in state true
	I1104 12:07:50.086774   85759 addons.go:69] Setting default-storageclass=true in profile "embed-certs-325116"
	I1104 12:07:50.086803   85759 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-325116"
	I1104 12:07:50.086787   85759 addons.go:69] Setting metrics-server=true in profile "embed-certs-325116"
	I1104 12:07:50.086812   85759 host.go:66] Checking if "embed-certs-325116" exists ...
	I1104 12:07:50.086825   85759 addons.go:234] Setting addon metrics-server=true in "embed-certs-325116"
	W1104 12:07:50.086837   85759 addons.go:243] addon metrics-server should already be in state true
	I1104 12:07:50.086866   85759 host.go:66] Checking if "embed-certs-325116" exists ...
	I1104 12:07:50.087120   85759 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:07:50.087148   85759 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:07:50.087160   85759 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:07:50.087178   85759 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:07:50.087247   85759 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:07:50.087286   85759 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:07:50.088320   85759 out.go:177] * Verifying Kubernetes components...
	I1104 12:07:50.089814   85759 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1104 12:07:50.102796   85759 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44903
	I1104 12:07:50.102976   85759 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36761
	I1104 12:07:50.103076   85759 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42697
	I1104 12:07:50.103462   85759 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:07:50.103491   85759 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:07:50.103566   85759 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:07:50.103990   85759 main.go:141] libmachine: Using API Version  1
	I1104 12:07:50.104014   85759 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:07:50.104085   85759 main.go:141] libmachine: Using API Version  1
	I1104 12:07:50.104101   85759 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:07:50.104199   85759 main.go:141] libmachine: Using API Version  1
	I1104 12:07:50.104223   85759 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:07:50.104368   85759 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:07:50.104402   85759 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:07:50.104545   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetState
	I1104 12:07:50.104559   85759 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:07:50.104949   85759 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:07:50.104987   85759 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:07:50.105081   85759 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:07:50.105116   85759 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:07:50.108134   85759 addons.go:234] Setting addon default-storageclass=true in "embed-certs-325116"
	W1104 12:07:50.108161   85759 addons.go:243] addon default-storageclass should already be in state true
	I1104 12:07:50.108193   85759 host.go:66] Checking if "embed-certs-325116" exists ...
	I1104 12:07:50.108597   85759 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:07:50.108648   85759 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:07:50.121556   85759 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39445
	I1104 12:07:50.122038   85759 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:07:50.122504   85759 main.go:141] libmachine: Using API Version  1
	I1104 12:07:50.122527   85759 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:07:50.122869   85759 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:07:50.123107   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetState
	I1104 12:07:50.125142   85759 main.go:141] libmachine: (embed-certs-325116) Calling .DriverName
	I1104 12:07:50.125294   85759 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44771
	I1104 12:07:50.125613   85759 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:07:50.125972   85759 main.go:141] libmachine: Using API Version  1
	I1104 12:07:50.125988   85759 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:07:50.126279   85759 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:07:50.126399   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetState
	I1104 12:07:50.127256   85759 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1104 12:07:50.127993   85759 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40487
	I1104 12:07:50.128235   85759 main.go:141] libmachine: (embed-certs-325116) Calling .DriverName
	I1104 12:07:50.128597   85759 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:07:50.128843   85759 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1104 12:07:50.128864   85759 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1104 12:07:50.128883   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHHostname
	I1104 12:07:50.129066   85759 main.go:141] libmachine: Using API Version  1
	I1104 12:07:50.129088   85759 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:07:50.129389   85759 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:07:50.129882   85759 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1104 12:07:47.619516   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:47.620045   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | unable to find current IP address of domain default-k8s-diff-port-036892 in network mk-default-k8s-diff-port-036892
	I1104 12:07:47.620072   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | I1104 12:07:47.620000   87142 retry.go:31] will retry after 3.554669963s: waiting for machine to come up
	I1104 12:07:50.130127   85759 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:07:50.130187   85759 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:07:50.131115   85759 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1104 12:07:50.131134   85759 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1104 12:07:50.131154   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHHostname
	I1104 12:07:50.131899   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:50.132352   85759 main.go:141] libmachine: (embed-certs-325116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:ab:49", ip: ""} in network mk-embed-certs-325116: {Iface:virbr1 ExpiryTime:2024-11-04 13:07:28 +0000 UTC Type:0 Mac:52:54:00:bd:ab:49 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:embed-certs-325116 Clientid:01:52:54:00:bd:ab:49}
	I1104 12:07:50.132375   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined IP address 192.168.39.47 and MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:50.132664   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHPort
	I1104 12:07:50.132830   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHKeyPath
	I1104 12:07:50.132986   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHUsername
	I1104 12:07:50.133099   85759 sshutil.go:53] new ssh client: &{IP:192.168.39.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/embed-certs-325116/id_rsa Username:docker}
	I1104 12:07:50.134698   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:50.135217   85759 main.go:141] libmachine: (embed-certs-325116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:ab:49", ip: ""} in network mk-embed-certs-325116: {Iface:virbr1 ExpiryTime:2024-11-04 13:07:28 +0000 UTC Type:0 Mac:52:54:00:bd:ab:49 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:embed-certs-325116 Clientid:01:52:54:00:bd:ab:49}
	I1104 12:07:50.135246   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined IP address 192.168.39.47 and MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:50.135454   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHPort
	I1104 12:07:50.135629   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHKeyPath
	I1104 12:07:50.135765   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHUsername
	I1104 12:07:50.135908   85759 sshutil.go:53] new ssh client: &{IP:192.168.39.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/embed-certs-325116/id_rsa Username:docker}
	I1104 12:07:50.146618   85759 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37765
	I1104 12:07:50.147639   85759 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:07:50.148281   85759 main.go:141] libmachine: Using API Version  1
	I1104 12:07:50.148307   85759 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:07:50.148617   85759 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:07:50.148860   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetState
	I1104 12:07:50.150751   85759 main.go:141] libmachine: (embed-certs-325116) Calling .DriverName
	I1104 12:07:50.151010   85759 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1104 12:07:50.151028   85759 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1104 12:07:50.151050   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHHostname
	I1104 12:07:50.153947   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:50.154385   85759 main.go:141] libmachine: (embed-certs-325116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:ab:49", ip: ""} in network mk-embed-certs-325116: {Iface:virbr1 ExpiryTime:2024-11-04 13:07:28 +0000 UTC Type:0 Mac:52:54:00:bd:ab:49 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:embed-certs-325116 Clientid:01:52:54:00:bd:ab:49}
	I1104 12:07:50.154418   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined IP address 192.168.39.47 and MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:50.154560   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHPort
	I1104 12:07:50.154749   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHKeyPath
	I1104 12:07:50.154886   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHUsername
	I1104 12:07:50.155028   85759 sshutil.go:53] new ssh client: &{IP:192.168.39.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/embed-certs-325116/id_rsa Username:docker}
	I1104 12:07:50.278380   85759 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1104 12:07:50.294682   85759 node_ready.go:35] waiting up to 6m0s for node "embed-certs-325116" to be "Ready" ...
	I1104 12:07:50.355769   85759 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1104 12:07:50.355790   85759 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1104 12:07:50.375818   85759 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1104 12:07:50.404741   85759 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1104 12:07:50.404766   85759 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1104 12:07:50.466718   85759 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1104 12:07:50.466748   85759 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1104 12:07:50.493662   85759 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1104 12:07:50.503255   85759 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1104 12:07:50.799735   85759 main.go:141] libmachine: Making call to close driver server
	I1104 12:07:50.799772   85759 main.go:141] libmachine: (embed-certs-325116) Calling .Close
	I1104 12:07:50.800039   85759 main.go:141] libmachine: Successfully made call to close driver server
	I1104 12:07:50.800086   85759 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 12:07:50.800094   85759 main.go:141] libmachine: (embed-certs-325116) DBG | Closing plugin on server side
	I1104 12:07:50.800107   85759 main.go:141] libmachine: Making call to close driver server
	I1104 12:07:50.800159   85759 main.go:141] libmachine: (embed-certs-325116) Calling .Close
	I1104 12:07:50.800382   85759 main.go:141] libmachine: Successfully made call to close driver server
	I1104 12:07:50.800394   85759 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 12:07:50.810559   85759 main.go:141] libmachine: Making call to close driver server
	I1104 12:07:50.810586   85759 main.go:141] libmachine: (embed-certs-325116) Calling .Close
	I1104 12:07:50.810857   85759 main.go:141] libmachine: Successfully made call to close driver server
	I1104 12:07:50.810876   85759 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 12:07:50.810893   85759 main.go:141] libmachine: (embed-certs-325116) DBG | Closing plugin on server side
	I1104 12:07:51.484326   85759 main.go:141] libmachine: Making call to close driver server
	I1104 12:07:51.484356   85759 main.go:141] libmachine: (embed-certs-325116) Calling .Close
	I1104 12:07:51.484671   85759 main.go:141] libmachine: Successfully made call to close driver server
	I1104 12:07:51.484687   85759 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 12:07:51.484695   85759 main.go:141] libmachine: Making call to close driver server
	I1104 12:07:51.484702   85759 main.go:141] libmachine: (embed-certs-325116) Calling .Close
	I1104 12:07:51.484899   85759 main.go:141] libmachine: Successfully made call to close driver server
	I1104 12:07:51.484938   85759 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 12:07:51.484950   85759 addons.go:475] Verifying addon metrics-server=true in "embed-certs-325116"
	I1104 12:07:51.549507   85759 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.046214827s)
	I1104 12:07:51.549559   85759 main.go:141] libmachine: Making call to close driver server
	I1104 12:07:51.549569   85759 main.go:141] libmachine: (embed-certs-325116) Calling .Close
	I1104 12:07:51.549886   85759 main.go:141] libmachine: Successfully made call to close driver server
	I1104 12:07:51.549906   85759 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 12:07:51.549909   85759 main.go:141] libmachine: (embed-certs-325116) DBG | Closing plugin on server side
	I1104 12:07:51.549916   85759 main.go:141] libmachine: Making call to close driver server
	I1104 12:07:51.549923   85759 main.go:141] libmachine: (embed-certs-325116) Calling .Close
	I1104 12:07:51.550143   85759 main.go:141] libmachine: Successfully made call to close driver server
	I1104 12:07:51.550164   85759 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 12:07:51.552039   85759 out.go:177] * Enabled addons: default-storageclass, metrics-server, storage-provisioner
	I1104 12:07:52.573915   86402 start.go:364] duration metric: took 3m30.781955626s to acquireMachinesLock for "old-k8s-version-589257"
	I1104 12:07:52.573984   86402 start.go:96] Skipping create...Using existing machine configuration
	I1104 12:07:52.573996   86402 fix.go:54] fixHost starting: 
	I1104 12:07:52.574443   86402 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:07:52.574500   86402 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:07:52.594310   86402 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33975
	I1104 12:07:52.594822   86402 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:07:52.595317   86402 main.go:141] libmachine: Using API Version  1
	I1104 12:07:52.595347   86402 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:07:52.595727   86402 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:07:52.595924   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .DriverName
	I1104 12:07:52.596093   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetState
	I1104 12:07:52.597578   86402 fix.go:112] recreateIfNeeded on old-k8s-version-589257: state=Stopped err=<nil>
	I1104 12:07:52.597615   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .DriverName
	W1104 12:07:52.597752   86402 fix.go:138] unexpected machine state, will restart: <nil>
	I1104 12:07:52.599659   86402 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-589257" ...
	I1104 12:07:51.176791   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:51.177282   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Found IP for machine: 192.168.72.130
	I1104 12:07:51.177313   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has current primary IP address 192.168.72.130 and MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:51.177325   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Reserving static IP address...
	I1104 12:07:51.177817   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-036892", mac: "52:54:00:da:02:d6", ip: "192.168.72.130"} in network mk-default-k8s-diff-port-036892: {Iface:virbr4 ExpiryTime:2024-11-04 13:07:45 +0000 UTC Type:0 Mac:52:54:00:da:02:d6 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:default-k8s-diff-port-036892 Clientid:01:52:54:00:da:02:d6}
	I1104 12:07:51.177863   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | skip adding static IP to network mk-default-k8s-diff-port-036892 - found existing host DHCP lease matching {name: "default-k8s-diff-port-036892", mac: "52:54:00:da:02:d6", ip: "192.168.72.130"}
	I1104 12:07:51.177876   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Reserved static IP address: 192.168.72.130
	I1104 12:07:51.177890   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Waiting for SSH to be available...
	I1104 12:07:51.177897   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | Getting to WaitForSSH function...
	I1104 12:07:51.180038   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:51.180440   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:02:d6", ip: ""} in network mk-default-k8s-diff-port-036892: {Iface:virbr4 ExpiryTime:2024-11-04 13:07:45 +0000 UTC Type:0 Mac:52:54:00:da:02:d6 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:default-k8s-diff-port-036892 Clientid:01:52:54:00:da:02:d6}
	I1104 12:07:51.180466   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined IP address 192.168.72.130 and MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:51.180581   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | Using SSH client type: external
	I1104 12:07:51.180611   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | Using SSH private key: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/default-k8s-diff-port-036892/id_rsa (-rw-------)
	I1104 12:07:51.180747   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.130 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19906-19898/.minikube/machines/default-k8s-diff-port-036892/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1104 12:07:51.180777   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | About to run SSH command:
	I1104 12:07:51.180795   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | exit 0
	I1104 12:07:51.309075   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | SSH cmd err, output: <nil>: 
	I1104 12:07:51.309445   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetConfigRaw
	I1104 12:07:51.310162   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetIP
	I1104 12:07:51.312651   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:51.313061   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:02:d6", ip: ""} in network mk-default-k8s-diff-port-036892: {Iface:virbr4 ExpiryTime:2024-11-04 13:07:45 +0000 UTC Type:0 Mac:52:54:00:da:02:d6 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:default-k8s-diff-port-036892 Clientid:01:52:54:00:da:02:d6}
	I1104 12:07:51.313090   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined IP address 192.168.72.130 and MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:51.313460   86301 profile.go:143] Saving config to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/default-k8s-diff-port-036892/config.json ...
	I1104 12:07:51.313720   86301 machine.go:93] provisionDockerMachine start ...
	I1104 12:07:51.313747   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .DriverName
	I1104 12:07:51.313926   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHHostname
	I1104 12:07:51.316269   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:51.316782   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:02:d6", ip: ""} in network mk-default-k8s-diff-port-036892: {Iface:virbr4 ExpiryTime:2024-11-04 13:07:45 +0000 UTC Type:0 Mac:52:54:00:da:02:d6 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:default-k8s-diff-port-036892 Clientid:01:52:54:00:da:02:d6}
	I1104 12:07:51.316829   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined IP address 192.168.72.130 and MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:51.316937   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHPort
	I1104 12:07:51.317162   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHKeyPath
	I1104 12:07:51.317335   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHKeyPath
	I1104 12:07:51.317598   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHUsername
	I1104 12:07:51.317777   86301 main.go:141] libmachine: Using SSH client type: native
	I1104 12:07:51.317981   86301 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.130 22 <nil> <nil>}
	I1104 12:07:51.317994   86301 main.go:141] libmachine: About to run SSH command:
	hostname
	I1104 12:07:51.441588   86301 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1104 12:07:51.441626   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetMachineName
	I1104 12:07:51.441876   86301 buildroot.go:166] provisioning hostname "default-k8s-diff-port-036892"
	I1104 12:07:51.441902   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetMachineName
	I1104 12:07:51.442097   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHHostname
	I1104 12:07:51.445155   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:51.445637   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:02:d6", ip: ""} in network mk-default-k8s-diff-port-036892: {Iface:virbr4 ExpiryTime:2024-11-04 13:07:45 +0000 UTC Type:0 Mac:52:54:00:da:02:d6 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:default-k8s-diff-port-036892 Clientid:01:52:54:00:da:02:d6}
	I1104 12:07:51.445670   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined IP address 192.168.72.130 and MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:51.445820   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHPort
	I1104 12:07:51.446013   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHKeyPath
	I1104 12:07:51.446186   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHKeyPath
	I1104 12:07:51.446352   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHUsername
	I1104 12:07:51.446539   86301 main.go:141] libmachine: Using SSH client type: native
	I1104 12:07:51.446753   86301 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.130 22 <nil> <nil>}
	I1104 12:07:51.446773   86301 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-036892 && echo "default-k8s-diff-port-036892" | sudo tee /etc/hostname
	I1104 12:07:51.578973   86301 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-036892
	
	I1104 12:07:51.579004   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHHostname
	I1104 12:07:51.581759   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:51.582105   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:02:d6", ip: ""} in network mk-default-k8s-diff-port-036892: {Iface:virbr4 ExpiryTime:2024-11-04 13:07:45 +0000 UTC Type:0 Mac:52:54:00:da:02:d6 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:default-k8s-diff-port-036892 Clientid:01:52:54:00:da:02:d6}
	I1104 12:07:51.582135   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined IP address 192.168.72.130 and MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:51.582299   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHPort
	I1104 12:07:51.582455   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHKeyPath
	I1104 12:07:51.582582   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHKeyPath
	I1104 12:07:51.582712   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHUsername
	I1104 12:07:51.582834   86301 main.go:141] libmachine: Using SSH client type: native
	I1104 12:07:51.583006   86301 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.130 22 <nil> <nil>}
	I1104 12:07:51.583022   86301 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-036892' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-036892/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-036892' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1104 12:07:51.702410   86301 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1104 12:07:51.702441   86301 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19906-19898/.minikube CaCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19906-19898/.minikube}
	I1104 12:07:51.702471   86301 buildroot.go:174] setting up certificates
	I1104 12:07:51.702483   86301 provision.go:84] configureAuth start
	I1104 12:07:51.702492   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetMachineName
	I1104 12:07:51.702789   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetIP
	I1104 12:07:51.705067   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:51.705419   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:02:d6", ip: ""} in network mk-default-k8s-diff-port-036892: {Iface:virbr4 ExpiryTime:2024-11-04 13:07:45 +0000 UTC Type:0 Mac:52:54:00:da:02:d6 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:default-k8s-diff-port-036892 Clientid:01:52:54:00:da:02:d6}
	I1104 12:07:51.705449   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined IP address 192.168.72.130 and MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:51.705567   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHHostname
	I1104 12:07:51.707341   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:51.707627   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:02:d6", ip: ""} in network mk-default-k8s-diff-port-036892: {Iface:virbr4 ExpiryTime:2024-11-04 13:07:45 +0000 UTC Type:0 Mac:52:54:00:da:02:d6 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:default-k8s-diff-port-036892 Clientid:01:52:54:00:da:02:d6}
	I1104 12:07:51.707658   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined IP address 192.168.72.130 and MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:51.707748   86301 provision.go:143] copyHostCerts
	I1104 12:07:51.707805   86301 exec_runner.go:144] found /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem, removing ...
	I1104 12:07:51.707818   86301 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem
	I1104 12:07:51.707870   86301 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem (1078 bytes)
	I1104 12:07:51.707969   86301 exec_runner.go:144] found /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem, removing ...
	I1104 12:07:51.707978   86301 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem
	I1104 12:07:51.707999   86301 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem (1123 bytes)
	I1104 12:07:51.708061   86301 exec_runner.go:144] found /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem, removing ...
	I1104 12:07:51.708067   86301 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem
	I1104 12:07:51.708085   86301 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem (1679 bytes)
	I1104 12:07:51.708132   86301 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-036892 san=[127.0.0.1 192.168.72.130 default-k8s-diff-port-036892 localhost minikube]
	I1104 12:07:51.935898   86301 provision.go:177] copyRemoteCerts
	I1104 12:07:51.935973   86301 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1104 12:07:51.936008   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHHostname
	I1104 12:07:51.938722   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:51.939100   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:02:d6", ip: ""} in network mk-default-k8s-diff-port-036892: {Iface:virbr4 ExpiryTime:2024-11-04 13:07:45 +0000 UTC Type:0 Mac:52:54:00:da:02:d6 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:default-k8s-diff-port-036892 Clientid:01:52:54:00:da:02:d6}
	I1104 12:07:51.939134   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined IP address 192.168.72.130 and MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:51.939266   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHPort
	I1104 12:07:51.939462   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHKeyPath
	I1104 12:07:51.939609   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHUsername
	I1104 12:07:51.939786   86301 sshutil.go:53] new ssh client: &{IP:192.168.72.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/default-k8s-diff-port-036892/id_rsa Username:docker}
	I1104 12:07:52.027147   86301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1104 12:07:52.054828   86301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1104 12:07:52.078755   86301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1104 12:07:52.101312   86301 provision.go:87] duration metric: took 398.817409ms to configureAuth
	I1104 12:07:52.101338   86301 buildroot.go:189] setting minikube options for container-runtime
	I1104 12:07:52.101523   86301 config.go:182] Loaded profile config "default-k8s-diff-port-036892": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1104 12:07:52.101608   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHHostname
	I1104 12:07:52.104187   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:52.104549   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:02:d6", ip: ""} in network mk-default-k8s-diff-port-036892: {Iface:virbr4 ExpiryTime:2024-11-04 13:07:45 +0000 UTC Type:0 Mac:52:54:00:da:02:d6 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:default-k8s-diff-port-036892 Clientid:01:52:54:00:da:02:d6}
	I1104 12:07:52.104581   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined IP address 192.168.72.130 and MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:52.104700   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHPort
	I1104 12:07:52.104890   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHKeyPath
	I1104 12:07:52.105028   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHKeyPath
	I1104 12:07:52.105157   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHUsername
	I1104 12:07:52.105319   86301 main.go:141] libmachine: Using SSH client type: native
	I1104 12:07:52.105490   86301 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.130 22 <nil> <nil>}
	I1104 12:07:52.105514   86301 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1104 12:07:52.331840   86301 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1104 12:07:52.331865   86301 machine.go:96] duration metric: took 1.018128337s to provisionDockerMachine
	I1104 12:07:52.331875   86301 start.go:293] postStartSetup for "default-k8s-diff-port-036892" (driver="kvm2")
	I1104 12:07:52.331884   86301 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1104 12:07:52.331898   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .DriverName
	I1104 12:07:52.332229   86301 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1104 12:07:52.332261   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHHostname
	I1104 12:07:52.334710   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:52.335005   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:02:d6", ip: ""} in network mk-default-k8s-diff-port-036892: {Iface:virbr4 ExpiryTime:2024-11-04 13:07:45 +0000 UTC Type:0 Mac:52:54:00:da:02:d6 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:default-k8s-diff-port-036892 Clientid:01:52:54:00:da:02:d6}
	I1104 12:07:52.335036   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined IP address 192.168.72.130 and MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:52.335176   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHPort
	I1104 12:07:52.335342   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHKeyPath
	I1104 12:07:52.335447   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHUsername
	I1104 12:07:52.335547   86301 sshutil.go:53] new ssh client: &{IP:192.168.72.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/default-k8s-diff-port-036892/id_rsa Username:docker}
	I1104 12:07:52.419392   86301 ssh_runner.go:195] Run: cat /etc/os-release
	I1104 12:07:52.423306   86301 info.go:137] Remote host: Buildroot 2023.02.9
	I1104 12:07:52.423335   86301 filesync.go:126] Scanning /home/jenkins/minikube-integration/19906-19898/.minikube/addons for local assets ...
	I1104 12:07:52.423396   86301 filesync.go:126] Scanning /home/jenkins/minikube-integration/19906-19898/.minikube/files for local assets ...
	I1104 12:07:52.423483   86301 filesync.go:149] local asset: /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem -> 272182.pem in /etc/ssl/certs
	I1104 12:07:52.423575   86301 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1104 12:07:52.432625   86301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem --> /etc/ssl/certs/272182.pem (1708 bytes)
	I1104 12:07:52.456616   86301 start.go:296] duration metric: took 124.726284ms for postStartSetup
	I1104 12:07:52.456664   86301 fix.go:56] duration metric: took 17.406645021s for fixHost
	I1104 12:07:52.456689   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHHostname
	I1104 12:07:52.459189   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:52.459540   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:02:d6", ip: ""} in network mk-default-k8s-diff-port-036892: {Iface:virbr4 ExpiryTime:2024-11-04 13:07:45 +0000 UTC Type:0 Mac:52:54:00:da:02:d6 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:default-k8s-diff-port-036892 Clientid:01:52:54:00:da:02:d6}
	I1104 12:07:52.459573   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined IP address 192.168.72.130 and MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:52.459777   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHPort
	I1104 12:07:52.459967   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHKeyPath
	I1104 12:07:52.460093   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHKeyPath
	I1104 12:07:52.460218   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHUsername
	I1104 12:07:52.460349   86301 main.go:141] libmachine: Using SSH client type: native
	I1104 12:07:52.460521   86301 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.130 22 <nil> <nil>}
	I1104 12:07:52.460533   86301 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1104 12:07:52.573760   86301 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730722072.546172571
	
	I1104 12:07:52.573781   86301 fix.go:216] guest clock: 1730722072.546172571
	I1104 12:07:52.573787   86301 fix.go:229] Guest: 2024-11-04 12:07:52.546172571 +0000 UTC Remote: 2024-11-04 12:07:52.45666981 +0000 UTC m=+212.207079326 (delta=89.502761ms)
	I1104 12:07:52.573827   86301 fix.go:200] guest clock delta is within tolerance: 89.502761ms
	I1104 12:07:52.573832   86301 start.go:83] releasing machines lock for "default-k8s-diff-port-036892", held for 17.523849814s
	I1104 12:07:52.573856   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .DriverName
	I1104 12:07:52.574109   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetIP
	I1104 12:07:52.576773   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:52.577125   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:02:d6", ip: ""} in network mk-default-k8s-diff-port-036892: {Iface:virbr4 ExpiryTime:2024-11-04 13:07:45 +0000 UTC Type:0 Mac:52:54:00:da:02:d6 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:default-k8s-diff-port-036892 Clientid:01:52:54:00:da:02:d6}
	I1104 12:07:52.577151   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined IP address 192.168.72.130 and MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:52.577304   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .DriverName
	I1104 12:07:52.577776   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .DriverName
	I1104 12:07:52.577950   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .DriverName
	I1104 12:07:52.578043   86301 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1104 12:07:52.578079   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHHostname
	I1104 12:07:52.578133   86301 ssh_runner.go:195] Run: cat /version.json
	I1104 12:07:52.578159   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHHostname
	I1104 12:07:52.580773   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:52.580909   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:52.581128   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:02:d6", ip: ""} in network mk-default-k8s-diff-port-036892: {Iface:virbr4 ExpiryTime:2024-11-04 13:07:45 +0000 UTC Type:0 Mac:52:54:00:da:02:d6 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:default-k8s-diff-port-036892 Clientid:01:52:54:00:da:02:d6}
	I1104 12:07:52.581154   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined IP address 192.168.72.130 and MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:52.581179   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:02:d6", ip: ""} in network mk-default-k8s-diff-port-036892: {Iface:virbr4 ExpiryTime:2024-11-04 13:07:45 +0000 UTC Type:0 Mac:52:54:00:da:02:d6 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:default-k8s-diff-port-036892 Clientid:01:52:54:00:da:02:d6}
	I1104 12:07:52.581196   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined IP address 192.168.72.130 and MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:52.581286   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHPort
	I1104 12:07:52.581488   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHPort
	I1104 12:07:52.581529   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHKeyPath
	I1104 12:07:52.581660   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHUsername
	I1104 12:07:52.581677   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHKeyPath
	I1104 12:07:52.581770   86301 sshutil.go:53] new ssh client: &{IP:192.168.72.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/default-k8s-diff-port-036892/id_rsa Username:docker}
	I1104 12:07:52.581823   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHUsername
	I1104 12:07:52.581946   86301 sshutil.go:53] new ssh client: &{IP:192.168.72.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/default-k8s-diff-port-036892/id_rsa Username:docker}
	I1104 12:07:52.683801   86301 ssh_runner.go:195] Run: systemctl --version
	I1104 12:07:52.689498   86301 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1104 12:07:52.830236   86301 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1104 12:07:52.835868   86301 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1104 12:07:52.835951   86301 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1104 12:07:52.851557   86301 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1104 12:07:52.851586   86301 start.go:495] detecting cgroup driver to use...
	I1104 12:07:52.851656   86301 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1104 12:07:52.868648   86301 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1104 12:07:52.883434   86301 docker.go:217] disabling cri-docker service (if available) ...
	I1104 12:07:52.883507   86301 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1104 12:07:52.898233   86301 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1104 12:07:52.912615   86301 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1104 12:07:53.036342   86301 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1104 12:07:53.183326   86301 docker.go:233] disabling docker service ...
	I1104 12:07:53.183407   86301 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1104 12:07:53.197465   86301 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1104 12:07:53.210118   86301 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1104 12:07:53.354857   86301 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1104 12:07:53.490760   86301 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1104 12:07:53.506829   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1104 12:07:53.526401   86301 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1104 12:07:53.526464   86301 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:07:53.537264   86301 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1104 12:07:53.537339   86301 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:07:53.547882   86301 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:07:53.558039   86301 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:07:53.569347   86301 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1104 12:07:53.579931   86301 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:07:53.589594   86301 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:07:53.606753   86301 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:07:53.623316   86301 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1104 12:07:53.638183   86301 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1104 12:07:53.638245   86301 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1104 12:07:53.656452   86301 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1104 12:07:53.666343   86301 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1104 12:07:53.784882   86301 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1104 12:07:53.879727   86301 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1104 12:07:53.879790   86301 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1104 12:07:53.884438   86301 start.go:563] Will wait 60s for crictl version
	I1104 12:07:53.884494   86301 ssh_runner.go:195] Run: which crictl
	I1104 12:07:53.887785   86301 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1104 12:07:53.926395   86301 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1104 12:07:53.926496   86301 ssh_runner.go:195] Run: crio --version
	I1104 12:07:53.963049   86301 ssh_runner.go:195] Run: crio --version
	I1104 12:07:53.996513   86301 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1104 12:07:53.997774   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetIP
	I1104 12:07:54.000829   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:54.001214   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:02:d6", ip: ""} in network mk-default-k8s-diff-port-036892: {Iface:virbr4 ExpiryTime:2024-11-04 13:07:45 +0000 UTC Type:0 Mac:52:54:00:da:02:d6 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:default-k8s-diff-port-036892 Clientid:01:52:54:00:da:02:d6}
	I1104 12:07:54.001300   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined IP address 192.168.72.130 and MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:54.001469   86301 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1104 12:07:54.005521   86301 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1104 12:07:54.021723   86301 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-036892 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-036892 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.130 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1104 12:07:54.021915   86301 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1104 12:07:54.021979   86301 ssh_runner.go:195] Run: sudo crictl images --output json
	I1104 12:07:54.072114   86301 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1104 12:07:54.072178   86301 ssh_runner.go:195] Run: which lz4
	I1104 12:07:54.077106   86301 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1104 12:07:54.081979   86301 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1104 12:07:54.082018   86301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1104 12:07:51.553141   85759 addons.go:510] duration metric: took 1.466523338s for enable addons: enabled=[default-storageclass metrics-server storage-provisioner]
	I1104 12:07:52.298494   85759 node_ready.go:53] node "embed-certs-325116" has status "Ready":"False"
	I1104 12:07:54.299895   85759 node_ready.go:53] node "embed-certs-325116" has status "Ready":"False"
	I1104 12:07:52.600997   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .Start
	I1104 12:07:52.601180   86402 main.go:141] libmachine: (old-k8s-version-589257) Ensuring networks are active...
	I1104 12:07:52.602131   86402 main.go:141] libmachine: (old-k8s-version-589257) Ensuring network default is active
	I1104 12:07:52.602560   86402 main.go:141] libmachine: (old-k8s-version-589257) Ensuring network mk-old-k8s-version-589257 is active
	I1104 12:07:52.603030   86402 main.go:141] libmachine: (old-k8s-version-589257) Getting domain xml...
	I1104 12:07:52.603859   86402 main.go:141] libmachine: (old-k8s-version-589257) Creating domain...
	I1104 12:07:53.855214   86402 main.go:141] libmachine: (old-k8s-version-589257) Waiting to get IP...
	I1104 12:07:53.856063   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:07:53.856539   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | unable to find current IP address of domain old-k8s-version-589257 in network mk-old-k8s-version-589257
	I1104 12:07:53.856594   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | I1104 12:07:53.856513   87367 retry.go:31] will retry after 268.725451ms: waiting for machine to come up
	I1104 12:07:54.127094   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:07:54.127584   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | unable to find current IP address of domain old-k8s-version-589257 in network mk-old-k8s-version-589257
	I1104 12:07:54.127612   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | I1104 12:07:54.127560   87367 retry.go:31] will retry after 239.665225ms: waiting for machine to come up
	I1104 12:07:54.369139   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:07:54.369777   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | unable to find current IP address of domain old-k8s-version-589257 in network mk-old-k8s-version-589257
	I1104 12:07:54.369798   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | I1104 12:07:54.369710   87367 retry.go:31] will retry after 386.228261ms: waiting for machine to come up
	I1104 12:07:54.757191   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:07:54.757637   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | unable to find current IP address of domain old-k8s-version-589257 in network mk-old-k8s-version-589257
	I1104 12:07:54.757665   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | I1104 12:07:54.757591   87367 retry.go:31] will retry after 571.244573ms: waiting for machine to come up
	I1104 12:07:55.330439   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:07:55.331187   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | unable to find current IP address of domain old-k8s-version-589257 in network mk-old-k8s-version-589257
	I1104 12:07:55.331216   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | I1104 12:07:55.331144   87367 retry.go:31] will retry after 539.328185ms: waiting for machine to come up
	I1104 12:07:55.871869   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:07:55.872373   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | unable to find current IP address of domain old-k8s-version-589257 in network mk-old-k8s-version-589257
	I1104 12:07:55.872403   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | I1104 12:07:55.872335   87367 retry.go:31] will retry after 879.285089ms: waiting for machine to come up
	I1104 12:07:55.376802   86301 crio.go:462] duration metric: took 1.299729399s to copy over tarball
	I1104 12:07:55.376881   86301 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1104 12:07:57.716230   86301 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.339307666s)
	I1104 12:07:57.716268   86301 crio.go:469] duration metric: took 2.339436958s to extract the tarball
	I1104 12:07:57.716277   86301 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1104 12:07:57.753216   86301 ssh_runner.go:195] Run: sudo crictl images --output json
	I1104 12:07:57.799042   86301 crio.go:514] all images are preloaded for cri-o runtime.
	I1104 12:07:57.799145   86301 cache_images.go:84] Images are preloaded, skipping loading
	I1104 12:07:57.799161   86301 kubeadm.go:934] updating node { 192.168.72.130 8444 v1.31.2 crio true true} ...
	I1104 12:07:57.799273   86301 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-036892 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.130
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-036892 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1104 12:07:57.799347   86301 ssh_runner.go:195] Run: crio config
	I1104 12:07:57.851871   86301 cni.go:84] Creating CNI manager for ""
	I1104 12:07:57.851892   86301 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1104 12:07:57.851900   86301 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1104 12:07:57.851919   86301 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.130 APIServerPort:8444 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-036892 NodeName:default-k8s-diff-port-036892 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.130"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.130 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1104 12:07:57.852056   86301 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.130
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-036892"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.130"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.130"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
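Note on the KubeletConfiguration above: the "# disable disk resource management by default" block sets imageGCHighThresholdPercent to 100 and every evictionHard threshold to 0%, so the test VM never evicts pods for disk pressure. Below is a minimal, hypothetical Go sketch (not minikube code) that parses the generated /var/tmp/minikube/kubeadm.yaml and prints those fields, assuming gopkg.in/yaml.v3 is available:

// Hypothetical helper: inspect the KubeletConfiguration document in the
// multi-document kubeadm.yaml shown in the log above. Field names follow
// kubelet.config.k8s.io/v1beta1; file path is taken from the log.
package main

import (
	"fmt"
	"log"
	"os"
	"strings"

	"gopkg.in/yaml.v3"
)

type kubeletConfig struct {
	Kind                        string            `yaml:"kind"`
	ImageGCHighThresholdPercent int               `yaml:"imageGCHighThresholdPercent"`
	EvictionHard                map[string]string `yaml:"evictionHard"`
}

func main() {
	raw, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml")
	if err != nil {
		log.Fatal(err)
	}
	// The file is a YAML stream; pick out the KubeletConfiguration document.
	for _, doc := range strings.Split(string(raw), "\n---\n") {
		var kc kubeletConfig
		if err := yaml.Unmarshal([]byte(doc), &kc); err != nil || kc.Kind != "KubeletConfiguration" {
			continue
		}
		fmt.Printf("imageGCHighThresholdPercent=%d evictionHard=%v\n",
			kc.ImageGCHighThresholdPercent, kc.EvictionHard)
	}
}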
	
	I1104 12:07:57.852116   86301 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1104 12:07:57.862269   86301 binaries.go:44] Found k8s binaries, skipping transfer
	I1104 12:07:57.862343   86301 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1104 12:07:57.872253   86301 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1104 12:07:57.889328   86301 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1104 12:07:57.908250   86301 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2308 bytes)
	I1104 12:07:57.926081   86301 ssh_runner.go:195] Run: grep 192.168.72.130	control-plane.minikube.internal$ /etc/hosts
	I1104 12:07:57.929870   86301 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.130	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1104 12:07:57.943872   86301 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1104 12:07:58.070141   86301 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1104 12:07:58.089370   86301 certs.go:68] Setting up /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/default-k8s-diff-port-036892 for IP: 192.168.72.130
	I1104 12:07:58.089397   86301 certs.go:194] generating shared ca certs ...
	I1104 12:07:58.089423   86301 certs.go:226] acquiring lock for ca certs: {Name:mk4fb0469da6697654d0290fd25edddd463f47c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 12:07:58.089596   86301 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.key
	I1104 12:07:58.089647   86301 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.key
	I1104 12:07:58.089659   86301 certs.go:256] generating profile certs ...
	I1104 12:07:58.089765   86301 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/default-k8s-diff-port-036892/client.key
	I1104 12:07:58.089831   86301 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/default-k8s-diff-port-036892/apiserver.key.713851b2
	I1104 12:07:58.089889   86301 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/default-k8s-diff-port-036892/proxy-client.key
	I1104 12:07:58.090054   86301 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218.pem (1338 bytes)
	W1104 12:07:58.090100   86301 certs.go:480] ignoring /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218_empty.pem, impossibly tiny 0 bytes
	I1104 12:07:58.090116   86301 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem (1675 bytes)
	I1104 12:07:58.090149   86301 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem (1078 bytes)
	I1104 12:07:58.090184   86301 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem (1123 bytes)
	I1104 12:07:58.090219   86301 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem (1679 bytes)
	I1104 12:07:58.090279   86301 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem (1708 bytes)
	I1104 12:07:58.090977   86301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1104 12:07:58.125282   86301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1104 12:07:58.168289   86301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1104 12:07:58.210967   86301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1104 12:07:58.253986   86301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/default-k8s-diff-port-036892/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1104 12:07:58.280769   86301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/default-k8s-diff-port-036892/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1104 12:07:58.308406   86301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/default-k8s-diff-port-036892/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1104 12:07:58.334250   86301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/default-k8s-diff-port-036892/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1104 12:07:58.363224   86301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem --> /usr/share/ca-certificates/272182.pem (1708 bytes)
	I1104 12:07:58.391795   86301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1104 12:07:58.420782   86301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218.pem --> /usr/share/ca-certificates/27218.pem (1338 bytes)
	I1104 12:07:58.446611   86301 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1104 12:07:58.465895   86301 ssh_runner.go:195] Run: openssl version
	I1104 12:07:58.471614   86301 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/27218.pem && ln -fs /usr/share/ca-certificates/27218.pem /etc/ssl/certs/27218.pem"
	I1104 12:07:58.482139   86301 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/27218.pem
	I1104 12:07:58.486533   86301 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  4 10:49 /usr/share/ca-certificates/27218.pem
	I1104 12:07:58.486591   86301 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/27218.pem
	I1104 12:07:58.492217   86301 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/27218.pem /etc/ssl/certs/51391683.0"
	I1104 12:07:58.502724   86301 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/272182.pem && ln -fs /usr/share/ca-certificates/272182.pem /etc/ssl/certs/272182.pem"
	I1104 12:07:58.514146   86301 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/272182.pem
	I1104 12:07:58.518243   86301 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  4 10:49 /usr/share/ca-certificates/272182.pem
	I1104 12:07:58.518303   86301 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/272182.pem
	I1104 12:07:58.523579   86301 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/272182.pem /etc/ssl/certs/3ec20f2e.0"
	I1104 12:07:58.533993   86301 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1104 12:07:58.544137   86301 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1104 12:07:58.548190   86301 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  4 10:38 /usr/share/ca-certificates/minikubeCA.pem
	I1104 12:07:58.548250   86301 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1104 12:07:58.553714   86301 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1104 12:07:58.564221   86301 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1104 12:07:58.568445   86301 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1104 12:07:58.574072   86301 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1104 12:07:58.579551   86301 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1104 12:07:58.584909   86301 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1104 12:07:58.590102   86301 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1104 12:07:58.595227   86301 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
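Each `openssl x509 -noout -in <cert> -checkend 86400` run above asks whether the certificate expires within the next 24 hours (86400 seconds); a non-zero exit would force the cert to be regenerated. A minimal sketch of the same check in Go using only the standard library, with one certificate path from the log as an example:

// Sketch only: equivalent of `openssl x509 -noout -in <cert> -checkend 86400`.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// -checkend 86400: fail if the cert expires within the next 24 hours.
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate will expire within 24h:", cert.NotAfter)
		os.Exit(1)
	}
	fmt.Println("certificate is valid past 24h:", cert.NotAfter)
}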
	I1104 12:07:58.600338   86301 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-036892 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-036892 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.130 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1104 12:07:58.600445   86301 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1104 12:07:58.600492   86301 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1104 12:07:58.634282   86301 cri.go:89] found id: ""
	I1104 12:07:58.634352   86301 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1104 12:07:58.644578   86301 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1104 12:07:58.644597   86301 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1104 12:07:58.644635   86301 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1104 12:07:58.654412   86301 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1104 12:07:58.655638   86301 kubeconfig.go:125] found "default-k8s-diff-port-036892" server: "https://192.168.72.130:8444"
	I1104 12:07:58.658639   86301 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1104 12:07:58.667867   86301 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.130
	I1104 12:07:58.667900   86301 kubeadm.go:1160] stopping kube-system containers ...
	I1104 12:07:58.667913   86301 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1104 12:07:58.667971   86301 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1104 12:07:58.702765   86301 cri.go:89] found id: ""
	I1104 12:07:58.702844   86301 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1104 12:07:58.718368   86301 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1104 12:07:58.727671   86301 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1104 12:07:58.727690   86301 kubeadm.go:157] found existing configuration files:
	
	I1104 12:07:58.727750   86301 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1104 12:07:58.736350   86301 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1104 12:07:58.736424   86301 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1104 12:07:58.745441   86301 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1104 12:07:58.753945   86301 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1104 12:07:58.754011   86301 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1104 12:07:58.763134   86301 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1104 12:07:58.771588   86301 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1104 12:07:58.771651   86301 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1104 12:07:58.780623   86301 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1104 12:07:58.788962   86301 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1104 12:07:58.789036   86301 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1104 12:07:58.798472   86301 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1104 12:07:58.808209   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1104 12:07:58.919153   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1104 12:07:59.679355   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1104 12:07:59.889628   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1104 12:07:59.958981   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1104 12:08:00.048061   86301 api_server.go:52] waiting for apiserver process to appear ...
	I1104 12:08:00.048158   86301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:07:56.798747   85759 node_ready.go:53] node "embed-certs-325116" has status "Ready":"False"
	I1104 12:07:57.799286   85759 node_ready.go:49] node "embed-certs-325116" has status "Ready":"True"
	I1104 12:07:57.799308   85759 node_ready.go:38] duration metric: took 7.504592975s for node "embed-certs-325116" to be "Ready" ...
	I1104 12:07:57.799319   85759 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1104 12:07:57.805595   85759 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-mf8xg" in "kube-system" namespace to be "Ready" ...
	I1104 12:07:57.812394   85759 pod_ready.go:93] pod "coredns-7c65d6cfc9-mf8xg" in "kube-system" namespace has status "Ready":"True"
	I1104 12:07:57.812421   85759 pod_ready.go:82] duration metric: took 6.791823ms for pod "coredns-7c65d6cfc9-mf8xg" in "kube-system" namespace to be "Ready" ...
	I1104 12:07:57.812434   85759 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-325116" in "kube-system" namespace to be "Ready" ...
	I1104 12:07:57.818338   85759 pod_ready.go:93] pod "etcd-embed-certs-325116" in "kube-system" namespace has status "Ready":"True"
	I1104 12:07:57.818359   85759 pod_ready.go:82] duration metric: took 5.916571ms for pod "etcd-embed-certs-325116" in "kube-system" namespace to be "Ready" ...
	I1104 12:07:57.818400   85759 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-325116" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:00.015222   85759 pod_ready.go:103] pod "kube-apiserver-embed-certs-325116" in "kube-system" namespace has status "Ready":"False"
	I1104 12:07:56.752983   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:07:56.753577   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | unable to find current IP address of domain old-k8s-version-589257 in network mk-old-k8s-version-589257
	I1104 12:07:56.753613   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | I1104 12:07:56.753542   87367 retry.go:31] will retry after 1.081359862s: waiting for machine to come up
	I1104 12:07:57.836518   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:07:57.836963   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | unable to find current IP address of domain old-k8s-version-589257 in network mk-old-k8s-version-589257
	I1104 12:07:57.836990   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | I1104 12:07:57.836914   87367 retry.go:31] will retry after 1.149571097s: waiting for machine to come up
	I1104 12:07:58.987694   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:07:58.988125   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | unable to find current IP address of domain old-k8s-version-589257 in network mk-old-k8s-version-589257
	I1104 12:07:58.988152   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | I1104 12:07:58.988077   87367 retry.go:31] will retry after 1.247311806s: waiting for machine to come up
	I1104 12:08:00.237634   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:00.238147   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | unable to find current IP address of domain old-k8s-version-589257 in network mk-old-k8s-version-589257
	I1104 12:08:00.238217   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | I1104 12:08:00.238109   87367 retry.go:31] will retry after 2.058125339s: waiting for machine to come up
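The retry.go lines above show libmachine waiting for the old-k8s-version-589257 domain to obtain an IP address, backing off with growing, jittered delays between lookups. A rough, hypothetical sketch of that retry-with-backoff pattern (names and delays are illustrative, not minikube's actual retry package):

// Illustrative retry loop: each failed lookup is retried after a randomized,
// growing delay, similar to the uneven 1.08s, 1.15s, 1.25s, 2.06s waits above.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func retry(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		sleep := base*time.Duration(i+1) + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
	}
	return err
}

func main() {
	_ = retry(5, time.Second, func() error {
		return errors.New("unable to find current IP address of domain")
	})
}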
	I1104 12:08:00.549003   86301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:01.048325   86301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:01.548502   86301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:01.563976   86301 api_server.go:72] duration metric: took 1.515915725s to wait for apiserver process to appear ...
	I1104 12:08:01.564003   86301 api_server.go:88] waiting for apiserver healthz status ...
	I1104 12:08:01.564021   86301 api_server.go:253] Checking apiserver healthz at https://192.168.72.130:8444/healthz ...
	I1104 12:08:04.008662   86301 api_server.go:279] https://192.168.72.130:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1104 12:08:04.008689   86301 api_server.go:103] status: https://192.168.72.130:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1104 12:08:04.008701   86301 api_server.go:253] Checking apiserver healthz at https://192.168.72.130:8444/healthz ...
	I1104 12:08:04.033053   86301 api_server.go:279] https://192.168.72.130:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1104 12:08:04.033085   86301 api_server.go:103] status: https://192.168.72.130:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1104 12:08:04.064261   86301 api_server.go:253] Checking apiserver healthz at https://192.168.72.130:8444/healthz ...
	I1104 12:08:04.084034   86301 api_server.go:279] https://192.168.72.130:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1104 12:08:04.084062   86301 api_server.go:103] status: https://192.168.72.130:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1104 12:08:04.564564   86301 api_server.go:253] Checking apiserver healthz at https://192.168.72.130:8444/healthz ...
	I1104 12:08:04.570062   86301 api_server.go:279] https://192.168.72.130:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1104 12:08:04.570090   86301 api_server.go:103] status: https://192.168.72.130:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1104 12:08:05.064688   86301 api_server.go:253] Checking apiserver healthz at https://192.168.72.130:8444/healthz ...
	I1104 12:08:05.069572   86301 api_server.go:279] https://192.168.72.130:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1104 12:08:05.069600   86301 api_server.go:103] status: https://192.168.72.130:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1104 12:08:05.564628   86301 api_server.go:253] Checking apiserver healthz at https://192.168.72.130:8444/healthz ...
	I1104 12:08:05.570537   86301 api_server.go:279] https://192.168.72.130:8444/healthz returned 200:
	ok
	I1104 12:08:05.577335   86301 api_server.go:141] control plane version: v1.31.2
	I1104 12:08:05.577360   86301 api_server.go:131] duration metric: took 4.01335048s to wait for apiserver health ...
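The polling above tolerates 403 (the probe is unauthenticated, so "system:anonymous" is rejected) and 500 (post-start hooks such as rbac/bootstrap-roles are still completing) until /healthz finally returns 200. A minimal sketch of such a loop; TLS verification is skipped here purely for illustration, whereas minikube itself uses the cluster's client certificates:

// Sketch only: poll https://<ip>:8444/healthz until it returns 200, treating
// 403 and 500 responses as "not ready yet".
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for {
		resp, err := client.Get("https://192.168.72.130:8444/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy:", string(body))
				return
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
	}
}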
	I1104 12:08:05.577371   86301 cni.go:84] Creating CNI manager for ""
	I1104 12:08:05.577379   86301 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1104 12:08:05.578990   86301 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1104 12:08:01.824677   85759 pod_ready.go:93] pod "kube-apiserver-embed-certs-325116" in "kube-system" namespace has status "Ready":"True"
	I1104 12:08:01.824703   85759 pod_ready.go:82] duration metric: took 4.006292816s for pod "kube-apiserver-embed-certs-325116" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:01.824717   85759 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-325116" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:01.833386   85759 pod_ready.go:93] pod "kube-controller-manager-embed-certs-325116" in "kube-system" namespace has status "Ready":"True"
	I1104 12:08:01.833415   85759 pod_ready.go:82] duration metric: took 8.688522ms for pod "kube-controller-manager-embed-certs-325116" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:01.833428   85759 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-phzgx" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:01.839346   85759 pod_ready.go:93] pod "kube-proxy-phzgx" in "kube-system" namespace has status "Ready":"True"
	I1104 12:08:01.839370   85759 pod_ready.go:82] duration metric: took 5.933971ms for pod "kube-proxy-phzgx" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:01.839379   85759 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-325116" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:01.844449   85759 pod_ready.go:93] pod "kube-scheduler-embed-certs-325116" in "kube-system" namespace has status "Ready":"True"
	I1104 12:08:01.844476   85759 pod_ready.go:82] duration metric: took 5.08969ms for pod "kube-scheduler-embed-certs-325116" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:01.844490   85759 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:03.852871   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:02.298631   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:02.299046   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | unable to find current IP address of domain old-k8s-version-589257 in network mk-old-k8s-version-589257
	I1104 12:08:02.299079   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | I1104 12:08:02.298978   87367 retry.go:31] will retry after 2.664667046s: waiting for machine to come up
	I1104 12:08:04.964700   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:04.965185   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | unable to find current IP address of domain old-k8s-version-589257 in network mk-old-k8s-version-589257
	I1104 12:08:04.965209   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | I1104 12:08:04.965135   87367 retry.go:31] will retry after 2.716802395s: waiting for machine to come up
	I1104 12:08:05.580188   86301 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1104 12:08:05.591930   86301 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1104 12:08:05.609969   86301 system_pods.go:43] waiting for kube-system pods to appear ...
	I1104 12:08:05.621524   86301 system_pods.go:59] 8 kube-system pods found
	I1104 12:08:05.621559   86301 system_pods.go:61] "coredns-7c65d6cfc9-zw2tv" [71ce75a4-f051-4014-9ed0-7b275ea940a9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1104 12:08:05.621579   86301 system_pods.go:61] "etcd-default-k8s-diff-port-036892" [7e46d97c-96b5-4301-b98a-f33b69937411] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1104 12:08:05.621590   86301 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-036892" [483cebd0-7ceb-4bf4-b1f9-e33be61b136e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1104 12:08:05.621599   86301 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-036892" [c2dc4343-177a-4a4c-8a25-47078ec664f1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1104 12:08:05.621609   86301 system_pods.go:61] "kube-proxy-j2srm" [9450cebd-aefb-4f1a-bb99-7d1dab054dd7] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1104 12:08:05.621623   86301 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-036892" [505d8202-5e02-4abd-9eff-163810a91eb2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1104 12:08:05.621637   86301 system_pods.go:61] "metrics-server-6867b74b74-2wl94" [7f7cc9c1-420c-480e-b6b7-1a2027bf2f9d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1104 12:08:05.621646   86301 system_pods.go:61] "storage-provisioner" [18745f89-fc15-4a4c-b68b-7e80cd4f393b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1104 12:08:05.621656   86301 system_pods.go:74] duration metric: took 11.668493ms to wait for pod list to return data ...
	I1104 12:08:05.621669   86301 node_conditions.go:102] verifying NodePressure condition ...
	I1104 12:08:05.626555   86301 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1104 12:08:05.626583   86301 node_conditions.go:123] node cpu capacity is 2
	I1104 12:08:05.626600   86301 node_conditions.go:105] duration metric: took 4.924748ms to run NodePressure ...
	I1104 12:08:05.626620   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1104 12:08:05.899159   86301 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1104 12:08:05.905004   86301 kubeadm.go:739] kubelet initialised
	I1104 12:08:05.905027   86301 kubeadm.go:740] duration metric: took 5.831926ms waiting for restarted kubelet to initialise ...
	I1104 12:08:05.905035   86301 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1104 12:08:05.910301   86301 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-zw2tv" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:05.917517   86301 pod_ready.go:98] node "default-k8s-diff-port-036892" hosting pod "coredns-7c65d6cfc9-zw2tv" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-036892" has status "Ready":"False"
	I1104 12:08:05.917552   86301 pod_ready.go:82] duration metric: took 7.223252ms for pod "coredns-7c65d6cfc9-zw2tv" in "kube-system" namespace to be "Ready" ...
	E1104 12:08:05.917564   86301 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-036892" hosting pod "coredns-7c65d6cfc9-zw2tv" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-036892" has status "Ready":"False"
	I1104 12:08:05.917577   86301 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-036892" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:05.924077   86301 pod_ready.go:98] node "default-k8s-diff-port-036892" hosting pod "etcd-default-k8s-diff-port-036892" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-036892" has status "Ready":"False"
	I1104 12:08:05.924108   86301 pod_ready.go:82] duration metric: took 6.519268ms for pod "etcd-default-k8s-diff-port-036892" in "kube-system" namespace to be "Ready" ...
	E1104 12:08:05.924123   86301 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-036892" hosting pod "etcd-default-k8s-diff-port-036892" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-036892" has status "Ready":"False"
	I1104 12:08:05.924133   86301 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-036892" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:05.929584   86301 pod_ready.go:98] node "default-k8s-diff-port-036892" hosting pod "kube-apiserver-default-k8s-diff-port-036892" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-036892" has status "Ready":"False"
	I1104 12:08:05.929611   86301 pod_ready.go:82] duration metric: took 5.464108ms for pod "kube-apiserver-default-k8s-diff-port-036892" in "kube-system" namespace to be "Ready" ...
	E1104 12:08:05.929625   86301 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-036892" hosting pod "kube-apiserver-default-k8s-diff-port-036892" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-036892" has status "Ready":"False"
	I1104 12:08:05.929640   86301 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-036892" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:06.013629   86301 pod_ready.go:98] node "default-k8s-diff-port-036892" hosting pod "kube-controller-manager-default-k8s-diff-port-036892" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-036892" has status "Ready":"False"
	I1104 12:08:06.013655   86301 pod_ready.go:82] duration metric: took 84.003349ms for pod "kube-controller-manager-default-k8s-diff-port-036892" in "kube-system" namespace to be "Ready" ...
	E1104 12:08:06.013666   86301 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-036892" hosting pod "kube-controller-manager-default-k8s-diff-port-036892" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-036892" has status "Ready":"False"
	I1104 12:08:06.013674   86301 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-j2srm" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:06.413337   86301 pod_ready.go:98] node "default-k8s-diff-port-036892" hosting pod "kube-proxy-j2srm" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-036892" has status "Ready":"False"
	I1104 12:08:06.413362   86301 pod_ready.go:82] duration metric: took 399.676932ms for pod "kube-proxy-j2srm" in "kube-system" namespace to be "Ready" ...
	E1104 12:08:06.413372   86301 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-036892" hosting pod "kube-proxy-j2srm" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-036892" has status "Ready":"False"
	I1104 12:08:06.413379   86301 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-036892" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:06.813910   86301 pod_ready.go:98] node "default-k8s-diff-port-036892" hosting pod "kube-scheduler-default-k8s-diff-port-036892" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-036892" has status "Ready":"False"
	I1104 12:08:06.813948   86301 pod_ready.go:82] duration metric: took 400.558541ms for pod "kube-scheduler-default-k8s-diff-port-036892" in "kube-system" namespace to be "Ready" ...
	E1104 12:08:06.813962   86301 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-036892" hosting pod "kube-scheduler-default-k8s-diff-port-036892" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-036892" has status "Ready":"False"
	I1104 12:08:06.813971   86301 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:07.213603   86301 pod_ready.go:98] node "default-k8s-diff-port-036892" hosting pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-036892" has status "Ready":"False"
	I1104 12:08:07.213632   86301 pod_ready.go:82] duration metric: took 399.645898ms for pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace to be "Ready" ...
	E1104 12:08:07.213642   86301 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-036892" hosting pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-036892" has status "Ready":"False"
	I1104 12:08:07.213650   86301 pod_ready.go:39] duration metric: took 1.308606058s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
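The WaitExtra messages above skip every kube-system pod because the node itself still reports "Ready":"False"; the per-pod Ready checks only become meaningful once that node condition flips. A small, hypothetical client-go sketch of reading the node condition (kubeconfig path taken from the log, everything else illustrative, not minikube's own code):

// Sketch: report whether the node's Ready condition is True.
package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func nodeReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19906-19898/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), "default-k8s-diff-port-036892", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("node Ready:", nodeReady(node))
}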
	I1104 12:08:07.213664   86301 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1104 12:08:07.224946   86301 ops.go:34] apiserver oom_adj: -16
	I1104 12:08:07.224972   86301 kubeadm.go:597] duration metric: took 8.580368331s to restartPrimaryControlPlane
	I1104 12:08:07.224984   86301 kubeadm.go:394] duration metric: took 8.624649305s to StartCluster
	I1104 12:08:07.225005   86301 settings.go:142] acquiring lock: {Name:mk5550833c1f1a4ab4fbb2bf42ff8bd7f6341220 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 12:08:07.225093   86301 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19906-19898/kubeconfig
	I1104 12:08:07.226601   86301 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19906-19898/kubeconfig: {Name:mk164657c178e6abe086acb5c1cadb969cb2b0cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 12:08:07.226848   86301 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.130 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1104 12:08:07.226980   86301 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1104 12:08:07.227075   86301 config.go:182] Loaded profile config "default-k8s-diff-port-036892": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1104 12:08:07.227096   86301 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-036892"
	I1104 12:08:07.227115   86301 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-036892"
	I1104 12:08:07.227110   86301 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-036892"
	W1104 12:08:07.227128   86301 addons.go:243] addon metrics-server should already be in state true
	I1104 12:08:07.227145   86301 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-036892"
	I1104 12:08:07.227161   86301 host.go:66] Checking if "default-k8s-diff-port-036892" exists ...
	I1104 12:08:07.227082   86301 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-036892"
	I1104 12:08:07.227275   86301 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-036892"
	W1104 12:08:07.227291   86301 addons.go:243] addon storage-provisioner should already be in state true
	I1104 12:08:07.227316   86301 host.go:66] Checking if "default-k8s-diff-port-036892" exists ...
	I1104 12:08:07.227494   86301 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:08:07.227529   86301 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:08:07.227592   86301 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:08:07.227620   86301 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:08:07.227634   86301 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:08:07.227655   86301 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:08:07.228583   86301 out.go:177] * Verifying Kubernetes components...
	I1104 12:08:07.229927   86301 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1104 12:08:07.242580   86301 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41275
	I1104 12:08:07.243096   86301 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:08:07.243659   86301 main.go:141] libmachine: Using API Version  1
	I1104 12:08:07.243678   86301 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:08:07.243954   86301 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45813
	I1104 12:08:07.244058   86301 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:08:07.244513   86301 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:08:07.244634   86301 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:08:07.244679   86301 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:08:07.245015   86301 main.go:141] libmachine: Using API Version  1
	I1104 12:08:07.245035   86301 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:08:07.245437   86301 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:08:07.245905   86301 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:08:07.245942   86301 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:08:07.245963   86301 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43217
	I1104 12:08:07.246281   86301 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:08:07.246725   86301 main.go:141] libmachine: Using API Version  1
	I1104 12:08:07.246748   86301 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:08:07.247084   86301 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:08:07.247294   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetState
	I1104 12:08:07.250833   86301 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-036892"
	W1104 12:08:07.250857   86301 addons.go:243] addon default-storageclass should already be in state true
	I1104 12:08:07.250884   86301 host.go:66] Checking if "default-k8s-diff-port-036892" exists ...
	I1104 12:08:07.251243   86301 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:08:07.251285   86301 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:08:07.261670   86301 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34265
	I1104 12:08:07.261736   86301 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36543
	I1104 12:08:07.262154   86301 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:08:07.262283   86301 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:08:07.262803   86301 main.go:141] libmachine: Using API Version  1
	I1104 12:08:07.262821   86301 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:08:07.262916   86301 main.go:141] libmachine: Using API Version  1
	I1104 12:08:07.262927   86301 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:08:07.263218   86301 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:08:07.263282   86301 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:08:07.263411   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetState
	I1104 12:08:07.263457   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetState
	I1104 12:08:07.265067   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .DriverName
	I1104 12:08:07.265574   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .DriverName
	I1104 12:08:07.267307   86301 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1104 12:08:07.267336   86301 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1104 12:08:07.268853   86301 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1104 12:08:07.268874   86301 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1104 12:08:07.268895   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHHostname
	I1104 12:08:07.268976   86301 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1104 12:08:07.268994   86301 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1104 12:08:07.269011   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHHostname
	I1104 12:08:07.271584   86301 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39607
	I1104 12:08:07.272047   86301 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:08:07.272347   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:08:07.272377   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:08:07.272688   86301 main.go:141] libmachine: Using API Version  1
	I1104 12:08:07.272707   86301 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:08:07.272933   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:02:d6", ip: ""} in network mk-default-k8s-diff-port-036892: {Iface:virbr4 ExpiryTime:2024-11-04 13:07:45 +0000 UTC Type:0 Mac:52:54:00:da:02:d6 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:default-k8s-diff-port-036892 Clientid:01:52:54:00:da:02:d6}
	I1104 12:08:07.272959   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined IP address 192.168.72.130 and MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:08:07.272990   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:02:d6", ip: ""} in network mk-default-k8s-diff-port-036892: {Iface:virbr4 ExpiryTime:2024-11-04 13:07:45 +0000 UTC Type:0 Mac:52:54:00:da:02:d6 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:default-k8s-diff-port-036892 Clientid:01:52:54:00:da:02:d6}
	I1104 12:08:07.273007   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined IP address 192.168.72.130 and MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:08:07.273065   86301 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:08:07.273149   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHPort
	I1104 12:08:07.273564   86301 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:08:07.273597   86301 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:08:07.273765   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHPort
	I1104 12:08:07.273767   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHKeyPath
	I1104 12:08:07.273925   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHKeyPath
	I1104 12:08:07.273966   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHUsername
	I1104 12:08:07.274049   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHUsername
	I1104 12:08:07.274098   86301 sshutil.go:53] new ssh client: &{IP:192.168.72.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/default-k8s-diff-port-036892/id_rsa Username:docker}
	I1104 12:08:07.274179   86301 sshutil.go:53] new ssh client: &{IP:192.168.72.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/default-k8s-diff-port-036892/id_rsa Username:docker}
	I1104 12:08:07.288474   86301 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36605
	I1104 12:08:07.288955   86301 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:08:07.289555   86301 main.go:141] libmachine: Using API Version  1
	I1104 12:08:07.289580   86301 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:08:07.289915   86301 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:08:07.290128   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetState
	I1104 12:08:07.291744   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .DriverName
	I1104 12:08:07.291944   86301 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1104 12:08:07.291958   86301 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1104 12:08:07.291972   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHHostname
	I1104 12:08:07.294477   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:08:07.294793   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:02:d6", ip: ""} in network mk-default-k8s-diff-port-036892: {Iface:virbr4 ExpiryTime:2024-11-04 13:07:45 +0000 UTC Type:0 Mac:52:54:00:da:02:d6 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:default-k8s-diff-port-036892 Clientid:01:52:54:00:da:02:d6}
	I1104 12:08:07.294824   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined IP address 192.168.72.130 and MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:08:07.295009   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHPort
	I1104 12:08:07.295178   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHKeyPath
	I1104 12:08:07.295326   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHUsername
	I1104 12:08:07.295444   86301 sshutil.go:53] new ssh client: &{IP:192.168.72.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/default-k8s-diff-port-036892/id_rsa Username:docker}
	I1104 12:08:07.430295   86301 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1104 12:08:07.461396   86301 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-036892" to be "Ready" ...
	I1104 12:08:07.523117   86301 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1104 12:08:07.542339   86301 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1104 12:08:07.542361   86301 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1104 12:08:07.566207   86301 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1104 12:08:07.566232   86301 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1104 12:08:07.580871   86301 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1104 12:08:07.596309   86301 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1104 12:08:07.596338   86301 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1104 12:08:07.626662   86301 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1104 12:08:08.553268   86301 main.go:141] libmachine: Making call to close driver server
	I1104 12:08:08.553295   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .Close
	I1104 12:08:08.553315   86301 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.030165078s)
	I1104 12:08:08.553352   86301 main.go:141] libmachine: Making call to close driver server
	I1104 12:08:08.553373   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .Close
	I1104 12:08:08.553656   86301 main.go:141] libmachine: Successfully made call to close driver server
	I1104 12:08:08.553673   86301 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 12:08:08.553683   86301 main.go:141] libmachine: Making call to close driver server
	I1104 12:08:08.553692   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .Close
	I1104 12:08:08.553739   86301 main.go:141] libmachine: Successfully made call to close driver server
	I1104 12:08:08.553759   86301 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 12:08:08.553767   86301 main.go:141] libmachine: Making call to close driver server
	I1104 12:08:08.553780   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .Close
	I1104 12:08:08.553925   86301 main.go:141] libmachine: Successfully made call to close driver server
	I1104 12:08:08.553942   86301 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 12:08:08.554106   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | Closing plugin on server side
	I1104 12:08:08.554138   86301 main.go:141] libmachine: Successfully made call to close driver server
	I1104 12:08:08.554155   86301 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 12:08:08.559615   86301 main.go:141] libmachine: Making call to close driver server
	I1104 12:08:08.559635   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .Close
	I1104 12:08:08.559944   86301 main.go:141] libmachine: Successfully made call to close driver server
	I1104 12:08:08.559961   86301 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 12:08:08.563833   86301 main.go:141] libmachine: Making call to close driver server
	I1104 12:08:08.563848   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .Close
	I1104 12:08:08.564072   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | Closing plugin on server side
	I1104 12:08:08.564636   86301 main.go:141] libmachine: Successfully made call to close driver server
	I1104 12:08:08.564653   86301 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 12:08:08.564666   86301 main.go:141] libmachine: Making call to close driver server
	I1104 12:08:08.564671   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .Close
	I1104 12:08:08.564894   86301 main.go:141] libmachine: Successfully made call to close driver server
	I1104 12:08:08.564906   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | Closing plugin on server side
	I1104 12:08:08.564912   86301 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 12:08:08.564940   86301 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-036892"
	I1104 12:08:08.566838   86301 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1104 12:08:08.568165   86301 addons.go:510] duration metric: took 1.341200959s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1104 12:08:09.465405   86301 node_ready.go:53] node "default-k8s-diff-port-036892" has status "Ready":"False"
	I1104 12:08:06.350759   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:08.850563   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:10.851315   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:07.683582   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:07.684143   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | unable to find current IP address of domain old-k8s-version-589257 in network mk-old-k8s-version-589257
	I1104 12:08:07.684172   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | I1104 12:08:07.684093   87367 retry.go:31] will retry after 2.880856513s: waiting for machine to come up
	I1104 12:08:10.566197   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:10.566657   86402 main.go:141] libmachine: (old-k8s-version-589257) Found IP for machine: 192.168.50.180
	I1104 12:08:10.566675   86402 main.go:141] libmachine: (old-k8s-version-589257) Reserving static IP address...
	I1104 12:08:10.566687   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has current primary IP address 192.168.50.180 and MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:10.567139   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | found host DHCP lease matching {name: "old-k8s-version-589257", mac: "52:54:00:6b:6c:11", ip: "192.168.50.180"} in network mk-old-k8s-version-589257: {Iface:virbr2 ExpiryTime:2024-11-04 13:08:03 +0000 UTC Type:0 Mac:52:54:00:6b:6c:11 Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:old-k8s-version-589257 Clientid:01:52:54:00:6b:6c:11}
	I1104 12:08:10.567166   86402 main.go:141] libmachine: (old-k8s-version-589257) Reserved static IP address: 192.168.50.180
	I1104 12:08:10.567186   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | skip adding static IP to network mk-old-k8s-version-589257 - found existing host DHCP lease matching {name: "old-k8s-version-589257", mac: "52:54:00:6b:6c:11", ip: "192.168.50.180"}
	I1104 12:08:10.567199   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | Getting to WaitForSSH function...
	I1104 12:08:10.567213   86402 main.go:141] libmachine: (old-k8s-version-589257) Waiting for SSH to be available...
	I1104 12:08:10.569500   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:10.569816   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:6c:11", ip: ""} in network mk-old-k8s-version-589257: {Iface:virbr2 ExpiryTime:2024-11-04 13:08:03 +0000 UTC Type:0 Mac:52:54:00:6b:6c:11 Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:old-k8s-version-589257 Clientid:01:52:54:00:6b:6c:11}
	I1104 12:08:10.569846   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined IP address 192.168.50.180 and MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:10.569982   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | Using SSH client type: external
	I1104 12:08:10.570004   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | Using SSH private key: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/old-k8s-version-589257/id_rsa (-rw-------)
	I1104 12:08:10.570025   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.180 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19906-19898/.minikube/machines/old-k8s-version-589257/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1104 12:08:10.570033   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | About to run SSH command:
	I1104 12:08:10.570041   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | exit 0
	I1104 12:08:10.697114   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | SSH cmd err, output: <nil>: 
	I1104 12:08:10.697552   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetConfigRaw
	I1104 12:08:10.698196   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetIP
	I1104 12:08:10.700982   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:10.701369   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:6c:11", ip: ""} in network mk-old-k8s-version-589257: {Iface:virbr2 ExpiryTime:2024-11-04 13:08:03 +0000 UTC Type:0 Mac:52:54:00:6b:6c:11 Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:old-k8s-version-589257 Clientid:01:52:54:00:6b:6c:11}
	I1104 12:08:10.701403   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined IP address 192.168.50.180 and MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:10.701649   86402 profile.go:143] Saving config to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/old-k8s-version-589257/config.json ...
	I1104 12:08:10.701875   86402 machine.go:93] provisionDockerMachine start ...
	I1104 12:08:10.701898   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .DriverName
	I1104 12:08:10.702099   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHHostname
	I1104 12:08:10.704605   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:10.704977   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:6c:11", ip: ""} in network mk-old-k8s-version-589257: {Iface:virbr2 ExpiryTime:2024-11-04 13:08:03 +0000 UTC Type:0 Mac:52:54:00:6b:6c:11 Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:old-k8s-version-589257 Clientid:01:52:54:00:6b:6c:11}
	I1104 12:08:10.705006   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined IP address 192.168.50.180 and MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:10.705151   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHPort
	I1104 12:08:10.705342   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHKeyPath
	I1104 12:08:10.705486   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHKeyPath
	I1104 12:08:10.705602   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHUsername
	I1104 12:08:10.705703   86402 main.go:141] libmachine: Using SSH client type: native
	I1104 12:08:10.705907   86402 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.180 22 <nil> <nil>}
	I1104 12:08:10.705918   86402 main.go:141] libmachine: About to run SSH command:
	hostname
	I1104 12:08:10.813494   86402 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1104 12:08:10.813544   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetMachineName
	I1104 12:08:10.813816   86402 buildroot.go:166] provisioning hostname "old-k8s-version-589257"
	I1104 12:08:10.813847   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetMachineName
	I1104 12:08:10.814034   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHHostname
	I1104 12:08:10.816782   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:10.817186   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:6c:11", ip: ""} in network mk-old-k8s-version-589257: {Iface:virbr2 ExpiryTime:2024-11-04 13:08:03 +0000 UTC Type:0 Mac:52:54:00:6b:6c:11 Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:old-k8s-version-589257 Clientid:01:52:54:00:6b:6c:11}
	I1104 12:08:10.817245   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined IP address 192.168.50.180 and MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:10.817394   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHPort
	I1104 12:08:10.817589   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHKeyPath
	I1104 12:08:10.817760   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHKeyPath
	I1104 12:08:10.817882   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHUsername
	I1104 12:08:10.818027   86402 main.go:141] libmachine: Using SSH client type: native
	I1104 12:08:10.818227   86402 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.180 22 <nil> <nil>}
	I1104 12:08:10.818245   86402 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-589257 && echo "old-k8s-version-589257" | sudo tee /etc/hostname
	I1104 12:08:10.940779   86402 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-589257
	
	I1104 12:08:10.940803   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHHostname
	I1104 12:08:10.943694   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:10.944062   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:6c:11", ip: ""} in network mk-old-k8s-version-589257: {Iface:virbr2 ExpiryTime:2024-11-04 13:08:03 +0000 UTC Type:0 Mac:52:54:00:6b:6c:11 Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:old-k8s-version-589257 Clientid:01:52:54:00:6b:6c:11}
	I1104 12:08:10.944090   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined IP address 192.168.50.180 and MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:10.944263   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHPort
	I1104 12:08:10.944452   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHKeyPath
	I1104 12:08:10.944627   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHKeyPath
	I1104 12:08:10.944767   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHUsername
	I1104 12:08:10.944910   86402 main.go:141] libmachine: Using SSH client type: native
	I1104 12:08:10.945093   86402 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.180 22 <nil> <nil>}
	I1104 12:08:10.945110   86402 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-589257' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-589257/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-589257' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1104 12:08:11.061924   86402 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1104 12:08:11.061966   86402 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19906-19898/.minikube CaCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19906-19898/.minikube}
	I1104 12:08:11.062007   86402 buildroot.go:174] setting up certificates
	I1104 12:08:11.062021   86402 provision.go:84] configureAuth start
	I1104 12:08:11.062033   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetMachineName
	I1104 12:08:11.062293   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetIP
	I1104 12:08:11.065165   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:11.065559   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:6c:11", ip: ""} in network mk-old-k8s-version-589257: {Iface:virbr2 ExpiryTime:2024-11-04 13:08:03 +0000 UTC Type:0 Mac:52:54:00:6b:6c:11 Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:old-k8s-version-589257 Clientid:01:52:54:00:6b:6c:11}
	I1104 12:08:11.065594   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined IP address 192.168.50.180 and MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:11.065834   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHHostname
	I1104 12:08:11.068257   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:11.068620   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:6c:11", ip: ""} in network mk-old-k8s-version-589257: {Iface:virbr2 ExpiryTime:2024-11-04 13:08:03 +0000 UTC Type:0 Mac:52:54:00:6b:6c:11 Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:old-k8s-version-589257 Clientid:01:52:54:00:6b:6c:11}
	I1104 12:08:11.068646   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined IP address 192.168.50.180 and MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:11.068787   86402 provision.go:143] copyHostCerts
	I1104 12:08:11.068842   86402 exec_runner.go:144] found /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem, removing ...
	I1104 12:08:11.068854   86402 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem
	I1104 12:08:11.068904   86402 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem (1078 bytes)
	I1104 12:08:11.068993   86402 exec_runner.go:144] found /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem, removing ...
	I1104 12:08:11.069000   86402 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem
	I1104 12:08:11.069019   86402 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem (1123 bytes)
	I1104 12:08:11.069072   86402 exec_runner.go:144] found /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem, removing ...
	I1104 12:08:11.069079   86402 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem
	I1104 12:08:11.069097   86402 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem (1679 bytes)
	I1104 12:08:11.069191   86402 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-589257 san=[127.0.0.1 192.168.50.180 localhost minikube old-k8s-version-589257]
	I1104 12:08:11.271880   86402 provision.go:177] copyRemoteCerts
	I1104 12:08:11.271946   86402 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1104 12:08:11.271988   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHHostname
	I1104 12:08:11.275023   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:11.275396   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:6c:11", ip: ""} in network mk-old-k8s-version-589257: {Iface:virbr2 ExpiryTime:2024-11-04 13:08:03 +0000 UTC Type:0 Mac:52:54:00:6b:6c:11 Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:old-k8s-version-589257 Clientid:01:52:54:00:6b:6c:11}
	I1104 12:08:11.275428   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined IP address 192.168.50.180 and MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:11.275701   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHPort
	I1104 12:08:11.275905   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHKeyPath
	I1104 12:08:11.276048   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHUsername
	I1104 12:08:11.276182   86402 sshutil.go:53] new ssh client: &{IP:192.168.50.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/old-k8s-version-589257/id_rsa Username:docker}
	I1104 12:08:11.362968   86402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1104 12:08:11.388401   86402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1104 12:08:11.417180   86402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1104 12:08:11.439810   86402 provision.go:87] duration metric: took 377.778325ms to configureAuth
	I1104 12:08:11.439841   86402 buildroot.go:189] setting minikube options for container-runtime
	I1104 12:08:11.440043   86402 config.go:182] Loaded profile config "old-k8s-version-589257": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1104 12:08:11.440110   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHHostname
	I1104 12:08:11.442476   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:11.442783   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:6c:11", ip: ""} in network mk-old-k8s-version-589257: {Iface:virbr2 ExpiryTime:2024-11-04 13:08:03 +0000 UTC Type:0 Mac:52:54:00:6b:6c:11 Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:old-k8s-version-589257 Clientid:01:52:54:00:6b:6c:11}
	I1104 12:08:11.442818   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined IP address 192.168.50.180 and MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:11.443005   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHPort
	I1104 12:08:11.443204   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHKeyPath
	I1104 12:08:11.443329   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHKeyPath
	I1104 12:08:11.443492   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHUsername
	I1104 12:08:11.443665   86402 main.go:141] libmachine: Using SSH client type: native
	I1104 12:08:11.443822   86402 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.180 22 <nil> <nil>}
	I1104 12:08:11.443837   86402 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1104 12:08:11.662212   86402 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1104 12:08:11.662241   86402 machine.go:96] duration metric: took 960.351823ms to provisionDockerMachine
	I1104 12:08:11.662256   86402 start.go:293] postStartSetup for "old-k8s-version-589257" (driver="kvm2")
	I1104 12:08:11.662269   86402 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1104 12:08:11.662289   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .DriverName
	I1104 12:08:11.662613   86402 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1104 12:08:11.662642   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHHostname
	I1104 12:08:11.665028   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:11.665391   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:6c:11", ip: ""} in network mk-old-k8s-version-589257: {Iface:virbr2 ExpiryTime:2024-11-04 13:08:03 +0000 UTC Type:0 Mac:52:54:00:6b:6c:11 Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:old-k8s-version-589257 Clientid:01:52:54:00:6b:6c:11}
	I1104 12:08:11.665420   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined IP address 192.168.50.180 and MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:11.665598   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHPort
	I1104 12:08:11.665776   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHKeyPath
	I1104 12:08:11.665942   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHUsername
	I1104 12:08:11.666064   86402 sshutil.go:53] new ssh client: &{IP:192.168.50.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/old-k8s-version-589257/id_rsa Username:docker}
	I1104 12:08:11.889727   85500 start.go:364] duration metric: took 49.147423989s to acquireMachinesLock for "no-preload-908370"
	I1104 12:08:11.889796   85500 start.go:96] Skipping create...Using existing machine configuration
	I1104 12:08:11.889806   85500 fix.go:54] fixHost starting: 
	I1104 12:08:11.890201   85500 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:08:11.890229   85500 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:08:11.906978   85500 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40931
	I1104 12:08:11.907524   85500 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:08:11.907916   85500 main.go:141] libmachine: Using API Version  1
	I1104 12:08:11.907939   85500 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:08:11.908319   85500 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:08:11.908518   85500 main.go:141] libmachine: (no-preload-908370) Calling .DriverName
	I1104 12:08:11.908672   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetState
	I1104 12:08:11.910182   85500 fix.go:112] recreateIfNeeded on no-preload-908370: state=Stopped err=<nil>
	I1104 12:08:11.910224   85500 main.go:141] libmachine: (no-preload-908370) Calling .DriverName
	W1104 12:08:11.910353   85500 fix.go:138] unexpected machine state, will restart: <nil>
	I1104 12:08:11.912457   85500 out.go:177] * Restarting existing kvm2 VM for "no-preload-908370" ...
	I1104 12:08:11.747199   86402 ssh_runner.go:195] Run: cat /etc/os-release
	I1104 12:08:11.751253   86402 info.go:137] Remote host: Buildroot 2023.02.9
	I1104 12:08:11.751279   86402 filesync.go:126] Scanning /home/jenkins/minikube-integration/19906-19898/.minikube/addons for local assets ...
	I1104 12:08:11.751356   86402 filesync.go:126] Scanning /home/jenkins/minikube-integration/19906-19898/.minikube/files for local assets ...
	I1104 12:08:11.751465   86402 filesync.go:149] local asset: /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem -> 272182.pem in /etc/ssl/certs
	I1104 12:08:11.751591   86402 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1104 12:08:11.760409   86402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem --> /etc/ssl/certs/272182.pem (1708 bytes)
	I1104 12:08:11.781890   86402 start.go:296] duration metric: took 119.620604ms for postStartSetup
	I1104 12:08:11.781934   86402 fix.go:56] duration metric: took 19.207938878s for fixHost
	I1104 12:08:11.781960   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHHostname
	I1104 12:08:11.784767   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:11.785058   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:6c:11", ip: ""} in network mk-old-k8s-version-589257: {Iface:virbr2 ExpiryTime:2024-11-04 13:08:03 +0000 UTC Type:0 Mac:52:54:00:6b:6c:11 Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:old-k8s-version-589257 Clientid:01:52:54:00:6b:6c:11}
	I1104 12:08:11.785084   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined IP address 192.168.50.180 and MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:11.785300   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHPort
	I1104 12:08:11.785500   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHKeyPath
	I1104 12:08:11.785644   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHKeyPath
	I1104 12:08:11.785750   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHUsername
	I1104 12:08:11.785877   86402 main.go:141] libmachine: Using SSH client type: native
	I1104 12:08:11.786047   86402 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.180 22 <nil> <nil>}
	I1104 12:08:11.786059   86402 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1104 12:08:11.889540   86402 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730722091.863405264
	
	I1104 12:08:11.889568   86402 fix.go:216] guest clock: 1730722091.863405264
	I1104 12:08:11.889578   86402 fix.go:229] Guest: 2024-11-04 12:08:11.863405264 +0000 UTC Remote: 2024-11-04 12:08:11.781939603 +0000 UTC m=+230.132769870 (delta=81.465661ms)
	I1104 12:08:11.889631   86402 fix.go:200] guest clock delta is within tolerance: 81.465661ms
	I1104 12:08:11.889641   86402 start.go:83] releasing machines lock for "old-k8s-version-589257", held for 19.315682928s
	I1104 12:08:11.889677   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .DriverName
	I1104 12:08:11.889975   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetIP
	I1104 12:08:11.892654   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:11.892982   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:6c:11", ip: ""} in network mk-old-k8s-version-589257: {Iface:virbr2 ExpiryTime:2024-11-04 13:08:03 +0000 UTC Type:0 Mac:52:54:00:6b:6c:11 Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:old-k8s-version-589257 Clientid:01:52:54:00:6b:6c:11}
	I1104 12:08:11.893012   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined IP address 192.168.50.180 and MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:11.893212   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .DriverName
	I1104 12:08:11.893706   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .DriverName
	I1104 12:08:11.893888   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .DriverName
	I1104 12:08:11.893989   86402 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1104 12:08:11.894031   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHHostname
	I1104 12:08:11.894074   86402 ssh_runner.go:195] Run: cat /version.json
	I1104 12:08:11.894094   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHHostname
	I1104 12:08:11.896812   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:11.897020   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:11.897192   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:6c:11", ip: ""} in network mk-old-k8s-version-589257: {Iface:virbr2 ExpiryTime:2024-11-04 13:08:03 +0000 UTC Type:0 Mac:52:54:00:6b:6c:11 Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:old-k8s-version-589257 Clientid:01:52:54:00:6b:6c:11}
	I1104 12:08:11.897217   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined IP address 192.168.50.180 and MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:11.897454   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:6c:11", ip: ""} in network mk-old-k8s-version-589257: {Iface:virbr2 ExpiryTime:2024-11-04 13:08:03 +0000 UTC Type:0 Mac:52:54:00:6b:6c:11 Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:old-k8s-version-589257 Clientid:01:52:54:00:6b:6c:11}
	I1104 12:08:11.897478   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined IP address 192.168.50.180 and MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:11.897492   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHPort
	I1104 12:08:11.897631   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHPort
	I1104 12:08:11.897646   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHKeyPath
	I1104 12:08:11.897778   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHKeyPath
	I1104 12:08:11.897911   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHUsername
	I1104 12:08:11.897989   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHUsername
	I1104 12:08:11.898083   86402 sshutil.go:53] new ssh client: &{IP:192.168.50.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/old-k8s-version-589257/id_rsa Username:docker}
	I1104 12:08:11.898120   86402 sshutil.go:53] new ssh client: &{IP:192.168.50.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/old-k8s-version-589257/id_rsa Username:docker}
	I1104 12:08:11.998704   86402 ssh_runner.go:195] Run: systemctl --version
	I1104 12:08:12.004820   86402 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1104 12:08:12.148742   86402 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1104 12:08:12.155015   86402 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1104 12:08:12.155089   86402 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1104 12:08:12.171054   86402 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1104 12:08:12.171085   86402 start.go:495] detecting cgroup driver to use...
	I1104 12:08:12.171154   86402 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1104 12:08:12.189977   86402 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1104 12:08:12.204622   86402 docker.go:217] disabling cri-docker service (if available) ...
	I1104 12:08:12.204679   86402 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1104 12:08:12.218808   86402 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1104 12:08:12.232276   86402 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1104 12:08:12.341220   86402 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1104 12:08:12.512813   86402 docker.go:233] disabling docker service ...
	I1104 12:08:12.512893   86402 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1104 12:08:12.526784   86402 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1104 12:08:12.539774   86402 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1104 12:08:12.666162   86402 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1104 12:08:12.788317   86402 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1104 12:08:12.802703   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1104 12:08:12.820915   86402 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1104 12:08:12.820985   86402 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:08:12.831311   86402 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1104 12:08:12.831400   86402 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:08:12.841625   86402 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:08:12.852548   86402 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:08:12.864683   86402 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1104 12:08:12.876794   86402 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1104 12:08:12.886878   86402 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1104 12:08:12.886943   86402 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1104 12:08:12.902476   86402 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1104 12:08:12.914565   86402 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1104 12:08:13.044125   86402 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1104 12:08:13.149816   86402 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1104 12:08:13.149893   86402 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1104 12:08:13.154639   86402 start.go:563] Will wait 60s for crictl version
	I1104 12:08:13.154706   86402 ssh_runner.go:195] Run: which crictl
	I1104 12:08:13.158788   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1104 12:08:13.200038   86402 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1104 12:08:13.200117   86402 ssh_runner.go:195] Run: crio --version
	I1104 12:08:13.233501   86402 ssh_runner.go:195] Run: crio --version
	I1104 12:08:13.264558   86402 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1104 12:08:11.913730   85500 main.go:141] libmachine: (no-preload-908370) Calling .Start
	I1104 12:08:11.913915   85500 main.go:141] libmachine: (no-preload-908370) Ensuring networks are active...
	I1104 12:08:11.914653   85500 main.go:141] libmachine: (no-preload-908370) Ensuring network default is active
	I1104 12:08:11.915111   85500 main.go:141] libmachine: (no-preload-908370) Ensuring network mk-no-preload-908370 is active
	I1104 12:08:11.915575   85500 main.go:141] libmachine: (no-preload-908370) Getting domain xml...
	I1104 12:08:11.916375   85500 main.go:141] libmachine: (no-preload-908370) Creating domain...
	I1104 12:08:13.289793   85500 main.go:141] libmachine: (no-preload-908370) Waiting to get IP...
	I1104 12:08:13.290880   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:13.291498   85500 main.go:141] libmachine: (no-preload-908370) DBG | unable to find current IP address of domain no-preload-908370 in network mk-no-preload-908370
	I1104 12:08:13.291631   85500 main.go:141] libmachine: (no-preload-908370) DBG | I1104 12:08:13.291463   87562 retry.go:31] will retry after 277.090671ms: waiting for machine to come up
	I1104 12:08:13.570141   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:13.570726   85500 main.go:141] libmachine: (no-preload-908370) DBG | unable to find current IP address of domain no-preload-908370 in network mk-no-preload-908370
	I1104 12:08:13.570749   85500 main.go:141] libmachine: (no-preload-908370) DBG | I1104 12:08:13.570623   87562 retry.go:31] will retry after 259.985785ms: waiting for machine to come up
	I1104 12:08:13.832172   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:13.832855   85500 main.go:141] libmachine: (no-preload-908370) DBG | unable to find current IP address of domain no-preload-908370 in network mk-no-preload-908370
	I1104 12:08:13.832898   85500 main.go:141] libmachine: (no-preload-908370) DBG | I1104 12:08:13.832809   87562 retry.go:31] will retry after 473.426945ms: waiting for machine to come up
	I1104 12:08:14.308725   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:14.309273   85500 main.go:141] libmachine: (no-preload-908370) DBG | unable to find current IP address of domain no-preload-908370 in network mk-no-preload-908370
	I1104 12:08:14.309302   85500 main.go:141] libmachine: (no-preload-908370) DBG | I1104 12:08:14.309249   87562 retry.go:31] will retry after 417.466134ms: waiting for machine to come up
	I1104 12:08:14.727927   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:14.728388   85500 main.go:141] libmachine: (no-preload-908370) DBG | unable to find current IP address of domain no-preload-908370 in network mk-no-preload-908370
	I1104 12:08:14.728413   85500 main.go:141] libmachine: (no-preload-908370) DBG | I1104 12:08:14.728366   87562 retry.go:31] will retry after 734.894622ms: waiting for machine to come up
	I1104 12:08:11.465894   86301 node_ready.go:53] node "default-k8s-diff-port-036892" has status "Ready":"False"
	I1104 12:08:13.966921   86301 node_ready.go:53] node "default-k8s-diff-port-036892" has status "Ready":"False"
	I1104 12:08:14.465523   86301 node_ready.go:49] node "default-k8s-diff-port-036892" has status "Ready":"True"
	I1104 12:08:14.465545   86301 node_ready.go:38] duration metric: took 7.004111382s for node "default-k8s-diff-port-036892" to be "Ready" ...
	I1104 12:08:14.465554   86301 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1104 12:08:14.473334   86301 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-zw2tv" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:14.482486   86301 pod_ready.go:93] pod "coredns-7c65d6cfc9-zw2tv" in "kube-system" namespace has status "Ready":"True"
	I1104 12:08:14.482508   86301 pod_ready.go:82] duration metric: took 9.145998ms for pod "coredns-7c65d6cfc9-zw2tv" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:14.482518   86301 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-036892" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:13.351753   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:15.851818   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:13.266087   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetIP
	I1104 12:08:13.269660   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:13.270200   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:6c:11", ip: ""} in network mk-old-k8s-version-589257: {Iface:virbr2 ExpiryTime:2024-11-04 13:08:03 +0000 UTC Type:0 Mac:52:54:00:6b:6c:11 Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:old-k8s-version-589257 Clientid:01:52:54:00:6b:6c:11}
	I1104 12:08:13.270233   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined IP address 192.168.50.180 and MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:13.270520   86402 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1104 12:08:13.274751   86402 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1104 12:08:13.290348   86402 kubeadm.go:883] updating cluster {Name:old-k8s-version-589257 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-589257 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.180 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1104 12:08:13.290483   86402 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1104 12:08:13.290547   86402 ssh_runner.go:195] Run: sudo crictl images --output json
	I1104 12:08:13.340338   86402 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1104 12:08:13.340426   86402 ssh_runner.go:195] Run: which lz4
	I1104 12:08:13.345147   86402 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1104 12:08:13.349792   86402 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1104 12:08:13.349872   86402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1104 12:08:14.842720   86402 crio.go:462] duration metric: took 1.497615031s to copy over tarball
	I1104 12:08:14.842791   86402 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1104 12:08:15.464914   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:15.465510   85500 main.go:141] libmachine: (no-preload-908370) DBG | unable to find current IP address of domain no-preload-908370 in network mk-no-preload-908370
	I1104 12:08:15.465541   85500 main.go:141] libmachine: (no-preload-908370) DBG | I1104 12:08:15.465478   87562 retry.go:31] will retry after 578.01955ms: waiting for machine to come up
	I1104 12:08:16.044861   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:16.045354   85500 main.go:141] libmachine: (no-preload-908370) DBG | unable to find current IP address of domain no-preload-908370 in network mk-no-preload-908370
	I1104 12:08:16.045380   85500 main.go:141] libmachine: (no-preload-908370) DBG | I1104 12:08:16.045313   87562 retry.go:31] will retry after 1.136035438s: waiting for machine to come up
	I1104 12:08:17.182829   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:17.183255   85500 main.go:141] libmachine: (no-preload-908370) DBG | unable to find current IP address of domain no-preload-908370 in network mk-no-preload-908370
	I1104 12:08:17.183282   85500 main.go:141] libmachine: (no-preload-908370) DBG | I1104 12:08:17.183233   87562 retry.go:31] will retry after 1.070971462s: waiting for machine to come up
	I1104 12:08:18.255532   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:18.256051   85500 main.go:141] libmachine: (no-preload-908370) DBG | unable to find current IP address of domain no-preload-908370 in network mk-no-preload-908370
	I1104 12:08:18.256078   85500 main.go:141] libmachine: (no-preload-908370) DBG | I1104 12:08:18.256007   87562 retry.go:31] will retry after 1.542250267s: waiting for machine to come up
	I1104 12:08:19.800851   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:19.801298   85500 main.go:141] libmachine: (no-preload-908370) DBG | unable to find current IP address of domain no-preload-908370 in network mk-no-preload-908370
	I1104 12:08:19.801324   85500 main.go:141] libmachine: (no-preload-908370) DBG | I1104 12:08:19.801276   87562 retry.go:31] will retry after 2.127250885s: waiting for machine to come up
	I1104 12:08:16.489394   86301 pod_ready.go:103] pod "etcd-default-k8s-diff-port-036892" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:16.994480   86301 pod_ready.go:93] pod "etcd-default-k8s-diff-port-036892" in "kube-system" namespace has status "Ready":"True"
	I1104 12:08:16.994502   86301 pod_ready.go:82] duration metric: took 2.511977586s for pod "etcd-default-k8s-diff-port-036892" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:16.994512   86301 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-036892" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:17.502472   86301 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-036892" in "kube-system" namespace has status "Ready":"True"
	I1104 12:08:17.502499   86301 pod_ready.go:82] duration metric: took 507.979218ms for pod "kube-apiserver-default-k8s-diff-port-036892" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:17.502513   86301 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-036892" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:17.507763   86301 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-036892" in "kube-system" namespace has status "Ready":"True"
	I1104 12:08:17.507785   86301 pod_ready.go:82] duration metric: took 5.264185ms for pod "kube-controller-manager-default-k8s-diff-port-036892" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:17.507795   86301 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-j2srm" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:17.514017   86301 pod_ready.go:93] pod "kube-proxy-j2srm" in "kube-system" namespace has status "Ready":"True"
	I1104 12:08:17.514045   86301 pod_ready.go:82] duration metric: took 6.241799ms for pod "kube-proxy-j2srm" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:17.514058   86301 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-036892" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:19.683083   86301 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-036892" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:20.049735   86301 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-036892" in "kube-system" namespace has status "Ready":"True"
	I1104 12:08:20.049759   86301 pod_ready.go:82] duration metric: took 2.535691306s for pod "kube-scheduler-default-k8s-diff-port-036892" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:20.049772   86301 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:18.749494   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:20.853448   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:17.837381   86402 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.994557811s)
	I1104 12:08:17.837410   86402 crio.go:469] duration metric: took 2.994665886s to extract the tarball
	I1104 12:08:17.837420   86402 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1104 12:08:17.882418   86402 ssh_runner.go:195] Run: sudo crictl images --output json
	I1104 12:08:17.917035   86402 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1104 12:08:17.917064   86402 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1104 12:08:17.917195   86402 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1104 12:08:17.917277   86402 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1104 12:08:17.917169   86402 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1104 12:08:17.917164   86402 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1104 12:08:17.917150   86402 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1104 12:08:17.917277   86402 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1104 12:08:17.917283   86402 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1104 12:08:17.917254   86402 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1104 12:08:17.918929   86402 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1104 12:08:17.918943   86402 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1104 12:08:17.918929   86402 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1104 12:08:17.918929   86402 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1104 12:08:17.918930   86402 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1104 12:08:17.918930   86402 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1104 12:08:17.919014   86402 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1104 12:08:17.919025   86402 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1104 12:08:18.070119   86402 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1104 12:08:18.076604   86402 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1104 12:08:18.078712   86402 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1104 12:08:18.083777   86402 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1104 12:08:18.087827   86402 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1104 12:08:18.092838   86402 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1104 12:08:18.110359   86402 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1104 12:08:18.165523   86402 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1104 12:08:18.165569   86402 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1104 12:08:18.165617   86402 ssh_runner.go:195] Run: which crictl
	I1104 12:08:18.213723   86402 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1104 12:08:18.213784   86402 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1104 12:08:18.213833   86402 ssh_runner.go:195] Run: which crictl
	I1104 12:08:18.252171   86402 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1104 12:08:18.252221   86402 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1104 12:08:18.252270   86402 ssh_runner.go:195] Run: which crictl
	I1104 12:08:18.256482   86402 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1104 12:08:18.256522   86402 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1104 12:08:18.256567   86402 ssh_runner.go:195] Run: which crictl
	I1104 12:08:18.256606   86402 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1104 12:08:18.256564   86402 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1104 12:08:18.256631   86402 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1104 12:08:18.256632   86402 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1104 12:08:18.256632   86402 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1104 12:08:18.256690   86402 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1104 12:08:18.256657   86402 ssh_runner.go:195] Run: which crictl
	I1104 12:08:18.256703   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1104 12:08:18.256691   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1104 12:08:18.256738   86402 ssh_runner.go:195] Run: which crictl
	I1104 12:08:18.256658   86402 ssh_runner.go:195] Run: which crictl
	I1104 12:08:18.264837   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1104 12:08:18.265836   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1104 12:08:18.349896   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1104 12:08:18.349935   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1104 12:08:18.350014   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1104 12:08:18.350077   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1104 12:08:18.368533   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1104 12:08:18.371302   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1104 12:08:18.371393   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1104 12:08:18.496042   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1104 12:08:18.496121   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1104 12:08:18.509196   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1104 12:08:18.509339   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1104 12:08:18.509247   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1104 12:08:18.509348   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1104 12:08:18.513943   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1104 12:08:18.645867   86402 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1104 12:08:18.649173   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1104 12:08:18.649276   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1104 12:08:18.656159   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1104 12:08:18.656193   86402 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1104 12:08:18.660309   86402 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1104 12:08:18.660384   86402 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1104 12:08:18.719995   86402 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1104 12:08:18.720033   86402 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1104 12:08:18.728304   86402 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1104 12:08:18.867880   86402 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1104 12:08:19.009342   86402 cache_images.go:92] duration metric: took 1.092257593s to LoadCachedImages
	W1104 12:08:19.009448   86402 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	I1104 12:08:19.009469   86402 kubeadm.go:934] updating node { 192.168.50.180 8443 v1.20.0 crio true true} ...
	I1104 12:08:19.009590   86402 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-589257 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.180
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-589257 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1104 12:08:19.009671   86402 ssh_runner.go:195] Run: crio config
	I1104 12:08:19.054831   86402 cni.go:84] Creating CNI manager for ""
	I1104 12:08:19.054850   86402 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1104 12:08:19.054863   86402 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1104 12:08:19.054880   86402 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.180 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-589257 NodeName:old-k8s-version-589257 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.180"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.180 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1104 12:08:19.055049   86402 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.180
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-589257"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.180
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.180"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1104 12:08:19.055125   86402 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1104 12:08:19.065804   86402 binaries.go:44] Found k8s binaries, skipping transfer
	I1104 12:08:19.065888   86402 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1104 12:08:19.075491   86402 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1104 12:08:19.092371   86402 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1104 12:08:19.108896   86402 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1104 12:08:19.127622   86402 ssh_runner.go:195] Run: grep 192.168.50.180	control-plane.minikube.internal$ /etc/hosts
	I1104 12:08:19.131597   86402 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.180	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1104 12:08:19.145142   86402 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1104 12:08:19.284780   86402 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1104 12:08:19.303843   86402 certs.go:68] Setting up /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/old-k8s-version-589257 for IP: 192.168.50.180
	I1104 12:08:19.303872   86402 certs.go:194] generating shared ca certs ...
	I1104 12:08:19.303894   86402 certs.go:226] acquiring lock for ca certs: {Name:mk4fb0469da6697654d0290fd25edddd463f47c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 12:08:19.304084   86402 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.key
	I1104 12:08:19.304148   86402 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.key
	I1104 12:08:19.304161   86402 certs.go:256] generating profile certs ...
	I1104 12:08:19.304280   86402 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/old-k8s-version-589257/client.key
	I1104 12:08:19.304347   86402 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/old-k8s-version-589257/apiserver.key.b78bafdb
	I1104 12:08:19.304401   86402 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/old-k8s-version-589257/proxy-client.key
	I1104 12:08:19.304549   86402 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218.pem (1338 bytes)
	W1104 12:08:19.304590   86402 certs.go:480] ignoring /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218_empty.pem, impossibly tiny 0 bytes
	I1104 12:08:19.304608   86402 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem (1675 bytes)
	I1104 12:08:19.304659   86402 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem (1078 bytes)
	I1104 12:08:19.304702   86402 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem (1123 bytes)
	I1104 12:08:19.304729   86402 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem (1679 bytes)
	I1104 12:08:19.304794   86402 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem (1708 bytes)
	I1104 12:08:19.305479   86402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1104 12:08:19.341333   86402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1104 12:08:19.375179   86402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1104 12:08:19.410128   86402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1104 12:08:19.452565   86402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/old-k8s-version-589257/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1104 12:08:19.493404   86402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/old-k8s-version-589257/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1104 12:08:19.521178   86402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/old-k8s-version-589257/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1104 12:08:19.550524   86402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/old-k8s-version-589257/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1104 12:08:19.574903   86402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1104 12:08:19.599308   86402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218.pem --> /usr/share/ca-certificates/27218.pem (1338 bytes)
	I1104 12:08:19.627107   86402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem --> /usr/share/ca-certificates/272182.pem (1708 bytes)
	I1104 12:08:19.657121   86402 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1104 12:08:19.679087   86402 ssh_runner.go:195] Run: openssl version
	I1104 12:08:19.687115   86402 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1104 12:08:19.702537   86402 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1104 12:08:19.707340   86402 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  4 10:38 /usr/share/ca-certificates/minikubeCA.pem
	I1104 12:08:19.707408   86402 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1104 12:08:19.714955   86402 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1104 12:08:19.727883   86402 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/27218.pem && ln -fs /usr/share/ca-certificates/27218.pem /etc/ssl/certs/27218.pem"
	I1104 12:08:19.739690   86402 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/27218.pem
	I1104 12:08:19.744600   86402 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  4 10:49 /usr/share/ca-certificates/27218.pem
	I1104 12:08:19.744656   86402 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/27218.pem
	I1104 12:08:19.750324   86402 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/27218.pem /etc/ssl/certs/51391683.0"
	I1104 12:08:19.760988   86402 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/272182.pem && ln -fs /usr/share/ca-certificates/272182.pem /etc/ssl/certs/272182.pem"
	I1104 12:08:19.772634   86402 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/272182.pem
	I1104 12:08:19.777504   86402 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  4 10:49 /usr/share/ca-certificates/272182.pem
	I1104 12:08:19.777580   86402 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/272182.pem
	I1104 12:08:19.783660   86402 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/272182.pem /etc/ssl/certs/3ec20f2e.0"
	I1104 12:08:19.795483   86402 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1104 12:08:19.800327   86402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1104 12:08:19.806346   86402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1104 12:08:19.813920   86402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1104 12:08:19.820358   86402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1104 12:08:19.826359   86402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1104 12:08:19.832467   86402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1104 12:08:19.838902   86402 kubeadm.go:392] StartCluster: {Name:old-k8s-version-589257 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-589257 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.180 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1104 12:08:19.839018   86402 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1104 12:08:19.839075   86402 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1104 12:08:19.880407   86402 cri.go:89] found id: ""
	I1104 12:08:19.880486   86402 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1104 12:08:19.891135   86402 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1104 12:08:19.891156   86402 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1104 12:08:19.891219   86402 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1104 12:08:19.901437   86402 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1104 12:08:19.902325   86402 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-589257" does not appear in /home/jenkins/minikube-integration/19906-19898/kubeconfig
	I1104 12:08:19.902941   86402 kubeconfig.go:62] /home/jenkins/minikube-integration/19906-19898/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-589257" cluster setting kubeconfig missing "old-k8s-version-589257" context setting]
	I1104 12:08:19.903879   86402 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19906-19898/kubeconfig: {Name:mk164657c178e6abe086acb5c1cadb969cb2b0cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 12:08:19.937877   86402 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1104 12:08:19.948669   86402 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.180
	I1104 12:08:19.948701   86402 kubeadm.go:1160] stopping kube-system containers ...
	I1104 12:08:19.948711   86402 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1104 12:08:19.948752   86402 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1104 12:08:19.988249   86402 cri.go:89] found id: ""
	I1104 12:08:19.988344   86402 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1104 12:08:20.006949   86402 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1104 12:08:20.020677   86402 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1104 12:08:20.020700   86402 kubeadm.go:157] found existing configuration files:
	
	I1104 12:08:20.020747   86402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1104 12:08:20.031509   86402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1104 12:08:20.031566   86402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1104 12:08:20.042229   86402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1104 12:08:20.054695   86402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1104 12:08:20.054810   86402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1104 12:08:20.067410   86402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1104 12:08:20.078639   86402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1104 12:08:20.078711   86402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1104 12:08:20.091357   86402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1104 12:08:20.100986   86402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1104 12:08:20.101071   86402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1104 12:08:20.110345   86402 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1104 12:08:20.119778   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1104 12:08:20.281637   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1104 12:08:21.006838   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1104 12:08:21.234671   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1104 12:08:21.335720   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1104 12:08:21.437522   86402 api_server.go:52] waiting for apiserver process to appear ...
	I1104 12:08:21.437615   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:21.929963   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:21.930522   85500 main.go:141] libmachine: (no-preload-908370) DBG | unable to find current IP address of domain no-preload-908370 in network mk-no-preload-908370
	I1104 12:08:21.930552   85500 main.go:141] libmachine: (no-preload-908370) DBG | I1104 12:08:21.930461   87562 retry.go:31] will retry after 2.171964123s: waiting for machine to come up
	I1104 12:08:24.103844   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:24.104303   85500 main.go:141] libmachine: (no-preload-908370) DBG | unable to find current IP address of domain no-preload-908370 in network mk-no-preload-908370
	I1104 12:08:24.104326   85500 main.go:141] libmachine: (no-preload-908370) DBG | I1104 12:08:24.104257   87562 retry.go:31] will retry after 2.838813818s: waiting for machine to come up
	I1104 12:08:22.056858   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:24.057127   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:23.351405   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:25.850834   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:21.938086   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:22.438198   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:22.938624   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:23.438021   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:23.938119   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:24.438470   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:24.937687   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:25.438045   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:25.937696   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:26.438585   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:26.944977   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:26.945367   85500 main.go:141] libmachine: (no-preload-908370) DBG | unable to find current IP address of domain no-preload-908370 in network mk-no-preload-908370
	I1104 12:08:26.945395   85500 main.go:141] libmachine: (no-preload-908370) DBG | I1104 12:08:26.945349   87562 retry.go:31] will retry after 2.799785534s: waiting for machine to come up
	I1104 12:08:29.746349   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:29.746747   85500 main.go:141] libmachine: (no-preload-908370) Found IP for machine: 192.168.61.91
	I1104 12:08:29.746774   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has current primary IP address 192.168.61.91 and MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:29.746779   85500 main.go:141] libmachine: (no-preload-908370) Reserving static IP address...
	I1104 12:08:29.747195   85500 main.go:141] libmachine: (no-preload-908370) DBG | found host DHCP lease matching {name: "no-preload-908370", mac: "52:54:00:f8:66:d5", ip: "192.168.61.91"} in network mk-no-preload-908370: {Iface:virbr3 ExpiryTime:2024-11-04 13:08:23 +0000 UTC Type:0 Mac:52:54:00:f8:66:d5 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:no-preload-908370 Clientid:01:52:54:00:f8:66:d5}
	I1104 12:08:29.747218   85500 main.go:141] libmachine: (no-preload-908370) Reserved static IP address: 192.168.61.91
	I1104 12:08:29.747234   85500 main.go:141] libmachine: (no-preload-908370) DBG | skip adding static IP to network mk-no-preload-908370 - found existing host DHCP lease matching {name: "no-preload-908370", mac: "52:54:00:f8:66:d5", ip: "192.168.61.91"}
	I1104 12:08:29.747248   85500 main.go:141] libmachine: (no-preload-908370) DBG | Getting to WaitForSSH function...
	I1104 12:08:29.747258   85500 main.go:141] libmachine: (no-preload-908370) Waiting for SSH to be available...
	I1104 12:08:29.749405   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:29.749694   85500 main.go:141] libmachine: (no-preload-908370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:66:d5", ip: ""} in network mk-no-preload-908370: {Iface:virbr3 ExpiryTime:2024-11-04 13:08:23 +0000 UTC Type:0 Mac:52:54:00:f8:66:d5 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:no-preload-908370 Clientid:01:52:54:00:f8:66:d5}
	I1104 12:08:29.749728   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined IP address 192.168.61.91 and MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:29.749887   85500 main.go:141] libmachine: (no-preload-908370) DBG | Using SSH client type: external
	I1104 12:08:29.749908   85500 main.go:141] libmachine: (no-preload-908370) DBG | Using SSH private key: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/no-preload-908370/id_rsa (-rw-------)
	I1104 12:08:29.749933   85500 main.go:141] libmachine: (no-preload-908370) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.91 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19906-19898/.minikube/machines/no-preload-908370/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1104 12:08:29.749951   85500 main.go:141] libmachine: (no-preload-908370) DBG | About to run SSH command:
	I1104 12:08:29.749966   85500 main.go:141] libmachine: (no-preload-908370) DBG | exit 0
	I1104 12:08:29.873121   85500 main.go:141] libmachine: (no-preload-908370) DBG | SSH cmd err, output: <nil>: 
	I1104 12:08:29.873472   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetConfigRaw
	I1104 12:08:29.874081   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetIP
	I1104 12:08:29.876737   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:29.877127   85500 main.go:141] libmachine: (no-preload-908370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:66:d5", ip: ""} in network mk-no-preload-908370: {Iface:virbr3 ExpiryTime:2024-11-04 13:08:23 +0000 UTC Type:0 Mac:52:54:00:f8:66:d5 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:no-preload-908370 Clientid:01:52:54:00:f8:66:d5}
	I1104 12:08:29.877155   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined IP address 192.168.61.91 and MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:29.877473   85500 profile.go:143] Saving config to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/no-preload-908370/config.json ...
	I1104 12:08:29.877717   85500 machine.go:93] provisionDockerMachine start ...
	I1104 12:08:29.877740   85500 main.go:141] libmachine: (no-preload-908370) Calling .DriverName
	I1104 12:08:29.877936   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHHostname
	I1104 12:08:29.880272   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:29.880522   85500 main.go:141] libmachine: (no-preload-908370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:66:d5", ip: ""} in network mk-no-preload-908370: {Iface:virbr3 ExpiryTime:2024-11-04 13:08:23 +0000 UTC Type:0 Mac:52:54:00:f8:66:d5 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:no-preload-908370 Clientid:01:52:54:00:f8:66:d5}
	I1104 12:08:29.880543   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined IP address 192.168.61.91 and MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:29.880718   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHPort
	I1104 12:08:29.880883   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHKeyPath
	I1104 12:08:29.881048   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHKeyPath
	I1104 12:08:29.881186   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHUsername
	I1104 12:08:29.881338   85500 main.go:141] libmachine: Using SSH client type: native
	I1104 12:08:29.881511   85500 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.91 22 <nil> <nil>}
	I1104 12:08:29.881524   85500 main.go:141] libmachine: About to run SSH command:
	hostname
	I1104 12:08:29.989431   85500 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1104 12:08:29.989460   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetMachineName
	I1104 12:08:29.989725   85500 buildroot.go:166] provisioning hostname "no-preload-908370"
	I1104 12:08:29.989757   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetMachineName
	I1104 12:08:29.989974   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHHostname
	I1104 12:08:29.992679   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:29.993028   85500 main.go:141] libmachine: (no-preload-908370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:66:d5", ip: ""} in network mk-no-preload-908370: {Iface:virbr3 ExpiryTime:2024-11-04 13:08:23 +0000 UTC Type:0 Mac:52:54:00:f8:66:d5 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:no-preload-908370 Clientid:01:52:54:00:f8:66:d5}
	I1104 12:08:29.993057   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined IP address 192.168.61.91 and MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:29.993222   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHPort
	I1104 12:08:29.993425   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHKeyPath
	I1104 12:08:29.993553   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHKeyPath
	I1104 12:08:29.993683   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHUsername
	I1104 12:08:29.993817   85500 main.go:141] libmachine: Using SSH client type: native
	I1104 12:08:29.994000   85500 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.91 22 <nil> <nil>}
	I1104 12:08:29.994016   85500 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-908370 && echo "no-preload-908370" | sudo tee /etc/hostname
	I1104 12:08:30.118321   85500 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-908370
	
	I1104 12:08:30.118361   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHHostname
	I1104 12:08:30.121095   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:30.121475   85500 main.go:141] libmachine: (no-preload-908370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:66:d5", ip: ""} in network mk-no-preload-908370: {Iface:virbr3 ExpiryTime:2024-11-04 13:08:23 +0000 UTC Type:0 Mac:52:54:00:f8:66:d5 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:no-preload-908370 Clientid:01:52:54:00:f8:66:d5}
	I1104 12:08:30.121509   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined IP address 192.168.61.91 and MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:30.121697   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHPort
	I1104 12:08:30.121866   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHKeyPath
	I1104 12:08:30.122040   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHKeyPath
	I1104 12:08:30.122176   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHUsername
	I1104 12:08:30.122343   85500 main.go:141] libmachine: Using SSH client type: native
	I1104 12:08:30.122525   85500 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.91 22 <nil> <nil>}
	I1104 12:08:30.122547   85500 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-908370' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-908370/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-908370' | sudo tee -a /etc/hosts; 
				fi
			fi
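The shell snippet above is how provisioning keeps /etc/hosts consistent with the new hostname: the 127.0.1.1 line is only rewritten (or appended) when no entry for the node name exists yet, so re-running the command is a no-op. A minimal Go sketch of how such a command could be assembled before being sent over SSH; the helper name is hypothetical, not minikube's source:

package main

import "fmt"

// hostsPatchCmd returns the idempotent /etc/hosts patch shown in the log:
// if no line already ends with the node name, either rewrite the existing
// 127.0.1.1 entry or append a new one.
func hostsPatchCmd(hostname string) string {
	return fmt.Sprintf(`
		if ! grep -xq '.*\s%[1]s' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
			else
				echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
			fi
		fi`, hostname)
}

func main() {
	fmt.Println(hostsPatchCmd("no-preload-908370"))
}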
	I1104 12:08:26.557368   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:29.056377   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:28.349510   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:30.350431   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:26.937831   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:27.438442   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:27.938240   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:28.438463   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:28.937958   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:29.437676   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:29.938298   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:30.438423   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:30.937953   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:31.438075   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
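The repeated "sudo pgrep -xnf kube-apiserver.*minikube.*" runs (here from process 86402) are a fixed-interval wait for the apiserver process to reappear after a restart; the command is retried roughly every 500ms until it succeeds or the caller gives up. A rough sketch of that polling pattern, with a stand-in check function and an assumed timeout (neither is minikube's actual code):

package main

import (
	"fmt"
	"time"
)

// waitForAPIServerProcess retries a process check roughly every 500ms until
// it succeeds or the timeout expires, like the repeated pgrep calls above.
// check stands in for running pgrep over SSH; the timeout is an assumption.
func waitForAPIServerProcess(check func() bool, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if check() {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("kube-apiserver did not appear within %s", timeout)
}

func main() {
	attempts := 0
	err := waitForAPIServerProcess(func() bool {
		attempts++
		return attempts > 3
	}, 5*time.Second)
	fmt.Println(attempts, err)
}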
	I1104 12:08:30.237340   85500 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1104 12:08:30.237370   85500 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19906-19898/.minikube CaCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19906-19898/.minikube}
	I1104 12:08:30.237413   85500 buildroot.go:174] setting up certificates
	I1104 12:08:30.237429   85500 provision.go:84] configureAuth start
	I1104 12:08:30.237446   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetMachineName
	I1104 12:08:30.237725   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetIP
	I1104 12:08:30.240026   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:30.240350   85500 main.go:141] libmachine: (no-preload-908370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:66:d5", ip: ""} in network mk-no-preload-908370: {Iface:virbr3 ExpiryTime:2024-11-04 13:08:23 +0000 UTC Type:0 Mac:52:54:00:f8:66:d5 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:no-preload-908370 Clientid:01:52:54:00:f8:66:d5}
	I1104 12:08:30.240380   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined IP address 192.168.61.91 and MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:30.240472   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHHostname
	I1104 12:08:30.242777   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:30.243101   85500 main.go:141] libmachine: (no-preload-908370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:66:d5", ip: ""} in network mk-no-preload-908370: {Iface:virbr3 ExpiryTime:2024-11-04 13:08:23 +0000 UTC Type:0 Mac:52:54:00:f8:66:d5 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:no-preload-908370 Clientid:01:52:54:00:f8:66:d5}
	I1104 12:08:30.243119   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined IP address 192.168.61.91 and MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:30.243302   85500 provision.go:143] copyHostCerts
	I1104 12:08:30.243358   85500 exec_runner.go:144] found /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem, removing ...
	I1104 12:08:30.243368   85500 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem
	I1104 12:08:30.243427   85500 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem (1078 bytes)
	I1104 12:08:30.243532   85500 exec_runner.go:144] found /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem, removing ...
	I1104 12:08:30.243542   85500 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem
	I1104 12:08:30.243565   85500 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem (1123 bytes)
	I1104 12:08:30.243635   85500 exec_runner.go:144] found /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem, removing ...
	I1104 12:08:30.243643   85500 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem
	I1104 12:08:30.243661   85500 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem (1679 bytes)
	I1104 12:08:30.243719   85500 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem org=jenkins.no-preload-908370 san=[127.0.0.1 192.168.61.91 localhost minikube no-preload-908370]
	I1104 12:08:30.515270   85500 provision.go:177] copyRemoteCerts
	I1104 12:08:30.515350   85500 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1104 12:08:30.515381   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHHostname
	I1104 12:08:30.518651   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:30.519188   85500 main.go:141] libmachine: (no-preload-908370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:66:d5", ip: ""} in network mk-no-preload-908370: {Iface:virbr3 ExpiryTime:2024-11-04 13:08:23 +0000 UTC Type:0 Mac:52:54:00:f8:66:d5 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:no-preload-908370 Clientid:01:52:54:00:f8:66:d5}
	I1104 12:08:30.519218   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined IP address 192.168.61.91 and MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:30.519420   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHPort
	I1104 12:08:30.519600   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHKeyPath
	I1104 12:08:30.519777   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHUsername
	I1104 12:08:30.519896   85500 sshutil.go:53] new ssh client: &{IP:192.168.61.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/no-preload-908370/id_rsa Username:docker}
	I1104 12:08:30.603170   85500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1104 12:08:30.626226   85500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1104 12:08:30.649353   85500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1104 12:08:30.684759   85500 provision.go:87] duration metric: took 447.313588ms to configureAuth
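configureAuth regenerates the machine's server certificate with the SAN list printed by provision.go:117 above: loopback, the machine IP, localhost, minikube and the machine name. A tiny illustrative helper for how that list is put together; the function itself is hypothetical:

package main

import "fmt"

// serverCertSANs lists the subject alternative names baked into the machine
// server certificate, matching the san=[...] printed by provision.go above.
func serverCertSANs(ip, machineName string) []string {
	return []string{"127.0.0.1", ip, "localhost", "minikube", machineName}
}

func main() {
	fmt.Println(serverCertSANs("192.168.61.91", "no-preload-908370"))
}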
	I1104 12:08:30.684789   85500 buildroot.go:189] setting minikube options for container-runtime
	I1104 12:08:30.684962   85500 config.go:182] Loaded profile config "no-preload-908370": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1104 12:08:30.685029   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHHostname
	I1104 12:08:30.687429   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:30.687815   85500 main.go:141] libmachine: (no-preload-908370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:66:d5", ip: ""} in network mk-no-preload-908370: {Iface:virbr3 ExpiryTime:2024-11-04 13:08:23 +0000 UTC Type:0 Mac:52:54:00:f8:66:d5 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:no-preload-908370 Clientid:01:52:54:00:f8:66:d5}
	I1104 12:08:30.687840   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined IP address 192.168.61.91 and MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:30.688015   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHPort
	I1104 12:08:30.688192   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHKeyPath
	I1104 12:08:30.688325   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHKeyPath
	I1104 12:08:30.688471   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHUsername
	I1104 12:08:30.688640   85500 main.go:141] libmachine: Using SSH client type: native
	I1104 12:08:30.688830   85500 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.91 22 <nil> <nil>}
	I1104 12:08:30.688848   85500 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1104 12:08:30.919118   85500 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1104 12:08:30.919142   85500 machine.go:96] duration metric: took 1.041410402s to provisionDockerMachine
	I1104 12:08:30.919156   85500 start.go:293] postStartSetup for "no-preload-908370" (driver="kvm2")
	I1104 12:08:30.919169   85500 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1104 12:08:30.919200   85500 main.go:141] libmachine: (no-preload-908370) Calling .DriverName
	I1104 12:08:30.919513   85500 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1104 12:08:30.919538   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHHostname
	I1104 12:08:30.922075   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:30.922485   85500 main.go:141] libmachine: (no-preload-908370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:66:d5", ip: ""} in network mk-no-preload-908370: {Iface:virbr3 ExpiryTime:2024-11-04 13:08:23 +0000 UTC Type:0 Mac:52:54:00:f8:66:d5 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:no-preload-908370 Clientid:01:52:54:00:f8:66:d5}
	I1104 12:08:30.922510   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined IP address 192.168.61.91 and MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:30.922615   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHPort
	I1104 12:08:30.922823   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHKeyPath
	I1104 12:08:30.922991   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHUsername
	I1104 12:08:30.923107   85500 sshutil.go:53] new ssh client: &{IP:192.168.61.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/no-preload-908370/id_rsa Username:docker}
	I1104 12:08:31.007598   85500 ssh_runner.go:195] Run: cat /etc/os-release
	I1104 12:08:31.011558   85500 info.go:137] Remote host: Buildroot 2023.02.9
	I1104 12:08:31.011588   85500 filesync.go:126] Scanning /home/jenkins/minikube-integration/19906-19898/.minikube/addons for local assets ...
	I1104 12:08:31.011665   85500 filesync.go:126] Scanning /home/jenkins/minikube-integration/19906-19898/.minikube/files for local assets ...
	I1104 12:08:31.011766   85500 filesync.go:149] local asset: /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem -> 272182.pem in /etc/ssl/certs
	I1104 12:08:31.011859   85500 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1104 12:08:31.020788   85500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem --> /etc/ssl/certs/272182.pem (1708 bytes)
	I1104 12:08:31.044379   85500 start.go:296] duration metric: took 125.209775ms for postStartSetup
	I1104 12:08:31.044414   85500 fix.go:56] duration metric: took 19.154609071s for fixHost
	I1104 12:08:31.044442   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHHostname
	I1104 12:08:31.047152   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:31.047426   85500 main.go:141] libmachine: (no-preload-908370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:66:d5", ip: ""} in network mk-no-preload-908370: {Iface:virbr3 ExpiryTime:2024-11-04 13:08:23 +0000 UTC Type:0 Mac:52:54:00:f8:66:d5 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:no-preload-908370 Clientid:01:52:54:00:f8:66:d5}
	I1104 12:08:31.047461   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined IP address 192.168.61.91 and MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:31.047639   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHPort
	I1104 12:08:31.047829   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHKeyPath
	I1104 12:08:31.047976   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHKeyPath
	I1104 12:08:31.048138   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHUsername
	I1104 12:08:31.048296   85500 main.go:141] libmachine: Using SSH client type: native
	I1104 12:08:31.048464   85500 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.91 22 <nil> <nil>}
	I1104 12:08:31.048474   85500 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1104 12:08:31.157723   85500 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730722111.115015995
	
	I1104 12:08:31.157747   85500 fix.go:216] guest clock: 1730722111.115015995
	I1104 12:08:31.157758   85500 fix.go:229] Guest: 2024-11-04 12:08:31.115015995 +0000 UTC Remote: 2024-11-04 12:08:31.044427312 +0000 UTC m=+350.890212897 (delta=70.588683ms)
	I1104 12:08:31.157829   85500 fix.go:200] guest clock delta is within tolerance: 70.588683ms
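fix.go compares the guest's "date +%s.%N" output against the host clock and accepts the host if the absolute delta stays inside a tolerance; here the delta is 70.588683ms. A sketch of that comparison, with a 2s tolerance chosen purely for illustration (the real tolerance is not shown in the log):

package main

import (
	"fmt"
	"time"
)

// clockDelta returns the absolute difference between the guest clock and the
// local clock and whether it falls inside the allowed tolerance.
func clockDelta(guest, local time.Time, tolerance time.Duration) (time.Duration, bool) {
	d := local.Sub(guest)
	if d < 0 {
		d = -d
	}
	return d, d <= tolerance
}

func main() {
	guest := time.Unix(1730722111, 115015995) // from the `date +%s.%N` output above
	local := guest.Add(-70588683 * time.Nanosecond)
	delta, ok := clockDelta(guest, local, 2*time.Second)
	fmt.Println(delta, ok) // 70.588683ms true
}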
	I1104 12:08:31.157841   85500 start.go:83] releasing machines lock for "no-preload-908370", held for 19.268070408s
	I1104 12:08:31.157875   85500 main.go:141] libmachine: (no-preload-908370) Calling .DriverName
	I1104 12:08:31.158131   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetIP
	I1104 12:08:31.160806   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:31.161159   85500 main.go:141] libmachine: (no-preload-908370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:66:d5", ip: ""} in network mk-no-preload-908370: {Iface:virbr3 ExpiryTime:2024-11-04 13:08:23 +0000 UTC Type:0 Mac:52:54:00:f8:66:d5 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:no-preload-908370 Clientid:01:52:54:00:f8:66:d5}
	I1104 12:08:31.161191   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined IP address 192.168.61.91 and MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:31.161371   85500 main.go:141] libmachine: (no-preload-908370) Calling .DriverName
	I1104 12:08:31.161907   85500 main.go:141] libmachine: (no-preload-908370) Calling .DriverName
	I1104 12:08:31.162092   85500 main.go:141] libmachine: (no-preload-908370) Calling .DriverName
	I1104 12:08:31.162174   85500 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1104 12:08:31.162217   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHHostname
	I1104 12:08:31.162444   85500 ssh_runner.go:195] Run: cat /version.json
	I1104 12:08:31.162470   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHHostname
	I1104 12:08:31.165069   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:31.165316   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:31.165505   85500 main.go:141] libmachine: (no-preload-908370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:66:d5", ip: ""} in network mk-no-preload-908370: {Iface:virbr3 ExpiryTime:2024-11-04 13:08:23 +0000 UTC Type:0 Mac:52:54:00:f8:66:d5 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:no-preload-908370 Clientid:01:52:54:00:f8:66:d5}
	I1104 12:08:31.165532   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined IP address 192.168.61.91 and MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:31.165656   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHPort
	I1104 12:08:31.165771   85500 main.go:141] libmachine: (no-preload-908370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:66:d5", ip: ""} in network mk-no-preload-908370: {Iface:virbr3 ExpiryTime:2024-11-04 13:08:23 +0000 UTC Type:0 Mac:52:54:00:f8:66:d5 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:no-preload-908370 Clientid:01:52:54:00:f8:66:d5}
	I1104 12:08:31.165795   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined IP address 192.168.61.91 and MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:31.165842   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHKeyPath
	I1104 12:08:31.166006   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHPort
	I1104 12:08:31.166024   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHUsername
	I1104 12:08:31.166186   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHKeyPath
	I1104 12:08:31.166183   85500 sshutil.go:53] new ssh client: &{IP:192.168.61.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/no-preload-908370/id_rsa Username:docker}
	I1104 12:08:31.166327   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHUsername
	I1104 12:08:31.166449   85500 sshutil.go:53] new ssh client: &{IP:192.168.61.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/no-preload-908370/id_rsa Username:docker}
	I1104 12:08:31.267746   85500 ssh_runner.go:195] Run: systemctl --version
	I1104 12:08:31.273307   85500 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1104 12:08:31.410198   85500 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1104 12:08:31.416652   85500 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1104 12:08:31.416726   85500 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1104 12:08:31.432260   85500 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1104 12:08:31.432288   85500 start.go:495] detecting cgroup driver to use...
	I1104 12:08:31.432345   85500 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1104 12:08:31.453134   85500 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1104 12:08:31.467457   85500 docker.go:217] disabling cri-docker service (if available) ...
	I1104 12:08:31.467516   85500 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1104 12:08:31.481392   85500 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1104 12:08:31.495740   85500 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1104 12:08:31.617549   85500 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1104 12:08:31.802455   85500 docker.go:233] disabling docker service ...
	I1104 12:08:31.802511   85500 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1104 12:08:31.815534   85500 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1104 12:08:31.827495   85500 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1104 12:08:31.938344   85500 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1104 12:08:32.042827   85500 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1104 12:08:32.056126   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1104 12:08:32.074274   85500 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1104 12:08:32.074337   85500 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:08:32.084061   85500 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1104 12:08:32.084138   85500 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:08:32.093533   85500 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:08:32.104351   85500 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:08:32.113753   85500 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1104 12:08:32.123391   85500 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:08:32.133089   85500 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:08:32.149073   85500 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:08:32.159888   85500 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1104 12:08:32.169208   85500 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1104 12:08:32.169279   85500 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1104 12:08:32.181319   85500 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1104 12:08:32.192472   85500 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1104 12:08:32.300710   85500 ssh_runner.go:195] Run: sudo systemctl restart crio
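The block above rewrites /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroup driver, conmon cgroup, the unprivileged-port sysctl) and then restarts CRI-O so the changes take effect. A condensed sketch of how the two central sed commands plus the restart could be generated; the helper and the subset of edits shown are illustrative only:

package main

import "fmt"

// crioConfigCmds returns shell commands that point CRI-O at the desired
// pause image and cgroup driver by editing 02-crio.conf in place, then
// restart the service, in the spirit of the crio.go steps above.
func crioConfigCmds(pauseImage, cgroupDriver string) []string {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	return []string{
		fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' %s`, pauseImage, conf),
		fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "%s"|' %s`, cgroupDriver, conf),
		"sudo systemctl restart crio",
	}
}

func main() {
	for _, c := range crioConfigCmds("registry.k8s.io/pause:3.10", "cgroupfs") {
		fmt.Println(c)
	}
}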
	I1104 12:08:32.386906   85500 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1104 12:08:32.386980   85500 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1104 12:08:32.391498   85500 start.go:563] Will wait 60s for crictl version
	I1104 12:08:32.391554   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:08:32.395471   85500 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1104 12:08:32.439094   85500 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1104 12:08:32.439168   85500 ssh_runner.go:195] Run: crio --version
	I1104 12:08:32.466609   85500 ssh_runner.go:195] Run: crio --version
	I1104 12:08:32.499305   85500 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1104 12:08:32.500825   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetIP
	I1104 12:08:32.503461   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:32.503827   85500 main.go:141] libmachine: (no-preload-908370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:66:d5", ip: ""} in network mk-no-preload-908370: {Iface:virbr3 ExpiryTime:2024-11-04 13:08:23 +0000 UTC Type:0 Mac:52:54:00:f8:66:d5 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:no-preload-908370 Clientid:01:52:54:00:f8:66:d5}
	I1104 12:08:32.503857   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined IP address 192.168.61.91 and MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:32.504039   85500 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1104 12:08:32.508082   85500 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
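The /bin/bash command above rebuilds /etc/hosts without any stale host.minikube.internal line, appends the current gateway mapping, and copies the temp file back with sudo. A small sketch of composing that pipeline; the helper is hypothetical but produces the same shell as logged:

package main

import "fmt"

// ensureHostsEntry builds the pipeline shown in the log: drop any existing
// line for name from /etc/hosts, append a fresh "ip<TAB>name" mapping, and
// install the result with sudo cp.
func ensureHostsEntry(ip, name string) string {
	return fmt.Sprintf("{ grep -v $'\\t%s$' \"/etc/hosts\"; echo \"%s\t%s\"; } > /tmp/h.$$; sudo cp /tmp/h.$$ \"/etc/hosts\"", name, ip, name)
}

func main() {
	fmt.Println(ensureHostsEntry("192.168.61.1", "host.minikube.internal"))
}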
	I1104 12:08:32.520202   85500 kubeadm.go:883] updating cluster {Name:no-preload-908370 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-908370 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.91 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1104 12:08:32.520359   85500 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1104 12:08:32.520402   85500 ssh_runner.go:195] Run: sudo crictl images --output json
	I1104 12:08:32.553752   85500 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1104 12:08:32.553781   85500 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.2 registry.k8s.io/kube-controller-manager:v1.31.2 registry.k8s.io/kube-scheduler:v1.31.2 registry.k8s.io/kube-proxy:v1.31.2 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1104 12:08:32.553844   85500 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1104 12:08:32.553844   85500 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1104 12:08:32.553868   85500 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.2
	I1104 12:08:32.553853   85500 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.2
	I1104 12:08:32.553886   85500 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1104 12:08:32.553925   85500 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1104 12:08:32.553969   85500 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1104 12:08:32.553978   85500 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.2
	I1104 12:08:32.555506   85500 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1104 12:08:32.555518   85500 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.2
	I1104 12:08:32.555510   85500 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1104 12:08:32.555513   85500 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.2
	I1104 12:08:32.555591   85500 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1104 12:08:32.555601   85500 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.2
	I1104 12:08:32.555514   85500 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I1104 12:08:32.555658   85500 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I1104 12:08:32.706982   85500 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I1104 12:08:32.707334   85500 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.2
	I1104 12:08:32.712904   85500 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.2
	I1104 12:08:32.721917   85500 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I1104 12:08:32.727829   85500 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.2
	I1104 12:08:32.741130   85500 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I1104 12:08:32.743716   85500 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.2
	I1104 12:08:32.796406   85500 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I1104 12:08:32.796448   85500 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I1104 12:08:32.796502   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:08:32.814658   85500 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.2" does not exist at hash "9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173" in container runtime
	I1104 12:08:32.814697   85500 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.2
	I1104 12:08:32.814735   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:08:32.828308   85500 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.2" does not exist at hash "847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856" in container runtime
	I1104 12:08:32.828362   85500 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.2
	I1104 12:08:32.828416   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:08:32.882090   85500 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I1104 12:08:32.882140   85500 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I1104 12:08:32.882205   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:08:32.886473   85500 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.2" does not exist at hash "0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503" in container runtime
	I1104 12:08:32.886518   85500 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1104 12:08:32.886567   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:08:32.956331   85500 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.2" needs transfer: "registry.k8s.io/kube-proxy:v1.31.2" does not exist at hash "505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38" in container runtime
	I1104 12:08:32.956394   85500 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.2
	I1104 12:08:32.956414   85500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1104 12:08:32.956462   85500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1104 12:08:32.956427   85500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1104 12:08:32.956521   85500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1104 12:08:32.956425   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:08:32.956506   85500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1104 12:08:33.061683   85500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1104 12:08:33.061723   85500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1104 12:08:33.061752   85500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1104 12:08:33.061790   85500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1104 12:08:33.061836   85500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1104 12:08:33.061893   85500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1104 12:08:33.168519   85500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1104 12:08:33.168596   85500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1104 12:08:33.187540   85500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1104 12:08:33.188933   85500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1104 12:08:33.189015   85500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1104 12:08:33.199281   85500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1104 12:08:33.285086   85500 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2
	I1104 12:08:33.285145   85500 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2
	I1104 12:08:33.285245   85500 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1104 12:08:33.285247   85500 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1104 12:08:33.307647   85500 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I1104 12:08:33.307769   85500 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I1104 12:08:33.307784   85500 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2
	I1104 12:08:33.307818   85500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1104 12:08:33.307869   85500 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1104 12:08:33.312697   85500 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I1104 12:08:33.312808   85500 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I1104 12:08:33.314341   85500 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.2 (exists)
	I1104 12:08:33.314358   85500 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1104 12:08:33.314396   85500 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1104 12:08:33.314535   85500 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.2 (exists)
	I1104 12:08:33.319449   85500 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.2 (exists)
	I1104 12:08:33.319604   85500 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I1104 12:08:33.356390   85500 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I1104 12:08:33.356478   85500 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2
	I1104 12:08:33.356569   85500 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.2
	I1104 12:08:33.512915   85500 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1104 12:08:31.057314   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:33.059599   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:32.350656   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:34.352338   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:31.938577   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:32.438561   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:32.938188   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:33.437856   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:33.938433   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:34.438381   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:34.938164   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:35.438120   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:35.937802   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:36.438365   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:35.736963   85500 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2: (2.42254522s)
	I1104 12:08:35.736994   85500 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2 from cache
	I1104 12:08:35.737014   85500 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1104 12:08:35.737027   85500 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.2: (2.380435224s)
	I1104 12:08:35.737058   85500 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.2 (exists)
	I1104 12:08:35.737063   85500 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1104 12:08:35.737104   85500 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.224165247s)
	I1104 12:08:35.737156   85500 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1104 12:08:35.737191   85500 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1104 12:08:35.737267   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:08:37.693026   85500 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2: (1.955928101s)
	I1104 12:08:37.693065   85500 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2 from cache
	I1104 12:08:37.693086   85500 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1104 12:08:37.693047   85500 ssh_runner.go:235] Completed: which crictl: (1.955763498s)
	I1104 12:08:37.693168   85500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1104 12:08:37.693131   85500 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1104 12:08:39.156860   85500 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2: (1.463570619s)
	I1104 12:08:39.156894   85500 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2 from cache
	I1104 12:08:39.156922   85500 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I1104 12:08:39.156930   85500 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.463741565s)
	I1104 12:08:39.156980   85500 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I1104 12:08:39.156998   85500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1104 12:08:35.625930   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:38.057567   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:36.850619   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:38.851157   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:40.852272   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:36.938295   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:37.437646   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:37.937807   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:38.438623   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:38.938662   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:39.438288   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:39.938048   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:40.438404   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:40.938494   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:41.437875   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:42.701724   85500 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.544718982s)
	I1104 12:08:42.701751   85500 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I1104 12:08:42.701771   85500 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I1104 12:08:42.701810   85500 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I1104 12:08:42.701826   85500 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.544784275s)
	I1104 12:08:42.701912   85500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1104 12:08:44.666599   85500 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.964646885s)
	I1104 12:08:44.666653   85500 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1104 12:08:44.666723   85500 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (1.964896366s)
	I1104 12:08:44.666744   85500 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I1104 12:08:44.666748   85500 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1104 12:08:44.666765   85500 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.2
	I1104 12:08:44.666807   85500 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2
	I1104 12:08:44.671475   85500 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1104 12:08:40.556827   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:42.557662   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:45.058481   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:43.351505   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:45.851360   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:41.938001   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:42.438702   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:42.938239   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:43.438469   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:43.938465   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:44.437744   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:44.938478   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:45.437757   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:45.938035   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:46.438173   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:46.627407   85500 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2: (1.960571593s)
	I1104 12:08:46.627437   85500 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2 from cache
	I1104 12:08:46.627473   85500 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1104 12:08:46.627537   85500 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1104 12:08:47.273537   85500 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1104 12:08:47.273578   85500 cache_images.go:123] Successfully loaded all cached images
	I1104 12:08:47.273583   85500 cache_images.go:92] duration metric: took 14.719789832s to LoadCachedImages
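Because no preload tarball exists for this image set, every control-plane image had to be transferred from the local cache and loaded into CRI-O one tarball at a time with "sudo podman load -i", which is the ~14.7s accounted for above. A stripped-down sketch of that loop; the run callback stands in for ssh_runner, and the copy step for missing tarballs is omitted:

package main

import (
	"fmt"
	"path"
)

// loadCachedImages loads each missing image tarball from
// /var/lib/minikube/images into the container runtime with podman,
// in the spirit of the cache_images/crio lines above.
func loadCachedImages(missing []string, run func(cmd string) error) error {
	for _, img := range missing {
		tar := path.Join("/var/lib/minikube/images", img)
		if err := run("sudo podman load -i " + tar); err != nil {
			return fmt.Errorf("loading %s: %w", tar, err)
		}
	}
	return nil
}

func main() {
	missing := []string{
		"kube-controller-manager_v1.31.2", "kube-apiserver_v1.31.2",
		"kube-scheduler_v1.31.2", "etcd_3.5.15-0", "coredns_v1.11.3",
		"kube-proxy_v1.31.2", "storage-provisioner_v5",
	}
	_ = loadCachedImages(missing, func(cmd string) error {
		fmt.Println(cmd)
		return nil
	})
}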
	I1104 12:08:47.273594   85500 kubeadm.go:934] updating node { 192.168.61.91 8443 v1.31.2 crio true true} ...
	I1104 12:08:47.273686   85500 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-908370 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.91
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:no-preload-908370 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
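The kubelet unit drop-in above is rendered from the node's name, IP and Kubernetes version; everything else comes from the on-disk config.yaml shown further down. A small illustration of assembling that ExecStart line (the helper is hypothetical, not kubeadm.go):

package main

import "fmt"

// kubeletExecStart assembles the kubelet command line shown in the unit
// drop-in above from the Kubernetes version, node name and node IP.
func kubeletExecStart(version, nodeName, nodeIP string) string {
	return fmt.Sprintf("/var/lib/minikube/binaries/%s/kubelet"+
		" --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf"+
		" --config=/var/lib/kubelet/config.yaml"+
		" --hostname-override=%s"+
		" --kubeconfig=/etc/kubernetes/kubelet.conf"+
		" --node-ip=%s", version, nodeName, nodeIP)
}

func main() {
	fmt.Println(kubeletExecStart("v1.31.2", "no-preload-908370", "192.168.61.91"))
}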
	I1104 12:08:47.273747   85500 ssh_runner.go:195] Run: crio config
	I1104 12:08:47.319888   85500 cni.go:84] Creating CNI manager for ""
	I1104 12:08:47.319916   85500 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1104 12:08:47.319929   85500 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1104 12:08:47.319952   85500 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.91 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-908370 NodeName:no-preload-908370 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.91"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.91 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1104 12:08:47.320098   85500 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.91
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-908370"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.91"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.91"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1104 12:08:47.320185   85500 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1104 12:08:47.330284   85500 binaries.go:44] Found k8s binaries, skipping transfer
	I1104 12:08:47.330352   85500 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1104 12:08:47.340015   85500 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I1104 12:08:47.356601   85500 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1104 12:08:47.371327   85500 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2294 bytes)
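	The multi-document kubeadm config logged above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) is written here to /var/tmp/minikube/kubeadm.yaml.new. As a rough illustration only (not minikube's own code), a generated file like this can be sanity-checked by decoding each YAML document and confirming its apiVersion and kind:

// Minimal sketch: decode every document in the generated kubeadm YAML and
// print its apiVersion/kind. Assumes the path shown in the log line above.
package main

import (
	"errors"
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err != nil {
			if errors.Is(err, io.EOF) {
				break // no more documents
			}
			panic(err) // malformed YAML
		}
		// Expect InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration.
		fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
	}
}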
	I1104 12:08:47.387251   85500 ssh_runner.go:195] Run: grep 192.168.61.91	control-plane.minikube.internal$ /etc/hosts
	I1104 12:08:47.391041   85500 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.91	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1104 12:08:47.402283   85500 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1104 12:08:47.527723   85500 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1104 12:08:47.544017   85500 certs.go:68] Setting up /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/no-preload-908370 for IP: 192.168.61.91
	I1104 12:08:47.544041   85500 certs.go:194] generating shared ca certs ...
	I1104 12:08:47.544060   85500 certs.go:226] acquiring lock for ca certs: {Name:mk4fb0469da6697654d0290fd25edddd463f47c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 12:08:47.544244   85500 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.key
	I1104 12:08:47.544309   85500 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.key
	I1104 12:08:47.544322   85500 certs.go:256] generating profile certs ...
	I1104 12:08:47.544412   85500 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/no-preload-908370/client.key
	I1104 12:08:47.544485   85500 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/no-preload-908370/apiserver.key.890cb7f7
	I1104 12:08:47.544522   85500 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/no-preload-908370/proxy-client.key
	I1104 12:08:47.544626   85500 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218.pem (1338 bytes)
	W1104 12:08:47.544654   85500 certs.go:480] ignoring /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218_empty.pem, impossibly tiny 0 bytes
	I1104 12:08:47.544663   85500 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem (1675 bytes)
	I1104 12:08:47.544685   85500 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem (1078 bytes)
	I1104 12:08:47.544706   85500 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem (1123 bytes)
	I1104 12:08:47.544726   85500 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem (1679 bytes)
	I1104 12:08:47.544774   85500 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem (1708 bytes)
	I1104 12:08:47.545439   85500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1104 12:08:47.588488   85500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1104 12:08:47.631341   85500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1104 12:08:47.666571   85500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1104 12:08:47.698703   85500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/no-preload-908370/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1104 12:08:47.725285   85500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/no-preload-908370/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1104 12:08:47.748890   85500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/no-preload-908370/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1104 12:08:47.775589   85500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/no-preload-908370/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1104 12:08:47.799507   85500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1104 12:08:47.823383   85500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218.pem --> /usr/share/ca-certificates/27218.pem (1338 bytes)
	I1104 12:08:47.847515   85500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem --> /usr/share/ca-certificates/272182.pem (1708 bytes)
	I1104 12:08:47.869937   85500 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1104 12:08:47.886413   85500 ssh_runner.go:195] Run: openssl version
	I1104 12:08:47.892041   85500 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/272182.pem && ln -fs /usr/share/ca-certificates/272182.pem /etc/ssl/certs/272182.pem"
	I1104 12:08:47.901942   85500 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/272182.pem
	I1104 12:08:47.906128   85500 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  4 10:49 /usr/share/ca-certificates/272182.pem
	I1104 12:08:47.906182   85500 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/272182.pem
	I1104 12:08:47.911506   85500 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/272182.pem /etc/ssl/certs/3ec20f2e.0"
	I1104 12:08:47.921614   85500 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1104 12:08:47.932358   85500 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1104 12:08:47.936742   85500 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  4 10:38 /usr/share/ca-certificates/minikubeCA.pem
	I1104 12:08:47.936801   85500 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1104 12:08:47.942544   85500 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1104 12:08:47.953063   85500 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/27218.pem && ln -fs /usr/share/ca-certificates/27218.pem /etc/ssl/certs/27218.pem"
	I1104 12:08:47.963293   85500 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/27218.pem
	I1104 12:08:47.967487   85500 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  4 10:49 /usr/share/ca-certificates/27218.pem
	I1104 12:08:47.967547   85500 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/27218.pem
	I1104 12:08:47.972898   85500 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/27218.pem /etc/ssl/certs/51391683.0"
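	The repeated "openssl x509 -hash -noout" plus "ln -fs" pairs above install each CA certificate under /etc/ssl/certs/<subject-hash>.0, which is the name OpenSSL's trust lookup expects. A minimal sketch of that step (installCACert is a hypothetical helper, not minikube's implementation; the path is one from the log):

// Compute the OpenSSL subject hash of a CA cert and symlink it into
// /etc/ssl/certs/<hash>.0, mirroring the "openssl x509 -hash" + "ln -fs" pairs.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func installCACert(pemPath string) error {
	// openssl prints the subject hash used for the /etc/ssl/certs/<hash>.0 name.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	_ = os.Remove(link) // replace any stale link, like "ln -fs"
	return os.Symlink(pemPath, link)
}

func main() {
	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}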
	I1104 12:08:47.983089   85500 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1104 12:08:47.987532   85500 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1104 12:08:47.993296   85500 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1104 12:08:47.999021   85500 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1104 12:08:48.004741   85500 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1104 12:08:48.010227   85500 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1104 12:08:48.015795   85500 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
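	Each "openssl x509 -checkend 86400" run above asks whether the certificate expires within the next 24 hours (86400 seconds). The same check expressed in Go, as a standalone sketch (expiresWithin is a hypothetical helper; the path is one of those probed in the log):

// Report whether a certificate expires within the given duration, equivalent
// to "openssl x509 -checkend".
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// Expiry falls inside the window if "now + d" is past NotAfter.
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}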
	I1104 12:08:48.021356   85500 kubeadm.go:392] StartCluster: {Name:no-preload-908370 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-908370 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.91 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1104 12:08:48.021431   85500 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1104 12:08:48.021471   85500 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1104 12:08:48.057729   85500 cri.go:89] found id: ""
	I1104 12:08:48.057805   85500 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1104 12:08:48.067591   85500 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1104 12:08:48.067610   85500 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1104 12:08:48.067663   85500 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1104 12:08:48.076604   85500 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1104 12:08:48.077987   85500 kubeconfig.go:125] found "no-preload-908370" server: "https://192.168.61.91:8443"
	I1104 12:08:48.080042   85500 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1104 12:08:48.089796   85500 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.91
	I1104 12:08:48.089826   85500 kubeadm.go:1160] stopping kube-system containers ...
	I1104 12:08:48.089838   85500 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1104 12:08:48.089886   85500 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1104 12:08:48.126920   85500 cri.go:89] found id: ""
	I1104 12:08:48.126998   85500 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1104 12:08:48.143409   85500 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1104 12:08:48.152783   85500 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1104 12:08:48.152809   85500 kubeadm.go:157] found existing configuration files:
	
	I1104 12:08:48.152858   85500 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1104 12:08:48.161458   85500 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1104 12:08:48.161542   85500 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1104 12:08:48.170361   85500 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1104 12:08:48.179217   85500 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1104 12:08:48.179272   85500 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1104 12:08:48.187834   85500 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1104 12:08:48.196025   85500 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1104 12:08:48.196079   85500 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1104 12:08:48.204809   85500 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1104 12:08:48.213280   85500 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1104 12:08:48.213338   85500 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1104 12:08:48.222672   85500 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1104 12:08:48.232374   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1104 12:08:48.328999   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1104 12:08:49.920988   85500 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.591954434s)
	I1104 12:08:49.921028   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1104 12:08:50.121679   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1104 12:08:50.181412   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1104 12:08:47.558137   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:49.559576   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:48.349974   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:50.350855   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:46.938016   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:47.438229   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:47.938447   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:48.437950   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:48.938450   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:49.437785   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:49.938444   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:50.438413   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:50.938514   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:51.438658   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:50.253614   85500 api_server.go:52] waiting for apiserver process to appear ...
	I1104 12:08:50.253693   85500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:50.754467   85500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:51.254553   85500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:51.271229   85500 api_server.go:72] duration metric: took 1.017613016s to wait for apiserver process to appear ...
	I1104 12:08:51.271255   85500 api_server.go:88] waiting for apiserver healthz status ...
	I1104 12:08:51.271278   85500 api_server.go:253] Checking apiserver healthz at https://192.168.61.91:8443/healthz ...
	I1104 12:08:51.271794   85500 api_server.go:269] stopped: https://192.168.61.91:8443/healthz: Get "https://192.168.61.91:8443/healthz": dial tcp 192.168.61.91:8443: connect: connection refused
	I1104 12:08:51.771551   85500 api_server.go:253] Checking apiserver healthz at https://192.168.61.91:8443/healthz ...
	I1104 12:08:54.499268   85500 api_server.go:279] https://192.168.61.91:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1104 12:08:54.499296   85500 api_server.go:103] status: https://192.168.61.91:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1104 12:08:54.499310   85500 api_server.go:253] Checking apiserver healthz at https://192.168.61.91:8443/healthz ...
	I1104 12:08:54.617672   85500 api_server.go:279] https://192.168.61.91:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1104 12:08:54.617699   85500 api_server.go:103] status: https://192.168.61.91:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1104 12:08:54.771942   85500 api_server.go:253] Checking apiserver healthz at https://192.168.61.91:8443/healthz ...
	I1104 12:08:54.776588   85500 api_server.go:279] https://192.168.61.91:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1104 12:08:54.776615   85500 api_server.go:103] status: https://192.168.61.91:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1104 12:08:52.056678   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:54.057081   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:55.272332   85500 api_server.go:253] Checking apiserver healthz at https://192.168.61.91:8443/healthz ...
	I1104 12:08:55.276594   85500 api_server.go:279] https://192.168.61.91:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1104 12:08:55.276621   85500 api_server.go:103] status: https://192.168.61.91:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1104 12:08:55.771423   85500 api_server.go:253] Checking apiserver healthz at https://192.168.61.91:8443/healthz ...
	I1104 12:08:55.776881   85500 api_server.go:279] https://192.168.61.91:8443/healthz returned 200:
	ok
	I1104 12:08:55.783842   85500 api_server.go:141] control plane version: v1.31.2
	I1104 12:08:55.783869   85500 api_server.go:131] duration metric: took 4.512606898s to wait for apiserver health ...
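	The healthz wait above tolerates a 403 (the probe is anonymous) and several 500 responses while apiserver post-start hooks finish, then succeeds once /healthz returns 200 "ok". A minimal polling sketch with the same shape; unlike minikube it simply skips TLS verification and sends no client certificate:

// Poll the apiserver /healthz endpoint until it returns 200 or a deadline passes.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(1 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.61.91:8443/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy:", string(body))
				return
			}
			// 403/500 while bootstrap hooks are still running; keep retrying.
			fmt.Printf("healthz %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver health")
}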
	I1104 12:08:55.783877   85500 cni.go:84] Creating CNI manager for ""
	I1104 12:08:55.783883   85500 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1104 12:08:55.785665   85500 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1104 12:08:52.351019   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:54.850354   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:51.938323   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:52.438464   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:52.937754   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:53.438442   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:53.938586   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:54.438288   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:54.938444   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:55.438391   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:55.938546   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:56.438433   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:55.787083   85500 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1104 12:08:55.801764   85500 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1104 12:08:55.828371   85500 system_pods.go:43] waiting for kube-system pods to appear ...
	I1104 12:08:55.847602   85500 system_pods.go:59] 8 kube-system pods found
	I1104 12:08:55.847653   85500 system_pods.go:61] "coredns-7c65d6cfc9-vv4kq" [f2518f86-9653-4e98-9193-9d2a76838117] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1104 12:08:55.847666   85500 system_pods.go:61] "etcd-no-preload-908370" [cc23ebc2-c49f-403c-8128-98bb08459592] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1104 12:08:55.847679   85500 system_pods.go:61] "kube-apiserver-no-preload-908370" [37532b3e-f683-4420-a5e4-280744f2bdf9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1104 12:08:55.847695   85500 system_pods.go:61] "kube-controller-manager-no-preload-908370" [81d30255-758e-4661-bec2-c6aa6773923a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1104 12:08:55.847707   85500 system_pods.go:61] "kube-proxy-w9hbz" [9d494697-ff2b-4600-9c11-b704de9be2a3] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1104 12:08:55.847724   85500 system_pods.go:61] "kube-scheduler-no-preload-908370" [9b0ff34e-1795-4f7c-b511-822a02c4af7b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1104 12:08:55.847733   85500 system_pods.go:61] "metrics-server-6867b74b74-2lxlg" [bf328856-ad19-47b3-a40d-282cd4fdec4b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1104 12:08:55.847743   85500 system_pods.go:61] "storage-provisioner" [d11c9416-6236-4c81-9626-d5e040acea8a] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1104 12:08:55.847753   85500 system_pods.go:74] duration metric: took 19.357387ms to wait for pod list to return data ...
	I1104 12:08:55.847762   85500 node_conditions.go:102] verifying NodePressure condition ...
	I1104 12:08:55.856783   85500 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1104 12:08:55.856820   85500 node_conditions.go:123] node cpu capacity is 2
	I1104 12:08:55.856834   85500 node_conditions.go:105] duration metric: took 9.065755ms to run NodePressure ...
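	The system_pods.go step above waits for kube-system pods to appear by listing the namespace through the apiserver. A comparable standalone check with client-go, assuming the profile's kubeconfig path that appears later in this log:

// List kube-system pods and their phases, roughly what the "waiting for
// kube-system pods to appear" step observes.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19906-19898/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := clientset.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s\t%s\n", p.Name, p.Status.Phase)
	}
}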
	I1104 12:08:55.856856   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1104 12:08:56.143012   85500 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1104 12:08:56.148006   85500 kubeadm.go:739] kubelet initialised
	I1104 12:08:56.148026   85500 kubeadm.go:740] duration metric: took 4.987292ms waiting for restarted kubelet to initialise ...
	I1104 12:08:56.148034   85500 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1104 12:08:56.152359   85500 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-vv4kq" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:56.156700   85500 pod_ready.go:98] node "no-preload-908370" hosting pod "coredns-7c65d6cfc9-vv4kq" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-908370" has status "Ready":"False"
	I1104 12:08:56.156725   85500 pod_ready.go:82] duration metric: took 4.341093ms for pod "coredns-7c65d6cfc9-vv4kq" in "kube-system" namespace to be "Ready" ...
	E1104 12:08:56.156734   85500 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-908370" hosting pod "coredns-7c65d6cfc9-vv4kq" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-908370" has status "Ready":"False"
	I1104 12:08:56.156741   85500 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-908370" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:56.161402   85500 pod_ready.go:98] node "no-preload-908370" hosting pod "etcd-no-preload-908370" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-908370" has status "Ready":"False"
	I1104 12:08:56.161431   85500 pod_ready.go:82] duration metric: took 4.681838ms for pod "etcd-no-preload-908370" in "kube-system" namespace to be "Ready" ...
	E1104 12:08:56.161440   85500 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-908370" hosting pod "etcd-no-preload-908370" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-908370" has status "Ready":"False"
	I1104 12:08:56.161447   85500 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-908370" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:56.165738   85500 pod_ready.go:98] node "no-preload-908370" hosting pod "kube-apiserver-no-preload-908370" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-908370" has status "Ready":"False"
	I1104 12:08:56.165756   85500 pod_ready.go:82] duration metric: took 4.301197ms for pod "kube-apiserver-no-preload-908370" in "kube-system" namespace to be "Ready" ...
	E1104 12:08:56.165764   85500 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-908370" hosting pod "kube-apiserver-no-preload-908370" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-908370" has status "Ready":"False"
	I1104 12:08:56.165770   85500 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-908370" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:56.232568   85500 pod_ready.go:98] node "no-preload-908370" hosting pod "kube-controller-manager-no-preload-908370" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-908370" has status "Ready":"False"
	I1104 12:08:56.232598   85500 pod_ready.go:82] duration metric: took 66.818411ms for pod "kube-controller-manager-no-preload-908370" in "kube-system" namespace to be "Ready" ...
	E1104 12:08:56.232610   85500 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-908370" hosting pod "kube-controller-manager-no-preload-908370" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-908370" has status "Ready":"False"
	I1104 12:08:56.232620   85500 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-w9hbz" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:56.633774   85500 pod_ready.go:98] node "no-preload-908370" hosting pod "kube-proxy-w9hbz" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-908370" has status "Ready":"False"
	I1104 12:08:56.633804   85500 pod_ready.go:82] duration metric: took 401.173552ms for pod "kube-proxy-w9hbz" in "kube-system" namespace to be "Ready" ...
	E1104 12:08:56.633815   85500 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-908370" hosting pod "kube-proxy-w9hbz" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-908370" has status "Ready":"False"
	I1104 12:08:56.633824   85500 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-908370" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:57.032392   85500 pod_ready.go:98] node "no-preload-908370" hosting pod "kube-scheduler-no-preload-908370" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-908370" has status "Ready":"False"
	I1104 12:08:57.032419   85500 pod_ready.go:82] duration metric: took 398.58729ms for pod "kube-scheduler-no-preload-908370" in "kube-system" namespace to be "Ready" ...
	E1104 12:08:57.032431   85500 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-908370" hosting pod "kube-scheduler-no-preload-908370" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-908370" has status "Ready":"False"
	I1104 12:08:57.032439   85500 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:57.431940   85500 pod_ready.go:98] node "no-preload-908370" hosting pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-908370" has status "Ready":"False"
	I1104 12:08:57.431976   85500 pod_ready.go:82] duration metric: took 399.525162ms for pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace to be "Ready" ...
	E1104 12:08:57.431987   85500 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-908370" hosting pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-908370" has status "Ready":"False"
	I1104 12:08:57.431997   85500 pod_ready.go:39] duration metric: took 1.283953089s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
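	Each pod_ready.go wait above is skipped because the node itself still reports "Ready":"False" right after the restart. A short client-go sketch (hypothetical, for illustration) that inspects that node condition directly:

// Fetch the node object and print its Ready condition, which is what gates
// the per-pod waits logged above.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19906-19898/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := clientset.CoreV1().Nodes().Get(context.TODO(), "no-preload-908370", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			// Matches the node "Ready":"False" messages in the log above.
			fmt.Printf("node %s Ready=%s (%s)\n", node.Name, c.Status, c.Reason)
		}
	}
}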
	I1104 12:08:57.432014   85500 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1104 12:08:57.444821   85500 ops.go:34] apiserver oom_adj: -16
	I1104 12:08:57.444845   85500 kubeadm.go:597] duration metric: took 9.377227288s to restartPrimaryControlPlane
	I1104 12:08:57.444857   85500 kubeadm.go:394] duration metric: took 9.423506415s to StartCluster
	I1104 12:08:57.444879   85500 settings.go:142] acquiring lock: {Name:mk5550833c1f1a4ab4fbb2bf42ff8bd7f6341220 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 12:08:57.444965   85500 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19906-19898/kubeconfig
	I1104 12:08:57.446715   85500 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19906-19898/kubeconfig: {Name:mk164657c178e6abe086acb5c1cadb969cb2b0cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 12:08:57.446981   85500 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.91 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1104 12:08:57.447059   85500 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1104 12:08:57.447172   85500 addons.go:69] Setting storage-provisioner=true in profile "no-preload-908370"
	I1104 12:08:57.447193   85500 addons.go:234] Setting addon storage-provisioner=true in "no-preload-908370"
	W1104 12:08:57.447202   85500 addons.go:243] addon storage-provisioner should already be in state true
	I1104 12:08:57.447207   85500 config.go:182] Loaded profile config "no-preload-908370": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1104 12:08:57.447237   85500 host.go:66] Checking if "no-preload-908370" exists ...
	I1104 12:08:57.447234   85500 addons.go:69] Setting default-storageclass=true in profile "no-preload-908370"
	I1104 12:08:57.447321   85500 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-908370"
	I1104 12:08:57.447222   85500 addons.go:69] Setting metrics-server=true in profile "no-preload-908370"
	I1104 12:08:57.447418   85500 addons.go:234] Setting addon metrics-server=true in "no-preload-908370"
	W1104 12:08:57.447431   85500 addons.go:243] addon metrics-server should already be in state true
	I1104 12:08:57.447461   85500 host.go:66] Checking if "no-preload-908370" exists ...
	I1104 12:08:57.447708   85500 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:08:57.447792   85500 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:08:57.447813   85500 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:08:57.447748   85500 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:08:57.447896   85500 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:08:57.447853   85500 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:08:57.449013   85500 out.go:177] * Verifying Kubernetes components...
	I1104 12:08:57.450774   85500 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1104 12:08:57.469657   85500 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42565
	I1104 12:08:57.470180   85500 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:08:57.470801   85500 main.go:141] libmachine: Using API Version  1
	I1104 12:08:57.470830   85500 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:08:57.471277   85500 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:08:57.471873   85500 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:08:57.471924   85500 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:08:57.485026   85500 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33323
	I1104 12:08:57.485330   85500 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43999
	I1104 12:08:57.485604   85500 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:08:57.485772   85500 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:08:57.486328   85500 main.go:141] libmachine: Using API Version  1
	I1104 12:08:57.486363   85500 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:08:57.486442   85500 main.go:141] libmachine: Using API Version  1
	I1104 12:08:57.486471   85500 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:08:57.486735   85500 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:08:57.486847   85500 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:08:57.487059   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetState
	I1104 12:08:57.487337   85500 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:08:57.487401   85500 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:08:57.490138   85500 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45761
	I1104 12:08:57.490611   85500 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:08:57.490705   85500 addons.go:234] Setting addon default-storageclass=true in "no-preload-908370"
	W1104 12:08:57.490724   85500 addons.go:243] addon default-storageclass should already be in state true
	I1104 12:08:57.490748   85500 host.go:66] Checking if "no-preload-908370" exists ...
	I1104 12:08:57.491098   85500 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:08:57.491140   85500 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:08:57.491153   85500 main.go:141] libmachine: Using API Version  1
	I1104 12:08:57.491177   85500 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:08:57.491549   85500 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:08:57.491718   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetState
	I1104 12:08:57.493600   85500 main.go:141] libmachine: (no-preload-908370) Calling .DriverName
	I1104 12:08:57.495883   85500 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1104 12:08:57.497200   85500 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1104 12:08:57.497217   85500 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1104 12:08:57.497245   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHHostname
	I1104 12:08:57.500402   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:57.500934   85500 main.go:141] libmachine: (no-preload-908370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:66:d5", ip: ""} in network mk-no-preload-908370: {Iface:virbr3 ExpiryTime:2024-11-04 13:08:23 +0000 UTC Type:0 Mac:52:54:00:f8:66:d5 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:no-preload-908370 Clientid:01:52:54:00:f8:66:d5}
	I1104 12:08:57.500960   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined IP address 192.168.61.91 and MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:57.501276   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHPort
	I1104 12:08:57.501483   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHKeyPath
	I1104 12:08:57.501626   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHUsername
	I1104 12:08:57.501775   85500 sshutil.go:53] new ssh client: &{IP:192.168.61.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/no-preload-908370/id_rsa Username:docker}
	I1104 12:08:57.508615   85500 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37243
	I1104 12:08:57.509102   85500 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:08:57.509582   85500 main.go:141] libmachine: Using API Version  1
	I1104 12:08:57.509606   85500 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:08:57.509948   85500 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:08:57.510115   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetState
	I1104 12:08:57.510809   85500 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40519
	I1104 12:08:57.511134   85500 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:08:57.511818   85500 main.go:141] libmachine: Using API Version  1
	I1104 12:08:57.511836   85500 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:08:57.511868   85500 main.go:141] libmachine: (no-preload-908370) Calling .DriverName
	I1104 12:08:57.512486   85500 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:08:57.513456   85500 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:08:57.513500   85500 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:08:57.513921   85500 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1104 12:08:57.515417   85500 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1104 12:08:57.515434   85500 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1104 12:08:57.515461   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHHostname
	I1104 12:08:57.518867   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:57.519216   85500 main.go:141] libmachine: (no-preload-908370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:66:d5", ip: ""} in network mk-no-preload-908370: {Iface:virbr3 ExpiryTime:2024-11-04 13:08:23 +0000 UTC Type:0 Mac:52:54:00:f8:66:d5 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:no-preload-908370 Clientid:01:52:54:00:f8:66:d5}
	I1104 12:08:57.519241   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined IP address 192.168.61.91 and MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:57.519334   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHPort
	I1104 12:08:57.519523   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHKeyPath
	I1104 12:08:57.519662   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHUsername
	I1104 12:08:57.520124   85500 sshutil.go:53] new ssh client: &{IP:192.168.61.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/no-preload-908370/id_rsa Username:docker}
	I1104 12:08:57.529448   85500 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42427
	I1104 12:08:57.529979   85500 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:08:57.530374   85500 main.go:141] libmachine: Using API Version  1
	I1104 12:08:57.530389   85500 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:08:57.530756   85500 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:08:57.530889   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetState
	I1104 12:08:57.532430   85500 main.go:141] libmachine: (no-preload-908370) Calling .DriverName
	I1104 12:08:57.532832   85500 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1104 12:08:57.532843   85500 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1104 12:08:57.532857   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHHostname
	I1104 12:08:57.535429   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:57.535783   85500 main.go:141] libmachine: (no-preload-908370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:66:d5", ip: ""} in network mk-no-preload-908370: {Iface:virbr3 ExpiryTime:2024-11-04 13:08:23 +0000 UTC Type:0 Mac:52:54:00:f8:66:d5 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:no-preload-908370 Clientid:01:52:54:00:f8:66:d5}
	I1104 12:08:57.535809   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined IP address 192.168.61.91 and MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:57.535953   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHPort
	I1104 12:08:57.536148   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHKeyPath
	I1104 12:08:57.536245   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHUsername
	I1104 12:08:57.536388   85500 sshutil.go:53] new ssh client: &{IP:192.168.61.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/no-preload-908370/id_rsa Username:docker}
	I1104 12:08:57.635571   85500 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1104 12:08:57.654984   85500 node_ready.go:35] waiting up to 6m0s for node "no-preload-908370" to be "Ready" ...
	I1104 12:08:57.722564   85500 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1104 12:08:57.768850   85500 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1104 12:08:57.791069   85500 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1104 12:08:57.791090   85500 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1104 12:08:57.875966   85500 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1104 12:08:57.875997   85500 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1104 12:08:57.929834   85500 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1104 12:08:57.929867   85500 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1104 12:08:58.017927   85500 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1104 12:08:58.732204   85500 main.go:141] libmachine: Making call to close driver server
	I1104 12:08:58.732235   85500 main.go:141] libmachine: (no-preload-908370) Calling .Close
	I1104 12:08:58.732586   85500 main.go:141] libmachine: Successfully made call to close driver server
	I1104 12:08:58.732614   85500 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 12:08:58.732624   85500 main.go:141] libmachine: Making call to close driver server
	I1104 12:08:58.732635   85500 main.go:141] libmachine: (no-preload-908370) Calling .Close
	I1104 12:08:58.732640   85500 main.go:141] libmachine: (no-preload-908370) DBG | Closing plugin on server side
	I1104 12:08:58.733045   85500 main.go:141] libmachine: Successfully made call to close driver server
	I1104 12:08:58.733108   85500 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 12:08:58.733084   85500 main.go:141] libmachine: (no-preload-908370) DBG | Closing plugin on server side
	I1104 12:08:58.736737   85500 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.014142064s)
	I1104 12:08:58.736783   85500 main.go:141] libmachine: Making call to close driver server
	I1104 12:08:58.736793   85500 main.go:141] libmachine: (no-preload-908370) Calling .Close
	I1104 12:08:58.737035   85500 main.go:141] libmachine: Successfully made call to close driver server
	I1104 12:08:58.737077   85500 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 12:08:58.737090   85500 main.go:141] libmachine: Making call to close driver server
	I1104 12:08:58.737100   85500 main.go:141] libmachine: (no-preload-908370) Calling .Close
	I1104 12:08:58.737737   85500 main.go:141] libmachine: (no-preload-908370) DBG | Closing plugin on server side
	I1104 12:08:58.737756   85500 main.go:141] libmachine: Successfully made call to close driver server
	I1104 12:08:58.737770   85500 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 12:08:58.740716   85500 main.go:141] libmachine: Making call to close driver server
	I1104 12:08:58.740735   85500 main.go:141] libmachine: (no-preload-908370) Calling .Close
	I1104 12:08:58.740963   85500 main.go:141] libmachine: (no-preload-908370) DBG | Closing plugin on server side
	I1104 12:08:58.740974   85500 main.go:141] libmachine: Successfully made call to close driver server
	I1104 12:08:58.740985   85500 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 12:08:58.987200   85500 main.go:141] libmachine: Making call to close driver server
	I1104 12:08:58.987227   85500 main.go:141] libmachine: (no-preload-908370) Calling .Close
	I1104 12:08:58.987657   85500 main.go:141] libmachine: Successfully made call to close driver server
	I1104 12:08:58.987667   85500 main.go:141] libmachine: (no-preload-908370) DBG | Closing plugin on server side
	I1104 12:08:58.987676   85500 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 12:08:58.987685   85500 main.go:141] libmachine: Making call to close driver server
	I1104 12:08:58.987708   85500 main.go:141] libmachine: (no-preload-908370) Calling .Close
	I1104 12:08:58.987991   85500 main.go:141] libmachine: Successfully made call to close driver server
	I1104 12:08:58.988006   85500 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 12:08:58.988018   85500 addons.go:475] Verifying addon metrics-server=true in "no-preload-908370"
	I1104 12:08:58.989756   85500 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1104 12:08:58.991022   85500 addons.go:510] duration metric: took 1.54397104s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1104 12:08:59.659284   85500 node_ready.go:53] node "no-preload-908370" has status "Ready":"False"
	I1104 12:08:56.057497   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:58.057767   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:56.850793   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:58.852058   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:56.938312   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:57.437920   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:57.937779   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:58.438511   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:58.938464   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:59.438423   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:59.938450   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:00.438108   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:00.938053   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:01.438356   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:02.158318   85500 node_ready.go:53] node "no-preload-908370" has status "Ready":"False"
	I1104 12:09:04.658719   85500 node_ready.go:53] node "no-preload-908370" has status "Ready":"False"
	I1104 12:09:05.159526   85500 node_ready.go:49] node "no-preload-908370" has status "Ready":"True"
	I1104 12:09:05.159553   85500 node_ready.go:38] duration metric: took 7.504528904s for node "no-preload-908370" to be "Ready" ...
	I1104 12:09:05.159564   85500 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1104 12:09:05.164838   85500 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-vv4kq" in "kube-system" namespace to be "Ready" ...
	I1104 12:09:05.173888   85500 pod_ready.go:93] pod "coredns-7c65d6cfc9-vv4kq" in "kube-system" namespace has status "Ready":"True"
	I1104 12:09:05.173909   85500 pod_ready.go:82] duration metric: took 9.046581ms for pod "coredns-7c65d6cfc9-vv4kq" in "kube-system" namespace to be "Ready" ...
	I1104 12:09:05.173919   85500 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-908370" in "kube-system" namespace to be "Ready" ...
	I1104 12:09:00.556225   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:02.556893   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:05.055827   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:01.351472   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:03.851990   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:01.938447   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:02.438441   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:02.938694   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:03.438467   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:03.938445   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:04.438137   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:04.937941   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:05.438441   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:05.937760   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:06.438704   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:05.680754   85500 pod_ready.go:93] pod "etcd-no-preload-908370" in "kube-system" namespace has status "Ready":"True"
	I1104 12:09:05.680778   85500 pod_ready.go:82] duration metric: took 506.849735ms for pod "etcd-no-preload-908370" in "kube-system" namespace to be "Ready" ...
	I1104 12:09:05.680804   85500 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-908370" in "kube-system" namespace to be "Ready" ...
	I1104 12:09:07.687108   85500 pod_ready.go:103] pod "kube-apiserver-no-preload-908370" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:09.687377   85500 pod_ready.go:103] pod "kube-apiserver-no-preload-908370" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:07.556024   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:10.055613   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:06.351230   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:08.351640   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:10.850364   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:06.937956   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:07.438323   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:07.938465   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:08.438437   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:08.937675   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:09.437868   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:09.938053   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:10.438467   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:10.938703   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:11.438436   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:10.687315   85500 pod_ready.go:93] pod "kube-apiserver-no-preload-908370" in "kube-system" namespace has status "Ready":"True"
	I1104 12:09:10.687338   85500 pod_ready.go:82] duration metric: took 5.006527478s for pod "kube-apiserver-no-preload-908370" in "kube-system" namespace to be "Ready" ...
	I1104 12:09:10.687348   85500 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-908370" in "kube-system" namespace to be "Ready" ...
	I1104 12:09:10.692554   85500 pod_ready.go:93] pod "kube-controller-manager-no-preload-908370" in "kube-system" namespace has status "Ready":"True"
	I1104 12:09:10.692583   85500 pod_ready.go:82] duration metric: took 5.227048ms for pod "kube-controller-manager-no-preload-908370" in "kube-system" namespace to be "Ready" ...
	I1104 12:09:10.692597   85500 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-w9hbz" in "kube-system" namespace to be "Ready" ...
	I1104 12:09:10.697109   85500 pod_ready.go:93] pod "kube-proxy-w9hbz" in "kube-system" namespace has status "Ready":"True"
	I1104 12:09:10.697132   85500 pod_ready.go:82] duration metric: took 4.525205ms for pod "kube-proxy-w9hbz" in "kube-system" namespace to be "Ready" ...
	I1104 12:09:10.697153   85500 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-908370" in "kube-system" namespace to be "Ready" ...
	I1104 12:09:10.701450   85500 pod_ready.go:93] pod "kube-scheduler-no-preload-908370" in "kube-system" namespace has status "Ready":"True"
	I1104 12:09:10.701472   85500 pod_ready.go:82] duration metric: took 4.310973ms for pod "kube-scheduler-no-preload-908370" in "kube-system" namespace to be "Ready" ...
	I1104 12:09:10.701483   85500 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace to be "Ready" ...
	I1104 12:09:12.708631   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:14.708772   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:12.056161   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:14.556380   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:12.850721   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:14.851608   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:11.938465   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:12.437963   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:12.938515   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:13.437754   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:13.937856   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:14.438729   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:14.938439   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:15.438421   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:15.938044   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:16.438456   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:17.209025   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:19.707595   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:17.056226   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:19.555918   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:17.350266   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:19.350329   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:16.937807   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:17.438266   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:17.938153   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:18.437829   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:18.938469   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:19.438336   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:19.938284   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:20.438073   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:20.937894   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:21.438135   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:09:21.438238   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:09:21.471463   86402 cri.go:89] found id: ""
	I1104 12:09:21.471495   86402 logs.go:282] 0 containers: []
	W1104 12:09:21.471507   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:09:21.471515   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:09:21.471568   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:09:21.509336   86402 cri.go:89] found id: ""
	I1104 12:09:21.509363   86402 logs.go:282] 0 containers: []
	W1104 12:09:21.509373   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:09:21.509381   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:09:21.509441   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:09:21.545963   86402 cri.go:89] found id: ""
	I1104 12:09:21.545987   86402 logs.go:282] 0 containers: []
	W1104 12:09:21.545995   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:09:21.546000   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:09:21.546059   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:09:21.580707   86402 cri.go:89] found id: ""
	I1104 12:09:21.580737   86402 logs.go:282] 0 containers: []
	W1104 12:09:21.580748   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:09:21.580755   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:09:21.580820   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:09:21.613763   86402 cri.go:89] found id: ""
	I1104 12:09:21.613791   86402 logs.go:282] 0 containers: []
	W1104 12:09:21.613801   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:09:21.613809   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:09:21.613872   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:09:21.646559   86402 cri.go:89] found id: ""
	I1104 12:09:21.646583   86402 logs.go:282] 0 containers: []
	W1104 12:09:21.646591   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:09:21.646597   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:09:21.646643   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:09:21.681439   86402 cri.go:89] found id: ""
	I1104 12:09:21.681467   86402 logs.go:282] 0 containers: []
	W1104 12:09:21.681479   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:09:21.681486   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:09:21.681554   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:09:21.708045   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:24.207686   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:22.055637   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:24.056458   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:21.350636   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:23.850852   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:21.713875   86402 cri.go:89] found id: ""
	I1104 12:09:21.713899   86402 logs.go:282] 0 containers: []
	W1104 12:09:21.713907   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:09:21.713915   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:09:21.713925   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:09:21.763882   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:09:21.763918   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:09:21.778590   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:09:21.778615   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:09:21.892208   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:09:21.892235   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:09:21.892250   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:09:21.965946   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:09:21.965984   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:09:24.502992   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:24.514899   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:09:24.514960   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:09:24.554466   86402 cri.go:89] found id: ""
	I1104 12:09:24.554491   86402 logs.go:282] 0 containers: []
	W1104 12:09:24.554501   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:09:24.554510   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:09:24.554567   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:09:24.591532   86402 cri.go:89] found id: ""
	I1104 12:09:24.591560   86402 logs.go:282] 0 containers: []
	W1104 12:09:24.591572   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:09:24.591580   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:09:24.591638   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:09:24.625436   86402 cri.go:89] found id: ""
	I1104 12:09:24.625467   86402 logs.go:282] 0 containers: []
	W1104 12:09:24.625478   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:09:24.625485   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:09:24.625544   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:09:24.658317   86402 cri.go:89] found id: ""
	I1104 12:09:24.658346   86402 logs.go:282] 0 containers: []
	W1104 12:09:24.658357   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:09:24.658364   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:09:24.658426   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:09:24.692811   86402 cri.go:89] found id: ""
	I1104 12:09:24.692839   86402 logs.go:282] 0 containers: []
	W1104 12:09:24.692850   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:09:24.692857   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:09:24.692917   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:09:24.729677   86402 cri.go:89] found id: ""
	I1104 12:09:24.729708   86402 logs.go:282] 0 containers: []
	W1104 12:09:24.729719   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:09:24.729726   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:09:24.729773   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:09:24.768575   86402 cri.go:89] found id: ""
	I1104 12:09:24.768598   86402 logs.go:282] 0 containers: []
	W1104 12:09:24.768608   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:09:24.768615   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:09:24.768681   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:09:24.802344   86402 cri.go:89] found id: ""
	I1104 12:09:24.802368   86402 logs.go:282] 0 containers: []
	W1104 12:09:24.802375   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:09:24.802383   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:09:24.802394   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:09:24.855882   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:09:24.855915   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:09:24.869199   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:09:24.869243   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:09:24.940720   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:09:24.940744   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:09:24.940758   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:09:25.016139   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:09:25.016177   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:09:26.208422   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:28.208568   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:26.557513   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:29.055769   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:26.350171   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:28.353001   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:30.851153   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:27.553297   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:27.566857   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:09:27.566913   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:09:27.599606   86402 cri.go:89] found id: ""
	I1104 12:09:27.599641   86402 logs.go:282] 0 containers: []
	W1104 12:09:27.599653   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:09:27.599661   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:09:27.599721   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:09:27.633818   86402 cri.go:89] found id: ""
	I1104 12:09:27.633841   86402 logs.go:282] 0 containers: []
	W1104 12:09:27.633849   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:09:27.633854   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:09:27.633907   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:09:27.668088   86402 cri.go:89] found id: ""
	I1104 12:09:27.668120   86402 logs.go:282] 0 containers: []
	W1104 12:09:27.668129   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:09:27.668135   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:09:27.668185   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:09:27.699401   86402 cri.go:89] found id: ""
	I1104 12:09:27.699433   86402 logs.go:282] 0 containers: []
	W1104 12:09:27.699445   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:09:27.699453   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:09:27.699511   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:09:27.731422   86402 cri.go:89] found id: ""
	I1104 12:09:27.731448   86402 logs.go:282] 0 containers: []
	W1104 12:09:27.731459   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:09:27.731466   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:09:27.731528   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:09:27.762808   86402 cri.go:89] found id: ""
	I1104 12:09:27.762839   86402 logs.go:282] 0 containers: []
	W1104 12:09:27.762850   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:09:27.762857   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:09:27.762917   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:09:27.794729   86402 cri.go:89] found id: ""
	I1104 12:09:27.794757   86402 logs.go:282] 0 containers: []
	W1104 12:09:27.794765   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:09:27.794771   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:09:27.794826   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:09:27.825694   86402 cri.go:89] found id: ""
	I1104 12:09:27.825716   86402 logs.go:282] 0 containers: []
	W1104 12:09:27.825724   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:09:27.825731   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:09:27.825742   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:09:27.862111   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:09:27.862140   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:09:27.911169   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:09:27.911204   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:09:27.924207   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:09:27.924232   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:09:27.995123   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:09:27.995153   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:09:27.995167   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:09:30.580831   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:30.594901   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:09:30.594959   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:09:30.630936   86402 cri.go:89] found id: ""
	I1104 12:09:30.630961   86402 logs.go:282] 0 containers: []
	W1104 12:09:30.630971   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:09:30.630979   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:09:30.631034   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:09:30.669288   86402 cri.go:89] found id: ""
	I1104 12:09:30.669311   86402 logs.go:282] 0 containers: []
	W1104 12:09:30.669320   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:09:30.669328   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:09:30.669388   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:09:30.706288   86402 cri.go:89] found id: ""
	I1104 12:09:30.706312   86402 logs.go:282] 0 containers: []
	W1104 12:09:30.706319   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:09:30.706325   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:09:30.706384   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:09:30.739027   86402 cri.go:89] found id: ""
	I1104 12:09:30.739057   86402 logs.go:282] 0 containers: []
	W1104 12:09:30.739069   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:09:30.739078   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:09:30.739137   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:09:30.772247   86402 cri.go:89] found id: ""
	I1104 12:09:30.772272   86402 logs.go:282] 0 containers: []
	W1104 12:09:30.772280   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:09:30.772286   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:09:30.772338   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:09:30.810327   86402 cri.go:89] found id: ""
	I1104 12:09:30.810360   86402 logs.go:282] 0 containers: []
	W1104 12:09:30.810370   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:09:30.810375   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:09:30.810426   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:09:30.842241   86402 cri.go:89] found id: ""
	I1104 12:09:30.842271   86402 logs.go:282] 0 containers: []
	W1104 12:09:30.842279   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:09:30.842285   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:09:30.842332   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:09:30.877003   86402 cri.go:89] found id: ""
	I1104 12:09:30.877032   86402 logs.go:282] 0 containers: []
	W1104 12:09:30.877043   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:09:30.877052   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:09:30.877077   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:09:30.925783   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:09:30.925816   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:09:30.939651   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:09:30.939680   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:09:31.029176   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:09:31.029210   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:09:31.029244   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:09:31.116311   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:09:31.116348   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:09:30.708451   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:32.708661   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:31.056627   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:33.056743   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:35.057986   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:33.350420   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:35.351206   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:33.653267   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:33.665813   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:09:33.665878   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:09:33.701812   86402 cri.go:89] found id: ""
	I1104 12:09:33.701839   86402 logs.go:282] 0 containers: []
	W1104 12:09:33.701852   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:09:33.701860   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:09:33.701922   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:09:33.738816   86402 cri.go:89] found id: ""
	I1104 12:09:33.738850   86402 logs.go:282] 0 containers: []
	W1104 12:09:33.738861   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:09:33.738868   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:09:33.738928   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:09:33.773936   86402 cri.go:89] found id: ""
	I1104 12:09:33.773960   86402 logs.go:282] 0 containers: []
	W1104 12:09:33.773968   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:09:33.773976   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:09:33.774031   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:09:33.808049   86402 cri.go:89] found id: ""
	I1104 12:09:33.808079   86402 logs.go:282] 0 containers: []
	W1104 12:09:33.808091   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:09:33.808098   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:09:33.808154   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:09:33.844276   86402 cri.go:89] found id: ""
	I1104 12:09:33.844303   86402 logs.go:282] 0 containers: []
	W1104 12:09:33.844314   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:09:33.844322   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:09:33.844443   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:09:33.879736   86402 cri.go:89] found id: ""
	I1104 12:09:33.879772   86402 logs.go:282] 0 containers: []
	W1104 12:09:33.879782   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:09:33.879788   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:09:33.879843   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:09:33.913717   86402 cri.go:89] found id: ""
	I1104 12:09:33.913750   86402 logs.go:282] 0 containers: []
	W1104 12:09:33.913761   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:09:33.913769   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:09:33.913832   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:09:33.949632   86402 cri.go:89] found id: ""
	I1104 12:09:33.949658   86402 logs.go:282] 0 containers: []
	W1104 12:09:33.949667   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:09:33.949677   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:09:33.949691   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:09:34.019770   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:09:34.019790   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:09:34.019806   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:09:34.101493   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:09:34.101524   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:09:34.146723   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:09:34.146751   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:09:34.196295   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:09:34.196338   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:09:35.207223   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:37.207576   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:39.208091   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:37.556228   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:39.556548   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:37.850907   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:39.852870   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:36.709951   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:36.724723   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:09:36.724782   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:09:36.777406   86402 cri.go:89] found id: ""
	I1104 12:09:36.777440   86402 logs.go:282] 0 containers: []
	W1104 12:09:36.777451   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:09:36.777459   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:09:36.777520   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:09:36.834486   86402 cri.go:89] found id: ""
	I1104 12:09:36.834516   86402 logs.go:282] 0 containers: []
	W1104 12:09:36.834527   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:09:36.834535   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:09:36.834641   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:09:36.868828   86402 cri.go:89] found id: ""
	I1104 12:09:36.868853   86402 logs.go:282] 0 containers: []
	W1104 12:09:36.868861   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:09:36.868867   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:09:36.868912   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:09:36.900942   86402 cri.go:89] found id: ""
	I1104 12:09:36.900972   86402 logs.go:282] 0 containers: []
	W1104 12:09:36.900980   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:09:36.900986   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:09:36.901043   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:09:36.933215   86402 cri.go:89] found id: ""
	I1104 12:09:36.933265   86402 logs.go:282] 0 containers: []
	W1104 12:09:36.933276   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:09:36.933282   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:09:36.933330   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:09:36.966753   86402 cri.go:89] found id: ""
	I1104 12:09:36.966776   86402 logs.go:282] 0 containers: []
	W1104 12:09:36.966784   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:09:36.966789   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:09:36.966850   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:09:37.000050   86402 cri.go:89] found id: ""
	I1104 12:09:37.000074   86402 logs.go:282] 0 containers: []
	W1104 12:09:37.000082   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:09:37.000087   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:09:37.000144   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:09:37.033252   86402 cri.go:89] found id: ""
	I1104 12:09:37.033283   86402 logs.go:282] 0 containers: []
	W1104 12:09:37.033295   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:09:37.033305   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:09:37.033328   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:09:37.085351   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:09:37.085383   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:09:37.098556   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:09:37.098582   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:09:37.167489   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:09:37.167512   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:09:37.167525   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:09:37.243292   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:09:37.243325   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:09:39.781468   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:39.795630   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:09:39.795756   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:09:39.833745   86402 cri.go:89] found id: ""
	I1104 12:09:39.833779   86402 logs.go:282] 0 containers: []
	W1104 12:09:39.833791   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:09:39.833798   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:09:39.833862   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:09:39.870075   86402 cri.go:89] found id: ""
	I1104 12:09:39.870096   86402 logs.go:282] 0 containers: []
	W1104 12:09:39.870106   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:09:39.870119   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:09:39.870173   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:09:39.905807   86402 cri.go:89] found id: ""
	I1104 12:09:39.905836   86402 logs.go:282] 0 containers: []
	W1104 12:09:39.905846   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:09:39.905854   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:09:39.905916   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:09:39.941890   86402 cri.go:89] found id: ""
	I1104 12:09:39.941914   86402 logs.go:282] 0 containers: []
	W1104 12:09:39.941922   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:09:39.941932   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:09:39.941978   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:09:39.979123   86402 cri.go:89] found id: ""
	I1104 12:09:39.979150   86402 logs.go:282] 0 containers: []
	W1104 12:09:39.979159   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:09:39.979165   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:09:39.979220   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:09:40.014748   86402 cri.go:89] found id: ""
	I1104 12:09:40.014777   86402 logs.go:282] 0 containers: []
	W1104 12:09:40.014785   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:09:40.014791   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:09:40.014882   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:09:40.049977   86402 cri.go:89] found id: ""
	I1104 12:09:40.050004   86402 logs.go:282] 0 containers: []
	W1104 12:09:40.050014   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:09:40.050021   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:09:40.050100   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:09:40.085630   86402 cri.go:89] found id: ""
	I1104 12:09:40.085663   86402 logs.go:282] 0 containers: []
	W1104 12:09:40.085674   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:09:40.085685   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:09:40.085701   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:09:40.166611   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:09:40.166650   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:09:40.203117   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:09:40.203155   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:09:40.256233   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:09:40.256267   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:09:40.270009   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:09:40.270042   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:09:40.338672   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
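Each cycle like the one above is the log collector enumerating CRI containers for every expected component (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, kubernetes-dashboard). The 'found id: ""' and "0 containers" results mean CRI-O never created any of them, which is consistent with the failing "describe nodes" calls. The same enumeration can be repeated by hand with the commands the collector itself runs, for example:

  sudo crictl ps -a --quiet --name=kube-apiserver   # prints matching container IDs; empty here
  sudo crictl ps -a --quiet --name=etcd             # likewise empty
  sudo crictl ps -a                                 # full listing of containers in any state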
	I1104 12:09:41.707618   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:43.708915   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:42.055555   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:44.060949   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:42.351562   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:44.851599   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
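The interleaved pod_ready lines belong to three other minikube processes running in parallel (PIDs 85500, 86301 and 85759), each polling its own cluster's metrics-server pod for the Ready condition, while PID 86402 cannot reach its apiserver at all. A roughly equivalent manual check for one of those pods would be the following sketch (pod name taken from the log; the context name is a placeholder):

  kubectl --context <profile> -n kube-system get pod metrics-server-6867b74b74-2lxlg \
    -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'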
	I1104 12:09:42.839402   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:42.852881   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:09:42.852947   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:09:42.884587   86402 cri.go:89] found id: ""
	I1104 12:09:42.884614   86402 logs.go:282] 0 containers: []
	W1104 12:09:42.884624   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:09:42.884631   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:09:42.884690   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:09:42.915286   86402 cri.go:89] found id: ""
	I1104 12:09:42.915316   86402 logs.go:282] 0 containers: []
	W1104 12:09:42.915327   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:09:42.915337   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:09:42.915399   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:09:42.945827   86402 cri.go:89] found id: ""
	I1104 12:09:42.945857   86402 logs.go:282] 0 containers: []
	W1104 12:09:42.945868   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:09:42.945875   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:09:42.945934   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:09:42.982662   86402 cri.go:89] found id: ""
	I1104 12:09:42.982693   86402 logs.go:282] 0 containers: []
	W1104 12:09:42.982703   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:09:42.982712   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:09:42.982788   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:09:43.015337   86402 cri.go:89] found id: ""
	I1104 12:09:43.015371   86402 logs.go:282] 0 containers: []
	W1104 12:09:43.015382   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:09:43.015390   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:09:43.015453   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:09:43.048235   86402 cri.go:89] found id: ""
	I1104 12:09:43.048262   86402 logs.go:282] 0 containers: []
	W1104 12:09:43.048270   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:09:43.048276   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:09:43.048351   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:09:43.080636   86402 cri.go:89] found id: ""
	I1104 12:09:43.080668   86402 logs.go:282] 0 containers: []
	W1104 12:09:43.080679   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:09:43.080687   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:09:43.080746   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:09:43.113986   86402 cri.go:89] found id: ""
	I1104 12:09:43.114011   86402 logs.go:282] 0 containers: []
	W1104 12:09:43.114019   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:09:43.114027   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:09:43.114038   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:09:43.165356   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:09:43.165390   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:09:43.179167   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:09:43.179200   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:09:43.250054   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
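Besides the container enumeration, every cycle also gathers the CRI-O journal, the kubelet journal, recent kernel messages and a container status listing. When the control plane never comes up, the kubelet journal is usually the most informative of these; the collector's commands can be rerun directly on the node:

  sudo journalctl -u crio -n 400      # CRI-O runtime log, last 400 lines
  sudo journalctl -u kubelet -n 400   # kubelet log, typically where static-pod start failures appear
  sudo dmesg --level warn,err,crit,alert,emerg | tail -n 400   # kernel warnings and errors only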
	I1104 12:09:43.250083   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:09:43.250098   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:09:43.328970   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:09:43.329002   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:09:45.869879   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:45.883262   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:09:45.883359   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:09:45.921978   86402 cri.go:89] found id: ""
	I1104 12:09:45.922003   86402 logs.go:282] 0 containers: []
	W1104 12:09:45.922011   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:09:45.922016   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:09:45.922076   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:09:45.954668   86402 cri.go:89] found id: ""
	I1104 12:09:45.954697   86402 logs.go:282] 0 containers: []
	W1104 12:09:45.954710   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:09:45.954717   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:09:45.954787   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:09:45.987793   86402 cri.go:89] found id: ""
	I1104 12:09:45.987826   86402 logs.go:282] 0 containers: []
	W1104 12:09:45.987837   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:09:45.987845   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:09:45.987906   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:09:46.028517   86402 cri.go:89] found id: ""
	I1104 12:09:46.028550   86402 logs.go:282] 0 containers: []
	W1104 12:09:46.028558   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:09:46.028563   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:09:46.028621   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:09:46.063832   86402 cri.go:89] found id: ""
	I1104 12:09:46.063859   86402 logs.go:282] 0 containers: []
	W1104 12:09:46.063870   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:09:46.063878   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:09:46.063942   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:09:46.099981   86402 cri.go:89] found id: ""
	I1104 12:09:46.100011   86402 logs.go:282] 0 containers: []
	W1104 12:09:46.100027   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:09:46.100036   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:09:46.100169   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:09:46.133060   86402 cri.go:89] found id: ""
	I1104 12:09:46.133083   86402 logs.go:282] 0 containers: []
	W1104 12:09:46.133092   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:09:46.133099   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:09:46.133165   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:09:46.170559   86402 cri.go:89] found id: ""
	I1104 12:09:46.170583   86402 logs.go:282] 0 containers: []
	W1104 12:09:46.170591   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:09:46.170599   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:09:46.170610   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:09:46.253202   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:09:46.253253   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:09:46.288468   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:09:46.288498   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:09:46.339322   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:09:46.339354   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:09:46.353020   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:09:46.353049   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:09:46.420328   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:09:46.208695   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:48.708268   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:46.556598   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:49.057461   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:47.351225   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:49.352737   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:48.920709   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:48.933443   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:09:48.933507   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:09:48.964736   86402 cri.go:89] found id: ""
	I1104 12:09:48.964759   86402 logs.go:282] 0 containers: []
	W1104 12:09:48.964770   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:09:48.964777   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:09:48.964837   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:09:48.996646   86402 cri.go:89] found id: ""
	I1104 12:09:48.996670   86402 logs.go:282] 0 containers: []
	W1104 12:09:48.996679   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:09:48.996684   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:09:48.996734   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:09:49.028899   86402 cri.go:89] found id: ""
	I1104 12:09:49.028942   86402 logs.go:282] 0 containers: []
	W1104 12:09:49.028951   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:09:49.028957   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:09:49.029015   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:09:49.065032   86402 cri.go:89] found id: ""
	I1104 12:09:49.065056   86402 logs.go:282] 0 containers: []
	W1104 12:09:49.065064   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:09:49.065075   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:09:49.065120   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:09:49.097159   86402 cri.go:89] found id: ""
	I1104 12:09:49.097183   86402 logs.go:282] 0 containers: []
	W1104 12:09:49.097191   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:09:49.097196   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:09:49.097269   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:09:49.131578   86402 cri.go:89] found id: ""
	I1104 12:09:49.131608   86402 logs.go:282] 0 containers: []
	W1104 12:09:49.131619   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:09:49.131626   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:09:49.131684   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:09:49.164307   86402 cri.go:89] found id: ""
	I1104 12:09:49.164339   86402 logs.go:282] 0 containers: []
	W1104 12:09:49.164358   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:09:49.164367   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:09:49.164430   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:09:49.197171   86402 cri.go:89] found id: ""
	I1104 12:09:49.197199   86402 logs.go:282] 0 containers: []
	W1104 12:09:49.197210   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:09:49.197220   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:09:49.197251   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:09:49.210327   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:09:49.210355   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:09:49.280226   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:09:49.280251   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:09:49.280262   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:09:49.367655   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:09:49.367691   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:09:49.408424   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:09:49.408452   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:09:50.708963   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:53.207337   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:51.555800   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:54.055622   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:51.850949   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:54.350551   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:51.958148   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:51.970451   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:09:51.970521   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:09:52.000896   86402 cri.go:89] found id: ""
	I1104 12:09:52.000929   86402 logs.go:282] 0 containers: []
	W1104 12:09:52.000940   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:09:52.000948   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:09:52.001023   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:09:52.034122   86402 cri.go:89] found id: ""
	I1104 12:09:52.034150   86402 logs.go:282] 0 containers: []
	W1104 12:09:52.034161   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:09:52.034168   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:09:52.034227   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:09:52.070834   86402 cri.go:89] found id: ""
	I1104 12:09:52.070872   86402 logs.go:282] 0 containers: []
	W1104 12:09:52.070884   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:09:52.070891   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:09:52.070950   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:09:52.103730   86402 cri.go:89] found id: ""
	I1104 12:09:52.103758   86402 logs.go:282] 0 containers: []
	W1104 12:09:52.103766   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:09:52.103772   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:09:52.103832   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:09:52.135980   86402 cri.go:89] found id: ""
	I1104 12:09:52.136006   86402 logs.go:282] 0 containers: []
	W1104 12:09:52.136014   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:09:52.136020   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:09:52.136081   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:09:52.168903   86402 cri.go:89] found id: ""
	I1104 12:09:52.168928   86402 logs.go:282] 0 containers: []
	W1104 12:09:52.168936   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:09:52.168942   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:09:52.169001   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:09:52.199499   86402 cri.go:89] found id: ""
	I1104 12:09:52.199529   86402 logs.go:282] 0 containers: []
	W1104 12:09:52.199539   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:09:52.199546   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:09:52.199610   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:09:52.232566   86402 cri.go:89] found id: ""
	I1104 12:09:52.232603   86402 logs.go:282] 0 containers: []
	W1104 12:09:52.232615   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:09:52.232626   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:09:52.232640   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:09:52.282140   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:09:52.282180   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:09:52.295079   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:09:52.295110   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:09:52.364061   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:09:52.364087   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:09:52.364102   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:09:52.437868   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:09:52.437901   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:09:54.978182   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:54.991002   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:09:54.991068   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:09:55.023628   86402 cri.go:89] found id: ""
	I1104 12:09:55.023656   86402 logs.go:282] 0 containers: []
	W1104 12:09:55.023663   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:09:55.023669   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:09:55.023715   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:09:55.058524   86402 cri.go:89] found id: ""
	I1104 12:09:55.058548   86402 logs.go:282] 0 containers: []
	W1104 12:09:55.058557   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:09:55.058564   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:09:55.058634   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:09:55.095730   86402 cri.go:89] found id: ""
	I1104 12:09:55.095760   86402 logs.go:282] 0 containers: []
	W1104 12:09:55.095772   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:09:55.095779   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:09:55.095837   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:09:55.128341   86402 cri.go:89] found id: ""
	I1104 12:09:55.128365   86402 logs.go:282] 0 containers: []
	W1104 12:09:55.128373   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:09:55.128379   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:09:55.128438   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:09:55.160655   86402 cri.go:89] found id: ""
	I1104 12:09:55.160681   86402 logs.go:282] 0 containers: []
	W1104 12:09:55.160693   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:09:55.160700   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:09:55.160754   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:09:55.194050   86402 cri.go:89] found id: ""
	I1104 12:09:55.194077   86402 logs.go:282] 0 containers: []
	W1104 12:09:55.194086   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:09:55.194091   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:09:55.194138   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:09:55.227655   86402 cri.go:89] found id: ""
	I1104 12:09:55.227694   86402 logs.go:282] 0 containers: []
	W1104 12:09:55.227705   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:09:55.227712   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:09:55.227810   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:09:55.261106   86402 cri.go:89] found id: ""
	I1104 12:09:55.261137   86402 logs.go:282] 0 containers: []
	W1104 12:09:55.261147   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:09:55.261157   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:09:55.261171   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:09:55.335577   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:09:55.335598   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:09:55.335610   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:09:55.421339   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:09:55.421375   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:09:55.459936   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:09:55.459967   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:09:55.509346   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:09:55.509382   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:09:55.208869   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:57.707576   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:59.708019   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:56.555996   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:58.556335   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:56.851071   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:58.851254   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:58.023608   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:58.036540   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:09:58.036599   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:09:58.075104   86402 cri.go:89] found id: ""
	I1104 12:09:58.075182   86402 logs.go:282] 0 containers: []
	W1104 12:09:58.075198   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:09:58.075207   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:09:58.075271   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:09:58.109910   86402 cri.go:89] found id: ""
	I1104 12:09:58.109949   86402 logs.go:282] 0 containers: []
	W1104 12:09:58.109961   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:09:58.109968   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:09:58.110038   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:09:58.142829   86402 cri.go:89] found id: ""
	I1104 12:09:58.142854   86402 logs.go:282] 0 containers: []
	W1104 12:09:58.142865   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:09:58.142873   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:09:58.142924   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:09:58.178125   86402 cri.go:89] found id: ""
	I1104 12:09:58.178153   86402 logs.go:282] 0 containers: []
	W1104 12:09:58.178161   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:09:58.178168   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:09:58.178239   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:09:58.214117   86402 cri.go:89] found id: ""
	I1104 12:09:58.214146   86402 logs.go:282] 0 containers: []
	W1104 12:09:58.214156   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:09:58.214162   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:09:58.214213   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:09:58.244728   86402 cri.go:89] found id: ""
	I1104 12:09:58.244751   86402 logs.go:282] 0 containers: []
	W1104 12:09:58.244759   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:09:58.244765   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:09:58.244809   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:09:58.275542   86402 cri.go:89] found id: ""
	I1104 12:09:58.275568   86402 logs.go:282] 0 containers: []
	W1104 12:09:58.275576   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:09:58.275582   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:09:58.275630   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:09:58.314909   86402 cri.go:89] found id: ""
	I1104 12:09:58.314935   86402 logs.go:282] 0 containers: []
	W1104 12:09:58.314943   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:09:58.314952   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:09:58.314962   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:09:58.364361   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:09:58.364390   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:09:58.378483   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:09:58.378517   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:09:58.442012   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:09:58.442033   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:09:58.442045   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:09:58.517260   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:09:58.517298   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:10:01.057203   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:10:01.069937   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:10:01.070008   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:10:01.101672   86402 cri.go:89] found id: ""
	I1104 12:10:01.101698   86402 logs.go:282] 0 containers: []
	W1104 12:10:01.101709   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:10:01.101716   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:10:01.101779   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:10:01.134672   86402 cri.go:89] found id: ""
	I1104 12:10:01.134701   86402 logs.go:282] 0 containers: []
	W1104 12:10:01.134712   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:10:01.134719   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:10:01.134789   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:10:01.167784   86402 cri.go:89] found id: ""
	I1104 12:10:01.167833   86402 logs.go:282] 0 containers: []
	W1104 12:10:01.167845   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:10:01.167853   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:10:01.167945   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:10:01.201218   86402 cri.go:89] found id: ""
	I1104 12:10:01.201260   86402 logs.go:282] 0 containers: []
	W1104 12:10:01.201271   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:10:01.201281   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:10:01.201338   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:10:01.234964   86402 cri.go:89] found id: ""
	I1104 12:10:01.234991   86402 logs.go:282] 0 containers: []
	W1104 12:10:01.235000   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:10:01.235007   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:10:01.235069   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:10:01.267809   86402 cri.go:89] found id: ""
	I1104 12:10:01.267848   86402 logs.go:282] 0 containers: []
	W1104 12:10:01.267881   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:10:01.267890   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:10:01.267942   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:10:01.303567   86402 cri.go:89] found id: ""
	I1104 12:10:01.303590   86402 logs.go:282] 0 containers: []
	W1104 12:10:01.303598   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:10:01.303604   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:10:01.303648   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:10:01.342059   86402 cri.go:89] found id: ""
	I1104 12:10:01.342088   86402 logs.go:282] 0 containers: []
	W1104 12:10:01.342099   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:10:01.342109   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:10:01.342142   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:10:01.354845   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:10:01.354867   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:10:01.423426   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:10:01.423447   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:10:01.423459   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:10:01.498979   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:10:01.499018   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:10:01.537658   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:10:01.537691   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:10:02.208192   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:04.209058   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:01.055266   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:03.056457   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:01.350820   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:03.850435   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:04.088653   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:10:04.103506   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:10:04.103576   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:10:04.137574   86402 cri.go:89] found id: ""
	I1104 12:10:04.137602   86402 logs.go:282] 0 containers: []
	W1104 12:10:04.137612   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:10:04.137620   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:10:04.137684   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:10:04.177624   86402 cri.go:89] found id: ""
	I1104 12:10:04.177662   86402 logs.go:282] 0 containers: []
	W1104 12:10:04.177673   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:10:04.177681   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:10:04.177750   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:10:04.213829   86402 cri.go:89] found id: ""
	I1104 12:10:04.213850   86402 logs.go:282] 0 containers: []
	W1104 12:10:04.213862   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:10:04.213870   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:10:04.213929   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:10:04.251112   86402 cri.go:89] found id: ""
	I1104 12:10:04.251143   86402 logs.go:282] 0 containers: []
	W1104 12:10:04.251154   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:10:04.251162   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:10:04.251227   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:10:04.286005   86402 cri.go:89] found id: ""
	I1104 12:10:04.286036   86402 logs.go:282] 0 containers: []
	W1104 12:10:04.286046   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:10:04.286053   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:10:04.286118   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:10:04.317628   86402 cri.go:89] found id: ""
	I1104 12:10:04.317656   86402 logs.go:282] 0 containers: []
	W1104 12:10:04.317667   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:10:04.317674   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:10:04.317742   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:10:04.351663   86402 cri.go:89] found id: ""
	I1104 12:10:04.351687   86402 logs.go:282] 0 containers: []
	W1104 12:10:04.351695   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:10:04.351700   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:10:04.351755   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:10:04.385818   86402 cri.go:89] found id: ""
	I1104 12:10:04.385842   86402 logs.go:282] 0 containers: []
	W1104 12:10:04.385850   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:10:04.385858   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:10:04.385880   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:10:04.467141   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:10:04.467179   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:10:04.503669   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:10:04.503700   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:10:04.557237   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:10:04.557303   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:10:04.570484   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:10:04.570520   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:10:04.635099   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:10:06.708483   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:09.207171   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:05.556612   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:08.056976   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:06.350422   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:08.351537   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:10.351962   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:07.135741   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:10:07.148039   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:10:07.148132   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:10:07.185171   86402 cri.go:89] found id: ""
	I1104 12:10:07.185196   86402 logs.go:282] 0 containers: []
	W1104 12:10:07.185205   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:10:07.185211   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:10:07.185280   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:10:07.217097   86402 cri.go:89] found id: ""
	I1104 12:10:07.217126   86402 logs.go:282] 0 containers: []
	W1104 12:10:07.217137   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:10:07.217144   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:10:07.217204   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:10:07.250079   86402 cri.go:89] found id: ""
	I1104 12:10:07.250108   86402 logs.go:282] 0 containers: []
	W1104 12:10:07.250116   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:10:07.250121   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:10:07.250169   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:10:07.283423   86402 cri.go:89] found id: ""
	I1104 12:10:07.283463   86402 logs.go:282] 0 containers: []
	W1104 12:10:07.283475   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:10:07.283482   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:10:07.283554   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:10:07.316461   86402 cri.go:89] found id: ""
	I1104 12:10:07.316490   86402 logs.go:282] 0 containers: []
	W1104 12:10:07.316507   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:10:07.316513   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:10:07.316569   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:10:07.361981   86402 cri.go:89] found id: ""
	I1104 12:10:07.362010   86402 logs.go:282] 0 containers: []
	W1104 12:10:07.362018   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:10:07.362024   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:10:07.362087   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:10:07.397834   86402 cri.go:89] found id: ""
	I1104 12:10:07.397867   86402 logs.go:282] 0 containers: []
	W1104 12:10:07.397878   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:10:07.397886   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:10:07.397948   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:10:07.429379   86402 cri.go:89] found id: ""
	I1104 12:10:07.429407   86402 logs.go:282] 0 containers: []
	W1104 12:10:07.429416   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:10:07.429425   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:10:07.429438   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:10:07.495294   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:10:07.495322   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:10:07.495334   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:10:07.578504   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:10:07.578546   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:10:07.617172   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:10:07.617201   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:10:07.667168   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:10:07.667204   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:10:10.181802   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:10:10.196017   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:10:10.196084   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:10:10.228243   86402 cri.go:89] found id: ""
	I1104 12:10:10.228272   86402 logs.go:282] 0 containers: []
	W1104 12:10:10.228282   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:10:10.228289   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:10:10.228347   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:10:10.262110   86402 cri.go:89] found id: ""
	I1104 12:10:10.262143   86402 logs.go:282] 0 containers: []
	W1104 12:10:10.262152   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:10:10.262161   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:10:10.262218   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:10:10.297776   86402 cri.go:89] found id: ""
	I1104 12:10:10.297812   86402 logs.go:282] 0 containers: []
	W1104 12:10:10.297823   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:10:10.297830   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:10:10.297877   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:10:10.332645   86402 cri.go:89] found id: ""
	I1104 12:10:10.332672   86402 logs.go:282] 0 containers: []
	W1104 12:10:10.332680   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:10:10.332685   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:10:10.332730   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:10:10.366703   86402 cri.go:89] found id: ""
	I1104 12:10:10.366735   86402 logs.go:282] 0 containers: []
	W1104 12:10:10.366746   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:10:10.366754   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:10:10.366809   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:10:10.399500   86402 cri.go:89] found id: ""
	I1104 12:10:10.399526   86402 logs.go:282] 0 containers: []
	W1104 12:10:10.399534   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:10:10.399539   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:10:10.399634   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:10:10.434898   86402 cri.go:89] found id: ""
	I1104 12:10:10.434932   86402 logs.go:282] 0 containers: []
	W1104 12:10:10.434943   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:10:10.434951   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:10:10.435022   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:10:10.472159   86402 cri.go:89] found id: ""
	I1104 12:10:10.472189   86402 logs.go:282] 0 containers: []
	W1104 12:10:10.472201   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:10:10.472225   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:10:10.472246   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:10:10.528710   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:10:10.528769   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:10:10.541943   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:10:10.541973   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:10:10.621819   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:10:10.621843   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:10:10.621855   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:10:10.698301   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:10:10.698335   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:10:11.208069   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:13.707594   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:10.556520   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:13.056160   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:15.056984   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:12.851001   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:14.851591   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:13.235151   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:10:13.247511   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:10:13.247585   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:10:13.278546   86402 cri.go:89] found id: ""
	I1104 12:10:13.278576   86402 logs.go:282] 0 containers: []
	W1104 12:10:13.278586   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:10:13.278592   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:10:13.278655   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:10:13.310297   86402 cri.go:89] found id: ""
	I1104 12:10:13.310325   86402 logs.go:282] 0 containers: []
	W1104 12:10:13.310334   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:10:13.310340   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:10:13.310394   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:10:13.344110   86402 cri.go:89] found id: ""
	I1104 12:10:13.344139   86402 logs.go:282] 0 containers: []
	W1104 12:10:13.344150   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:10:13.344158   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:10:13.344210   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:10:13.379778   86402 cri.go:89] found id: ""
	I1104 12:10:13.379806   86402 logs.go:282] 0 containers: []
	W1104 12:10:13.379817   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:10:13.379824   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:10:13.379872   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:10:13.411763   86402 cri.go:89] found id: ""
	I1104 12:10:13.411795   86402 logs.go:282] 0 containers: []
	W1104 12:10:13.411806   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:10:13.411813   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:10:13.411872   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:10:13.445192   86402 cri.go:89] found id: ""
	I1104 12:10:13.445217   86402 logs.go:282] 0 containers: []
	W1104 12:10:13.445235   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:10:13.445243   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:10:13.445297   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:10:13.478518   86402 cri.go:89] found id: ""
	I1104 12:10:13.478549   86402 logs.go:282] 0 containers: []
	W1104 12:10:13.478561   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:10:13.478569   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:10:13.478710   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:10:13.513852   86402 cri.go:89] found id: ""
	I1104 12:10:13.513878   86402 logs.go:282] 0 containers: []
	W1104 12:10:13.513886   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:10:13.513895   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:10:13.513909   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:10:13.590413   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:10:13.590439   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:10:13.590454   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:10:13.664575   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:10:13.664608   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:10:13.700616   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:10:13.700644   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:10:13.751113   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:10:13.751147   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:10:16.264311   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:10:16.277443   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:10:16.277508   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:10:16.309983   86402 cri.go:89] found id: ""
	I1104 12:10:16.310010   86402 logs.go:282] 0 containers: []
	W1104 12:10:16.310020   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:10:16.310025   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:10:16.310073   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:10:16.358281   86402 cri.go:89] found id: ""
	I1104 12:10:16.358305   86402 logs.go:282] 0 containers: []
	W1104 12:10:16.358312   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:10:16.358317   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:10:16.358376   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:10:16.394455   86402 cri.go:89] found id: ""
	I1104 12:10:16.394485   86402 logs.go:282] 0 containers: []
	W1104 12:10:16.394497   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:10:16.394503   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:10:16.394571   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:10:16.430606   86402 cri.go:89] found id: ""
	I1104 12:10:16.430638   86402 logs.go:282] 0 containers: []
	W1104 12:10:16.430648   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:10:16.430655   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:10:16.430716   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:10:16.464402   86402 cri.go:89] found id: ""
	I1104 12:10:16.464439   86402 logs.go:282] 0 containers: []
	W1104 12:10:16.464450   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:10:16.464458   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:10:16.464517   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:10:16.497985   86402 cri.go:89] found id: ""
	I1104 12:10:16.498009   86402 logs.go:282] 0 containers: []
	W1104 12:10:16.498017   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:10:16.498022   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:10:16.498076   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:10:16.531255   86402 cri.go:89] found id: ""
	I1104 12:10:16.531289   86402 logs.go:282] 0 containers: []
	W1104 12:10:16.531301   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:10:16.531309   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:10:16.531372   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:10:16.566176   86402 cri.go:89] found id: ""
	I1104 12:10:16.566204   86402 logs.go:282] 0 containers: []
	W1104 12:10:16.566213   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:10:16.566228   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:10:16.566243   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:10:16.634157   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:10:16.634196   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:10:16.634218   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:10:16.206939   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:18.208360   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:17.555513   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:19.556105   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:17.351026   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:19.351294   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:16.710518   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:10:16.710550   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:10:16.746572   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:10:16.746608   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:10:16.797146   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:10:16.797179   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:10:19.310286   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:10:19.323409   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:10:19.323473   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:10:19.360864   86402 cri.go:89] found id: ""
	I1104 12:10:19.360893   86402 logs.go:282] 0 containers: []
	W1104 12:10:19.360902   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:10:19.360907   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:10:19.360962   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:10:19.400127   86402 cri.go:89] found id: ""
	I1104 12:10:19.400155   86402 logs.go:282] 0 containers: []
	W1104 12:10:19.400167   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:10:19.400174   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:10:19.400230   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:10:19.433023   86402 cri.go:89] found id: ""
	I1104 12:10:19.433049   86402 logs.go:282] 0 containers: []
	W1104 12:10:19.433057   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:10:19.433062   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:10:19.433123   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:10:19.467786   86402 cri.go:89] found id: ""
	I1104 12:10:19.467810   86402 logs.go:282] 0 containers: []
	W1104 12:10:19.467819   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:10:19.467825   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:10:19.467875   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:10:19.498411   86402 cri.go:89] found id: ""
	I1104 12:10:19.498436   86402 logs.go:282] 0 containers: []
	W1104 12:10:19.498444   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:10:19.498455   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:10:19.498502   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:10:19.532146   86402 cri.go:89] found id: ""
	I1104 12:10:19.532171   86402 logs.go:282] 0 containers: []
	W1104 12:10:19.532179   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:10:19.532184   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:10:19.532234   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:10:19.567271   86402 cri.go:89] found id: ""
	I1104 12:10:19.567294   86402 logs.go:282] 0 containers: []
	W1104 12:10:19.567302   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:10:19.567308   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:10:19.567369   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:10:19.608233   86402 cri.go:89] found id: ""
	I1104 12:10:19.608265   86402 logs.go:282] 0 containers: []
	W1104 12:10:19.608279   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:10:19.608289   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:10:19.608304   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:10:19.649039   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:10:19.649071   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:10:19.702129   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:10:19.702168   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:10:19.716749   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:10:19.716776   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:10:19.787538   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:10:19.787560   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:10:19.787572   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:10:20.208694   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:22.708289   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:21.556715   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:23.557173   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:21.851010   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:23.852944   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:22.368982   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:10:22.382889   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:10:22.382962   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:10:22.418672   86402 cri.go:89] found id: ""
	I1104 12:10:22.418698   86402 logs.go:282] 0 containers: []
	W1104 12:10:22.418709   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:10:22.418716   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:10:22.418782   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:10:22.451675   86402 cri.go:89] found id: ""
	I1104 12:10:22.451704   86402 logs.go:282] 0 containers: []
	W1104 12:10:22.451715   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:10:22.451723   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:10:22.451785   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:10:22.488520   86402 cri.go:89] found id: ""
	I1104 12:10:22.488549   86402 logs.go:282] 0 containers: []
	W1104 12:10:22.488561   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:10:22.488567   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:10:22.488631   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:10:22.530288   86402 cri.go:89] found id: ""
	I1104 12:10:22.530312   86402 logs.go:282] 0 containers: []
	W1104 12:10:22.530321   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:10:22.530326   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:10:22.530382   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:10:22.564929   86402 cri.go:89] found id: ""
	I1104 12:10:22.564958   86402 logs.go:282] 0 containers: []
	W1104 12:10:22.564970   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:10:22.564977   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:10:22.565036   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:10:22.598015   86402 cri.go:89] found id: ""
	I1104 12:10:22.598042   86402 logs.go:282] 0 containers: []
	W1104 12:10:22.598051   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:10:22.598056   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:10:22.598160   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:10:22.632894   86402 cri.go:89] found id: ""
	I1104 12:10:22.632921   86402 logs.go:282] 0 containers: []
	W1104 12:10:22.632930   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:10:22.632935   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:10:22.633001   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:10:22.665194   86402 cri.go:89] found id: ""
	I1104 12:10:22.665218   86402 logs.go:282] 0 containers: []
	W1104 12:10:22.665245   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:10:22.665257   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:10:22.665272   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:10:22.717731   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:10:22.717763   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:10:22.732671   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:10:22.732698   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:10:22.823908   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:10:22.823946   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:10:22.823963   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:10:22.907812   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:10:22.907848   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:10:25.449308   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:10:25.461694   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:10:25.461751   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:10:25.493036   86402 cri.go:89] found id: ""
	I1104 12:10:25.493061   86402 logs.go:282] 0 containers: []
	W1104 12:10:25.493068   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:10:25.493075   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:10:25.493122   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:10:25.525084   86402 cri.go:89] found id: ""
	I1104 12:10:25.525116   86402 logs.go:282] 0 containers: []
	W1104 12:10:25.525128   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:10:25.525135   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:10:25.525196   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:10:25.561380   86402 cri.go:89] found id: ""
	I1104 12:10:25.561424   86402 logs.go:282] 0 containers: []
	W1104 12:10:25.561436   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:10:25.561444   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:10:25.561499   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:10:25.595429   86402 cri.go:89] found id: ""
	I1104 12:10:25.595453   86402 logs.go:282] 0 containers: []
	W1104 12:10:25.595468   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:10:25.595474   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:10:25.595521   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:10:25.627409   86402 cri.go:89] found id: ""
	I1104 12:10:25.627436   86402 logs.go:282] 0 containers: []
	W1104 12:10:25.627445   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:10:25.627450   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:10:25.627497   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:10:25.661048   86402 cri.go:89] found id: ""
	I1104 12:10:25.661073   86402 logs.go:282] 0 containers: []
	W1104 12:10:25.661082   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:10:25.661088   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:10:25.661135   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:10:25.698882   86402 cri.go:89] found id: ""
	I1104 12:10:25.698912   86402 logs.go:282] 0 containers: []
	W1104 12:10:25.698920   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:10:25.698926   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:10:25.698978   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:10:25.733355   86402 cri.go:89] found id: ""
	I1104 12:10:25.733397   86402 logs.go:282] 0 containers: []
	W1104 12:10:25.733409   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:10:25.733420   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:10:25.733435   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:10:25.784871   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:10:25.784908   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:10:25.798715   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:10:25.798740   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:10:25.870362   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:10:25.870383   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:10:25.870397   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:10:25.950565   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:10:25.950598   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:10:25.209496   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:27.706991   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:29.708209   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:26.055597   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:28.055845   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:30.056584   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:26.351027   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:28.851204   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:28.488258   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:10:28.506058   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:10:28.506114   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:10:28.566325   86402 cri.go:89] found id: ""
	I1104 12:10:28.566351   86402 logs.go:282] 0 containers: []
	W1104 12:10:28.566358   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:10:28.566364   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:10:28.566413   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:10:28.612753   86402 cri.go:89] found id: ""
	I1104 12:10:28.612781   86402 logs.go:282] 0 containers: []
	W1104 12:10:28.612790   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:10:28.612796   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:10:28.612854   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:10:28.647082   86402 cri.go:89] found id: ""
	I1104 12:10:28.647109   86402 logs.go:282] 0 containers: []
	W1104 12:10:28.647120   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:10:28.647128   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:10:28.647205   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:10:28.683197   86402 cri.go:89] found id: ""
	I1104 12:10:28.683227   86402 logs.go:282] 0 containers: []
	W1104 12:10:28.683239   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:10:28.683247   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:10:28.683299   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:10:28.718139   86402 cri.go:89] found id: ""
	I1104 12:10:28.718175   86402 logs.go:282] 0 containers: []
	W1104 12:10:28.718186   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:10:28.718194   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:10:28.718253   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:10:28.749689   86402 cri.go:89] found id: ""
	I1104 12:10:28.749721   86402 logs.go:282] 0 containers: []
	W1104 12:10:28.749732   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:10:28.749739   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:10:28.749803   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:10:28.786824   86402 cri.go:89] found id: ""
	I1104 12:10:28.786851   86402 logs.go:282] 0 containers: []
	W1104 12:10:28.786859   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:10:28.786864   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:10:28.786925   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:10:28.822833   86402 cri.go:89] found id: ""
	I1104 12:10:28.822856   86402 logs.go:282] 0 containers: []
	W1104 12:10:28.822865   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:10:28.822872   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:10:28.822884   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:10:28.835267   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:10:28.835298   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:10:28.900051   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:10:28.900076   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:10:28.900089   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:10:28.979867   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:10:28.979912   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:10:29.017294   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:10:29.017327   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:10:31.569559   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:10:31.582065   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:10:31.582136   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:10:31.614924   86402 cri.go:89] found id: ""
	I1104 12:10:31.614952   86402 logs.go:282] 0 containers: []
	W1104 12:10:31.614960   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:10:31.614966   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:10:31.615029   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:10:31.647178   86402 cri.go:89] found id: ""
	I1104 12:10:31.647204   86402 logs.go:282] 0 containers: []
	W1104 12:10:31.647212   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:10:31.647218   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:10:31.647277   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:10:31.678723   86402 cri.go:89] found id: ""
	I1104 12:10:31.678749   86402 logs.go:282] 0 containers: []
	W1104 12:10:31.678761   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:10:31.678769   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:10:31.678819   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:10:31.709787   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:34.208234   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:32.555978   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:34.557026   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:31.351700   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:33.850976   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:35.851636   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:31.713013   86402 cri.go:89] found id: ""
	I1104 12:10:31.713036   86402 logs.go:282] 0 containers: []
	W1104 12:10:31.713043   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:10:31.713048   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:10:31.713092   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:10:31.746564   86402 cri.go:89] found id: ""
	I1104 12:10:31.746591   86402 logs.go:282] 0 containers: []
	W1104 12:10:31.746600   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:10:31.746605   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:10:31.746658   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:10:31.779559   86402 cri.go:89] found id: ""
	I1104 12:10:31.779586   86402 logs.go:282] 0 containers: []
	W1104 12:10:31.779594   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:10:31.779601   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:10:31.779652   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:10:31.812047   86402 cri.go:89] found id: ""
	I1104 12:10:31.812076   86402 logs.go:282] 0 containers: []
	W1104 12:10:31.812087   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:10:31.812094   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:10:31.812163   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:10:31.845479   86402 cri.go:89] found id: ""
	I1104 12:10:31.845510   86402 logs.go:282] 0 containers: []
	W1104 12:10:31.845522   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:10:31.845532   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:10:31.845551   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:10:31.909399   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:10:31.909423   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:10:31.909434   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:10:31.985994   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:10:31.986031   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:10:32.023222   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:10:32.023255   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:10:32.074429   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:10:32.074467   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:10:34.588202   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:10:34.600925   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:10:34.600994   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:10:34.632718   86402 cri.go:89] found id: ""
	I1104 12:10:34.632743   86402 logs.go:282] 0 containers: []
	W1104 12:10:34.632754   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:10:34.632763   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:10:34.632813   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:10:34.665553   86402 cri.go:89] found id: ""
	I1104 12:10:34.665576   86402 logs.go:282] 0 containers: []
	W1104 12:10:34.665585   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:10:34.665590   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:10:34.665641   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:10:34.700059   86402 cri.go:89] found id: ""
	I1104 12:10:34.700081   86402 logs.go:282] 0 containers: []
	W1104 12:10:34.700089   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:10:34.700094   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:10:34.700141   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:10:34.732940   86402 cri.go:89] found id: ""
	I1104 12:10:34.732962   86402 logs.go:282] 0 containers: []
	W1104 12:10:34.732970   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:10:34.732978   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:10:34.733023   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:10:34.764580   86402 cri.go:89] found id: ""
	I1104 12:10:34.764610   86402 logs.go:282] 0 containers: []
	W1104 12:10:34.764618   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:10:34.764624   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:10:34.764680   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:10:34.798030   86402 cri.go:89] found id: ""
	I1104 12:10:34.798053   86402 logs.go:282] 0 containers: []
	W1104 12:10:34.798061   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:10:34.798067   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:10:34.798115   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:10:34.829847   86402 cri.go:89] found id: ""
	I1104 12:10:34.829876   86402 logs.go:282] 0 containers: []
	W1104 12:10:34.829884   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:10:34.829889   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:10:34.829946   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:10:34.862764   86402 cri.go:89] found id: ""
	I1104 12:10:34.862792   86402 logs.go:282] 0 containers: []
	W1104 12:10:34.862804   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:10:34.862815   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:10:34.862828   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:10:34.912367   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:10:34.912397   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:10:34.925347   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:10:34.925383   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:10:34.990459   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:10:34.990486   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:10:34.990502   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:10:35.066765   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:10:35.066796   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:10:36.706912   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:38.707144   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:37.056279   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:39.555433   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:38.349986   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:40.354694   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:37.602696   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:10:37.615041   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:10:37.615115   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:10:37.646872   86402 cri.go:89] found id: ""
	I1104 12:10:37.646900   86402 logs.go:282] 0 containers: []
	W1104 12:10:37.646911   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:10:37.646918   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:10:37.646977   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:10:37.679770   86402 cri.go:89] found id: ""
	I1104 12:10:37.679797   86402 logs.go:282] 0 containers: []
	W1104 12:10:37.679805   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:10:37.679810   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:10:37.679867   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:10:37.711693   86402 cri.go:89] found id: ""
	I1104 12:10:37.711720   86402 logs.go:282] 0 containers: []
	W1104 12:10:37.711733   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:10:37.711743   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:10:37.711803   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:10:37.746605   86402 cri.go:89] found id: ""
	I1104 12:10:37.746636   86402 logs.go:282] 0 containers: []
	W1104 12:10:37.746648   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:10:37.746656   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:10:37.746716   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:10:37.778983   86402 cri.go:89] found id: ""
	I1104 12:10:37.779010   86402 logs.go:282] 0 containers: []
	W1104 12:10:37.779020   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:10:37.779026   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:10:37.779086   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:10:37.813293   86402 cri.go:89] found id: ""
	I1104 12:10:37.813321   86402 logs.go:282] 0 containers: []
	W1104 12:10:37.813330   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:10:37.813335   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:10:37.813387   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:10:37.846181   86402 cri.go:89] found id: ""
	I1104 12:10:37.846209   86402 logs.go:282] 0 containers: []
	W1104 12:10:37.846219   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:10:37.846226   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:10:37.846287   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:10:37.877485   86402 cri.go:89] found id: ""
	I1104 12:10:37.877520   86402 logs.go:282] 0 containers: []
	W1104 12:10:37.877531   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:10:37.877541   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:10:37.877558   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:10:37.926704   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:10:37.926733   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:10:37.939771   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:10:37.939796   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:10:38.003762   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:10:38.003783   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:10:38.003800   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:10:38.085419   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:10:38.085456   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:10:40.625351   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:10:40.637380   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:10:40.637459   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:10:40.670274   86402 cri.go:89] found id: ""
	I1104 12:10:40.670303   86402 logs.go:282] 0 containers: []
	W1104 12:10:40.670315   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:10:40.670322   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:10:40.670382   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:10:40.703383   86402 cri.go:89] found id: ""
	I1104 12:10:40.703414   86402 logs.go:282] 0 containers: []
	W1104 12:10:40.703427   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:10:40.703434   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:10:40.703481   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:10:40.739549   86402 cri.go:89] found id: ""
	I1104 12:10:40.739576   86402 logs.go:282] 0 containers: []
	W1104 12:10:40.739586   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:10:40.739594   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:10:40.739651   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:10:40.775466   86402 cri.go:89] found id: ""
	I1104 12:10:40.775492   86402 logs.go:282] 0 containers: []
	W1104 12:10:40.775502   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:10:40.775513   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:10:40.775567   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:10:40.810486   86402 cri.go:89] found id: ""
	I1104 12:10:40.810515   86402 logs.go:282] 0 containers: []
	W1104 12:10:40.810525   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:10:40.810533   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:10:40.810593   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:10:40.844277   86402 cri.go:89] found id: ""
	I1104 12:10:40.844309   86402 logs.go:282] 0 containers: []
	W1104 12:10:40.844321   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:10:40.844329   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:10:40.844391   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:10:40.878699   86402 cri.go:89] found id: ""
	I1104 12:10:40.878728   86402 logs.go:282] 0 containers: []
	W1104 12:10:40.878739   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:10:40.878746   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:10:40.878804   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:10:40.913888   86402 cri.go:89] found id: ""
	I1104 12:10:40.913913   86402 logs.go:282] 0 containers: []
	W1104 12:10:40.913921   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:10:40.913929   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:10:40.913939   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:10:40.966854   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:10:40.966892   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:10:40.980483   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:10:40.980510   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:10:41.046059   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:10:41.046085   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:10:41.046100   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:10:41.129746   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:10:41.129779   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:10:40.707964   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:43.207804   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:42.057019   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:44.555947   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:42.850057   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:44.851467   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:43.667029   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:10:43.680024   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:10:43.680092   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:10:43.714185   86402 cri.go:89] found id: ""
	I1104 12:10:43.714218   86402 logs.go:282] 0 containers: []
	W1104 12:10:43.714227   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:10:43.714235   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:10:43.714294   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:10:43.749493   86402 cri.go:89] found id: ""
	I1104 12:10:43.749515   86402 logs.go:282] 0 containers: []
	W1104 12:10:43.749523   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:10:43.749529   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:10:43.749588   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:10:43.785400   86402 cri.go:89] found id: ""
	I1104 12:10:43.785426   86402 logs.go:282] 0 containers: []
	W1104 12:10:43.785437   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:10:43.785444   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:10:43.785507   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:10:43.818465   86402 cri.go:89] found id: ""
	I1104 12:10:43.818505   86402 logs.go:282] 0 containers: []
	W1104 12:10:43.818517   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:10:43.818524   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:10:43.818573   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:10:43.850232   86402 cri.go:89] found id: ""
	I1104 12:10:43.850262   86402 logs.go:282] 0 containers: []
	W1104 12:10:43.850272   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:10:43.850279   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:10:43.850337   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:10:43.882806   86402 cri.go:89] found id: ""
	I1104 12:10:43.882840   86402 logs.go:282] 0 containers: []
	W1104 12:10:43.882851   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:10:43.882859   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:10:43.882920   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:10:43.919449   86402 cri.go:89] found id: ""
	I1104 12:10:43.919476   86402 logs.go:282] 0 containers: []
	W1104 12:10:43.919486   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:10:43.919493   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:10:43.919556   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:10:43.953761   86402 cri.go:89] found id: ""
	I1104 12:10:43.953791   86402 logs.go:282] 0 containers: []
	W1104 12:10:43.953801   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:10:43.953812   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:10:43.953825   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:10:44.005559   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:10:44.005594   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:10:44.019431   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:10:44.019456   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:10:44.094436   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:10:44.094457   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:10:44.094470   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:10:44.174026   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:10:44.174061   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:10:45.707449   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:47.709901   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:46.557050   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:48.557552   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:46.851720   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:49.350269   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:46.712021   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:10:46.724258   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:10:46.724318   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:10:46.754472   86402 cri.go:89] found id: ""
	I1104 12:10:46.754501   86402 logs.go:282] 0 containers: []
	W1104 12:10:46.754510   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:10:46.754515   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:10:46.754563   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:10:46.790184   86402 cri.go:89] found id: ""
	I1104 12:10:46.790209   86402 logs.go:282] 0 containers: []
	W1104 12:10:46.790219   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:10:46.790226   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:10:46.790284   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:10:46.824840   86402 cri.go:89] found id: ""
	I1104 12:10:46.824865   86402 logs.go:282] 0 containers: []
	W1104 12:10:46.824875   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:10:46.824882   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:10:46.824952   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:10:46.857295   86402 cri.go:89] found id: ""
	I1104 12:10:46.857329   86402 logs.go:282] 0 containers: []
	W1104 12:10:46.857360   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:10:46.857369   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:10:46.857430   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:10:46.889540   86402 cri.go:89] found id: ""
	I1104 12:10:46.889571   86402 logs.go:282] 0 containers: []
	W1104 12:10:46.889582   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:10:46.889588   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:10:46.889652   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:10:46.930165   86402 cri.go:89] found id: ""
	I1104 12:10:46.930195   86402 logs.go:282] 0 containers: []
	W1104 12:10:46.930204   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:10:46.930210   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:10:46.930266   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:10:46.965964   86402 cri.go:89] found id: ""
	I1104 12:10:46.965994   86402 logs.go:282] 0 containers: []
	W1104 12:10:46.966006   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:10:46.966013   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:10:46.966060   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:10:47.002700   86402 cri.go:89] found id: ""
	I1104 12:10:47.002732   86402 logs.go:282] 0 containers: []
	W1104 12:10:47.002741   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:10:47.002749   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:10:47.002760   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:10:47.056362   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:10:47.056392   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:10:47.070447   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:10:47.070472   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:10:47.143207   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:10:47.143240   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:10:47.143256   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:10:47.223985   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:10:47.224015   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:10:49.765870   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:10:49.778288   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:10:49.778352   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:10:49.812012   86402 cri.go:89] found id: ""
	I1104 12:10:49.812044   86402 logs.go:282] 0 containers: []
	W1104 12:10:49.812054   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:10:49.812064   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:10:49.812115   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:10:49.847260   86402 cri.go:89] found id: ""
	I1104 12:10:49.847290   86402 logs.go:282] 0 containers: []
	W1104 12:10:49.847301   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:10:49.847308   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:10:49.847361   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:10:49.877397   86402 cri.go:89] found id: ""
	I1104 12:10:49.877419   86402 logs.go:282] 0 containers: []
	W1104 12:10:49.877427   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:10:49.877432   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:10:49.877486   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:10:49.912453   86402 cri.go:89] found id: ""
	I1104 12:10:49.912484   86402 logs.go:282] 0 containers: []
	W1104 12:10:49.912499   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:10:49.912506   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:10:49.912572   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:10:49.948374   86402 cri.go:89] found id: ""
	I1104 12:10:49.948404   86402 logs.go:282] 0 containers: []
	W1104 12:10:49.948416   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:10:49.948422   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:10:49.948488   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:10:49.982190   86402 cri.go:89] found id: ""
	I1104 12:10:49.982216   86402 logs.go:282] 0 containers: []
	W1104 12:10:49.982228   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:10:49.982236   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:10:49.982294   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:10:50.014396   86402 cri.go:89] found id: ""
	I1104 12:10:50.014426   86402 logs.go:282] 0 containers: []
	W1104 12:10:50.014437   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:10:50.014445   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:10:50.014507   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:10:50.051770   86402 cri.go:89] found id: ""
	I1104 12:10:50.051793   86402 logs.go:282] 0 containers: []
	W1104 12:10:50.051801   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:10:50.051809   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:10:50.051820   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:10:50.116158   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:10:50.116185   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:10:50.116202   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:10:50.194382   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:10:50.194431   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:10:50.235957   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:10:50.235983   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:10:50.290720   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:10:50.290750   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:10:50.207837   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:52.207972   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:54.208026   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:51.055965   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:53.056014   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:55.056318   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:51.850513   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:54.351193   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:52.805144   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:10:52.817686   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:10:52.817753   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:10:52.852470   86402 cri.go:89] found id: ""
	I1104 12:10:52.852492   86402 logs.go:282] 0 containers: []
	W1104 12:10:52.852546   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:10:52.852559   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:10:52.852603   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:10:52.889682   86402 cri.go:89] found id: ""
	I1104 12:10:52.889705   86402 logs.go:282] 0 containers: []
	W1104 12:10:52.889714   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:10:52.889720   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:10:52.889773   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:10:52.924490   86402 cri.go:89] found id: ""
	I1104 12:10:52.924525   86402 logs.go:282] 0 containers: []
	W1104 12:10:52.924537   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:10:52.924544   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:10:52.924604   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:10:52.957055   86402 cri.go:89] found id: ""
	I1104 12:10:52.957085   86402 logs.go:282] 0 containers: []
	W1104 12:10:52.957094   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:10:52.957099   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:10:52.957143   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:10:52.993379   86402 cri.go:89] found id: ""
	I1104 12:10:52.993411   86402 logs.go:282] 0 containers: []
	W1104 12:10:52.993423   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:10:52.993430   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:10:52.993493   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:10:53.027365   86402 cri.go:89] found id: ""
	I1104 12:10:53.027398   86402 logs.go:282] 0 containers: []
	W1104 12:10:53.027407   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:10:53.027412   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:10:53.027488   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:10:53.061048   86402 cri.go:89] found id: ""
	I1104 12:10:53.061074   86402 logs.go:282] 0 containers: []
	W1104 12:10:53.061082   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:10:53.061089   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:10:53.061163   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:10:53.101867   86402 cri.go:89] found id: ""
	I1104 12:10:53.101894   86402 logs.go:282] 0 containers: []
	W1104 12:10:53.101904   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:10:53.101915   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:10:53.101927   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:10:53.152314   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:10:53.152351   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:10:53.165630   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:10:53.165657   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:10:53.239717   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:10:53.239739   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:10:53.239753   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:10:53.318140   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:10:53.318186   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:10:55.857443   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:10:55.869524   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:10:55.869608   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:10:55.900719   86402 cri.go:89] found id: ""
	I1104 12:10:55.900743   86402 logs.go:282] 0 containers: []
	W1104 12:10:55.900753   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:10:55.900761   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:10:55.900821   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:10:55.932699   86402 cri.go:89] found id: ""
	I1104 12:10:55.932724   86402 logs.go:282] 0 containers: []
	W1104 12:10:55.932734   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:10:55.932741   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:10:55.932798   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:10:55.964729   86402 cri.go:89] found id: ""
	I1104 12:10:55.964758   86402 logs.go:282] 0 containers: []
	W1104 12:10:55.964767   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:10:55.964775   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:10:55.964823   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:10:55.997870   86402 cri.go:89] found id: ""
	I1104 12:10:55.997897   86402 logs.go:282] 0 containers: []
	W1104 12:10:55.997907   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:10:55.997915   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:10:55.997977   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:10:56.031707   86402 cri.go:89] found id: ""
	I1104 12:10:56.031736   86402 logs.go:282] 0 containers: []
	W1104 12:10:56.031744   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:10:56.031749   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:10:56.031805   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:10:56.070839   86402 cri.go:89] found id: ""
	I1104 12:10:56.070863   86402 logs.go:282] 0 containers: []
	W1104 12:10:56.070871   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:10:56.070877   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:10:56.070922   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:10:56.109364   86402 cri.go:89] found id: ""
	I1104 12:10:56.109393   86402 logs.go:282] 0 containers: []
	W1104 12:10:56.109404   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:10:56.109412   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:10:56.109474   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:10:56.143369   86402 cri.go:89] found id: ""
	I1104 12:10:56.143402   86402 logs.go:282] 0 containers: []
	W1104 12:10:56.143414   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:10:56.143424   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:10:56.143437   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:10:56.156924   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:10:56.156952   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:10:56.223624   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:10:56.223647   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:10:56.223659   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:10:56.302040   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:10:56.302082   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:10:56.343102   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:10:56.343150   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:10:56.209085   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:58.712250   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:57.056463   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:59.555744   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:56.850242   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:58.850955   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:58.896551   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:10:58.909034   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:10:58.909110   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:10:58.944520   86402 cri.go:89] found id: ""
	I1104 12:10:58.944550   86402 logs.go:282] 0 containers: []
	W1104 12:10:58.944559   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:10:58.944565   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:10:58.944612   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:10:58.980137   86402 cri.go:89] found id: ""
	I1104 12:10:58.980167   86402 logs.go:282] 0 containers: []
	W1104 12:10:58.980176   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:10:58.980181   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:10:58.980231   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:10:59.014505   86402 cri.go:89] found id: ""
	I1104 12:10:59.014536   86402 logs.go:282] 0 containers: []
	W1104 12:10:59.014545   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:10:59.014551   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:10:59.014602   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:10:59.050616   86402 cri.go:89] found id: ""
	I1104 12:10:59.050642   86402 logs.go:282] 0 containers: []
	W1104 12:10:59.050652   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:10:59.050659   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:10:59.050718   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:10:59.084328   86402 cri.go:89] found id: ""
	I1104 12:10:59.084358   86402 logs.go:282] 0 containers: []
	W1104 12:10:59.084369   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:10:59.084376   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:10:59.084449   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:10:59.116607   86402 cri.go:89] found id: ""
	I1104 12:10:59.116633   86402 logs.go:282] 0 containers: []
	W1104 12:10:59.116642   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:10:59.116649   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:10:59.116711   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:10:59.149727   86402 cri.go:89] found id: ""
	I1104 12:10:59.149754   86402 logs.go:282] 0 containers: []
	W1104 12:10:59.149765   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:10:59.149773   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:10:59.149832   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:10:59.182992   86402 cri.go:89] found id: ""
	I1104 12:10:59.183023   86402 logs.go:282] 0 containers: []
	W1104 12:10:59.183035   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:10:59.183045   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:10:59.183059   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:10:59.234826   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:10:59.234862   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:10:59.248401   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:10:59.248427   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:10:59.317143   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:10:59.317171   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:10:59.317186   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:10:59.397294   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:10:59.397336   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:11:01.208022   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:03.707297   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:01.556680   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:04.055902   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:01.350865   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:03.850510   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:01.933617   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:11:01.946458   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:11:01.946537   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:11:01.981652   86402 cri.go:89] found id: ""
	I1104 12:11:01.981682   86402 logs.go:282] 0 containers: []
	W1104 12:11:01.981693   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:11:01.981701   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:11:01.981757   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:11:02.014245   86402 cri.go:89] found id: ""
	I1104 12:11:02.014273   86402 logs.go:282] 0 containers: []
	W1104 12:11:02.014282   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:11:02.014287   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:11:02.014350   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:11:02.047386   86402 cri.go:89] found id: ""
	I1104 12:11:02.047409   86402 logs.go:282] 0 containers: []
	W1104 12:11:02.047420   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:11:02.047427   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:11:02.047488   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:11:02.086427   86402 cri.go:89] found id: ""
	I1104 12:11:02.086464   86402 logs.go:282] 0 containers: []
	W1104 12:11:02.086475   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:11:02.086483   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:11:02.086544   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:11:02.120219   86402 cri.go:89] found id: ""
	I1104 12:11:02.120246   86402 logs.go:282] 0 containers: []
	W1104 12:11:02.120255   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:11:02.120260   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:11:02.120318   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:11:02.153832   86402 cri.go:89] found id: ""
	I1104 12:11:02.153864   86402 logs.go:282] 0 containers: []
	W1104 12:11:02.153876   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:11:02.153884   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:11:02.153950   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:11:02.186237   86402 cri.go:89] found id: ""
	I1104 12:11:02.186266   86402 logs.go:282] 0 containers: []
	W1104 12:11:02.186278   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:11:02.186285   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:11:02.186351   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:11:02.219238   86402 cri.go:89] found id: ""
	I1104 12:11:02.219269   86402 logs.go:282] 0 containers: []
	W1104 12:11:02.219280   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:11:02.219290   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:11:02.219301   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:11:02.301062   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:11:02.301099   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:11:02.358585   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:11:02.358617   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:11:02.414153   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:11:02.414200   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:11:02.428429   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:11:02.428456   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:11:02.497040   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:11:04.998089   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:11:05.010890   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:11:05.010947   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:11:05.046483   86402 cri.go:89] found id: ""
	I1104 12:11:05.046513   86402 logs.go:282] 0 containers: []
	W1104 12:11:05.046523   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:11:05.046534   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:11:05.046594   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:11:05.079487   86402 cri.go:89] found id: ""
	I1104 12:11:05.079516   86402 logs.go:282] 0 containers: []
	W1104 12:11:05.079527   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:11:05.079535   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:11:05.079595   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:11:05.110968   86402 cri.go:89] found id: ""
	I1104 12:11:05.110997   86402 logs.go:282] 0 containers: []
	W1104 12:11:05.111004   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:11:05.111010   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:11:05.111057   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:11:05.143372   86402 cri.go:89] found id: ""
	I1104 12:11:05.143398   86402 logs.go:282] 0 containers: []
	W1104 12:11:05.143408   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:11:05.143415   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:11:05.143484   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:11:05.174691   86402 cri.go:89] found id: ""
	I1104 12:11:05.174717   86402 logs.go:282] 0 containers: []
	W1104 12:11:05.174730   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:11:05.174737   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:11:05.174802   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:11:05.210005   86402 cri.go:89] found id: ""
	I1104 12:11:05.210025   86402 logs.go:282] 0 containers: []
	W1104 12:11:05.210033   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:11:05.210041   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:11:05.210085   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:11:05.244874   86402 cri.go:89] found id: ""
	I1104 12:11:05.244899   86402 logs.go:282] 0 containers: []
	W1104 12:11:05.244908   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:11:05.244913   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:11:05.244956   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:11:05.276517   86402 cri.go:89] found id: ""
	I1104 12:11:05.276547   86402 logs.go:282] 0 containers: []
	W1104 12:11:05.276557   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:11:05.276568   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:11:05.276581   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:11:05.354057   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:11:05.354087   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:11:05.390848   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:11:05.390887   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:11:05.442659   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:11:05.442692   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:11:05.456290   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:11:05.456315   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:11:05.530310   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:11:06.207301   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:08.208333   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:06.056314   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:08.556910   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:06.350241   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:08.350774   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:10.351274   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:08.030545   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:11:08.043598   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:11:08.043654   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:11:08.081604   86402 cri.go:89] found id: ""
	I1104 12:11:08.081634   86402 logs.go:282] 0 containers: []
	W1104 12:11:08.081644   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:11:08.081652   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:11:08.081712   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:11:08.135357   86402 cri.go:89] found id: ""
	I1104 12:11:08.135388   86402 logs.go:282] 0 containers: []
	W1104 12:11:08.135398   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:11:08.135405   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:11:08.135470   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:11:08.173275   86402 cri.go:89] found id: ""
	I1104 12:11:08.173298   86402 logs.go:282] 0 containers: []
	W1104 12:11:08.173306   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:11:08.173311   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:11:08.173371   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:11:08.213415   86402 cri.go:89] found id: ""
	I1104 12:11:08.213439   86402 logs.go:282] 0 containers: []
	W1104 12:11:08.213448   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:11:08.213454   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:11:08.213507   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:11:08.244759   86402 cri.go:89] found id: ""
	I1104 12:11:08.244791   86402 logs.go:282] 0 containers: []
	W1104 12:11:08.244802   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:11:08.244809   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:11:08.244870   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:11:08.276643   86402 cri.go:89] found id: ""
	I1104 12:11:08.276666   86402 logs.go:282] 0 containers: []
	W1104 12:11:08.276675   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:11:08.276682   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:11:08.276751   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:11:08.308425   86402 cri.go:89] found id: ""
	I1104 12:11:08.308451   86402 logs.go:282] 0 containers: []
	W1104 12:11:08.308462   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:11:08.308469   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:11:08.308527   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:11:08.340645   86402 cri.go:89] found id: ""
	I1104 12:11:08.340675   86402 logs.go:282] 0 containers: []
	W1104 12:11:08.340687   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:11:08.340698   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:11:08.340712   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:11:08.413171   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:11:08.413196   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:11:08.413214   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:11:08.496208   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:11:08.496246   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:11:08.534527   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:11:08.534560   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:11:08.583515   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:11:08.583550   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:11:11.099000   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:11:11.112158   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:11:11.112236   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:11:11.145718   86402 cri.go:89] found id: ""
	I1104 12:11:11.145748   86402 logs.go:282] 0 containers: []
	W1104 12:11:11.145758   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:11:11.145765   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:11:11.145958   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:11:11.177270   86402 cri.go:89] found id: ""
	I1104 12:11:11.177301   86402 logs.go:282] 0 containers: []
	W1104 12:11:11.177317   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:11:11.177325   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:11:11.177396   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:11:11.209696   86402 cri.go:89] found id: ""
	I1104 12:11:11.209722   86402 logs.go:282] 0 containers: []
	W1104 12:11:11.209737   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:11:11.209742   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:11:11.209789   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:11:11.244034   86402 cri.go:89] found id: ""
	I1104 12:11:11.244061   86402 logs.go:282] 0 containers: []
	W1104 12:11:11.244069   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:11:11.244078   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:11:11.244135   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:11:11.276437   86402 cri.go:89] found id: ""
	I1104 12:11:11.276462   86402 logs.go:282] 0 containers: []
	W1104 12:11:11.276470   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:11:11.276476   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:11:11.276530   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:11:11.308954   86402 cri.go:89] found id: ""
	I1104 12:11:11.308980   86402 logs.go:282] 0 containers: []
	W1104 12:11:11.308988   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:11:11.308994   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:11:11.309057   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:11:11.342175   86402 cri.go:89] found id: ""
	I1104 12:11:11.342199   86402 logs.go:282] 0 containers: []
	W1104 12:11:11.342207   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:11:11.342211   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:11:11.342266   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:11:11.374810   86402 cri.go:89] found id: ""
	I1104 12:11:11.374839   86402 logs.go:282] 0 containers: []
	W1104 12:11:11.374851   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:11:11.374860   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:11:11.374875   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:11:11.443638   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:11:11.443667   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:11:11.443681   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:11:11.526996   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:11:11.527031   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:11:11.568297   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:11:11.568325   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:11:11.616229   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:11:11.616264   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:11:10.707934   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:12.708053   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:11.055469   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:13.055645   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:15.057348   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:12.851253   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:15.350857   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:14.130707   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:11:14.143045   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:11:14.143116   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:11:14.185422   86402 cri.go:89] found id: ""
	I1104 12:11:14.185461   86402 logs.go:282] 0 containers: []
	W1104 12:11:14.185471   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:11:14.185477   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:11:14.185525   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:11:14.219890   86402 cri.go:89] found id: ""
	I1104 12:11:14.219918   86402 logs.go:282] 0 containers: []
	W1104 12:11:14.219928   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:11:14.219938   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:11:14.219985   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:11:14.253256   86402 cri.go:89] found id: ""
	I1104 12:11:14.253286   86402 logs.go:282] 0 containers: []
	W1104 12:11:14.253296   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:11:14.253304   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:11:14.253364   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:11:14.286228   86402 cri.go:89] found id: ""
	I1104 12:11:14.286259   86402 logs.go:282] 0 containers: []
	W1104 12:11:14.286271   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:11:14.286279   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:11:14.286342   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:11:14.317065   86402 cri.go:89] found id: ""
	I1104 12:11:14.317091   86402 logs.go:282] 0 containers: []
	W1104 12:11:14.317101   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:11:14.317106   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:11:14.317168   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:11:14.348540   86402 cri.go:89] found id: ""
	I1104 12:11:14.348575   86402 logs.go:282] 0 containers: []
	W1104 12:11:14.348583   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:11:14.348589   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:11:14.348647   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:11:14.380824   86402 cri.go:89] found id: ""
	I1104 12:11:14.380849   86402 logs.go:282] 0 containers: []
	W1104 12:11:14.380858   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:11:14.380863   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:11:14.380924   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:11:14.413757   86402 cri.go:89] found id: ""
	I1104 12:11:14.413785   86402 logs.go:282] 0 containers: []
	W1104 12:11:14.413796   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:11:14.413806   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:11:14.413822   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:11:14.479311   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:11:14.479336   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:11:14.479349   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:11:14.572923   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:11:14.572959   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:11:14.620277   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:11:14.620359   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:11:14.674276   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:11:14.674310   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:11:15.208704   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:17.708523   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:17.555941   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:19.556233   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:17.351751   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:19.851087   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:17.187062   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:11:17.200179   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:11:17.200260   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:11:17.232208   86402 cri.go:89] found id: ""
	I1104 12:11:17.232231   86402 logs.go:282] 0 containers: []
	W1104 12:11:17.232238   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:11:17.232244   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:11:17.232298   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:11:17.266224   86402 cri.go:89] found id: ""
	I1104 12:11:17.266248   86402 logs.go:282] 0 containers: []
	W1104 12:11:17.266257   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:11:17.266262   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:11:17.266320   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:11:17.301909   86402 cri.go:89] found id: ""
	I1104 12:11:17.301940   86402 logs.go:282] 0 containers: []
	W1104 12:11:17.301948   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:11:17.301953   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:11:17.302005   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:11:17.339493   86402 cri.go:89] found id: ""
	I1104 12:11:17.339517   86402 logs.go:282] 0 containers: []
	W1104 12:11:17.339530   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:11:17.339537   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:11:17.339600   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:11:17.373879   86402 cri.go:89] found id: ""
	I1104 12:11:17.373927   86402 logs.go:282] 0 containers: []
	W1104 12:11:17.373938   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:11:17.373945   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:11:17.373996   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:11:17.405533   86402 cri.go:89] found id: ""
	I1104 12:11:17.405562   86402 logs.go:282] 0 containers: []
	W1104 12:11:17.405573   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:11:17.405583   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:11:17.405645   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:11:17.439421   86402 cri.go:89] found id: ""
	I1104 12:11:17.439451   86402 logs.go:282] 0 containers: []
	W1104 12:11:17.439460   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:11:17.439468   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:11:17.439532   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:11:17.474573   86402 cri.go:89] found id: ""
	I1104 12:11:17.474602   86402 logs.go:282] 0 containers: []
	W1104 12:11:17.474613   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:11:17.474623   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:11:17.474636   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:11:17.524497   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:11:17.524536   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:11:17.538421   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:11:17.538460   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:11:17.607299   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:11:17.607323   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:11:17.607337   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:11:17.684181   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:11:17.684224   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:11:20.223600   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:11:20.237793   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:11:20.237865   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:11:20.279656   86402 cri.go:89] found id: ""
	I1104 12:11:20.279682   86402 logs.go:282] 0 containers: []
	W1104 12:11:20.279693   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:11:20.279700   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:11:20.279767   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:11:20.337980   86402 cri.go:89] found id: ""
	I1104 12:11:20.338009   86402 logs.go:282] 0 containers: []
	W1104 12:11:20.338020   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:11:20.338027   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:11:20.338087   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:11:20.383183   86402 cri.go:89] found id: ""
	I1104 12:11:20.383217   86402 logs.go:282] 0 containers: []
	W1104 12:11:20.383226   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:11:20.383231   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:11:20.383282   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:11:20.416470   86402 cri.go:89] found id: ""
	I1104 12:11:20.416495   86402 logs.go:282] 0 containers: []
	W1104 12:11:20.416505   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:11:20.416512   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:11:20.416570   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:11:20.451968   86402 cri.go:89] found id: ""
	I1104 12:11:20.452000   86402 logs.go:282] 0 containers: []
	W1104 12:11:20.452011   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:11:20.452017   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:11:20.452074   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:11:20.484800   86402 cri.go:89] found id: ""
	I1104 12:11:20.484823   86402 logs.go:282] 0 containers: []
	W1104 12:11:20.484831   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:11:20.484837   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:11:20.484893   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:11:20.516263   86402 cri.go:89] found id: ""
	I1104 12:11:20.516292   86402 logs.go:282] 0 containers: []
	W1104 12:11:20.516300   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:11:20.516306   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:11:20.516364   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:11:20.548616   86402 cri.go:89] found id: ""
	I1104 12:11:20.548640   86402 logs.go:282] 0 containers: []
	W1104 12:11:20.548651   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:11:20.548661   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:11:20.548674   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:11:20.599338   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:11:20.599368   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:11:20.613116   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:11:20.613148   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:11:20.678898   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:11:20.678924   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:11:20.678936   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:11:20.757570   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:11:20.757606   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:11:20.206649   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:22.207379   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:24.207579   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:22.056670   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:24.555101   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:22.350891   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:24.351318   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:23.293912   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:11:23.307037   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:11:23.307110   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:11:23.341161   86402 cri.go:89] found id: ""
	I1104 12:11:23.341186   86402 logs.go:282] 0 containers: []
	W1104 12:11:23.341195   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:11:23.341200   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:11:23.341277   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:11:23.373462   86402 cri.go:89] found id: ""
	I1104 12:11:23.373491   86402 logs.go:282] 0 containers: []
	W1104 12:11:23.373503   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:11:23.373510   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:11:23.373568   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:11:23.404439   86402 cri.go:89] found id: ""
	I1104 12:11:23.404471   86402 logs.go:282] 0 containers: []
	W1104 12:11:23.404482   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:11:23.404489   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:11:23.404548   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:11:23.435224   86402 cri.go:89] found id: ""
	I1104 12:11:23.435256   86402 logs.go:282] 0 containers: []
	W1104 12:11:23.435267   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:11:23.435274   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:11:23.435336   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:11:23.472593   86402 cri.go:89] found id: ""
	I1104 12:11:23.472622   86402 logs.go:282] 0 containers: []
	W1104 12:11:23.472633   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:11:23.472641   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:11:23.472693   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:11:23.503413   86402 cri.go:89] found id: ""
	I1104 12:11:23.503438   86402 logs.go:282] 0 containers: []
	W1104 12:11:23.503447   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:11:23.503454   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:11:23.503516   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:11:23.537582   86402 cri.go:89] found id: ""
	I1104 12:11:23.537610   86402 logs.go:282] 0 containers: []
	W1104 12:11:23.537621   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:11:23.537628   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:11:23.537689   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:11:23.573799   86402 cri.go:89] found id: ""
	I1104 12:11:23.573824   86402 logs.go:282] 0 containers: []
	W1104 12:11:23.573831   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:11:23.573838   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:11:23.573851   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:11:23.649239   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:11:23.649273   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:11:23.686518   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:11:23.686548   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:11:23.738955   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:11:23.738987   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:11:23.751909   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:11:23.751935   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:11:23.827244   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:11:26.327902   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:11:26.339708   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:11:26.339784   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:11:26.369615   86402 cri.go:89] found id: ""
	I1104 12:11:26.369644   86402 logs.go:282] 0 containers: []
	W1104 12:11:26.369653   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:11:26.369659   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:11:26.369715   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:11:26.402027   86402 cri.go:89] found id: ""
	I1104 12:11:26.402056   86402 logs.go:282] 0 containers: []
	W1104 12:11:26.402065   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:11:26.402070   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:11:26.402123   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:11:26.433483   86402 cri.go:89] found id: ""
	I1104 12:11:26.433512   86402 logs.go:282] 0 containers: []
	W1104 12:11:26.433523   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:11:26.433529   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:11:26.433637   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:11:26.466403   86402 cri.go:89] found id: ""
	I1104 12:11:26.466442   86402 logs.go:282] 0 containers: []
	W1104 12:11:26.466453   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:11:26.466468   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:11:26.466524   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:11:26.499818   86402 cri.go:89] found id: ""
	I1104 12:11:26.499853   86402 logs.go:282] 0 containers: []
	W1104 12:11:26.499864   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:11:26.499871   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:11:26.499930   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:11:26.537782   86402 cri.go:89] found id: ""
	I1104 12:11:26.537809   86402 logs.go:282] 0 containers: []
	W1104 12:11:26.537822   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:11:26.537830   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:11:26.537890   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:11:26.574091   86402 cri.go:89] found id: ""
	I1104 12:11:26.574120   86402 logs.go:282] 0 containers: []
	W1104 12:11:26.574131   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:11:26.574138   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:11:26.574199   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:11:26.607554   86402 cri.go:89] found id: ""
	I1104 12:11:26.607584   86402 logs.go:282] 0 containers: []
	W1104 12:11:26.607596   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:11:26.607606   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:11:26.607620   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:11:26.657405   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:11:26.657443   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:11:26.670022   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:11:26.670046   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1104 12:11:26.707958   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:29.207380   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:26.556568   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:28.557276   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:26.852761   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:29.351303   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	W1104 12:11:26.736238   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:11:26.736266   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:11:26.736278   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:11:26.816277   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:11:26.816309   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:11:29.357639   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:11:29.371116   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:11:29.371204   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:11:29.405569   86402 cri.go:89] found id: ""
	I1104 12:11:29.405595   86402 logs.go:282] 0 containers: []
	W1104 12:11:29.405604   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:11:29.405611   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:11:29.405668   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:11:29.435669   86402 cri.go:89] found id: ""
	I1104 12:11:29.435697   86402 logs.go:282] 0 containers: []
	W1104 12:11:29.435709   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:11:29.435716   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:11:29.435781   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:11:29.476208   86402 cri.go:89] found id: ""
	I1104 12:11:29.476236   86402 logs.go:282] 0 containers: []
	W1104 12:11:29.476245   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:11:29.476251   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:11:29.476305   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:11:29.511446   86402 cri.go:89] found id: ""
	I1104 12:11:29.511474   86402 logs.go:282] 0 containers: []
	W1104 12:11:29.511483   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:11:29.511489   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:11:29.511541   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:11:29.543714   86402 cri.go:89] found id: ""
	I1104 12:11:29.543742   86402 logs.go:282] 0 containers: []
	W1104 12:11:29.543754   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:11:29.543761   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:11:29.543840   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:11:29.577429   86402 cri.go:89] found id: ""
	I1104 12:11:29.577456   86402 logs.go:282] 0 containers: []
	W1104 12:11:29.577466   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:11:29.577473   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:11:29.577534   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:11:29.608430   86402 cri.go:89] found id: ""
	I1104 12:11:29.608457   86402 logs.go:282] 0 containers: []
	W1104 12:11:29.608475   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:11:29.608483   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:11:29.608539   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:11:29.640029   86402 cri.go:89] found id: ""
	I1104 12:11:29.640057   86402 logs.go:282] 0 containers: []
	W1104 12:11:29.640068   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:11:29.640078   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:11:29.640092   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:11:29.691170   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:11:29.691202   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:11:29.704949   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:11:29.704987   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:11:29.766856   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:11:29.766884   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:11:29.766898   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:11:29.847487   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:11:29.847525   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:11:31.208725   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:33.709593   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:30.557500   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:33.056569   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:31.851101   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:34.350356   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:32.382925   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:11:32.395889   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:11:32.395943   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:11:32.428711   86402 cri.go:89] found id: ""
	I1104 12:11:32.428736   86402 logs.go:282] 0 containers: []
	W1104 12:11:32.428749   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:11:32.428755   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:11:32.428810   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:11:32.463269   86402 cri.go:89] found id: ""
	I1104 12:11:32.463295   86402 logs.go:282] 0 containers: []
	W1104 12:11:32.463307   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:11:32.463313   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:11:32.463372   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:11:32.496098   86402 cri.go:89] found id: ""
	I1104 12:11:32.496125   86402 logs.go:282] 0 containers: []
	W1104 12:11:32.496135   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:11:32.496142   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:11:32.496213   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:11:32.528729   86402 cri.go:89] found id: ""
	I1104 12:11:32.528760   86402 logs.go:282] 0 containers: []
	W1104 12:11:32.528771   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:11:32.528778   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:11:32.528860   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:11:32.567290   86402 cri.go:89] found id: ""
	I1104 12:11:32.567321   86402 logs.go:282] 0 containers: []
	W1104 12:11:32.567332   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:11:32.567338   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:11:32.567397   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:11:32.608932   86402 cri.go:89] found id: ""
	I1104 12:11:32.608962   86402 logs.go:282] 0 containers: []
	W1104 12:11:32.608973   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:11:32.608980   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:11:32.609037   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:11:32.641128   86402 cri.go:89] found id: ""
	I1104 12:11:32.641155   86402 logs.go:282] 0 containers: []
	W1104 12:11:32.641164   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:11:32.641171   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:11:32.641239   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:11:32.675651   86402 cri.go:89] found id: ""
	I1104 12:11:32.675682   86402 logs.go:282] 0 containers: []
	W1104 12:11:32.675694   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:11:32.675704   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:11:32.675719   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:11:32.742369   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:11:32.742406   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:11:32.742419   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:11:32.823371   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:11:32.823412   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:11:32.862243   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:11:32.862270   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:11:32.910961   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:11:32.910987   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:11:35.425742   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:11:35.438553   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:11:35.438615   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:11:35.475160   86402 cri.go:89] found id: ""
	I1104 12:11:35.475189   86402 logs.go:282] 0 containers: []
	W1104 12:11:35.475201   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:11:35.475209   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:11:35.475267   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:11:35.517193   86402 cri.go:89] found id: ""
	I1104 12:11:35.517239   86402 logs.go:282] 0 containers: []
	W1104 12:11:35.517252   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:11:35.517260   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:11:35.517329   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:11:35.552941   86402 cri.go:89] found id: ""
	I1104 12:11:35.552967   86402 logs.go:282] 0 containers: []
	W1104 12:11:35.552978   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:11:35.552985   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:11:35.553056   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:11:35.589960   86402 cri.go:89] found id: ""
	I1104 12:11:35.589983   86402 logs.go:282] 0 containers: []
	W1104 12:11:35.589994   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:11:35.590001   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:11:35.590063   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:11:35.624546   86402 cri.go:89] found id: ""
	I1104 12:11:35.624575   86402 logs.go:282] 0 containers: []
	W1104 12:11:35.624587   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:11:35.624595   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:11:35.624655   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:11:35.657855   86402 cri.go:89] found id: ""
	I1104 12:11:35.657885   86402 logs.go:282] 0 containers: []
	W1104 12:11:35.657896   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:11:35.657903   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:11:35.657957   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:11:35.691465   86402 cri.go:89] found id: ""
	I1104 12:11:35.691498   86402 logs.go:282] 0 containers: []
	W1104 12:11:35.691509   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:11:35.691516   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:11:35.691587   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:11:35.727520   86402 cri.go:89] found id: ""
	I1104 12:11:35.727548   86402 logs.go:282] 0 containers: []
	W1104 12:11:35.727558   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:11:35.727569   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:11:35.727584   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:11:35.777876   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:11:35.777912   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:11:35.790790   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:11:35.790817   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:11:35.856780   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:11:35.856805   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:11:35.856819   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:11:35.936769   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:11:35.936812   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:11:36.207096   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:38.707776   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:35.556694   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:38.055778   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:36.850946   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:39.350058   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:38.474827   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:11:38.488151   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:11:38.488221   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:11:38.523010   86402 cri.go:89] found id: ""
	I1104 12:11:38.523042   86402 logs.go:282] 0 containers: []
	W1104 12:11:38.523053   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:11:38.523061   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:11:38.523117   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:11:38.558065   86402 cri.go:89] found id: ""
	I1104 12:11:38.558093   86402 logs.go:282] 0 containers: []
	W1104 12:11:38.558102   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:11:38.558107   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:11:38.558153   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:11:38.590676   86402 cri.go:89] found id: ""
	I1104 12:11:38.590704   86402 logs.go:282] 0 containers: []
	W1104 12:11:38.590715   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:11:38.590723   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:11:38.590780   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:11:38.623762   86402 cri.go:89] found id: ""
	I1104 12:11:38.623793   86402 logs.go:282] 0 containers: []
	W1104 12:11:38.623804   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:11:38.623811   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:11:38.623870   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:11:38.655918   86402 cri.go:89] found id: ""
	I1104 12:11:38.655947   86402 logs.go:282] 0 containers: []
	W1104 12:11:38.655958   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:11:38.655966   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:11:38.656028   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:11:38.691200   86402 cri.go:89] found id: ""
	I1104 12:11:38.691228   86402 logs.go:282] 0 containers: []
	W1104 12:11:38.691238   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:11:38.691245   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:11:38.691302   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:11:38.724725   86402 cri.go:89] found id: ""
	I1104 12:11:38.724748   86402 logs.go:282] 0 containers: []
	W1104 12:11:38.724756   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:11:38.724761   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:11:38.724819   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:11:38.756333   86402 cri.go:89] found id: ""
	I1104 12:11:38.756360   86402 logs.go:282] 0 containers: []
	W1104 12:11:38.756370   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:11:38.756381   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:11:38.756395   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:11:38.807722   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:11:38.807756   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:11:38.821055   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:11:38.821079   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:11:38.886629   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:11:38.886656   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:11:38.886671   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:11:38.960958   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:11:38.960999   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:11:41.503471   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:11:41.515994   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:11:41.516065   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:11:41.549936   86402 cri.go:89] found id: ""
	I1104 12:11:41.549960   86402 logs.go:282] 0 containers: []
	W1104 12:11:41.549968   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:11:41.549975   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:11:41.550033   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:11:41.584565   86402 cri.go:89] found id: ""
	I1104 12:11:41.584590   86402 logs.go:282] 0 containers: []
	W1104 12:11:41.584602   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:11:41.584610   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:11:41.584660   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:11:41.616427   86402 cri.go:89] found id: ""
	I1104 12:11:41.616450   86402 logs.go:282] 0 containers: []
	W1104 12:11:41.616458   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:11:41.616463   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:11:41.616510   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:11:41.650835   86402 cri.go:89] found id: ""
	I1104 12:11:41.650864   86402 logs.go:282] 0 containers: []
	W1104 12:11:41.650875   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:11:41.650882   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:11:41.650946   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:11:40.707926   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:43.207969   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:40.555616   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:42.555839   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:44.556749   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:41.351131   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:43.851925   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:41.685899   86402 cri.go:89] found id: ""
	I1104 12:11:41.685921   86402 logs.go:282] 0 containers: []
	W1104 12:11:41.685928   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:11:41.685934   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:11:41.685979   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:11:41.718730   86402 cri.go:89] found id: ""
	I1104 12:11:41.718757   86402 logs.go:282] 0 containers: []
	W1104 12:11:41.718773   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:11:41.718782   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:11:41.718837   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:11:41.748843   86402 cri.go:89] found id: ""
	I1104 12:11:41.748875   86402 logs.go:282] 0 containers: []
	W1104 12:11:41.748887   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:11:41.748895   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:11:41.748963   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:11:41.780225   86402 cri.go:89] found id: ""
	I1104 12:11:41.780251   86402 logs.go:282] 0 containers: []
	W1104 12:11:41.780260   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:11:41.780268   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:11:41.780285   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:11:41.830864   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:11:41.830893   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:11:41.844252   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:11:41.844279   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:11:41.908514   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:11:41.908542   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:11:41.908554   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:11:41.988545   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:11:41.988582   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:11:44.527641   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:11:44.540026   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:11:44.540108   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:11:44.574530   86402 cri.go:89] found id: ""
	I1104 12:11:44.574559   86402 logs.go:282] 0 containers: []
	W1104 12:11:44.574570   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:11:44.574577   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:11:44.574638   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:11:44.606073   86402 cri.go:89] found id: ""
	I1104 12:11:44.606103   86402 logs.go:282] 0 containers: []
	W1104 12:11:44.606114   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:11:44.606121   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:11:44.606185   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:11:44.639750   86402 cri.go:89] found id: ""
	I1104 12:11:44.639775   86402 logs.go:282] 0 containers: []
	W1104 12:11:44.639784   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:11:44.639792   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:11:44.639850   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:11:44.673528   86402 cri.go:89] found id: ""
	I1104 12:11:44.673557   86402 logs.go:282] 0 containers: []
	W1104 12:11:44.673565   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:11:44.673573   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:11:44.673625   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:11:44.705928   86402 cri.go:89] found id: ""
	I1104 12:11:44.705956   86402 logs.go:282] 0 containers: []
	W1104 12:11:44.705966   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:11:44.705973   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:11:44.706032   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:11:44.736779   86402 cri.go:89] found id: ""
	I1104 12:11:44.736811   86402 logs.go:282] 0 containers: []
	W1104 12:11:44.736822   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:11:44.736830   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:11:44.736886   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:11:44.769929   86402 cri.go:89] found id: ""
	I1104 12:11:44.769956   86402 logs.go:282] 0 containers: []
	W1104 12:11:44.769964   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:11:44.769970   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:11:44.770015   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:11:44.800818   86402 cri.go:89] found id: ""
	I1104 12:11:44.800846   86402 logs.go:282] 0 containers: []
	W1104 12:11:44.800855   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:11:44.800863   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:11:44.800873   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:11:44.853610   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:11:44.853641   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:11:44.866656   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:11:44.866683   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:11:44.936386   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:11:44.936412   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:11:44.936425   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:11:45.011789   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:11:45.011823   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:11:45.707030   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:47.707464   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:49.711329   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:46.557112   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:49.055647   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:46.351055   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:48.850134   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:50.851867   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:47.548672   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:11:47.563082   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:11:47.563157   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:11:47.598722   86402 cri.go:89] found id: ""
	I1104 12:11:47.598748   86402 logs.go:282] 0 containers: []
	W1104 12:11:47.598756   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:11:47.598762   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:11:47.598809   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:11:47.633376   86402 cri.go:89] found id: ""
	I1104 12:11:47.633412   86402 logs.go:282] 0 containers: []
	W1104 12:11:47.633421   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:11:47.633428   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:11:47.633486   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:11:47.666059   86402 cri.go:89] found id: ""
	I1104 12:11:47.666087   86402 logs.go:282] 0 containers: []
	W1104 12:11:47.666095   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:11:47.666101   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:11:47.666147   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:11:47.700659   86402 cri.go:89] found id: ""
	I1104 12:11:47.700690   86402 logs.go:282] 0 containers: []
	W1104 12:11:47.700704   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:11:47.700711   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:11:47.700771   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:11:47.732901   86402 cri.go:89] found id: ""
	I1104 12:11:47.732927   86402 logs.go:282] 0 containers: []
	W1104 12:11:47.732934   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:11:47.732940   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:11:47.732984   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:11:47.765371   86402 cri.go:89] found id: ""
	I1104 12:11:47.765398   86402 logs.go:282] 0 containers: []
	W1104 12:11:47.765418   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:11:47.765425   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:11:47.765487   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:11:47.797043   86402 cri.go:89] found id: ""
	I1104 12:11:47.797077   86402 logs.go:282] 0 containers: []
	W1104 12:11:47.797089   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:11:47.797096   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:11:47.797159   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:11:47.828140   86402 cri.go:89] found id: ""
	I1104 12:11:47.828172   86402 logs.go:282] 0 containers: []
	W1104 12:11:47.828184   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:11:47.828194   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:11:47.828208   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:11:47.911398   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:11:47.911434   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:11:47.948042   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:11:47.948071   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:11:47.999603   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:11:47.999638   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:11:48.013818   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:11:48.013856   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:11:48.082679   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:11:50.583325   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:11:50.595272   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:11:50.595346   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:11:50.630857   86402 cri.go:89] found id: ""
	I1104 12:11:50.630883   86402 logs.go:282] 0 containers: []
	W1104 12:11:50.630892   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:11:50.630899   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:11:50.630965   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:11:50.663025   86402 cri.go:89] found id: ""
	I1104 12:11:50.663049   86402 logs.go:282] 0 containers: []
	W1104 12:11:50.663058   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:11:50.663063   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:11:50.663109   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:11:50.695371   86402 cri.go:89] found id: ""
	I1104 12:11:50.695402   86402 logs.go:282] 0 containers: []
	W1104 12:11:50.695413   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:11:50.695421   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:11:50.695480   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:11:50.728805   86402 cri.go:89] found id: ""
	I1104 12:11:50.728827   86402 logs.go:282] 0 containers: []
	W1104 12:11:50.728836   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:11:50.728841   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:11:50.728902   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:11:50.762837   86402 cri.go:89] found id: ""
	I1104 12:11:50.762868   86402 logs.go:282] 0 containers: []
	W1104 12:11:50.762878   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:11:50.762885   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:11:50.762941   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:11:50.802531   86402 cri.go:89] found id: ""
	I1104 12:11:50.802556   86402 logs.go:282] 0 containers: []
	W1104 12:11:50.802564   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:11:50.802569   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:11:50.802613   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:11:50.835124   86402 cri.go:89] found id: ""
	I1104 12:11:50.835161   86402 logs.go:282] 0 containers: []
	W1104 12:11:50.835173   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:11:50.835180   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:11:50.835234   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:11:50.869265   86402 cri.go:89] found id: ""
	I1104 12:11:50.869295   86402 logs.go:282] 0 containers: []
	W1104 12:11:50.869308   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:11:50.869318   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:11:50.869330   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:11:50.919371   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:11:50.919405   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:11:50.932165   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:11:50.932195   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:11:50.993935   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:11:50.993959   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:11:50.993972   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:11:51.071816   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:11:51.071848   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:11:52.208101   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:54.707467   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:51.056129   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:53.057025   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:53.349902   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:55.350302   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:53.608347   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:11:53.620842   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:11:53.620902   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:11:53.652870   86402 cri.go:89] found id: ""
	I1104 12:11:53.652896   86402 logs.go:282] 0 containers: []
	W1104 12:11:53.652909   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:11:53.652917   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:11:53.652980   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:11:53.684842   86402 cri.go:89] found id: ""
	I1104 12:11:53.684878   86402 logs.go:282] 0 containers: []
	W1104 12:11:53.684889   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:11:53.684897   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:11:53.684956   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:11:53.722505   86402 cri.go:89] found id: ""
	I1104 12:11:53.722531   86402 logs.go:282] 0 containers: []
	W1104 12:11:53.722539   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:11:53.722544   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:11:53.722603   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:11:53.753831   86402 cri.go:89] found id: ""
	I1104 12:11:53.753858   86402 logs.go:282] 0 containers: []
	W1104 12:11:53.753866   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:11:53.753872   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:11:53.753918   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:11:53.786112   86402 cri.go:89] found id: ""
	I1104 12:11:53.786139   86402 logs.go:282] 0 containers: []
	W1104 12:11:53.786150   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:11:53.786157   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:11:53.786218   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:11:53.820446   86402 cri.go:89] found id: ""
	I1104 12:11:53.820472   86402 logs.go:282] 0 containers: []
	W1104 12:11:53.820487   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:11:53.820493   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:11:53.820552   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:11:53.855631   86402 cri.go:89] found id: ""
	I1104 12:11:53.855655   86402 logs.go:282] 0 containers: []
	W1104 12:11:53.855665   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:11:53.855673   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:11:53.855727   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:11:53.887953   86402 cri.go:89] found id: ""
	I1104 12:11:53.887983   86402 logs.go:282] 0 containers: []
	W1104 12:11:53.887994   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:11:53.888004   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:11:53.888023   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:11:53.954408   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:11:53.954430   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:11:53.954442   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:11:54.028549   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:11:54.028584   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:11:54.070869   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:11:54.070895   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:11:54.123676   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:11:54.123715   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:11:56.639480   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:11:56.652651   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:11:56.652709   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:11:56.708211   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:59.207617   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:55.555992   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:58.056271   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:57.350474   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:59.850830   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:56.689397   86402 cri.go:89] found id: ""
	I1104 12:11:56.689425   86402 logs.go:282] 0 containers: []
	W1104 12:11:56.689443   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:11:56.689452   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:11:56.689517   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:11:56.725197   86402 cri.go:89] found id: ""
	I1104 12:11:56.725234   86402 logs.go:282] 0 containers: []
	W1104 12:11:56.725246   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:11:56.725254   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:11:56.725308   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:11:56.759043   86402 cri.go:89] found id: ""
	I1104 12:11:56.759073   86402 logs.go:282] 0 containers: []
	W1104 12:11:56.759084   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:11:56.759090   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:11:56.759141   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:11:56.792268   86402 cri.go:89] found id: ""
	I1104 12:11:56.792296   86402 logs.go:282] 0 containers: []
	W1104 12:11:56.792307   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:11:56.792314   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:11:56.792375   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:11:56.823668   86402 cri.go:89] found id: ""
	I1104 12:11:56.823692   86402 logs.go:282] 0 containers: []
	W1104 12:11:56.823702   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:11:56.823709   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:11:56.823769   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:11:56.861812   86402 cri.go:89] found id: ""
	I1104 12:11:56.861837   86402 logs.go:282] 0 containers: []
	W1104 12:11:56.861845   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:11:56.861851   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:11:56.861902   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:11:56.894037   86402 cri.go:89] found id: ""
	I1104 12:11:56.894067   86402 logs.go:282] 0 containers: []
	W1104 12:11:56.894075   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:11:56.894080   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:11:56.894133   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:11:56.925603   86402 cri.go:89] found id: ""
	I1104 12:11:56.925634   86402 logs.go:282] 0 containers: []
	W1104 12:11:56.925646   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:11:56.925656   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:11:56.925669   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:11:56.961504   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:11:56.961530   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:11:57.012666   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:11:57.012700   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:11:57.025887   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:11:57.025921   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:11:57.097219   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:11:57.097257   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:11:57.097272   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:11:59.671179   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:11:59.684642   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:11:59.684718   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:11:59.721599   86402 cri.go:89] found id: ""
	I1104 12:11:59.721622   86402 logs.go:282] 0 containers: []
	W1104 12:11:59.721631   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:11:59.721640   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:11:59.721693   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:11:59.757423   86402 cri.go:89] found id: ""
	I1104 12:11:59.757453   86402 logs.go:282] 0 containers: []
	W1104 12:11:59.757461   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:11:59.757466   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:11:59.757525   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:11:59.794036   86402 cri.go:89] found id: ""
	I1104 12:11:59.794071   86402 logs.go:282] 0 containers: []
	W1104 12:11:59.794081   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:11:59.794089   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:11:59.794148   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:11:59.830098   86402 cri.go:89] found id: ""
	I1104 12:11:59.830123   86402 logs.go:282] 0 containers: []
	W1104 12:11:59.830134   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:11:59.830142   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:11:59.830207   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:11:59.867791   86402 cri.go:89] found id: ""
	I1104 12:11:59.867815   86402 logs.go:282] 0 containers: []
	W1104 12:11:59.867823   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:11:59.867828   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:11:59.867879   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:11:59.903579   86402 cri.go:89] found id: ""
	I1104 12:11:59.903607   86402 logs.go:282] 0 containers: []
	W1104 12:11:59.903614   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:11:59.903620   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:11:59.903667   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:11:59.940955   86402 cri.go:89] found id: ""
	I1104 12:11:59.940977   86402 logs.go:282] 0 containers: []
	W1104 12:11:59.940984   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:11:59.940989   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:11:59.941034   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:11:59.977626   86402 cri.go:89] found id: ""
	I1104 12:11:59.977653   86402 logs.go:282] 0 containers: []
	W1104 12:11:59.977663   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:11:59.977674   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:11:59.977687   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:12:00.032280   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:12:00.032312   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:12:00.045965   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:12:00.045991   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:12:00.123578   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:12:00.123608   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:12:00.123625   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:12:00.208309   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:12:00.208340   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:12:01.707661   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:04.207140   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:00.555683   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:02.555797   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:04.556558   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:01.851646   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:01.851680   85759 pod_ready.go:82] duration metric: took 4m0.007179751s for pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace to be "Ready" ...
	E1104 12:12:01.851691   85759 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1104 12:12:01.851701   85759 pod_ready.go:39] duration metric: took 4m4.052369029s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1104 12:12:01.851721   85759 api_server.go:52] waiting for apiserver process to appear ...
	I1104 12:12:01.851752   85759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:12:01.851805   85759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:12:01.891029   85759 cri.go:89] found id: "6e7999c6e5a24f7d8706b6722e1fe66996ecec4550d4a901729cac1f3f108f28"
	I1104 12:12:01.891056   85759 cri.go:89] found id: ""
	I1104 12:12:01.891066   85759 logs.go:282] 1 containers: [6e7999c6e5a24f7d8706b6722e1fe66996ecec4550d4a901729cac1f3f108f28]
	I1104 12:12:01.891128   85759 ssh_runner.go:195] Run: which crictl
	I1104 12:12:01.895134   85759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:12:01.895243   85759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:12:01.928058   85759 cri.go:89] found id: "5b575c045ea6e47b4e5a1384b6b473570efefa6f645c9c75d1eedf037230bc06"
	I1104 12:12:01.928081   85759 cri.go:89] found id: ""
	I1104 12:12:01.928089   85759 logs.go:282] 1 containers: [5b575c045ea6e47b4e5a1384b6b473570efefa6f645c9c75d1eedf037230bc06]
	I1104 12:12:01.928134   85759 ssh_runner.go:195] Run: which crictl
	I1104 12:12:01.931906   85759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:12:01.931974   85759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:12:01.972023   85759 cri.go:89] found id: "d1f0c1ed5e8911dccb619576be93e1623936a2b05ccc1e80d52ffe8ba1292d27"
	I1104 12:12:01.972052   85759 cri.go:89] found id: ""
	I1104 12:12:01.972062   85759 logs.go:282] 1 containers: [d1f0c1ed5e8911dccb619576be93e1623936a2b05ccc1e80d52ffe8ba1292d27]
	I1104 12:12:01.972116   85759 ssh_runner.go:195] Run: which crictl
	I1104 12:12:01.980811   85759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:12:01.980894   85759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:12:02.024013   85759 cri.go:89] found id: "a5a0cb5f09f9920c57a05bcc9c16cdcab6806c42458b625d4d075a9d1bc3f80f"
	I1104 12:12:02.024038   85759 cri.go:89] found id: ""
	I1104 12:12:02.024046   85759 logs.go:282] 1 containers: [a5a0cb5f09f9920c57a05bcc9c16cdcab6806c42458b625d4d075a9d1bc3f80f]
	I1104 12:12:02.024108   85759 ssh_runner.go:195] Run: which crictl
	I1104 12:12:02.028571   85759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:12:02.028641   85759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:12:02.063545   85759 cri.go:89] found id: "512d8563ff2ef5d1f8021fa8815d00d2705c8df3b079a23ee6fc909af1c980f0"
	I1104 12:12:02.063570   85759 cri.go:89] found id: ""
	I1104 12:12:02.063580   85759 logs.go:282] 1 containers: [512d8563ff2ef5d1f8021fa8815d00d2705c8df3b079a23ee6fc909af1c980f0]
	I1104 12:12:02.063635   85759 ssh_runner.go:195] Run: which crictl
	I1104 12:12:02.067582   85759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:12:02.067652   85759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:12:02.100954   85759 cri.go:89] found id: "5751adaa2cf784f7619b281e11898b312ecc8186b36029e9e8a3b8e484cd703b"
	I1104 12:12:02.100979   85759 cri.go:89] found id: ""
	I1104 12:12:02.100989   85759 logs.go:282] 1 containers: [5751adaa2cf784f7619b281e11898b312ecc8186b36029e9e8a3b8e484cd703b]
	I1104 12:12:02.101038   85759 ssh_runner.go:195] Run: which crictl
	I1104 12:12:02.105121   85759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:12:02.105182   85759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:12:02.137206   85759 cri.go:89] found id: ""
	I1104 12:12:02.137249   85759 logs.go:282] 0 containers: []
	W1104 12:12:02.137260   85759 logs.go:284] No container was found matching "kindnet"
	I1104 12:12:02.137268   85759 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1104 12:12:02.137317   85759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1104 12:12:02.171499   85759 cri.go:89] found id: "95a9eb50a127a39e6276205bf440bcbc662ba25b24e029dc1a667f48f8481dde"
	I1104 12:12:02.171520   85759 cri.go:89] found id: "c7558f4e108716f1710271ea75be748ffd928b609788942a3658f0d2237ebcc7"
	I1104 12:12:02.171526   85759 cri.go:89] found id: ""
	I1104 12:12:02.171535   85759 logs.go:282] 2 containers: [95a9eb50a127a39e6276205bf440bcbc662ba25b24e029dc1a667f48f8481dde c7558f4e108716f1710271ea75be748ffd928b609788942a3658f0d2237ebcc7]
	I1104 12:12:02.171587   85759 ssh_runner.go:195] Run: which crictl
	I1104 12:12:02.175646   85759 ssh_runner.go:195] Run: which crictl
	I1104 12:12:02.179066   85759 logs.go:123] Gathering logs for kubelet ...
	I1104 12:12:02.179084   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:12:02.249087   85759 logs.go:123] Gathering logs for dmesg ...
	I1104 12:12:02.249126   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:12:02.262666   85759 logs.go:123] Gathering logs for kube-apiserver [6e7999c6e5a24f7d8706b6722e1fe66996ecec4550d4a901729cac1f3f108f28] ...
	I1104 12:12:02.262692   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6e7999c6e5a24f7d8706b6722e1fe66996ecec4550d4a901729cac1f3f108f28"
	I1104 12:12:02.316826   85759 logs.go:123] Gathering logs for kube-scheduler [a5a0cb5f09f9920c57a05bcc9c16cdcab6806c42458b625d4d075a9d1bc3f80f] ...
	I1104 12:12:02.316856   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a5a0cb5f09f9920c57a05bcc9c16cdcab6806c42458b625d4d075a9d1bc3f80f"
	I1104 12:12:02.351741   85759 logs.go:123] Gathering logs for kube-controller-manager [5751adaa2cf784f7619b281e11898b312ecc8186b36029e9e8a3b8e484cd703b] ...
	I1104 12:12:02.351766   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5751adaa2cf784f7619b281e11898b312ecc8186b36029e9e8a3b8e484cd703b"
	I1104 12:12:02.400377   85759 logs.go:123] Gathering logs for storage-provisioner [c7558f4e108716f1710271ea75be748ffd928b609788942a3658f0d2237ebcc7] ...
	I1104 12:12:02.400409   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c7558f4e108716f1710271ea75be748ffd928b609788942a3658f0d2237ebcc7"
	I1104 12:12:02.448029   85759 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:12:02.448059   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:12:02.975331   85759 logs.go:123] Gathering logs for container status ...
	I1104 12:12:02.975367   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:12:03.026697   85759 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:12:03.026739   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1104 12:12:03.140704   85759 logs.go:123] Gathering logs for etcd [5b575c045ea6e47b4e5a1384b6b473570efefa6f645c9c75d1eedf037230bc06] ...
	I1104 12:12:03.140753   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b575c045ea6e47b4e5a1384b6b473570efefa6f645c9c75d1eedf037230bc06"
	I1104 12:12:03.192394   85759 logs.go:123] Gathering logs for coredns [d1f0c1ed5e8911dccb619576be93e1623936a2b05ccc1e80d52ffe8ba1292d27] ...
	I1104 12:12:03.192427   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d1f0c1ed5e8911dccb619576be93e1623936a2b05ccc1e80d52ffe8ba1292d27"
	I1104 12:12:03.236040   85759 logs.go:123] Gathering logs for kube-proxy [512d8563ff2ef5d1f8021fa8815d00d2705c8df3b079a23ee6fc909af1c980f0] ...
	I1104 12:12:03.236071   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 512d8563ff2ef5d1f8021fa8815d00d2705c8df3b079a23ee6fc909af1c980f0"
	I1104 12:12:03.275166   85759 logs.go:123] Gathering logs for storage-provisioner [95a9eb50a127a39e6276205bf440bcbc662ba25b24e029dc1a667f48f8481dde] ...
	I1104 12:12:03.275194   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 95a9eb50a127a39e6276205bf440bcbc662ba25b24e029dc1a667f48f8481dde"
	I1104 12:12:05.813333   85759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:12:05.827697   85759 api_server.go:72] duration metric: took 4m15.741105379s to wait for apiserver process to appear ...
	I1104 12:12:05.827725   85759 api_server.go:88] waiting for apiserver healthz status ...
	I1104 12:12:05.827763   85759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:12:05.827826   85759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:12:05.869552   85759 cri.go:89] found id: "6e7999c6e5a24f7d8706b6722e1fe66996ecec4550d4a901729cac1f3f108f28"
	I1104 12:12:05.869580   85759 cri.go:89] found id: ""
	I1104 12:12:05.869590   85759 logs.go:282] 1 containers: [6e7999c6e5a24f7d8706b6722e1fe66996ecec4550d4a901729cac1f3f108f28]
	I1104 12:12:05.869642   85759 ssh_runner.go:195] Run: which crictl
	I1104 12:12:05.873890   85759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:12:05.873954   85759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:12:05.914131   85759 cri.go:89] found id: "5b575c045ea6e47b4e5a1384b6b473570efefa6f645c9c75d1eedf037230bc06"
	I1104 12:12:05.914153   85759 cri.go:89] found id: ""
	I1104 12:12:05.914161   85759 logs.go:282] 1 containers: [5b575c045ea6e47b4e5a1384b6b473570efefa6f645c9c75d1eedf037230bc06]
	I1104 12:12:05.914216   85759 ssh_runner.go:195] Run: which crictl
	I1104 12:12:05.920980   85759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:12:05.921042   85759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:12:05.960930   85759 cri.go:89] found id: "d1f0c1ed5e8911dccb619576be93e1623936a2b05ccc1e80d52ffe8ba1292d27"
	I1104 12:12:05.960953   85759 cri.go:89] found id: ""
	I1104 12:12:05.960962   85759 logs.go:282] 1 containers: [d1f0c1ed5e8911dccb619576be93e1623936a2b05ccc1e80d52ffe8ba1292d27]
	I1104 12:12:05.961018   85759 ssh_runner.go:195] Run: which crictl
	I1104 12:12:05.965591   85759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:12:05.965653   85759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:12:06.000500   85759 cri.go:89] found id: "a5a0cb5f09f9920c57a05bcc9c16cdcab6806c42458b625d4d075a9d1bc3f80f"
	I1104 12:12:06.000520   85759 cri.go:89] found id: ""
	I1104 12:12:06.000526   85759 logs.go:282] 1 containers: [a5a0cb5f09f9920c57a05bcc9c16cdcab6806c42458b625d4d075a9d1bc3f80f]
	I1104 12:12:06.000576   85759 ssh_runner.go:195] Run: which crictl
	I1104 12:12:06.004775   85759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:12:06.004835   85759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:12:06.042011   85759 cri.go:89] found id: "512d8563ff2ef5d1f8021fa8815d00d2705c8df3b079a23ee6fc909af1c980f0"
	I1104 12:12:06.042032   85759 cri.go:89] found id: ""
	I1104 12:12:06.042041   85759 logs.go:282] 1 containers: [512d8563ff2ef5d1f8021fa8815d00d2705c8df3b079a23ee6fc909af1c980f0]
	I1104 12:12:06.042102   85759 ssh_runner.go:195] Run: which crictl
	I1104 12:12:06.047885   85759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:12:06.047952   85759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:12:06.084318   85759 cri.go:89] found id: "5751adaa2cf784f7619b281e11898b312ecc8186b36029e9e8a3b8e484cd703b"
	I1104 12:12:06.084341   85759 cri.go:89] found id: ""
	I1104 12:12:06.084349   85759 logs.go:282] 1 containers: [5751adaa2cf784f7619b281e11898b312ecc8186b36029e9e8a3b8e484cd703b]
	I1104 12:12:06.084410   85759 ssh_runner.go:195] Run: which crictl
	I1104 12:12:06.088487   85759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:12:06.088564   85759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:12:06.127693   85759 cri.go:89] found id: ""
	I1104 12:12:06.127721   85759 logs.go:282] 0 containers: []
	W1104 12:12:06.127730   85759 logs.go:284] No container was found matching "kindnet"
	I1104 12:12:06.127736   85759 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1104 12:12:06.127780   85759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1104 12:12:06.165173   85759 cri.go:89] found id: "95a9eb50a127a39e6276205bf440bcbc662ba25b24e029dc1a667f48f8481dde"
	I1104 12:12:06.165199   85759 cri.go:89] found id: "c7558f4e108716f1710271ea75be748ffd928b609788942a3658f0d2237ebcc7"
	I1104 12:12:06.165206   85759 cri.go:89] found id: ""
	I1104 12:12:06.165215   85759 logs.go:282] 2 containers: [95a9eb50a127a39e6276205bf440bcbc662ba25b24e029dc1a667f48f8481dde c7558f4e108716f1710271ea75be748ffd928b609788942a3658f0d2237ebcc7]
	I1104 12:12:06.165302   85759 ssh_runner.go:195] Run: which crictl
	I1104 12:12:06.169479   85759 ssh_runner.go:195] Run: which crictl
	I1104 12:12:06.173154   85759 logs.go:123] Gathering logs for kubelet ...
	I1104 12:12:06.173177   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:12:02.746303   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:12:02.758892   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:12:02.758967   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:12:02.792775   86402 cri.go:89] found id: ""
	I1104 12:12:02.792803   86402 logs.go:282] 0 containers: []
	W1104 12:12:02.792815   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:12:02.792822   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:12:02.792878   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:12:02.831073   86402 cri.go:89] found id: ""
	I1104 12:12:02.831097   86402 logs.go:282] 0 containers: []
	W1104 12:12:02.831108   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:12:02.831115   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:12:02.831174   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:12:02.863530   86402 cri.go:89] found id: ""
	I1104 12:12:02.863557   86402 logs.go:282] 0 containers: []
	W1104 12:12:02.863568   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:12:02.863574   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:12:02.863641   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:12:02.894894   86402 cri.go:89] found id: ""
	I1104 12:12:02.894924   86402 logs.go:282] 0 containers: []
	W1104 12:12:02.894934   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:12:02.894942   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:12:02.894996   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:12:02.930052   86402 cri.go:89] found id: ""
	I1104 12:12:02.930081   86402 logs.go:282] 0 containers: []
	W1104 12:12:02.930092   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:12:02.930100   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:12:02.930160   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:12:02.964503   86402 cri.go:89] found id: ""
	I1104 12:12:02.964532   86402 logs.go:282] 0 containers: []
	W1104 12:12:02.964544   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:12:02.964551   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:12:02.964610   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:12:02.998065   86402 cri.go:89] found id: ""
	I1104 12:12:02.998088   86402 logs.go:282] 0 containers: []
	W1104 12:12:02.998096   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:12:02.998102   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:12:02.998148   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:12:03.033579   86402 cri.go:89] found id: ""
	I1104 12:12:03.033604   86402 logs.go:282] 0 containers: []
	W1104 12:12:03.033613   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:12:03.033621   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:12:03.033630   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:12:03.086215   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:12:03.086249   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:12:03.100100   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:12:03.100136   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:12:03.168116   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:12:03.168150   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:12:03.168165   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:12:03.253608   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:12:03.253642   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
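
The block above is one full pass of minikube's log-collection loop on a node where the control plane never came up: for each expected component (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, kubernetes-dashboard) it runs "sudo crictl ps -a --quiet --name=<component>" and, when the ID list comes back empty, logs "No container was found matching". Below is a minimal local sketch of that probe pattern, assuming crictl is on PATH and sudo works non-interactively; the real harness drives these commands over SSH via minikube's ssh_runner, so this is an illustration of the pattern, not minikube's implementation.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// probe mirrors the pattern in the log: list all containers (running or not)
// whose name matches a control-plane component and return their IDs.
func probe(component string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
	if err != nil {
		return nil, err
	}
	// --quiet prints one container ID per line; an empty output means no match.
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		ids, err := probe(c)
		if err != nil {
			fmt.Printf("probe %s failed: %v\n", c, err)
			continue
		}
		if len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", c)
			continue
		}
		fmt.Printf("%s: %d container(s): %v\n", c, len(ids), ids)
	}
}

When every probe returns an empty list, as in the log above, the subsequent "describe nodes" and API calls are bound to fail, which is exactly what the following cycles show.
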
	I1104 12:12:05.792913   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:12:05.806494   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:12:05.806568   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:12:05.854379   86402 cri.go:89] found id: ""
	I1104 12:12:05.854406   86402 logs.go:282] 0 containers: []
	W1104 12:12:05.854417   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:12:05.854425   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:12:05.854503   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:12:05.886144   86402 cri.go:89] found id: ""
	I1104 12:12:05.886169   86402 logs.go:282] 0 containers: []
	W1104 12:12:05.886179   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:12:05.886186   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:12:05.886248   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:12:05.917462   86402 cri.go:89] found id: ""
	I1104 12:12:05.917482   86402 logs.go:282] 0 containers: []
	W1104 12:12:05.917492   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:12:05.917499   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:12:05.917550   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:12:05.954065   86402 cri.go:89] found id: ""
	I1104 12:12:05.954099   86402 logs.go:282] 0 containers: []
	W1104 12:12:05.954110   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:12:05.954120   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:12:05.954194   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:12:05.990935   86402 cri.go:89] found id: ""
	I1104 12:12:05.990966   86402 logs.go:282] 0 containers: []
	W1104 12:12:05.990977   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:12:05.990984   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:12:05.991050   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:12:06.032175   86402 cri.go:89] found id: ""
	I1104 12:12:06.032198   86402 logs.go:282] 0 containers: []
	W1104 12:12:06.032206   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:12:06.032211   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:12:06.032269   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:12:06.069215   86402 cri.go:89] found id: ""
	I1104 12:12:06.069262   86402 logs.go:282] 0 containers: []
	W1104 12:12:06.069275   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:12:06.069282   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:12:06.069340   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:12:06.103065   86402 cri.go:89] found id: ""
	I1104 12:12:06.103106   86402 logs.go:282] 0 containers: []
	W1104 12:12:06.103117   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:12:06.103127   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:12:06.103145   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:12:06.184111   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:12:06.184135   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:12:06.184149   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:12:06.272720   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:12:06.272760   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:12:06.315596   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:12:06.315636   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:12:06.376054   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:12:06.376110   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:12:06.214237   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:08.707098   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:07.056531   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:09.056763   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:06.252295   85759 logs.go:123] Gathering logs for kube-apiserver [6e7999c6e5a24f7d8706b6722e1fe66996ecec4550d4a901729cac1f3f108f28] ...
	I1104 12:12:06.252326   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6e7999c6e5a24f7d8706b6722e1fe66996ecec4550d4a901729cac1f3f108f28"
	I1104 12:12:06.302739   85759 logs.go:123] Gathering logs for etcd [5b575c045ea6e47b4e5a1384b6b473570efefa6f645c9c75d1eedf037230bc06] ...
	I1104 12:12:06.302769   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b575c045ea6e47b4e5a1384b6b473570efefa6f645c9c75d1eedf037230bc06"
	I1104 12:12:06.361279   85759 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:12:06.361307   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:12:06.811335   85759 logs.go:123] Gathering logs for container status ...
	I1104 12:12:06.811380   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:12:06.851356   85759 logs.go:123] Gathering logs for storage-provisioner [95a9eb50a127a39e6276205bf440bcbc662ba25b24e029dc1a667f48f8481dde] ...
	I1104 12:12:06.851387   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 95a9eb50a127a39e6276205bf440bcbc662ba25b24e029dc1a667f48f8481dde"
	I1104 12:12:06.888753   85759 logs.go:123] Gathering logs for storage-provisioner [c7558f4e108716f1710271ea75be748ffd928b609788942a3658f0d2237ebcc7] ...
	I1104 12:12:06.888789   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c7558f4e108716f1710271ea75be748ffd928b609788942a3658f0d2237ebcc7"
	I1104 12:12:06.922406   85759 logs.go:123] Gathering logs for dmesg ...
	I1104 12:12:06.922438   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:12:06.935028   85759 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:12:06.935057   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1104 12:12:07.033975   85759 logs.go:123] Gathering logs for coredns [d1f0c1ed5e8911dccb619576be93e1623936a2b05ccc1e80d52ffe8ba1292d27] ...
	I1104 12:12:07.034019   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d1f0c1ed5e8911dccb619576be93e1623936a2b05ccc1e80d52ffe8ba1292d27"
	I1104 12:12:07.068680   85759 logs.go:123] Gathering logs for kube-scheduler [a5a0cb5f09f9920c57a05bcc9c16cdcab6806c42458b625d4d075a9d1bc3f80f] ...
	I1104 12:12:07.068706   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a5a0cb5f09f9920c57a05bcc9c16cdcab6806c42458b625d4d075a9d1bc3f80f"
	I1104 12:12:07.105150   85759 logs.go:123] Gathering logs for kube-proxy [512d8563ff2ef5d1f8021fa8815d00d2705c8df3b079a23ee6fc909af1c980f0] ...
	I1104 12:12:07.105182   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 512d8563ff2ef5d1f8021fa8815d00d2705c8df3b079a23ee6fc909af1c980f0"
	I1104 12:12:07.139258   85759 logs.go:123] Gathering logs for kube-controller-manager [5751adaa2cf784f7619b281e11898b312ecc8186b36029e9e8a3b8e484cd703b] ...
	I1104 12:12:07.139290   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5751adaa2cf784f7619b281e11898b312ecc8186b36029e9e8a3b8e484cd703b"
	I1104 12:12:09.695630   85759 api_server.go:253] Checking apiserver healthz at https://192.168.39.47:8443/healthz ...
	I1104 12:12:09.701156   85759 api_server.go:279] https://192.168.39.47:8443/healthz returned 200:
	ok
	I1104 12:12:09.702414   85759 api_server.go:141] control plane version: v1.31.2
	I1104 12:12:09.702441   85759 api_server.go:131] duration metric: took 3.874707524s to wait for apiserver health ...
	I1104 12:12:09.702451   85759 system_pods.go:43] waiting for kube-system pods to appear ...
	I1104 12:12:09.702475   85759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:12:09.702530   85759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:12:09.736867   85759 cri.go:89] found id: "6e7999c6e5a24f7d8706b6722e1fe66996ecec4550d4a901729cac1f3f108f28"
	I1104 12:12:09.736891   85759 cri.go:89] found id: ""
	I1104 12:12:09.736901   85759 logs.go:282] 1 containers: [6e7999c6e5a24f7d8706b6722e1fe66996ecec4550d4a901729cac1f3f108f28]
	I1104 12:12:09.736956   85759 ssh_runner.go:195] Run: which crictl
	I1104 12:12:09.741108   85759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:12:09.741176   85759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:12:09.780460   85759 cri.go:89] found id: "5b575c045ea6e47b4e5a1384b6b473570efefa6f645c9c75d1eedf037230bc06"
	I1104 12:12:09.780483   85759 cri.go:89] found id: ""
	I1104 12:12:09.780490   85759 logs.go:282] 1 containers: [5b575c045ea6e47b4e5a1384b6b473570efefa6f645c9c75d1eedf037230bc06]
	I1104 12:12:09.780535   85759 ssh_runner.go:195] Run: which crictl
	I1104 12:12:09.784698   85759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:12:09.784756   85759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:12:09.823042   85759 cri.go:89] found id: "d1f0c1ed5e8911dccb619576be93e1623936a2b05ccc1e80d52ffe8ba1292d27"
	I1104 12:12:09.823059   85759 cri.go:89] found id: ""
	I1104 12:12:09.823068   85759 logs.go:282] 1 containers: [d1f0c1ed5e8911dccb619576be93e1623936a2b05ccc1e80d52ffe8ba1292d27]
	I1104 12:12:09.823123   85759 ssh_runner.go:195] Run: which crictl
	I1104 12:12:09.826750   85759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:12:09.826803   85759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:12:09.859148   85759 cri.go:89] found id: "a5a0cb5f09f9920c57a05bcc9c16cdcab6806c42458b625d4d075a9d1bc3f80f"
	I1104 12:12:09.859175   85759 cri.go:89] found id: ""
	I1104 12:12:09.859185   85759 logs.go:282] 1 containers: [a5a0cb5f09f9920c57a05bcc9c16cdcab6806c42458b625d4d075a9d1bc3f80f]
	I1104 12:12:09.859245   85759 ssh_runner.go:195] Run: which crictl
	I1104 12:12:09.863676   85759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:12:09.863739   85759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:12:09.901737   85759 cri.go:89] found id: "512d8563ff2ef5d1f8021fa8815d00d2705c8df3b079a23ee6fc909af1c980f0"
	I1104 12:12:09.901766   85759 cri.go:89] found id: ""
	I1104 12:12:09.901783   85759 logs.go:282] 1 containers: [512d8563ff2ef5d1f8021fa8815d00d2705c8df3b079a23ee6fc909af1c980f0]
	I1104 12:12:09.901843   85759 ssh_runner.go:195] Run: which crictl
	I1104 12:12:09.905931   85759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:12:09.905993   85759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:12:09.942617   85759 cri.go:89] found id: "5751adaa2cf784f7619b281e11898b312ecc8186b36029e9e8a3b8e484cd703b"
	I1104 12:12:09.942637   85759 cri.go:89] found id: ""
	I1104 12:12:09.942644   85759 logs.go:282] 1 containers: [5751adaa2cf784f7619b281e11898b312ecc8186b36029e9e8a3b8e484cd703b]
	I1104 12:12:09.942687   85759 ssh_runner.go:195] Run: which crictl
	I1104 12:12:09.946420   85759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:12:09.946481   85759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:12:09.984891   85759 cri.go:89] found id: ""
	I1104 12:12:09.984921   85759 logs.go:282] 0 containers: []
	W1104 12:12:09.984932   85759 logs.go:284] No container was found matching "kindnet"
	I1104 12:12:09.984939   85759 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1104 12:12:09.985000   85759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1104 12:12:10.018332   85759 cri.go:89] found id: "95a9eb50a127a39e6276205bf440bcbc662ba25b24e029dc1a667f48f8481dde"
	I1104 12:12:10.018357   85759 cri.go:89] found id: "c7558f4e108716f1710271ea75be748ffd928b609788942a3658f0d2237ebcc7"
	I1104 12:12:10.018363   85759 cri.go:89] found id: ""
	I1104 12:12:10.018374   85759 logs.go:282] 2 containers: [95a9eb50a127a39e6276205bf440bcbc662ba25b24e029dc1a667f48f8481dde c7558f4e108716f1710271ea75be748ffd928b609788942a3658f0d2237ebcc7]
	I1104 12:12:10.018434   85759 ssh_runner.go:195] Run: which crictl
	I1104 12:12:10.022995   85759 ssh_runner.go:195] Run: which crictl
	I1104 12:12:10.026853   85759 logs.go:123] Gathering logs for etcd [5b575c045ea6e47b4e5a1384b6b473570efefa6f645c9c75d1eedf037230bc06] ...
	I1104 12:12:10.026878   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b575c045ea6e47b4e5a1384b6b473570efefa6f645c9c75d1eedf037230bc06"
	I1104 12:12:10.083384   85759 logs.go:123] Gathering logs for kube-controller-manager [5751adaa2cf784f7619b281e11898b312ecc8186b36029e9e8a3b8e484cd703b] ...
	I1104 12:12:10.083421   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5751adaa2cf784f7619b281e11898b312ecc8186b36029e9e8a3b8e484cd703b"
	I1104 12:12:10.136576   85759 logs.go:123] Gathering logs for storage-provisioner [95a9eb50a127a39e6276205bf440bcbc662ba25b24e029dc1a667f48f8481dde] ...
	I1104 12:12:10.136608   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 95a9eb50a127a39e6276205bf440bcbc662ba25b24e029dc1a667f48f8481dde"
	I1104 12:12:10.182808   85759 logs.go:123] Gathering logs for storage-provisioner [c7558f4e108716f1710271ea75be748ffd928b609788942a3658f0d2237ebcc7] ...
	I1104 12:12:10.182837   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c7558f4e108716f1710271ea75be748ffd928b609788942a3658f0d2237ebcc7"
	I1104 12:12:10.217017   85759 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:12:10.217047   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:12:10.598972   85759 logs.go:123] Gathering logs for container status ...
	I1104 12:12:10.599010   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:12:10.638827   85759 logs.go:123] Gathering logs for dmesg ...
	I1104 12:12:10.638868   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:12:10.652880   85759 logs.go:123] Gathering logs for kube-apiserver [6e7999c6e5a24f7d8706b6722e1fe66996ecec4550d4a901729cac1f3f108f28] ...
	I1104 12:12:10.652923   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6e7999c6e5a24f7d8706b6722e1fe66996ecec4550d4a901729cac1f3f108f28"
	I1104 12:12:10.700645   85759 logs.go:123] Gathering logs for coredns [d1f0c1ed5e8911dccb619576be93e1623936a2b05ccc1e80d52ffe8ba1292d27] ...
	I1104 12:12:10.700675   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d1f0c1ed5e8911dccb619576be93e1623936a2b05ccc1e80d52ffe8ba1292d27"
	I1104 12:12:10.734860   85759 logs.go:123] Gathering logs for kube-scheduler [a5a0cb5f09f9920c57a05bcc9c16cdcab6806c42458b625d4d075a9d1bc3f80f] ...
	I1104 12:12:10.734890   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a5a0cb5f09f9920c57a05bcc9c16cdcab6806c42458b625d4d075a9d1bc3f80f"
	I1104 12:12:10.774613   85759 logs.go:123] Gathering logs for kube-proxy [512d8563ff2ef5d1f8021fa8815d00d2705c8df3b079a23ee6fc909af1c980f0] ...
	I1104 12:12:10.774647   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 512d8563ff2ef5d1f8021fa8815d00d2705c8df3b079a23ee6fc909af1c980f0"
	I1104 12:12:10.808375   85759 logs.go:123] Gathering logs for kubelet ...
	I1104 12:12:10.808403   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:12:10.876130   85759 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:12:10.876165   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1104 12:12:08.890463   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:12:08.904272   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:12:08.904354   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:12:08.935677   86402 cri.go:89] found id: ""
	I1104 12:12:08.935701   86402 logs.go:282] 0 containers: []
	W1104 12:12:08.935710   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:12:08.935715   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:12:08.935761   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:12:08.966969   86402 cri.go:89] found id: ""
	I1104 12:12:08.966993   86402 logs.go:282] 0 containers: []
	W1104 12:12:08.967004   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:12:08.967011   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:12:08.967072   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:12:08.998753   86402 cri.go:89] found id: ""
	I1104 12:12:08.998778   86402 logs.go:282] 0 containers: []
	W1104 12:12:08.998786   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:12:08.998790   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:12:08.998852   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:12:09.031901   86402 cri.go:89] found id: ""
	I1104 12:12:09.031925   86402 logs.go:282] 0 containers: []
	W1104 12:12:09.031934   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:12:09.031940   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:12:09.032000   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:12:09.071478   86402 cri.go:89] found id: ""
	I1104 12:12:09.071500   86402 logs.go:282] 0 containers: []
	W1104 12:12:09.071508   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:12:09.071513   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:12:09.071564   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:12:09.107593   86402 cri.go:89] found id: ""
	I1104 12:12:09.107621   86402 logs.go:282] 0 containers: []
	W1104 12:12:09.107629   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:12:09.107635   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:12:09.107693   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:12:09.140899   86402 cri.go:89] found id: ""
	I1104 12:12:09.140923   86402 logs.go:282] 0 containers: []
	W1104 12:12:09.140934   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:12:09.140942   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:12:09.141000   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:12:09.174279   86402 cri.go:89] found id: ""
	I1104 12:12:09.174307   86402 logs.go:282] 0 containers: []
	W1104 12:12:09.174318   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:12:09.174330   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:12:09.174405   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:12:09.226340   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:12:09.226371   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:12:09.239573   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:12:09.239600   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:12:09.306180   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:12:09.306201   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:12:09.306212   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:12:09.385039   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:12:09.385072   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:12:13.475909   85759 system_pods.go:59] 8 kube-system pods found
	I1104 12:12:13.475946   85759 system_pods.go:61] "coredns-7c65d6cfc9-mf8xg" [c0162005-7971-4161-9575-9f36c13d54f2] Running
	I1104 12:12:13.475954   85759 system_pods.go:61] "etcd-embed-certs-325116" [4cfeeefb-d7e4-48b6-bea0-e9d967750770] Running
	I1104 12:12:13.475960   85759 system_pods.go:61] "kube-apiserver-embed-certs-325116" [69ad8209-af11-4479-b4f7-9991f98d74b9] Running
	I1104 12:12:13.475965   85759 system_pods.go:61] "kube-controller-manager-embed-certs-325116" [1ba1fbaf-e1e2-4ca7-aef5-84c4410143c4] Running
	I1104 12:12:13.475970   85759 system_pods.go:61] "kube-proxy-phzgx" [4ea64f2c-7568-486d-9941-f89ed4221f35] Running
	I1104 12:12:13.475975   85759 system_pods.go:61] "kube-scheduler-embed-certs-325116" [168359e4-eda1-4fb6-ab45-03e888466702] Running
	I1104 12:12:13.475985   85759 system_pods.go:61] "metrics-server-6867b74b74-knfd4" [5b3ef856-5b69-44b1-ae29-4a58bf235e41] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1104 12:12:13.475994   85759 system_pods.go:61] "storage-provisioner" [0dabcf5a-028b-4ab6-8af4-be25abaeb9b5] Running
	I1104 12:12:13.476008   85759 system_pods.go:74] duration metric: took 3.773548162s to wait for pod list to return data ...
	I1104 12:12:13.476020   85759 default_sa.go:34] waiting for default service account to be created ...
	I1104 12:12:13.478598   85759 default_sa.go:45] found service account: "default"
	I1104 12:12:13.478618   85759 default_sa.go:55] duration metric: took 2.591186ms for default service account to be created ...
	I1104 12:12:13.478628   85759 system_pods.go:116] waiting for k8s-apps to be running ...
	I1104 12:12:13.483285   85759 system_pods.go:86] 8 kube-system pods found
	I1104 12:12:13.483308   85759 system_pods.go:89] "coredns-7c65d6cfc9-mf8xg" [c0162005-7971-4161-9575-9f36c13d54f2] Running
	I1104 12:12:13.483314   85759 system_pods.go:89] "etcd-embed-certs-325116" [4cfeeefb-d7e4-48b6-bea0-e9d967750770] Running
	I1104 12:12:13.483318   85759 system_pods.go:89] "kube-apiserver-embed-certs-325116" [69ad8209-af11-4479-b4f7-9991f98d74b9] Running
	I1104 12:12:13.483322   85759 system_pods.go:89] "kube-controller-manager-embed-certs-325116" [1ba1fbaf-e1e2-4ca7-aef5-84c4410143c4] Running
	I1104 12:12:13.483325   85759 system_pods.go:89] "kube-proxy-phzgx" [4ea64f2c-7568-486d-9941-f89ed4221f35] Running
	I1104 12:12:13.483329   85759 system_pods.go:89] "kube-scheduler-embed-certs-325116" [168359e4-eda1-4fb6-ab45-03e888466702] Running
	I1104 12:12:13.483336   85759 system_pods.go:89] "metrics-server-6867b74b74-knfd4" [5b3ef856-5b69-44b1-ae29-4a58bf235e41] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1104 12:12:13.483340   85759 system_pods.go:89] "storage-provisioner" [0dabcf5a-028b-4ab6-8af4-be25abaeb9b5] Running
	I1104 12:12:13.483347   85759 system_pods.go:126] duration metric: took 4.713256ms to wait for k8s-apps to be running ...
	I1104 12:12:13.483355   85759 system_svc.go:44] waiting for kubelet service to be running ....
	I1104 12:12:13.483398   85759 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1104 12:12:13.497748   85759 system_svc.go:56] duration metric: took 14.381722ms WaitForService to wait for kubelet
	I1104 12:12:13.497812   85759 kubeadm.go:582] duration metric: took 4m23.411218278s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1104 12:12:13.497843   85759 node_conditions.go:102] verifying NodePressure condition ...
	I1104 12:12:13.500813   85759 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1104 12:12:13.500833   85759 node_conditions.go:123] node cpu capacity is 2
	I1104 12:12:13.500843   85759 node_conditions.go:105] duration metric: took 2.993771ms to run NodePressure ...
	I1104 12:12:13.500854   85759 start.go:241] waiting for startup goroutines ...
	I1104 12:12:13.500860   85759 start.go:246] waiting for cluster config update ...
	I1104 12:12:13.500870   85759 start.go:255] writing updated cluster config ...
	I1104 12:12:13.501122   85759 ssh_runner.go:195] Run: rm -f paused
	I1104 12:12:13.548293   85759 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1104 12:12:13.550203   85759 out.go:177] * Done! kubectl is now configured to use "embed-certs-325116" cluster and "default" namespace by default
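
The embed-certs-325116 run (pid 85759) shows the healthy counterpart of the same flow: the healthz probe against https://192.168.39.47:8443/healthz returns 200, the eight kube-system pods are enumerated, the default service account is found, and startup finishes with "Done!". The following is a minimal sketch of that readiness poll, assuming the apiserver certificate is not verified here and using an illustrative one-minute deadline (not minikube's exact timeout or transport configuration).

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns 200
// or the deadline passes, roughly mirroring the "Checking apiserver healthz"
// lines in the log above.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			// Illustrative only: skip certificate verification for the sketch.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // corresponds to "returned 200: ok" in the log
			}
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.47:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("apiserver healthy")
}

Once this check succeeds, the remaining waits in the log (system pods, default service account, kubelet service) are quick, which is why the 85759 run completes while the 86402 run keeps cycling.
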
	I1104 12:12:10.707746   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:12.708477   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:11.555266   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:13.555498   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:11.924105   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:12:11.936623   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:12:11.936685   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:12:11.968026   86402 cri.go:89] found id: ""
	I1104 12:12:11.968056   86402 logs.go:282] 0 containers: []
	W1104 12:12:11.968067   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:12:11.968074   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:12:11.968139   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:12:12.001193   86402 cri.go:89] found id: ""
	I1104 12:12:12.001218   86402 logs.go:282] 0 containers: []
	W1104 12:12:12.001245   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:12:12.001252   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:12:12.001311   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:12:12.035167   86402 cri.go:89] found id: ""
	I1104 12:12:12.035190   86402 logs.go:282] 0 containers: []
	W1104 12:12:12.035199   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:12:12.035204   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:12:12.035250   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:12:12.068412   86402 cri.go:89] found id: ""
	I1104 12:12:12.068440   86402 logs.go:282] 0 containers: []
	W1104 12:12:12.068450   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:12:12.068458   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:12:12.068515   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:12:12.099965   86402 cri.go:89] found id: ""
	I1104 12:12:12.099991   86402 logs.go:282] 0 containers: []
	W1104 12:12:12.100002   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:12:12.100009   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:12:12.100066   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:12:12.133413   86402 cri.go:89] found id: ""
	I1104 12:12:12.133442   86402 logs.go:282] 0 containers: []
	W1104 12:12:12.133453   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:12:12.133460   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:12:12.133520   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:12:12.169007   86402 cri.go:89] found id: ""
	I1104 12:12:12.169036   86402 logs.go:282] 0 containers: []
	W1104 12:12:12.169046   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:12:12.169053   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:12:12.169112   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:12:12.200592   86402 cri.go:89] found id: ""
	I1104 12:12:12.200621   86402 logs.go:282] 0 containers: []
	W1104 12:12:12.200635   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:12:12.200643   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:12:12.200657   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:12:12.244609   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:12:12.244644   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:12:12.299770   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:12:12.299804   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:12:12.324354   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:12:12.324395   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:12:12.385605   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:12:12.385632   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:12:12.385661   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:12:14.964867   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:12:14.977918   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:12:14.977991   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:12:15.012865   86402 cri.go:89] found id: ""
	I1104 12:12:15.012894   86402 logs.go:282] 0 containers: []
	W1104 12:12:15.012906   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:12:15.012913   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:12:15.012977   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:12:15.046548   86402 cri.go:89] found id: ""
	I1104 12:12:15.046574   86402 logs.go:282] 0 containers: []
	W1104 12:12:15.046583   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:12:15.046589   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:12:15.046636   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:12:15.079310   86402 cri.go:89] found id: ""
	I1104 12:12:15.079336   86402 logs.go:282] 0 containers: []
	W1104 12:12:15.079347   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:12:15.079353   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:12:15.079412   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:12:15.110595   86402 cri.go:89] found id: ""
	I1104 12:12:15.110625   86402 logs.go:282] 0 containers: []
	W1104 12:12:15.110636   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:12:15.110648   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:12:15.110716   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:12:15.143362   86402 cri.go:89] found id: ""
	I1104 12:12:15.143391   86402 logs.go:282] 0 containers: []
	W1104 12:12:15.143403   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:12:15.143410   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:12:15.143533   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:12:15.173973   86402 cri.go:89] found id: ""
	I1104 12:12:15.174000   86402 logs.go:282] 0 containers: []
	W1104 12:12:15.174009   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:12:15.174017   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:12:15.174081   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:12:15.205021   86402 cri.go:89] found id: ""
	I1104 12:12:15.205049   86402 logs.go:282] 0 containers: []
	W1104 12:12:15.205060   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:12:15.205067   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:12:15.205113   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:12:15.240190   86402 cri.go:89] found id: ""
	I1104 12:12:15.240220   86402 logs.go:282] 0 containers: []
	W1104 12:12:15.240231   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:12:15.240249   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:12:15.240263   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:12:15.290208   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:12:15.290241   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:12:15.305216   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:12:15.305258   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:12:15.375713   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:12:15.375735   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:12:15.375746   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:12:15.456517   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:12:15.456552   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:12:15.209380   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:17.708299   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:16.056359   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:18.556166   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:20.050834   86301 pod_ready.go:82] duration metric: took 4m0.001048639s for pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace to be "Ready" ...
	E1104 12:12:20.050863   86301 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1104 12:12:20.050874   86301 pod_ready.go:39] duration metric: took 4m5.585310983s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1104 12:12:20.050889   86301 api_server.go:52] waiting for apiserver process to appear ...
	I1104 12:12:20.050919   86301 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:12:20.050968   86301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:12:20.088440   86301 cri.go:89] found id: "2e1787441f88b7be6fbfb08abfa621dbc26e984bbd40544c92bf02bae7c7709a"
	I1104 12:12:20.088466   86301 cri.go:89] found id: ""
	I1104 12:12:20.088476   86301 logs.go:282] 1 containers: [2e1787441f88b7be6fbfb08abfa621dbc26e984bbd40544c92bf02bae7c7709a]
	I1104 12:12:20.088523   86301 ssh_runner.go:195] Run: which crictl
	I1104 12:12:20.092502   86301 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:12:20.092575   86301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:12:20.126599   86301 cri.go:89] found id: "1bc906f9e4e940157a2bb10397ebc3ba90f93123792d5de597633f6c5b3c64d7"
	I1104 12:12:20.126621   86301 cri.go:89] found id: ""
	I1104 12:12:20.126629   86301 logs.go:282] 1 containers: [1bc906f9e4e940157a2bb10397ebc3ba90f93123792d5de597633f6c5b3c64d7]
	I1104 12:12:20.126687   86301 ssh_runner.go:195] Run: which crictl
	I1104 12:12:20.130617   86301 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:12:20.130686   86301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:12:20.169664   86301 cri.go:89] found id: "51442200af1bb501b83cd7280eeee93590e5cb241e91cf5af1608a6eccfdf5a1"
	I1104 12:12:20.169687   86301 cri.go:89] found id: ""
	I1104 12:12:20.169696   86301 logs.go:282] 1 containers: [51442200af1bb501b83cd7280eeee93590e5cb241e91cf5af1608a6eccfdf5a1]
	I1104 12:12:20.169750   86301 ssh_runner.go:195] Run: which crictl
	I1104 12:12:20.173881   86301 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:12:20.173920   86301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:12:20.209271   86301 cri.go:89] found id: "c33ea99d25624abd85dcac43e17e1dd6dcea3a7b6333e91e8dc99ad02c037e07"
	I1104 12:12:20.209292   86301 cri.go:89] found id: ""
	I1104 12:12:20.209299   86301 logs.go:282] 1 containers: [c33ea99d25624abd85dcac43e17e1dd6dcea3a7b6333e91e8dc99ad02c037e07]
	I1104 12:12:20.209354   86301 ssh_runner.go:195] Run: which crictl
	I1104 12:12:20.214187   86301 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:12:20.214254   86301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:12:20.248683   86301 cri.go:89] found id: "9e60ae78d5610ae63ce7016b58d60da05791a055324eea67efd0feb374bdd4b4"
	I1104 12:12:20.248702   86301 cri.go:89] found id: ""
	I1104 12:12:20.248709   86301 logs.go:282] 1 containers: [9e60ae78d5610ae63ce7016b58d60da05791a055324eea67efd0feb374bdd4b4]
	I1104 12:12:20.248757   86301 ssh_runner.go:195] Run: which crictl
	I1104 12:12:20.252501   86301 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:12:20.252574   86301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:12:20.286367   86301 cri.go:89] found id: "1346cefb5059440d0c41bba9a5526748e6c783b8f935f34dcf419f728abcd35e"
	I1104 12:12:20.286406   86301 cri.go:89] found id: ""
	I1104 12:12:20.286415   86301 logs.go:282] 1 containers: [1346cefb5059440d0c41bba9a5526748e6c783b8f935f34dcf419f728abcd35e]
	I1104 12:12:20.286491   86301 ssh_runner.go:195] Run: which crictl
	I1104 12:12:17.992855   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:12:18.011370   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:12:18.011446   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:12:18.054937   86402 cri.go:89] found id: ""
	I1104 12:12:18.054961   86402 logs.go:282] 0 containers: []
	W1104 12:12:18.054968   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:12:18.054974   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:12:18.055026   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:12:18.107769   86402 cri.go:89] found id: ""
	I1104 12:12:18.107802   86402 logs.go:282] 0 containers: []
	W1104 12:12:18.107814   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:12:18.107821   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:12:18.107887   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:12:18.141932   86402 cri.go:89] found id: ""
	I1104 12:12:18.141959   86402 logs.go:282] 0 containers: []
	W1104 12:12:18.141968   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:12:18.141974   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:12:18.142021   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:12:18.174322   86402 cri.go:89] found id: ""
	I1104 12:12:18.174345   86402 logs.go:282] 0 containers: []
	W1104 12:12:18.174353   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:12:18.174361   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:12:18.174514   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:12:18.206742   86402 cri.go:89] found id: ""
	I1104 12:12:18.206766   86402 logs.go:282] 0 containers: []
	W1104 12:12:18.206776   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:12:18.206782   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:12:18.206840   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:12:18.240322   86402 cri.go:89] found id: ""
	I1104 12:12:18.240345   86402 logs.go:282] 0 containers: []
	W1104 12:12:18.240358   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:12:18.240363   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:12:18.240420   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:12:18.272081   86402 cri.go:89] found id: ""
	I1104 12:12:18.272110   86402 logs.go:282] 0 containers: []
	W1104 12:12:18.272121   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:12:18.272128   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:12:18.272211   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:12:18.308604   86402 cri.go:89] found id: ""
	I1104 12:12:18.308629   86402 logs.go:282] 0 containers: []
	W1104 12:12:18.308637   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:12:18.308646   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:12:18.308655   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:12:18.392854   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:12:18.392892   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:12:18.429632   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:12:18.429665   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:12:18.481082   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:12:18.481120   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:12:18.494730   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:12:18.494758   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:12:18.562098   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
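
Every "describe nodes" attempt in the old-k8s-version run (pid 86402) fails the same way: with no kube-apiserver container running, nothing listens on localhost:8443, so the bundled kubectl exits with status 1 and "connection refused". A small sketch, assuming only that the check is run on the node itself, of how that case can be told apart from an apiserver that is listening but unhealthy:

package main

import (
	"errors"
	"fmt"
	"net"
	"syscall"
	"time"
)

// checkPort reports whether anything is listening on the apiserver port.
// ECONNREFUSED means the port is closed, which matches the repeated kubectl
// failures in the log; a successful dial would mean the apiserver process is
// at least accepting connections even if /healthz is not yet passing.
func checkPort(addr string) {
	conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
	if err != nil {
		if errors.Is(err, syscall.ECONNREFUSED) {
			fmt.Printf("%s: connection refused - no apiserver listening\n", addr)
		} else {
			fmt.Printf("%s: dial failed: %v\n", addr, err)
		}
		return
	}
	conn.Close()
	fmt.Printf("%s: port open\n", addr)
}

func main() {
	checkPort("localhost:8443")
}
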
	I1104 12:12:21.063223   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:12:21.075655   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:12:21.075714   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:12:21.117762   86402 cri.go:89] found id: ""
	I1104 12:12:21.117794   86402 logs.go:282] 0 containers: []
	W1104 12:12:21.117807   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:12:21.117817   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:12:21.117881   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:12:21.153256   86402 cri.go:89] found id: ""
	I1104 12:12:21.153281   86402 logs.go:282] 0 containers: []
	W1104 12:12:21.153289   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:12:21.153295   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:12:21.153355   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:12:21.191477   86402 cri.go:89] found id: ""
	I1104 12:12:21.191519   86402 logs.go:282] 0 containers: []
	W1104 12:12:21.191539   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:12:21.191547   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:12:21.191618   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:12:21.228378   86402 cri.go:89] found id: ""
	I1104 12:12:21.228411   86402 logs.go:282] 0 containers: []
	W1104 12:12:21.228424   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:12:21.228431   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:12:21.228495   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:12:21.265452   86402 cri.go:89] found id: ""
	I1104 12:12:21.265483   86402 logs.go:282] 0 containers: []
	W1104 12:12:21.265493   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:12:21.265501   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:12:21.265561   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:12:21.301073   86402 cri.go:89] found id: ""
	I1104 12:12:21.301099   86402 logs.go:282] 0 containers: []
	W1104 12:12:21.301108   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:12:21.301114   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:12:21.301182   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:12:21.337952   86402 cri.go:89] found id: ""
	I1104 12:12:21.337977   86402 logs.go:282] 0 containers: []
	W1104 12:12:21.337986   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:12:21.337996   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:12:21.338053   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:12:21.371895   86402 cri.go:89] found id: ""
	I1104 12:12:21.371920   86402 logs.go:282] 0 containers: []
	W1104 12:12:21.371929   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:12:21.371937   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:12:21.371950   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:12:21.429757   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:12:21.429789   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:12:21.444365   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:12:21.444418   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:12:21.510971   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:12:21.510990   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:12:21.511002   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:12:21.593605   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:12:21.593639   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:12:20.208004   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:22.706901   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:24.708795   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
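
The pod_ready lines for pids 85500 and 86301 show the remaining failure mode: the metrics-server pods (metrics-server-6867b74b74-2lxlg and -2wl94) never report a Ready condition of True, so the wait keeps logging "Ready":"False" until its four-minute deadline expires with "context deadline exceeded". A minimal client-go sketch of the same condition check follows; the kubeconfig path, pod name, and polling loop are illustrative, not minikube's actual helpers.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady returns true when the pod carries a Ready condition with status
// True, which is the condition the pod_ready log lines above are polling for.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Illustrative kubeconfig location; adjust for wherever the check runs.
	config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	for i := 0; i < 10; i++ {
		pod, err := clientset.CoreV1().Pods("kube-system").Get(context.TODO(),
			"metrics-server-6867b74b74-2wl94", metav1.GetOptions{})
		if err != nil {
			fmt.Println("get pod:", err)
		} else if isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		} else {
			fmt.Printf("pod %q has status \"Ready\":\"False\"\n", pod.Name)
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("gave up waiting for Ready condition")
}
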
	I1104 12:12:20.290832   86301 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:12:20.290885   86301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:12:20.324359   86301 cri.go:89] found id: ""
	I1104 12:12:20.324383   86301 logs.go:282] 0 containers: []
	W1104 12:12:20.324391   86301 logs.go:284] No container was found matching "kindnet"
	I1104 12:12:20.324397   86301 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1104 12:12:20.324442   86301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1104 12:12:20.364466   86301 cri.go:89] found id: "9e9ecf7280a07a47b274097b461b9f3467e388751471e9da1adfae3166380516"
	I1104 12:12:20.364488   86301 cri.go:89] found id: "f8d8096ede6a877ac87491c90844f6cccb8fad4309f4a01e86bf1f7e3a4c9823"
	I1104 12:12:20.364492   86301 cri.go:89] found id: ""
	I1104 12:12:20.364500   86301 logs.go:282] 2 containers: [9e9ecf7280a07a47b274097b461b9f3467e388751471e9da1adfae3166380516 f8d8096ede6a877ac87491c90844f6cccb8fad4309f4a01e86bf1f7e3a4c9823]
	I1104 12:12:20.364557   86301 ssh_runner.go:195] Run: which crictl
	I1104 12:12:20.368440   86301 ssh_runner.go:195] Run: which crictl
	I1104 12:12:20.371967   86301 logs.go:123] Gathering logs for kube-scheduler [c33ea99d25624abd85dcac43e17e1dd6dcea3a7b6333e91e8dc99ad02c037e07] ...
	I1104 12:12:20.371991   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c33ea99d25624abd85dcac43e17e1dd6dcea3a7b6333e91e8dc99ad02c037e07"
	I1104 12:12:20.405547   86301 logs.go:123] Gathering logs for kube-proxy [9e60ae78d5610ae63ce7016b58d60da05791a055324eea67efd0feb374bdd4b4] ...
	I1104 12:12:20.405572   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e60ae78d5610ae63ce7016b58d60da05791a055324eea67efd0feb374bdd4b4"
	I1104 12:12:20.446936   86301 logs.go:123] Gathering logs for storage-provisioner [9e9ecf7280a07a47b274097b461b9f3467e388751471e9da1adfae3166380516] ...
	I1104 12:12:20.446962   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e9ecf7280a07a47b274097b461b9f3467e388751471e9da1adfae3166380516"
	I1104 12:12:20.485811   86301 logs.go:123] Gathering logs for container status ...
	I1104 12:12:20.485838   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:12:20.530775   86301 logs.go:123] Gathering logs for kubelet ...
	I1104 12:12:20.530803   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:12:20.599495   86301 logs.go:123] Gathering logs for dmesg ...
	I1104 12:12:20.599542   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:12:20.614511   86301 logs.go:123] Gathering logs for kube-apiserver [2e1787441f88b7be6fbfb08abfa621dbc26e984bbd40544c92bf02bae7c7709a] ...
	I1104 12:12:20.614543   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2e1787441f88b7be6fbfb08abfa621dbc26e984bbd40544c92bf02bae7c7709a"
	I1104 12:12:20.659277   86301 logs.go:123] Gathering logs for coredns [51442200af1bb501b83cd7280eeee93590e5cb241e91cf5af1608a6eccfdf5a1] ...
	I1104 12:12:20.659316   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 51442200af1bb501b83cd7280eeee93590e5cb241e91cf5af1608a6eccfdf5a1"
	I1104 12:12:20.694675   86301 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:12:20.694707   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:12:21.187670   86301 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:12:21.187705   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1104 12:12:21.308477   86301 logs.go:123] Gathering logs for etcd [1bc906f9e4e940157a2bb10397ebc3ba90f93123792d5de597633f6c5b3c64d7] ...
	I1104 12:12:21.308501   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1bc906f9e4e940157a2bb10397ebc3ba90f93123792d5de597633f6c5b3c64d7"
	I1104 12:12:21.365526   86301 logs.go:123] Gathering logs for kube-controller-manager [1346cefb5059440d0c41bba9a5526748e6c783b8f935f34dcf419f728abcd35e] ...
	I1104 12:12:21.365562   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1346cefb5059440d0c41bba9a5526748e6c783b8f935f34dcf419f728abcd35e"
	I1104 12:12:21.431350   86301 logs.go:123] Gathering logs for storage-provisioner [f8d8096ede6a877ac87491c90844f6cccb8fad4309f4a01e86bf1f7e3a4c9823] ...
	I1104 12:12:21.431381   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f8d8096ede6a877ac87491c90844f6cccb8fad4309f4a01e86bf1f7e3a4c9823"
	I1104 12:12:23.969966   86301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:12:23.984866   86301 api_server.go:72] duration metric: took 4m16.75797908s to wait for apiserver process to appear ...
	I1104 12:12:23.984895   86301 api_server.go:88] waiting for apiserver healthz status ...
	I1104 12:12:23.984937   86301 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:12:23.984989   86301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:12:24.022326   86301 cri.go:89] found id: "2e1787441f88b7be6fbfb08abfa621dbc26e984bbd40544c92bf02bae7c7709a"
	I1104 12:12:24.022348   86301 cri.go:89] found id: ""
	I1104 12:12:24.022357   86301 logs.go:282] 1 containers: [2e1787441f88b7be6fbfb08abfa621dbc26e984bbd40544c92bf02bae7c7709a]
	I1104 12:12:24.022428   86301 ssh_runner.go:195] Run: which crictl
	I1104 12:12:24.027288   86301 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:12:24.027377   86301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:12:24.064963   86301 cri.go:89] found id: "1bc906f9e4e940157a2bb10397ebc3ba90f93123792d5de597633f6c5b3c64d7"
	I1104 12:12:24.064986   86301 cri.go:89] found id: ""
	I1104 12:12:24.064993   86301 logs.go:282] 1 containers: [1bc906f9e4e940157a2bb10397ebc3ba90f93123792d5de597633f6c5b3c64d7]
	I1104 12:12:24.065045   86301 ssh_runner.go:195] Run: which crictl
	I1104 12:12:24.072027   86301 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:12:24.072089   86301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:12:24.106618   86301 cri.go:89] found id: "51442200af1bb501b83cd7280eeee93590e5cb241e91cf5af1608a6eccfdf5a1"
	I1104 12:12:24.106648   86301 cri.go:89] found id: ""
	I1104 12:12:24.106659   86301 logs.go:282] 1 containers: [51442200af1bb501b83cd7280eeee93590e5cb241e91cf5af1608a6eccfdf5a1]
	I1104 12:12:24.106719   86301 ssh_runner.go:195] Run: which crictl
	I1104 12:12:24.110696   86301 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:12:24.110762   86301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:12:24.148575   86301 cri.go:89] found id: "c33ea99d25624abd85dcac43e17e1dd6dcea3a7b6333e91e8dc99ad02c037e07"
	I1104 12:12:24.148600   86301 cri.go:89] found id: ""
	I1104 12:12:24.148621   86301 logs.go:282] 1 containers: [c33ea99d25624abd85dcac43e17e1dd6dcea3a7b6333e91e8dc99ad02c037e07]
	I1104 12:12:24.148687   86301 ssh_runner.go:195] Run: which crictl
	I1104 12:12:24.152673   86301 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:12:24.152741   86301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:12:24.187739   86301 cri.go:89] found id: "9e60ae78d5610ae63ce7016b58d60da05791a055324eea67efd0feb374bdd4b4"
	I1104 12:12:24.187763   86301 cri.go:89] found id: ""
	I1104 12:12:24.187771   86301 logs.go:282] 1 containers: [9e60ae78d5610ae63ce7016b58d60da05791a055324eea67efd0feb374bdd4b4]
	I1104 12:12:24.187817   86301 ssh_runner.go:195] Run: which crictl
	I1104 12:12:24.191551   86301 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:12:24.191610   86301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:12:24.229634   86301 cri.go:89] found id: "1346cefb5059440d0c41bba9a5526748e6c783b8f935f34dcf419f728abcd35e"
	I1104 12:12:24.229656   86301 cri.go:89] found id: ""
	I1104 12:12:24.229667   86301 logs.go:282] 1 containers: [1346cefb5059440d0c41bba9a5526748e6c783b8f935f34dcf419f728abcd35e]
	I1104 12:12:24.229720   86301 ssh_runner.go:195] Run: which crictl
	I1104 12:12:24.234342   86301 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:12:24.234426   86301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:12:24.268339   86301 cri.go:89] found id: ""
	I1104 12:12:24.268363   86301 logs.go:282] 0 containers: []
	W1104 12:12:24.268370   86301 logs.go:284] No container was found matching "kindnet"
	I1104 12:12:24.268375   86301 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1104 12:12:24.268426   86301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1104 12:12:24.302347   86301 cri.go:89] found id: "9e9ecf7280a07a47b274097b461b9f3467e388751471e9da1adfae3166380516"
	I1104 12:12:24.302369   86301 cri.go:89] found id: "f8d8096ede6a877ac87491c90844f6cccb8fad4309f4a01e86bf1f7e3a4c9823"
	I1104 12:12:24.302374   86301 cri.go:89] found id: ""
	I1104 12:12:24.302382   86301 logs.go:282] 2 containers: [9e9ecf7280a07a47b274097b461b9f3467e388751471e9da1adfae3166380516 f8d8096ede6a877ac87491c90844f6cccb8fad4309f4a01e86bf1f7e3a4c9823]
	I1104 12:12:24.302446   86301 ssh_runner.go:195] Run: which crictl
	I1104 12:12:24.306761   86301 ssh_runner.go:195] Run: which crictl
	I1104 12:12:24.310867   86301 logs.go:123] Gathering logs for coredns [51442200af1bb501b83cd7280eeee93590e5cb241e91cf5af1608a6eccfdf5a1] ...
	I1104 12:12:24.310888   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 51442200af1bb501b83cd7280eeee93590e5cb241e91cf5af1608a6eccfdf5a1"
	I1104 12:12:24.353396   86301 logs.go:123] Gathering logs for kube-controller-manager [1346cefb5059440d0c41bba9a5526748e6c783b8f935f34dcf419f728abcd35e] ...
	I1104 12:12:24.353421   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1346cefb5059440d0c41bba9a5526748e6c783b8f935f34dcf419f728abcd35e"
	I1104 12:12:24.408025   86301 logs.go:123] Gathering logs for storage-provisioner [9e9ecf7280a07a47b274097b461b9f3467e388751471e9da1adfae3166380516] ...
	I1104 12:12:24.408054   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e9ecf7280a07a47b274097b461b9f3467e388751471e9da1adfae3166380516"
	I1104 12:12:24.446150   86301 logs.go:123] Gathering logs for container status ...
	I1104 12:12:24.446177   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:12:24.495479   86301 logs.go:123] Gathering logs for kubelet ...
	I1104 12:12:24.495505   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:12:24.568973   86301 logs.go:123] Gathering logs for dmesg ...
	I1104 12:12:24.569008   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:12:24.585522   86301 logs.go:123] Gathering logs for kube-apiserver [2e1787441f88b7be6fbfb08abfa621dbc26e984bbd40544c92bf02bae7c7709a] ...
	I1104 12:12:24.585552   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2e1787441f88b7be6fbfb08abfa621dbc26e984bbd40544c92bf02bae7c7709a"
	I1104 12:12:24.630483   86301 logs.go:123] Gathering logs for etcd [1bc906f9e4e940157a2bb10397ebc3ba90f93123792d5de597633f6c5b3c64d7] ...
	I1104 12:12:24.630516   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1bc906f9e4e940157a2bb10397ebc3ba90f93123792d5de597633f6c5b3c64d7"
	I1104 12:12:24.675828   86301 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:12:24.675865   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:12:25.094412   86301 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:12:25.094457   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1104 12:12:25.191547   86301 logs.go:123] Gathering logs for kube-scheduler [c33ea99d25624abd85dcac43e17e1dd6dcea3a7b6333e91e8dc99ad02c037e07] ...
	I1104 12:12:25.191576   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c33ea99d25624abd85dcac43e17e1dd6dcea3a7b6333e91e8dc99ad02c037e07"
	I1104 12:12:25.227482   86301 logs.go:123] Gathering logs for kube-proxy [9e60ae78d5610ae63ce7016b58d60da05791a055324eea67efd0feb374bdd4b4] ...
	I1104 12:12:25.227509   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e60ae78d5610ae63ce7016b58d60da05791a055324eea67efd0feb374bdd4b4"
	I1104 12:12:25.261150   86301 logs.go:123] Gathering logs for storage-provisioner [f8d8096ede6a877ac87491c90844f6cccb8fad4309f4a01e86bf1f7e3a4c9823] ...
	I1104 12:12:25.261184   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f8d8096ede6a877ac87491c90844f6cccb8fad4309f4a01e86bf1f7e3a4c9823"
	I1104 12:12:24.130961   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:12:24.143387   86402 kubeadm.go:597] duration metric: took 4m4.25221988s to restartPrimaryControlPlane
	W1104 12:12:24.143472   86402 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1104 12:12:24.143499   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1104 12:12:27.207964   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:29.208705   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:27.799329   86301 api_server.go:253] Checking apiserver healthz at https://192.168.72.130:8444/healthz ...
	I1104 12:12:27.803543   86301 api_server.go:279] https://192.168.72.130:8444/healthz returned 200:
	ok
	I1104 12:12:27.804545   86301 api_server.go:141] control plane version: v1.31.2
	I1104 12:12:27.804568   86301 api_server.go:131] duration metric: took 3.819666619s to wait for apiserver health ...
	I1104 12:12:27.804576   86301 system_pods.go:43] waiting for kube-system pods to appear ...
	I1104 12:12:27.804596   86301 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:12:27.804639   86301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:12:27.842317   86301 cri.go:89] found id: "2e1787441f88b7be6fbfb08abfa621dbc26e984bbd40544c92bf02bae7c7709a"
	I1104 12:12:27.842339   86301 cri.go:89] found id: ""
	I1104 12:12:27.842348   86301 logs.go:282] 1 containers: [2e1787441f88b7be6fbfb08abfa621dbc26e984bbd40544c92bf02bae7c7709a]
	I1104 12:12:27.842403   86301 ssh_runner.go:195] Run: which crictl
	I1104 12:12:27.846107   86301 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:12:27.846167   86301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:12:27.878833   86301 cri.go:89] found id: "1bc906f9e4e940157a2bb10397ebc3ba90f93123792d5de597633f6c5b3c64d7"
	I1104 12:12:27.878854   86301 cri.go:89] found id: ""
	I1104 12:12:27.878864   86301 logs.go:282] 1 containers: [1bc906f9e4e940157a2bb10397ebc3ba90f93123792d5de597633f6c5b3c64d7]
	I1104 12:12:27.878923   86301 ssh_runner.go:195] Run: which crictl
	I1104 12:12:27.882562   86301 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:12:27.882614   86301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:12:27.914077   86301 cri.go:89] found id: "51442200af1bb501b83cd7280eeee93590e5cb241e91cf5af1608a6eccfdf5a1"
	I1104 12:12:27.914098   86301 cri.go:89] found id: ""
	I1104 12:12:27.914106   86301 logs.go:282] 1 containers: [51442200af1bb501b83cd7280eeee93590e5cb241e91cf5af1608a6eccfdf5a1]
	I1104 12:12:27.914150   86301 ssh_runner.go:195] Run: which crictl
	I1104 12:12:27.917756   86301 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:12:27.917807   86301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:12:27.949534   86301 cri.go:89] found id: "c33ea99d25624abd85dcac43e17e1dd6dcea3a7b6333e91e8dc99ad02c037e07"
	I1104 12:12:27.949555   86301 cri.go:89] found id: ""
	I1104 12:12:27.949562   86301 logs.go:282] 1 containers: [c33ea99d25624abd85dcac43e17e1dd6dcea3a7b6333e91e8dc99ad02c037e07]
	I1104 12:12:27.949606   86301 ssh_runner.go:195] Run: which crictl
	I1104 12:12:27.953176   86301 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:12:27.953235   86301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:12:27.984491   86301 cri.go:89] found id: "9e60ae78d5610ae63ce7016b58d60da05791a055324eea67efd0feb374bdd4b4"
	I1104 12:12:27.984509   86301 cri.go:89] found id: ""
	I1104 12:12:27.984516   86301 logs.go:282] 1 containers: [9e60ae78d5610ae63ce7016b58d60da05791a055324eea67efd0feb374bdd4b4]
	I1104 12:12:27.984566   86301 ssh_runner.go:195] Run: which crictl
	I1104 12:12:27.988283   86301 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:12:27.988342   86301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:12:28.022752   86301 cri.go:89] found id: "1346cefb5059440d0c41bba9a5526748e6c783b8f935f34dcf419f728abcd35e"
	I1104 12:12:28.022775   86301 cri.go:89] found id: ""
	I1104 12:12:28.022783   86301 logs.go:282] 1 containers: [1346cefb5059440d0c41bba9a5526748e6c783b8f935f34dcf419f728abcd35e]
	I1104 12:12:28.022829   86301 ssh_runner.go:195] Run: which crictl
	I1104 12:12:28.026702   86301 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:12:28.026767   86301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:12:28.062501   86301 cri.go:89] found id: ""
	I1104 12:12:28.062534   86301 logs.go:282] 0 containers: []
	W1104 12:12:28.062545   86301 logs.go:284] No container was found matching "kindnet"
	I1104 12:12:28.062556   86301 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1104 12:12:28.062608   86301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1104 12:12:28.097167   86301 cri.go:89] found id: "9e9ecf7280a07a47b274097b461b9f3467e388751471e9da1adfae3166380516"
	I1104 12:12:28.097195   86301 cri.go:89] found id: "f8d8096ede6a877ac87491c90844f6cccb8fad4309f4a01e86bf1f7e3a4c9823"
	I1104 12:12:28.097201   86301 cri.go:89] found id: ""
	I1104 12:12:28.097211   86301 logs.go:282] 2 containers: [9e9ecf7280a07a47b274097b461b9f3467e388751471e9da1adfae3166380516 f8d8096ede6a877ac87491c90844f6cccb8fad4309f4a01e86bf1f7e3a4c9823]
	I1104 12:12:28.097276   86301 ssh_runner.go:195] Run: which crictl
	I1104 12:12:28.101192   86301 ssh_runner.go:195] Run: which crictl
	I1104 12:12:28.104712   86301 logs.go:123] Gathering logs for dmesg ...
	I1104 12:12:28.104731   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:12:28.118886   86301 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:12:28.118911   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1104 12:12:28.220480   86301 logs.go:123] Gathering logs for etcd [1bc906f9e4e940157a2bb10397ebc3ba90f93123792d5de597633f6c5b3c64d7] ...
	I1104 12:12:28.220512   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1bc906f9e4e940157a2bb10397ebc3ba90f93123792d5de597633f6c5b3c64d7"
	I1104 12:12:28.264205   86301 logs.go:123] Gathering logs for coredns [51442200af1bb501b83cd7280eeee93590e5cb241e91cf5af1608a6eccfdf5a1] ...
	I1104 12:12:28.264239   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 51442200af1bb501b83cd7280eeee93590e5cb241e91cf5af1608a6eccfdf5a1"
	I1104 12:12:28.299241   86301 logs.go:123] Gathering logs for kube-scheduler [c33ea99d25624abd85dcac43e17e1dd6dcea3a7b6333e91e8dc99ad02c037e07] ...
	I1104 12:12:28.299274   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c33ea99d25624abd85dcac43e17e1dd6dcea3a7b6333e91e8dc99ad02c037e07"
	I1104 12:12:28.339817   86301 logs.go:123] Gathering logs for kube-proxy [9e60ae78d5610ae63ce7016b58d60da05791a055324eea67efd0feb374bdd4b4] ...
	I1104 12:12:28.339847   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e60ae78d5610ae63ce7016b58d60da05791a055324eea67efd0feb374bdd4b4"
	I1104 12:12:28.377987   86301 logs.go:123] Gathering logs for container status ...
	I1104 12:12:28.378014   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:12:28.416746   86301 logs.go:123] Gathering logs for kubelet ...
	I1104 12:12:28.416772   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:12:28.484743   86301 logs.go:123] Gathering logs for kube-apiserver [2e1787441f88b7be6fbfb08abfa621dbc26e984bbd40544c92bf02bae7c7709a] ...
	I1104 12:12:28.484777   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2e1787441f88b7be6fbfb08abfa621dbc26e984bbd40544c92bf02bae7c7709a"
	I1104 12:12:28.532089   86301 logs.go:123] Gathering logs for kube-controller-manager [1346cefb5059440d0c41bba9a5526748e6c783b8f935f34dcf419f728abcd35e] ...
	I1104 12:12:28.532128   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1346cefb5059440d0c41bba9a5526748e6c783b8f935f34dcf419f728abcd35e"
	I1104 12:12:28.589039   86301 logs.go:123] Gathering logs for storage-provisioner [9e9ecf7280a07a47b274097b461b9f3467e388751471e9da1adfae3166380516] ...
	I1104 12:12:28.589072   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e9ecf7280a07a47b274097b461b9f3467e388751471e9da1adfae3166380516"
	I1104 12:12:28.623955   86301 logs.go:123] Gathering logs for storage-provisioner [f8d8096ede6a877ac87491c90844f6cccb8fad4309f4a01e86bf1f7e3a4c9823] ...
	I1104 12:12:28.623987   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f8d8096ede6a877ac87491c90844f6cccb8fad4309f4a01e86bf1f7e3a4c9823"
	I1104 12:12:28.657953   86301 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:12:28.657986   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:12:31.547595   86301 system_pods.go:59] 8 kube-system pods found
	I1104 12:12:31.547624   86301 system_pods.go:61] "coredns-7c65d6cfc9-zw2tv" [71ce75a4-f051-4014-9ed0-7b275ea940a9] Running
	I1104 12:12:31.547629   86301 system_pods.go:61] "etcd-default-k8s-diff-port-036892" [7e46d97c-96b5-4301-b98a-f33b69937411] Running
	I1104 12:12:31.547633   86301 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-036892" [483cebd0-7ceb-4bf4-b1f9-e33be61b136e] Running
	I1104 12:12:31.547637   86301 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-036892" [c2dc4343-177a-4a4c-8a25-47078ec664f1] Running
	I1104 12:12:31.547640   86301 system_pods.go:61] "kube-proxy-j2srm" [9450cebd-aefb-4f1a-bb99-7d1dab054dd7] Running
	I1104 12:12:31.547643   86301 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-036892" [505d8202-5e02-4abd-9eff-163810a91eb2] Running
	I1104 12:12:31.547649   86301 system_pods.go:61] "metrics-server-6867b74b74-2wl94" [7f7cc9c1-420c-480e-b6b7-1a2027bf2f9d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1104 12:12:31.547653   86301 system_pods.go:61] "storage-provisioner" [18745f89-fc15-4a4c-b68b-7e80cd4f393b] Running
	I1104 12:12:31.547661   86301 system_pods.go:74] duration metric: took 3.743079115s to wait for pod list to return data ...
	I1104 12:12:31.547667   86301 default_sa.go:34] waiting for default service account to be created ...
	I1104 12:12:31.550088   86301 default_sa.go:45] found service account: "default"
	I1104 12:12:31.550108   86301 default_sa.go:55] duration metric: took 2.435317ms for default service account to be created ...
	I1104 12:12:31.550114   86301 system_pods.go:116] waiting for k8s-apps to be running ...
	I1104 12:12:31.554898   86301 system_pods.go:86] 8 kube-system pods found
	I1104 12:12:31.554924   86301 system_pods.go:89] "coredns-7c65d6cfc9-zw2tv" [71ce75a4-f051-4014-9ed0-7b275ea940a9] Running
	I1104 12:12:31.554929   86301 system_pods.go:89] "etcd-default-k8s-diff-port-036892" [7e46d97c-96b5-4301-b98a-f33b69937411] Running
	I1104 12:12:31.554933   86301 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-036892" [483cebd0-7ceb-4bf4-b1f9-e33be61b136e] Running
	I1104 12:12:31.554937   86301 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-036892" [c2dc4343-177a-4a4c-8a25-47078ec664f1] Running
	I1104 12:12:31.554941   86301 system_pods.go:89] "kube-proxy-j2srm" [9450cebd-aefb-4f1a-bb99-7d1dab054dd7] Running
	I1104 12:12:31.554945   86301 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-036892" [505d8202-5e02-4abd-9eff-163810a91eb2] Running
	I1104 12:12:31.554952   86301 system_pods.go:89] "metrics-server-6867b74b74-2wl94" [7f7cc9c1-420c-480e-b6b7-1a2027bf2f9d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1104 12:12:31.554955   86301 system_pods.go:89] "storage-provisioner" [18745f89-fc15-4a4c-b68b-7e80cd4f393b] Running
	I1104 12:12:31.554962   86301 system_pods.go:126] duration metric: took 4.842911ms to wait for k8s-apps to be running ...
	I1104 12:12:31.554968   86301 system_svc.go:44] waiting for kubelet service to be running ....
	I1104 12:12:31.555008   86301 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1104 12:12:31.568927   86301 system_svc.go:56] duration metric: took 13.948557ms WaitForService to wait for kubelet
	I1104 12:12:31.568958   86301 kubeadm.go:582] duration metric: took 4m24.342075873s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1104 12:12:31.568987   86301 node_conditions.go:102] verifying NodePressure condition ...
	I1104 12:12:31.571962   86301 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1104 12:12:31.571983   86301 node_conditions.go:123] node cpu capacity is 2
	I1104 12:12:31.571993   86301 node_conditions.go:105] duration metric: took 3.000591ms to run NodePressure ...
	I1104 12:12:31.572004   86301 start.go:241] waiting for startup goroutines ...
	I1104 12:12:31.572010   86301 start.go:246] waiting for cluster config update ...
	I1104 12:12:31.572019   86301 start.go:255] writing updated cluster config ...
	I1104 12:12:31.572277   86301 ssh_runner.go:195] Run: rm -f paused
	I1104 12:12:31.620935   86301 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1104 12:12:31.623672   86301 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-036892" cluster and "default" namespace by default
	I1104 12:12:28.876306   86402 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.732783523s)
	I1104 12:12:28.876377   86402 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1104 12:12:28.890455   86402 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1104 12:12:28.899660   86402 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1104 12:12:28.908658   86402 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1104 12:12:28.908675   86402 kubeadm.go:157] found existing configuration files:
	
	I1104 12:12:28.908715   86402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1104 12:12:28.916955   86402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1104 12:12:28.917013   86402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1104 12:12:28.927198   86402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1104 12:12:28.936868   86402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1104 12:12:28.936924   86402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1104 12:12:28.947246   86402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1104 12:12:28.956962   86402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1104 12:12:28.957015   86402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1104 12:12:28.967293   86402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1104 12:12:28.976975   86402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1104 12:12:28.977030   86402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1104 12:12:28.988547   86402 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1104 12:12:29.198333   86402 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1104 12:12:31.709511   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:34.207341   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:36.707962   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:39.208138   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:41.208806   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:43.707896   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:46.207316   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:48.707107   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:50.707644   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:52.708268   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:54.708517   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:57.206564   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:59.207122   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:13:01.207195   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:13:03.207617   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:13:05.707763   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:13:07.708314   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:13:09.708374   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:13:10.702085   85500 pod_ready.go:82] duration metric: took 4m0.000587313s for pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace to be "Ready" ...
	E1104 12:13:10.702115   85500 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1104 12:13:10.702126   85500 pod_ready.go:39] duration metric: took 4m5.542549912s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1104 12:13:10.702144   85500 api_server.go:52] waiting for apiserver process to appear ...
	I1104 12:13:10.702191   85500 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:13:10.702246   85500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:13:10.743079   85500 cri.go:89] found id: "e74398c77b3ca0adb85872c2f97209b51f4c13968147f500a41590f72d758dea"
	I1104 12:13:10.743102   85500 cri.go:89] found id: ""
	I1104 12:13:10.743110   85500 logs.go:282] 1 containers: [e74398c77b3ca0adb85872c2f97209b51f4c13968147f500a41590f72d758dea]
	I1104 12:13:10.743176   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:13:10.747213   85500 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:13:10.747275   85500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:13:10.781435   85500 cri.go:89] found id: "1390676564c7e4039f73139668fc6a3c321c186b612f1f1837e21788a5c0aa82"
	I1104 12:13:10.781465   85500 cri.go:89] found id: ""
	I1104 12:13:10.781474   85500 logs.go:282] 1 containers: [1390676564c7e4039f73139668fc6a3c321c186b612f1f1837e21788a5c0aa82]
	I1104 12:13:10.781597   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:13:10.785383   85500 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:13:10.785453   85500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:13:10.825927   85500 cri.go:89] found id: "6dcd13443296397949a6314fd6007ac667c05b70bd43f14e2e9b54f3313440de"
	I1104 12:13:10.825956   85500 cri.go:89] found id: ""
	I1104 12:13:10.825965   85500 logs.go:282] 1 containers: [6dcd13443296397949a6314fd6007ac667c05b70bd43f14e2e9b54f3313440de]
	I1104 12:13:10.826023   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:13:10.829834   85500 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:13:10.829899   85500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:13:10.872447   85500 cri.go:89] found id: "5546d06c4d51e3fa5699a7351286904174305e9c759397f8d2f640c6ce17d456"
	I1104 12:13:10.872468   85500 cri.go:89] found id: ""
	I1104 12:13:10.872475   85500 logs.go:282] 1 containers: [5546d06c4d51e3fa5699a7351286904174305e9c759397f8d2f640c6ce17d456]
	I1104 12:13:10.872524   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:13:10.876428   85500 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:13:10.876483   85500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:13:10.911092   85500 cri.go:89] found id: "33418a9cb2f8aea69efe66db3f38e3a66b699fea5455b72e6b94484067b704a3"
	I1104 12:13:10.911125   85500 cri.go:89] found id: ""
	I1104 12:13:10.911134   85500 logs.go:282] 1 containers: [33418a9cb2f8aea69efe66db3f38e3a66b699fea5455b72e6b94484067b704a3]
	I1104 12:13:10.911190   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:13:10.915021   85500 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:13:10.915076   85500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:13:10.950838   85500 cri.go:89] found id: "9c3fa7870c72468d3a6283be42fb2724275d8c5937f6728c8bc97103c58b2ebd"
	I1104 12:13:10.950863   85500 cri.go:89] found id: ""
	I1104 12:13:10.950873   85500 logs.go:282] 1 containers: [9c3fa7870c72468d3a6283be42fb2724275d8c5937f6728c8bc97103c58b2ebd]
	I1104 12:13:10.950935   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:13:10.954889   85500 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:13:10.954938   85500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:13:10.991580   85500 cri.go:89] found id: ""
	I1104 12:13:10.991609   85500 logs.go:282] 0 containers: []
	W1104 12:13:10.991618   85500 logs.go:284] No container was found matching "kindnet"
	I1104 12:13:10.991625   85500 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1104 12:13:10.991689   85500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1104 12:13:11.031428   85500 cri.go:89] found id: "d4f6c824f92eecdb4ebc13e783585c25292f412e0f0a0d7b9fd0eab092fa8e41"
	I1104 12:13:11.031469   85500 cri.go:89] found id: "162e3330ff77f795d15e11b3abf1d3019490bff78148c8d6b470ef1507dcb67d"
	I1104 12:13:11.031474   85500 cri.go:89] found id: ""
	I1104 12:13:11.031484   85500 logs.go:282] 2 containers: [d4f6c824f92eecdb4ebc13e783585c25292f412e0f0a0d7b9fd0eab092fa8e41 162e3330ff77f795d15e11b3abf1d3019490bff78148c8d6b470ef1507dcb67d]
	I1104 12:13:11.031557   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:13:11.035810   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:13:11.039555   85500 logs.go:123] Gathering logs for coredns [6dcd13443296397949a6314fd6007ac667c05b70bd43f14e2e9b54f3313440de] ...
	I1104 12:13:11.039582   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6dcd13443296397949a6314fd6007ac667c05b70bd43f14e2e9b54f3313440de"
	I1104 12:13:11.076837   85500 logs.go:123] Gathering logs for kube-scheduler [5546d06c4d51e3fa5699a7351286904174305e9c759397f8d2f640c6ce17d456] ...
	I1104 12:13:11.076865   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5546d06c4d51e3fa5699a7351286904174305e9c759397f8d2f640c6ce17d456"
	I1104 12:13:11.114534   85500 logs.go:123] Gathering logs for kube-proxy [33418a9cb2f8aea69efe66db3f38e3a66b699fea5455b72e6b94484067b704a3] ...
	I1104 12:13:11.114561   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33418a9cb2f8aea69efe66db3f38e3a66b699fea5455b72e6b94484067b704a3"
	I1104 12:13:11.148897   85500 logs.go:123] Gathering logs for storage-provisioner [d4f6c824f92eecdb4ebc13e783585c25292f412e0f0a0d7b9fd0eab092fa8e41] ...
	I1104 12:13:11.148935   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d4f6c824f92eecdb4ebc13e783585c25292f412e0f0a0d7b9fd0eab092fa8e41"
	I1104 12:13:11.184480   85500 logs.go:123] Gathering logs for kubelet ...
	I1104 12:13:11.184511   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:13:11.256197   85500 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:13:11.256237   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1104 12:13:11.368984   85500 logs.go:123] Gathering logs for kube-apiserver [e74398c77b3ca0adb85872c2f97209b51f4c13968147f500a41590f72d758dea] ...
	I1104 12:13:11.369014   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e74398c77b3ca0adb85872c2f97209b51f4c13968147f500a41590f72d758dea"
	I1104 12:13:11.414219   85500 logs.go:123] Gathering logs for etcd [1390676564c7e4039f73139668fc6a3c321c186b612f1f1837e21788a5c0aa82] ...
	I1104 12:13:11.414253   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1390676564c7e4039f73139668fc6a3c321c186b612f1f1837e21788a5c0aa82"
	I1104 12:13:11.455746   85500 logs.go:123] Gathering logs for storage-provisioner [162e3330ff77f795d15e11b3abf1d3019490bff78148c8d6b470ef1507dcb67d] ...
	I1104 12:13:11.455776   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 162e3330ff77f795d15e11b3abf1d3019490bff78148c8d6b470ef1507dcb67d"
	I1104 12:13:11.491699   85500 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:13:11.491726   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:13:11.962368   85500 logs.go:123] Gathering logs for dmesg ...
	I1104 12:13:11.962400   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:13:11.975564   85500 logs.go:123] Gathering logs for kube-controller-manager [9c3fa7870c72468d3a6283be42fb2724275d8c5937f6728c8bc97103c58b2ebd] ...
	I1104 12:13:11.975590   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9c3fa7870c72468d3a6283be42fb2724275d8c5937f6728c8bc97103c58b2ebd"
	I1104 12:13:12.031427   85500 logs.go:123] Gathering logs for container status ...
	I1104 12:13:12.031461   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:13:14.572933   85500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:13:14.588140   85500 api_server.go:72] duration metric: took 4m17.141131339s to wait for apiserver process to appear ...
	I1104 12:13:14.588168   85500 api_server.go:88] waiting for apiserver healthz status ...
	I1104 12:13:14.588196   85500 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:13:14.588243   85500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:13:14.621509   85500 cri.go:89] found id: "e74398c77b3ca0adb85872c2f97209b51f4c13968147f500a41590f72d758dea"
	I1104 12:13:14.621534   85500 cri.go:89] found id: ""
	I1104 12:13:14.621543   85500 logs.go:282] 1 containers: [e74398c77b3ca0adb85872c2f97209b51f4c13968147f500a41590f72d758dea]
	I1104 12:13:14.621601   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:13:14.626328   85500 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:13:14.626384   85500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:13:14.662052   85500 cri.go:89] found id: "1390676564c7e4039f73139668fc6a3c321c186b612f1f1837e21788a5c0aa82"
	I1104 12:13:14.662079   85500 cri.go:89] found id: ""
	I1104 12:13:14.662115   85500 logs.go:282] 1 containers: [1390676564c7e4039f73139668fc6a3c321c186b612f1f1837e21788a5c0aa82]
	I1104 12:13:14.662174   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:13:14.666018   85500 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:13:14.666089   85500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:13:14.702872   85500 cri.go:89] found id: "6dcd13443296397949a6314fd6007ac667c05b70bd43f14e2e9b54f3313440de"
	I1104 12:13:14.702897   85500 cri.go:89] found id: ""
	I1104 12:13:14.702910   85500 logs.go:282] 1 containers: [6dcd13443296397949a6314fd6007ac667c05b70bd43f14e2e9b54f3313440de]
	I1104 12:13:14.702968   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:13:14.706809   85500 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:13:14.706883   85500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:13:14.744985   85500 cri.go:89] found id: "5546d06c4d51e3fa5699a7351286904174305e9c759397f8d2f640c6ce17d456"
	I1104 12:13:14.745005   85500 cri.go:89] found id: ""
	I1104 12:13:14.745012   85500 logs.go:282] 1 containers: [5546d06c4d51e3fa5699a7351286904174305e9c759397f8d2f640c6ce17d456]
	I1104 12:13:14.745058   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:13:14.749441   85500 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:13:14.749497   85500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:13:14.781617   85500 cri.go:89] found id: "33418a9cb2f8aea69efe66db3f38e3a66b699fea5455b72e6b94484067b704a3"
	I1104 12:13:14.781644   85500 cri.go:89] found id: ""
	I1104 12:13:14.781653   85500 logs.go:282] 1 containers: [33418a9cb2f8aea69efe66db3f38e3a66b699fea5455b72e6b94484067b704a3]
	I1104 12:13:14.781709   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:13:14.785971   85500 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:13:14.786046   85500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:13:14.819002   85500 cri.go:89] found id: "9c3fa7870c72468d3a6283be42fb2724275d8c5937f6728c8bc97103c58b2ebd"
	I1104 12:13:14.819029   85500 cri.go:89] found id: ""
	I1104 12:13:14.819038   85500 logs.go:282] 1 containers: [9c3fa7870c72468d3a6283be42fb2724275d8c5937f6728c8bc97103c58b2ebd]
	I1104 12:13:14.819101   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:13:14.823075   85500 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:13:14.823143   85500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:13:14.858936   85500 cri.go:89] found id: ""
	I1104 12:13:14.858965   85500 logs.go:282] 0 containers: []
	W1104 12:13:14.858977   85500 logs.go:284] No container was found matching "kindnet"
	I1104 12:13:14.858984   85500 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1104 12:13:14.859048   85500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1104 12:13:14.898303   85500 cri.go:89] found id: "d4f6c824f92eecdb4ebc13e783585c25292f412e0f0a0d7b9fd0eab092fa8e41"
	I1104 12:13:14.898327   85500 cri.go:89] found id: "162e3330ff77f795d15e11b3abf1d3019490bff78148c8d6b470ef1507dcb67d"
	I1104 12:13:14.898333   85500 cri.go:89] found id: ""
	I1104 12:13:14.898341   85500 logs.go:282] 2 containers: [d4f6c824f92eecdb4ebc13e783585c25292f412e0f0a0d7b9fd0eab092fa8e41 162e3330ff77f795d15e11b3abf1d3019490bff78148c8d6b470ef1507dcb67d]
	I1104 12:13:14.898402   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:13:14.902325   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:13:14.905855   85500 logs.go:123] Gathering logs for kubelet ...
	I1104 12:13:14.905880   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:13:14.973356   85500 logs.go:123] Gathering logs for dmesg ...
	I1104 12:13:14.973389   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:13:14.988655   85500 logs.go:123] Gathering logs for kube-scheduler [5546d06c4d51e3fa5699a7351286904174305e9c759397f8d2f640c6ce17d456] ...
	I1104 12:13:14.988696   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5546d06c4d51e3fa5699a7351286904174305e9c759397f8d2f640c6ce17d456"
	I1104 12:13:15.023407   85500 logs.go:123] Gathering logs for kube-controller-manager [9c3fa7870c72468d3a6283be42fb2724275d8c5937f6728c8bc97103c58b2ebd] ...
	I1104 12:13:15.023443   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9c3fa7870c72468d3a6283be42fb2724275d8c5937f6728c8bc97103c58b2ebd"
	I1104 12:13:15.078974   85500 logs.go:123] Gathering logs for storage-provisioner [162e3330ff77f795d15e11b3abf1d3019490bff78148c8d6b470ef1507dcb67d] ...
	I1104 12:13:15.079007   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 162e3330ff77f795d15e11b3abf1d3019490bff78148c8d6b470ef1507dcb67d"
	I1104 12:13:15.114147   85500 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:13:15.114180   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:13:15.559434   85500 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:13:15.559477   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1104 12:13:15.666481   85500 logs.go:123] Gathering logs for kube-apiserver [e74398c77b3ca0adb85872c2f97209b51f4c13968147f500a41590f72d758dea] ...
	I1104 12:13:15.666509   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e74398c77b3ca0adb85872c2f97209b51f4c13968147f500a41590f72d758dea"
	I1104 12:13:15.728066   85500 logs.go:123] Gathering logs for etcd [1390676564c7e4039f73139668fc6a3c321c186b612f1f1837e21788a5c0aa82] ...
	I1104 12:13:15.728101   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1390676564c7e4039f73139668fc6a3c321c186b612f1f1837e21788a5c0aa82"
	I1104 12:13:15.769721   85500 logs.go:123] Gathering logs for coredns [6dcd13443296397949a6314fd6007ac667c05b70bd43f14e2e9b54f3313440de] ...
	I1104 12:13:15.769759   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6dcd13443296397949a6314fd6007ac667c05b70bd43f14e2e9b54f3313440de"
	I1104 12:13:15.802131   85500 logs.go:123] Gathering logs for kube-proxy [33418a9cb2f8aea69efe66db3f38e3a66b699fea5455b72e6b94484067b704a3] ...
	I1104 12:13:15.802170   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33418a9cb2f8aea69efe66db3f38e3a66b699fea5455b72e6b94484067b704a3"
	I1104 12:13:15.837613   85500 logs.go:123] Gathering logs for storage-provisioner [d4f6c824f92eecdb4ebc13e783585c25292f412e0f0a0d7b9fd0eab092fa8e41] ...
	I1104 12:13:15.837639   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d4f6c824f92eecdb4ebc13e783585c25292f412e0f0a0d7b9fd0eab092fa8e41"
	I1104 12:13:15.874374   85500 logs.go:123] Gathering logs for container status ...
	I1104 12:13:15.874407   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:13:18.413199   85500 api_server.go:253] Checking apiserver healthz at https://192.168.61.91:8443/healthz ...
	I1104 12:13:18.418522   85500 api_server.go:279] https://192.168.61.91:8443/healthz returned 200:
	ok
	I1104 12:13:18.419487   85500 api_server.go:141] control plane version: v1.31.2
	I1104 12:13:18.419512   85500 api_server.go:131] duration metric: took 3.831337085s to wait for apiserver health ...
	I1104 12:13:18.419521   85500 system_pods.go:43] waiting for kube-system pods to appear ...
	I1104 12:13:18.419549   85500 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:13:18.419605   85500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:13:18.453835   85500 cri.go:89] found id: "e74398c77b3ca0adb85872c2f97209b51f4c13968147f500a41590f72d758dea"
	I1104 12:13:18.453856   85500 cri.go:89] found id: ""
	I1104 12:13:18.453865   85500 logs.go:282] 1 containers: [e74398c77b3ca0adb85872c2f97209b51f4c13968147f500a41590f72d758dea]
	I1104 12:13:18.453927   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:13:18.458136   85500 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:13:18.458198   85500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:13:18.496587   85500 cri.go:89] found id: "1390676564c7e4039f73139668fc6a3c321c186b612f1f1837e21788a5c0aa82"
	I1104 12:13:18.496623   85500 cri.go:89] found id: ""
	I1104 12:13:18.496634   85500 logs.go:282] 1 containers: [1390676564c7e4039f73139668fc6a3c321c186b612f1f1837e21788a5c0aa82]
	I1104 12:13:18.496691   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:13:18.500451   85500 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:13:18.500523   85500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:13:18.532756   85500 cri.go:89] found id: "6dcd13443296397949a6314fd6007ac667c05b70bd43f14e2e9b54f3313440de"
	I1104 12:13:18.532785   85500 cri.go:89] found id: ""
	I1104 12:13:18.532795   85500 logs.go:282] 1 containers: [6dcd13443296397949a6314fd6007ac667c05b70bd43f14e2e9b54f3313440de]
	I1104 12:13:18.532857   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:13:18.537239   85500 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:13:18.537293   85500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:13:18.569348   85500 cri.go:89] found id: "5546d06c4d51e3fa5699a7351286904174305e9c759397f8d2f640c6ce17d456"
	I1104 12:13:18.569374   85500 cri.go:89] found id: ""
	I1104 12:13:18.569382   85500 logs.go:282] 1 containers: [5546d06c4d51e3fa5699a7351286904174305e9c759397f8d2f640c6ce17d456]
	I1104 12:13:18.569440   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:13:18.573491   85500 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:13:18.573563   85500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:13:18.606857   85500 cri.go:89] found id: "33418a9cb2f8aea69efe66db3f38e3a66b699fea5455b72e6b94484067b704a3"
	I1104 12:13:18.606886   85500 cri.go:89] found id: ""
	I1104 12:13:18.606896   85500 logs.go:282] 1 containers: [33418a9cb2f8aea69efe66db3f38e3a66b699fea5455b72e6b94484067b704a3]
	I1104 12:13:18.606951   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:13:18.611158   85500 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:13:18.611229   85500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:13:18.645448   85500 cri.go:89] found id: "9c3fa7870c72468d3a6283be42fb2724275d8c5937f6728c8bc97103c58b2ebd"
	I1104 12:13:18.645467   85500 cri.go:89] found id: ""
	I1104 12:13:18.645474   85500 logs.go:282] 1 containers: [9c3fa7870c72468d3a6283be42fb2724275d8c5937f6728c8bc97103c58b2ebd]
	I1104 12:13:18.645527   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:13:18.649014   85500 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:13:18.649062   85500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:13:18.693641   85500 cri.go:89] found id: ""
	I1104 12:13:18.693668   85500 logs.go:282] 0 containers: []
	W1104 12:13:18.693676   85500 logs.go:284] No container was found matching "kindnet"
	I1104 12:13:18.693681   85500 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1104 12:13:18.693728   85500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1104 12:13:18.733668   85500 cri.go:89] found id: "d4f6c824f92eecdb4ebc13e783585c25292f412e0f0a0d7b9fd0eab092fa8e41"
	I1104 12:13:18.733690   85500 cri.go:89] found id: "162e3330ff77f795d15e11b3abf1d3019490bff78148c8d6b470ef1507dcb67d"
	I1104 12:13:18.733695   85500 cri.go:89] found id: ""
	I1104 12:13:18.733702   85500 logs.go:282] 2 containers: [d4f6c824f92eecdb4ebc13e783585c25292f412e0f0a0d7b9fd0eab092fa8e41 162e3330ff77f795d15e11b3abf1d3019490bff78148c8d6b470ef1507dcb67d]
	I1104 12:13:18.733745   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:13:18.737419   85500 ssh_runner.go:195] Run: which crictl
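	(Editor's note) The container-discovery pass above repeatedly resolves crictl with "which crictl" and then lists container IDs per component with "crictl ps -a --quiet --name=<component>". A minimal Go sketch of that pattern, for illustration only (this is not minikube's cri.go; the commands are taken verbatim from the log):

	// listcontainers.go - sketch of the "which crictl" + "crictl ps -a --quiet --name=X" pattern.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// containerIDs returns the IDs of all containers (any state) whose name matches component.
	func containerIDs(component string) ([]string, error) {
		// Resolve crictl first, mirroring the "which crictl" step in the log above.
		crictl, err := exec.LookPath("crictl")
		if err != nil {
			return nil, fmt.Errorf("crictl not found: %w", err)
		}
		out, err := exec.Command("sudo", crictl, "ps", "-a", "--quiet", "--name="+component).Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		for _, c := range []string{"coredns", "kube-scheduler", "kube-proxy", "kube-controller-manager"} {
			ids, err := containerIDs(c)
			if err != nil {
				fmt.Println(c, "error:", err)
				continue
			}
			fmt.Printf("%s: %d containers %v\n", c, len(ids), ids)
		}
	}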
	I1104 12:13:18.740993   85500 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:13:18.741014   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:13:19.135942   85500 logs.go:123] Gathering logs for kubelet ...
	I1104 12:13:19.135980   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:13:19.206586   85500 logs.go:123] Gathering logs for dmesg ...
	I1104 12:13:19.206623   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:13:19.222135   85500 logs.go:123] Gathering logs for etcd [1390676564c7e4039f73139668fc6a3c321c186b612f1f1837e21788a5c0aa82] ...
	I1104 12:13:19.222164   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1390676564c7e4039f73139668fc6a3c321c186b612f1f1837e21788a5c0aa82"
	I1104 12:13:19.262746   85500 logs.go:123] Gathering logs for kube-scheduler [5546d06c4d51e3fa5699a7351286904174305e9c759397f8d2f640c6ce17d456] ...
	I1104 12:13:19.262774   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5546d06c4d51e3fa5699a7351286904174305e9c759397f8d2f640c6ce17d456"
	I1104 12:13:19.298259   85500 logs.go:123] Gathering logs for kube-proxy [33418a9cb2f8aea69efe66db3f38e3a66b699fea5455b72e6b94484067b704a3] ...
	I1104 12:13:19.298287   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33418a9cb2f8aea69efe66db3f38e3a66b699fea5455b72e6b94484067b704a3"
	I1104 12:13:19.338304   85500 logs.go:123] Gathering logs for storage-provisioner [d4f6c824f92eecdb4ebc13e783585c25292f412e0f0a0d7b9fd0eab092fa8e41] ...
	I1104 12:13:19.338332   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d4f6c824f92eecdb4ebc13e783585c25292f412e0f0a0d7b9fd0eab092fa8e41"
	I1104 12:13:19.375163   85500 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:13:19.375195   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1104 12:13:19.478206   85500 logs.go:123] Gathering logs for kube-apiserver [e74398c77b3ca0adb85872c2f97209b51f4c13968147f500a41590f72d758dea] ...
	I1104 12:13:19.478234   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e74398c77b3ca0adb85872c2f97209b51f4c13968147f500a41590f72d758dea"
	I1104 12:13:19.526261   85500 logs.go:123] Gathering logs for coredns [6dcd13443296397949a6314fd6007ac667c05b70bd43f14e2e9b54f3313440de] ...
	I1104 12:13:19.526291   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6dcd13443296397949a6314fd6007ac667c05b70bd43f14e2e9b54f3313440de"
	I1104 12:13:19.559922   85500 logs.go:123] Gathering logs for kube-controller-manager [9c3fa7870c72468d3a6283be42fb2724275d8c5937f6728c8bc97103c58b2ebd] ...
	I1104 12:13:19.559954   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9c3fa7870c72468d3a6283be42fb2724275d8c5937f6728c8bc97103c58b2ebd"
	I1104 12:13:19.609848   85500 logs.go:123] Gathering logs for storage-provisioner [162e3330ff77f795d15e11b3abf1d3019490bff78148c8d6b470ef1507dcb67d] ...
	I1104 12:13:19.609879   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 162e3330ff77f795d15e11b3abf1d3019490bff78148c8d6b470ef1507dcb67d"
	I1104 12:13:19.648804   85500 logs.go:123] Gathering logs for container status ...
	I1104 12:13:19.648829   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
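	(Editor's note) The "Gathering logs for ..." pass above is just a series of shell commands (journalctl, dmesg, crictl logs --tail 400, kubectl describe nodes) run over SSH on the node. A rough local sketch of that loop, with the commands copied from the log (illustrative only, not minikube's logs.go):

	// gatherlogs.go - sketch of the log-gathering pass: run each named source command and print it.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		sources := map[string]string{
			"kubelet":          "sudo journalctl -u kubelet -n 400",
			"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
			"CRI-O":            "sudo journalctl -u crio -n 400",
			"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
		}
		for name, cmd := range sources {
			fmt.Println("==>", name, "<==")
			out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
			if err != nil {
				fmt.Println("error:", err)
			}
			fmt.Println(string(out))
		}
	}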
	I1104 12:13:22.210690   85500 system_pods.go:59] 8 kube-system pods found
	I1104 12:13:22.210718   85500 system_pods.go:61] "coredns-7c65d6cfc9-vv4kq" [f2518f86-9653-4e98-9193-9d2a76838117] Running
	I1104 12:13:22.210723   85500 system_pods.go:61] "etcd-no-preload-908370" [cc23ebc2-c49f-403c-8128-98bb08459592] Running
	I1104 12:13:22.210727   85500 system_pods.go:61] "kube-apiserver-no-preload-908370" [37532b3e-f683-4420-a5e4-280744f2bdf9] Running
	I1104 12:13:22.210730   85500 system_pods.go:61] "kube-controller-manager-no-preload-908370" [81d30255-758e-4661-bec2-c6aa6773923a] Running
	I1104 12:13:22.210733   85500 system_pods.go:61] "kube-proxy-w9hbz" [9d494697-ff2b-4600-9c11-b704de9be2a3] Running
	I1104 12:13:22.210737   85500 system_pods.go:61] "kube-scheduler-no-preload-908370" [9b0ff34e-1795-4f7c-b511-822a02c4af7b] Running
	I1104 12:13:22.210752   85500 system_pods.go:61] "metrics-server-6867b74b74-2lxlg" [bf328856-ad19-47b3-a40d-282cd4fdec4b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1104 12:13:22.210758   85500 system_pods.go:61] "storage-provisioner" [d11c9416-6236-4c81-9626-d5e040acea8a] Running
	I1104 12:13:22.210768   85500 system_pods.go:74] duration metric: took 3.791240483s to wait for pod list to return data ...
	I1104 12:13:22.210780   85500 default_sa.go:34] waiting for default service account to be created ...
	I1104 12:13:22.213688   85500 default_sa.go:45] found service account: "default"
	I1104 12:13:22.213709   85500 default_sa.go:55] duration metric: took 2.921691ms for default service account to be created ...
	I1104 12:13:22.213717   85500 system_pods.go:116] waiting for k8s-apps to be running ...
	I1104 12:13:22.219436   85500 system_pods.go:86] 8 kube-system pods found
	I1104 12:13:22.219466   85500 system_pods.go:89] "coredns-7c65d6cfc9-vv4kq" [f2518f86-9653-4e98-9193-9d2a76838117] Running
	I1104 12:13:22.219475   85500 system_pods.go:89] "etcd-no-preload-908370" [cc23ebc2-c49f-403c-8128-98bb08459592] Running
	I1104 12:13:22.219480   85500 system_pods.go:89] "kube-apiserver-no-preload-908370" [37532b3e-f683-4420-a5e4-280744f2bdf9] Running
	I1104 12:13:22.219489   85500 system_pods.go:89] "kube-controller-manager-no-preload-908370" [81d30255-758e-4661-bec2-c6aa6773923a] Running
	I1104 12:13:22.219495   85500 system_pods.go:89] "kube-proxy-w9hbz" [9d494697-ff2b-4600-9c11-b704de9be2a3] Running
	I1104 12:13:22.219501   85500 system_pods.go:89] "kube-scheduler-no-preload-908370" [9b0ff34e-1795-4f7c-b511-822a02c4af7b] Running
	I1104 12:13:22.219512   85500 system_pods.go:89] "metrics-server-6867b74b74-2lxlg" [bf328856-ad19-47b3-a40d-282cd4fdec4b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1104 12:13:22.219523   85500 system_pods.go:89] "storage-provisioner" [d11c9416-6236-4c81-9626-d5e040acea8a] Running
	I1104 12:13:22.219537   85500 system_pods.go:126] duration metric: took 5.813462ms to wait for k8s-apps to be running ...
	I1104 12:13:22.219551   85500 system_svc.go:44] waiting for kubelet service to be running ....
	I1104 12:13:22.219612   85500 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1104 12:13:22.232887   85500 system_svc.go:56] duration metric: took 13.328078ms WaitForService to wait for kubelet
	I1104 12:13:22.232918   85500 kubeadm.go:582] duration metric: took 4m24.785911082s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1104 12:13:22.232941   85500 node_conditions.go:102] verifying NodePressure condition ...
	I1104 12:13:22.235641   85500 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1104 12:13:22.235662   85500 node_conditions.go:123] node cpu capacity is 2
	I1104 12:13:22.235675   85500 node_conditions.go:105] duration metric: took 2.728232ms to run NodePressure ...
	I1104 12:13:22.235687   85500 start.go:241] waiting for startup goroutines ...
	I1104 12:13:22.235695   85500 start.go:246] waiting for cluster config update ...
	I1104 12:13:22.235707   85500 start.go:255] writing updated cluster config ...
	I1104 12:13:22.235962   85500 ssh_runner.go:195] Run: rm -f paused
	I1104 12:13:22.284583   85500 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1104 12:13:22.287448   85500 out.go:177] * Done! kubectl is now configured to use "no-preload-908370" cluster and "default" namespace by default
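	(Editor's note) The system_pods wait logged at 12:13:22 lists the kube-system pods and requires them to be Running (the metrics-server pod above is still Pending, which is why the related addon tests fail later). A minimal client-go sketch of such a check, assuming the usual ~/.kube/config location (this is not minikube's system_pods.go):

	// podswait.go - sketch of a "are kube-system pods running" check like the one logged above.
	package main

	import (
		"context"
		"fmt"
		"os"
		"path/filepath"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
		cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			running := p.Status.Phase == corev1.PodRunning || p.Status.Phase == corev1.PodSucceeded
			fmt.Printf("%-45s %-10s running=%v\n", p.Name, p.Status.Phase, running)
		}
	}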
	I1104 12:14:25.090113   86402 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1104 12:14:25.090254   86402 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1104 12:14:25.091997   86402 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1104 12:14:25.092065   86402 kubeadm.go:310] [preflight] Running pre-flight checks
	I1104 12:14:25.092204   86402 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1104 12:14:25.092341   86402 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1104 12:14:25.092480   86402 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1104 12:14:25.092569   86402 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1104 12:14:25.094485   86402 out.go:235]   - Generating certificates and keys ...
	I1104 12:14:25.094582   86402 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1104 12:14:25.094664   86402 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1104 12:14:25.094799   86402 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1104 12:14:25.094891   86402 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1104 12:14:25.095003   86402 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1104 12:14:25.095086   86402 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1104 12:14:25.095186   86402 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1104 12:14:25.095240   86402 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1104 12:14:25.095319   86402 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1104 12:14:25.095403   86402 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1104 12:14:25.095481   86402 kubeadm.go:310] [certs] Using the existing "sa" key
	I1104 12:14:25.095554   86402 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1104 12:14:25.095614   86402 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1104 12:14:25.095676   86402 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1104 12:14:25.095752   86402 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1104 12:14:25.095828   86402 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1104 12:14:25.095970   86402 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1104 12:14:25.096102   86402 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1104 12:14:25.096169   86402 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1104 12:14:25.096262   86402 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1104 12:14:25.097799   86402 out.go:235]   - Booting up control plane ...
	I1104 12:14:25.097920   86402 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1104 12:14:25.098018   86402 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1104 12:14:25.098126   86402 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1104 12:14:25.098211   86402 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1104 12:14:25.098333   86402 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1104 12:14:25.098393   86402 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1104 12:14:25.098487   86402 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1104 12:14:25.098633   86402 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1104 12:14:25.098690   86402 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1104 12:14:25.098940   86402 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1104 12:14:25.099074   86402 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1104 12:14:25.099307   86402 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1104 12:14:25.099370   86402 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1104 12:14:25.099528   86402 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1104 12:14:25.099582   86402 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1104 12:14:25.099740   86402 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1104 12:14:25.099758   86402 kubeadm.go:310] 
	I1104 12:14:25.099815   86402 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1104 12:14:25.099880   86402 kubeadm.go:310] 		timed out waiting for the condition
	I1104 12:14:25.099889   86402 kubeadm.go:310] 
	I1104 12:14:25.099923   86402 kubeadm.go:310] 	This error is likely caused by:
	I1104 12:14:25.099952   86402 kubeadm.go:310] 		- The kubelet is not running
	I1104 12:14:25.100036   86402 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1104 12:14:25.100044   86402 kubeadm.go:310] 
	I1104 12:14:25.100197   86402 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1104 12:14:25.100237   86402 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1104 12:14:25.100267   86402 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1104 12:14:25.100273   86402 kubeadm.go:310] 
	I1104 12:14:25.100367   86402 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1104 12:14:25.100454   86402 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1104 12:14:25.100468   86402 kubeadm.go:310] 
	I1104 12:14:25.100600   86402 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1104 12:14:25.100718   86402 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1104 12:14:25.100821   86402 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1104 12:14:25.100903   86402 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1104 12:14:25.100970   86402 kubeadm.go:310] 
	W1104 12:14:25.101033   86402 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1104 12:14:25.101071   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1104 12:14:25.536184   86402 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1104 12:14:25.550453   86402 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1104 12:14:25.560308   86402 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1104 12:14:25.560327   86402 kubeadm.go:157] found existing configuration files:
	
	I1104 12:14:25.560368   86402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1104 12:14:25.569106   86402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1104 12:14:25.569189   86402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1104 12:14:25.578395   86402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1104 12:14:25.587402   86402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1104 12:14:25.587473   86402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1104 12:14:25.596827   86402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1104 12:14:25.605359   86402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1104 12:14:25.605420   86402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1104 12:14:25.614266   86402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1104 12:14:25.622522   86402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1104 12:14:25.622582   86402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
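	(Editor's note) The cleanup pass above (kubeadm.go:155-163) greps each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and removes any file that does not reference it before retrying kubeadm init. A stdlib-only sketch of the same idea (illustrative, not minikube's implementation):

	// staleconfig.go - sketch of the stale kubeconfig check: keep a file only if it references the
	// expected control-plane endpoint, otherwise remove it, as in the log above.
	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		const endpoint = "https://control-plane.minikube.internal:8443"
		files := []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		}
		for _, f := range files {
			data, err := os.ReadFile(f)
			if err != nil || !strings.Contains(string(data), endpoint) {
				// Missing file or wrong endpoint: treat as stale and remove.
				fmt.Println("removing stale config:", f)
				_ = os.Remove(f)
				continue
			}
			fmt.Println("keeping:", f)
		}
	}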
	I1104 12:14:25.631876   86402 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1104 12:14:25.701080   86402 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1104 12:14:25.701168   86402 kubeadm.go:310] [preflight] Running pre-flight checks
	I1104 12:14:25.833997   86402 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1104 12:14:25.834138   86402 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1104 12:14:25.834258   86402 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1104 12:14:26.009165   86402 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1104 12:14:26.011976   86402 out.go:235]   - Generating certificates and keys ...
	I1104 12:14:26.012090   86402 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1104 12:14:26.012183   86402 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1104 12:14:26.012333   86402 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1104 12:14:26.012422   86402 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1104 12:14:26.012532   86402 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1104 12:14:26.012619   86402 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1104 12:14:26.012689   86402 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1104 12:14:26.012748   86402 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1104 12:14:26.012851   86402 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1104 12:14:26.012978   86402 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1104 12:14:26.013025   86402 kubeadm.go:310] [certs] Using the existing "sa" key
	I1104 12:14:26.013102   86402 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1104 12:14:26.399153   86402 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1104 12:14:26.470449   86402 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1104 12:14:27.078991   86402 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1104 12:14:27.181622   86402 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1104 12:14:27.205149   86402 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1104 12:14:27.205300   86402 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1104 12:14:27.205383   86402 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1104 12:14:27.355614   86402 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1104 12:14:27.357678   86402 out.go:235]   - Booting up control plane ...
	I1104 12:14:27.357840   86402 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1104 12:14:27.363942   86402 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1104 12:14:27.365004   86402 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1104 12:14:27.367237   86402 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1104 12:14:27.368087   86402 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1104 12:15:07.369845   86402 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1104 12:15:07.370222   86402 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1104 12:15:07.370464   86402 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1104 12:15:12.370802   86402 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1104 12:15:12.371041   86402 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1104 12:15:22.371417   86402 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1104 12:15:22.371584   86402 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1104 12:15:42.371725   86402 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1104 12:15:42.371932   86402 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1104 12:16:22.370871   86402 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1104 12:16:22.371150   86402 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1104 12:16:22.371181   86402 kubeadm.go:310] 
	I1104 12:16:22.371222   86402 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1104 12:16:22.371297   86402 kubeadm.go:310] 		timed out waiting for the condition
	I1104 12:16:22.371309   86402 kubeadm.go:310] 
	I1104 12:16:22.371371   86402 kubeadm.go:310] 	This error is likely caused by:
	I1104 12:16:22.371435   86402 kubeadm.go:310] 		- The kubelet is not running
	I1104 12:16:22.371576   86402 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1104 12:16:22.371588   86402 kubeadm.go:310] 
	I1104 12:16:22.371726   86402 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1104 12:16:22.371784   86402 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1104 12:16:22.371863   86402 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1104 12:16:22.371879   86402 kubeadm.go:310] 
	I1104 12:16:22.372004   86402 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1104 12:16:22.372155   86402 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1104 12:16:22.372172   86402 kubeadm.go:310] 
	I1104 12:16:22.372338   86402 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1104 12:16:22.372435   86402 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1104 12:16:22.372566   86402 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1104 12:16:22.372680   86402 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1104 12:16:22.372718   86402 kubeadm.go:310] 
	I1104 12:16:22.372948   86402 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1104 12:16:22.373110   86402 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1104 12:16:22.373289   86402 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1104 12:16:22.373328   86402 kubeadm.go:394] duration metric: took 8m2.53443537s to StartCluster
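	(Editor's note) The repeated [kubelet-check] lines above come from kubeadm polling the kubelet's local healthz endpoint on port 10248; every attempt is refused because the kubelet never comes up, so the 4m0s wait-control-plane phase times out. The probe amounts to the following (a simple sketch of the check, not kubeadm's source):

	// kubeletcheck.go - sketch of the kubelet health probe behind the [kubelet-check] lines:
	// poll http://localhost:10248/healthz until it answers 200 OK or the deadline passes.
	package main

	import (
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		deadline := time.Now().Add(4 * time.Minute) // kubeadm waits up to 4m0s per the log
		for time.Now().Before(deadline) {
			resp, err := http.Get("http://localhost:10248/healthz")
			if err == nil && resp.StatusCode == http.StatusOK {
				resp.Body.Close()
				fmt.Println("kubelet is healthy")
				return
			}
			if err != nil {
				fmt.Println("kubelet not healthy yet:", err) // e.g. connection refused, as logged
			} else {
				resp.Body.Close()
			}
			time.Sleep(5 * time.Second)
		}
		fmt.Println("timed out waiting for the kubelet")
	}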
	I1104 12:16:22.373379   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:16:22.373431   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:16:22.410373   86402 cri.go:89] found id: ""
	I1104 12:16:22.410409   86402 logs.go:282] 0 containers: []
	W1104 12:16:22.410418   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:16:22.410424   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:16:22.410485   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:16:22.447939   86402 cri.go:89] found id: ""
	I1104 12:16:22.447963   86402 logs.go:282] 0 containers: []
	W1104 12:16:22.447971   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:16:22.447977   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:16:22.448021   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:16:22.479234   86402 cri.go:89] found id: ""
	I1104 12:16:22.479263   86402 logs.go:282] 0 containers: []
	W1104 12:16:22.479274   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:16:22.479280   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:16:22.479341   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:16:22.512783   86402 cri.go:89] found id: ""
	I1104 12:16:22.512814   86402 logs.go:282] 0 containers: []
	W1104 12:16:22.512825   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:16:22.512832   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:16:22.512895   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:16:22.549483   86402 cri.go:89] found id: ""
	I1104 12:16:22.549510   86402 logs.go:282] 0 containers: []
	W1104 12:16:22.549520   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:16:22.549527   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:16:22.549593   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:16:22.582339   86402 cri.go:89] found id: ""
	I1104 12:16:22.582382   86402 logs.go:282] 0 containers: []
	W1104 12:16:22.582393   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:16:22.582402   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:16:22.582471   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:16:22.613545   86402 cri.go:89] found id: ""
	I1104 12:16:22.613574   86402 logs.go:282] 0 containers: []
	W1104 12:16:22.613585   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:16:22.613593   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:16:22.613656   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:16:22.644488   86402 cri.go:89] found id: ""
	I1104 12:16:22.644517   86402 logs.go:282] 0 containers: []
	W1104 12:16:22.644528   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:16:22.644539   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:16:22.644551   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:16:22.681138   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:16:22.681169   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:16:22.734551   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:16:22.734586   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:16:22.750140   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:16:22.750178   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:16:22.837631   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:16:22.837657   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:16:22.837673   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W1104 12:16:22.961154   86402 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1104 12:16:22.961221   86402 out.go:270] * 
	W1104 12:16:22.961295   86402 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1104 12:16:22.961310   86402 out.go:270] * 
	W1104 12:16:22.962053   86402 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1104 12:16:22.965021   86402 out.go:201] 
	W1104 12:16:22.966262   86402 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1104 12:16:22.966326   86402 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1104 12:16:22.966377   86402 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1104 12:16:22.967953   86402 out.go:201] 
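	(Editor's note) The suggestion above points at a kubelet/runtime cgroup-driver mismatch, a common cause of this failure mode with older Kubernetes versions on cri-o. As a hedged illustration only: the file paths and key names below are the usual defaults (/var/lib/kubelet/config.yaml with cgroupDriver, /etc/crio/crio.conf with cgroup_manager) and may differ on a given node; this is not a check minikube performs in this log.

	// cgroupdriver.go - rough comparison of the kubelet's cgroupDriver with CRI-O's cgroup_manager.
	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	// firstValue scans a file for the first line starting with key and returns the trimmed value.
	func firstValue(path, key string) string {
		f, err := os.Open(path)
		if err != nil {
			return ""
		}
		defer f.Close()
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			if strings.HasPrefix(line, key) {
				return strings.Trim(strings.TrimPrefix(line, key), " =:\"")
			}
		}
		return ""
	}

	func main() {
		kubelet := firstValue("/var/lib/kubelet/config.yaml", "cgroupDriver")
		crio := firstValue("/etc/crio/crio.conf", "cgroup_manager")
		fmt.Printf("kubelet cgroupDriver=%q, crio cgroup_manager=%q\n", kubelet, crio)
		if kubelet != "" && crio != "" && kubelet != crio {
			fmt.Println("mismatch: align them (e.g. --extra-config=kubelet.cgroup-driver=systemd)")
		}
	}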
	
	
	==> CRI-O <==
	Nov 04 12:16:24 old-k8s-version-589257 crio[626]: time="2024-11-04 12:16:24.786406583Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730722584786379518,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=722ea18f-f92c-4943-aef8-75f747179875 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 04 12:16:24 old-k8s-version-589257 crio[626]: time="2024-11-04 12:16:24.786996582Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7837021f-9df4-4434-a9e0-d2867c1376b3 name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 12:16:24 old-k8s-version-589257 crio[626]: time="2024-11-04 12:16:24.787048536Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7837021f-9df4-4434-a9e0-d2867c1376b3 name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 12:16:24 old-k8s-version-589257 crio[626]: time="2024-11-04 12:16:24.787079491Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=7837021f-9df4-4434-a9e0-d2867c1376b3 name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 12:16:24 old-k8s-version-589257 crio[626]: time="2024-11-04 12:16:24.817070459Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8cba6b47-fda3-46bf-9840-d4ce338bf8ed name=/runtime.v1.RuntimeService/Version
	Nov 04 12:16:24 old-k8s-version-589257 crio[626]: time="2024-11-04 12:16:24.817142886Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8cba6b47-fda3-46bf-9840-d4ce338bf8ed name=/runtime.v1.RuntimeService/Version
	Nov 04 12:16:24 old-k8s-version-589257 crio[626]: time="2024-11-04 12:16:24.818708402Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f51ba7a7-626b-421b-8ac1-c82fd2bc82ca name=/runtime.v1.ImageService/ImageFsInfo
	Nov 04 12:16:24 old-k8s-version-589257 crio[626]: time="2024-11-04 12:16:24.819061599Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730722584819041112,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f51ba7a7-626b-421b-8ac1-c82fd2bc82ca name=/runtime.v1.ImageService/ImageFsInfo
	Nov 04 12:16:24 old-k8s-version-589257 crio[626]: time="2024-11-04 12:16:24.819683670Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=10a5673d-53c2-4717-878a-024755e667f6 name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 12:16:24 old-k8s-version-589257 crio[626]: time="2024-11-04 12:16:24.819757518Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=10a5673d-53c2-4717-878a-024755e667f6 name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 12:16:24 old-k8s-version-589257 crio[626]: time="2024-11-04 12:16:24.819809067Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=10a5673d-53c2-4717-878a-024755e667f6 name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 12:16:24 old-k8s-version-589257 crio[626]: time="2024-11-04 12:16:24.851369077Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1de8badf-95b4-4a89-a16f-1947c46b32eb name=/runtime.v1.RuntimeService/Version
	Nov 04 12:16:24 old-k8s-version-589257 crio[626]: time="2024-11-04 12:16:24.851442204Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1de8badf-95b4-4a89-a16f-1947c46b32eb name=/runtime.v1.RuntimeService/Version
	Nov 04 12:16:24 old-k8s-version-589257 crio[626]: time="2024-11-04 12:16:24.852412286Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3cfb9560-1420-43eb-a507-5b02d3e8ec4f name=/runtime.v1.ImageService/ImageFsInfo
	Nov 04 12:16:24 old-k8s-version-589257 crio[626]: time="2024-11-04 12:16:24.852779186Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730722584852757769,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3cfb9560-1420-43eb-a507-5b02d3e8ec4f name=/runtime.v1.ImageService/ImageFsInfo
	Nov 04 12:16:24 old-k8s-version-589257 crio[626]: time="2024-11-04 12:16:24.853392198Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=70fb55ad-461f-48e5-bed3-554b0020ef9e name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 12:16:24 old-k8s-version-589257 crio[626]: time="2024-11-04 12:16:24.853466956Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=70fb55ad-461f-48e5-bed3-554b0020ef9e name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 12:16:24 old-k8s-version-589257 crio[626]: time="2024-11-04 12:16:24.853500297Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=70fb55ad-461f-48e5-bed3-554b0020ef9e name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 12:16:24 old-k8s-version-589257 crio[626]: time="2024-11-04 12:16:24.884556762Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3f10c6d5-abda-4405-9e6c-729163d43112 name=/runtime.v1.RuntimeService/Version
	Nov 04 12:16:24 old-k8s-version-589257 crio[626]: time="2024-11-04 12:16:24.884625970Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3f10c6d5-abda-4405-9e6c-729163d43112 name=/runtime.v1.RuntimeService/Version
	Nov 04 12:16:24 old-k8s-version-589257 crio[626]: time="2024-11-04 12:16:24.886092715Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9d61fa6a-6bbe-45c1-a5a2-a7248ba1a1ab name=/runtime.v1.ImageService/ImageFsInfo
	Nov 04 12:16:24 old-k8s-version-589257 crio[626]: time="2024-11-04 12:16:24.886475322Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730722584886454436,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9d61fa6a-6bbe-45c1-a5a2-a7248ba1a1ab name=/runtime.v1.ImageService/ImageFsInfo
	Nov 04 12:16:24 old-k8s-version-589257 crio[626]: time="2024-11-04 12:16:24.887097922Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8c7ab121-bff8-4ff4-8732-8ba45519d3be name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 12:16:24 old-k8s-version-589257 crio[626]: time="2024-11-04 12:16:24.887142078Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8c7ab121-bff8-4ff4-8732-8ba45519d3be name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 12:16:24 old-k8s-version-589257 crio[626]: time="2024-11-04 12:16:24.887173522Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=8c7ab121-bff8-4ff4-8732-8ba45519d3be name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Nov 4 12:07] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051714] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037451] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Nov 4 12:08] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.909177] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.435497] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.440051] systemd-fstab-generator[554]: Ignoring "noauto" option for root device
	[  +0.115131] systemd-fstab-generator[566]: Ignoring "noauto" option for root device
	[  +0.206664] systemd-fstab-generator[580]: Ignoring "noauto" option for root device
	[  +0.118752] systemd-fstab-generator[592]: Ignoring "noauto" option for root device
	[  +0.257608] systemd-fstab-generator[617]: Ignoring "noauto" option for root device
	[  +6.231117] systemd-fstab-generator[880]: Ignoring "noauto" option for root device
	[  +0.063384] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.883713] systemd-fstab-generator[1002]: Ignoring "noauto" option for root device
	[ +13.758834] kauditd_printk_skb: 46 callbacks suppressed
	[Nov 4 12:12] systemd-fstab-generator[5108]: Ignoring "noauto" option for root device
	[Nov 4 12:14] systemd-fstab-generator[5387]: Ignoring "noauto" option for root device
	[  +0.067248] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 12:16:25 up 8 min,  0 users,  load average: 0.01, 0.08, 0.05
	Linux old-k8s-version-589257 5.10.207 #1 SMP Wed Oct 30 13:38:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Nov 04 12:16:22 old-k8s-version-589257 kubelet[5572]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run.func1(0xc00012a060, 0xc000668750)
	Nov 04 12:16:22 old-k8s-version-589257 kubelet[5572]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:130 +0x34
	Nov 04 12:16:22 old-k8s-version-589257 kubelet[5572]: created by k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run
	Nov 04 12:16:22 old-k8s-version-589257 kubelet[5572]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:129 +0xa5
	Nov 04 12:16:22 old-k8s-version-589257 kubelet[5572]: goroutine 147 [select]:
	Nov 04 12:16:22 old-k8s-version-589257 kubelet[5572]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0008bfef0, 0x4f0ac20, 0xc0005996d0, 0x1, 0xc00012a060)
	Nov 04 12:16:22 old-k8s-version-589257 kubelet[5572]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x149
	Nov 04 12:16:22 old-k8s-version-589257 kubelet[5572]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc00024ce00, 0xc00012a060)
	Nov 04 12:16:22 old-k8s-version-589257 kubelet[5572]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Nov 04 12:16:22 old-k8s-version-589257 kubelet[5572]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Nov 04 12:16:22 old-k8s-version-589257 kubelet[5572]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Nov 04 12:16:22 old-k8s-version-589257 kubelet[5572]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc0004f4870, 0xc00069cb00)
	Nov 04 12:16:22 old-k8s-version-589257 kubelet[5572]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Nov 04 12:16:22 old-k8s-version-589257 kubelet[5572]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Nov 04 12:16:22 old-k8s-version-589257 kubelet[5572]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Nov 04 12:16:22 old-k8s-version-589257 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Nov 04 12:16:22 old-k8s-version-589257 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Nov 04 12:16:22 old-k8s-version-589257 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20.
	Nov 04 12:16:22 old-k8s-version-589257 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Nov 04 12:16:22 old-k8s-version-589257 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Nov 04 12:16:22 old-k8s-version-589257 kubelet[5630]: I1104 12:16:22.849255    5630 server.go:416] Version: v1.20.0
	Nov 04 12:16:22 old-k8s-version-589257 kubelet[5630]: I1104 12:16:22.849628    5630 server.go:837] Client rotation is on, will bootstrap in background
	Nov 04 12:16:22 old-k8s-version-589257 kubelet[5630]: I1104 12:16:22.851535    5630 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Nov 04 12:16:22 old-k8s-version-589257 kubelet[5630]: W1104 12:16:22.852463    5630 manager.go:159] Cannot detect current cgroup on cgroup v2
	Nov 04 12:16:22 old-k8s-version-589257 kubelet[5630]: I1104 12:16:22.852752    5630 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-589257 -n old-k8s-version-589257
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-589257 -n old-k8s-version-589257: exit status 2 (239.894205ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-589257" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (724.94s)
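
A minimal sketch of the recovery steps suggested in the captured output above, consolidating the systemctl/journalctl/crictl commands from the kubeadm message and the --extra-config hint from minikube. It assumes shell access to the node through the same out/minikube-linux-amd64 binary and the old-k8s-version-589257 profile used in this run; none of these commands were executed as part of the recorded test.

	# Check kubelet health on the node, as suggested by the kubeadm output
	out/minikube-linux-amd64 ssh -p old-k8s-version-589257 "sudo systemctl status kubelet"
	out/minikube-linux-amd64 ssh -p old-k8s-version-589257 "sudo journalctl -xeu kubelet"
	# List control-plane containers via the CRI-O socket reported in the log
	out/minikube-linux-amd64 ssh -p old-k8s-version-589257 "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"
	# Hypothetical retry with the cgroup-driver override that minikube suggests (flags mirror the original start command from the audit log)
	out/minikube-linux-amd64 start -p old-k8s-version-589257 --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd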

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.18s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-325116 -n embed-certs-325116
start_stop_delete_test.go:274: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-11-04 12:21:14.089635135 +0000 UTC m=+6265.444693912
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
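A hedged sketch of how the dashboard wait above could be reproduced by hand; the selector, namespace, and 9m timeout come from the test output, while the kubeconfig context name is assumed to match the embed-certs-325116 profile, following the --context convention used elsewhere in this report.

	# Wait for the dashboard pod the test is polling for (selector/namespace/timeout from the log above)
	kubectl --context embed-certs-325116 -n kubernetes-dashboard wait pod --selector=k8s-app=kubernetes-dashboard --for=condition=Ready --timeout=9m
	# Inspect the matching pods if the wait times out
	kubectl --context embed-certs-325116 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard -o wide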
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-325116 -n embed-certs-325116
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-325116 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-325116 logs -n 25: (1.979599867s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p calico-528108 sudo                                  | calico-528108                | jenkins | v1.34.0 | 04 Nov 24 12:00 UTC | 04 Nov 24 12:00 UTC |
	|         | cri-dockerd --version                                  |                              |         |         |                     |                     |
	| ssh     | -p calico-528108 sudo                                  | calico-528108                | jenkins | v1.34.0 | 04 Nov 24 12:00 UTC |                     |
	|         | systemctl status containerd                            |                              |         |         |                     |                     |
	|         | --all --full --no-pager                                |                              |         |         |                     |                     |
	| ssh     | -p calico-528108 sudo                                  | calico-528108                | jenkins | v1.34.0 | 04 Nov 24 12:00 UTC | 04 Nov 24 12:00 UTC |
	|         | systemctl cat containerd                               |                              |         |         |                     |                     |
	|         | --no-pager                                             |                              |         |         |                     |                     |
	| ssh     | -p calico-528108 sudo cat                              | calico-528108                | jenkins | v1.34.0 | 04 Nov 24 12:00 UTC | 04 Nov 24 12:00 UTC |
	|         | /lib/systemd/system/containerd.service                 |                              |         |         |                     |                     |
	| ssh     | -p calico-528108 sudo cat                              | calico-528108                | jenkins | v1.34.0 | 04 Nov 24 12:00 UTC | 04 Nov 24 12:00 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p calico-528108 sudo                                  | calico-528108                | jenkins | v1.34.0 | 04 Nov 24 12:00 UTC | 04 Nov 24 12:00 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p calico-528108 sudo                                  | calico-528108                | jenkins | v1.34.0 | 04 Nov 24 12:00 UTC | 04 Nov 24 12:00 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p calico-528108 sudo                                  | calico-528108                | jenkins | v1.34.0 | 04 Nov 24 12:00 UTC | 04 Nov 24 12:00 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p calico-528108 sudo find                             | calico-528108                | jenkins | v1.34.0 | 04 Nov 24 12:00 UTC | 04 Nov 24 12:00 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p calico-528108 sudo crio                             | calico-528108                | jenkins | v1.34.0 | 04 Nov 24 12:00 UTC | 04 Nov 24 12:00 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p calico-528108                                       | calico-528108                | jenkins | v1.34.0 | 04 Nov 24 12:00 UTC | 04 Nov 24 12:00 UTC |
	| delete  | -p                                                     | disable-driver-mounts-457408 | jenkins | v1.34.0 | 04 Nov 24 12:00 UTC | 04 Nov 24 12:00 UTC |
	|         | disable-driver-mounts-457408                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-036892 | jenkins | v1.34.0 | 04 Nov 24 12:00 UTC | 04 Nov 24 12:01 UTC |
	|         | default-k8s-diff-port-036892                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-036892  | default-k8s-diff-port-036892 | jenkins | v1.34.0 | 04 Nov 24 12:01 UTC | 04 Nov 24 12:01 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-036892 | jenkins | v1.34.0 | 04 Nov 24 12:01 UTC |                     |
	|         | default-k8s-diff-port-036892                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-908370                  | no-preload-908370            | jenkins | v1.34.0 | 04 Nov 24 12:02 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-908370                                   | no-preload-908370            | jenkins | v1.34.0 | 04 Nov 24 12:02 UTC | 04 Nov 24 12:13 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-325116                 | embed-certs-325116           | jenkins | v1.34.0 | 04 Nov 24 12:02 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-589257        | old-k8s-version-589257       | jenkins | v1.34.0 | 04 Nov 24 12:02 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p embed-certs-325116                                  | embed-certs-325116           | jenkins | v1.34.0 | 04 Nov 24 12:02 UTC | 04 Nov 24 12:12 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-036892       | default-k8s-diff-port-036892 | jenkins | v1.34.0 | 04 Nov 24 12:04 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-589257                              | old-k8s-version-589257       | jenkins | v1.34.0 | 04 Nov 24 12:04 UTC | 04 Nov 24 12:04 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-036892 | jenkins | v1.34.0 | 04 Nov 24 12:04 UTC | 04 Nov 24 12:12 UTC |
	|         | default-k8s-diff-port-036892                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-589257             | old-k8s-version-589257       | jenkins | v1.34.0 | 04 Nov 24 12:04 UTC | 04 Nov 24 12:04 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-589257                              | old-k8s-version-589257       | jenkins | v1.34.0 | 04 Nov 24 12:04 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/11/04 12:04:21
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1104 12:04:21.684777   86402 out.go:345] Setting OutFile to fd 1 ...
	I1104 12:04:21.684885   86402 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1104 12:04:21.684893   86402 out.go:358] Setting ErrFile to fd 2...
	I1104 12:04:21.684897   86402 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1104 12:04:21.685085   86402 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19906-19898/.minikube/bin
	I1104 12:04:21.685618   86402 out.go:352] Setting JSON to false
	I1104 12:04:21.686501   86402 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":10013,"bootTime":1730711849,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1104 12:04:21.686603   86402 start.go:139] virtualization: kvm guest
	I1104 12:04:21.688652   86402 out.go:177] * [old-k8s-version-589257] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1104 12:04:21.690121   86402 notify.go:220] Checking for updates...
	I1104 12:04:21.690173   86402 out.go:177]   - MINIKUBE_LOCATION=19906
	I1104 12:04:21.691712   86402 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1104 12:04:21.693100   86402 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19906-19898/kubeconfig
	I1104 12:04:21.694334   86402 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19906-19898/.minikube
	I1104 12:04:21.695431   86402 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1104 12:04:21.696680   86402 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1104 12:04:21.698271   86402 config.go:182] Loaded profile config "old-k8s-version-589257": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1104 12:04:21.698697   86402 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:04:21.698738   86402 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:04:21.713382   86402 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46731
	I1104 12:04:21.713861   86402 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:04:21.714357   86402 main.go:141] libmachine: Using API Version  1
	I1104 12:04:21.714378   86402 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:04:21.714696   86402 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:04:21.714872   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .DriverName
	I1104 12:04:21.716711   86402 out.go:177] * Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	I1104 12:04:21.718136   86402 driver.go:394] Setting default libvirt URI to qemu:///system
	I1104 12:04:21.718573   86402 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:04:21.718617   86402 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:04:21.733074   86402 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45605
	I1104 12:04:21.733525   86402 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:04:21.733939   86402 main.go:141] libmachine: Using API Version  1
	I1104 12:04:21.733955   86402 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:04:21.734252   86402 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:04:21.734410   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .DriverName
	I1104 12:04:21.770049   86402 out.go:177] * Using the kvm2 driver based on existing profile
	I1104 12:04:21.771735   86402 start.go:297] selected driver: kvm2
	I1104 12:04:21.771748   86402 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-589257 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-589257 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.180 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1104 12:04:21.771851   86402 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1104 12:04:21.772615   86402 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1104 12:04:21.772709   86402 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19906-19898/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1104 12:04:21.787662   86402 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1104 12:04:21.788158   86402 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1104 12:04:21.788201   86402 cni.go:84] Creating CNI manager for ""
	I1104 12:04:21.788238   86402 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1104 12:04:21.788282   86402 start.go:340] cluster config:
	{Name:old-k8s-version-589257 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-589257 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.180 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1104 12:04:21.788422   86402 iso.go:125] acquiring lock: {Name:mk00c7dd6e02d348844f079f7574057e15cae010 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1104 12:04:21.790364   86402 out.go:177] * Starting "old-k8s-version-589257" primary control-plane node in "old-k8s-version-589257" cluster
	I1104 12:04:20.849476   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:04:20.393451   86301 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1104 12:04:20.393484   86301 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1104 12:04:20.393492   86301 cache.go:56] Caching tarball of preloaded images
	I1104 12:04:20.393580   86301 preload.go:172] Found /home/jenkins/minikube-integration/19906-19898/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1104 12:04:20.393594   86301 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1104 12:04:20.393670   86301 profile.go:143] Saving config to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/default-k8s-diff-port-036892/config.json ...
	I1104 12:04:20.393863   86301 start.go:360] acquireMachinesLock for default-k8s-diff-port-036892: {Name:mka4ed965095cfb58c9b0f5916edff39b4236a2f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1104 12:04:21.791568   86402 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1104 12:04:21.791599   86402 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1104 12:04:21.791608   86402 cache.go:56] Caching tarball of preloaded images
	I1104 12:04:21.791668   86402 preload.go:172] Found /home/jenkins/minikube-integration/19906-19898/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1104 12:04:21.791678   86402 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1104 12:04:21.791755   86402 profile.go:143] Saving config to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/old-k8s-version-589257/config.json ...
	I1104 12:04:21.791918   86402 start.go:360] acquireMachinesLock for old-k8s-version-589257: {Name:mka4ed965095cfb58c9b0f5916edff39b4236a2f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1104 12:04:26.929512   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:04:30.001546   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:04:36.081486   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:04:39.153496   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:04:45.233535   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:04:48.305510   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:04:54.385555   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:04:57.457513   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:05:03.537513   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:05:06.609487   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:05:12.689475   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:05:15.761508   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:05:21.841502   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:05:24.913609   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:05:30.993499   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:05:34.065502   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:05:40.145511   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:05:43.217478   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:05:49.297518   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:05:52.369526   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:05:58.449509   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:06:01.521498   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:06:07.601506   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:06:10.673509   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:06:16.753487   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:06:19.825549   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:06:25.905526   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:06:28.977526   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:06:35.057466   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:06:38.129670   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:06:44.209517   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:06:47.281541   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:06:53.361542   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:06:56.433564   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:07:02.513462   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:07:05.585513   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:07:11.665480   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:07:14.737542   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:07:17.742001   85759 start.go:364] duration metric: took 4m26.438155925s to acquireMachinesLock for "embed-certs-325116"
	I1104 12:07:17.742060   85759 start.go:96] Skipping create...Using existing machine configuration
	I1104 12:07:17.742068   85759 fix.go:54] fixHost starting: 
	I1104 12:07:17.742418   85759 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:07:17.742470   85759 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:07:17.758611   85759 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43597
	I1104 12:07:17.759173   85759 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:07:17.759750   85759 main.go:141] libmachine: Using API Version  1
	I1104 12:07:17.759774   85759 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:07:17.760116   85759 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:07:17.760326   85759 main.go:141] libmachine: (embed-certs-325116) Calling .DriverName
	I1104 12:07:17.760498   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetState
	I1104 12:07:17.762313   85759 fix.go:112] recreateIfNeeded on embed-certs-325116: state=Stopped err=<nil>
	I1104 12:07:17.762335   85759 main.go:141] libmachine: (embed-certs-325116) Calling .DriverName
	W1104 12:07:17.762469   85759 fix.go:138] unexpected machine state, will restart: <nil>
	I1104 12:07:17.764411   85759 out.go:177] * Restarting existing kvm2 VM for "embed-certs-325116" ...
	I1104 12:07:17.739255   85500 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1104 12:07:17.739306   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetMachineName
	I1104 12:07:17.739691   85500 buildroot.go:166] provisioning hostname "no-preload-908370"
	I1104 12:07:17.739718   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetMachineName
	I1104 12:07:17.739888   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHHostname
	I1104 12:07:17.741864   85500 machine.go:96] duration metric: took 4m37.421766695s to provisionDockerMachine
	I1104 12:07:17.741908   85500 fix.go:56] duration metric: took 4m37.442993443s for fixHost
	I1104 12:07:17.741918   85500 start.go:83] releasing machines lock for "no-preload-908370", held for 4m37.443015642s
	W1104 12:07:17.741938   85500 start.go:714] error starting host: provision: host is not running
	W1104 12:07:17.742034   85500 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I1104 12:07:17.742044   85500 start.go:729] Will try again in 5 seconds ...
	I1104 12:07:17.765958   85759 main.go:141] libmachine: (embed-certs-325116) Calling .Start
	I1104 12:07:17.766220   85759 main.go:141] libmachine: (embed-certs-325116) Ensuring networks are active...
	I1104 12:07:17.767191   85759 main.go:141] libmachine: (embed-certs-325116) Ensuring network default is active
	I1104 12:07:17.767589   85759 main.go:141] libmachine: (embed-certs-325116) Ensuring network mk-embed-certs-325116 is active
	I1104 12:07:17.767984   85759 main.go:141] libmachine: (embed-certs-325116) Getting domain xml...
	I1104 12:07:17.768804   85759 main.go:141] libmachine: (embed-certs-325116) Creating domain...
	I1104 12:07:18.996135   85759 main.go:141] libmachine: (embed-certs-325116) Waiting to get IP...
	I1104 12:07:18.997002   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:18.997542   85759 main.go:141] libmachine: (embed-certs-325116) DBG | unable to find current IP address of domain embed-certs-325116 in network mk-embed-certs-325116
	I1104 12:07:18.997615   85759 main.go:141] libmachine: (embed-certs-325116) DBG | I1104 12:07:18.997513   87021 retry.go:31] will retry after 239.606839ms: waiting for machine to come up
	I1104 12:07:19.239054   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:19.239579   85759 main.go:141] libmachine: (embed-certs-325116) DBG | unable to find current IP address of domain embed-certs-325116 in network mk-embed-certs-325116
	I1104 12:07:19.239602   85759 main.go:141] libmachine: (embed-certs-325116) DBG | I1104 12:07:19.239528   87021 retry.go:31] will retry after 303.459257ms: waiting for machine to come up
	I1104 12:07:19.545134   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:19.545597   85759 main.go:141] libmachine: (embed-certs-325116) DBG | unable to find current IP address of domain embed-certs-325116 in network mk-embed-certs-325116
	I1104 12:07:19.545633   85759 main.go:141] libmachine: (embed-certs-325116) DBG | I1104 12:07:19.545544   87021 retry.go:31] will retry after 394.511523ms: waiting for machine to come up
	I1104 12:07:19.942226   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:19.942607   85759 main.go:141] libmachine: (embed-certs-325116) DBG | unable to find current IP address of domain embed-certs-325116 in network mk-embed-certs-325116
	I1104 12:07:19.942630   85759 main.go:141] libmachine: (embed-certs-325116) DBG | I1104 12:07:19.942576   87021 retry.go:31] will retry after 381.618515ms: waiting for machine to come up
	I1104 12:07:20.326265   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:20.326707   85759 main.go:141] libmachine: (embed-certs-325116) DBG | unable to find current IP address of domain embed-certs-325116 in network mk-embed-certs-325116
	I1104 12:07:20.326738   85759 main.go:141] libmachine: (embed-certs-325116) DBG | I1104 12:07:20.326651   87021 retry.go:31] will retry after 584.226748ms: waiting for machine to come up
	I1104 12:07:20.912117   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:20.912575   85759 main.go:141] libmachine: (embed-certs-325116) DBG | unable to find current IP address of domain embed-certs-325116 in network mk-embed-certs-325116
	I1104 12:07:20.912607   85759 main.go:141] libmachine: (embed-certs-325116) DBG | I1104 12:07:20.912524   87021 retry.go:31] will retry after 770.080519ms: waiting for machine to come up
	I1104 12:07:22.742250   85500 start.go:360] acquireMachinesLock for no-preload-908370: {Name:mka4ed965095cfb58c9b0f5916edff39b4236a2f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1104 12:07:21.684620   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:21.685074   85759 main.go:141] libmachine: (embed-certs-325116) DBG | unable to find current IP address of domain embed-certs-325116 in network mk-embed-certs-325116
	I1104 12:07:21.685103   85759 main.go:141] libmachine: (embed-certs-325116) DBG | I1104 12:07:21.685026   87021 retry.go:31] will retry after 1.170018806s: waiting for machine to come up
	I1104 12:07:22.856736   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:22.857104   85759 main.go:141] libmachine: (embed-certs-325116) DBG | unable to find current IP address of domain embed-certs-325116 in network mk-embed-certs-325116
	I1104 12:07:22.857132   85759 main.go:141] libmachine: (embed-certs-325116) DBG | I1104 12:07:22.857048   87021 retry.go:31] will retry after 1.467304538s: waiting for machine to come up
	I1104 12:07:24.326735   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:24.327197   85759 main.go:141] libmachine: (embed-certs-325116) DBG | unable to find current IP address of domain embed-certs-325116 in network mk-embed-certs-325116
	I1104 12:07:24.327220   85759 main.go:141] libmachine: (embed-certs-325116) DBG | I1104 12:07:24.327148   87021 retry.go:31] will retry after 1.676202737s: waiting for machine to come up
	I1104 12:07:26.006035   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:26.006515   85759 main.go:141] libmachine: (embed-certs-325116) DBG | unable to find current IP address of domain embed-certs-325116 in network mk-embed-certs-325116
	I1104 12:07:26.006538   85759 main.go:141] libmachine: (embed-certs-325116) DBG | I1104 12:07:26.006460   87021 retry.go:31] will retry after 1.8778328s: waiting for machine to come up
	I1104 12:07:27.886226   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:27.886634   85759 main.go:141] libmachine: (embed-certs-325116) DBG | unable to find current IP address of domain embed-certs-325116 in network mk-embed-certs-325116
	I1104 12:07:27.886656   85759 main.go:141] libmachine: (embed-certs-325116) DBG | I1104 12:07:27.886579   87021 retry.go:31] will retry after 2.886548821s: waiting for machine to come up
	I1104 12:07:30.776677   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:30.777080   85759 main.go:141] libmachine: (embed-certs-325116) DBG | unable to find current IP address of domain embed-certs-325116 in network mk-embed-certs-325116
	I1104 12:07:30.777102   85759 main.go:141] libmachine: (embed-certs-325116) DBG | I1104 12:07:30.777039   87021 retry.go:31] will retry after 3.108966144s: waiting for machine to come up
	I1104 12:07:35.049920   86301 start.go:364] duration metric: took 3m14.656022924s to acquireMachinesLock for "default-k8s-diff-port-036892"
	I1104 12:07:35.050007   86301 start.go:96] Skipping create...Using existing machine configuration
	I1104 12:07:35.050019   86301 fix.go:54] fixHost starting: 
	I1104 12:07:35.050381   86301 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:07:35.050436   86301 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:07:35.067928   86301 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38865
	I1104 12:07:35.068445   86301 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:07:35.068953   86301 main.go:141] libmachine: Using API Version  1
	I1104 12:07:35.068976   86301 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:07:35.069353   86301 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:07:35.069560   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .DriverName
	I1104 12:07:35.069692   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetState
	I1104 12:07:35.071231   86301 fix.go:112] recreateIfNeeded on default-k8s-diff-port-036892: state=Stopped err=<nil>
	I1104 12:07:35.071252   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .DriverName
	W1104 12:07:35.071401   86301 fix.go:138] unexpected machine state, will restart: <nil>
	I1104 12:07:35.073762   86301 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-036892" ...
	I1104 12:07:35.075114   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .Start
	I1104 12:07:35.075311   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Ensuring networks are active...
	I1104 12:07:35.076105   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Ensuring network default is active
	I1104 12:07:35.076534   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Ensuring network mk-default-k8s-diff-port-036892 is active
	I1104 12:07:35.076946   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Getting domain xml...
	I1104 12:07:35.077641   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Creating domain...
	I1104 12:07:33.887738   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:33.888147   85759 main.go:141] libmachine: (embed-certs-325116) Found IP for machine: 192.168.39.47
	I1104 12:07:33.888176   85759 main.go:141] libmachine: (embed-certs-325116) Reserving static IP address...
	I1104 12:07:33.888206   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has current primary IP address 192.168.39.47 and MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:33.888737   85759 main.go:141] libmachine: (embed-certs-325116) DBG | found host DHCP lease matching {name: "embed-certs-325116", mac: "52:54:00:bd:ab:49", ip: "192.168.39.47"} in network mk-embed-certs-325116: {Iface:virbr1 ExpiryTime:2024-11-04 13:07:28 +0000 UTC Type:0 Mac:52:54:00:bd:ab:49 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:embed-certs-325116 Clientid:01:52:54:00:bd:ab:49}
	I1104 12:07:33.888769   85759 main.go:141] libmachine: (embed-certs-325116) DBG | skip adding static IP to network mk-embed-certs-325116 - found existing host DHCP lease matching {name: "embed-certs-325116", mac: "52:54:00:bd:ab:49", ip: "192.168.39.47"}
	I1104 12:07:33.888783   85759 main.go:141] libmachine: (embed-certs-325116) Reserved static IP address: 192.168.39.47
	I1104 12:07:33.888795   85759 main.go:141] libmachine: (embed-certs-325116) Waiting for SSH to be available...
	I1104 12:07:33.888812   85759 main.go:141] libmachine: (embed-certs-325116) DBG | Getting to WaitForSSH function...
	I1104 12:07:33.891130   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:33.891493   85759 main.go:141] libmachine: (embed-certs-325116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:ab:49", ip: ""} in network mk-embed-certs-325116: {Iface:virbr1 ExpiryTime:2024-11-04 13:07:28 +0000 UTC Type:0 Mac:52:54:00:bd:ab:49 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:embed-certs-325116 Clientid:01:52:54:00:bd:ab:49}
	I1104 12:07:33.891520   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined IP address 192.168.39.47 and MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:33.891670   85759 main.go:141] libmachine: (embed-certs-325116) DBG | Using SSH client type: external
	I1104 12:07:33.891693   85759 main.go:141] libmachine: (embed-certs-325116) DBG | Using SSH private key: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/embed-certs-325116/id_rsa (-rw-------)
	I1104 12:07:33.891732   85759 main.go:141] libmachine: (embed-certs-325116) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.47 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19906-19898/.minikube/machines/embed-certs-325116/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1104 12:07:33.891748   85759 main.go:141] libmachine: (embed-certs-325116) DBG | About to run SSH command:
	I1104 12:07:33.891773   85759 main.go:141] libmachine: (embed-certs-325116) DBG | exit 0
	I1104 12:07:34.012989   85759 main.go:141] libmachine: (embed-certs-325116) DBG | SSH cmd err, output: <nil>: 
	I1104 12:07:34.013457   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetConfigRaw
	I1104 12:07:34.014162   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetIP
	I1104 12:07:34.016645   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:34.017028   85759 main.go:141] libmachine: (embed-certs-325116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:ab:49", ip: ""} in network mk-embed-certs-325116: {Iface:virbr1 ExpiryTime:2024-11-04 13:07:28 +0000 UTC Type:0 Mac:52:54:00:bd:ab:49 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:embed-certs-325116 Clientid:01:52:54:00:bd:ab:49}
	I1104 12:07:34.017062   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined IP address 192.168.39.47 and MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:34.017347   85759 profile.go:143] Saving config to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/embed-certs-325116/config.json ...
	I1104 12:07:34.017577   85759 machine.go:93] provisionDockerMachine start ...
	I1104 12:07:34.017596   85759 main.go:141] libmachine: (embed-certs-325116) Calling .DriverName
	I1104 12:07:34.017824   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHHostname
	I1104 12:07:34.020134   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:34.020416   85759 main.go:141] libmachine: (embed-certs-325116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:ab:49", ip: ""} in network mk-embed-certs-325116: {Iface:virbr1 ExpiryTime:2024-11-04 13:07:28 +0000 UTC Type:0 Mac:52:54:00:bd:ab:49 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:embed-certs-325116 Clientid:01:52:54:00:bd:ab:49}
	I1104 12:07:34.020449   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined IP address 192.168.39.47 and MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:34.020570   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHPort
	I1104 12:07:34.020745   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHKeyPath
	I1104 12:07:34.020905   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHKeyPath
	I1104 12:07:34.021059   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHUsername
	I1104 12:07:34.021313   85759 main.go:141] libmachine: Using SSH client type: native
	I1104 12:07:34.021505   85759 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.47 22 <nil> <nil>}
	I1104 12:07:34.021515   85759 main.go:141] libmachine: About to run SSH command:
	hostname
	I1104 12:07:34.125266   85759 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1104 12:07:34.125305   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetMachineName
	I1104 12:07:34.125556   85759 buildroot.go:166] provisioning hostname "embed-certs-325116"
	I1104 12:07:34.125583   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetMachineName
	I1104 12:07:34.125781   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHHostname
	I1104 12:07:34.128180   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:34.128486   85759 main.go:141] libmachine: (embed-certs-325116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:ab:49", ip: ""} in network mk-embed-certs-325116: {Iface:virbr1 ExpiryTime:2024-11-04 13:07:28 +0000 UTC Type:0 Mac:52:54:00:bd:ab:49 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:embed-certs-325116 Clientid:01:52:54:00:bd:ab:49}
	I1104 12:07:34.128514   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined IP address 192.168.39.47 and MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:34.128603   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHPort
	I1104 12:07:34.128758   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHKeyPath
	I1104 12:07:34.128890   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHKeyPath
	I1104 12:07:34.128996   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHUsername
	I1104 12:07:34.129166   85759 main.go:141] libmachine: Using SSH client type: native
	I1104 12:07:34.129371   85759 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.47 22 <nil> <nil>}
	I1104 12:07:34.129394   85759 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-325116 && echo "embed-certs-325116" | sudo tee /etc/hostname
	I1104 12:07:34.242027   85759 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-325116
	
	I1104 12:07:34.242054   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHHostname
	I1104 12:07:34.244671   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:34.244984   85759 main.go:141] libmachine: (embed-certs-325116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:ab:49", ip: ""} in network mk-embed-certs-325116: {Iface:virbr1 ExpiryTime:2024-11-04 13:07:28 +0000 UTC Type:0 Mac:52:54:00:bd:ab:49 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:embed-certs-325116 Clientid:01:52:54:00:bd:ab:49}
	I1104 12:07:34.245019   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined IP address 192.168.39.47 and MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:34.245159   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHPort
	I1104 12:07:34.245337   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHKeyPath
	I1104 12:07:34.245514   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHKeyPath
	I1104 12:07:34.245661   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHUsername
	I1104 12:07:34.245810   85759 main.go:141] libmachine: Using SSH client type: native
	I1104 12:07:34.245971   85759 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.47 22 <nil> <nil>}
	I1104 12:07:34.245986   85759 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-325116' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-325116/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-325116' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1104 12:07:34.357178   85759 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1104 12:07:34.357204   85759 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19906-19898/.minikube CaCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19906-19898/.minikube}
	I1104 12:07:34.357220   85759 buildroot.go:174] setting up certificates
	I1104 12:07:34.357241   85759 provision.go:84] configureAuth start
	I1104 12:07:34.357250   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetMachineName
	I1104 12:07:34.357533   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetIP
	I1104 12:07:34.359993   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:34.360308   85759 main.go:141] libmachine: (embed-certs-325116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:ab:49", ip: ""} in network mk-embed-certs-325116: {Iface:virbr1 ExpiryTime:2024-11-04 13:07:28 +0000 UTC Type:0 Mac:52:54:00:bd:ab:49 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:embed-certs-325116 Clientid:01:52:54:00:bd:ab:49}
	I1104 12:07:34.360327   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined IP address 192.168.39.47 and MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:34.360533   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHHostname
	I1104 12:07:34.362459   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:34.362750   85759 main.go:141] libmachine: (embed-certs-325116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:ab:49", ip: ""} in network mk-embed-certs-325116: {Iface:virbr1 ExpiryTime:2024-11-04 13:07:28 +0000 UTC Type:0 Mac:52:54:00:bd:ab:49 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:embed-certs-325116 Clientid:01:52:54:00:bd:ab:49}
	I1104 12:07:34.362786   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined IP address 192.168.39.47 and MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:34.362932   85759 provision.go:143] copyHostCerts
	I1104 12:07:34.362986   85759 exec_runner.go:144] found /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem, removing ...
	I1104 12:07:34.363022   85759 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem
	I1104 12:07:34.363109   85759 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem (1123 bytes)
	I1104 12:07:34.363231   85759 exec_runner.go:144] found /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem, removing ...
	I1104 12:07:34.363242   85759 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem
	I1104 12:07:34.363282   85759 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem (1679 bytes)
	I1104 12:07:34.363357   85759 exec_runner.go:144] found /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem, removing ...
	I1104 12:07:34.363368   85759 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem
	I1104 12:07:34.363399   85759 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem (1078 bytes)
	I1104 12:07:34.363503   85759 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem org=jenkins.embed-certs-325116 san=[127.0.0.1 192.168.39.47 embed-certs-325116 localhost minikube]
	I1104 12:07:34.453223   85759 provision.go:177] copyRemoteCerts
	I1104 12:07:34.453295   85759 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1104 12:07:34.453317   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHHostname
	I1104 12:07:34.455736   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:34.456022   85759 main.go:141] libmachine: (embed-certs-325116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:ab:49", ip: ""} in network mk-embed-certs-325116: {Iface:virbr1 ExpiryTime:2024-11-04 13:07:28 +0000 UTC Type:0 Mac:52:54:00:bd:ab:49 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:embed-certs-325116 Clientid:01:52:54:00:bd:ab:49}
	I1104 12:07:34.456054   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined IP address 192.168.39.47 and MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:34.456230   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHPort
	I1104 12:07:34.456406   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHKeyPath
	I1104 12:07:34.456539   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHUsername
	I1104 12:07:34.456631   85759 sshutil.go:53] new ssh client: &{IP:192.168.39.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/embed-certs-325116/id_rsa Username:docker}
	I1104 12:07:34.539172   85759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1104 12:07:34.561889   85759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1104 12:07:34.585111   85759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1104 12:07:34.607449   85759 provision.go:87] duration metric: took 250.195255ms to configureAuth
	I1104 12:07:34.607495   85759 buildroot.go:189] setting minikube options for container-runtime
	I1104 12:07:34.607809   85759 config.go:182] Loaded profile config "embed-certs-325116": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1104 12:07:34.607952   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHHostname
	I1104 12:07:34.610672   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:34.611009   85759 main.go:141] libmachine: (embed-certs-325116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:ab:49", ip: ""} in network mk-embed-certs-325116: {Iface:virbr1 ExpiryTime:2024-11-04 13:07:28 +0000 UTC Type:0 Mac:52:54:00:bd:ab:49 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:embed-certs-325116 Clientid:01:52:54:00:bd:ab:49}
	I1104 12:07:34.611032   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined IP address 192.168.39.47 and MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:34.611253   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHPort
	I1104 12:07:34.611444   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHKeyPath
	I1104 12:07:34.611600   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHKeyPath
	I1104 12:07:34.611739   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHUsername
	I1104 12:07:34.611917   85759 main.go:141] libmachine: Using SSH client type: native
	I1104 12:07:34.612086   85759 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.47 22 <nil> <nil>}
	I1104 12:07:34.612101   85759 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1104 12:07:34.823086   85759 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1104 12:07:34.823114   85759 machine.go:96] duration metric: took 805.522353ms to provisionDockerMachine
	I1104 12:07:34.823128   85759 start.go:293] postStartSetup for "embed-certs-325116" (driver="kvm2")
	I1104 12:07:34.823138   85759 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1104 12:07:34.823174   85759 main.go:141] libmachine: (embed-certs-325116) Calling .DriverName
	I1104 12:07:34.823451   85759 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1104 12:07:34.823489   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHHostname
	I1104 12:07:34.826112   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:34.826453   85759 main.go:141] libmachine: (embed-certs-325116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:ab:49", ip: ""} in network mk-embed-certs-325116: {Iface:virbr1 ExpiryTime:2024-11-04 13:07:28 +0000 UTC Type:0 Mac:52:54:00:bd:ab:49 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:embed-certs-325116 Clientid:01:52:54:00:bd:ab:49}
	I1104 12:07:34.826482   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined IP address 192.168.39.47 and MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:34.826581   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHPort
	I1104 12:07:34.826756   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHKeyPath
	I1104 12:07:34.826886   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHUsername
	I1104 12:07:34.826998   85759 sshutil.go:53] new ssh client: &{IP:192.168.39.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/embed-certs-325116/id_rsa Username:docker}
	I1104 12:07:34.907354   85759 ssh_runner.go:195] Run: cat /etc/os-release
	I1104 12:07:34.911229   85759 info.go:137] Remote host: Buildroot 2023.02.9
	I1104 12:07:34.911246   85759 filesync.go:126] Scanning /home/jenkins/minikube-integration/19906-19898/.minikube/addons for local assets ...
	I1104 12:07:34.911316   85759 filesync.go:126] Scanning /home/jenkins/minikube-integration/19906-19898/.minikube/files for local assets ...
	I1104 12:07:34.911402   85759 filesync.go:149] local asset: /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem -> 272182.pem in /etc/ssl/certs
	I1104 12:07:34.911516   85759 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1104 12:07:34.920149   85759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem --> /etc/ssl/certs/272182.pem (1708 bytes)
	I1104 12:07:34.942468   85759 start.go:296] duration metric: took 119.32654ms for postStartSetup
	I1104 12:07:34.942517   85759 fix.go:56] duration metric: took 17.200448721s for fixHost
	I1104 12:07:34.942540   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHHostname
	I1104 12:07:34.945295   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:34.945659   85759 main.go:141] libmachine: (embed-certs-325116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:ab:49", ip: ""} in network mk-embed-certs-325116: {Iface:virbr1 ExpiryTime:2024-11-04 13:07:28 +0000 UTC Type:0 Mac:52:54:00:bd:ab:49 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:embed-certs-325116 Clientid:01:52:54:00:bd:ab:49}
	I1104 12:07:34.945685   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined IP address 192.168.39.47 and MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:34.945847   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHPort
	I1104 12:07:34.946006   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHKeyPath
	I1104 12:07:34.946173   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHKeyPath
	I1104 12:07:34.946311   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHUsername
	I1104 12:07:34.946442   85759 main.go:141] libmachine: Using SSH client type: native
	I1104 12:07:34.946583   85759 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.47 22 <nil> <nil>}
	I1104 12:07:34.946592   85759 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1104 12:07:35.049767   85759 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730722055.017047529
	
	I1104 12:07:35.049790   85759 fix.go:216] guest clock: 1730722055.017047529
	I1104 12:07:35.049797   85759 fix.go:229] Guest: 2024-11-04 12:07:35.017047529 +0000 UTC Remote: 2024-11-04 12:07:34.942522008 +0000 UTC m=+283.781167350 (delta=74.525521ms)
	I1104 12:07:35.049829   85759 fix.go:200] guest clock delta is within tolerance: 74.525521ms
	I1104 12:07:35.049834   85759 start.go:83] releasing machines lock for "embed-certs-325116", held for 17.307794416s
	I1104 12:07:35.049859   85759 main.go:141] libmachine: (embed-certs-325116) Calling .DriverName
	I1104 12:07:35.050137   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetIP
	I1104 12:07:35.052845   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:35.053238   85759 main.go:141] libmachine: (embed-certs-325116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:ab:49", ip: ""} in network mk-embed-certs-325116: {Iface:virbr1 ExpiryTime:2024-11-04 13:07:28 +0000 UTC Type:0 Mac:52:54:00:bd:ab:49 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:embed-certs-325116 Clientid:01:52:54:00:bd:ab:49}
	I1104 12:07:35.053269   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined IP address 192.168.39.47 and MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:35.053454   85759 main.go:141] libmachine: (embed-certs-325116) Calling .DriverName
	I1104 12:07:35.054050   85759 main.go:141] libmachine: (embed-certs-325116) Calling .DriverName
	I1104 12:07:35.054239   85759 main.go:141] libmachine: (embed-certs-325116) Calling .DriverName
	I1104 12:07:35.054337   85759 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1104 12:07:35.054383   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHHostname
	I1104 12:07:35.054502   85759 ssh_runner.go:195] Run: cat /version.json
	I1104 12:07:35.054539   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHHostname
	I1104 12:07:35.057289   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:35.057391   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:35.057733   85759 main.go:141] libmachine: (embed-certs-325116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:ab:49", ip: ""} in network mk-embed-certs-325116: {Iface:virbr1 ExpiryTime:2024-11-04 13:07:28 +0000 UTC Type:0 Mac:52:54:00:bd:ab:49 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:embed-certs-325116 Clientid:01:52:54:00:bd:ab:49}
	I1104 12:07:35.057778   85759 main.go:141] libmachine: (embed-certs-325116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:ab:49", ip: ""} in network mk-embed-certs-325116: {Iface:virbr1 ExpiryTime:2024-11-04 13:07:28 +0000 UTC Type:0 Mac:52:54:00:bd:ab:49 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:embed-certs-325116 Clientid:01:52:54:00:bd:ab:49}
	I1104 12:07:35.057802   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined IP address 192.168.39.47 and MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:35.057820   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined IP address 192.168.39.47 and MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:35.057959   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHPort
	I1104 12:07:35.057996   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHPort
	I1104 12:07:35.058110   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHKeyPath
	I1104 12:07:35.058296   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHKeyPath
	I1104 12:07:35.058316   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHUsername
	I1104 12:07:35.058485   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHUsername
	I1104 12:07:35.058485   85759 sshutil.go:53] new ssh client: &{IP:192.168.39.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/embed-certs-325116/id_rsa Username:docker}
	I1104 12:07:35.058658   85759 sshutil.go:53] new ssh client: &{IP:192.168.39.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/embed-certs-325116/id_rsa Username:docker}
	I1104 12:07:35.134602   85759 ssh_runner.go:195] Run: systemctl --version
	I1104 12:07:35.158961   85759 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1104 12:07:35.303038   85759 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1104 12:07:35.309611   85759 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1104 12:07:35.309674   85759 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1104 12:07:35.325082   85759 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1104 12:07:35.325142   85759 start.go:495] detecting cgroup driver to use...
	I1104 12:07:35.325211   85759 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1104 12:07:35.341673   85759 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1104 12:07:35.355506   85759 docker.go:217] disabling cri-docker service (if available) ...
	I1104 12:07:35.355569   85759 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1104 12:07:35.369017   85759 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1104 12:07:35.382745   85759 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1104 12:07:35.498985   85759 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1104 12:07:35.648628   85759 docker.go:233] disabling docker service ...
	I1104 12:07:35.648702   85759 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1104 12:07:35.666912   85759 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1104 12:07:35.679786   85759 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1104 12:07:35.799284   85759 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1104 12:07:35.931842   85759 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1104 12:07:35.945707   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1104 12:07:35.965183   85759 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1104 12:07:35.965269   85759 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:07:35.975446   85759 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1104 12:07:35.975514   85759 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:07:35.985968   85759 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:07:35.996462   85759 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:07:36.006840   85759 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1104 12:07:36.017174   85759 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:07:36.027013   85759 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:07:36.044572   85759 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:07:36.054046   85759 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1104 12:07:36.063355   85759 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1104 12:07:36.063399   85759 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1104 12:07:36.075157   85759 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1104 12:07:36.084713   85759 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1104 12:07:36.205088   85759 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1104 12:07:36.299330   85759 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1104 12:07:36.299423   85759 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1104 12:07:36.304194   85759 start.go:563] Will wait 60s for crictl version
	I1104 12:07:36.304248   85759 ssh_runner.go:195] Run: which crictl
	I1104 12:07:36.308041   85759 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1104 12:07:36.349114   85759 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1104 12:07:36.349264   85759 ssh_runner.go:195] Run: crio --version
	I1104 12:07:36.378677   85759 ssh_runner.go:195] Run: crio --version
	I1104 12:07:36.406751   85759 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1104 12:07:36.335603   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Waiting to get IP...
	I1104 12:07:36.336431   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:36.336921   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | unable to find current IP address of domain default-k8s-diff-port-036892 in network mk-default-k8s-diff-port-036892
	I1104 12:07:36.337007   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | I1104 12:07:36.336911   87142 retry.go:31] will retry after 289.750795ms: waiting for machine to come up
	I1104 12:07:36.628712   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:36.629301   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | unable to find current IP address of domain default-k8s-diff-port-036892 in network mk-default-k8s-diff-port-036892
	I1104 12:07:36.629419   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | I1104 12:07:36.629345   87142 retry.go:31] will retry after 356.596321ms: waiting for machine to come up
	I1104 12:07:36.988173   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:36.988663   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | unable to find current IP address of domain default-k8s-diff-port-036892 in network mk-default-k8s-diff-port-036892
	I1104 12:07:36.988713   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | I1104 12:07:36.988626   87142 retry.go:31] will retry after 446.62367ms: waiting for machine to come up
	I1104 12:07:37.437529   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:37.438120   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | unable to find current IP address of domain default-k8s-diff-port-036892 in network mk-default-k8s-diff-port-036892
	I1104 12:07:37.438174   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | I1104 12:07:37.438023   87142 retry.go:31] will retry after 482.072639ms: waiting for machine to come up
	I1104 12:07:37.921514   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:37.922025   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | unable to find current IP address of domain default-k8s-diff-port-036892 in network mk-default-k8s-diff-port-036892
	I1104 12:07:37.922056   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | I1104 12:07:37.921983   87142 retry.go:31] will retry after 645.10615ms: waiting for machine to come up
	I1104 12:07:38.569009   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:38.569524   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | unable to find current IP address of domain default-k8s-diff-port-036892 in network mk-default-k8s-diff-port-036892
	I1104 12:07:38.569566   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | I1104 12:07:38.569432   87142 retry.go:31] will retry after 841.352802ms: waiting for machine to come up
	I1104 12:07:39.412662   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:39.413091   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | unable to find current IP address of domain default-k8s-diff-port-036892 in network mk-default-k8s-diff-port-036892
	I1104 12:07:39.413112   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | I1104 12:07:39.413047   87142 retry.go:31] will retry after 878.218722ms: waiting for machine to come up
	I1104 12:07:36.407939   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetIP
	I1104 12:07:36.411021   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:36.411378   85759 main.go:141] libmachine: (embed-certs-325116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:ab:49", ip: ""} in network mk-embed-certs-325116: {Iface:virbr1 ExpiryTime:2024-11-04 13:07:28 +0000 UTC Type:0 Mac:52:54:00:bd:ab:49 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:embed-certs-325116 Clientid:01:52:54:00:bd:ab:49}
	I1104 12:07:36.411408   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined IP address 192.168.39.47 and MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:36.411599   85759 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1104 12:07:36.415528   85759 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1104 12:07:36.427484   85759 kubeadm.go:883] updating cluster {Name:embed-certs-325116 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.2 ClusterName:embed-certs-325116 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.47 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1104 12:07:36.427616   85759 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1104 12:07:36.427684   85759 ssh_runner.go:195] Run: sudo crictl images --output json
	I1104 12:07:36.460332   85759 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1104 12:07:36.460406   85759 ssh_runner.go:195] Run: which lz4
	I1104 12:07:36.464187   85759 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1104 12:07:36.468140   85759 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1104 12:07:36.468177   85759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1104 12:07:37.703067   85759 crio.go:462] duration metric: took 1.238901186s to copy over tarball
	I1104 12:07:37.703136   85759 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1104 12:07:39.803761   85759 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.100578378s)
	I1104 12:07:39.803795   85759 crio.go:469] duration metric: took 2.100697698s to extract the tarball
	I1104 12:07:39.803805   85759 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1104 12:07:39.840536   85759 ssh_runner.go:195] Run: sudo crictl images --output json
	I1104 12:07:39.883410   85759 crio.go:514] all images are preloaded for cri-o runtime.
	I1104 12:07:39.883431   85759 cache_images.go:84] Images are preloaded, skipping loading
	I1104 12:07:39.883438   85759 kubeadm.go:934] updating node { 192.168.39.47 8443 v1.31.2 crio true true} ...
	I1104 12:07:39.883531   85759 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-325116 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.47
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:embed-certs-325116 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1104 12:07:39.883608   85759 ssh_runner.go:195] Run: crio config
	I1104 12:07:39.928280   85759 cni.go:84] Creating CNI manager for ""
	I1104 12:07:39.928303   85759 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1104 12:07:39.928313   85759 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1104 12:07:39.928333   85759 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.47 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-325116 NodeName:embed-certs-325116 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.47"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.47 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1104 12:07:39.928440   85759 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.47
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-325116"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.47"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.47"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1104 12:07:39.928495   85759 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1104 12:07:39.938496   85759 binaries.go:44] Found k8s binaries, skipping transfer
	I1104 12:07:39.938568   85759 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1104 12:07:39.947809   85759 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1104 12:07:39.963319   85759 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1104 12:07:39.978789   85759 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2295 bytes)
	I1104 12:07:39.994910   85759 ssh_runner.go:195] Run: grep 192.168.39.47	control-plane.minikube.internal$ /etc/hosts
	I1104 12:07:39.998355   85759 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.47	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1104 12:07:40.010097   85759 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1104 12:07:40.118679   85759 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1104 12:07:40.134369   85759 certs.go:68] Setting up /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/embed-certs-325116 for IP: 192.168.39.47
	I1104 12:07:40.134391   85759 certs.go:194] generating shared ca certs ...
	I1104 12:07:40.134429   85759 certs.go:226] acquiring lock for ca certs: {Name:mk4fb0469da6697654d0290fd25edddd463f47c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 12:07:40.134612   85759 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.key
	I1104 12:07:40.134666   85759 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.key
	I1104 12:07:40.134680   85759 certs.go:256] generating profile certs ...
	I1104 12:07:40.134782   85759 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/embed-certs-325116/client.key
	I1104 12:07:40.134880   85759 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/embed-certs-325116/apiserver.key.36f6fb66
	I1104 12:07:40.134929   85759 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/embed-certs-325116/proxy-client.key
	I1104 12:07:40.135083   85759 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218.pem (1338 bytes)
	W1104 12:07:40.135124   85759 certs.go:480] ignoring /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218_empty.pem, impossibly tiny 0 bytes
	I1104 12:07:40.135140   85759 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem (1675 bytes)
	I1104 12:07:40.135225   85759 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem (1078 bytes)
	I1104 12:07:40.135281   85759 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem (1123 bytes)
	I1104 12:07:40.135315   85759 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem (1679 bytes)
	I1104 12:07:40.135380   85759 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem (1708 bytes)
	I1104 12:07:40.136240   85759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1104 12:07:40.179608   85759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1104 12:07:40.227851   85759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1104 12:07:40.255791   85759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1104 12:07:40.281672   85759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/embed-certs-325116/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1104 12:07:40.305960   85759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/embed-certs-325116/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1104 12:07:40.332465   85759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/embed-certs-325116/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1104 12:07:40.354950   85759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/embed-certs-325116/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1104 12:07:40.377476   85759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218.pem --> /usr/share/ca-certificates/27218.pem (1338 bytes)
	I1104 12:07:40.399291   85759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem --> /usr/share/ca-certificates/272182.pem (1708 bytes)
	I1104 12:07:40.420689   85759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1104 12:07:40.443610   85759 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1104 12:07:40.459706   85759 ssh_runner.go:195] Run: openssl version
	I1104 12:07:40.465244   85759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/27218.pem && ln -fs /usr/share/ca-certificates/27218.pem /etc/ssl/certs/27218.pem"
	I1104 12:07:40.475375   85759 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/27218.pem
	I1104 12:07:40.479676   85759 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  4 10:49 /usr/share/ca-certificates/27218.pem
	I1104 12:07:40.479748   85759 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/27218.pem
	I1104 12:07:40.485523   85759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/27218.pem /etc/ssl/certs/51391683.0"
	I1104 12:07:40.497163   85759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/272182.pem && ln -fs /usr/share/ca-certificates/272182.pem /etc/ssl/certs/272182.pem"
	I1104 12:07:40.509090   85759 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/272182.pem
	I1104 12:07:40.513617   85759 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  4 10:49 /usr/share/ca-certificates/272182.pem
	I1104 12:07:40.513685   85759 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/272182.pem
	I1104 12:07:40.519372   85759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/272182.pem /etc/ssl/certs/3ec20f2e.0"
	I1104 12:07:40.530944   85759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1104 12:07:40.542569   85759 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1104 12:07:40.546965   85759 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  4 10:38 /usr/share/ca-certificates/minikubeCA.pem
	I1104 12:07:40.547019   85759 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1104 12:07:40.552470   85759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
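The test -L / ln -fs pairs above install each CA into OpenSSL's trust directory under its subject-hash filename, which is how OpenSSL looks up issuers in /etc/ssl/certs. The same step for one certificate, as a sketch (paths and hash taken from this run):

    CERT=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")   # b5213941 in this run
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"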
	I1104 12:07:40.562456   85759 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1104 12:07:40.566967   85759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1104 12:07:40.572778   85759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1104 12:07:40.578409   85759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1104 12:07:40.584134   85759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1104 12:07:40.589880   85759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1104 12:07:40.595604   85759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
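Each -checkend 86400 call above asks openssl whether the certificate will still be valid 86400 seconds (24 hours) from now; a non-zero exit marks it as expiring, and minikube would then regenerate it. The same check over the certificates examined here, as a sketch:

    for c in apiserver-etcd-client apiserver-kubelet-client front-proxy-client \
             etcd/server etcd/healthcheck-client etcd/peer; do
      sudo openssl x509 -noout -in "/var/lib/minikube/certs/${c}.crt" -checkend 86400 \
        || echo "${c}.crt expires within 24h"
    done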
	I1104 12:07:40.601191   85759 kubeadm.go:392] StartCluster: {Name:embed-certs-325116 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-325116 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.47 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1104 12:07:40.601329   85759 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1104 12:07:40.601385   85759 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1104 12:07:40.642970   85759 cri.go:89] found id: ""
	I1104 12:07:40.643034   85759 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1104 12:07:40.653420   85759 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1104 12:07:40.653446   85759 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1104 12:07:40.653496   85759 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1104 12:07:40.663023   85759 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1104 12:07:40.664008   85759 kubeconfig.go:125] found "embed-certs-325116" server: "https://192.168.39.47:8443"
	I1104 12:07:40.665967   85759 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1104 12:07:40.675296   85759 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.47
	I1104 12:07:40.675324   85759 kubeadm.go:1160] stopping kube-system containers ...
	I1104 12:07:40.675336   85759 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1104 12:07:40.675384   85759 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1104 12:07:40.718457   85759 cri.go:89] found id: ""
	I1104 12:07:40.718543   85759 ssh_runner.go:195] Run: sudo systemctl stop kubelet
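Stopping the kube-system containers amounts to asking CRI-O, via crictl, for every container labelled with the kube-system namespace and stopping it before kubelet itself is stopped; in this run the query returned nothing, so only kubelet was stopped. A sketch of the sequence:

    ids=$(sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system)
    [ -n "$ids" ] && sudo crictl stop $ids
    sudo systemctl stop kubelet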
	I1104 12:07:40.733875   85759 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1104 12:07:40.743811   85759 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1104 12:07:40.743835   85759 kubeadm.go:157] found existing configuration files:
	
	I1104 12:07:40.743889   85759 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1104 12:07:40.752987   85759 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1104 12:07:40.753048   85759 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1104 12:07:40.762296   85759 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1104 12:07:40.771048   85759 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1104 12:07:40.771112   85759 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1104 12:07:40.780163   85759 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1104 12:07:40.789500   85759 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1104 12:07:40.789566   85759 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1104 12:07:40.799200   85759 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1104 12:07:40.808061   85759 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1104 12:07:40.808121   85759 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1104 12:07:40.817445   85759 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1104 12:07:40.826803   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1104 12:07:40.934345   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1104 12:07:40.292591   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:40.293050   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | unable to find current IP address of domain default-k8s-diff-port-036892 in network mk-default-k8s-diff-port-036892
	I1104 12:07:40.293084   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | I1104 12:07:40.292988   87142 retry.go:31] will retry after 1.110341741s: waiting for machine to come up
	I1104 12:07:41.405407   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:41.405858   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | unable to find current IP address of domain default-k8s-diff-port-036892 in network mk-default-k8s-diff-port-036892
	I1104 12:07:41.405885   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | I1104 12:07:41.405800   87142 retry.go:31] will retry after 1.311587036s: waiting for machine to come up
	I1104 12:07:42.719157   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:42.719540   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | unable to find current IP address of domain default-k8s-diff-port-036892 in network mk-default-k8s-diff-port-036892
	I1104 12:07:42.719591   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | I1104 12:07:42.719530   87142 retry.go:31] will retry after 1.999866716s: waiting for machine to come up
	I1104 12:07:44.721872   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:44.722324   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | unable to find current IP address of domain default-k8s-diff-port-036892 in network mk-default-k8s-diff-port-036892
	I1104 12:07:44.722351   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | I1104 12:07:44.722278   87142 retry.go:31] will retry after 2.895241769s: waiting for machine to come up
	I1104 12:07:41.512710   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1104 12:07:41.729355   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1104 12:07:41.807064   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
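Because existing configuration files were found (kubeadm.go:408 above), minikube restarts the control plane by re-running individual kubeadm init phases against the existing data directory instead of performing a full kubeadm init. Collected from this run (the phases are interleaved with another profile's output above), the sequence looks like this sketch; the addon phase only runs later, once the apiserver answers /healthz:

    CFG=/var/tmp/minikube/kubeadm.yaml
    BIN=/var/lib/minikube/binaries/v1.31.2
    for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
      sudo env PATH="$BIN:$PATH" kubeadm init phase $phase --config "$CFG"
    done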
	I1104 12:07:41.888493   85759 api_server.go:52] waiting for apiserver process to appear ...
	I1104 12:07:41.888593   85759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:07:42.389674   85759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:07:42.889373   85759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:07:43.389705   85759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:07:43.889548   85759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:07:43.924248   85759 api_server.go:72] duration metric: took 2.035753888s to wait for apiserver process to appear ...
	I1104 12:07:43.924277   85759 api_server.go:88] waiting for apiserver healthz status ...
	I1104 12:07:43.924320   85759 api_server.go:253] Checking apiserver healthz at https://192.168.39.47:8443/healthz ...
	I1104 12:07:43.924831   85759 api_server.go:269] stopped: https://192.168.39.47:8443/healthz: Get "https://192.168.39.47:8443/healthz": dial tcp 192.168.39.47:8443: connect: connection refused
	I1104 12:07:44.424651   85759 api_server.go:253] Checking apiserver healthz at https://192.168.39.47:8443/healthz ...
	I1104 12:07:47.043002   85759 api_server.go:279] https://192.168.39.47:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1104 12:07:47.043037   85759 api_server.go:103] status: https://192.168.39.47:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1104 12:07:47.043054   85759 api_server.go:253] Checking apiserver healthz at https://192.168.39.47:8443/healthz ...
	I1104 12:07:47.104246   85759 api_server.go:279] https://192.168.39.47:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1104 12:07:47.104276   85759 api_server.go:103] status: https://192.168.39.47:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1104 12:07:47.424506   85759 api_server.go:253] Checking apiserver healthz at https://192.168.39.47:8443/healthz ...
	I1104 12:07:47.430506   85759 api_server.go:279] https://192.168.39.47:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1104 12:07:47.430544   85759 api_server.go:103] status: https://192.168.39.47:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1104 12:07:47.924409   85759 api_server.go:253] Checking apiserver healthz at https://192.168.39.47:8443/healthz ...
	I1104 12:07:47.937055   85759 api_server.go:279] https://192.168.39.47:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1104 12:07:47.937083   85759 api_server.go:103] status: https://192.168.39.47:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1104 12:07:48.424568   85759 api_server.go:253] Checking apiserver healthz at https://192.168.39.47:8443/healthz ...
	I1104 12:07:48.428850   85759 api_server.go:279] https://192.168.39.47:8443/healthz returned 200:
	ok
	I1104 12:07:48.436388   85759 api_server.go:141] control plane version: v1.31.2
	I1104 12:07:48.436411   85759 api_server.go:131] duration metric: took 4.512127349s to wait for apiserver health ...
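The healthz wait above is a plain poll of https://192.168.39.47:8443/healthz: connection refused while the apiserver process starts, 403 for the anonymous probe until the RBAC bootstrap roles exist, 500 while post-start hooks are still failing, and finally 200/ok. A rough on-node equivalent using the kubeconfig written by the kubeconfig phase (sketch; minikube's own checker speaks HTTPS directly):

    KUBECTL=/var/lib/minikube/binaries/v1.31.2/kubectl
    until sudo KUBECONFIG=/etc/kubernetes/admin.conf "$KUBECTL" get --raw /healthz 2>/dev/null | grep -q ok; do
      sleep 0.5
    done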
	I1104 12:07:48.436420   85759 cni.go:84] Creating CNI manager for ""
	I1104 12:07:48.436427   85759 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1104 12:07:48.438220   85759 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1104 12:07:48.439495   85759 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1104 12:07:48.449650   85759 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1104 12:07:48.467313   85759 system_pods.go:43] waiting for kube-system pods to appear ...
	I1104 12:07:48.480777   85759 system_pods.go:59] 8 kube-system pods found
	I1104 12:07:48.480823   85759 system_pods.go:61] "coredns-7c65d6cfc9-mf8xg" [c0162005-7971-4161-9575-9f36c13d54f2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1104 12:07:48.480834   85759 system_pods.go:61] "etcd-embed-certs-325116" [4cfeeefb-d7e4-48b6-bea0-e9d967750770] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1104 12:07:48.480845   85759 system_pods.go:61] "kube-apiserver-embed-certs-325116" [69ad8209-af11-4479-b4f7-9991f98d74b9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1104 12:07:48.480859   85759 system_pods.go:61] "kube-controller-manager-embed-certs-325116" [1ba1fbaf-e1e2-4ca7-aef5-84c4410143c4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1104 12:07:48.480876   85759 system_pods.go:61] "kube-proxy-phzgx" [4ea64f2c-7568-486d-9941-f89ed4221f35] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1104 12:07:48.480893   85759 system_pods.go:61] "kube-scheduler-embed-certs-325116" [168359e4-eda1-4fb6-ab45-03e888466702] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1104 12:07:48.480907   85759 system_pods.go:61] "metrics-server-6867b74b74-knfd4" [5b3ef856-5b69-44b1-ae29-4a58bf235e41] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1104 12:07:48.480916   85759 system_pods.go:61] "storage-provisioner" [0dabcf5a-028b-4ab6-8af4-be25abaeb9b5] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1104 12:07:48.480928   85759 system_pods.go:74] duration metric: took 13.592864ms to wait for pod list to return data ...
	I1104 12:07:48.480947   85759 node_conditions.go:102] verifying NodePressure condition ...
	I1104 12:07:48.487234   85759 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1104 12:07:48.487271   85759 node_conditions.go:123] node cpu capacity is 2
	I1104 12:07:48.487284   85759 node_conditions.go:105] duration metric: took 6.331259ms to run NodePressure ...
	I1104 12:07:48.487313   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1104 12:07:48.756654   85759 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1104 12:07:48.764840   85759 kubeadm.go:739] kubelet initialised
	I1104 12:07:48.764863   85759 kubeadm.go:740] duration metric: took 8.175857ms waiting for restarted kubelet to initialise ...
	I1104 12:07:48.764871   85759 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1104 12:07:48.772653   85759 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-mf8xg" in "kube-system" namespace to be "Ready" ...
	I1104 12:07:48.784158   85759 pod_ready.go:98] node "embed-certs-325116" hosting pod "coredns-7c65d6cfc9-mf8xg" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-325116" has status "Ready":"False"
	I1104 12:07:48.784198   85759 pod_ready.go:82] duration metric: took 11.515605ms for pod "coredns-7c65d6cfc9-mf8xg" in "kube-system" namespace to be "Ready" ...
	E1104 12:07:48.784211   85759 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-325116" hosting pod "coredns-7c65d6cfc9-mf8xg" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-325116" has status "Ready":"False"
	I1104 12:07:48.784220   85759 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-325116" in "kube-system" namespace to be "Ready" ...
	I1104 12:07:48.791264   85759 pod_ready.go:98] node "embed-certs-325116" hosting pod "etcd-embed-certs-325116" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-325116" has status "Ready":"False"
	I1104 12:07:48.791297   85759 pod_ready.go:82] duration metric: took 7.066247ms for pod "etcd-embed-certs-325116" in "kube-system" namespace to be "Ready" ...
	E1104 12:07:48.791310   85759 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-325116" hosting pod "etcd-embed-certs-325116" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-325116" has status "Ready":"False"
	I1104 12:07:48.791326   85759 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-325116" in "kube-system" namespace to be "Ready" ...
	I1104 12:07:48.798259   85759 pod_ready.go:98] node "embed-certs-325116" hosting pod "kube-apiserver-embed-certs-325116" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-325116" has status "Ready":"False"
	I1104 12:07:48.798294   85759 pod_ready.go:82] duration metric: took 6.954559ms for pod "kube-apiserver-embed-certs-325116" in "kube-system" namespace to be "Ready" ...
	E1104 12:07:48.798304   85759 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-325116" hosting pod "kube-apiserver-embed-certs-325116" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-325116" has status "Ready":"False"
	I1104 12:07:48.798312   85759 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-325116" in "kube-system" namespace to be "Ready" ...
	I1104 12:07:48.872019   85759 pod_ready.go:98] node "embed-certs-325116" hosting pod "kube-controller-manager-embed-certs-325116" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-325116" has status "Ready":"False"
	I1104 12:07:48.872058   85759 pod_ready.go:82] duration metric: took 73.723761ms for pod "kube-controller-manager-embed-certs-325116" in "kube-system" namespace to be "Ready" ...
	E1104 12:07:48.872069   85759 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-325116" hosting pod "kube-controller-manager-embed-certs-325116" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-325116" has status "Ready":"False"
	I1104 12:07:48.872075   85759 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-phzgx" in "kube-system" namespace to be "Ready" ...
	I1104 12:07:49.271210   85759 pod_ready.go:98] node "embed-certs-325116" hosting pod "kube-proxy-phzgx" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-325116" has status "Ready":"False"
	I1104 12:07:49.271252   85759 pod_ready.go:82] duration metric: took 399.167509ms for pod "kube-proxy-phzgx" in "kube-system" namespace to be "Ready" ...
	E1104 12:07:49.271264   85759 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-325116" hosting pod "kube-proxy-phzgx" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-325116" has status "Ready":"False"
	I1104 12:07:49.271272   85759 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-325116" in "kube-system" namespace to be "Ready" ...
	I1104 12:07:49.671430   85759 pod_ready.go:98] node "embed-certs-325116" hosting pod "kube-scheduler-embed-certs-325116" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-325116" has status "Ready":"False"
	I1104 12:07:49.671453   85759 pod_ready.go:82] duration metric: took 400.174495ms for pod "kube-scheduler-embed-certs-325116" in "kube-system" namespace to be "Ready" ...
	E1104 12:07:49.671469   85759 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-325116" hosting pod "kube-scheduler-embed-certs-325116" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-325116" has status "Ready":"False"
	I1104 12:07:49.671475   85759 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace to be "Ready" ...
	I1104 12:07:50.070546   85759 pod_ready.go:98] node "embed-certs-325116" hosting pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-325116" has status "Ready":"False"
	I1104 12:07:50.070576   85759 pod_ready.go:82] duration metric: took 399.092108ms for pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace to be "Ready" ...
	E1104 12:07:50.070587   85759 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-325116" hosting pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-325116" has status "Ready":"False"
	I1104 12:07:50.070596   85759 pod_ready.go:39] duration metric: took 1.305717298s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1104 12:07:50.070615   85759 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1104 12:07:50.082815   85759 ops.go:34] apiserver oom_adj: -16
	I1104 12:07:50.082839   85759 kubeadm.go:597] duration metric: took 9.429385589s to restartPrimaryControlPlane
	I1104 12:07:50.082850   85759 kubeadm.go:394] duration metric: took 9.481667011s to StartCluster
	I1104 12:07:50.082871   85759 settings.go:142] acquiring lock: {Name:mk5550833c1f1a4ab4fbb2bf42ff8bd7f6341220 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 12:07:50.082952   85759 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19906-19898/kubeconfig
	I1104 12:07:50.086014   85759 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19906-19898/kubeconfig: {Name:mk164657c178e6abe086acb5c1cadb969cb2b0cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 12:07:50.086562   85759 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.47 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1104 12:07:50.086628   85759 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1104 12:07:50.086740   85759 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-325116"
	I1104 12:07:50.086763   85759 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-325116"
	I1104 12:07:50.086765   85759 config.go:182] Loaded profile config "embed-certs-325116": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	W1104 12:07:50.086776   85759 addons.go:243] addon storage-provisioner should already be in state true
	I1104 12:07:50.086774   85759 addons.go:69] Setting default-storageclass=true in profile "embed-certs-325116"
	I1104 12:07:50.086803   85759 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-325116"
	I1104 12:07:50.086787   85759 addons.go:69] Setting metrics-server=true in profile "embed-certs-325116"
	I1104 12:07:50.086812   85759 host.go:66] Checking if "embed-certs-325116" exists ...
	I1104 12:07:50.086825   85759 addons.go:234] Setting addon metrics-server=true in "embed-certs-325116"
	W1104 12:07:50.086837   85759 addons.go:243] addon metrics-server should already be in state true
	I1104 12:07:50.086866   85759 host.go:66] Checking if "embed-certs-325116" exists ...
	I1104 12:07:50.087120   85759 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:07:50.087148   85759 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:07:50.087160   85759 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:07:50.087178   85759 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:07:50.087247   85759 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:07:50.087286   85759 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:07:50.088320   85759 out.go:177] * Verifying Kubernetes components...
	I1104 12:07:50.089814   85759 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1104 12:07:50.102796   85759 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44903
	I1104 12:07:50.102976   85759 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36761
	I1104 12:07:50.103076   85759 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42697
	I1104 12:07:50.103462   85759 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:07:50.103491   85759 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:07:50.103566   85759 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:07:50.103990   85759 main.go:141] libmachine: Using API Version  1
	I1104 12:07:50.104014   85759 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:07:50.104085   85759 main.go:141] libmachine: Using API Version  1
	I1104 12:07:50.104101   85759 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:07:50.104199   85759 main.go:141] libmachine: Using API Version  1
	I1104 12:07:50.104223   85759 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:07:50.104368   85759 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:07:50.104402   85759 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:07:50.104545   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetState
	I1104 12:07:50.104559   85759 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:07:50.104949   85759 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:07:50.104987   85759 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:07:50.105081   85759 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:07:50.105116   85759 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:07:50.108134   85759 addons.go:234] Setting addon default-storageclass=true in "embed-certs-325116"
	W1104 12:07:50.108161   85759 addons.go:243] addon default-storageclass should already be in state true
	I1104 12:07:50.108193   85759 host.go:66] Checking if "embed-certs-325116" exists ...
	I1104 12:07:50.108597   85759 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:07:50.108648   85759 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:07:50.121556   85759 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39445
	I1104 12:07:50.122038   85759 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:07:50.122504   85759 main.go:141] libmachine: Using API Version  1
	I1104 12:07:50.122527   85759 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:07:50.122869   85759 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:07:50.123107   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetState
	I1104 12:07:50.125142   85759 main.go:141] libmachine: (embed-certs-325116) Calling .DriverName
	I1104 12:07:50.125294   85759 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44771
	I1104 12:07:50.125613   85759 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:07:50.125972   85759 main.go:141] libmachine: Using API Version  1
	I1104 12:07:50.125988   85759 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:07:50.126279   85759 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:07:50.126399   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetState
	I1104 12:07:50.127256   85759 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1104 12:07:50.127993   85759 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40487
	I1104 12:07:50.128235   85759 main.go:141] libmachine: (embed-certs-325116) Calling .DriverName
	I1104 12:07:50.128597   85759 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:07:50.128843   85759 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1104 12:07:50.128864   85759 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1104 12:07:50.128883   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHHostname
	I1104 12:07:50.129066   85759 main.go:141] libmachine: Using API Version  1
	I1104 12:07:50.129088   85759 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:07:50.129389   85759 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:07:50.129882   85759 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1104 12:07:47.619516   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:47.620045   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | unable to find current IP address of domain default-k8s-diff-port-036892 in network mk-default-k8s-diff-port-036892
	I1104 12:07:47.620072   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | I1104 12:07:47.620000   87142 retry.go:31] will retry after 3.554669963s: waiting for machine to come up
	I1104 12:07:50.130127   85759 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:07:50.130187   85759 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:07:50.131115   85759 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1104 12:07:50.131134   85759 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1104 12:07:50.131154   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHHostname
	I1104 12:07:50.131899   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:50.132352   85759 main.go:141] libmachine: (embed-certs-325116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:ab:49", ip: ""} in network mk-embed-certs-325116: {Iface:virbr1 ExpiryTime:2024-11-04 13:07:28 +0000 UTC Type:0 Mac:52:54:00:bd:ab:49 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:embed-certs-325116 Clientid:01:52:54:00:bd:ab:49}
	I1104 12:07:50.132375   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined IP address 192.168.39.47 and MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:50.132664   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHPort
	I1104 12:07:50.132830   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHKeyPath
	I1104 12:07:50.132986   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHUsername
	I1104 12:07:50.133099   85759 sshutil.go:53] new ssh client: &{IP:192.168.39.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/embed-certs-325116/id_rsa Username:docker}
	I1104 12:07:50.134698   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:50.135217   85759 main.go:141] libmachine: (embed-certs-325116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:ab:49", ip: ""} in network mk-embed-certs-325116: {Iface:virbr1 ExpiryTime:2024-11-04 13:07:28 +0000 UTC Type:0 Mac:52:54:00:bd:ab:49 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:embed-certs-325116 Clientid:01:52:54:00:bd:ab:49}
	I1104 12:07:50.135246   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined IP address 192.168.39.47 and MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:50.135454   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHPort
	I1104 12:07:50.135629   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHKeyPath
	I1104 12:07:50.135765   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHUsername
	I1104 12:07:50.135908   85759 sshutil.go:53] new ssh client: &{IP:192.168.39.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/embed-certs-325116/id_rsa Username:docker}
	I1104 12:07:50.146618   85759 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37765
	I1104 12:07:50.147639   85759 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:07:50.148281   85759 main.go:141] libmachine: Using API Version  1
	I1104 12:07:50.148307   85759 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:07:50.148617   85759 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:07:50.148860   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetState
	I1104 12:07:50.150751   85759 main.go:141] libmachine: (embed-certs-325116) Calling .DriverName
	I1104 12:07:50.151010   85759 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1104 12:07:50.151028   85759 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1104 12:07:50.151050   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHHostname
	I1104 12:07:50.153947   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:50.154385   85759 main.go:141] libmachine: (embed-certs-325116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:ab:49", ip: ""} in network mk-embed-certs-325116: {Iface:virbr1 ExpiryTime:2024-11-04 13:07:28 +0000 UTC Type:0 Mac:52:54:00:bd:ab:49 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:embed-certs-325116 Clientid:01:52:54:00:bd:ab:49}
	I1104 12:07:50.154418   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined IP address 192.168.39.47 and MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:50.154560   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHPort
	I1104 12:07:50.154749   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHKeyPath
	I1104 12:07:50.154886   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHUsername
	I1104 12:07:50.155028   85759 sshutil.go:53] new ssh client: &{IP:192.168.39.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/embed-certs-325116/id_rsa Username:docker}
	I1104 12:07:50.278380   85759 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1104 12:07:50.294682   85759 node_ready.go:35] waiting up to 6m0s for node "embed-certs-325116" to be "Ready" ...
	I1104 12:07:50.355769   85759 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1104 12:07:50.355790   85759 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1104 12:07:50.375818   85759 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1104 12:07:50.404741   85759 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1104 12:07:50.404766   85759 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1104 12:07:50.466718   85759 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1104 12:07:50.466748   85759 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1104 12:07:50.493662   85759 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1104 12:07:50.503255   85759 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1104 12:07:50.799735   85759 main.go:141] libmachine: Making call to close driver server
	I1104 12:07:50.799772   85759 main.go:141] libmachine: (embed-certs-325116) Calling .Close
	I1104 12:07:50.800039   85759 main.go:141] libmachine: Successfully made call to close driver server
	I1104 12:07:50.800086   85759 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 12:07:50.800094   85759 main.go:141] libmachine: (embed-certs-325116) DBG | Closing plugin on server side
	I1104 12:07:50.800107   85759 main.go:141] libmachine: Making call to close driver server
	I1104 12:07:50.800159   85759 main.go:141] libmachine: (embed-certs-325116) Calling .Close
	I1104 12:07:50.800382   85759 main.go:141] libmachine: Successfully made call to close driver server
	I1104 12:07:50.800394   85759 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 12:07:50.810559   85759 main.go:141] libmachine: Making call to close driver server
	I1104 12:07:50.810586   85759 main.go:141] libmachine: (embed-certs-325116) Calling .Close
	I1104 12:07:50.810857   85759 main.go:141] libmachine: Successfully made call to close driver server
	I1104 12:07:50.810876   85759 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 12:07:50.810893   85759 main.go:141] libmachine: (embed-certs-325116) DBG | Closing plugin on server side
	I1104 12:07:51.484326   85759 main.go:141] libmachine: Making call to close driver server
	I1104 12:07:51.484356   85759 main.go:141] libmachine: (embed-certs-325116) Calling .Close
	I1104 12:07:51.484671   85759 main.go:141] libmachine: Successfully made call to close driver server
	I1104 12:07:51.484687   85759 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 12:07:51.484695   85759 main.go:141] libmachine: Making call to close driver server
	I1104 12:07:51.484702   85759 main.go:141] libmachine: (embed-certs-325116) Calling .Close
	I1104 12:07:51.484899   85759 main.go:141] libmachine: Successfully made call to close driver server
	I1104 12:07:51.484938   85759 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 12:07:51.484950   85759 addons.go:475] Verifying addon metrics-server=true in "embed-certs-325116"
	I1104 12:07:51.549507   85759 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.046214827s)
	I1104 12:07:51.549559   85759 main.go:141] libmachine: Making call to close driver server
	I1104 12:07:51.549569   85759 main.go:141] libmachine: (embed-certs-325116) Calling .Close
	I1104 12:07:51.549886   85759 main.go:141] libmachine: Successfully made call to close driver server
	I1104 12:07:51.549906   85759 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 12:07:51.549909   85759 main.go:141] libmachine: (embed-certs-325116) DBG | Closing plugin on server side
	I1104 12:07:51.549916   85759 main.go:141] libmachine: Making call to close driver server
	I1104 12:07:51.549923   85759 main.go:141] libmachine: (embed-certs-325116) Calling .Close
	I1104 12:07:51.550143   85759 main.go:141] libmachine: Successfully made call to close driver server
	I1104 12:07:51.550164   85759 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 12:07:51.552039   85759 out.go:177] * Enabled addons: default-storageclass, metrics-server, storage-provisioner
	I1104 12:07:52.573915   86402 start.go:364] duration metric: took 3m30.781955626s to acquireMachinesLock for "old-k8s-version-589257"
	I1104 12:07:52.573984   86402 start.go:96] Skipping create...Using existing machine configuration
	I1104 12:07:52.573996   86402 fix.go:54] fixHost starting: 
	I1104 12:07:52.574443   86402 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:07:52.574500   86402 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:07:52.594310   86402 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33975
	I1104 12:07:52.594822   86402 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:07:52.595317   86402 main.go:141] libmachine: Using API Version  1
	I1104 12:07:52.595347   86402 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:07:52.595727   86402 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:07:52.595924   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .DriverName
	I1104 12:07:52.596093   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetState
	I1104 12:07:52.597578   86402 fix.go:112] recreateIfNeeded on old-k8s-version-589257: state=Stopped err=<nil>
	I1104 12:07:52.597615   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .DriverName
	W1104 12:07:52.597752   86402 fix.go:138] unexpected machine state, will restart: <nil>
	I1104 12:07:52.599659   86402 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-589257" ...
	I1104 12:07:51.176791   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:51.177282   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Found IP for machine: 192.168.72.130
	I1104 12:07:51.177313   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has current primary IP address 192.168.72.130 and MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:51.177325   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Reserving static IP address...
	I1104 12:07:51.177817   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-036892", mac: "52:54:00:da:02:d6", ip: "192.168.72.130"} in network mk-default-k8s-diff-port-036892: {Iface:virbr4 ExpiryTime:2024-11-04 13:07:45 +0000 UTC Type:0 Mac:52:54:00:da:02:d6 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:default-k8s-diff-port-036892 Clientid:01:52:54:00:da:02:d6}
	I1104 12:07:51.177863   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | skip adding static IP to network mk-default-k8s-diff-port-036892 - found existing host DHCP lease matching {name: "default-k8s-diff-port-036892", mac: "52:54:00:da:02:d6", ip: "192.168.72.130"}
	I1104 12:07:51.177876   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Reserved static IP address: 192.168.72.130
	I1104 12:07:51.177890   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Waiting for SSH to be available...
	I1104 12:07:51.177897   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | Getting to WaitForSSH function...
	I1104 12:07:51.180038   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:51.180440   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:02:d6", ip: ""} in network mk-default-k8s-diff-port-036892: {Iface:virbr4 ExpiryTime:2024-11-04 13:07:45 +0000 UTC Type:0 Mac:52:54:00:da:02:d6 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:default-k8s-diff-port-036892 Clientid:01:52:54:00:da:02:d6}
	I1104 12:07:51.180466   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined IP address 192.168.72.130 and MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:51.180581   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | Using SSH client type: external
	I1104 12:07:51.180611   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | Using SSH private key: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/default-k8s-diff-port-036892/id_rsa (-rw-------)
	I1104 12:07:51.180747   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.130 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19906-19898/.minikube/machines/default-k8s-diff-port-036892/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1104 12:07:51.180777   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | About to run SSH command:
	I1104 12:07:51.180795   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | exit 0
	I1104 12:07:51.309075   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | SSH cmd err, output: <nil>: 
	I1104 12:07:51.309445   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetConfigRaw
	I1104 12:07:51.310162   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetIP
	I1104 12:07:51.312651   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:51.313061   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:02:d6", ip: ""} in network mk-default-k8s-diff-port-036892: {Iface:virbr4 ExpiryTime:2024-11-04 13:07:45 +0000 UTC Type:0 Mac:52:54:00:da:02:d6 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:default-k8s-diff-port-036892 Clientid:01:52:54:00:da:02:d6}
	I1104 12:07:51.313090   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined IP address 192.168.72.130 and MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:51.313460   86301 profile.go:143] Saving config to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/default-k8s-diff-port-036892/config.json ...
	I1104 12:07:51.313720   86301 machine.go:93] provisionDockerMachine start ...
	I1104 12:07:51.313747   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .DriverName
	I1104 12:07:51.313926   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHHostname
	I1104 12:07:51.316269   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:51.316782   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:02:d6", ip: ""} in network mk-default-k8s-diff-port-036892: {Iface:virbr4 ExpiryTime:2024-11-04 13:07:45 +0000 UTC Type:0 Mac:52:54:00:da:02:d6 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:default-k8s-diff-port-036892 Clientid:01:52:54:00:da:02:d6}
	I1104 12:07:51.316829   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined IP address 192.168.72.130 and MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:51.316937   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHPort
	I1104 12:07:51.317162   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHKeyPath
	I1104 12:07:51.317335   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHKeyPath
	I1104 12:07:51.317598   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHUsername
	I1104 12:07:51.317777   86301 main.go:141] libmachine: Using SSH client type: native
	I1104 12:07:51.317981   86301 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.130 22 <nil> <nil>}
	I1104 12:07:51.317994   86301 main.go:141] libmachine: About to run SSH command:
	hostname
	I1104 12:07:51.441588   86301 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1104 12:07:51.441626   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetMachineName
	I1104 12:07:51.441876   86301 buildroot.go:166] provisioning hostname "default-k8s-diff-port-036892"
	I1104 12:07:51.441902   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetMachineName
	I1104 12:07:51.442097   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHHostname
	I1104 12:07:51.445155   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:51.445637   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:02:d6", ip: ""} in network mk-default-k8s-diff-port-036892: {Iface:virbr4 ExpiryTime:2024-11-04 13:07:45 +0000 UTC Type:0 Mac:52:54:00:da:02:d6 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:default-k8s-diff-port-036892 Clientid:01:52:54:00:da:02:d6}
	I1104 12:07:51.445670   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined IP address 192.168.72.130 and MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:51.445820   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHPort
	I1104 12:07:51.446013   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHKeyPath
	I1104 12:07:51.446186   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHKeyPath
	I1104 12:07:51.446352   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHUsername
	I1104 12:07:51.446539   86301 main.go:141] libmachine: Using SSH client type: native
	I1104 12:07:51.446753   86301 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.130 22 <nil> <nil>}
	I1104 12:07:51.446773   86301 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-036892 && echo "default-k8s-diff-port-036892" | sudo tee /etc/hostname
	I1104 12:07:51.578973   86301 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-036892
	
	I1104 12:07:51.579004   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHHostname
	I1104 12:07:51.581759   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:51.582105   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:02:d6", ip: ""} in network mk-default-k8s-diff-port-036892: {Iface:virbr4 ExpiryTime:2024-11-04 13:07:45 +0000 UTC Type:0 Mac:52:54:00:da:02:d6 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:default-k8s-diff-port-036892 Clientid:01:52:54:00:da:02:d6}
	I1104 12:07:51.582135   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined IP address 192.168.72.130 and MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:51.582299   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHPort
	I1104 12:07:51.582455   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHKeyPath
	I1104 12:07:51.582582   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHKeyPath
	I1104 12:07:51.582712   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHUsername
	I1104 12:07:51.582834   86301 main.go:141] libmachine: Using SSH client type: native
	I1104 12:07:51.583006   86301 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.130 22 <nil> <nil>}
	I1104 12:07:51.583022   86301 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-036892' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-036892/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-036892' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1104 12:07:51.702410   86301 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1104 12:07:51.702441   86301 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19906-19898/.minikube CaCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19906-19898/.minikube}
	I1104 12:07:51.702471   86301 buildroot.go:174] setting up certificates
	I1104 12:07:51.702483   86301 provision.go:84] configureAuth start
	I1104 12:07:51.702492   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetMachineName
	I1104 12:07:51.702789   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetIP
	I1104 12:07:51.705067   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:51.705419   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:02:d6", ip: ""} in network mk-default-k8s-diff-port-036892: {Iface:virbr4 ExpiryTime:2024-11-04 13:07:45 +0000 UTC Type:0 Mac:52:54:00:da:02:d6 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:default-k8s-diff-port-036892 Clientid:01:52:54:00:da:02:d6}
	I1104 12:07:51.705449   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined IP address 192.168.72.130 and MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:51.705567   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHHostname
	I1104 12:07:51.707341   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:51.707627   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:02:d6", ip: ""} in network mk-default-k8s-diff-port-036892: {Iface:virbr4 ExpiryTime:2024-11-04 13:07:45 +0000 UTC Type:0 Mac:52:54:00:da:02:d6 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:default-k8s-diff-port-036892 Clientid:01:52:54:00:da:02:d6}
	I1104 12:07:51.707658   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined IP address 192.168.72.130 and MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:51.707748   86301 provision.go:143] copyHostCerts
	I1104 12:07:51.707805   86301 exec_runner.go:144] found /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem, removing ...
	I1104 12:07:51.707818   86301 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem
	I1104 12:07:51.707870   86301 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem (1078 bytes)
	I1104 12:07:51.707969   86301 exec_runner.go:144] found /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem, removing ...
	I1104 12:07:51.707978   86301 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem
	I1104 12:07:51.707999   86301 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem (1123 bytes)
	I1104 12:07:51.708061   86301 exec_runner.go:144] found /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem, removing ...
	I1104 12:07:51.708067   86301 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem
	I1104 12:07:51.708085   86301 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem (1679 bytes)
	I1104 12:07:51.708132   86301 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-036892 san=[127.0.0.1 192.168.72.130 default-k8s-diff-port-036892 localhost minikube]
	I1104 12:07:51.935898   86301 provision.go:177] copyRemoteCerts
	I1104 12:07:51.935973   86301 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1104 12:07:51.936008   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHHostname
	I1104 12:07:51.938722   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:51.939100   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:02:d6", ip: ""} in network mk-default-k8s-diff-port-036892: {Iface:virbr4 ExpiryTime:2024-11-04 13:07:45 +0000 UTC Type:0 Mac:52:54:00:da:02:d6 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:default-k8s-diff-port-036892 Clientid:01:52:54:00:da:02:d6}
	I1104 12:07:51.939134   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined IP address 192.168.72.130 and MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:51.939266   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHPort
	I1104 12:07:51.939462   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHKeyPath
	I1104 12:07:51.939609   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHUsername
	I1104 12:07:51.939786   86301 sshutil.go:53] new ssh client: &{IP:192.168.72.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/default-k8s-diff-port-036892/id_rsa Username:docker}
	I1104 12:07:52.027147   86301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1104 12:07:52.054828   86301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1104 12:07:52.078755   86301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1104 12:07:52.101312   86301 provision.go:87] duration metric: took 398.817409ms to configureAuth
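	A note on the certificate step just completed: the server certificate generated a few lines above carries the SANs listed in the san=[...] field and was copied to /etc/docker/server.pem by the scp lines before this point. A minimal sketch for spot-checking those SANs from inside the guest, assuming the Buildroot image ships an openssl binary (this is not a step the test itself runs):
	  sudo openssl x509 -in /etc/docker/server.pem -noout -subject -enddate
	  sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'
	  # expected SANs for this profile: 127.0.0.1, 192.168.72.130, default-k8s-diff-port-036892, localhost, minikube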
	I1104 12:07:52.101338   86301 buildroot.go:189] setting minikube options for container-runtime
	I1104 12:07:52.101523   86301 config.go:182] Loaded profile config "default-k8s-diff-port-036892": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1104 12:07:52.101608   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHHostname
	I1104 12:07:52.104187   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:52.104549   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:02:d6", ip: ""} in network mk-default-k8s-diff-port-036892: {Iface:virbr4 ExpiryTime:2024-11-04 13:07:45 +0000 UTC Type:0 Mac:52:54:00:da:02:d6 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:default-k8s-diff-port-036892 Clientid:01:52:54:00:da:02:d6}
	I1104 12:07:52.104581   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined IP address 192.168.72.130 and MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:52.104700   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHPort
	I1104 12:07:52.104890   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHKeyPath
	I1104 12:07:52.105028   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHKeyPath
	I1104 12:07:52.105157   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHUsername
	I1104 12:07:52.105319   86301 main.go:141] libmachine: Using SSH client type: native
	I1104 12:07:52.105490   86301 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.130 22 <nil> <nil>}
	I1104 12:07:52.105514   86301 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1104 12:07:52.331840   86301 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1104 12:07:52.331865   86301 machine.go:96] duration metric: took 1.018128337s to provisionDockerMachine
	I1104 12:07:52.331875   86301 start.go:293] postStartSetup for "default-k8s-diff-port-036892" (driver="kvm2")
	I1104 12:07:52.331884   86301 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1104 12:07:52.331898   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .DriverName
	I1104 12:07:52.332229   86301 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1104 12:07:52.332261   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHHostname
	I1104 12:07:52.334710   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:52.335005   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:02:d6", ip: ""} in network mk-default-k8s-diff-port-036892: {Iface:virbr4 ExpiryTime:2024-11-04 13:07:45 +0000 UTC Type:0 Mac:52:54:00:da:02:d6 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:default-k8s-diff-port-036892 Clientid:01:52:54:00:da:02:d6}
	I1104 12:07:52.335036   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined IP address 192.168.72.130 and MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:52.335176   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHPort
	I1104 12:07:52.335342   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHKeyPath
	I1104 12:07:52.335447   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHUsername
	I1104 12:07:52.335547   86301 sshutil.go:53] new ssh client: &{IP:192.168.72.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/default-k8s-diff-port-036892/id_rsa Username:docker}
	I1104 12:07:52.419392   86301 ssh_runner.go:195] Run: cat /etc/os-release
	I1104 12:07:52.423306   86301 info.go:137] Remote host: Buildroot 2023.02.9
	I1104 12:07:52.423335   86301 filesync.go:126] Scanning /home/jenkins/minikube-integration/19906-19898/.minikube/addons for local assets ...
	I1104 12:07:52.423396   86301 filesync.go:126] Scanning /home/jenkins/minikube-integration/19906-19898/.minikube/files for local assets ...
	I1104 12:07:52.423483   86301 filesync.go:149] local asset: /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem -> 272182.pem in /etc/ssl/certs
	I1104 12:07:52.423575   86301 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1104 12:07:52.432625   86301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem --> /etc/ssl/certs/272182.pem (1708 bytes)
	I1104 12:07:52.456616   86301 start.go:296] duration metric: took 124.726284ms for postStartSetup
	I1104 12:07:52.456664   86301 fix.go:56] duration metric: took 17.406645021s for fixHost
	I1104 12:07:52.456689   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHHostname
	I1104 12:07:52.459189   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:52.459540   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:02:d6", ip: ""} in network mk-default-k8s-diff-port-036892: {Iface:virbr4 ExpiryTime:2024-11-04 13:07:45 +0000 UTC Type:0 Mac:52:54:00:da:02:d6 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:default-k8s-diff-port-036892 Clientid:01:52:54:00:da:02:d6}
	I1104 12:07:52.459573   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined IP address 192.168.72.130 and MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:52.459777   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHPort
	I1104 12:07:52.459967   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHKeyPath
	I1104 12:07:52.460093   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHKeyPath
	I1104 12:07:52.460218   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHUsername
	I1104 12:07:52.460349   86301 main.go:141] libmachine: Using SSH client type: native
	I1104 12:07:52.460521   86301 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.130 22 <nil> <nil>}
	I1104 12:07:52.460533   86301 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1104 12:07:52.573760   86301 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730722072.546172571
	
	I1104 12:07:52.573781   86301 fix.go:216] guest clock: 1730722072.546172571
	I1104 12:07:52.573787   86301 fix.go:229] Guest: 2024-11-04 12:07:52.546172571 +0000 UTC Remote: 2024-11-04 12:07:52.45666981 +0000 UTC m=+212.207079326 (delta=89.502761ms)
	I1104 12:07:52.573827   86301 fix.go:200] guest clock delta is within tolerance: 89.502761ms
	I1104 12:07:52.573832   86301 start.go:83] releasing machines lock for "default-k8s-diff-port-036892", held for 17.523849814s
	I1104 12:07:52.573856   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .DriverName
	I1104 12:07:52.574109   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetIP
	I1104 12:07:52.576773   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:52.577125   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:02:d6", ip: ""} in network mk-default-k8s-diff-port-036892: {Iface:virbr4 ExpiryTime:2024-11-04 13:07:45 +0000 UTC Type:0 Mac:52:54:00:da:02:d6 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:default-k8s-diff-port-036892 Clientid:01:52:54:00:da:02:d6}
	I1104 12:07:52.577151   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined IP address 192.168.72.130 and MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:52.577304   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .DriverName
	I1104 12:07:52.577776   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .DriverName
	I1104 12:07:52.577950   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .DriverName
	I1104 12:07:52.578043   86301 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1104 12:07:52.578079   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHHostname
	I1104 12:07:52.578133   86301 ssh_runner.go:195] Run: cat /version.json
	I1104 12:07:52.578159   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHHostname
	I1104 12:07:52.580773   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:52.580909   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:52.581128   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:02:d6", ip: ""} in network mk-default-k8s-diff-port-036892: {Iface:virbr4 ExpiryTime:2024-11-04 13:07:45 +0000 UTC Type:0 Mac:52:54:00:da:02:d6 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:default-k8s-diff-port-036892 Clientid:01:52:54:00:da:02:d6}
	I1104 12:07:52.581154   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined IP address 192.168.72.130 and MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:52.581179   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:02:d6", ip: ""} in network mk-default-k8s-diff-port-036892: {Iface:virbr4 ExpiryTime:2024-11-04 13:07:45 +0000 UTC Type:0 Mac:52:54:00:da:02:d6 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:default-k8s-diff-port-036892 Clientid:01:52:54:00:da:02:d6}
	I1104 12:07:52.581196   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined IP address 192.168.72.130 and MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:52.581286   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHPort
	I1104 12:07:52.581488   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHPort
	I1104 12:07:52.581529   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHKeyPath
	I1104 12:07:52.581660   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHUsername
	I1104 12:07:52.581677   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHKeyPath
	I1104 12:07:52.581770   86301 sshutil.go:53] new ssh client: &{IP:192.168.72.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/default-k8s-diff-port-036892/id_rsa Username:docker}
	I1104 12:07:52.581823   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHUsername
	I1104 12:07:52.581946   86301 sshutil.go:53] new ssh client: &{IP:192.168.72.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/default-k8s-diff-port-036892/id_rsa Username:docker}
	I1104 12:07:52.683801   86301 ssh_runner.go:195] Run: systemctl --version
	I1104 12:07:52.689498   86301 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1104 12:07:52.830236   86301 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1104 12:07:52.835868   86301 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1104 12:07:52.835951   86301 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1104 12:07:52.851557   86301 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1104 12:07:52.851586   86301 start.go:495] detecting cgroup driver to use...
	I1104 12:07:52.851656   86301 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1104 12:07:52.868648   86301 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1104 12:07:52.883434   86301 docker.go:217] disabling cri-docker service (if available) ...
	I1104 12:07:52.883507   86301 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1104 12:07:52.898233   86301 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1104 12:07:52.912615   86301 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1104 12:07:53.036342   86301 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1104 12:07:53.183326   86301 docker.go:233] disabling docker service ...
	I1104 12:07:53.183407   86301 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1104 12:07:53.197465   86301 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1104 12:07:53.210118   86301 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1104 12:07:53.354857   86301 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1104 12:07:53.490760   86301 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1104 12:07:53.506829   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1104 12:07:53.526401   86301 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1104 12:07:53.526464   86301 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:07:53.537264   86301 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1104 12:07:53.537339   86301 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:07:53.547882   86301 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:07:53.558039   86301 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:07:53.569347   86301 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1104 12:07:53.579931   86301 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:07:53.589594   86301 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:07:53.606753   86301 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:07:53.623316   86301 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1104 12:07:53.638183   86301 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1104 12:07:53.638245   86301 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1104 12:07:53.656452   86301 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1104 12:07:53.666343   86301 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1104 12:07:53.784882   86301 ssh_runner.go:195] Run: sudo systemctl restart crio
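	Condensed, the CRI-O preparation in the lines above is a handful of idempotent edits plus a restart. The sketch below restates the substance of the commands already shown in the log (same file paths and values) for anyone replaying them by hand on a similar guest; it is a summary, not an alternative implementation:
	  # point crictl at the CRI-O socket
	  printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
	  # pause image and cgroup handling used by this run
	  sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf
	  sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	  sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
	  sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
	  # (the log also injects net.ipv4.ip_unprivileged_port_start=0 into default_sysctls)
	  # kernel prerequisites (br_netfilter was missing above, hence the modprobe), then restart
	  sudo modprobe br_netfilter
	  sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	  sudo systemctl daemon-reload && sudo systemctl restart crio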
	I1104 12:07:53.879727   86301 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1104 12:07:53.879790   86301 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1104 12:07:53.884438   86301 start.go:563] Will wait 60s for crictl version
	I1104 12:07:53.884494   86301 ssh_runner.go:195] Run: which crictl
	I1104 12:07:53.887785   86301 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1104 12:07:53.926395   86301 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1104 12:07:53.926496   86301 ssh_runner.go:195] Run: crio --version
	I1104 12:07:53.963049   86301 ssh_runner.go:195] Run: crio --version
	I1104 12:07:53.996513   86301 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1104 12:07:53.997774   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetIP
	I1104 12:07:54.000829   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:54.001214   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:02:d6", ip: ""} in network mk-default-k8s-diff-port-036892: {Iface:virbr4 ExpiryTime:2024-11-04 13:07:45 +0000 UTC Type:0 Mac:52:54:00:da:02:d6 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:default-k8s-diff-port-036892 Clientid:01:52:54:00:da:02:d6}
	I1104 12:07:54.001300   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined IP address 192.168.72.130 and MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:54.001469   86301 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1104 12:07:54.005521   86301 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1104 12:07:54.021723   86301 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-036892 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-036892 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.130 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1104 12:07:54.021915   86301 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1104 12:07:54.021979   86301 ssh_runner.go:195] Run: sudo crictl images --output json
	I1104 12:07:54.072114   86301 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1104 12:07:54.072178   86301 ssh_runner.go:195] Run: which lz4
	I1104 12:07:54.077106   86301 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1104 12:07:54.081979   86301 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1104 12:07:54.082018   86301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1104 12:07:51.553141   85759 addons.go:510] duration metric: took 1.466523338s for enable addons: enabled=[default-storageclass metrics-server storage-provisioner]
	I1104 12:07:52.298494   85759 node_ready.go:53] node "embed-certs-325116" has status "Ready":"False"
	I1104 12:07:54.299895   85759 node_ready.go:53] node "embed-certs-325116" has status "Ready":"False"
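	node_ready.go is polling the embed-certs node's Ready condition while the addon installs finish; a hypothetical manual equivalent of that check with kubectl (context and node names taken from this log, not a command the test issues):
	  kubectl --context embed-certs-325116 get node embed-certs-325116 \
	    -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
	  # prints "False" until the CNI and kubelet settle, matching the node_ready messages above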
	I1104 12:07:52.600997   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .Start
	I1104 12:07:52.601180   86402 main.go:141] libmachine: (old-k8s-version-589257) Ensuring networks are active...
	I1104 12:07:52.602131   86402 main.go:141] libmachine: (old-k8s-version-589257) Ensuring network default is active
	I1104 12:07:52.602560   86402 main.go:141] libmachine: (old-k8s-version-589257) Ensuring network mk-old-k8s-version-589257 is active
	I1104 12:07:52.603030   86402 main.go:141] libmachine: (old-k8s-version-589257) Getting domain xml...
	I1104 12:07:52.603859   86402 main.go:141] libmachine: (old-k8s-version-589257) Creating domain...
	I1104 12:07:53.855214   86402 main.go:141] libmachine: (old-k8s-version-589257) Waiting to get IP...
	I1104 12:07:53.856063   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:07:53.856539   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | unable to find current IP address of domain old-k8s-version-589257 in network mk-old-k8s-version-589257
	I1104 12:07:53.856594   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | I1104 12:07:53.856513   87367 retry.go:31] will retry after 268.725451ms: waiting for machine to come up
	I1104 12:07:54.127094   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:07:54.127584   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | unable to find current IP address of domain old-k8s-version-589257 in network mk-old-k8s-version-589257
	I1104 12:07:54.127612   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | I1104 12:07:54.127560   87367 retry.go:31] will retry after 239.665225ms: waiting for machine to come up
	I1104 12:07:54.369139   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:07:54.369777   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | unable to find current IP address of domain old-k8s-version-589257 in network mk-old-k8s-version-589257
	I1104 12:07:54.369798   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | I1104 12:07:54.369710   87367 retry.go:31] will retry after 386.228261ms: waiting for machine to come up
	I1104 12:07:54.757191   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:07:54.757637   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | unable to find current IP address of domain old-k8s-version-589257 in network mk-old-k8s-version-589257
	I1104 12:07:54.757665   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | I1104 12:07:54.757591   87367 retry.go:31] will retry after 571.244573ms: waiting for machine to come up
	I1104 12:07:55.330439   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:07:55.331187   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | unable to find current IP address of domain old-k8s-version-589257 in network mk-old-k8s-version-589257
	I1104 12:07:55.331216   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | I1104 12:07:55.331144   87367 retry.go:31] will retry after 539.328185ms: waiting for machine to come up
	I1104 12:07:55.871869   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:07:55.872373   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | unable to find current IP address of domain old-k8s-version-589257 in network mk-old-k8s-version-589257
	I1104 12:07:55.872403   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | I1104 12:07:55.872335   87367 retry.go:31] will retry after 879.285089ms: waiting for machine to come up
	I1104 12:07:55.376802   86301 crio.go:462] duration metric: took 1.299729399s to copy over tarball
	I1104 12:07:55.376881   86301 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1104 12:07:57.716230   86301 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.339307666s)
	I1104 12:07:57.716268   86301 crio.go:469] duration metric: took 2.339436958s to extract the tarball
	I1104 12:07:57.716277   86301 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1104 12:07:57.753216   86301 ssh_runner.go:195] Run: sudo crictl images --output json
	I1104 12:07:57.799042   86301 crio.go:514] all images are preloaded for cri-o runtime.
	I1104 12:07:57.799145   86301 cache_images.go:84] Images are preloaded, skipping loading
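	The preload handling above reduces to: inspect the image store, ship the tarball, unpack it under /var, delete the archive, and re-check. A condensed sketch of the guest-side steps as recorded in the log (the host-to-guest copy itself goes through minikube's SSH runner and appears here only as a placeholder comment):
	  sudo crictl images --output json      # first check: kube-apiserver v1.31.2 absent, so the preload is needed
	  # <minikube copies preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 to /preloaded.tar.lz4 over SSH>
	  sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	  sudo rm -f /preloaded.tar.lz4
	  sudo crictl images --output json      # second check: all images now preloaded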
	I1104 12:07:57.799161   86301 kubeadm.go:934] updating node { 192.168.72.130 8444 v1.31.2 crio true true} ...
	I1104 12:07:57.799273   86301 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-036892 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.130
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-036892 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1104 12:07:57.799347   86301 ssh_runner.go:195] Run: crio config
	I1104 12:07:57.851871   86301 cni.go:84] Creating CNI manager for ""
	I1104 12:07:57.851892   86301 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1104 12:07:57.851900   86301 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1104 12:07:57.851919   86301 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.130 APIServerPort:8444 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-036892 NodeName:default-k8s-diff-port-036892 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.130"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.130 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1104 12:07:57.852056   86301 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.130
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-036892"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.130"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.130"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1104 12:07:57.852116   86301 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1104 12:07:57.862269   86301 binaries.go:44] Found k8s binaries, skipping transfer
	I1104 12:07:57.862343   86301 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1104 12:07:57.872253   86301 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1104 12:07:57.889328   86301 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1104 12:07:57.908250   86301 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2308 bytes)
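	With the rendered kubeadm config now written to /var/tmp/minikube/kubeadm.yaml.new (the scp line above), one hypothetical way to sanity-check such a file by hand is the bundled kubeadm binary itself; this assumes the `kubeadm config validate` subcommand available in recent releases and is not a step the test performs:
	  sudo /var/lib/minikube/binaries/v1.31.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new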
	I1104 12:07:57.926081   86301 ssh_runner.go:195] Run: grep 192.168.72.130	control-plane.minikube.internal$ /etc/hosts
	I1104 12:07:57.929870   86301 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.130	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1104 12:07:57.943872   86301 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1104 12:07:58.070141   86301 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1104 12:07:58.089370   86301 certs.go:68] Setting up /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/default-k8s-diff-port-036892 for IP: 192.168.72.130
	I1104 12:07:58.089397   86301 certs.go:194] generating shared ca certs ...
	I1104 12:07:58.089423   86301 certs.go:226] acquiring lock for ca certs: {Name:mk4fb0469da6697654d0290fd25edddd463f47c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 12:07:58.089596   86301 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.key
	I1104 12:07:58.089647   86301 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.key
	I1104 12:07:58.089659   86301 certs.go:256] generating profile certs ...
	I1104 12:07:58.089765   86301 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/default-k8s-diff-port-036892/client.key
	I1104 12:07:58.089831   86301 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/default-k8s-diff-port-036892/apiserver.key.713851b2
	I1104 12:07:58.089889   86301 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/default-k8s-diff-port-036892/proxy-client.key
	I1104 12:07:58.090054   86301 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218.pem (1338 bytes)
	W1104 12:07:58.090100   86301 certs.go:480] ignoring /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218_empty.pem, impossibly tiny 0 bytes
	I1104 12:07:58.090116   86301 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem (1675 bytes)
	I1104 12:07:58.090149   86301 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem (1078 bytes)
	I1104 12:07:58.090184   86301 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem (1123 bytes)
	I1104 12:07:58.090219   86301 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem (1679 bytes)
	I1104 12:07:58.090279   86301 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem (1708 bytes)
	I1104 12:07:58.090977   86301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1104 12:07:58.125282   86301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1104 12:07:58.168289   86301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1104 12:07:58.210967   86301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1104 12:07:58.253986   86301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/default-k8s-diff-port-036892/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1104 12:07:58.280769   86301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/default-k8s-diff-port-036892/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1104 12:07:58.308406   86301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/default-k8s-diff-port-036892/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1104 12:07:58.334250   86301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/default-k8s-diff-port-036892/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1104 12:07:58.363224   86301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem --> /usr/share/ca-certificates/272182.pem (1708 bytes)
	I1104 12:07:58.391795   86301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1104 12:07:58.420782   86301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218.pem --> /usr/share/ca-certificates/27218.pem (1338 bytes)
	I1104 12:07:58.446611   86301 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1104 12:07:58.465895   86301 ssh_runner.go:195] Run: openssl version
	I1104 12:07:58.471614   86301 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/27218.pem && ln -fs /usr/share/ca-certificates/27218.pem /etc/ssl/certs/27218.pem"
	I1104 12:07:58.482139   86301 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/27218.pem
	I1104 12:07:58.486533   86301 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  4 10:49 /usr/share/ca-certificates/27218.pem
	I1104 12:07:58.486591   86301 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/27218.pem
	I1104 12:07:58.492217   86301 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/27218.pem /etc/ssl/certs/51391683.0"
	I1104 12:07:58.502724   86301 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/272182.pem && ln -fs /usr/share/ca-certificates/272182.pem /etc/ssl/certs/272182.pem"
	I1104 12:07:58.514146   86301 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/272182.pem
	I1104 12:07:58.518243   86301 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  4 10:49 /usr/share/ca-certificates/272182.pem
	I1104 12:07:58.518303   86301 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/272182.pem
	I1104 12:07:58.523579   86301 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/272182.pem /etc/ssl/certs/3ec20f2e.0"
	I1104 12:07:58.533993   86301 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1104 12:07:58.544137   86301 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1104 12:07:58.548190   86301 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  4 10:38 /usr/share/ca-certificates/minikubeCA.pem
	I1104 12:07:58.548250   86301 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1104 12:07:58.553714   86301 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
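
Each CA bundle copied into /usr/share/ca-certificates above is then made visible to OpenSSL-linked clients: the certificate's subject hash is computed (openssl x509 -hash -noout) and a symlink named <hash>.0 is created under /etc/ssl/certs, which is where the 51391683.0, 3ec20f2e.0 and b5213941.0 names in the log come from. A sketch of the same linking step, shelling out to openssl from Go (paths mirror the log; the helper function is illustrative and would need root to write into /etc/ssl/certs):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCert computes the OpenSSL subject hash of certPath and creates
// /etc/ssl/certs/<hash>.0 pointing at it.
func linkCert(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// Remove a stale link first so the "ln -fs" semantics are preserved.
	_ = os.Remove(link)
	return os.Symlink(certPath, link)
}

func main() {
	for _, c := range []string{
		"/usr/share/ca-certificates/27218.pem",
		"/usr/share/ca-certificates/272182.pem",
		"/usr/share/ca-certificates/minikubeCA.pem",
	} {
		if err := linkCert(c); err != nil {
			fmt.Fprintln(os.Stderr, c, err)
		}
	}
}
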
	I1104 12:07:58.564221   86301 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1104 12:07:58.568445   86301 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1104 12:07:58.574072   86301 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1104 12:07:58.579551   86301 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1104 12:07:58.584909   86301 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1104 12:07:58.590102   86301 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1104 12:07:58.595227   86301 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
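
The openssl x509 -checkend 86400 runs above exit non-zero if a certificate becomes invalid within the next 86400 seconds, i.e. 24 hours; that is how the restart path decides whether the existing control-plane certificates can be reused. The same test can be expressed with Go's crypto/x509, shown here as a sketch (the certificate list mirrors the log; the function name is illustrative):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in the PEM file
// becomes invalid within d (the openssl -checkend equivalent).
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	certs := []string{
		"/var/lib/minikube/certs/apiserver-etcd-client.crt",
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
		"/var/lib/minikube/certs/etcd/healthcheck-client.crt",
		"/var/lib/minikube/certs/etcd/peer.crt",
		"/var/lib/minikube/certs/front-proxy-client.crt",
	}
	for _, c := range certs {
		soon, err := expiresWithin(c, 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			continue
		}
		fmt.Printf("%s expires within 24h: %v\n", c, soon)
	}
}
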
	I1104 12:07:58.600338   86301 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-036892 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-036892 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.130 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1104 12:07:58.600445   86301 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1104 12:07:58.600492   86301 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1104 12:07:58.634282   86301 cri.go:89] found id: ""
	I1104 12:07:58.634352   86301 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1104 12:07:58.644578   86301 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1104 12:07:58.644597   86301 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1104 12:07:58.644635   86301 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1104 12:07:58.654412   86301 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1104 12:07:58.655638   86301 kubeconfig.go:125] found "default-k8s-diff-port-036892" server: "https://192.168.72.130:8444"
	I1104 12:07:58.658639   86301 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1104 12:07:58.667867   86301 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.130
	I1104 12:07:58.667900   86301 kubeadm.go:1160] stopping kube-system containers ...
	I1104 12:07:58.667913   86301 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1104 12:07:58.667971   86301 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1104 12:07:58.702765   86301 cri.go:89] found id: ""
	I1104 12:07:58.702844   86301 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1104 12:07:58.718368   86301 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1104 12:07:58.727671   86301 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1104 12:07:58.727690   86301 kubeadm.go:157] found existing configuration files:
	
	I1104 12:07:58.727750   86301 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1104 12:07:58.736350   86301 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1104 12:07:58.736424   86301 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1104 12:07:58.745441   86301 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1104 12:07:58.753945   86301 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1104 12:07:58.754011   86301 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1104 12:07:58.763134   86301 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1104 12:07:58.771588   86301 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1104 12:07:58.771651   86301 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1104 12:07:58.780623   86301 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1104 12:07:58.788962   86301 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1104 12:07:58.789036   86301 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1104 12:07:58.798472   86301 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1104 12:07:58.808209   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1104 12:07:58.919153   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1104 12:07:59.679355   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1104 12:07:59.889628   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1104 12:07:59.958981   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
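
Because existing configuration was found, restartPrimaryControlPlane does not run a full kubeadm init; it replays the individual phases above (certs, kubeconfig, kubelet-start, control-plane, etcd) against the regenerated kubeadm.yaml. A compact sketch of that sequence using os/exec (binary and config paths come from the log; this is illustrative, not minikube's code, and would have to run as root on the node):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, p := range phases {
		args := append(p, "--config", "/var/tmp/minikube/kubeadm.yaml")
		cmd := exec.Command("/var/lib/minikube/binaries/v1.31.2/kubeadm", args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintf(os.Stderr, "phase %v failed: %v\n", p, err)
			os.Exit(1)
		}
	}
}
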
	I1104 12:08:00.048061   86301 api_server.go:52] waiting for apiserver process to appear ...
	I1104 12:08:00.048158   86301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:07:56.798747   85759 node_ready.go:53] node "embed-certs-325116" has status "Ready":"False"
	I1104 12:07:57.799286   85759 node_ready.go:49] node "embed-certs-325116" has status "Ready":"True"
	I1104 12:07:57.799308   85759 node_ready.go:38] duration metric: took 7.504592975s for node "embed-certs-325116" to be "Ready" ...
	I1104 12:07:57.799319   85759 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1104 12:07:57.805595   85759 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-mf8xg" in "kube-system" namespace to be "Ready" ...
	I1104 12:07:57.812394   85759 pod_ready.go:93] pod "coredns-7c65d6cfc9-mf8xg" in "kube-system" namespace has status "Ready":"True"
	I1104 12:07:57.812421   85759 pod_ready.go:82] duration metric: took 6.791823ms for pod "coredns-7c65d6cfc9-mf8xg" in "kube-system" namespace to be "Ready" ...
	I1104 12:07:57.812434   85759 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-325116" in "kube-system" namespace to be "Ready" ...
	I1104 12:07:57.818338   85759 pod_ready.go:93] pod "etcd-embed-certs-325116" in "kube-system" namespace has status "Ready":"True"
	I1104 12:07:57.818359   85759 pod_ready.go:82] duration metric: took 5.916571ms for pod "etcd-embed-certs-325116" in "kube-system" namespace to be "Ready" ...
	I1104 12:07:57.818400   85759 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-325116" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:00.015222   85759 pod_ready.go:103] pod "kube-apiserver-embed-certs-325116" in "kube-system" namespace has status "Ready":"False"
	I1104 12:07:56.752983   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:07:56.753577   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | unable to find current IP address of domain old-k8s-version-589257 in network mk-old-k8s-version-589257
	I1104 12:07:56.753613   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | I1104 12:07:56.753542   87367 retry.go:31] will retry after 1.081359862s: waiting for machine to come up
	I1104 12:07:57.836518   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:07:57.836963   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | unable to find current IP address of domain old-k8s-version-589257 in network mk-old-k8s-version-589257
	I1104 12:07:57.836990   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | I1104 12:07:57.836914   87367 retry.go:31] will retry after 1.149571097s: waiting for machine to come up
	I1104 12:07:58.987694   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:07:58.988125   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | unable to find current IP address of domain old-k8s-version-589257 in network mk-old-k8s-version-589257
	I1104 12:07:58.988152   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | I1104 12:07:58.988077   87367 retry.go:31] will retry after 1.247311806s: waiting for machine to come up
	I1104 12:08:00.237634   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:00.238147   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | unable to find current IP address of domain old-k8s-version-589257 in network mk-old-k8s-version-589257
	I1104 12:08:00.238217   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | I1104 12:08:00.238109   87367 retry.go:31] will retry after 2.058125339s: waiting for machine to come up
	I1104 12:08:00.549003   86301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:01.048325   86301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:01.548502   86301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:01.563976   86301 api_server.go:72] duration metric: took 1.515915725s to wait for apiserver process to appear ...
	I1104 12:08:01.564003   86301 api_server.go:88] waiting for apiserver healthz status ...
	I1104 12:08:01.564021   86301 api_server.go:253] Checking apiserver healthz at https://192.168.72.130:8444/healthz ...
	I1104 12:08:04.008662   86301 api_server.go:279] https://192.168.72.130:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1104 12:08:04.008689   86301 api_server.go:103] status: https://192.168.72.130:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1104 12:08:04.008701   86301 api_server.go:253] Checking apiserver healthz at https://192.168.72.130:8444/healthz ...
	I1104 12:08:04.033053   86301 api_server.go:279] https://192.168.72.130:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1104 12:08:04.033085   86301 api_server.go:103] status: https://192.168.72.130:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1104 12:08:04.064261   86301 api_server.go:253] Checking apiserver healthz at https://192.168.72.130:8444/healthz ...
	I1104 12:08:04.084034   86301 api_server.go:279] https://192.168.72.130:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1104 12:08:04.084062   86301 api_server.go:103] status: https://192.168.72.130:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1104 12:08:04.564564   86301 api_server.go:253] Checking apiserver healthz at https://192.168.72.130:8444/healthz ...
	I1104 12:08:04.570062   86301 api_server.go:279] https://192.168.72.130:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1104 12:08:04.570090   86301 api_server.go:103] status: https://192.168.72.130:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1104 12:08:05.064688   86301 api_server.go:253] Checking apiserver healthz at https://192.168.72.130:8444/healthz ...
	I1104 12:08:05.069572   86301 api_server.go:279] https://192.168.72.130:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1104 12:08:05.069600   86301 api_server.go:103] status: https://192.168.72.130:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1104 12:08:05.564628   86301 api_server.go:253] Checking apiserver healthz at https://192.168.72.130:8444/healthz ...
	I1104 12:08:05.570537   86301 api_server.go:279] https://192.168.72.130:8444/healthz returned 200:
	ok
	I1104 12:08:05.577335   86301 api_server.go:141] control plane version: v1.31.2
	I1104 12:08:05.577360   86301 api_server.go:131] duration metric: took 4.01335048s to wait for apiserver health ...
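
The healthz probes above show the usual restart progression: 403 Forbidden while the anonymous request is rejected before the RBAC bootstrap roles exist, then 500 while post-start hooks such as rbac/bootstrap-roles and bootstrap-controller finish, and finally 200 "ok" roughly four seconds after the apiserver process appeared. A minimal polling loop of the same shape (URL taken from the log; skipping TLS verification is only reasonable for a throwaway health probe like this, since the apiserver serving cert is signed by minikube's own CA):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Do not verify the chain for this quick probe; the serving cert is self-managed.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.72.130:8444/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("healthz:", string(body)) // "ok"
				return
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		} else {
			fmt.Println("healthz error, retrying:", err)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("apiserver did not become healthy before the deadline")
}
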
	I1104 12:08:05.577371   86301 cni.go:84] Creating CNI manager for ""
	I1104 12:08:05.577379   86301 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1104 12:08:05.578990   86301 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1104 12:08:01.824677   85759 pod_ready.go:93] pod "kube-apiserver-embed-certs-325116" in "kube-system" namespace has status "Ready":"True"
	I1104 12:08:01.824703   85759 pod_ready.go:82] duration metric: took 4.006292816s for pod "kube-apiserver-embed-certs-325116" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:01.824717   85759 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-325116" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:01.833386   85759 pod_ready.go:93] pod "kube-controller-manager-embed-certs-325116" in "kube-system" namespace has status "Ready":"True"
	I1104 12:08:01.833415   85759 pod_ready.go:82] duration metric: took 8.688522ms for pod "kube-controller-manager-embed-certs-325116" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:01.833428   85759 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-phzgx" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:01.839346   85759 pod_ready.go:93] pod "kube-proxy-phzgx" in "kube-system" namespace has status "Ready":"True"
	I1104 12:08:01.839370   85759 pod_ready.go:82] duration metric: took 5.933971ms for pod "kube-proxy-phzgx" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:01.839379   85759 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-325116" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:01.844449   85759 pod_ready.go:93] pod "kube-scheduler-embed-certs-325116" in "kube-system" namespace has status "Ready":"True"
	I1104 12:08:01.844476   85759 pod_ready.go:82] duration metric: took 5.08969ms for pod "kube-scheduler-embed-certs-325116" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:01.844490   85759 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:03.852871   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:02.298631   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:02.299046   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | unable to find current IP address of domain old-k8s-version-589257 in network mk-old-k8s-version-589257
	I1104 12:08:02.299079   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | I1104 12:08:02.298978   87367 retry.go:31] will retry after 2.664667046s: waiting for machine to come up
	I1104 12:08:04.964700   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:04.965185   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | unable to find current IP address of domain old-k8s-version-589257 in network mk-old-k8s-version-589257
	I1104 12:08:04.965209   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | I1104 12:08:04.965135   87367 retry.go:31] will retry after 2.716802395s: waiting for machine to come up
	I1104 12:08:05.580188   86301 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1104 12:08:05.591930   86301 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1104 12:08:05.609969   86301 system_pods.go:43] waiting for kube-system pods to appear ...
	I1104 12:08:05.621524   86301 system_pods.go:59] 8 kube-system pods found
	I1104 12:08:05.621559   86301 system_pods.go:61] "coredns-7c65d6cfc9-zw2tv" [71ce75a4-f051-4014-9ed0-7b275ea940a9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1104 12:08:05.621579   86301 system_pods.go:61] "etcd-default-k8s-diff-port-036892" [7e46d97c-96b5-4301-b98a-f33b69937411] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1104 12:08:05.621590   86301 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-036892" [483cebd0-7ceb-4bf4-b1f9-e33be61b136e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1104 12:08:05.621599   86301 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-036892" [c2dc4343-177a-4a4c-8a25-47078ec664f1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1104 12:08:05.621609   86301 system_pods.go:61] "kube-proxy-j2srm" [9450cebd-aefb-4f1a-bb99-7d1dab054dd7] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1104 12:08:05.621623   86301 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-036892" [505d8202-5e02-4abd-9eff-163810a91eb2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1104 12:08:05.621637   86301 system_pods.go:61] "metrics-server-6867b74b74-2wl94" [7f7cc9c1-420c-480e-b6b7-1a2027bf2f9d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1104 12:08:05.621646   86301 system_pods.go:61] "storage-provisioner" [18745f89-fc15-4a4c-b68b-7e80cd4f393b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1104 12:08:05.621656   86301 system_pods.go:74] duration metric: took 11.668493ms to wait for pod list to return data ...
	I1104 12:08:05.621669   86301 node_conditions.go:102] verifying NodePressure condition ...
	I1104 12:08:05.626555   86301 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1104 12:08:05.626583   86301 node_conditions.go:123] node cpu capacity is 2
	I1104 12:08:05.626600   86301 node_conditions.go:105] duration metric: took 4.924748ms to run NodePressure ...
	I1104 12:08:05.626620   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1104 12:08:05.899159   86301 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1104 12:08:05.905004   86301 kubeadm.go:739] kubelet initialised
	I1104 12:08:05.905027   86301 kubeadm.go:740] duration metric: took 5.831926ms waiting for restarted kubelet to initialise ...
	I1104 12:08:05.905035   86301 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1104 12:08:05.910301   86301 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-zw2tv" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:05.917517   86301 pod_ready.go:98] node "default-k8s-diff-port-036892" hosting pod "coredns-7c65d6cfc9-zw2tv" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-036892" has status "Ready":"False"
	I1104 12:08:05.917552   86301 pod_ready.go:82] duration metric: took 7.223252ms for pod "coredns-7c65d6cfc9-zw2tv" in "kube-system" namespace to be "Ready" ...
	E1104 12:08:05.917564   86301 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-036892" hosting pod "coredns-7c65d6cfc9-zw2tv" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-036892" has status "Ready":"False"
	I1104 12:08:05.917577   86301 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-036892" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:05.924077   86301 pod_ready.go:98] node "default-k8s-diff-port-036892" hosting pod "etcd-default-k8s-diff-port-036892" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-036892" has status "Ready":"False"
	I1104 12:08:05.924108   86301 pod_ready.go:82] duration metric: took 6.519268ms for pod "etcd-default-k8s-diff-port-036892" in "kube-system" namespace to be "Ready" ...
	E1104 12:08:05.924123   86301 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-036892" hosting pod "etcd-default-k8s-diff-port-036892" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-036892" has status "Ready":"False"
	I1104 12:08:05.924133   86301 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-036892" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:05.929584   86301 pod_ready.go:98] node "default-k8s-diff-port-036892" hosting pod "kube-apiserver-default-k8s-diff-port-036892" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-036892" has status "Ready":"False"
	I1104 12:08:05.929611   86301 pod_ready.go:82] duration metric: took 5.464108ms for pod "kube-apiserver-default-k8s-diff-port-036892" in "kube-system" namespace to be "Ready" ...
	E1104 12:08:05.929625   86301 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-036892" hosting pod "kube-apiserver-default-k8s-diff-port-036892" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-036892" has status "Ready":"False"
	I1104 12:08:05.929640   86301 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-036892" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:06.013629   86301 pod_ready.go:98] node "default-k8s-diff-port-036892" hosting pod "kube-controller-manager-default-k8s-diff-port-036892" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-036892" has status "Ready":"False"
	I1104 12:08:06.013655   86301 pod_ready.go:82] duration metric: took 84.003349ms for pod "kube-controller-manager-default-k8s-diff-port-036892" in "kube-system" namespace to be "Ready" ...
	E1104 12:08:06.013666   86301 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-036892" hosting pod "kube-controller-manager-default-k8s-diff-port-036892" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-036892" has status "Ready":"False"
	I1104 12:08:06.013674   86301 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-j2srm" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:06.413337   86301 pod_ready.go:98] node "default-k8s-diff-port-036892" hosting pod "kube-proxy-j2srm" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-036892" has status "Ready":"False"
	I1104 12:08:06.413362   86301 pod_ready.go:82] duration metric: took 399.676932ms for pod "kube-proxy-j2srm" in "kube-system" namespace to be "Ready" ...
	E1104 12:08:06.413372   86301 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-036892" hosting pod "kube-proxy-j2srm" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-036892" has status "Ready":"False"
	I1104 12:08:06.413379   86301 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-036892" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:06.813910   86301 pod_ready.go:98] node "default-k8s-diff-port-036892" hosting pod "kube-scheduler-default-k8s-diff-port-036892" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-036892" has status "Ready":"False"
	I1104 12:08:06.813948   86301 pod_ready.go:82] duration metric: took 400.558541ms for pod "kube-scheduler-default-k8s-diff-port-036892" in "kube-system" namespace to be "Ready" ...
	E1104 12:08:06.813962   86301 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-036892" hosting pod "kube-scheduler-default-k8s-diff-port-036892" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-036892" has status "Ready":"False"
	I1104 12:08:06.813971   86301 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:07.213603   86301 pod_ready.go:98] node "default-k8s-diff-port-036892" hosting pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-036892" has status "Ready":"False"
	I1104 12:08:07.213632   86301 pod_ready.go:82] duration metric: took 399.645898ms for pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace to be "Ready" ...
	E1104 12:08:07.213642   86301 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-036892" hosting pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-036892" has status "Ready":"False"
	I1104 12:08:07.213650   86301 pod_ready.go:39] duration metric: took 1.308606058s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
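
Both profiles drive the same readiness loop here: list the system-critical pods in kube-system and require each pod's Ready condition to be True, skipping (and logging a WaitExtra error for) pods whose node is still NotReady, as seen for default-k8s-diff-port-036892 above. A rough client-go equivalent, assuming k8s.io/client-go is available (the kubeconfig path is the one from this job; names and the simple "all kube-system pods" filter are illustrative):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19906-19898/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	for {
		pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
		if err == nil {
			ready := 0
			for i := range pods.Items {
				if podReady(&pods.Items[i]) {
					ready++
				}
			}
			fmt.Printf("%d/%d kube-system pods ready\n", ready, len(pods.Items))
			if len(pods.Items) > 0 && ready == len(pods.Items) {
				return
			}
		}
		select {
		case <-ctx.Done():
			fmt.Println("timed out waiting for kube-system pods")
			return
		case <-time.After(2 * time.Second):
		}
	}
}
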
	I1104 12:08:07.213664   86301 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1104 12:08:07.224946   86301 ops.go:34] apiserver oom_adj: -16
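
The final restart check reads /proc/<pid>/oom_adj for the newest kube-apiserver process; the logged value of -16 means the kernel's OOM killer strongly deprioritizes the apiserver when memory runs short. A small sketch of the same probe (the pgrep flags differ slightly from the log's -xnf form; purely illustrative):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Find the newest exactly-named kube-apiserver process.
	out, err := exec.Command("pgrep", "-xn", "kube-apiserver").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, "kube-apiserver not running:", err)
		os.Exit(1)
	}
	pid := strings.TrimSpace(string(out))
	adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("kube-apiserver (pid %s) oom_adj: %s", pid, adj)
}
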
	I1104 12:08:07.224972   86301 kubeadm.go:597] duration metric: took 8.580368331s to restartPrimaryControlPlane
	I1104 12:08:07.224984   86301 kubeadm.go:394] duration metric: took 8.624649305s to StartCluster
	I1104 12:08:07.225005   86301 settings.go:142] acquiring lock: {Name:mk5550833c1f1a4ab4fbb2bf42ff8bd7f6341220 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 12:08:07.225093   86301 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19906-19898/kubeconfig
	I1104 12:08:07.226601   86301 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19906-19898/kubeconfig: {Name:mk164657c178e6abe086acb5c1cadb969cb2b0cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 12:08:07.226848   86301 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.130 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1104 12:08:07.226980   86301 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1104 12:08:07.227075   86301 config.go:182] Loaded profile config "default-k8s-diff-port-036892": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1104 12:08:07.227096   86301 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-036892"
	I1104 12:08:07.227115   86301 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-036892"
	I1104 12:08:07.227110   86301 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-036892"
	W1104 12:08:07.227128   86301 addons.go:243] addon metrics-server should already be in state true
	I1104 12:08:07.227145   86301 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-036892"
	I1104 12:08:07.227161   86301 host.go:66] Checking if "default-k8s-diff-port-036892" exists ...
	I1104 12:08:07.227082   86301 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-036892"
	I1104 12:08:07.227275   86301 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-036892"
	W1104 12:08:07.227291   86301 addons.go:243] addon storage-provisioner should already be in state true
	I1104 12:08:07.227316   86301 host.go:66] Checking if "default-k8s-diff-port-036892" exists ...
	I1104 12:08:07.227494   86301 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:08:07.227529   86301 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:08:07.227592   86301 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:08:07.227620   86301 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:08:07.227634   86301 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:08:07.227655   86301 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:08:07.228583   86301 out.go:177] * Verifying Kubernetes components...
	I1104 12:08:07.229927   86301 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1104 12:08:07.242580   86301 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41275
	I1104 12:08:07.243096   86301 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:08:07.243659   86301 main.go:141] libmachine: Using API Version  1
	I1104 12:08:07.243678   86301 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:08:07.243954   86301 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45813
	I1104 12:08:07.244058   86301 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:08:07.244513   86301 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:08:07.244634   86301 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:08:07.244679   86301 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:08:07.245015   86301 main.go:141] libmachine: Using API Version  1
	I1104 12:08:07.245035   86301 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:08:07.245437   86301 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:08:07.245905   86301 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:08:07.245942   86301 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:08:07.245963   86301 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43217
	I1104 12:08:07.246281   86301 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:08:07.246725   86301 main.go:141] libmachine: Using API Version  1
	I1104 12:08:07.246748   86301 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:08:07.247084   86301 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:08:07.247294   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetState
	I1104 12:08:07.250833   86301 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-036892"
	W1104 12:08:07.250857   86301 addons.go:243] addon default-storageclass should already be in state true
	I1104 12:08:07.250884   86301 host.go:66] Checking if "default-k8s-diff-port-036892" exists ...
	I1104 12:08:07.251243   86301 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:08:07.251285   86301 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:08:07.261670   86301 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34265
	I1104 12:08:07.261736   86301 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36543
	I1104 12:08:07.262154   86301 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:08:07.262283   86301 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:08:07.262803   86301 main.go:141] libmachine: Using API Version  1
	I1104 12:08:07.262821   86301 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:08:07.262916   86301 main.go:141] libmachine: Using API Version  1
	I1104 12:08:07.262927   86301 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:08:07.263218   86301 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:08:07.263282   86301 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:08:07.263411   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetState
	I1104 12:08:07.263457   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetState
	I1104 12:08:07.265067   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .DriverName
	I1104 12:08:07.265574   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .DriverName
	I1104 12:08:07.267307   86301 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1104 12:08:07.267336   86301 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1104 12:08:07.268853   86301 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1104 12:08:07.268874   86301 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1104 12:08:07.268895   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHHostname
	I1104 12:08:07.268976   86301 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1104 12:08:07.268994   86301 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1104 12:08:07.269011   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHHostname
	I1104 12:08:07.271584   86301 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39607
	I1104 12:08:07.272047   86301 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:08:07.272347   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:08:07.272377   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:08:07.272688   86301 main.go:141] libmachine: Using API Version  1
	I1104 12:08:07.272707   86301 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:08:07.272933   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:02:d6", ip: ""} in network mk-default-k8s-diff-port-036892: {Iface:virbr4 ExpiryTime:2024-11-04 13:07:45 +0000 UTC Type:0 Mac:52:54:00:da:02:d6 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:default-k8s-diff-port-036892 Clientid:01:52:54:00:da:02:d6}
	I1104 12:08:07.272959   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined IP address 192.168.72.130 and MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:08:07.272990   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:02:d6", ip: ""} in network mk-default-k8s-diff-port-036892: {Iface:virbr4 ExpiryTime:2024-11-04 13:07:45 +0000 UTC Type:0 Mac:52:54:00:da:02:d6 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:default-k8s-diff-port-036892 Clientid:01:52:54:00:da:02:d6}
	I1104 12:08:07.273007   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined IP address 192.168.72.130 and MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:08:07.273065   86301 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:08:07.273149   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHPort
	I1104 12:08:07.273564   86301 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:08:07.273597   86301 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:08:07.273765   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHPort
	I1104 12:08:07.273767   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHKeyPath
	I1104 12:08:07.273925   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHKeyPath
	I1104 12:08:07.273966   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHUsername
	I1104 12:08:07.274049   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHUsername
	I1104 12:08:07.274098   86301 sshutil.go:53] new ssh client: &{IP:192.168.72.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/default-k8s-diff-port-036892/id_rsa Username:docker}
	I1104 12:08:07.274179   86301 sshutil.go:53] new ssh client: &{IP:192.168.72.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/default-k8s-diff-port-036892/id_rsa Username:docker}
	I1104 12:08:07.288474   86301 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36605
	I1104 12:08:07.288955   86301 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:08:07.289555   86301 main.go:141] libmachine: Using API Version  1
	I1104 12:08:07.289580   86301 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:08:07.289915   86301 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:08:07.290128   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetState
	I1104 12:08:07.291744   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .DriverName
	I1104 12:08:07.291944   86301 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1104 12:08:07.291958   86301 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1104 12:08:07.291972   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHHostname
	I1104 12:08:07.294477   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:08:07.294793   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:02:d6", ip: ""} in network mk-default-k8s-diff-port-036892: {Iface:virbr4 ExpiryTime:2024-11-04 13:07:45 +0000 UTC Type:0 Mac:52:54:00:da:02:d6 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:default-k8s-diff-port-036892 Clientid:01:52:54:00:da:02:d6}
	I1104 12:08:07.294824   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined IP address 192.168.72.130 and MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:08:07.295009   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHPort
	I1104 12:08:07.295178   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHKeyPath
	I1104 12:08:07.295326   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHUsername
	I1104 12:08:07.295444   86301 sshutil.go:53] new ssh client: &{IP:192.168.72.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/default-k8s-diff-port-036892/id_rsa Username:docker}
	I1104 12:08:07.430295   86301 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1104 12:08:07.461396   86301 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-036892" to be "Ready" ...
	I1104 12:08:07.523117   86301 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1104 12:08:07.542339   86301 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1104 12:08:07.542361   86301 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1104 12:08:07.566207   86301 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1104 12:08:07.566232   86301 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1104 12:08:07.580871   86301 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1104 12:08:07.596309   86301 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1104 12:08:07.596338   86301 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1104 12:08:07.626662   86301 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1104 12:08:08.553268   86301 main.go:141] libmachine: Making call to close driver server
	I1104 12:08:08.553295   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .Close
	I1104 12:08:08.553315   86301 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.030165078s)
	I1104 12:08:08.553352   86301 main.go:141] libmachine: Making call to close driver server
	I1104 12:08:08.553373   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .Close
	I1104 12:08:08.553656   86301 main.go:141] libmachine: Successfully made call to close driver server
	I1104 12:08:08.553673   86301 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 12:08:08.553683   86301 main.go:141] libmachine: Making call to close driver server
	I1104 12:08:08.553692   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .Close
	I1104 12:08:08.553739   86301 main.go:141] libmachine: Successfully made call to close driver server
	I1104 12:08:08.553759   86301 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 12:08:08.553767   86301 main.go:141] libmachine: Making call to close driver server
	I1104 12:08:08.553780   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .Close
	I1104 12:08:08.553925   86301 main.go:141] libmachine: Successfully made call to close driver server
	I1104 12:08:08.553942   86301 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 12:08:08.554106   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | Closing plugin on server side
	I1104 12:08:08.554138   86301 main.go:141] libmachine: Successfully made call to close driver server
	I1104 12:08:08.554155   86301 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 12:08:08.559615   86301 main.go:141] libmachine: Making call to close driver server
	I1104 12:08:08.559635   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .Close
	I1104 12:08:08.559944   86301 main.go:141] libmachine: Successfully made call to close driver server
	I1104 12:08:08.559961   86301 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 12:08:08.563833   86301 main.go:141] libmachine: Making call to close driver server
	I1104 12:08:08.563848   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .Close
	I1104 12:08:08.564072   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | Closing plugin on server side
	I1104 12:08:08.564636   86301 main.go:141] libmachine: Successfully made call to close driver server
	I1104 12:08:08.564653   86301 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 12:08:08.564666   86301 main.go:141] libmachine: Making call to close driver server
	I1104 12:08:08.564671   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .Close
	I1104 12:08:08.564894   86301 main.go:141] libmachine: Successfully made call to close driver server
	I1104 12:08:08.564906   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | Closing plugin on server side
	I1104 12:08:08.564912   86301 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 12:08:08.564940   86301 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-036892"
	I1104 12:08:08.566838   86301 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1104 12:08:08.568165   86301 addons.go:510] duration metric: took 1.341200959s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1104 12:08:09.465405   86301 node_ready.go:53] node "default-k8s-diff-port-036892" has status "Ready":"False"
	I1104 12:08:06.350759   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:08.850563   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:10.851315   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:07.683582   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:07.684143   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | unable to find current IP address of domain old-k8s-version-589257 in network mk-old-k8s-version-589257
	I1104 12:08:07.684172   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | I1104 12:08:07.684093   87367 retry.go:31] will retry after 2.880856513s: waiting for machine to come up
	I1104 12:08:10.566197   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:10.566657   86402 main.go:141] libmachine: (old-k8s-version-589257) Found IP for machine: 192.168.50.180
	I1104 12:08:10.566675   86402 main.go:141] libmachine: (old-k8s-version-589257) Reserving static IP address...
	I1104 12:08:10.566687   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has current primary IP address 192.168.50.180 and MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:10.567139   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | found host DHCP lease matching {name: "old-k8s-version-589257", mac: "52:54:00:6b:6c:11", ip: "192.168.50.180"} in network mk-old-k8s-version-589257: {Iface:virbr2 ExpiryTime:2024-11-04 13:08:03 +0000 UTC Type:0 Mac:52:54:00:6b:6c:11 Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:old-k8s-version-589257 Clientid:01:52:54:00:6b:6c:11}
	I1104 12:08:10.567166   86402 main.go:141] libmachine: (old-k8s-version-589257) Reserved static IP address: 192.168.50.180
	I1104 12:08:10.567186   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | skip adding static IP to network mk-old-k8s-version-589257 - found existing host DHCP lease matching {name: "old-k8s-version-589257", mac: "52:54:00:6b:6c:11", ip: "192.168.50.180"}
	I1104 12:08:10.567199   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | Getting to WaitForSSH function...
	I1104 12:08:10.567213   86402 main.go:141] libmachine: (old-k8s-version-589257) Waiting for SSH to be available...
	I1104 12:08:10.569500   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:10.569816   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:6c:11", ip: ""} in network mk-old-k8s-version-589257: {Iface:virbr2 ExpiryTime:2024-11-04 13:08:03 +0000 UTC Type:0 Mac:52:54:00:6b:6c:11 Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:old-k8s-version-589257 Clientid:01:52:54:00:6b:6c:11}
	I1104 12:08:10.569846   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined IP address 192.168.50.180 and MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:10.569982   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | Using SSH client type: external
	I1104 12:08:10.570004   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | Using SSH private key: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/old-k8s-version-589257/id_rsa (-rw-------)
	I1104 12:08:10.570025   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.180 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19906-19898/.minikube/machines/old-k8s-version-589257/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1104 12:08:10.570033   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | About to run SSH command:
	I1104 12:08:10.570041   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | exit 0
	I1104 12:08:10.697114   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | SSH cmd err, output: <nil>: 
	I1104 12:08:10.697552   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetConfigRaw
	I1104 12:08:10.698196   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetIP
	I1104 12:08:10.700982   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:10.701369   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:6c:11", ip: ""} in network mk-old-k8s-version-589257: {Iface:virbr2 ExpiryTime:2024-11-04 13:08:03 +0000 UTC Type:0 Mac:52:54:00:6b:6c:11 Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:old-k8s-version-589257 Clientid:01:52:54:00:6b:6c:11}
	I1104 12:08:10.701403   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined IP address 192.168.50.180 and MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:10.701649   86402 profile.go:143] Saving config to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/old-k8s-version-589257/config.json ...
	I1104 12:08:10.701875   86402 machine.go:93] provisionDockerMachine start ...
	I1104 12:08:10.701898   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .DriverName
	I1104 12:08:10.702099   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHHostname
	I1104 12:08:10.704605   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:10.704977   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:6c:11", ip: ""} in network mk-old-k8s-version-589257: {Iface:virbr2 ExpiryTime:2024-11-04 13:08:03 +0000 UTC Type:0 Mac:52:54:00:6b:6c:11 Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:old-k8s-version-589257 Clientid:01:52:54:00:6b:6c:11}
	I1104 12:08:10.705006   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined IP address 192.168.50.180 and MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:10.705151   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHPort
	I1104 12:08:10.705342   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHKeyPath
	I1104 12:08:10.705486   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHKeyPath
	I1104 12:08:10.705602   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHUsername
	I1104 12:08:10.705703   86402 main.go:141] libmachine: Using SSH client type: native
	I1104 12:08:10.705907   86402 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.180 22 <nil> <nil>}
	I1104 12:08:10.705918   86402 main.go:141] libmachine: About to run SSH command:
	hostname
	I1104 12:08:10.813494   86402 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1104 12:08:10.813544   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetMachineName
	I1104 12:08:10.813816   86402 buildroot.go:166] provisioning hostname "old-k8s-version-589257"
	I1104 12:08:10.813847   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetMachineName
	I1104 12:08:10.814034   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHHostname
	I1104 12:08:10.816782   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:10.817186   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:6c:11", ip: ""} in network mk-old-k8s-version-589257: {Iface:virbr2 ExpiryTime:2024-11-04 13:08:03 +0000 UTC Type:0 Mac:52:54:00:6b:6c:11 Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:old-k8s-version-589257 Clientid:01:52:54:00:6b:6c:11}
	I1104 12:08:10.817245   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined IP address 192.168.50.180 and MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:10.817394   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHPort
	I1104 12:08:10.817589   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHKeyPath
	I1104 12:08:10.817760   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHKeyPath
	I1104 12:08:10.817882   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHUsername
	I1104 12:08:10.818027   86402 main.go:141] libmachine: Using SSH client type: native
	I1104 12:08:10.818227   86402 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.180 22 <nil> <nil>}
	I1104 12:08:10.818245   86402 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-589257 && echo "old-k8s-version-589257" | sudo tee /etc/hostname
	I1104 12:08:10.940779   86402 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-589257
	
	I1104 12:08:10.940803   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHHostname
	I1104 12:08:10.943694   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:10.944062   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:6c:11", ip: ""} in network mk-old-k8s-version-589257: {Iface:virbr2 ExpiryTime:2024-11-04 13:08:03 +0000 UTC Type:0 Mac:52:54:00:6b:6c:11 Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:old-k8s-version-589257 Clientid:01:52:54:00:6b:6c:11}
	I1104 12:08:10.944090   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined IP address 192.168.50.180 and MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:10.944263   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHPort
	I1104 12:08:10.944452   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHKeyPath
	I1104 12:08:10.944627   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHKeyPath
	I1104 12:08:10.944767   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHUsername
	I1104 12:08:10.944910   86402 main.go:141] libmachine: Using SSH client type: native
	I1104 12:08:10.945093   86402 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.180 22 <nil> <nil>}
	I1104 12:08:10.945110   86402 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-589257' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-589257/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-589257' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1104 12:08:11.061924   86402 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1104 12:08:11.061966   86402 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19906-19898/.minikube CaCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19906-19898/.minikube}
	I1104 12:08:11.062007   86402 buildroot.go:174] setting up certificates
	I1104 12:08:11.062021   86402 provision.go:84] configureAuth start
	I1104 12:08:11.062033   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetMachineName
	I1104 12:08:11.062293   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetIP
	I1104 12:08:11.065165   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:11.065559   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:6c:11", ip: ""} in network mk-old-k8s-version-589257: {Iface:virbr2 ExpiryTime:2024-11-04 13:08:03 +0000 UTC Type:0 Mac:52:54:00:6b:6c:11 Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:old-k8s-version-589257 Clientid:01:52:54:00:6b:6c:11}
	I1104 12:08:11.065594   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined IP address 192.168.50.180 and MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:11.065834   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHHostname
	I1104 12:08:11.068257   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:11.068620   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:6c:11", ip: ""} in network mk-old-k8s-version-589257: {Iface:virbr2 ExpiryTime:2024-11-04 13:08:03 +0000 UTC Type:0 Mac:52:54:00:6b:6c:11 Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:old-k8s-version-589257 Clientid:01:52:54:00:6b:6c:11}
	I1104 12:08:11.068646   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined IP address 192.168.50.180 and MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:11.068787   86402 provision.go:143] copyHostCerts
	I1104 12:08:11.068842   86402 exec_runner.go:144] found /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem, removing ...
	I1104 12:08:11.068854   86402 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem
	I1104 12:08:11.068904   86402 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem (1078 bytes)
	I1104 12:08:11.068993   86402 exec_runner.go:144] found /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem, removing ...
	I1104 12:08:11.069000   86402 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem
	I1104 12:08:11.069019   86402 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem (1123 bytes)
	I1104 12:08:11.069072   86402 exec_runner.go:144] found /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem, removing ...
	I1104 12:08:11.069079   86402 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem
	I1104 12:08:11.069097   86402 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem (1679 bytes)
	I1104 12:08:11.069191   86402 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-589257 san=[127.0.0.1 192.168.50.180 localhost minikube old-k8s-version-589257]
	I1104 12:08:11.271880   86402 provision.go:177] copyRemoteCerts
	I1104 12:08:11.271946   86402 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1104 12:08:11.271988   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHHostname
	I1104 12:08:11.275023   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:11.275396   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:6c:11", ip: ""} in network mk-old-k8s-version-589257: {Iface:virbr2 ExpiryTime:2024-11-04 13:08:03 +0000 UTC Type:0 Mac:52:54:00:6b:6c:11 Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:old-k8s-version-589257 Clientid:01:52:54:00:6b:6c:11}
	I1104 12:08:11.275428   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined IP address 192.168.50.180 and MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:11.275701   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHPort
	I1104 12:08:11.275905   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHKeyPath
	I1104 12:08:11.276048   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHUsername
	I1104 12:08:11.276182   86402 sshutil.go:53] new ssh client: &{IP:192.168.50.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/old-k8s-version-589257/id_rsa Username:docker}
	I1104 12:08:11.362968   86402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1104 12:08:11.388401   86402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1104 12:08:11.417180   86402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1104 12:08:11.439810   86402 provision.go:87] duration metric: took 377.778325ms to configureAuth
	I1104 12:08:11.439841   86402 buildroot.go:189] setting minikube options for container-runtime
	I1104 12:08:11.440043   86402 config.go:182] Loaded profile config "old-k8s-version-589257": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1104 12:08:11.440110   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHHostname
	I1104 12:08:11.442476   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:11.442783   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:6c:11", ip: ""} in network mk-old-k8s-version-589257: {Iface:virbr2 ExpiryTime:2024-11-04 13:08:03 +0000 UTC Type:0 Mac:52:54:00:6b:6c:11 Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:old-k8s-version-589257 Clientid:01:52:54:00:6b:6c:11}
	I1104 12:08:11.442818   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined IP address 192.168.50.180 and MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:11.443005   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHPort
	I1104 12:08:11.443204   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHKeyPath
	I1104 12:08:11.443329   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHKeyPath
	I1104 12:08:11.443492   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHUsername
	I1104 12:08:11.443665   86402 main.go:141] libmachine: Using SSH client type: native
	I1104 12:08:11.443822   86402 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.180 22 <nil> <nil>}
	I1104 12:08:11.443837   86402 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1104 12:08:11.662212   86402 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1104 12:08:11.662241   86402 machine.go:96] duration metric: took 960.351823ms to provisionDockerMachine
	I1104 12:08:11.662256   86402 start.go:293] postStartSetup for "old-k8s-version-589257" (driver="kvm2")
	I1104 12:08:11.662269   86402 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1104 12:08:11.662289   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .DriverName
	I1104 12:08:11.662613   86402 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1104 12:08:11.662642   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHHostname
	I1104 12:08:11.665028   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:11.665391   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:6c:11", ip: ""} in network mk-old-k8s-version-589257: {Iface:virbr2 ExpiryTime:2024-11-04 13:08:03 +0000 UTC Type:0 Mac:52:54:00:6b:6c:11 Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:old-k8s-version-589257 Clientid:01:52:54:00:6b:6c:11}
	I1104 12:08:11.665420   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined IP address 192.168.50.180 and MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:11.665598   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHPort
	I1104 12:08:11.665776   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHKeyPath
	I1104 12:08:11.665942   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHUsername
	I1104 12:08:11.666064   86402 sshutil.go:53] new ssh client: &{IP:192.168.50.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/old-k8s-version-589257/id_rsa Username:docker}
	I1104 12:08:11.889727   85500 start.go:364] duration metric: took 49.147423989s to acquireMachinesLock for "no-preload-908370"
	I1104 12:08:11.889796   85500 start.go:96] Skipping create...Using existing machine configuration
	I1104 12:08:11.889806   85500 fix.go:54] fixHost starting: 
	I1104 12:08:11.890201   85500 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:08:11.890229   85500 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:08:11.906978   85500 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40931
	I1104 12:08:11.907524   85500 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:08:11.907916   85500 main.go:141] libmachine: Using API Version  1
	I1104 12:08:11.907939   85500 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:08:11.908319   85500 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:08:11.908518   85500 main.go:141] libmachine: (no-preload-908370) Calling .DriverName
	I1104 12:08:11.908672   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetState
	I1104 12:08:11.910182   85500 fix.go:112] recreateIfNeeded on no-preload-908370: state=Stopped err=<nil>
	I1104 12:08:11.910224   85500 main.go:141] libmachine: (no-preload-908370) Calling .DriverName
	W1104 12:08:11.910353   85500 fix.go:138] unexpected machine state, will restart: <nil>
	I1104 12:08:11.912457   85500 out.go:177] * Restarting existing kvm2 VM for "no-preload-908370" ...
	I1104 12:08:11.747199   86402 ssh_runner.go:195] Run: cat /etc/os-release
	I1104 12:08:11.751253   86402 info.go:137] Remote host: Buildroot 2023.02.9
	I1104 12:08:11.751279   86402 filesync.go:126] Scanning /home/jenkins/minikube-integration/19906-19898/.minikube/addons for local assets ...
	I1104 12:08:11.751356   86402 filesync.go:126] Scanning /home/jenkins/minikube-integration/19906-19898/.minikube/files for local assets ...
	I1104 12:08:11.751465   86402 filesync.go:149] local asset: /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem -> 272182.pem in /etc/ssl/certs
	I1104 12:08:11.751591   86402 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1104 12:08:11.760409   86402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem --> /etc/ssl/certs/272182.pem (1708 bytes)
	I1104 12:08:11.781890   86402 start.go:296] duration metric: took 119.620604ms for postStartSetup
	I1104 12:08:11.781934   86402 fix.go:56] duration metric: took 19.207938878s for fixHost
	I1104 12:08:11.781960   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHHostname
	I1104 12:08:11.784767   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:11.785058   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:6c:11", ip: ""} in network mk-old-k8s-version-589257: {Iface:virbr2 ExpiryTime:2024-11-04 13:08:03 +0000 UTC Type:0 Mac:52:54:00:6b:6c:11 Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:old-k8s-version-589257 Clientid:01:52:54:00:6b:6c:11}
	I1104 12:08:11.785084   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined IP address 192.168.50.180 and MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:11.785300   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHPort
	I1104 12:08:11.785500   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHKeyPath
	I1104 12:08:11.785644   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHKeyPath
	I1104 12:08:11.785750   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHUsername
	I1104 12:08:11.785877   86402 main.go:141] libmachine: Using SSH client type: native
	I1104 12:08:11.786047   86402 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.180 22 <nil> <nil>}
	I1104 12:08:11.786059   86402 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1104 12:08:11.889540   86402 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730722091.863405264
	
	I1104 12:08:11.889568   86402 fix.go:216] guest clock: 1730722091.863405264
	I1104 12:08:11.889578   86402 fix.go:229] Guest: 2024-11-04 12:08:11.863405264 +0000 UTC Remote: 2024-11-04 12:08:11.781939603 +0000 UTC m=+230.132769870 (delta=81.465661ms)
	I1104 12:08:11.889631   86402 fix.go:200] guest clock delta is within tolerance: 81.465661ms
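	The preceding lines sample the guest clock with `date +%s.%N` and compare it to the host clock, accepting the ~81ms drift as within tolerance. Below is a minimal Go sketch of that comparison; it is not minikube's fix.go, and the one-second tolerance is an assumed placeholder.

	// Minimal sketch (not minikube's fix.go): parse the guest's `date +%s.%N`
	// output and compare it to the host clock, flagging drift beyond a tolerance.
	package main

	import (
		"fmt"
		"math"
		"strconv"
		"strings"
		"time"
	)

	// guestClockDelta converts the guest's "seconds.nanoseconds" output into a
	// time.Time and returns guest minus host. float64 parsing limits precision
	// to roughly microseconds, which is fine for a drift check like this.
	func guestClockDelta(guestOut string, host time.Time) (time.Duration, error) {
		secs, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
		if err != nil {
			return 0, fmt.Errorf("parsing guest clock %q: %w", guestOut, err)
		}
		sec := int64(secs)
		nsec := int64((secs - float64(sec)) * 1e9)
		return time.Unix(sec, nsec).Sub(host), nil
	}

	func main() {
		const tolerance = time.Second // assumed threshold, for illustration only
		delta, err := guestClockDelta("1730722091.863405264\n", time.Now())
		if err != nil {
			panic(err)
		}
		if math.Abs(float64(delta)) <= float64(tolerance) {
			fmt.Printf("guest clock delta %v is within tolerance\n", delta)
		} else {
			fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
		}
	}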
	I1104 12:08:11.889641   86402 start.go:83] releasing machines lock for "old-k8s-version-589257", held for 19.315682928s
	I1104 12:08:11.889677   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .DriverName
	I1104 12:08:11.889975   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetIP
	I1104 12:08:11.892654   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:11.892982   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:6c:11", ip: ""} in network mk-old-k8s-version-589257: {Iface:virbr2 ExpiryTime:2024-11-04 13:08:03 +0000 UTC Type:0 Mac:52:54:00:6b:6c:11 Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:old-k8s-version-589257 Clientid:01:52:54:00:6b:6c:11}
	I1104 12:08:11.893012   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined IP address 192.168.50.180 and MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:11.893212   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .DriverName
	I1104 12:08:11.893706   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .DriverName
	I1104 12:08:11.893888   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .DriverName
	I1104 12:08:11.893989   86402 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1104 12:08:11.894031   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHHostname
	I1104 12:08:11.894074   86402 ssh_runner.go:195] Run: cat /version.json
	I1104 12:08:11.894094   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHHostname
	I1104 12:08:11.896812   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:11.897020   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:11.897192   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:6c:11", ip: ""} in network mk-old-k8s-version-589257: {Iface:virbr2 ExpiryTime:2024-11-04 13:08:03 +0000 UTC Type:0 Mac:52:54:00:6b:6c:11 Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:old-k8s-version-589257 Clientid:01:52:54:00:6b:6c:11}
	I1104 12:08:11.897217   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined IP address 192.168.50.180 and MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:11.897454   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:6c:11", ip: ""} in network mk-old-k8s-version-589257: {Iface:virbr2 ExpiryTime:2024-11-04 13:08:03 +0000 UTC Type:0 Mac:52:54:00:6b:6c:11 Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:old-k8s-version-589257 Clientid:01:52:54:00:6b:6c:11}
	I1104 12:08:11.897478   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined IP address 192.168.50.180 and MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:11.897492   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHPort
	I1104 12:08:11.897631   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHPort
	I1104 12:08:11.897646   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHKeyPath
	I1104 12:08:11.897778   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHKeyPath
	I1104 12:08:11.897911   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHUsername
	I1104 12:08:11.897989   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHUsername
	I1104 12:08:11.898083   86402 sshutil.go:53] new ssh client: &{IP:192.168.50.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/old-k8s-version-589257/id_rsa Username:docker}
	I1104 12:08:11.898120   86402 sshutil.go:53] new ssh client: &{IP:192.168.50.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/old-k8s-version-589257/id_rsa Username:docker}
	I1104 12:08:11.998704   86402 ssh_runner.go:195] Run: systemctl --version
	I1104 12:08:12.004820   86402 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1104 12:08:12.148742   86402 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1104 12:08:12.155015   86402 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1104 12:08:12.155089   86402 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1104 12:08:12.171054   86402 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1104 12:08:12.171085   86402 start.go:495] detecting cgroup driver to use...
	I1104 12:08:12.171154   86402 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1104 12:08:12.189977   86402 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1104 12:08:12.204622   86402 docker.go:217] disabling cri-docker service (if available) ...
	I1104 12:08:12.204679   86402 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1104 12:08:12.218808   86402 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1104 12:08:12.232276   86402 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1104 12:08:12.341220   86402 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1104 12:08:12.512813   86402 docker.go:233] disabling docker service ...
	I1104 12:08:12.512893   86402 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1104 12:08:12.526784   86402 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1104 12:08:12.539774   86402 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1104 12:08:12.666162   86402 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1104 12:08:12.788317   86402 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1104 12:08:12.802703   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1104 12:08:12.820915   86402 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1104 12:08:12.820985   86402 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:08:12.831311   86402 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1104 12:08:12.831400   86402 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:08:12.841625   86402 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:08:12.852548   86402 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:08:12.864683   86402 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1104 12:08:12.876794   86402 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1104 12:08:12.886878   86402 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1104 12:08:12.886943   86402 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1104 12:08:12.902476   86402 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1104 12:08:12.914565   86402 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1104 12:08:13.044125   86402 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1104 12:08:13.149816   86402 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1104 12:08:13.149893   86402 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1104 12:08:13.154639   86402 start.go:563] Will wait 60s for crictl version
	I1104 12:08:13.154706   86402 ssh_runner.go:195] Run: which crictl
	I1104 12:08:13.158788   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1104 12:08:13.200038   86402 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1104 12:08:13.200117   86402 ssh_runner.go:195] Run: crio --version
	I1104 12:08:13.233501   86402 ssh_runner.go:195] Run: crio --version
	I1104 12:08:13.264558   86402 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1104 12:08:11.913730   85500 main.go:141] libmachine: (no-preload-908370) Calling .Start
	I1104 12:08:11.913915   85500 main.go:141] libmachine: (no-preload-908370) Ensuring networks are active...
	I1104 12:08:11.914653   85500 main.go:141] libmachine: (no-preload-908370) Ensuring network default is active
	I1104 12:08:11.915111   85500 main.go:141] libmachine: (no-preload-908370) Ensuring network mk-no-preload-908370 is active
	I1104 12:08:11.915575   85500 main.go:141] libmachine: (no-preload-908370) Getting domain xml...
	I1104 12:08:11.916375   85500 main.go:141] libmachine: (no-preload-908370) Creating domain...
	I1104 12:08:13.289793   85500 main.go:141] libmachine: (no-preload-908370) Waiting to get IP...
	I1104 12:08:13.290880   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:13.291498   85500 main.go:141] libmachine: (no-preload-908370) DBG | unable to find current IP address of domain no-preload-908370 in network mk-no-preload-908370
	I1104 12:08:13.291631   85500 main.go:141] libmachine: (no-preload-908370) DBG | I1104 12:08:13.291463   87562 retry.go:31] will retry after 277.090671ms: waiting for machine to come up
	I1104 12:08:13.570141   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:13.570726   85500 main.go:141] libmachine: (no-preload-908370) DBG | unable to find current IP address of domain no-preload-908370 in network mk-no-preload-908370
	I1104 12:08:13.570749   85500 main.go:141] libmachine: (no-preload-908370) DBG | I1104 12:08:13.570623   87562 retry.go:31] will retry after 259.985785ms: waiting for machine to come up
	I1104 12:08:13.832172   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:13.832855   85500 main.go:141] libmachine: (no-preload-908370) DBG | unable to find current IP address of domain no-preload-908370 in network mk-no-preload-908370
	I1104 12:08:13.832898   85500 main.go:141] libmachine: (no-preload-908370) DBG | I1104 12:08:13.832809   87562 retry.go:31] will retry after 473.426945ms: waiting for machine to come up
	I1104 12:08:14.308725   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:14.309273   85500 main.go:141] libmachine: (no-preload-908370) DBG | unable to find current IP address of domain no-preload-908370 in network mk-no-preload-908370
	I1104 12:08:14.309302   85500 main.go:141] libmachine: (no-preload-908370) DBG | I1104 12:08:14.309249   87562 retry.go:31] will retry after 417.466134ms: waiting for machine to come up
	I1104 12:08:14.727927   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:14.728388   85500 main.go:141] libmachine: (no-preload-908370) DBG | unable to find current IP address of domain no-preload-908370 in network mk-no-preload-908370
	I1104 12:08:14.728413   85500 main.go:141] libmachine: (no-preload-908370) DBG | I1104 12:08:14.728366   87562 retry.go:31] will retry after 734.894622ms: waiting for machine to come up
	I1104 12:08:11.465894   86301 node_ready.go:53] node "default-k8s-diff-port-036892" has status "Ready":"False"
	I1104 12:08:13.966921   86301 node_ready.go:53] node "default-k8s-diff-port-036892" has status "Ready":"False"
	I1104 12:08:14.465523   86301 node_ready.go:49] node "default-k8s-diff-port-036892" has status "Ready":"True"
	I1104 12:08:14.465545   86301 node_ready.go:38] duration metric: took 7.004111382s for node "default-k8s-diff-port-036892" to be "Ready" ...
	I1104 12:08:14.465554   86301 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1104 12:08:14.473334   86301 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-zw2tv" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:14.482486   86301 pod_ready.go:93] pod "coredns-7c65d6cfc9-zw2tv" in "kube-system" namespace has status "Ready":"True"
	I1104 12:08:14.482508   86301 pod_ready.go:82] duration metric: took 9.145998ms for pod "coredns-7c65d6cfc9-zw2tv" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:14.482518   86301 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-036892" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:13.351753   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:15.851818   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
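	The pod_ready lines above poll the metrics-server pod's Ready condition every few seconds until it turns True or the wait times out. A minimal client-go sketch of that kind of poll follows; it is not minikube's pod_ready.go, and the namespace, pod name, interval, and timeout are lifted from the log purely for illustration.

	// Minimal sketch (not minikube's pod_ready.go): poll a pod's Ready condition
	// with client-go until it is True or the timeout expires.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podReady reports whether the pod's PodReady condition is currently True.
	func podReady(ctx context.Context, cs kubernetes.Interface, ns, name string) (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
		defer cancel()
		for {
			ready, err := podReady(ctx, cs, "kube-system", "metrics-server-6867b74b74-knfd4")
			if err == nil && ready {
				fmt.Println("pod is Ready")
				return
			}
			select {
			case <-ctx.Done():
				fmt.Println("timed out waiting for pod to be Ready")
				return
			case <-time.After(2 * time.Second):
			}
		}
	}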
	I1104 12:08:13.266087   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetIP
	I1104 12:08:13.269660   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:13.270200   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:6c:11", ip: ""} in network mk-old-k8s-version-589257: {Iface:virbr2 ExpiryTime:2024-11-04 13:08:03 +0000 UTC Type:0 Mac:52:54:00:6b:6c:11 Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:old-k8s-version-589257 Clientid:01:52:54:00:6b:6c:11}
	I1104 12:08:13.270233   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined IP address 192.168.50.180 and MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:13.270520   86402 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1104 12:08:13.274751   86402 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1104 12:08:13.290348   86402 kubeadm.go:883] updating cluster {Name:old-k8s-version-589257 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-589257 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.180 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1104 12:08:13.290483   86402 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1104 12:08:13.290547   86402 ssh_runner.go:195] Run: sudo crictl images --output json
	I1104 12:08:13.340338   86402 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1104 12:08:13.340426   86402 ssh_runner.go:195] Run: which lz4
	I1104 12:08:13.345147   86402 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1104 12:08:13.349792   86402 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1104 12:08:13.349872   86402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1104 12:08:14.842720   86402 crio.go:462] duration metric: took 1.497615031s to copy over tarball
	I1104 12:08:14.842791   86402 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1104 12:08:15.464914   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:15.465510   85500 main.go:141] libmachine: (no-preload-908370) DBG | unable to find current IP address of domain no-preload-908370 in network mk-no-preload-908370
	I1104 12:08:15.465541   85500 main.go:141] libmachine: (no-preload-908370) DBG | I1104 12:08:15.465478   87562 retry.go:31] will retry after 578.01955ms: waiting for machine to come up
	I1104 12:08:16.044861   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:16.045354   85500 main.go:141] libmachine: (no-preload-908370) DBG | unable to find current IP address of domain no-preload-908370 in network mk-no-preload-908370
	I1104 12:08:16.045380   85500 main.go:141] libmachine: (no-preload-908370) DBG | I1104 12:08:16.045313   87562 retry.go:31] will retry after 1.136035438s: waiting for machine to come up
	I1104 12:08:17.182829   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:17.183255   85500 main.go:141] libmachine: (no-preload-908370) DBG | unable to find current IP address of domain no-preload-908370 in network mk-no-preload-908370
	I1104 12:08:17.183282   85500 main.go:141] libmachine: (no-preload-908370) DBG | I1104 12:08:17.183233   87562 retry.go:31] will retry after 1.070971462s: waiting for machine to come up
	I1104 12:08:18.255532   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:18.256051   85500 main.go:141] libmachine: (no-preload-908370) DBG | unable to find current IP address of domain no-preload-908370 in network mk-no-preload-908370
	I1104 12:08:18.256078   85500 main.go:141] libmachine: (no-preload-908370) DBG | I1104 12:08:18.256007   87562 retry.go:31] will retry after 1.542250267s: waiting for machine to come up
	I1104 12:08:19.800851   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:19.801298   85500 main.go:141] libmachine: (no-preload-908370) DBG | unable to find current IP address of domain no-preload-908370 in network mk-no-preload-908370
	I1104 12:08:19.801324   85500 main.go:141] libmachine: (no-preload-908370) DBG | I1104 12:08:19.801276   87562 retry.go:31] will retry after 2.127250885s: waiting for machine to come up
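The retry.go lines above show the driver polling libvirt for the VM's DHCP lease, sleeping a little longer on each failed attempt ("will retry after ..."). Below is a minimal Go sketch of that wait-with-growing-backoff pattern; waitForIP and lookupIP are illustrative helper names, not minikube's actual code, and lookupIP stands in for the real DHCP-lease query.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoLease = errors.New("no DHCP lease yet")

// lookupIP is a placeholder; the real driver inspects the host's DHCP leases.
func lookupIP(domain string) (string, error) { return "", errNoLease }

func waitForIP(domain string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 500 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(domain); err == nil {
			return ip, nil
		}
		// Add jitter and grow the delay, mirroring the increasing
		// "will retry after ..." intervals seen in the log.
		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		delay = delay * 3 / 2
	}
	return "", fmt.Errorf("timed out waiting for %s to get an IP", domain)
}

func main() {
	if _, err := waitForIP("no-preload-908370", 3*time.Second); err != nil {
		fmt.Println(err)
	}
}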
	I1104 12:08:16.489394   86301 pod_ready.go:103] pod "etcd-default-k8s-diff-port-036892" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:16.994480   86301 pod_ready.go:93] pod "etcd-default-k8s-diff-port-036892" in "kube-system" namespace has status "Ready":"True"
	I1104 12:08:16.994502   86301 pod_ready.go:82] duration metric: took 2.511977586s for pod "etcd-default-k8s-diff-port-036892" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:16.994512   86301 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-036892" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:17.502472   86301 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-036892" in "kube-system" namespace has status "Ready":"True"
	I1104 12:08:17.502499   86301 pod_ready.go:82] duration metric: took 507.979218ms for pod "kube-apiserver-default-k8s-diff-port-036892" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:17.502513   86301 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-036892" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:17.507763   86301 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-036892" in "kube-system" namespace has status "Ready":"True"
	I1104 12:08:17.507785   86301 pod_ready.go:82] duration metric: took 5.264185ms for pod "kube-controller-manager-default-k8s-diff-port-036892" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:17.507795   86301 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-j2srm" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:17.514017   86301 pod_ready.go:93] pod "kube-proxy-j2srm" in "kube-system" namespace has status "Ready":"True"
	I1104 12:08:17.514045   86301 pod_ready.go:82] duration metric: took 6.241799ms for pod "kube-proxy-j2srm" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:17.514058   86301 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-036892" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:19.683083   86301 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-036892" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:20.049735   86301 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-036892" in "kube-system" namespace has status "Ready":"True"
	I1104 12:08:20.049759   86301 pod_ready.go:82] duration metric: took 2.535691306s for pod "kube-scheduler-default-k8s-diff-port-036892" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:20.049772   86301 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:18.749494   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:20.853448   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:17.837381   86402 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.994557811s)
	I1104 12:08:17.837410   86402 crio.go:469] duration metric: took 2.994665886s to extract the tarball
	I1104 12:08:17.837420   86402 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1104 12:08:17.882418   86402 ssh_runner.go:195] Run: sudo crictl images --output json
	I1104 12:08:17.917035   86402 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1104 12:08:17.917064   86402 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1104 12:08:17.917195   86402 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1104 12:08:17.917277   86402 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1104 12:08:17.917169   86402 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1104 12:08:17.917164   86402 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1104 12:08:17.917150   86402 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1104 12:08:17.917277   86402 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1104 12:08:17.917283   86402 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1104 12:08:17.917254   86402 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1104 12:08:17.918929   86402 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1104 12:08:17.918943   86402 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1104 12:08:17.918929   86402 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1104 12:08:17.918929   86402 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1104 12:08:17.918930   86402 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1104 12:08:17.918930   86402 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1104 12:08:17.919014   86402 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1104 12:08:17.919025   86402 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1104 12:08:18.070119   86402 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1104 12:08:18.076604   86402 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1104 12:08:18.078712   86402 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1104 12:08:18.083777   86402 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1104 12:08:18.087827   86402 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1104 12:08:18.092838   86402 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1104 12:08:18.110359   86402 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1104 12:08:18.165523   86402 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1104 12:08:18.165569   86402 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1104 12:08:18.165617   86402 ssh_runner.go:195] Run: which crictl
	I1104 12:08:18.213723   86402 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1104 12:08:18.213784   86402 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1104 12:08:18.213833   86402 ssh_runner.go:195] Run: which crictl
	I1104 12:08:18.252171   86402 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1104 12:08:18.252221   86402 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1104 12:08:18.252270   86402 ssh_runner.go:195] Run: which crictl
	I1104 12:08:18.256482   86402 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1104 12:08:18.256522   86402 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1104 12:08:18.256567   86402 ssh_runner.go:195] Run: which crictl
	I1104 12:08:18.256606   86402 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1104 12:08:18.256564   86402 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1104 12:08:18.256631   86402 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1104 12:08:18.256632   86402 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1104 12:08:18.256632   86402 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1104 12:08:18.256690   86402 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1104 12:08:18.256657   86402 ssh_runner.go:195] Run: which crictl
	I1104 12:08:18.256703   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1104 12:08:18.256691   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1104 12:08:18.256738   86402 ssh_runner.go:195] Run: which crictl
	I1104 12:08:18.256658   86402 ssh_runner.go:195] Run: which crictl
	I1104 12:08:18.264837   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1104 12:08:18.265836   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1104 12:08:18.349896   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1104 12:08:18.349935   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1104 12:08:18.350014   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1104 12:08:18.350077   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1104 12:08:18.368533   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1104 12:08:18.371302   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1104 12:08:18.371393   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1104 12:08:18.496042   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1104 12:08:18.496121   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1104 12:08:18.509196   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1104 12:08:18.509339   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1104 12:08:18.509247   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1104 12:08:18.509348   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1104 12:08:18.513943   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1104 12:08:18.645867   86402 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1104 12:08:18.649173   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1104 12:08:18.649276   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1104 12:08:18.656159   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1104 12:08:18.656193   86402 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1104 12:08:18.660309   86402 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1104 12:08:18.660384   86402 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1104 12:08:18.719995   86402 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1104 12:08:18.720033   86402 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1104 12:08:18.728304   86402 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1104 12:08:18.867880   86402 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1104 12:08:19.009342   86402 cache_images.go:92] duration metric: took 1.092257593s to LoadCachedImages
	W1104 12:08:19.009448   86402 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	I1104 12:08:19.009469   86402 kubeadm.go:934] updating node { 192.168.50.180 8443 v1.20.0 crio true true} ...
	I1104 12:08:19.009590   86402 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-589257 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.180
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-589257 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1104 12:08:19.009671   86402 ssh_runner.go:195] Run: crio config
	I1104 12:08:19.054831   86402 cni.go:84] Creating CNI manager for ""
	I1104 12:08:19.054850   86402 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1104 12:08:19.054863   86402 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1104 12:08:19.054880   86402 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.180 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-589257 NodeName:old-k8s-version-589257 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.180"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.180 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1104 12:08:19.055049   86402 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.180
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-589257"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.180
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.180"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1104 12:08:19.055125   86402 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1104 12:08:19.065804   86402 binaries.go:44] Found k8s binaries, skipping transfer
	I1104 12:08:19.065888   86402 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1104 12:08:19.075491   86402 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1104 12:08:19.092371   86402 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1104 12:08:19.108896   86402 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1104 12:08:19.127622   86402 ssh_runner.go:195] Run: grep 192.168.50.180	control-plane.minikube.internal$ /etc/hosts
	I1104 12:08:19.131597   86402 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.180	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
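Both /etc/hosts edits in this log (for host.minikube.internal earlier and control-plane.minikube.internal here) follow the same replace-or-append pattern: grep out any stale line for the name, then append "IP<TAB>name". The Go sketch below mirrors that idea under the assumption it is useful to see outside a shell one-liner; ensureHostsEntry is a hypothetical helper, not minikube code, and the demo writes to /tmp to avoid touching the real hosts file.

package main

import (
	"fmt"
	"os"
	"strings"
)

func ensureHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil && !os.IsNotExist(err) {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // drop the stale entry, like the grep -v in the log
		}
		if line != "" {
			kept = append(kept, line)
		}
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := ensureHostsEntry("/tmp/hosts-demo", "192.168.50.180", "control-plane.minikube.internal"); err != nil {
		fmt.Println(err)
	}
}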
	I1104 12:08:19.145142   86402 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1104 12:08:19.284780   86402 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1104 12:08:19.303843   86402 certs.go:68] Setting up /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/old-k8s-version-589257 for IP: 192.168.50.180
	I1104 12:08:19.303872   86402 certs.go:194] generating shared ca certs ...
	I1104 12:08:19.303894   86402 certs.go:226] acquiring lock for ca certs: {Name:mk4fb0469da6697654d0290fd25edddd463f47c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 12:08:19.304084   86402 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.key
	I1104 12:08:19.304148   86402 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.key
	I1104 12:08:19.304161   86402 certs.go:256] generating profile certs ...
	I1104 12:08:19.304280   86402 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/old-k8s-version-589257/client.key
	I1104 12:08:19.304347   86402 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/old-k8s-version-589257/apiserver.key.b78bafdb
	I1104 12:08:19.304401   86402 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/old-k8s-version-589257/proxy-client.key
	I1104 12:08:19.304549   86402 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218.pem (1338 bytes)
	W1104 12:08:19.304590   86402 certs.go:480] ignoring /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218_empty.pem, impossibly tiny 0 bytes
	I1104 12:08:19.304608   86402 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem (1675 bytes)
	I1104 12:08:19.304659   86402 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem (1078 bytes)
	I1104 12:08:19.304702   86402 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem (1123 bytes)
	I1104 12:08:19.304729   86402 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem (1679 bytes)
	I1104 12:08:19.304794   86402 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem (1708 bytes)
	I1104 12:08:19.305479   86402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1104 12:08:19.341333   86402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1104 12:08:19.375179   86402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1104 12:08:19.410128   86402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1104 12:08:19.452565   86402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/old-k8s-version-589257/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1104 12:08:19.493404   86402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/old-k8s-version-589257/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1104 12:08:19.521178   86402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/old-k8s-version-589257/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1104 12:08:19.550524   86402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/old-k8s-version-589257/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1104 12:08:19.574903   86402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1104 12:08:19.599308   86402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218.pem --> /usr/share/ca-certificates/27218.pem (1338 bytes)
	I1104 12:08:19.627107   86402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem --> /usr/share/ca-certificates/272182.pem (1708 bytes)
	I1104 12:08:19.657121   86402 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1104 12:08:19.679087   86402 ssh_runner.go:195] Run: openssl version
	I1104 12:08:19.687115   86402 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1104 12:08:19.702537   86402 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1104 12:08:19.707340   86402 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  4 10:38 /usr/share/ca-certificates/minikubeCA.pem
	I1104 12:08:19.707408   86402 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1104 12:08:19.714955   86402 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1104 12:08:19.727883   86402 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/27218.pem && ln -fs /usr/share/ca-certificates/27218.pem /etc/ssl/certs/27218.pem"
	I1104 12:08:19.739690   86402 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/27218.pem
	I1104 12:08:19.744600   86402 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  4 10:49 /usr/share/ca-certificates/27218.pem
	I1104 12:08:19.744656   86402 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/27218.pem
	I1104 12:08:19.750324   86402 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/27218.pem /etc/ssl/certs/51391683.0"
	I1104 12:08:19.760988   86402 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/272182.pem && ln -fs /usr/share/ca-certificates/272182.pem /etc/ssl/certs/272182.pem"
	I1104 12:08:19.772634   86402 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/272182.pem
	I1104 12:08:19.777504   86402 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  4 10:49 /usr/share/ca-certificates/272182.pem
	I1104 12:08:19.777580   86402 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/272182.pem
	I1104 12:08:19.783660   86402 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/272182.pem /etc/ssl/certs/3ec20f2e.0"
	I1104 12:08:19.795483   86402 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1104 12:08:19.800327   86402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1104 12:08:19.806346   86402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1104 12:08:19.813920   86402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1104 12:08:19.820358   86402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1104 12:08:19.826359   86402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1104 12:08:19.832467   86402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
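The "openssl x509 -noout -in ... -checkend 86400" calls above verify that each control-plane certificate remains valid for at least the next 24 hours before the cluster restart proceeds. Here is a minimal Go equivalent of that single check using crypto/x509; expiresWithin is an illustrative helper and the path is taken from the log purely as an example.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d.
func expiresWithin(path string, d time.Duration) (bool, error) {
	raw, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println(err)
		return
	}
	if soon {
		fmt.Println("certificate expires within 24h; it should be regenerated")
	}
}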
	I1104 12:08:19.838902   86402 kubeadm.go:392] StartCluster: {Name:old-k8s-version-589257 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-589257 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.180 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1104 12:08:19.839018   86402 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1104 12:08:19.839075   86402 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1104 12:08:19.880407   86402 cri.go:89] found id: ""
	I1104 12:08:19.880486   86402 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1104 12:08:19.891135   86402 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1104 12:08:19.891156   86402 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1104 12:08:19.891219   86402 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1104 12:08:19.901437   86402 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1104 12:08:19.902325   86402 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-589257" does not appear in /home/jenkins/minikube-integration/19906-19898/kubeconfig
	I1104 12:08:19.902941   86402 kubeconfig.go:62] /home/jenkins/minikube-integration/19906-19898/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-589257" cluster setting kubeconfig missing "old-k8s-version-589257" context setting]
	I1104 12:08:19.903879   86402 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19906-19898/kubeconfig: {Name:mk164657c178e6abe086acb5c1cadb969cb2b0cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 12:08:19.937877   86402 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1104 12:08:19.948669   86402 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.180
	I1104 12:08:19.948701   86402 kubeadm.go:1160] stopping kube-system containers ...
	I1104 12:08:19.948711   86402 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1104 12:08:19.948752   86402 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1104 12:08:19.988249   86402 cri.go:89] found id: ""
	I1104 12:08:19.988344   86402 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1104 12:08:20.006949   86402 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1104 12:08:20.020677   86402 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1104 12:08:20.020700   86402 kubeadm.go:157] found existing configuration files:
	
	I1104 12:08:20.020747   86402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1104 12:08:20.031509   86402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1104 12:08:20.031566   86402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1104 12:08:20.042229   86402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1104 12:08:20.054695   86402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1104 12:08:20.054810   86402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1104 12:08:20.067410   86402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1104 12:08:20.078639   86402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1104 12:08:20.078711   86402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1104 12:08:20.091357   86402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1104 12:08:20.100986   86402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1104 12:08:20.101071   86402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1104 12:08:20.110345   86402 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1104 12:08:20.119778   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1104 12:08:20.281637   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1104 12:08:21.006838   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1104 12:08:21.234671   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1104 12:08:21.335720   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1104 12:08:21.437522   86402 api_server.go:52] waiting for apiserver process to appear ...
	I1104 12:08:21.437615   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:21.929963   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:21.930522   85500 main.go:141] libmachine: (no-preload-908370) DBG | unable to find current IP address of domain no-preload-908370 in network mk-no-preload-908370
	I1104 12:08:21.930552   85500 main.go:141] libmachine: (no-preload-908370) DBG | I1104 12:08:21.930461   87562 retry.go:31] will retry after 2.171964123s: waiting for machine to come up
	I1104 12:08:24.103844   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:24.104303   85500 main.go:141] libmachine: (no-preload-908370) DBG | unable to find current IP address of domain no-preload-908370 in network mk-no-preload-908370
	I1104 12:08:24.104326   85500 main.go:141] libmachine: (no-preload-908370) DBG | I1104 12:08:24.104257   87562 retry.go:31] will retry after 2.838813818s: waiting for machine to come up
	I1104 12:08:22.056858   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:24.057127   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:23.351405   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:25.850834   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:21.938086   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:22.438198   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:22.938624   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:23.438021   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:23.938119   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:24.438470   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:24.937687   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:25.438045   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:25.937696   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:26.438585   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
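The repeated "sudo pgrep -xnf kube-apiserver.*minikube.*" runs above are the apiserver wait loop: the same check fires roughly every half second until the process exists or the timeout expires. A minimal Go sketch of that fixed-interval poll follows; waitForAPIServerProcess and runPgrep are illustrative names, and running pgrep locally stands in for executing it over SSH as the log does.

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// runPgrep exits nil once a matching process exists.
// -x: exact match, -n: newest, -f: match the full command line.
func runPgrep(ctx context.Context) error {
	return exec.CommandContext(ctx, "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
}

func waitForAPIServerProcess(timeout time.Duration) error {
	ctx, cancel := context.WithTimeout(context.Background(), timeout)
	defer cancel()
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		if runPgrep(ctx) == nil {
			return nil
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("apiserver process never appeared: %w", ctx.Err())
		case <-ticker.C:
		}
	}
}

func main() {
	if err := waitForAPIServerProcess(5 * time.Second); err != nil {
		fmt.Println(err)
	}
}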
	I1104 12:08:26.944977   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:26.945367   85500 main.go:141] libmachine: (no-preload-908370) DBG | unable to find current IP address of domain no-preload-908370 in network mk-no-preload-908370
	I1104 12:08:26.945395   85500 main.go:141] libmachine: (no-preload-908370) DBG | I1104 12:08:26.945349   87562 retry.go:31] will retry after 2.799785534s: waiting for machine to come up
	I1104 12:08:29.746349   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:29.746747   85500 main.go:141] libmachine: (no-preload-908370) Found IP for machine: 192.168.61.91
	I1104 12:08:29.746774   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has current primary IP address 192.168.61.91 and MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:29.746779   85500 main.go:141] libmachine: (no-preload-908370) Reserving static IP address...
	I1104 12:08:29.747195   85500 main.go:141] libmachine: (no-preload-908370) DBG | found host DHCP lease matching {name: "no-preload-908370", mac: "52:54:00:f8:66:d5", ip: "192.168.61.91"} in network mk-no-preload-908370: {Iface:virbr3 ExpiryTime:2024-11-04 13:08:23 +0000 UTC Type:0 Mac:52:54:00:f8:66:d5 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:no-preload-908370 Clientid:01:52:54:00:f8:66:d5}
	I1104 12:08:29.747218   85500 main.go:141] libmachine: (no-preload-908370) Reserved static IP address: 192.168.61.91
	I1104 12:08:29.747234   85500 main.go:141] libmachine: (no-preload-908370) DBG | skip adding static IP to network mk-no-preload-908370 - found existing host DHCP lease matching {name: "no-preload-908370", mac: "52:54:00:f8:66:d5", ip: "192.168.61.91"}
	I1104 12:08:29.747248   85500 main.go:141] libmachine: (no-preload-908370) DBG | Getting to WaitForSSH function...
	I1104 12:08:29.747258   85500 main.go:141] libmachine: (no-preload-908370) Waiting for SSH to be available...
	I1104 12:08:29.749405   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:29.749694   85500 main.go:141] libmachine: (no-preload-908370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:66:d5", ip: ""} in network mk-no-preload-908370: {Iface:virbr3 ExpiryTime:2024-11-04 13:08:23 +0000 UTC Type:0 Mac:52:54:00:f8:66:d5 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:no-preload-908370 Clientid:01:52:54:00:f8:66:d5}
	I1104 12:08:29.749728   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined IP address 192.168.61.91 and MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:29.749887   85500 main.go:141] libmachine: (no-preload-908370) DBG | Using SSH client type: external
	I1104 12:08:29.749908   85500 main.go:141] libmachine: (no-preload-908370) DBG | Using SSH private key: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/no-preload-908370/id_rsa (-rw-------)
	I1104 12:08:29.749933   85500 main.go:141] libmachine: (no-preload-908370) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.91 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19906-19898/.minikube/machines/no-preload-908370/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1104 12:08:29.749951   85500 main.go:141] libmachine: (no-preload-908370) DBG | About to run SSH command:
	I1104 12:08:29.749966   85500 main.go:141] libmachine: (no-preload-908370) DBG | exit 0
	I1104 12:08:29.873121   85500 main.go:141] libmachine: (no-preload-908370) DBG | SSH cmd err, output: <nil>: 
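The WaitForSSH step above shells out to the system ssh binary with a fixed, non-interactive option set (no known-hosts checking, key-only auth, short connect timeout) and runs "exit 0" to confirm the guest accepts connections. The Go sketch below reproduces that probe with os/exec; probeSSH is an illustrative helper, and the user, host, and key path are copied from the log only as example values.

package main

import (
	"fmt"
	"os/exec"
)

func probeSSH(user, host, keyPath string) error {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "PasswordAuthentication=no",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		fmt.Sprintf("%s@%s", user, host),
		"exit 0", // the same trivial command the driver runs to test connectivity
	}
	out, err := exec.Command("ssh", args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("ssh probe failed: %v (%s)", err, out)
	}
	return nil
}

func main() {
	if err := probeSSH("docker", "192.168.61.91", "/home/jenkins/minikube-integration/19906-19898/.minikube/machines/no-preload-908370/id_rsa"); err != nil {
		fmt.Println(err)
	}
}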
	I1104 12:08:29.873472   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetConfigRaw
	I1104 12:08:29.874081   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetIP
	I1104 12:08:29.876737   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:29.877127   85500 main.go:141] libmachine: (no-preload-908370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:66:d5", ip: ""} in network mk-no-preload-908370: {Iface:virbr3 ExpiryTime:2024-11-04 13:08:23 +0000 UTC Type:0 Mac:52:54:00:f8:66:d5 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:no-preload-908370 Clientid:01:52:54:00:f8:66:d5}
	I1104 12:08:29.877155   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined IP address 192.168.61.91 and MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:29.877473   85500 profile.go:143] Saving config to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/no-preload-908370/config.json ...
	I1104 12:08:29.877717   85500 machine.go:93] provisionDockerMachine start ...
	I1104 12:08:29.877740   85500 main.go:141] libmachine: (no-preload-908370) Calling .DriverName
	I1104 12:08:29.877936   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHHostname
	I1104 12:08:29.880272   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:29.880522   85500 main.go:141] libmachine: (no-preload-908370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:66:d5", ip: ""} in network mk-no-preload-908370: {Iface:virbr3 ExpiryTime:2024-11-04 13:08:23 +0000 UTC Type:0 Mac:52:54:00:f8:66:d5 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:no-preload-908370 Clientid:01:52:54:00:f8:66:d5}
	I1104 12:08:29.880543   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined IP address 192.168.61.91 and MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:29.880718   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHPort
	I1104 12:08:29.880883   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHKeyPath
	I1104 12:08:29.881048   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHKeyPath
	I1104 12:08:29.881186   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHUsername
	I1104 12:08:29.881338   85500 main.go:141] libmachine: Using SSH client type: native
	I1104 12:08:29.881511   85500 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.91 22 <nil> <nil>}
	I1104 12:08:29.881524   85500 main.go:141] libmachine: About to run SSH command:
	hostname
	I1104 12:08:29.989431   85500 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1104 12:08:29.989460   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetMachineName
	I1104 12:08:29.989725   85500 buildroot.go:166] provisioning hostname "no-preload-908370"
	I1104 12:08:29.989757   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetMachineName
	I1104 12:08:29.989974   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHHostname
	I1104 12:08:29.992679   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:29.993028   85500 main.go:141] libmachine: (no-preload-908370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:66:d5", ip: ""} in network mk-no-preload-908370: {Iface:virbr3 ExpiryTime:2024-11-04 13:08:23 +0000 UTC Type:0 Mac:52:54:00:f8:66:d5 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:no-preload-908370 Clientid:01:52:54:00:f8:66:d5}
	I1104 12:08:29.993057   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined IP address 192.168.61.91 and MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:29.993222   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHPort
	I1104 12:08:29.993425   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHKeyPath
	I1104 12:08:29.993553   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHKeyPath
	I1104 12:08:29.993683   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHUsername
	I1104 12:08:29.993817   85500 main.go:141] libmachine: Using SSH client type: native
	I1104 12:08:29.994000   85500 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.91 22 <nil> <nil>}
	I1104 12:08:29.994016   85500 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-908370 && echo "no-preload-908370" | sudo tee /etc/hostname
	I1104 12:08:30.118321   85500 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-908370
	
	I1104 12:08:30.118361   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHHostname
	I1104 12:08:30.121095   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:30.121475   85500 main.go:141] libmachine: (no-preload-908370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:66:d5", ip: ""} in network mk-no-preload-908370: {Iface:virbr3 ExpiryTime:2024-11-04 13:08:23 +0000 UTC Type:0 Mac:52:54:00:f8:66:d5 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:no-preload-908370 Clientid:01:52:54:00:f8:66:d5}
	I1104 12:08:30.121509   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined IP address 192.168.61.91 and MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:30.121697   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHPort
	I1104 12:08:30.121866   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHKeyPath
	I1104 12:08:30.122040   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHKeyPath
	I1104 12:08:30.122176   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHUsername
	I1104 12:08:30.122343   85500 main.go:141] libmachine: Using SSH client type: native
	I1104 12:08:30.122525   85500 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.91 22 <nil> <nil>}
	I1104 12:08:30.122547   85500 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-908370' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-908370/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-908370' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1104 12:08:26.557368   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:29.056377   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:28.349510   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:30.350431   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:26.937831   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:27.438442   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:27.938240   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:28.438463   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:28.937958   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:29.437676   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:29.938298   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:30.438423   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:30.937953   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:31.438075   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:30.237340   85500 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1104 12:08:30.237370   85500 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19906-19898/.minikube CaCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19906-19898/.minikube}
	I1104 12:08:30.237413   85500 buildroot.go:174] setting up certificates
	I1104 12:08:30.237429   85500 provision.go:84] configureAuth start
	I1104 12:08:30.237446   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetMachineName
	I1104 12:08:30.237725   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetIP
	I1104 12:08:30.240026   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:30.240350   85500 main.go:141] libmachine: (no-preload-908370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:66:d5", ip: ""} in network mk-no-preload-908370: {Iface:virbr3 ExpiryTime:2024-11-04 13:08:23 +0000 UTC Type:0 Mac:52:54:00:f8:66:d5 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:no-preload-908370 Clientid:01:52:54:00:f8:66:d5}
	I1104 12:08:30.240380   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined IP address 192.168.61.91 and MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:30.240472   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHHostname
	I1104 12:08:30.242777   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:30.243101   85500 main.go:141] libmachine: (no-preload-908370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:66:d5", ip: ""} in network mk-no-preload-908370: {Iface:virbr3 ExpiryTime:2024-11-04 13:08:23 +0000 UTC Type:0 Mac:52:54:00:f8:66:d5 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:no-preload-908370 Clientid:01:52:54:00:f8:66:d5}
	I1104 12:08:30.243119   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined IP address 192.168.61.91 and MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:30.243302   85500 provision.go:143] copyHostCerts
	I1104 12:08:30.243358   85500 exec_runner.go:144] found /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem, removing ...
	I1104 12:08:30.243368   85500 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem
	I1104 12:08:30.243427   85500 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem (1078 bytes)
	I1104 12:08:30.243532   85500 exec_runner.go:144] found /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem, removing ...
	I1104 12:08:30.243542   85500 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem
	I1104 12:08:30.243565   85500 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem (1123 bytes)
	I1104 12:08:30.243635   85500 exec_runner.go:144] found /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem, removing ...
	I1104 12:08:30.243643   85500 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem
	I1104 12:08:30.243661   85500 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem (1679 bytes)
	I1104 12:08:30.243719   85500 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem org=jenkins.no-preload-908370 san=[127.0.0.1 192.168.61.91 localhost minikube no-preload-908370]
	I1104 12:08:30.515270   85500 provision.go:177] copyRemoteCerts
	I1104 12:08:30.515350   85500 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1104 12:08:30.515381   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHHostname
	I1104 12:08:30.518651   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:30.519188   85500 main.go:141] libmachine: (no-preload-908370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:66:d5", ip: ""} in network mk-no-preload-908370: {Iface:virbr3 ExpiryTime:2024-11-04 13:08:23 +0000 UTC Type:0 Mac:52:54:00:f8:66:d5 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:no-preload-908370 Clientid:01:52:54:00:f8:66:d5}
	I1104 12:08:30.519218   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined IP address 192.168.61.91 and MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:30.519420   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHPort
	I1104 12:08:30.519600   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHKeyPath
	I1104 12:08:30.519777   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHUsername
	I1104 12:08:30.519896   85500 sshutil.go:53] new ssh client: &{IP:192.168.61.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/no-preload-908370/id_rsa Username:docker}
	I1104 12:08:30.603170   85500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1104 12:08:30.626226   85500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1104 12:08:30.649353   85500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1104 12:08:30.684759   85500 provision.go:87] duration metric: took 447.313588ms to configureAuth
	I1104 12:08:30.684789   85500 buildroot.go:189] setting minikube options for container-runtime
	I1104 12:08:30.684962   85500 config.go:182] Loaded profile config "no-preload-908370": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1104 12:08:30.685029   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHHostname
	I1104 12:08:30.687429   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:30.687815   85500 main.go:141] libmachine: (no-preload-908370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:66:d5", ip: ""} in network mk-no-preload-908370: {Iface:virbr3 ExpiryTime:2024-11-04 13:08:23 +0000 UTC Type:0 Mac:52:54:00:f8:66:d5 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:no-preload-908370 Clientid:01:52:54:00:f8:66:d5}
	I1104 12:08:30.687840   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined IP address 192.168.61.91 and MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:30.688015   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHPort
	I1104 12:08:30.688192   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHKeyPath
	I1104 12:08:30.688325   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHKeyPath
	I1104 12:08:30.688471   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHUsername
	I1104 12:08:30.688640   85500 main.go:141] libmachine: Using SSH client type: native
	I1104 12:08:30.688830   85500 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.91 22 <nil> <nil>}
	I1104 12:08:30.688848   85500 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1104 12:08:30.919118   85500 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1104 12:08:30.919142   85500 machine.go:96] duration metric: took 1.041410402s to provisionDockerMachine
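For reference, the tee above leaves a small sysconfig drop-in on the guest; its contents, echoed back in the command output, correspond to a file like the following (reconstructed from the log, not captured from the VM):

    # /etc/sysconfig/crio.minikube
    CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '

CRI-O is restarted in the same command so it picks the option up.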
	I1104 12:08:30.919156   85500 start.go:293] postStartSetup for "no-preload-908370" (driver="kvm2")
	I1104 12:08:30.919169   85500 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1104 12:08:30.919200   85500 main.go:141] libmachine: (no-preload-908370) Calling .DriverName
	I1104 12:08:30.919513   85500 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1104 12:08:30.919538   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHHostname
	I1104 12:08:30.922075   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:30.922485   85500 main.go:141] libmachine: (no-preload-908370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:66:d5", ip: ""} in network mk-no-preload-908370: {Iface:virbr3 ExpiryTime:2024-11-04 13:08:23 +0000 UTC Type:0 Mac:52:54:00:f8:66:d5 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:no-preload-908370 Clientid:01:52:54:00:f8:66:d5}
	I1104 12:08:30.922510   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined IP address 192.168.61.91 and MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:30.922615   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHPort
	I1104 12:08:30.922823   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHKeyPath
	I1104 12:08:30.922991   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHUsername
	I1104 12:08:30.923107   85500 sshutil.go:53] new ssh client: &{IP:192.168.61.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/no-preload-908370/id_rsa Username:docker}
	I1104 12:08:31.007598   85500 ssh_runner.go:195] Run: cat /etc/os-release
	I1104 12:08:31.011558   85500 info.go:137] Remote host: Buildroot 2023.02.9
	I1104 12:08:31.011588   85500 filesync.go:126] Scanning /home/jenkins/minikube-integration/19906-19898/.minikube/addons for local assets ...
	I1104 12:08:31.011665   85500 filesync.go:126] Scanning /home/jenkins/minikube-integration/19906-19898/.minikube/files for local assets ...
	I1104 12:08:31.011766   85500 filesync.go:149] local asset: /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem -> 272182.pem in /etc/ssl/certs
	I1104 12:08:31.011859   85500 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1104 12:08:31.020788   85500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem --> /etc/ssl/certs/272182.pem (1708 bytes)
	I1104 12:08:31.044379   85500 start.go:296] duration metric: took 125.209775ms for postStartSetup
	I1104 12:08:31.044414   85500 fix.go:56] duration metric: took 19.154609071s for fixHost
	I1104 12:08:31.044442   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHHostname
	I1104 12:08:31.047152   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:31.047426   85500 main.go:141] libmachine: (no-preload-908370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:66:d5", ip: ""} in network mk-no-preload-908370: {Iface:virbr3 ExpiryTime:2024-11-04 13:08:23 +0000 UTC Type:0 Mac:52:54:00:f8:66:d5 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:no-preload-908370 Clientid:01:52:54:00:f8:66:d5}
	I1104 12:08:31.047461   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined IP address 192.168.61.91 and MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:31.047639   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHPort
	I1104 12:08:31.047829   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHKeyPath
	I1104 12:08:31.047976   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHKeyPath
	I1104 12:08:31.048138   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHUsername
	I1104 12:08:31.048296   85500 main.go:141] libmachine: Using SSH client type: native
	I1104 12:08:31.048464   85500 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.91 22 <nil> <nil>}
	I1104 12:08:31.048474   85500 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1104 12:08:31.157723   85500 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730722111.115015995
	
	I1104 12:08:31.157747   85500 fix.go:216] guest clock: 1730722111.115015995
	I1104 12:08:31.157758   85500 fix.go:229] Guest: 2024-11-04 12:08:31.115015995 +0000 UTC Remote: 2024-11-04 12:08:31.044427312 +0000 UTC m=+350.890212897 (delta=70.588683ms)
	I1104 12:08:31.157829   85500 fix.go:200] guest clock delta is within tolerance: 70.588683ms
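The delta reported above is simply guest time minus host-observed remote time at the moment of the probe: 1730722111.115015995 - 1730722111.044427312 = 0.070588683 s ≈ 70.588683 ms, which matches the "within tolerance" message, so no clock adjustment is attempted.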
	I1104 12:08:31.157841   85500 start.go:83] releasing machines lock for "no-preload-908370", held for 19.268070408s
	I1104 12:08:31.157875   85500 main.go:141] libmachine: (no-preload-908370) Calling .DriverName
	I1104 12:08:31.158131   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetIP
	I1104 12:08:31.160806   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:31.161159   85500 main.go:141] libmachine: (no-preload-908370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:66:d5", ip: ""} in network mk-no-preload-908370: {Iface:virbr3 ExpiryTime:2024-11-04 13:08:23 +0000 UTC Type:0 Mac:52:54:00:f8:66:d5 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:no-preload-908370 Clientid:01:52:54:00:f8:66:d5}
	I1104 12:08:31.161191   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined IP address 192.168.61.91 and MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:31.161371   85500 main.go:141] libmachine: (no-preload-908370) Calling .DriverName
	I1104 12:08:31.161907   85500 main.go:141] libmachine: (no-preload-908370) Calling .DriverName
	I1104 12:08:31.162092   85500 main.go:141] libmachine: (no-preload-908370) Calling .DriverName
	I1104 12:08:31.162174   85500 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1104 12:08:31.162217   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHHostname
	I1104 12:08:31.162444   85500 ssh_runner.go:195] Run: cat /version.json
	I1104 12:08:31.162470   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHHostname
	I1104 12:08:31.165069   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:31.165316   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:31.165505   85500 main.go:141] libmachine: (no-preload-908370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:66:d5", ip: ""} in network mk-no-preload-908370: {Iface:virbr3 ExpiryTime:2024-11-04 13:08:23 +0000 UTC Type:0 Mac:52:54:00:f8:66:d5 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:no-preload-908370 Clientid:01:52:54:00:f8:66:d5}
	I1104 12:08:31.165532   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined IP address 192.168.61.91 and MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:31.165656   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHPort
	I1104 12:08:31.165771   85500 main.go:141] libmachine: (no-preload-908370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:66:d5", ip: ""} in network mk-no-preload-908370: {Iface:virbr3 ExpiryTime:2024-11-04 13:08:23 +0000 UTC Type:0 Mac:52:54:00:f8:66:d5 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:no-preload-908370 Clientid:01:52:54:00:f8:66:d5}
	I1104 12:08:31.165795   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined IP address 192.168.61.91 and MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:31.165842   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHKeyPath
	I1104 12:08:31.166006   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHPort
	I1104 12:08:31.166024   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHUsername
	I1104 12:08:31.166186   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHKeyPath
	I1104 12:08:31.166183   85500 sshutil.go:53] new ssh client: &{IP:192.168.61.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/no-preload-908370/id_rsa Username:docker}
	I1104 12:08:31.166327   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHUsername
	I1104 12:08:31.166449   85500 sshutil.go:53] new ssh client: &{IP:192.168.61.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/no-preload-908370/id_rsa Username:docker}
	I1104 12:08:31.267746   85500 ssh_runner.go:195] Run: systemctl --version
	I1104 12:08:31.273307   85500 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1104 12:08:31.410198   85500 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1104 12:08:31.416652   85500 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1104 12:08:31.416726   85500 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1104 12:08:31.432260   85500 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1104 12:08:31.432288   85500 start.go:495] detecting cgroup driver to use...
	I1104 12:08:31.432345   85500 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1104 12:08:31.453134   85500 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1104 12:08:31.467457   85500 docker.go:217] disabling cri-docker service (if available) ...
	I1104 12:08:31.467516   85500 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1104 12:08:31.481392   85500 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1104 12:08:31.495740   85500 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1104 12:08:31.617549   85500 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1104 12:08:31.802455   85500 docker.go:233] disabling docker service ...
	I1104 12:08:31.802511   85500 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1104 12:08:31.815534   85500 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1104 12:08:31.827495   85500 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1104 12:08:31.938344   85500 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1104 12:08:32.042827   85500 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1104 12:08:32.056126   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1104 12:08:32.074274   85500 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1104 12:08:32.074337   85500 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:08:32.084061   85500 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1104 12:08:32.084138   85500 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:08:32.093533   85500 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:08:32.104351   85500 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:08:32.113753   85500 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1104 12:08:32.123391   85500 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:08:32.133089   85500 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:08:32.149073   85500 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
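Pieced together from the sed commands above, /etc/crio/crio.conf.d/02-crio.conf ends up with roughly these settings (an illustrative reconstruction; the surrounding TOML section headers are assumed, since the log only shows the edits):

    pause_image = "registry.k8s.io/pause:3.10"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]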
	I1104 12:08:32.159888   85500 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1104 12:08:32.169208   85500 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1104 12:08:32.169279   85500 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1104 12:08:32.181319   85500 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
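The sysctl probe a few lines up exits with status 255 because /proc/sys/net/bridge/ only exists once the br_netfilter module is loaded; the modprobe and the echo above are the fallback that makes bridged traffic visible to iptables and enables IPv4 forwarding. A minimal equivalent sequence, assuming a similar Buildroot guest:

    sudo modprobe br_netfilter
    sudo sysctl net.bridge.bridge-nf-call-iptables        # key exists once the module is loaded
    sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"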
	I1104 12:08:32.192472   85500 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1104 12:08:32.300710   85500 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1104 12:08:32.386906   85500 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1104 12:08:32.386980   85500 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1104 12:08:32.391498   85500 start.go:563] Will wait 60s for crictl version
	I1104 12:08:32.391554   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:08:32.395471   85500 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1104 12:08:32.439094   85500 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1104 12:08:32.439168   85500 ssh_runner.go:195] Run: crio --version
	I1104 12:08:32.466609   85500 ssh_runner.go:195] Run: crio --version
	I1104 12:08:32.499305   85500 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1104 12:08:32.500825   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetIP
	I1104 12:08:32.503461   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:32.503827   85500 main.go:141] libmachine: (no-preload-908370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:66:d5", ip: ""} in network mk-no-preload-908370: {Iface:virbr3 ExpiryTime:2024-11-04 13:08:23 +0000 UTC Type:0 Mac:52:54:00:f8:66:d5 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:no-preload-908370 Clientid:01:52:54:00:f8:66:d5}
	I1104 12:08:32.503857   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined IP address 192.168.61.91 and MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:32.504039   85500 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1104 12:08:32.508082   85500 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1104 12:08:32.520202   85500 kubeadm.go:883] updating cluster {Name:no-preload-908370 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-908370 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.91 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1104 12:08:32.520359   85500 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1104 12:08:32.520402   85500 ssh_runner.go:195] Run: sudo crictl images --output json
	I1104 12:08:32.553752   85500 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1104 12:08:32.553781   85500 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.2 registry.k8s.io/kube-controller-manager:v1.31.2 registry.k8s.io/kube-scheduler:v1.31.2 registry.k8s.io/kube-proxy:v1.31.2 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1104 12:08:32.553844   85500 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1104 12:08:32.553844   85500 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1104 12:08:32.553868   85500 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.2
	I1104 12:08:32.553853   85500 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.2
	I1104 12:08:32.553886   85500 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1104 12:08:32.553925   85500 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1104 12:08:32.553969   85500 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1104 12:08:32.553978   85500 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.2
	I1104 12:08:32.555506   85500 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1104 12:08:32.555518   85500 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.2
	I1104 12:08:32.555510   85500 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1104 12:08:32.555513   85500 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.2
	I1104 12:08:32.555591   85500 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1104 12:08:32.555601   85500 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.2
	I1104 12:08:32.555514   85500 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I1104 12:08:32.555658   85500 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I1104 12:08:32.706982   85500 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I1104 12:08:32.707334   85500 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.2
	I1104 12:08:32.712904   85500 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.2
	I1104 12:08:32.721917   85500 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I1104 12:08:32.727829   85500 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.2
	I1104 12:08:32.741130   85500 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I1104 12:08:32.743716   85500 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.2
	I1104 12:08:32.796406   85500 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I1104 12:08:32.796448   85500 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I1104 12:08:32.796502   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:08:32.814658   85500 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.2" does not exist at hash "9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173" in container runtime
	I1104 12:08:32.814697   85500 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.2
	I1104 12:08:32.814735   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:08:32.828308   85500 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.2" does not exist at hash "847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856" in container runtime
	I1104 12:08:32.828362   85500 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.2
	I1104 12:08:32.828416   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:08:32.882090   85500 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I1104 12:08:32.882140   85500 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I1104 12:08:32.882205   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:08:32.886473   85500 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.2" does not exist at hash "0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503" in container runtime
	I1104 12:08:32.886518   85500 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1104 12:08:32.886567   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:08:32.956331   85500 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.2" needs transfer: "registry.k8s.io/kube-proxy:v1.31.2" does not exist at hash "505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38" in container runtime
	I1104 12:08:32.956394   85500 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.2
	I1104 12:08:32.956414   85500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1104 12:08:32.956462   85500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1104 12:08:32.956427   85500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1104 12:08:32.956521   85500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1104 12:08:32.956425   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:08:32.956506   85500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1104 12:08:33.061683   85500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1104 12:08:33.061723   85500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1104 12:08:33.061752   85500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1104 12:08:33.061790   85500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1104 12:08:33.061836   85500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1104 12:08:33.061893   85500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1104 12:08:33.168519   85500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1104 12:08:33.168596   85500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1104 12:08:33.187540   85500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1104 12:08:33.188933   85500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1104 12:08:33.189015   85500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1104 12:08:33.199281   85500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1104 12:08:33.285086   85500 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2
	I1104 12:08:33.285145   85500 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2
	I1104 12:08:33.285245   85500 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1104 12:08:33.285247   85500 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1104 12:08:33.307647   85500 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I1104 12:08:33.307769   85500 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I1104 12:08:33.307784   85500 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2
	I1104 12:08:33.307818   85500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1104 12:08:33.307869   85500 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1104 12:08:33.312697   85500 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I1104 12:08:33.312808   85500 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I1104 12:08:33.314341   85500 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.2 (exists)
	I1104 12:08:33.314358   85500 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1104 12:08:33.314396   85500 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1104 12:08:33.314535   85500 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.2 (exists)
	I1104 12:08:33.319449   85500 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.2 (exists)
	I1104 12:08:33.319604   85500 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I1104 12:08:33.356390   85500 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I1104 12:08:33.356478   85500 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2
	I1104 12:08:33.356569   85500 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.2
	I1104 12:08:33.512915   85500 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1104 12:08:31.057314   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:33.059599   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:32.350656   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:34.352338   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:31.938577   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:32.438561   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:32.938188   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:33.437856   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:33.938433   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:34.438381   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:34.938164   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:35.438120   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:35.937802   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:36.438365   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:35.736963   85500 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2: (2.42254522s)
	I1104 12:08:35.736994   85500 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2 from cache
	I1104 12:08:35.737014   85500 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1104 12:08:35.737027   85500 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.2: (2.380435224s)
	I1104 12:08:35.737058   85500 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.2 (exists)
	I1104 12:08:35.737063   85500 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1104 12:08:35.737104   85500 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.224165247s)
	I1104 12:08:35.737156   85500 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1104 12:08:35.737191   85500 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1104 12:08:35.737267   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:08:37.693026   85500 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2: (1.955928101s)
	I1104 12:08:37.693065   85500 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2 from cache
	I1104 12:08:37.693086   85500 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1104 12:08:37.693047   85500 ssh_runner.go:235] Completed: which crictl: (1.955763498s)
	I1104 12:08:37.693168   85500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1104 12:08:37.693131   85500 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1104 12:08:39.156860   85500 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2: (1.463570619s)
	I1104 12:08:39.156894   85500 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2 from cache
	I1104 12:08:39.156922   85500 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I1104 12:08:39.156930   85500 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.463741565s)
	I1104 12:08:39.156980   85500 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I1104 12:08:39.156998   85500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1104 12:08:35.625930   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:38.057567   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:36.850619   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:38.851157   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:40.852272   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:36.938295   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:37.437646   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:37.937807   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:38.438623   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:38.938662   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:39.438288   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:39.938048   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:40.438404   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:40.938494   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:41.437875   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:42.701724   85500 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.544718982s)
	I1104 12:08:42.701751   85500 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I1104 12:08:42.701771   85500 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I1104 12:08:42.701810   85500 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I1104 12:08:42.701826   85500 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.544784275s)
	I1104 12:08:42.701912   85500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1104 12:08:44.666599   85500 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.964646885s)
	I1104 12:08:44.666653   85500 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1104 12:08:44.666723   85500 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (1.964896366s)
	I1104 12:08:44.666744   85500 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I1104 12:08:44.666748   85500 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1104 12:08:44.666765   85500 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.2
	I1104 12:08:44.666807   85500 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2
	I1104 12:08:44.671475   85500 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1104 12:08:40.556827   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:42.557662   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:45.058481   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:43.351505   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:45.851360   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:41.938001   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:42.438702   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:42.938239   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:43.438469   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:43.938465   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:44.437744   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:44.938478   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:45.437757   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:45.938035   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:46.438173   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:46.627407   85500 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2: (1.960571593s)
	I1104 12:08:46.627437   85500 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2 from cache
	I1104 12:08:46.627473   85500 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1104 12:08:46.627537   85500 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1104 12:08:47.273537   85500 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1104 12:08:47.273578   85500 cache_images.go:123] Successfully loaded all cached images
	I1104 12:08:47.273583   85500 cache_images.go:92] duration metric: took 14.719789832s to LoadCachedImages
	I1104 12:08:47.273594   85500 kubeadm.go:934] updating node { 192.168.61.91 8443 v1.31.2 crio true true} ...
	I1104 12:08:47.273686   85500 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-908370 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.91
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:no-preload-908370 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1104 12:08:47.273747   85500 ssh_runner.go:195] Run: crio config
	I1104 12:08:47.319888   85500 cni.go:84] Creating CNI manager for ""
	I1104 12:08:47.319916   85500 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1104 12:08:47.319929   85500 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1104 12:08:47.319952   85500 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.91 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-908370 NodeName:no-preload-908370 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.91"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.91 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1104 12:08:47.320098   85500 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.91
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-908370"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.91"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.91"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1104 12:08:47.320185   85500 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1104 12:08:47.330284   85500 binaries.go:44] Found k8s binaries, skipping transfer
	I1104 12:08:47.330352   85500 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1104 12:08:47.340015   85500 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I1104 12:08:47.356601   85500 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1104 12:08:47.371327   85500 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2294 bytes)
	I1104 12:08:47.387251   85500 ssh_runner.go:195] Run: grep 192.168.61.91	control-plane.minikube.internal$ /etc/hosts
	I1104 12:08:47.391041   85500 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.91	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1104 12:08:47.402283   85500 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1104 12:08:47.527723   85500 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1104 12:08:47.544017   85500 certs.go:68] Setting up /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/no-preload-908370 for IP: 192.168.61.91
	I1104 12:08:47.544041   85500 certs.go:194] generating shared ca certs ...
	I1104 12:08:47.544060   85500 certs.go:226] acquiring lock for ca certs: {Name:mk4fb0469da6697654d0290fd25edddd463f47c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 12:08:47.544244   85500 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.key
	I1104 12:08:47.544309   85500 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.key
	I1104 12:08:47.544322   85500 certs.go:256] generating profile certs ...
	I1104 12:08:47.544412   85500 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/no-preload-908370/client.key
	I1104 12:08:47.544485   85500 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/no-preload-908370/apiserver.key.890cb7f7
	I1104 12:08:47.544522   85500 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/no-preload-908370/proxy-client.key
	I1104 12:08:47.544626   85500 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218.pem (1338 bytes)
	W1104 12:08:47.544654   85500 certs.go:480] ignoring /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218_empty.pem, impossibly tiny 0 bytes
	I1104 12:08:47.544663   85500 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem (1675 bytes)
	I1104 12:08:47.544685   85500 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem (1078 bytes)
	I1104 12:08:47.544706   85500 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem (1123 bytes)
	I1104 12:08:47.544726   85500 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem (1679 bytes)
	I1104 12:08:47.544774   85500 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem (1708 bytes)
	I1104 12:08:47.545439   85500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1104 12:08:47.588488   85500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1104 12:08:47.631341   85500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1104 12:08:47.666571   85500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1104 12:08:47.698703   85500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/no-preload-908370/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1104 12:08:47.725285   85500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/no-preload-908370/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1104 12:08:47.748890   85500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/no-preload-908370/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1104 12:08:47.775589   85500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/no-preload-908370/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1104 12:08:47.799507   85500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1104 12:08:47.823383   85500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218.pem --> /usr/share/ca-certificates/27218.pem (1338 bytes)
	I1104 12:08:47.847515   85500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem --> /usr/share/ca-certificates/272182.pem (1708 bytes)
	I1104 12:08:47.869937   85500 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1104 12:08:47.886413   85500 ssh_runner.go:195] Run: openssl version
	I1104 12:08:47.892041   85500 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/272182.pem && ln -fs /usr/share/ca-certificates/272182.pem /etc/ssl/certs/272182.pem"
	I1104 12:08:47.901942   85500 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/272182.pem
	I1104 12:08:47.906128   85500 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  4 10:49 /usr/share/ca-certificates/272182.pem
	I1104 12:08:47.906182   85500 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/272182.pem
	I1104 12:08:47.911506   85500 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/272182.pem /etc/ssl/certs/3ec20f2e.0"
	I1104 12:08:47.921614   85500 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1104 12:08:47.932358   85500 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1104 12:08:47.936742   85500 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  4 10:38 /usr/share/ca-certificates/minikubeCA.pem
	I1104 12:08:47.936801   85500 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1104 12:08:47.942544   85500 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1104 12:08:47.953063   85500 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/27218.pem && ln -fs /usr/share/ca-certificates/27218.pem /etc/ssl/certs/27218.pem"
	I1104 12:08:47.963293   85500 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/27218.pem
	I1104 12:08:47.967487   85500 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  4 10:49 /usr/share/ca-certificates/27218.pem
	I1104 12:08:47.967547   85500 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/27218.pem
	I1104 12:08:47.972898   85500 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/27218.pem /etc/ssl/certs/51391683.0"
	I1104 12:08:47.983089   85500 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1104 12:08:47.987532   85500 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1104 12:08:47.993296   85500 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1104 12:08:47.999021   85500 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1104 12:08:48.004741   85500 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1104 12:08:48.010227   85500 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1104 12:08:48.015795   85500 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
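The six "openssl x509 -checkend 86400" runs above ask whether each control-plane certificate expires within the next 86400 seconds (24 hours); exit status 0 means the certificate stays valid past that window, so regeneration is skipped. Below is a minimal Go sketch of the same check using crypto/x509 against the paths probed in the log; this is illustrative only (assumed to run on the node itself) and is not minikube's own implementation.

    // certcheck.go - hypothetical sketch of the "-checkend 86400" probe:
    // parse each PEM certificate and report whether it expires within 24 hours.
    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    func expiresWithin(path string, window time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("%s: no PEM block found", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	// True if "now + window" falls past NotAfter, i.e. the cert expires inside the window.
    	return time.Now().Add(window).After(cert.NotAfter), nil
    }

    func main() {
    	// Paths mirror the ones probed in the log; adjust for your node.
    	for _, p := range []string{
    		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
    		"/var/lib/minikube/certs/etcd/server.crt",
    		"/var/lib/minikube/certs/front-proxy-client.crt",
    	} {
    		soon, err := expiresWithin(p, 24*time.Hour)
    		if err != nil {
    			fmt.Println(p, "error:", err)
    			continue
    		}
    		fmt.Println(p, "expires within 24h:", soon)
    	}
    }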
	I1104 12:08:48.021356   85500 kubeadm.go:392] StartCluster: {Name:no-preload-908370 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-908370 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.91 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1104 12:08:48.021431   85500 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1104 12:08:48.021471   85500 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1104 12:08:48.057729   85500 cri.go:89] found id: ""
	I1104 12:08:48.057805   85500 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1104 12:08:48.067591   85500 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1104 12:08:48.067610   85500 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1104 12:08:48.067663   85500 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1104 12:08:48.076604   85500 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1104 12:08:48.077987   85500 kubeconfig.go:125] found "no-preload-908370" server: "https://192.168.61.91:8443"
	I1104 12:08:48.080042   85500 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1104 12:08:48.089796   85500 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.91
	I1104 12:08:48.089826   85500 kubeadm.go:1160] stopping kube-system containers ...
	I1104 12:08:48.089838   85500 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1104 12:08:48.089886   85500 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1104 12:08:48.126920   85500 cri.go:89] found id: ""
	I1104 12:08:48.126998   85500 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1104 12:08:48.143409   85500 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1104 12:08:48.152783   85500 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1104 12:08:48.152809   85500 kubeadm.go:157] found existing configuration files:
	
	I1104 12:08:48.152858   85500 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1104 12:08:48.161458   85500 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1104 12:08:48.161542   85500 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1104 12:08:48.170361   85500 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1104 12:08:48.179217   85500 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1104 12:08:48.179272   85500 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1104 12:08:48.187834   85500 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1104 12:08:48.196025   85500 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1104 12:08:48.196079   85500 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1104 12:08:48.204809   85500 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1104 12:08:48.213280   85500 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1104 12:08:48.213338   85500 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1104 12:08:48.222672   85500 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1104 12:08:48.232374   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1104 12:08:48.328999   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1104 12:08:49.920988   85500 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.591954434s)
	I1104 12:08:49.921028   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1104 12:08:50.121679   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1104 12:08:50.181412   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1104 12:08:47.558137   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:49.559576   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:48.349974   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:50.350855   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:46.938016   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:47.438229   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:47.938447   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:48.437950   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:48.938450   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:49.437785   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:49.938444   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:50.438413   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:50.938514   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:51.438658   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:50.253614   85500 api_server.go:52] waiting for apiserver process to appear ...
	I1104 12:08:50.253693   85500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:50.754467   85500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:51.254553   85500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:51.271229   85500 api_server.go:72] duration metric: took 1.017613016s to wait for apiserver process to appear ...
	I1104 12:08:51.271255   85500 api_server.go:88] waiting for apiserver healthz status ...
	I1104 12:08:51.271278   85500 api_server.go:253] Checking apiserver healthz at https://192.168.61.91:8443/healthz ...
	I1104 12:08:51.271794   85500 api_server.go:269] stopped: https://192.168.61.91:8443/healthz: Get "https://192.168.61.91:8443/healthz": dial tcp 192.168.61.91:8443: connect: connection refused
	I1104 12:08:51.771551   85500 api_server.go:253] Checking apiserver healthz at https://192.168.61.91:8443/healthz ...
	I1104 12:08:54.499268   85500 api_server.go:279] https://192.168.61.91:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1104 12:08:54.499296   85500 api_server.go:103] status: https://192.168.61.91:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1104 12:08:54.499310   85500 api_server.go:253] Checking apiserver healthz at https://192.168.61.91:8443/healthz ...
	I1104 12:08:54.617672   85500 api_server.go:279] https://192.168.61.91:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1104 12:08:54.617699   85500 api_server.go:103] status: https://192.168.61.91:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1104 12:08:54.771942   85500 api_server.go:253] Checking apiserver healthz at https://192.168.61.91:8443/healthz ...
	I1104 12:08:54.776588   85500 api_server.go:279] https://192.168.61.91:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1104 12:08:54.776615   85500 api_server.go:103] status: https://192.168.61.91:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1104 12:08:52.056678   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:54.057081   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:55.272332   85500 api_server.go:253] Checking apiserver healthz at https://192.168.61.91:8443/healthz ...
	I1104 12:08:55.276594   85500 api_server.go:279] https://192.168.61.91:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1104 12:08:55.276621   85500 api_server.go:103] status: https://192.168.61.91:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1104 12:08:55.771423   85500 api_server.go:253] Checking apiserver healthz at https://192.168.61.91:8443/healthz ...
	I1104 12:08:55.776881   85500 api_server.go:279] https://192.168.61.91:8443/healthz returned 200:
	ok
	I1104 12:08:55.783842   85500 api_server.go:141] control plane version: v1.31.2
	I1104 12:08:55.783869   85500 api_server.go:131] duration metric: took 4.512606898s to wait for apiserver health ...
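The healthz exchange above follows the usual control-plane restart sequence: the probe is first refused while the apiserver static pod starts, then answered 403 for the anonymous user (the RBAC bootstrap roles that allow unauthenticated access to /healthz do not exist yet), then 500 while poststarthooks such as rbac/bootstrap-roles and apiservice-registration-controller finish, and finally 200 "ok". A rough Go sketch of this polling pattern, assuming an anonymous HTTPS client that skips certificate verification (illustrative only, not minikube's code):

    // healthpoll.go - hypothetical sketch of the /healthz polling seen in the log:
    // keep GETting the endpoint until the API server answers 200 "ok" or a deadline passes.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	// The address comes from the log; verification is skipped because this is an
    	// anonymous illustrative probe against a self-signed cluster CA.
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(2 * time.Minute)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get("https://192.168.61.91:8443/healthz")
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			fmt.Printf("%d: %s\n", resp.StatusCode, body)
    			if resp.StatusCode == http.StatusOK {
    				return // apiserver reports "ok"
    			}
    		} else {
    			fmt.Println("not reachable yet:", err)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Println("gave up waiting for /healthz")
    }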
	I1104 12:08:55.783877   85500 cni.go:84] Creating CNI manager for ""
	I1104 12:08:55.783883   85500 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1104 12:08:55.785665   85500 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1104 12:08:52.351019   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:54.850354   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:51.938323   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:52.438464   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:52.937754   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:53.438442   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:53.938586   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:54.438288   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:54.938444   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:55.438391   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:55.938546   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:56.438433   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:55.787083   85500 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1104 12:08:55.801764   85500 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1104 12:08:55.828371   85500 system_pods.go:43] waiting for kube-system pods to appear ...
	I1104 12:08:55.847602   85500 system_pods.go:59] 8 kube-system pods found
	I1104 12:08:55.847653   85500 system_pods.go:61] "coredns-7c65d6cfc9-vv4kq" [f2518f86-9653-4e98-9193-9d2a76838117] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1104 12:08:55.847666   85500 system_pods.go:61] "etcd-no-preload-908370" [cc23ebc2-c49f-403c-8128-98bb08459592] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1104 12:08:55.847679   85500 system_pods.go:61] "kube-apiserver-no-preload-908370" [37532b3e-f683-4420-a5e4-280744f2bdf9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1104 12:08:55.847695   85500 system_pods.go:61] "kube-controller-manager-no-preload-908370" [81d30255-758e-4661-bec2-c6aa6773923a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1104 12:08:55.847707   85500 system_pods.go:61] "kube-proxy-w9hbz" [9d494697-ff2b-4600-9c11-b704de9be2a3] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1104 12:08:55.847724   85500 system_pods.go:61] "kube-scheduler-no-preload-908370" [9b0ff34e-1795-4f7c-b511-822a02c4af7b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1104 12:08:55.847733   85500 system_pods.go:61] "metrics-server-6867b74b74-2lxlg" [bf328856-ad19-47b3-a40d-282cd4fdec4b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1104 12:08:55.847743   85500 system_pods.go:61] "storage-provisioner" [d11c9416-6236-4c81-9626-d5e040acea8a] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1104 12:08:55.847753   85500 system_pods.go:74] duration metric: took 19.357387ms to wait for pod list to return data ...
	I1104 12:08:55.847762   85500 node_conditions.go:102] verifying NodePressure condition ...
	I1104 12:08:55.856783   85500 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1104 12:08:55.856820   85500 node_conditions.go:123] node cpu capacity is 2
	I1104 12:08:55.856834   85500 node_conditions.go:105] duration metric: took 9.065755ms to run NodePressure ...
	I1104 12:08:55.856856   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1104 12:08:56.143012   85500 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1104 12:08:56.148006   85500 kubeadm.go:739] kubelet initialised
	I1104 12:08:56.148026   85500 kubeadm.go:740] duration metric: took 4.987292ms waiting for restarted kubelet to initialise ...
	I1104 12:08:56.148034   85500 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1104 12:08:56.152359   85500 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-vv4kq" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:56.156700   85500 pod_ready.go:98] node "no-preload-908370" hosting pod "coredns-7c65d6cfc9-vv4kq" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-908370" has status "Ready":"False"
	I1104 12:08:56.156725   85500 pod_ready.go:82] duration metric: took 4.341093ms for pod "coredns-7c65d6cfc9-vv4kq" in "kube-system" namespace to be "Ready" ...
	E1104 12:08:56.156734   85500 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-908370" hosting pod "coredns-7c65d6cfc9-vv4kq" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-908370" has status "Ready":"False"
	I1104 12:08:56.156741   85500 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-908370" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:56.161402   85500 pod_ready.go:98] node "no-preload-908370" hosting pod "etcd-no-preload-908370" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-908370" has status "Ready":"False"
	I1104 12:08:56.161431   85500 pod_ready.go:82] duration metric: took 4.681838ms for pod "etcd-no-preload-908370" in "kube-system" namespace to be "Ready" ...
	E1104 12:08:56.161440   85500 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-908370" hosting pod "etcd-no-preload-908370" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-908370" has status "Ready":"False"
	I1104 12:08:56.161447   85500 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-908370" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:56.165738   85500 pod_ready.go:98] node "no-preload-908370" hosting pod "kube-apiserver-no-preload-908370" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-908370" has status "Ready":"False"
	I1104 12:08:56.165756   85500 pod_ready.go:82] duration metric: took 4.301197ms for pod "kube-apiserver-no-preload-908370" in "kube-system" namespace to be "Ready" ...
	E1104 12:08:56.165764   85500 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-908370" hosting pod "kube-apiserver-no-preload-908370" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-908370" has status "Ready":"False"
	I1104 12:08:56.165770   85500 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-908370" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:56.232568   85500 pod_ready.go:98] node "no-preload-908370" hosting pod "kube-controller-manager-no-preload-908370" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-908370" has status "Ready":"False"
	I1104 12:08:56.232598   85500 pod_ready.go:82] duration metric: took 66.818411ms for pod "kube-controller-manager-no-preload-908370" in "kube-system" namespace to be "Ready" ...
	E1104 12:08:56.232610   85500 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-908370" hosting pod "kube-controller-manager-no-preload-908370" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-908370" has status "Ready":"False"
	I1104 12:08:56.232620   85500 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-w9hbz" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:56.633774   85500 pod_ready.go:98] node "no-preload-908370" hosting pod "kube-proxy-w9hbz" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-908370" has status "Ready":"False"
	I1104 12:08:56.633804   85500 pod_ready.go:82] duration metric: took 401.173552ms for pod "kube-proxy-w9hbz" in "kube-system" namespace to be "Ready" ...
	E1104 12:08:56.633815   85500 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-908370" hosting pod "kube-proxy-w9hbz" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-908370" has status "Ready":"False"
	I1104 12:08:56.633824   85500 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-908370" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:57.032392   85500 pod_ready.go:98] node "no-preload-908370" hosting pod "kube-scheduler-no-preload-908370" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-908370" has status "Ready":"False"
	I1104 12:08:57.032419   85500 pod_ready.go:82] duration metric: took 398.58729ms for pod "kube-scheduler-no-preload-908370" in "kube-system" namespace to be "Ready" ...
	E1104 12:08:57.032431   85500 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-908370" hosting pod "kube-scheduler-no-preload-908370" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-908370" has status "Ready":"False"
	I1104 12:08:57.032439   85500 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:57.431940   85500 pod_ready.go:98] node "no-preload-908370" hosting pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-908370" has status "Ready":"False"
	I1104 12:08:57.431976   85500 pod_ready.go:82] duration metric: took 399.525162ms for pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace to be "Ready" ...
	E1104 12:08:57.431987   85500 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-908370" hosting pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-908370" has status "Ready":"False"
	I1104 12:08:57.431997   85500 pod_ready.go:39] duration metric: took 1.283953089s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1104 12:08:57.432014   85500 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1104 12:08:57.444821   85500 ops.go:34] apiserver oom_adj: -16
	I1104 12:08:57.444845   85500 kubeadm.go:597] duration metric: took 9.377227288s to restartPrimaryControlPlane
	I1104 12:08:57.444857   85500 kubeadm.go:394] duration metric: took 9.423506415s to StartCluster
	I1104 12:08:57.444879   85500 settings.go:142] acquiring lock: {Name:mk5550833c1f1a4ab4fbb2bf42ff8bd7f6341220 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 12:08:57.444965   85500 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19906-19898/kubeconfig
	I1104 12:08:57.446715   85500 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19906-19898/kubeconfig: {Name:mk164657c178e6abe086acb5c1cadb969cb2b0cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 12:08:57.446981   85500 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.91 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1104 12:08:57.447059   85500 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1104 12:08:57.447172   85500 addons.go:69] Setting storage-provisioner=true in profile "no-preload-908370"
	I1104 12:08:57.447193   85500 addons.go:234] Setting addon storage-provisioner=true in "no-preload-908370"
	W1104 12:08:57.447202   85500 addons.go:243] addon storage-provisioner should already be in state true
	I1104 12:08:57.447207   85500 config.go:182] Loaded profile config "no-preload-908370": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1104 12:08:57.447237   85500 host.go:66] Checking if "no-preload-908370" exists ...
	I1104 12:08:57.447234   85500 addons.go:69] Setting default-storageclass=true in profile "no-preload-908370"
	I1104 12:08:57.447321   85500 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-908370"
	I1104 12:08:57.447222   85500 addons.go:69] Setting metrics-server=true in profile "no-preload-908370"
	I1104 12:08:57.447418   85500 addons.go:234] Setting addon metrics-server=true in "no-preload-908370"
	W1104 12:08:57.447431   85500 addons.go:243] addon metrics-server should already be in state true
	I1104 12:08:57.447461   85500 host.go:66] Checking if "no-preload-908370" exists ...
	I1104 12:08:57.447708   85500 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:08:57.447792   85500 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:08:57.447813   85500 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:08:57.447748   85500 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:08:57.447896   85500 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:08:57.447853   85500 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:08:57.449013   85500 out.go:177] * Verifying Kubernetes components...
	I1104 12:08:57.450774   85500 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1104 12:08:57.469657   85500 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42565
	I1104 12:08:57.470180   85500 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:08:57.470801   85500 main.go:141] libmachine: Using API Version  1
	I1104 12:08:57.470830   85500 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:08:57.471277   85500 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:08:57.471873   85500 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:08:57.471924   85500 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:08:57.485026   85500 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33323
	I1104 12:08:57.485330   85500 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43999
	I1104 12:08:57.485604   85500 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:08:57.485772   85500 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:08:57.486328   85500 main.go:141] libmachine: Using API Version  1
	I1104 12:08:57.486363   85500 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:08:57.486442   85500 main.go:141] libmachine: Using API Version  1
	I1104 12:08:57.486471   85500 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:08:57.486735   85500 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:08:57.486847   85500 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:08:57.487059   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetState
	I1104 12:08:57.487337   85500 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:08:57.487401   85500 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:08:57.490138   85500 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45761
	I1104 12:08:57.490611   85500 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:08:57.490705   85500 addons.go:234] Setting addon default-storageclass=true in "no-preload-908370"
	W1104 12:08:57.490724   85500 addons.go:243] addon default-storageclass should already be in state true
	I1104 12:08:57.490748   85500 host.go:66] Checking if "no-preload-908370" exists ...
	I1104 12:08:57.491098   85500 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:08:57.491140   85500 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:08:57.491153   85500 main.go:141] libmachine: Using API Version  1
	I1104 12:08:57.491177   85500 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:08:57.491549   85500 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:08:57.491718   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetState
	I1104 12:08:57.493600   85500 main.go:141] libmachine: (no-preload-908370) Calling .DriverName
	I1104 12:08:57.495883   85500 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1104 12:08:57.497200   85500 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1104 12:08:57.497217   85500 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1104 12:08:57.497245   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHHostname
	I1104 12:08:57.500402   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:57.500934   85500 main.go:141] libmachine: (no-preload-908370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:66:d5", ip: ""} in network mk-no-preload-908370: {Iface:virbr3 ExpiryTime:2024-11-04 13:08:23 +0000 UTC Type:0 Mac:52:54:00:f8:66:d5 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:no-preload-908370 Clientid:01:52:54:00:f8:66:d5}
	I1104 12:08:57.500960   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined IP address 192.168.61.91 and MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:57.501276   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHPort
	I1104 12:08:57.501483   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHKeyPath
	I1104 12:08:57.501626   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHUsername
	I1104 12:08:57.501775   85500 sshutil.go:53] new ssh client: &{IP:192.168.61.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/no-preload-908370/id_rsa Username:docker}
	I1104 12:08:57.508615   85500 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37243
	I1104 12:08:57.509102   85500 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:08:57.509582   85500 main.go:141] libmachine: Using API Version  1
	I1104 12:08:57.509606   85500 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:08:57.509948   85500 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:08:57.510115   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetState
	I1104 12:08:57.510809   85500 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40519
	I1104 12:08:57.511134   85500 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:08:57.511818   85500 main.go:141] libmachine: Using API Version  1
	I1104 12:08:57.511836   85500 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:08:57.511868   85500 main.go:141] libmachine: (no-preload-908370) Calling .DriverName
	I1104 12:08:57.512486   85500 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:08:57.513456   85500 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:08:57.513500   85500 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:08:57.513921   85500 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1104 12:08:57.515417   85500 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1104 12:08:57.515434   85500 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1104 12:08:57.515461   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHHostname
	I1104 12:08:57.518867   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:57.519216   85500 main.go:141] libmachine: (no-preload-908370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:66:d5", ip: ""} in network mk-no-preload-908370: {Iface:virbr3 ExpiryTime:2024-11-04 13:08:23 +0000 UTC Type:0 Mac:52:54:00:f8:66:d5 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:no-preload-908370 Clientid:01:52:54:00:f8:66:d5}
	I1104 12:08:57.519241   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined IP address 192.168.61.91 and MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:57.519334   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHPort
	I1104 12:08:57.519523   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHKeyPath
	I1104 12:08:57.519662   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHUsername
	I1104 12:08:57.520124   85500 sshutil.go:53] new ssh client: &{IP:192.168.61.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/no-preload-908370/id_rsa Username:docker}
	I1104 12:08:57.529448   85500 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42427
	I1104 12:08:57.529979   85500 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:08:57.530374   85500 main.go:141] libmachine: Using API Version  1
	I1104 12:08:57.530389   85500 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:08:57.530756   85500 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:08:57.530889   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetState
	I1104 12:08:57.532430   85500 main.go:141] libmachine: (no-preload-908370) Calling .DriverName
	I1104 12:08:57.532832   85500 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1104 12:08:57.532843   85500 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1104 12:08:57.532857   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHHostname
	I1104 12:08:57.535429   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:57.535783   85500 main.go:141] libmachine: (no-preload-908370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:66:d5", ip: ""} in network mk-no-preload-908370: {Iface:virbr3 ExpiryTime:2024-11-04 13:08:23 +0000 UTC Type:0 Mac:52:54:00:f8:66:d5 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:no-preload-908370 Clientid:01:52:54:00:f8:66:d5}
	I1104 12:08:57.535809   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined IP address 192.168.61.91 and MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:57.535953   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHPort
	I1104 12:08:57.536148   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHKeyPath
	I1104 12:08:57.536245   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHUsername
	I1104 12:08:57.536388   85500 sshutil.go:53] new ssh client: &{IP:192.168.61.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/no-preload-908370/id_rsa Username:docker}
	I1104 12:08:57.635571   85500 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1104 12:08:57.654984   85500 node_ready.go:35] waiting up to 6m0s for node "no-preload-908370" to be "Ready" ...
	I1104 12:08:57.722564   85500 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1104 12:08:57.768850   85500 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1104 12:08:57.791069   85500 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1104 12:08:57.791090   85500 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1104 12:08:57.875966   85500 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1104 12:08:57.875997   85500 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1104 12:08:57.929834   85500 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1104 12:08:57.929867   85500 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1104 12:08:58.017927   85500 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1104 12:08:58.732204   85500 main.go:141] libmachine: Making call to close driver server
	I1104 12:08:58.732235   85500 main.go:141] libmachine: (no-preload-908370) Calling .Close
	I1104 12:08:58.732586   85500 main.go:141] libmachine: Successfully made call to close driver server
	I1104 12:08:58.732614   85500 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 12:08:58.732624   85500 main.go:141] libmachine: Making call to close driver server
	I1104 12:08:58.732635   85500 main.go:141] libmachine: (no-preload-908370) Calling .Close
	I1104 12:08:58.732640   85500 main.go:141] libmachine: (no-preload-908370) DBG | Closing plugin on server side
	I1104 12:08:58.733045   85500 main.go:141] libmachine: Successfully made call to close driver server
	I1104 12:08:58.733108   85500 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 12:08:58.733084   85500 main.go:141] libmachine: (no-preload-908370) DBG | Closing plugin on server side
	I1104 12:08:58.736737   85500 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.014142064s)
	I1104 12:08:58.736783   85500 main.go:141] libmachine: Making call to close driver server
	I1104 12:08:58.736793   85500 main.go:141] libmachine: (no-preload-908370) Calling .Close
	I1104 12:08:58.737035   85500 main.go:141] libmachine: Successfully made call to close driver server
	I1104 12:08:58.737077   85500 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 12:08:58.737090   85500 main.go:141] libmachine: Making call to close driver server
	I1104 12:08:58.737100   85500 main.go:141] libmachine: (no-preload-908370) Calling .Close
	I1104 12:08:58.737737   85500 main.go:141] libmachine: (no-preload-908370) DBG | Closing plugin on server side
	I1104 12:08:58.737756   85500 main.go:141] libmachine: Successfully made call to close driver server
	I1104 12:08:58.737770   85500 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 12:08:58.740716   85500 main.go:141] libmachine: Making call to close driver server
	I1104 12:08:58.740735   85500 main.go:141] libmachine: (no-preload-908370) Calling .Close
	I1104 12:08:58.740963   85500 main.go:141] libmachine: (no-preload-908370) DBG | Closing plugin on server side
	I1104 12:08:58.740974   85500 main.go:141] libmachine: Successfully made call to close driver server
	I1104 12:08:58.740985   85500 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 12:08:58.987200   85500 main.go:141] libmachine: Making call to close driver server
	I1104 12:08:58.987227   85500 main.go:141] libmachine: (no-preload-908370) Calling .Close
	I1104 12:08:58.987657   85500 main.go:141] libmachine: Successfully made call to close driver server
	I1104 12:08:58.987667   85500 main.go:141] libmachine: (no-preload-908370) DBG | Closing plugin on server side
	I1104 12:08:58.987676   85500 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 12:08:58.987685   85500 main.go:141] libmachine: Making call to close driver server
	I1104 12:08:58.987708   85500 main.go:141] libmachine: (no-preload-908370) Calling .Close
	I1104 12:08:58.987991   85500 main.go:141] libmachine: Successfully made call to close driver server
	I1104 12:08:58.988006   85500 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 12:08:58.988018   85500 addons.go:475] Verifying addon metrics-server=true in "no-preload-908370"
	I1104 12:08:58.989756   85500 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1104 12:08:58.991022   85500 addons.go:510] duration metric: took 1.54397104s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
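The lines above show minikube staging the addon manifests over SSH and applying them with its bundled kubectl. As a minimal sketch (commands copied from the Run: lines above; the paths and the v1.31.2 binary location are specific to this cluster, not general), the same steps run by hand would be:

    # Apply the staged addon manifests with minikube's bundled kubectl (as logged above)
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
    # metrics-server pieces are applied together in a single call
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply \
      -f /etc/kubernetes/addons/metrics-apiservice.yaml \
      -f /etc/kubernetes/addons/metrics-server-deployment.yaml \
      -f /etc/kubernetes/addons/metrics-server-rbac.yaml \
      -f /etc/kubernetes/addons/metrics-server-service.yaml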
	I1104 12:08:59.659284   85500 node_ready.go:53] node "no-preload-908370" has status "Ready":"False"
	I1104 12:08:56.057497   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:58.057767   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:56.850793   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:58.852058   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:56.938312   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:57.437920   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:57.937779   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:58.438511   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:58.938464   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:59.438423   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:59.938450   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:00.438108   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:00.938053   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:01.438356   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
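The repeated entries from pid 86402 are minikube polling for a running kube-apiserver process on the old-k8s-version node. A rough sketch of the same wait, assuming the roughly 500 ms cadence visible in the timestamps (the loop structure, interval, and absence of a timeout here are illustrative, not taken from minikube's source):

    # Illustrative wait loop; the pgrep command itself is the one logged above
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      sleep 0.5   # assumed interval, matching the observed ~500 ms cadence
    done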
	I1104 12:09:02.158318   85500 node_ready.go:53] node "no-preload-908370" has status "Ready":"False"
	I1104 12:09:04.658719   85500 node_ready.go:53] node "no-preload-908370" has status "Ready":"False"
	I1104 12:09:05.159526   85500 node_ready.go:49] node "no-preload-908370" has status "Ready":"True"
	I1104 12:09:05.159553   85500 node_ready.go:38] duration metric: took 7.504528904s for node "no-preload-908370" to be "Ready" ...
	I1104 12:09:05.159564   85500 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1104 12:09:05.164838   85500 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-vv4kq" in "kube-system" namespace to be "Ready" ...
	I1104 12:09:05.173888   85500 pod_ready.go:93] pod "coredns-7c65d6cfc9-vv4kq" in "kube-system" namespace has status "Ready":"True"
	I1104 12:09:05.173909   85500 pod_ready.go:82] duration metric: took 9.046581ms for pod "coredns-7c65d6cfc9-vv4kq" in "kube-system" namespace to be "Ready" ...
	I1104 12:09:05.173919   85500 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-908370" in "kube-system" namespace to be "Ready" ...
	I1104 12:09:00.556225   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:02.556893   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:05.055827   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:01.351472   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:03.851990   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:01.938447   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:02.438441   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:02.938694   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:03.438467   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:03.938445   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:04.438137   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:04.937941   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:05.438441   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:05.937760   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:06.438704   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:05.680754   85500 pod_ready.go:93] pod "etcd-no-preload-908370" in "kube-system" namespace has status "Ready":"True"
	I1104 12:09:05.680778   85500 pod_ready.go:82] duration metric: took 506.849735ms for pod "etcd-no-preload-908370" in "kube-system" namespace to be "Ready" ...
	I1104 12:09:05.680804   85500 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-908370" in "kube-system" namespace to be "Ready" ...
	I1104 12:09:07.687108   85500 pod_ready.go:103] pod "kube-apiserver-no-preload-908370" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:09.687377   85500 pod_ready.go:103] pod "kube-apiserver-no-preload-908370" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:07.556024   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:10.055613   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:06.351230   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:08.351640   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:10.850364   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:06.937956   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:07.438323   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:07.938465   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:08.438437   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:08.937675   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:09.437868   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:09.938053   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:10.438467   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:10.938703   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:11.438436   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:10.687315   85500 pod_ready.go:93] pod "kube-apiserver-no-preload-908370" in "kube-system" namespace has status "Ready":"True"
	I1104 12:09:10.687338   85500 pod_ready.go:82] duration metric: took 5.006527478s for pod "kube-apiserver-no-preload-908370" in "kube-system" namespace to be "Ready" ...
	I1104 12:09:10.687348   85500 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-908370" in "kube-system" namespace to be "Ready" ...
	I1104 12:09:10.692554   85500 pod_ready.go:93] pod "kube-controller-manager-no-preload-908370" in "kube-system" namespace has status "Ready":"True"
	I1104 12:09:10.692583   85500 pod_ready.go:82] duration metric: took 5.227048ms for pod "kube-controller-manager-no-preload-908370" in "kube-system" namespace to be "Ready" ...
	I1104 12:09:10.692597   85500 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-w9hbz" in "kube-system" namespace to be "Ready" ...
	I1104 12:09:10.697109   85500 pod_ready.go:93] pod "kube-proxy-w9hbz" in "kube-system" namespace has status "Ready":"True"
	I1104 12:09:10.697132   85500 pod_ready.go:82] duration metric: took 4.525205ms for pod "kube-proxy-w9hbz" in "kube-system" namespace to be "Ready" ...
	I1104 12:09:10.697153   85500 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-908370" in "kube-system" namespace to be "Ready" ...
	I1104 12:09:10.701450   85500 pod_ready.go:93] pod "kube-scheduler-no-preload-908370" in "kube-system" namespace has status "Ready":"True"
	I1104 12:09:10.701472   85500 pod_ready.go:82] duration metric: took 4.310973ms for pod "kube-scheduler-no-preload-908370" in "kube-system" namespace to be "Ready" ...
	I1104 12:09:10.701483   85500 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace to be "Ready" ...
	I1104 12:09:12.708631   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:14.708772   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:12.056161   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:14.556380   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:12.850721   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:14.851608   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:11.938465   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:12.437963   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:12.938515   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:13.437754   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:13.937856   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:14.438729   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:14.938439   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:15.438421   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:15.938044   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:16.438456   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:17.209025   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:19.707595   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:17.056226   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:19.555918   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:17.350266   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:19.350329   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:16.937807   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:17.438266   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:17.938153   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:18.437829   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:18.938469   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:19.438336   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:19.938284   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:20.438073   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:20.937894   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:21.438135   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:09:21.438238   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:09:21.471463   86402 cri.go:89] found id: ""
	I1104 12:09:21.471495   86402 logs.go:282] 0 containers: []
	W1104 12:09:21.471507   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:09:21.471515   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:09:21.471568   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:09:21.509336   86402 cri.go:89] found id: ""
	I1104 12:09:21.509363   86402 logs.go:282] 0 containers: []
	W1104 12:09:21.509373   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:09:21.509381   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:09:21.509441   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:09:21.545963   86402 cri.go:89] found id: ""
	I1104 12:09:21.545987   86402 logs.go:282] 0 containers: []
	W1104 12:09:21.545995   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:09:21.546000   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:09:21.546059   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:09:21.580707   86402 cri.go:89] found id: ""
	I1104 12:09:21.580737   86402 logs.go:282] 0 containers: []
	W1104 12:09:21.580748   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:09:21.580755   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:09:21.580820   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:09:21.613763   86402 cri.go:89] found id: ""
	I1104 12:09:21.613791   86402 logs.go:282] 0 containers: []
	W1104 12:09:21.613801   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:09:21.613809   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:09:21.613872   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:09:21.646559   86402 cri.go:89] found id: ""
	I1104 12:09:21.646583   86402 logs.go:282] 0 containers: []
	W1104 12:09:21.646591   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:09:21.646597   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:09:21.646643   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:09:21.681439   86402 cri.go:89] found id: ""
	I1104 12:09:21.681467   86402 logs.go:282] 0 containers: []
	W1104 12:09:21.681479   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:09:21.681486   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:09:21.681554   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:09:21.708045   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:24.207686   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:22.055637   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:24.056458   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:21.350636   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:23.850852   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:21.713875   86402 cri.go:89] found id: ""
	I1104 12:09:21.713899   86402 logs.go:282] 0 containers: []
	W1104 12:09:21.713907   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:09:21.713915   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:09:21.713925   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:09:21.763882   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:09:21.763918   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:09:21.778590   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:09:21.778615   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:09:21.892208   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:09:21.892235   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:09:21.892250   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:09:21.965946   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:09:21.965984   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
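Once no control-plane containers are found, the tool falls back to gathering diagnostics. The commands below are the same ones issued in the Run: lines of the cycle above, collected in one place for reference:

    # Diagnostic gathering pass, reproduced verbatim from the Run: lines above
    sudo journalctl -u kubelet -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
    sudo journalctl -u crio -n 400
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a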
	I1104 12:09:24.502992   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:24.514899   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:09:24.514960   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:09:24.554466   86402 cri.go:89] found id: ""
	I1104 12:09:24.554491   86402 logs.go:282] 0 containers: []
	W1104 12:09:24.554501   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:09:24.554510   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:09:24.554567   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:09:24.591532   86402 cri.go:89] found id: ""
	I1104 12:09:24.591560   86402 logs.go:282] 0 containers: []
	W1104 12:09:24.591572   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:09:24.591580   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:09:24.591638   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:09:24.625436   86402 cri.go:89] found id: ""
	I1104 12:09:24.625467   86402 logs.go:282] 0 containers: []
	W1104 12:09:24.625478   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:09:24.625485   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:09:24.625544   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:09:24.658317   86402 cri.go:89] found id: ""
	I1104 12:09:24.658346   86402 logs.go:282] 0 containers: []
	W1104 12:09:24.658357   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:09:24.658364   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:09:24.658426   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:09:24.692811   86402 cri.go:89] found id: ""
	I1104 12:09:24.692839   86402 logs.go:282] 0 containers: []
	W1104 12:09:24.692850   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:09:24.692857   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:09:24.692917   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:09:24.729677   86402 cri.go:89] found id: ""
	I1104 12:09:24.729708   86402 logs.go:282] 0 containers: []
	W1104 12:09:24.729719   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:09:24.729726   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:09:24.729773   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:09:24.768575   86402 cri.go:89] found id: ""
	I1104 12:09:24.768598   86402 logs.go:282] 0 containers: []
	W1104 12:09:24.768608   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:09:24.768615   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:09:24.768681   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:09:24.802344   86402 cri.go:89] found id: ""
	I1104 12:09:24.802368   86402 logs.go:282] 0 containers: []
	W1104 12:09:24.802375   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:09:24.802383   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:09:24.802394   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:09:24.855882   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:09:24.855915   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:09:24.869199   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:09:24.869243   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:09:24.940720   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:09:24.940744   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:09:24.940758   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:09:25.016139   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:09:25.016177   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:09:26.208422   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:28.208568   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:26.557513   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:29.055769   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:26.350171   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:28.353001   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:30.851153   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:27.553297   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:27.566857   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:09:27.566913   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:09:27.599606   86402 cri.go:89] found id: ""
	I1104 12:09:27.599641   86402 logs.go:282] 0 containers: []
	W1104 12:09:27.599653   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:09:27.599661   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:09:27.599721   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:09:27.633818   86402 cri.go:89] found id: ""
	I1104 12:09:27.633841   86402 logs.go:282] 0 containers: []
	W1104 12:09:27.633849   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:09:27.633854   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:09:27.633907   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:09:27.668088   86402 cri.go:89] found id: ""
	I1104 12:09:27.668120   86402 logs.go:282] 0 containers: []
	W1104 12:09:27.668129   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:09:27.668135   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:09:27.668185   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:09:27.699401   86402 cri.go:89] found id: ""
	I1104 12:09:27.699433   86402 logs.go:282] 0 containers: []
	W1104 12:09:27.699445   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:09:27.699453   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:09:27.699511   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:09:27.731422   86402 cri.go:89] found id: ""
	I1104 12:09:27.731448   86402 logs.go:282] 0 containers: []
	W1104 12:09:27.731459   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:09:27.731466   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:09:27.731528   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:09:27.762808   86402 cri.go:89] found id: ""
	I1104 12:09:27.762839   86402 logs.go:282] 0 containers: []
	W1104 12:09:27.762850   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:09:27.762857   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:09:27.762917   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:09:27.794729   86402 cri.go:89] found id: ""
	I1104 12:09:27.794757   86402 logs.go:282] 0 containers: []
	W1104 12:09:27.794765   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:09:27.794771   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:09:27.794826   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:09:27.825694   86402 cri.go:89] found id: ""
	I1104 12:09:27.825716   86402 logs.go:282] 0 containers: []
	W1104 12:09:27.825724   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:09:27.825731   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:09:27.825742   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:09:27.862111   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:09:27.862140   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:09:27.911169   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:09:27.911204   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:09:27.924207   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:09:27.924232   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:09:27.995123   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:09:27.995153   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:09:27.995167   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:09:30.580831   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:30.594901   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:09:30.594959   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:09:30.630936   86402 cri.go:89] found id: ""
	I1104 12:09:30.630961   86402 logs.go:282] 0 containers: []
	W1104 12:09:30.630971   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:09:30.630979   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:09:30.631034   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:09:30.669288   86402 cri.go:89] found id: ""
	I1104 12:09:30.669311   86402 logs.go:282] 0 containers: []
	W1104 12:09:30.669320   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:09:30.669328   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:09:30.669388   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:09:30.706288   86402 cri.go:89] found id: ""
	I1104 12:09:30.706312   86402 logs.go:282] 0 containers: []
	W1104 12:09:30.706319   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:09:30.706325   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:09:30.706384   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:09:30.739027   86402 cri.go:89] found id: ""
	I1104 12:09:30.739057   86402 logs.go:282] 0 containers: []
	W1104 12:09:30.739069   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:09:30.739078   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:09:30.739137   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:09:30.772247   86402 cri.go:89] found id: ""
	I1104 12:09:30.772272   86402 logs.go:282] 0 containers: []
	W1104 12:09:30.772280   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:09:30.772286   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:09:30.772338   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:09:30.810327   86402 cri.go:89] found id: ""
	I1104 12:09:30.810360   86402 logs.go:282] 0 containers: []
	W1104 12:09:30.810370   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:09:30.810375   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:09:30.810426   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:09:30.842241   86402 cri.go:89] found id: ""
	I1104 12:09:30.842271   86402 logs.go:282] 0 containers: []
	W1104 12:09:30.842279   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:09:30.842285   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:09:30.842332   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:09:30.877003   86402 cri.go:89] found id: ""
	I1104 12:09:30.877032   86402 logs.go:282] 0 containers: []
	W1104 12:09:30.877043   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:09:30.877052   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:09:30.877077   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:09:30.925783   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:09:30.925816   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:09:30.939651   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:09:30.939680   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:09:31.029176   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:09:31.029210   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:09:31.029244   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:09:31.116311   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:09:31.116348   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:09:30.708451   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:32.708661   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:31.056627   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:33.056743   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:35.057986   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:33.350420   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:35.351206   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:33.653267   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:33.665813   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:09:33.665878   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:09:33.701812   86402 cri.go:89] found id: ""
	I1104 12:09:33.701839   86402 logs.go:282] 0 containers: []
	W1104 12:09:33.701852   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:09:33.701860   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:09:33.701922   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:09:33.738816   86402 cri.go:89] found id: ""
	I1104 12:09:33.738850   86402 logs.go:282] 0 containers: []
	W1104 12:09:33.738861   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:09:33.738868   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:09:33.738928   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:09:33.773936   86402 cri.go:89] found id: ""
	I1104 12:09:33.773960   86402 logs.go:282] 0 containers: []
	W1104 12:09:33.773968   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:09:33.773976   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:09:33.774031   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:09:33.808049   86402 cri.go:89] found id: ""
	I1104 12:09:33.808079   86402 logs.go:282] 0 containers: []
	W1104 12:09:33.808091   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:09:33.808098   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:09:33.808154   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:09:33.844276   86402 cri.go:89] found id: ""
	I1104 12:09:33.844303   86402 logs.go:282] 0 containers: []
	W1104 12:09:33.844314   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:09:33.844322   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:09:33.844443   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:09:33.879736   86402 cri.go:89] found id: ""
	I1104 12:09:33.879772   86402 logs.go:282] 0 containers: []
	W1104 12:09:33.879782   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:09:33.879788   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:09:33.879843   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:09:33.913717   86402 cri.go:89] found id: ""
	I1104 12:09:33.913750   86402 logs.go:282] 0 containers: []
	W1104 12:09:33.913761   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:09:33.913769   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:09:33.913832   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:09:33.949632   86402 cri.go:89] found id: ""
	I1104 12:09:33.949658   86402 logs.go:282] 0 containers: []
	W1104 12:09:33.949667   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:09:33.949677   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:09:33.949691   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:09:34.019770   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:09:34.019790   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:09:34.019806   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:09:34.101493   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:09:34.101524   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:09:34.146723   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:09:34.146751   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:09:34.196295   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:09:34.196338   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:09:35.207223   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:37.207576   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:39.208091   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:37.556228   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:39.556548   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:37.850907   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:39.852870   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:36.709951   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:36.724723   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:09:36.724782   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:09:36.777406   86402 cri.go:89] found id: ""
	I1104 12:09:36.777440   86402 logs.go:282] 0 containers: []
	W1104 12:09:36.777451   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:09:36.777459   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:09:36.777520   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:09:36.834486   86402 cri.go:89] found id: ""
	I1104 12:09:36.834516   86402 logs.go:282] 0 containers: []
	W1104 12:09:36.834527   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:09:36.834535   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:09:36.834641   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:09:36.868828   86402 cri.go:89] found id: ""
	I1104 12:09:36.868853   86402 logs.go:282] 0 containers: []
	W1104 12:09:36.868861   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:09:36.868867   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:09:36.868912   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:09:36.900942   86402 cri.go:89] found id: ""
	I1104 12:09:36.900972   86402 logs.go:282] 0 containers: []
	W1104 12:09:36.900980   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:09:36.900986   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:09:36.901043   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:09:36.933215   86402 cri.go:89] found id: ""
	I1104 12:09:36.933265   86402 logs.go:282] 0 containers: []
	W1104 12:09:36.933276   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:09:36.933282   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:09:36.933330   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:09:36.966753   86402 cri.go:89] found id: ""
	I1104 12:09:36.966776   86402 logs.go:282] 0 containers: []
	W1104 12:09:36.966784   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:09:36.966789   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:09:36.966850   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:09:37.000050   86402 cri.go:89] found id: ""
	I1104 12:09:37.000074   86402 logs.go:282] 0 containers: []
	W1104 12:09:37.000082   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:09:37.000087   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:09:37.000144   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:09:37.033252   86402 cri.go:89] found id: ""
	I1104 12:09:37.033283   86402 logs.go:282] 0 containers: []
	W1104 12:09:37.033295   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:09:37.033305   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:09:37.033328   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:09:37.085351   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:09:37.085383   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:09:37.098556   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:09:37.098582   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:09:37.167489   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:09:37.167512   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:09:37.167525   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:09:37.243292   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:09:37.243325   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:09:39.781468   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:39.795630   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:09:39.795756   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:09:39.833745   86402 cri.go:89] found id: ""
	I1104 12:09:39.833779   86402 logs.go:282] 0 containers: []
	W1104 12:09:39.833791   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:09:39.833798   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:09:39.833862   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:09:39.870075   86402 cri.go:89] found id: ""
	I1104 12:09:39.870096   86402 logs.go:282] 0 containers: []
	W1104 12:09:39.870106   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:09:39.870119   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:09:39.870173   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:09:39.905807   86402 cri.go:89] found id: ""
	I1104 12:09:39.905836   86402 logs.go:282] 0 containers: []
	W1104 12:09:39.905846   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:09:39.905854   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:09:39.905916   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:09:39.941890   86402 cri.go:89] found id: ""
	I1104 12:09:39.941914   86402 logs.go:282] 0 containers: []
	W1104 12:09:39.941922   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:09:39.941932   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:09:39.941978   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:09:39.979123   86402 cri.go:89] found id: ""
	I1104 12:09:39.979150   86402 logs.go:282] 0 containers: []
	W1104 12:09:39.979159   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:09:39.979165   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:09:39.979220   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:09:40.014748   86402 cri.go:89] found id: ""
	I1104 12:09:40.014777   86402 logs.go:282] 0 containers: []
	W1104 12:09:40.014785   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:09:40.014791   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:09:40.014882   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:09:40.049977   86402 cri.go:89] found id: ""
	I1104 12:09:40.050004   86402 logs.go:282] 0 containers: []
	W1104 12:09:40.050014   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:09:40.050021   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:09:40.050100   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:09:40.085630   86402 cri.go:89] found id: ""
	I1104 12:09:40.085663   86402 logs.go:282] 0 containers: []
	W1104 12:09:40.085674   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:09:40.085685   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:09:40.085701   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:09:40.166611   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:09:40.166650   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:09:40.203117   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:09:40.203155   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:09:40.256233   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:09:40.256267   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:09:40.270009   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:09:40.270042   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:09:40.338672   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
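By this point the pattern is stable: crictl reports no kube-apiserver (or any other control-plane) container, and describe nodes fails because nothing is listening on localhost:8443, which is consistent with the v1.20.0 apiserver never having started on this node. A few generic checks one could run on the node to confirm (standard commands, not taken from the report):

    sudo crictl ps -a --name=kube-apiserver        # any apiserver container at all, running or exited?
    curl -k https://localhost:8443/healthz         # is anything listening on the apiserver port?
    sudo journalctl -u kubelet -n 100 --no-pager   # kubelet errors around static pod startup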
	I1104 12:09:41.707618   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:43.708915   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:42.055555   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:44.060949   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:42.351562   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:44.851599   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:42.839402   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:42.852881   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:09:42.852947   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:09:42.884587   86402 cri.go:89] found id: ""
	I1104 12:09:42.884614   86402 logs.go:282] 0 containers: []
	W1104 12:09:42.884624   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:09:42.884631   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:09:42.884690   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:09:42.915286   86402 cri.go:89] found id: ""
	I1104 12:09:42.915316   86402 logs.go:282] 0 containers: []
	W1104 12:09:42.915327   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:09:42.915337   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:09:42.915399   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:09:42.945827   86402 cri.go:89] found id: ""
	I1104 12:09:42.945857   86402 logs.go:282] 0 containers: []
	W1104 12:09:42.945868   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:09:42.945875   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:09:42.945934   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:09:42.982662   86402 cri.go:89] found id: ""
	I1104 12:09:42.982693   86402 logs.go:282] 0 containers: []
	W1104 12:09:42.982703   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:09:42.982712   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:09:42.982788   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:09:43.015337   86402 cri.go:89] found id: ""
	I1104 12:09:43.015371   86402 logs.go:282] 0 containers: []
	W1104 12:09:43.015382   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:09:43.015390   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:09:43.015453   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:09:43.048235   86402 cri.go:89] found id: ""
	I1104 12:09:43.048262   86402 logs.go:282] 0 containers: []
	W1104 12:09:43.048270   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:09:43.048276   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:09:43.048351   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:09:43.080636   86402 cri.go:89] found id: ""
	I1104 12:09:43.080668   86402 logs.go:282] 0 containers: []
	W1104 12:09:43.080679   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:09:43.080687   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:09:43.080746   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:09:43.113986   86402 cri.go:89] found id: ""
	I1104 12:09:43.114011   86402 logs.go:282] 0 containers: []
	W1104 12:09:43.114019   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:09:43.114027   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:09:43.114038   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:09:43.165356   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:09:43.165390   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:09:43.179167   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:09:43.179200   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:09:43.250054   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:09:43.250083   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:09:43.250098   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:09:43.328970   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:09:43.329002   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:09:45.869879   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:45.883262   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:09:45.883359   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:09:45.921978   86402 cri.go:89] found id: ""
	I1104 12:09:45.922003   86402 logs.go:282] 0 containers: []
	W1104 12:09:45.922011   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:09:45.922016   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:09:45.922076   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:09:45.954668   86402 cri.go:89] found id: ""
	I1104 12:09:45.954697   86402 logs.go:282] 0 containers: []
	W1104 12:09:45.954710   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:09:45.954717   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:09:45.954787   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:09:45.987793   86402 cri.go:89] found id: ""
	I1104 12:09:45.987826   86402 logs.go:282] 0 containers: []
	W1104 12:09:45.987837   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:09:45.987845   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:09:45.987906   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:09:46.028517   86402 cri.go:89] found id: ""
	I1104 12:09:46.028550   86402 logs.go:282] 0 containers: []
	W1104 12:09:46.028558   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:09:46.028563   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:09:46.028621   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:09:46.063832   86402 cri.go:89] found id: ""
	I1104 12:09:46.063859   86402 logs.go:282] 0 containers: []
	W1104 12:09:46.063870   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:09:46.063878   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:09:46.063942   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:09:46.099981   86402 cri.go:89] found id: ""
	I1104 12:09:46.100011   86402 logs.go:282] 0 containers: []
	W1104 12:09:46.100027   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:09:46.100036   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:09:46.100169   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:09:46.133060   86402 cri.go:89] found id: ""
	I1104 12:09:46.133083   86402 logs.go:282] 0 containers: []
	W1104 12:09:46.133092   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:09:46.133099   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:09:46.133165   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:09:46.170559   86402 cri.go:89] found id: ""
	I1104 12:09:46.170583   86402 logs.go:282] 0 containers: []
	W1104 12:09:46.170591   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:09:46.170599   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:09:46.170610   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:09:46.253202   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:09:46.253253   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:09:46.288468   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:09:46.288498   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:09:46.339322   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:09:46.339354   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:09:46.353020   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:09:46.353049   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:09:46.420328   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:09:46.208695   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:48.708268   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:46.556598   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:49.057461   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:47.351225   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:49.352737   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:48.920709   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:48.933443   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:09:48.933507   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:09:48.964736   86402 cri.go:89] found id: ""
	I1104 12:09:48.964759   86402 logs.go:282] 0 containers: []
	W1104 12:09:48.964770   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:09:48.964777   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:09:48.964837   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:09:48.996646   86402 cri.go:89] found id: ""
	I1104 12:09:48.996670   86402 logs.go:282] 0 containers: []
	W1104 12:09:48.996679   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:09:48.996684   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:09:48.996734   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:09:49.028899   86402 cri.go:89] found id: ""
	I1104 12:09:49.028942   86402 logs.go:282] 0 containers: []
	W1104 12:09:49.028951   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:09:49.028957   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:09:49.029015   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:09:49.065032   86402 cri.go:89] found id: ""
	I1104 12:09:49.065056   86402 logs.go:282] 0 containers: []
	W1104 12:09:49.065064   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:09:49.065075   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:09:49.065120   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:09:49.097159   86402 cri.go:89] found id: ""
	I1104 12:09:49.097183   86402 logs.go:282] 0 containers: []
	W1104 12:09:49.097191   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:09:49.097196   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:09:49.097269   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:09:49.131578   86402 cri.go:89] found id: ""
	I1104 12:09:49.131608   86402 logs.go:282] 0 containers: []
	W1104 12:09:49.131619   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:09:49.131626   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:09:49.131684   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:09:49.164307   86402 cri.go:89] found id: ""
	I1104 12:09:49.164339   86402 logs.go:282] 0 containers: []
	W1104 12:09:49.164358   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:09:49.164367   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:09:49.164430   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:09:49.197171   86402 cri.go:89] found id: ""
	I1104 12:09:49.197199   86402 logs.go:282] 0 containers: []
	W1104 12:09:49.197210   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:09:49.197220   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:09:49.197251   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:09:49.210327   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:09:49.210355   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:09:49.280226   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:09:49.280251   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:09:49.280262   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:09:49.367655   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:09:49.367691   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:09:49.408424   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:09:49.408452   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:09:50.708963   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:53.207337   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:51.555800   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:54.055622   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:51.850949   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:54.350551   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:51.958148   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:51.970451   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:09:51.970521   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:09:52.000896   86402 cri.go:89] found id: ""
	I1104 12:09:52.000929   86402 logs.go:282] 0 containers: []
	W1104 12:09:52.000940   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:09:52.000948   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:09:52.001023   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:09:52.034122   86402 cri.go:89] found id: ""
	I1104 12:09:52.034150   86402 logs.go:282] 0 containers: []
	W1104 12:09:52.034161   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:09:52.034168   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:09:52.034227   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:09:52.070834   86402 cri.go:89] found id: ""
	I1104 12:09:52.070872   86402 logs.go:282] 0 containers: []
	W1104 12:09:52.070884   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:09:52.070891   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:09:52.070950   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:09:52.103730   86402 cri.go:89] found id: ""
	I1104 12:09:52.103758   86402 logs.go:282] 0 containers: []
	W1104 12:09:52.103766   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:09:52.103772   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:09:52.103832   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:09:52.135980   86402 cri.go:89] found id: ""
	I1104 12:09:52.136006   86402 logs.go:282] 0 containers: []
	W1104 12:09:52.136014   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:09:52.136020   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:09:52.136081   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:09:52.168903   86402 cri.go:89] found id: ""
	I1104 12:09:52.168928   86402 logs.go:282] 0 containers: []
	W1104 12:09:52.168936   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:09:52.168942   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:09:52.169001   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:09:52.199499   86402 cri.go:89] found id: ""
	I1104 12:09:52.199529   86402 logs.go:282] 0 containers: []
	W1104 12:09:52.199539   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:09:52.199546   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:09:52.199610   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:09:52.232566   86402 cri.go:89] found id: ""
	I1104 12:09:52.232603   86402 logs.go:282] 0 containers: []
	W1104 12:09:52.232615   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:09:52.232626   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:09:52.232640   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:09:52.282140   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:09:52.282180   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:09:52.295079   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:09:52.295110   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:09:52.364061   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:09:52.364087   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:09:52.364102   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:09:52.437868   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:09:52.437901   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:09:54.978182   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:54.991002   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:09:54.991068   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:09:55.023628   86402 cri.go:89] found id: ""
	I1104 12:09:55.023656   86402 logs.go:282] 0 containers: []
	W1104 12:09:55.023663   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:09:55.023669   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:09:55.023715   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:09:55.058524   86402 cri.go:89] found id: ""
	I1104 12:09:55.058548   86402 logs.go:282] 0 containers: []
	W1104 12:09:55.058557   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:09:55.058564   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:09:55.058634   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:09:55.095730   86402 cri.go:89] found id: ""
	I1104 12:09:55.095760   86402 logs.go:282] 0 containers: []
	W1104 12:09:55.095772   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:09:55.095779   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:09:55.095837   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:09:55.128341   86402 cri.go:89] found id: ""
	I1104 12:09:55.128365   86402 logs.go:282] 0 containers: []
	W1104 12:09:55.128373   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:09:55.128379   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:09:55.128438   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:09:55.160655   86402 cri.go:89] found id: ""
	I1104 12:09:55.160681   86402 logs.go:282] 0 containers: []
	W1104 12:09:55.160693   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:09:55.160700   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:09:55.160754   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:09:55.194050   86402 cri.go:89] found id: ""
	I1104 12:09:55.194077   86402 logs.go:282] 0 containers: []
	W1104 12:09:55.194086   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:09:55.194091   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:09:55.194138   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:09:55.227655   86402 cri.go:89] found id: ""
	I1104 12:09:55.227694   86402 logs.go:282] 0 containers: []
	W1104 12:09:55.227705   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:09:55.227712   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:09:55.227810   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:09:55.261106   86402 cri.go:89] found id: ""
	I1104 12:09:55.261137   86402 logs.go:282] 0 containers: []
	W1104 12:09:55.261147   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:09:55.261157   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:09:55.261171   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:09:55.335577   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:09:55.335598   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:09:55.335610   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:09:55.421339   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:09:55.421375   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:09:55.459936   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:09:55.459967   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:09:55.509346   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:09:55.509382   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:09:55.208869   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:57.707576   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:59.708019   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:56.555996   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:58.556335   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:56.851071   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:58.851254   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:58.023608   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:58.036540   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:09:58.036599   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:09:58.075104   86402 cri.go:89] found id: ""
	I1104 12:09:58.075182   86402 logs.go:282] 0 containers: []
	W1104 12:09:58.075198   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:09:58.075207   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:09:58.075271   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:09:58.109910   86402 cri.go:89] found id: ""
	I1104 12:09:58.109949   86402 logs.go:282] 0 containers: []
	W1104 12:09:58.109961   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:09:58.109968   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:09:58.110038   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:09:58.142829   86402 cri.go:89] found id: ""
	I1104 12:09:58.142854   86402 logs.go:282] 0 containers: []
	W1104 12:09:58.142865   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:09:58.142873   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:09:58.142924   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:09:58.178125   86402 cri.go:89] found id: ""
	I1104 12:09:58.178153   86402 logs.go:282] 0 containers: []
	W1104 12:09:58.178161   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:09:58.178168   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:09:58.178239   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:09:58.214117   86402 cri.go:89] found id: ""
	I1104 12:09:58.214146   86402 logs.go:282] 0 containers: []
	W1104 12:09:58.214156   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:09:58.214162   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:09:58.214213   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:09:58.244728   86402 cri.go:89] found id: ""
	I1104 12:09:58.244751   86402 logs.go:282] 0 containers: []
	W1104 12:09:58.244759   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:09:58.244765   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:09:58.244809   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:09:58.275542   86402 cri.go:89] found id: ""
	I1104 12:09:58.275568   86402 logs.go:282] 0 containers: []
	W1104 12:09:58.275576   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:09:58.275582   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:09:58.275630   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:09:58.314909   86402 cri.go:89] found id: ""
	I1104 12:09:58.314935   86402 logs.go:282] 0 containers: []
	W1104 12:09:58.314943   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:09:58.314952   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:09:58.314962   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:09:58.364361   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:09:58.364390   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:09:58.378483   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:09:58.378517   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:09:58.442012   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:09:58.442033   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:09:58.442045   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:09:58.517260   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:09:58.517298   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:10:01.057203   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:10:01.069937   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:10:01.070008   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:10:01.101672   86402 cri.go:89] found id: ""
	I1104 12:10:01.101698   86402 logs.go:282] 0 containers: []
	W1104 12:10:01.101709   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:10:01.101716   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:10:01.101779   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:10:01.134672   86402 cri.go:89] found id: ""
	I1104 12:10:01.134701   86402 logs.go:282] 0 containers: []
	W1104 12:10:01.134712   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:10:01.134719   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:10:01.134789   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:10:01.167784   86402 cri.go:89] found id: ""
	I1104 12:10:01.167833   86402 logs.go:282] 0 containers: []
	W1104 12:10:01.167845   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:10:01.167853   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:10:01.167945   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:10:01.201218   86402 cri.go:89] found id: ""
	I1104 12:10:01.201260   86402 logs.go:282] 0 containers: []
	W1104 12:10:01.201271   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:10:01.201281   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:10:01.201338   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:10:01.234964   86402 cri.go:89] found id: ""
	I1104 12:10:01.234991   86402 logs.go:282] 0 containers: []
	W1104 12:10:01.235000   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:10:01.235007   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:10:01.235069   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:10:01.267809   86402 cri.go:89] found id: ""
	I1104 12:10:01.267848   86402 logs.go:282] 0 containers: []
	W1104 12:10:01.267881   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:10:01.267890   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:10:01.267942   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:10:01.303567   86402 cri.go:89] found id: ""
	I1104 12:10:01.303590   86402 logs.go:282] 0 containers: []
	W1104 12:10:01.303598   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:10:01.303604   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:10:01.303648   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:10:01.342059   86402 cri.go:89] found id: ""
	I1104 12:10:01.342088   86402 logs.go:282] 0 containers: []
	W1104 12:10:01.342099   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:10:01.342109   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:10:01.342142   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:10:01.354845   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:10:01.354867   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:10:01.423426   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:10:01.423447   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:10:01.423459   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:10:01.498979   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:10:01.499018   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:10:01.537658   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:10:01.537691   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:10:02.208192   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:04.209058   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:01.055266   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:03.056457   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:01.350820   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:03.850435   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:04.088653   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:10:04.103506   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:10:04.103576   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:10:04.137574   86402 cri.go:89] found id: ""
	I1104 12:10:04.137602   86402 logs.go:282] 0 containers: []
	W1104 12:10:04.137612   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:10:04.137620   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:10:04.137684   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:10:04.177624   86402 cri.go:89] found id: ""
	I1104 12:10:04.177662   86402 logs.go:282] 0 containers: []
	W1104 12:10:04.177673   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:10:04.177681   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:10:04.177750   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:10:04.213829   86402 cri.go:89] found id: ""
	I1104 12:10:04.213850   86402 logs.go:282] 0 containers: []
	W1104 12:10:04.213862   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:10:04.213870   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:10:04.213929   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:10:04.251112   86402 cri.go:89] found id: ""
	I1104 12:10:04.251143   86402 logs.go:282] 0 containers: []
	W1104 12:10:04.251154   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:10:04.251162   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:10:04.251227   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:10:04.286005   86402 cri.go:89] found id: ""
	I1104 12:10:04.286036   86402 logs.go:282] 0 containers: []
	W1104 12:10:04.286046   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:10:04.286053   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:10:04.286118   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:10:04.317628   86402 cri.go:89] found id: ""
	I1104 12:10:04.317656   86402 logs.go:282] 0 containers: []
	W1104 12:10:04.317667   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:10:04.317674   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:10:04.317742   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:10:04.351663   86402 cri.go:89] found id: ""
	I1104 12:10:04.351687   86402 logs.go:282] 0 containers: []
	W1104 12:10:04.351695   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:10:04.351700   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:10:04.351755   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:10:04.385818   86402 cri.go:89] found id: ""
	I1104 12:10:04.385842   86402 logs.go:282] 0 containers: []
	W1104 12:10:04.385850   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:10:04.385858   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:10:04.385880   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:10:04.467141   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:10:04.467179   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:10:04.503669   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:10:04.503700   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:10:04.557237   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:10:04.557303   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:10:04.570484   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:10:04.570520   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:10:04.635099   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:10:06.708483   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:09.207171   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:05.556612   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:08.056976   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:06.350422   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:08.351537   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:10.351962   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:07.135741   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:10:07.148039   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:10:07.148132   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:10:07.185171   86402 cri.go:89] found id: ""
	I1104 12:10:07.185196   86402 logs.go:282] 0 containers: []
	W1104 12:10:07.185205   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:10:07.185211   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:10:07.185280   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:10:07.217097   86402 cri.go:89] found id: ""
	I1104 12:10:07.217126   86402 logs.go:282] 0 containers: []
	W1104 12:10:07.217137   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:10:07.217144   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:10:07.217204   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:10:07.250079   86402 cri.go:89] found id: ""
	I1104 12:10:07.250108   86402 logs.go:282] 0 containers: []
	W1104 12:10:07.250116   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:10:07.250121   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:10:07.250169   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:10:07.283423   86402 cri.go:89] found id: ""
	I1104 12:10:07.283463   86402 logs.go:282] 0 containers: []
	W1104 12:10:07.283475   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:10:07.283482   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:10:07.283554   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:10:07.316461   86402 cri.go:89] found id: ""
	I1104 12:10:07.316490   86402 logs.go:282] 0 containers: []
	W1104 12:10:07.316507   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:10:07.316513   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:10:07.316569   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:10:07.361981   86402 cri.go:89] found id: ""
	I1104 12:10:07.362010   86402 logs.go:282] 0 containers: []
	W1104 12:10:07.362018   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:10:07.362024   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:10:07.362087   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:10:07.397834   86402 cri.go:89] found id: ""
	I1104 12:10:07.397867   86402 logs.go:282] 0 containers: []
	W1104 12:10:07.397878   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:10:07.397886   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:10:07.397948   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:10:07.429379   86402 cri.go:89] found id: ""
	I1104 12:10:07.429407   86402 logs.go:282] 0 containers: []
	W1104 12:10:07.429416   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:10:07.429425   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:10:07.429438   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:10:07.495294   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:10:07.495322   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:10:07.495334   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:10:07.578504   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:10:07.578546   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:10:07.617172   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:10:07.617201   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:10:07.667168   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:10:07.667204   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:10:10.181802   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:10:10.196017   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:10:10.196084   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:10:10.228243   86402 cri.go:89] found id: ""
	I1104 12:10:10.228272   86402 logs.go:282] 0 containers: []
	W1104 12:10:10.228282   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:10:10.228289   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:10:10.228347   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:10:10.262110   86402 cri.go:89] found id: ""
	I1104 12:10:10.262143   86402 logs.go:282] 0 containers: []
	W1104 12:10:10.262152   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:10:10.262161   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:10:10.262218   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:10:10.297776   86402 cri.go:89] found id: ""
	I1104 12:10:10.297812   86402 logs.go:282] 0 containers: []
	W1104 12:10:10.297823   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:10:10.297830   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:10:10.297877   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:10:10.332645   86402 cri.go:89] found id: ""
	I1104 12:10:10.332672   86402 logs.go:282] 0 containers: []
	W1104 12:10:10.332680   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:10:10.332685   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:10:10.332730   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:10:10.366703   86402 cri.go:89] found id: ""
	I1104 12:10:10.366735   86402 logs.go:282] 0 containers: []
	W1104 12:10:10.366746   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:10:10.366754   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:10:10.366809   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:10:10.399500   86402 cri.go:89] found id: ""
	I1104 12:10:10.399526   86402 logs.go:282] 0 containers: []
	W1104 12:10:10.399534   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:10:10.399539   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:10:10.399634   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:10:10.434898   86402 cri.go:89] found id: ""
	I1104 12:10:10.434932   86402 logs.go:282] 0 containers: []
	W1104 12:10:10.434943   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:10:10.434951   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:10:10.435022   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:10:10.472159   86402 cri.go:89] found id: ""
	I1104 12:10:10.472189   86402 logs.go:282] 0 containers: []
	W1104 12:10:10.472201   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:10:10.472225   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:10:10.472246   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:10:10.528710   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:10:10.528769   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:10:10.541943   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:10:10.541973   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:10:10.621819   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:10:10.621843   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:10:10.621855   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:10:10.698301   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:10:10.698335   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:10:11.208069   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:13.707594   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:10.556520   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:13.056160   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:15.056984   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:12.851001   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:14.851591   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:13.235151   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:10:13.247511   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:10:13.247585   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:10:13.278546   86402 cri.go:89] found id: ""
	I1104 12:10:13.278576   86402 logs.go:282] 0 containers: []
	W1104 12:10:13.278586   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:10:13.278592   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:10:13.278655   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:10:13.310297   86402 cri.go:89] found id: ""
	I1104 12:10:13.310325   86402 logs.go:282] 0 containers: []
	W1104 12:10:13.310334   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:10:13.310340   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:10:13.310394   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:10:13.344110   86402 cri.go:89] found id: ""
	I1104 12:10:13.344139   86402 logs.go:282] 0 containers: []
	W1104 12:10:13.344150   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:10:13.344158   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:10:13.344210   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:10:13.379778   86402 cri.go:89] found id: ""
	I1104 12:10:13.379806   86402 logs.go:282] 0 containers: []
	W1104 12:10:13.379817   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:10:13.379824   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:10:13.379872   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:10:13.411763   86402 cri.go:89] found id: ""
	I1104 12:10:13.411795   86402 logs.go:282] 0 containers: []
	W1104 12:10:13.411806   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:10:13.411813   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:10:13.411872   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:10:13.445192   86402 cri.go:89] found id: ""
	I1104 12:10:13.445217   86402 logs.go:282] 0 containers: []
	W1104 12:10:13.445235   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:10:13.445243   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:10:13.445297   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:10:13.478518   86402 cri.go:89] found id: ""
	I1104 12:10:13.478549   86402 logs.go:282] 0 containers: []
	W1104 12:10:13.478561   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:10:13.478569   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:10:13.478710   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:10:13.513852   86402 cri.go:89] found id: ""
	I1104 12:10:13.513878   86402 logs.go:282] 0 containers: []
	W1104 12:10:13.513886   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:10:13.513895   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:10:13.513909   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:10:13.590413   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:10:13.590439   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:10:13.590454   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:10:13.664575   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:10:13.664608   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:10:13.700616   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:10:13.700644   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:10:13.751113   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:10:13.751147   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:10:16.264311   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:10:16.277443   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:10:16.277508   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:10:16.309983   86402 cri.go:89] found id: ""
	I1104 12:10:16.310010   86402 logs.go:282] 0 containers: []
	W1104 12:10:16.310020   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:10:16.310025   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:10:16.310073   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:10:16.358281   86402 cri.go:89] found id: ""
	I1104 12:10:16.358305   86402 logs.go:282] 0 containers: []
	W1104 12:10:16.358312   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:10:16.358317   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:10:16.358376   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:10:16.394455   86402 cri.go:89] found id: ""
	I1104 12:10:16.394485   86402 logs.go:282] 0 containers: []
	W1104 12:10:16.394497   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:10:16.394503   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:10:16.394571   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:10:16.430606   86402 cri.go:89] found id: ""
	I1104 12:10:16.430638   86402 logs.go:282] 0 containers: []
	W1104 12:10:16.430648   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:10:16.430655   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:10:16.430716   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:10:16.464402   86402 cri.go:89] found id: ""
	I1104 12:10:16.464439   86402 logs.go:282] 0 containers: []
	W1104 12:10:16.464450   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:10:16.464458   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:10:16.464517   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:10:16.497985   86402 cri.go:89] found id: ""
	I1104 12:10:16.498009   86402 logs.go:282] 0 containers: []
	W1104 12:10:16.498017   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:10:16.498022   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:10:16.498076   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:10:16.531255   86402 cri.go:89] found id: ""
	I1104 12:10:16.531289   86402 logs.go:282] 0 containers: []
	W1104 12:10:16.531301   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:10:16.531309   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:10:16.531372   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:10:16.566176   86402 cri.go:89] found id: ""
	I1104 12:10:16.566204   86402 logs.go:282] 0 containers: []
	W1104 12:10:16.566213   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:10:16.566228   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:10:16.566243   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:10:16.634157   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:10:16.634196   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:10:16.634218   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:10:16.206939   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:18.208360   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:17.555513   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:19.556105   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:17.351026   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:19.351294   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:16.710518   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:10:16.710550   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:10:16.746572   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:10:16.746608   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:10:16.797146   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:10:16.797179   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:10:19.310286   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:10:19.323409   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:10:19.323473   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:10:19.360864   86402 cri.go:89] found id: ""
	I1104 12:10:19.360893   86402 logs.go:282] 0 containers: []
	W1104 12:10:19.360902   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:10:19.360907   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:10:19.360962   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:10:19.400127   86402 cri.go:89] found id: ""
	I1104 12:10:19.400155   86402 logs.go:282] 0 containers: []
	W1104 12:10:19.400167   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:10:19.400174   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:10:19.400230   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:10:19.433023   86402 cri.go:89] found id: ""
	I1104 12:10:19.433049   86402 logs.go:282] 0 containers: []
	W1104 12:10:19.433057   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:10:19.433062   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:10:19.433123   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:10:19.467786   86402 cri.go:89] found id: ""
	I1104 12:10:19.467810   86402 logs.go:282] 0 containers: []
	W1104 12:10:19.467819   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:10:19.467825   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:10:19.467875   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:10:19.498411   86402 cri.go:89] found id: ""
	I1104 12:10:19.498436   86402 logs.go:282] 0 containers: []
	W1104 12:10:19.498444   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:10:19.498455   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:10:19.498502   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:10:19.532146   86402 cri.go:89] found id: ""
	I1104 12:10:19.532171   86402 logs.go:282] 0 containers: []
	W1104 12:10:19.532179   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:10:19.532184   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:10:19.532234   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:10:19.567271   86402 cri.go:89] found id: ""
	I1104 12:10:19.567294   86402 logs.go:282] 0 containers: []
	W1104 12:10:19.567302   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:10:19.567308   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:10:19.567369   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:10:19.608233   86402 cri.go:89] found id: ""
	I1104 12:10:19.608265   86402 logs.go:282] 0 containers: []
	W1104 12:10:19.608279   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:10:19.608289   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:10:19.608304   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:10:19.649039   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:10:19.649071   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:10:19.702129   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:10:19.702168   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:10:19.716749   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:10:19.716776   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:10:19.787538   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:10:19.787560   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:10:19.787572   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:10:20.208694   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:22.708289   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:21.556715   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:23.557173   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:21.851010   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:23.852944   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:22.368982   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:10:22.382889   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:10:22.382962   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:10:22.418672   86402 cri.go:89] found id: ""
	I1104 12:10:22.418698   86402 logs.go:282] 0 containers: []
	W1104 12:10:22.418709   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:10:22.418716   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:10:22.418782   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:10:22.451675   86402 cri.go:89] found id: ""
	I1104 12:10:22.451704   86402 logs.go:282] 0 containers: []
	W1104 12:10:22.451715   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:10:22.451723   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:10:22.451785   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:10:22.488520   86402 cri.go:89] found id: ""
	I1104 12:10:22.488549   86402 logs.go:282] 0 containers: []
	W1104 12:10:22.488561   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:10:22.488567   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:10:22.488631   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:10:22.530288   86402 cri.go:89] found id: ""
	I1104 12:10:22.530312   86402 logs.go:282] 0 containers: []
	W1104 12:10:22.530321   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:10:22.530326   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:10:22.530382   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:10:22.564929   86402 cri.go:89] found id: ""
	I1104 12:10:22.564958   86402 logs.go:282] 0 containers: []
	W1104 12:10:22.564970   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:10:22.564977   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:10:22.565036   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:10:22.598015   86402 cri.go:89] found id: ""
	I1104 12:10:22.598042   86402 logs.go:282] 0 containers: []
	W1104 12:10:22.598051   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:10:22.598056   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:10:22.598160   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:10:22.632894   86402 cri.go:89] found id: ""
	I1104 12:10:22.632921   86402 logs.go:282] 0 containers: []
	W1104 12:10:22.632930   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:10:22.632935   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:10:22.633001   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:10:22.665194   86402 cri.go:89] found id: ""
	I1104 12:10:22.665218   86402 logs.go:282] 0 containers: []
	W1104 12:10:22.665245   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:10:22.665257   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:10:22.665272   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:10:22.717731   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:10:22.717763   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:10:22.732671   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:10:22.732698   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:10:22.823908   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:10:22.823946   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:10:22.823963   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:10:22.907812   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:10:22.907848   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:10:25.449308   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:10:25.461694   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:10:25.461751   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:10:25.493036   86402 cri.go:89] found id: ""
	I1104 12:10:25.493061   86402 logs.go:282] 0 containers: []
	W1104 12:10:25.493068   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:10:25.493075   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:10:25.493122   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:10:25.525084   86402 cri.go:89] found id: ""
	I1104 12:10:25.525116   86402 logs.go:282] 0 containers: []
	W1104 12:10:25.525128   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:10:25.525135   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:10:25.525196   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:10:25.561380   86402 cri.go:89] found id: ""
	I1104 12:10:25.561424   86402 logs.go:282] 0 containers: []
	W1104 12:10:25.561436   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:10:25.561444   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:10:25.561499   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:10:25.595429   86402 cri.go:89] found id: ""
	I1104 12:10:25.595453   86402 logs.go:282] 0 containers: []
	W1104 12:10:25.595468   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:10:25.595474   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:10:25.595521   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:10:25.627409   86402 cri.go:89] found id: ""
	I1104 12:10:25.627436   86402 logs.go:282] 0 containers: []
	W1104 12:10:25.627445   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:10:25.627450   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:10:25.627497   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:10:25.661048   86402 cri.go:89] found id: ""
	I1104 12:10:25.661073   86402 logs.go:282] 0 containers: []
	W1104 12:10:25.661082   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:10:25.661088   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:10:25.661135   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:10:25.698882   86402 cri.go:89] found id: ""
	I1104 12:10:25.698912   86402 logs.go:282] 0 containers: []
	W1104 12:10:25.698920   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:10:25.698926   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:10:25.698978   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:10:25.733355   86402 cri.go:89] found id: ""
	I1104 12:10:25.733397   86402 logs.go:282] 0 containers: []
	W1104 12:10:25.733409   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:10:25.733420   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:10:25.733435   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:10:25.784871   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:10:25.784908   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:10:25.798715   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:10:25.798740   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:10:25.870362   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:10:25.870383   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:10:25.870397   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:10:25.950565   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:10:25.950598   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:10:25.209496   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:27.706991   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:29.708209   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:26.055597   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:28.055845   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:30.056584   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:26.351027   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:28.851204   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:28.488258   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:10:28.506058   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:10:28.506114   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:10:28.566325   86402 cri.go:89] found id: ""
	I1104 12:10:28.566351   86402 logs.go:282] 0 containers: []
	W1104 12:10:28.566358   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:10:28.566364   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:10:28.566413   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:10:28.612753   86402 cri.go:89] found id: ""
	I1104 12:10:28.612781   86402 logs.go:282] 0 containers: []
	W1104 12:10:28.612790   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:10:28.612796   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:10:28.612854   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:10:28.647082   86402 cri.go:89] found id: ""
	I1104 12:10:28.647109   86402 logs.go:282] 0 containers: []
	W1104 12:10:28.647120   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:10:28.647128   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:10:28.647205   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:10:28.683197   86402 cri.go:89] found id: ""
	I1104 12:10:28.683227   86402 logs.go:282] 0 containers: []
	W1104 12:10:28.683239   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:10:28.683247   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:10:28.683299   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:10:28.718139   86402 cri.go:89] found id: ""
	I1104 12:10:28.718175   86402 logs.go:282] 0 containers: []
	W1104 12:10:28.718186   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:10:28.718194   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:10:28.718253   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:10:28.749689   86402 cri.go:89] found id: ""
	I1104 12:10:28.749721   86402 logs.go:282] 0 containers: []
	W1104 12:10:28.749732   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:10:28.749739   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:10:28.749803   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:10:28.786824   86402 cri.go:89] found id: ""
	I1104 12:10:28.786851   86402 logs.go:282] 0 containers: []
	W1104 12:10:28.786859   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:10:28.786864   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:10:28.786925   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:10:28.822833   86402 cri.go:89] found id: ""
	I1104 12:10:28.822856   86402 logs.go:282] 0 containers: []
	W1104 12:10:28.822865   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:10:28.822872   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:10:28.822884   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:10:28.835267   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:10:28.835298   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:10:28.900051   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:10:28.900076   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:10:28.900089   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:10:28.979867   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:10:28.979912   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:10:29.017294   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:10:29.017327   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:10:31.569559   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:10:31.582065   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:10:31.582136   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:10:31.614924   86402 cri.go:89] found id: ""
	I1104 12:10:31.614952   86402 logs.go:282] 0 containers: []
	W1104 12:10:31.614960   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:10:31.614966   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:10:31.615029   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:10:31.647178   86402 cri.go:89] found id: ""
	I1104 12:10:31.647204   86402 logs.go:282] 0 containers: []
	W1104 12:10:31.647212   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:10:31.647218   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:10:31.647277   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:10:31.678723   86402 cri.go:89] found id: ""
	I1104 12:10:31.678749   86402 logs.go:282] 0 containers: []
	W1104 12:10:31.678761   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:10:31.678769   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:10:31.678819   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:10:31.709787   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:34.208234   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:32.555978   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:34.557026   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:31.351700   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:33.850976   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:35.851636   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:31.713013   86402 cri.go:89] found id: ""
	I1104 12:10:31.713036   86402 logs.go:282] 0 containers: []
	W1104 12:10:31.713043   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:10:31.713048   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:10:31.713092   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:10:31.746564   86402 cri.go:89] found id: ""
	I1104 12:10:31.746591   86402 logs.go:282] 0 containers: []
	W1104 12:10:31.746600   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:10:31.746605   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:10:31.746658   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:10:31.779559   86402 cri.go:89] found id: ""
	I1104 12:10:31.779586   86402 logs.go:282] 0 containers: []
	W1104 12:10:31.779594   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:10:31.779601   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:10:31.779652   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:10:31.812047   86402 cri.go:89] found id: ""
	I1104 12:10:31.812076   86402 logs.go:282] 0 containers: []
	W1104 12:10:31.812087   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:10:31.812094   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:10:31.812163   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:10:31.845479   86402 cri.go:89] found id: ""
	I1104 12:10:31.845510   86402 logs.go:282] 0 containers: []
	W1104 12:10:31.845522   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:10:31.845532   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:10:31.845551   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:10:31.909399   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:10:31.909423   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:10:31.909434   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:10:31.985994   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:10:31.986031   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:10:32.023222   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:10:32.023255   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:10:32.074429   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:10:32.074467   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:10:34.588202   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:10:34.600925   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:10:34.600994   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:10:34.632718   86402 cri.go:89] found id: ""
	I1104 12:10:34.632743   86402 logs.go:282] 0 containers: []
	W1104 12:10:34.632754   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:10:34.632763   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:10:34.632813   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:10:34.665553   86402 cri.go:89] found id: ""
	I1104 12:10:34.665576   86402 logs.go:282] 0 containers: []
	W1104 12:10:34.665585   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:10:34.665590   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:10:34.665641   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:10:34.700059   86402 cri.go:89] found id: ""
	I1104 12:10:34.700081   86402 logs.go:282] 0 containers: []
	W1104 12:10:34.700089   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:10:34.700094   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:10:34.700141   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:10:34.732940   86402 cri.go:89] found id: ""
	I1104 12:10:34.732962   86402 logs.go:282] 0 containers: []
	W1104 12:10:34.732970   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:10:34.732978   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:10:34.733023   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:10:34.764580   86402 cri.go:89] found id: ""
	I1104 12:10:34.764610   86402 logs.go:282] 0 containers: []
	W1104 12:10:34.764618   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:10:34.764624   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:10:34.764680   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:10:34.798030   86402 cri.go:89] found id: ""
	I1104 12:10:34.798053   86402 logs.go:282] 0 containers: []
	W1104 12:10:34.798061   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:10:34.798067   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:10:34.798115   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:10:34.829847   86402 cri.go:89] found id: ""
	I1104 12:10:34.829876   86402 logs.go:282] 0 containers: []
	W1104 12:10:34.829884   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:10:34.829889   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:10:34.829946   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:10:34.862764   86402 cri.go:89] found id: ""
	I1104 12:10:34.862792   86402 logs.go:282] 0 containers: []
	W1104 12:10:34.862804   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:10:34.862815   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:10:34.862828   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:10:34.912367   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:10:34.912397   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:10:34.925347   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:10:34.925383   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:10:34.990459   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:10:34.990486   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:10:34.990502   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:10:35.066765   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:10:35.066796   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:10:36.706912   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:38.707144   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:37.056279   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:39.555433   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:38.349986   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:40.354694   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:37.602696   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:10:37.615041   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:10:37.615115   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:10:37.646872   86402 cri.go:89] found id: ""
	I1104 12:10:37.646900   86402 logs.go:282] 0 containers: []
	W1104 12:10:37.646911   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:10:37.646918   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:10:37.646977   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:10:37.679770   86402 cri.go:89] found id: ""
	I1104 12:10:37.679797   86402 logs.go:282] 0 containers: []
	W1104 12:10:37.679805   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:10:37.679810   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:10:37.679867   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:10:37.711693   86402 cri.go:89] found id: ""
	I1104 12:10:37.711720   86402 logs.go:282] 0 containers: []
	W1104 12:10:37.711733   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:10:37.711743   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:10:37.711803   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:10:37.746605   86402 cri.go:89] found id: ""
	I1104 12:10:37.746636   86402 logs.go:282] 0 containers: []
	W1104 12:10:37.746648   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:10:37.746656   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:10:37.746716   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:10:37.778983   86402 cri.go:89] found id: ""
	I1104 12:10:37.779010   86402 logs.go:282] 0 containers: []
	W1104 12:10:37.779020   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:10:37.779026   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:10:37.779086   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:10:37.813293   86402 cri.go:89] found id: ""
	I1104 12:10:37.813321   86402 logs.go:282] 0 containers: []
	W1104 12:10:37.813330   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:10:37.813335   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:10:37.813387   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:10:37.846181   86402 cri.go:89] found id: ""
	I1104 12:10:37.846209   86402 logs.go:282] 0 containers: []
	W1104 12:10:37.846219   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:10:37.846226   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:10:37.846287   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:10:37.877485   86402 cri.go:89] found id: ""
	I1104 12:10:37.877520   86402 logs.go:282] 0 containers: []
	W1104 12:10:37.877531   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:10:37.877541   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:10:37.877558   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:10:37.926704   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:10:37.926733   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:10:37.939771   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:10:37.939796   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:10:38.003762   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:10:38.003783   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:10:38.003800   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:10:38.085419   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:10:38.085456   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:10:40.625351   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:10:40.637380   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:10:40.637459   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:10:40.670274   86402 cri.go:89] found id: ""
	I1104 12:10:40.670303   86402 logs.go:282] 0 containers: []
	W1104 12:10:40.670315   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:10:40.670322   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:10:40.670382   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:10:40.703383   86402 cri.go:89] found id: ""
	I1104 12:10:40.703414   86402 logs.go:282] 0 containers: []
	W1104 12:10:40.703427   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:10:40.703434   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:10:40.703481   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:10:40.739549   86402 cri.go:89] found id: ""
	I1104 12:10:40.739576   86402 logs.go:282] 0 containers: []
	W1104 12:10:40.739586   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:10:40.739594   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:10:40.739651   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:10:40.775466   86402 cri.go:89] found id: ""
	I1104 12:10:40.775492   86402 logs.go:282] 0 containers: []
	W1104 12:10:40.775502   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:10:40.775513   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:10:40.775567   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:10:40.810486   86402 cri.go:89] found id: ""
	I1104 12:10:40.810515   86402 logs.go:282] 0 containers: []
	W1104 12:10:40.810525   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:10:40.810533   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:10:40.810593   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:10:40.844277   86402 cri.go:89] found id: ""
	I1104 12:10:40.844309   86402 logs.go:282] 0 containers: []
	W1104 12:10:40.844321   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:10:40.844329   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:10:40.844391   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:10:40.878699   86402 cri.go:89] found id: ""
	I1104 12:10:40.878728   86402 logs.go:282] 0 containers: []
	W1104 12:10:40.878739   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:10:40.878746   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:10:40.878804   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:10:40.913888   86402 cri.go:89] found id: ""
	I1104 12:10:40.913913   86402 logs.go:282] 0 containers: []
	W1104 12:10:40.913921   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:10:40.913929   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:10:40.913939   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:10:40.966854   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:10:40.966892   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:10:40.980483   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:10:40.980510   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:10:41.046059   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:10:41.046085   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:10:41.046100   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:10:41.129746   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:10:41.129779   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:10:40.707964   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:43.207804   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:42.057019   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:44.555947   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:42.850057   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:44.851467   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:43.667029   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:10:43.680024   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:10:43.680092   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:10:43.714185   86402 cri.go:89] found id: ""
	I1104 12:10:43.714218   86402 logs.go:282] 0 containers: []
	W1104 12:10:43.714227   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:10:43.714235   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:10:43.714294   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:10:43.749493   86402 cri.go:89] found id: ""
	I1104 12:10:43.749515   86402 logs.go:282] 0 containers: []
	W1104 12:10:43.749523   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:10:43.749529   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:10:43.749588   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:10:43.785400   86402 cri.go:89] found id: ""
	I1104 12:10:43.785426   86402 logs.go:282] 0 containers: []
	W1104 12:10:43.785437   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:10:43.785444   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:10:43.785507   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:10:43.818465   86402 cri.go:89] found id: ""
	I1104 12:10:43.818505   86402 logs.go:282] 0 containers: []
	W1104 12:10:43.818517   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:10:43.818524   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:10:43.818573   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:10:43.850232   86402 cri.go:89] found id: ""
	I1104 12:10:43.850262   86402 logs.go:282] 0 containers: []
	W1104 12:10:43.850272   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:10:43.850279   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:10:43.850337   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:10:43.882806   86402 cri.go:89] found id: ""
	I1104 12:10:43.882840   86402 logs.go:282] 0 containers: []
	W1104 12:10:43.882851   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:10:43.882859   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:10:43.882920   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:10:43.919449   86402 cri.go:89] found id: ""
	I1104 12:10:43.919476   86402 logs.go:282] 0 containers: []
	W1104 12:10:43.919486   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:10:43.919493   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:10:43.919556   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:10:43.953761   86402 cri.go:89] found id: ""
	I1104 12:10:43.953791   86402 logs.go:282] 0 containers: []
	W1104 12:10:43.953801   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:10:43.953812   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:10:43.953825   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:10:44.005559   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:10:44.005594   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:10:44.019431   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:10:44.019456   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:10:44.094436   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:10:44.094457   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:10:44.094470   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:10:44.174026   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:10:44.174061   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:10:45.707449   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:47.709901   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:46.557050   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:48.557552   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:46.851720   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:49.350269   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:46.712021   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:10:46.724258   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:10:46.724318   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:10:46.754472   86402 cri.go:89] found id: ""
	I1104 12:10:46.754501   86402 logs.go:282] 0 containers: []
	W1104 12:10:46.754510   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:10:46.754515   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:10:46.754563   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:10:46.790184   86402 cri.go:89] found id: ""
	I1104 12:10:46.790209   86402 logs.go:282] 0 containers: []
	W1104 12:10:46.790219   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:10:46.790226   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:10:46.790284   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:10:46.824840   86402 cri.go:89] found id: ""
	I1104 12:10:46.824865   86402 logs.go:282] 0 containers: []
	W1104 12:10:46.824875   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:10:46.824882   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:10:46.824952   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:10:46.857295   86402 cri.go:89] found id: ""
	I1104 12:10:46.857329   86402 logs.go:282] 0 containers: []
	W1104 12:10:46.857360   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:10:46.857369   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:10:46.857430   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:10:46.889540   86402 cri.go:89] found id: ""
	I1104 12:10:46.889571   86402 logs.go:282] 0 containers: []
	W1104 12:10:46.889582   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:10:46.889588   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:10:46.889652   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:10:46.930165   86402 cri.go:89] found id: ""
	I1104 12:10:46.930195   86402 logs.go:282] 0 containers: []
	W1104 12:10:46.930204   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:10:46.930210   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:10:46.930266   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:10:46.965964   86402 cri.go:89] found id: ""
	I1104 12:10:46.965994   86402 logs.go:282] 0 containers: []
	W1104 12:10:46.966006   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:10:46.966013   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:10:46.966060   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:10:47.002700   86402 cri.go:89] found id: ""
	I1104 12:10:47.002732   86402 logs.go:282] 0 containers: []
	W1104 12:10:47.002741   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:10:47.002749   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:10:47.002760   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:10:47.056362   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:10:47.056392   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:10:47.070447   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:10:47.070472   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:10:47.143207   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:10:47.143240   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:10:47.143256   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:10:47.223985   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:10:47.224015   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:10:49.765870   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:10:49.778288   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:10:49.778352   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:10:49.812012   86402 cri.go:89] found id: ""
	I1104 12:10:49.812044   86402 logs.go:282] 0 containers: []
	W1104 12:10:49.812054   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:10:49.812064   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:10:49.812115   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:10:49.847260   86402 cri.go:89] found id: ""
	I1104 12:10:49.847290   86402 logs.go:282] 0 containers: []
	W1104 12:10:49.847301   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:10:49.847308   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:10:49.847361   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:10:49.877397   86402 cri.go:89] found id: ""
	I1104 12:10:49.877419   86402 logs.go:282] 0 containers: []
	W1104 12:10:49.877427   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:10:49.877432   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:10:49.877486   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:10:49.912453   86402 cri.go:89] found id: ""
	I1104 12:10:49.912484   86402 logs.go:282] 0 containers: []
	W1104 12:10:49.912499   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:10:49.912506   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:10:49.912572   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:10:49.948374   86402 cri.go:89] found id: ""
	I1104 12:10:49.948404   86402 logs.go:282] 0 containers: []
	W1104 12:10:49.948416   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:10:49.948422   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:10:49.948488   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:10:49.982190   86402 cri.go:89] found id: ""
	I1104 12:10:49.982216   86402 logs.go:282] 0 containers: []
	W1104 12:10:49.982228   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:10:49.982236   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:10:49.982294   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:10:50.014396   86402 cri.go:89] found id: ""
	I1104 12:10:50.014426   86402 logs.go:282] 0 containers: []
	W1104 12:10:50.014437   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:10:50.014445   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:10:50.014507   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:10:50.051770   86402 cri.go:89] found id: ""
	I1104 12:10:50.051793   86402 logs.go:282] 0 containers: []
	W1104 12:10:50.051801   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:10:50.051809   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:10:50.051820   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:10:50.116158   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:10:50.116185   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:10:50.116202   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:10:50.194382   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:10:50.194431   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:10:50.235957   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:10:50.235983   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:10:50.290720   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:10:50.290750   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:10:50.207837   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:52.207972   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:54.208026   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:51.055965   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:53.056014   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:55.056318   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:51.850513   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:54.351193   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:52.805144   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:10:52.817686   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:10:52.817753   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:10:52.852470   86402 cri.go:89] found id: ""
	I1104 12:10:52.852492   86402 logs.go:282] 0 containers: []
	W1104 12:10:52.852546   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:10:52.852559   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:10:52.852603   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:10:52.889682   86402 cri.go:89] found id: ""
	I1104 12:10:52.889705   86402 logs.go:282] 0 containers: []
	W1104 12:10:52.889714   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:10:52.889720   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:10:52.889773   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:10:52.924490   86402 cri.go:89] found id: ""
	I1104 12:10:52.924525   86402 logs.go:282] 0 containers: []
	W1104 12:10:52.924537   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:10:52.924544   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:10:52.924604   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:10:52.957055   86402 cri.go:89] found id: ""
	I1104 12:10:52.957085   86402 logs.go:282] 0 containers: []
	W1104 12:10:52.957094   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:10:52.957099   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:10:52.957143   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:10:52.993379   86402 cri.go:89] found id: ""
	I1104 12:10:52.993411   86402 logs.go:282] 0 containers: []
	W1104 12:10:52.993423   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:10:52.993430   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:10:52.993493   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:10:53.027365   86402 cri.go:89] found id: ""
	I1104 12:10:53.027398   86402 logs.go:282] 0 containers: []
	W1104 12:10:53.027407   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:10:53.027412   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:10:53.027488   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:10:53.061048   86402 cri.go:89] found id: ""
	I1104 12:10:53.061074   86402 logs.go:282] 0 containers: []
	W1104 12:10:53.061082   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:10:53.061089   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:10:53.061163   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:10:53.101867   86402 cri.go:89] found id: ""
	I1104 12:10:53.101894   86402 logs.go:282] 0 containers: []
	W1104 12:10:53.101904   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:10:53.101915   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:10:53.101927   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:10:53.152314   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:10:53.152351   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:10:53.165630   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:10:53.165657   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:10:53.239717   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:10:53.239739   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:10:53.239753   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:10:53.318140   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:10:53.318186   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:10:55.857443   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:10:55.869524   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:10:55.869608   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:10:55.900719   86402 cri.go:89] found id: ""
	I1104 12:10:55.900743   86402 logs.go:282] 0 containers: []
	W1104 12:10:55.900753   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:10:55.900761   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:10:55.900821   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:10:55.932699   86402 cri.go:89] found id: ""
	I1104 12:10:55.932724   86402 logs.go:282] 0 containers: []
	W1104 12:10:55.932734   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:10:55.932741   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:10:55.932798   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:10:55.964729   86402 cri.go:89] found id: ""
	I1104 12:10:55.964758   86402 logs.go:282] 0 containers: []
	W1104 12:10:55.964767   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:10:55.964775   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:10:55.964823   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:10:55.997870   86402 cri.go:89] found id: ""
	I1104 12:10:55.997897   86402 logs.go:282] 0 containers: []
	W1104 12:10:55.997907   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:10:55.997915   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:10:55.997977   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:10:56.031707   86402 cri.go:89] found id: ""
	I1104 12:10:56.031736   86402 logs.go:282] 0 containers: []
	W1104 12:10:56.031744   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:10:56.031749   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:10:56.031805   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:10:56.070839   86402 cri.go:89] found id: ""
	I1104 12:10:56.070863   86402 logs.go:282] 0 containers: []
	W1104 12:10:56.070871   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:10:56.070877   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:10:56.070922   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:10:56.109364   86402 cri.go:89] found id: ""
	I1104 12:10:56.109393   86402 logs.go:282] 0 containers: []
	W1104 12:10:56.109404   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:10:56.109412   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:10:56.109474   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:10:56.143369   86402 cri.go:89] found id: ""
	I1104 12:10:56.143402   86402 logs.go:282] 0 containers: []
	W1104 12:10:56.143414   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:10:56.143424   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:10:56.143437   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:10:56.156924   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:10:56.156952   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:10:56.223624   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:10:56.223647   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:10:56.223659   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:10:56.302040   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:10:56.302082   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:10:56.343102   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:10:56.343150   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:10:56.209085   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:58.712250   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:57.056463   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:59.555744   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:56.850242   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:58.850955   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:58.896551   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:10:58.909034   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:10:58.909110   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:10:58.944520   86402 cri.go:89] found id: ""
	I1104 12:10:58.944550   86402 logs.go:282] 0 containers: []
	W1104 12:10:58.944559   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:10:58.944565   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:10:58.944612   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:10:58.980137   86402 cri.go:89] found id: ""
	I1104 12:10:58.980167   86402 logs.go:282] 0 containers: []
	W1104 12:10:58.980176   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:10:58.980181   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:10:58.980231   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:10:59.014505   86402 cri.go:89] found id: ""
	I1104 12:10:59.014536   86402 logs.go:282] 0 containers: []
	W1104 12:10:59.014545   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:10:59.014551   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:10:59.014602   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:10:59.050616   86402 cri.go:89] found id: ""
	I1104 12:10:59.050642   86402 logs.go:282] 0 containers: []
	W1104 12:10:59.050652   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:10:59.050659   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:10:59.050718   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:10:59.084328   86402 cri.go:89] found id: ""
	I1104 12:10:59.084358   86402 logs.go:282] 0 containers: []
	W1104 12:10:59.084369   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:10:59.084376   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:10:59.084449   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:10:59.116607   86402 cri.go:89] found id: ""
	I1104 12:10:59.116633   86402 logs.go:282] 0 containers: []
	W1104 12:10:59.116642   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:10:59.116649   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:10:59.116711   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:10:59.149727   86402 cri.go:89] found id: ""
	I1104 12:10:59.149754   86402 logs.go:282] 0 containers: []
	W1104 12:10:59.149765   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:10:59.149773   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:10:59.149832   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:10:59.182992   86402 cri.go:89] found id: ""
	I1104 12:10:59.183023   86402 logs.go:282] 0 containers: []
	W1104 12:10:59.183035   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:10:59.183045   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:10:59.183059   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:10:59.234826   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:10:59.234862   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:10:59.248401   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:10:59.248427   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:10:59.317143   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:10:59.317171   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:10:59.317186   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:10:59.397294   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:10:59.397336   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:11:01.208022   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:03.707297   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:01.556680   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:04.055902   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:01.350865   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:03.850510   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:01.933617   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:11:01.946458   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:11:01.946537   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:11:01.981652   86402 cri.go:89] found id: ""
	I1104 12:11:01.981682   86402 logs.go:282] 0 containers: []
	W1104 12:11:01.981693   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:11:01.981701   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:11:01.981757   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:11:02.014245   86402 cri.go:89] found id: ""
	I1104 12:11:02.014273   86402 logs.go:282] 0 containers: []
	W1104 12:11:02.014282   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:11:02.014287   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:11:02.014350   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:11:02.047386   86402 cri.go:89] found id: ""
	I1104 12:11:02.047409   86402 logs.go:282] 0 containers: []
	W1104 12:11:02.047420   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:11:02.047427   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:11:02.047488   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:11:02.086427   86402 cri.go:89] found id: ""
	I1104 12:11:02.086464   86402 logs.go:282] 0 containers: []
	W1104 12:11:02.086475   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:11:02.086483   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:11:02.086544   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:11:02.120219   86402 cri.go:89] found id: ""
	I1104 12:11:02.120246   86402 logs.go:282] 0 containers: []
	W1104 12:11:02.120255   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:11:02.120260   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:11:02.120318   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:11:02.153832   86402 cri.go:89] found id: ""
	I1104 12:11:02.153864   86402 logs.go:282] 0 containers: []
	W1104 12:11:02.153876   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:11:02.153884   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:11:02.153950   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:11:02.186237   86402 cri.go:89] found id: ""
	I1104 12:11:02.186266   86402 logs.go:282] 0 containers: []
	W1104 12:11:02.186278   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:11:02.186285   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:11:02.186351   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:11:02.219238   86402 cri.go:89] found id: ""
	I1104 12:11:02.219269   86402 logs.go:282] 0 containers: []
	W1104 12:11:02.219280   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:11:02.219290   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:11:02.219301   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:11:02.301062   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:11:02.301099   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:11:02.358585   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:11:02.358617   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:11:02.414153   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:11:02.414200   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:11:02.428429   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:11:02.428456   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:11:02.497040   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:11:04.998089   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:11:05.010890   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:11:05.010947   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:11:05.046483   86402 cri.go:89] found id: ""
	I1104 12:11:05.046513   86402 logs.go:282] 0 containers: []
	W1104 12:11:05.046523   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:11:05.046534   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:11:05.046594   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:11:05.079487   86402 cri.go:89] found id: ""
	I1104 12:11:05.079516   86402 logs.go:282] 0 containers: []
	W1104 12:11:05.079527   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:11:05.079535   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:11:05.079595   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:11:05.110968   86402 cri.go:89] found id: ""
	I1104 12:11:05.110997   86402 logs.go:282] 0 containers: []
	W1104 12:11:05.111004   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:11:05.111010   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:11:05.111057   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:11:05.143372   86402 cri.go:89] found id: ""
	I1104 12:11:05.143398   86402 logs.go:282] 0 containers: []
	W1104 12:11:05.143408   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:11:05.143415   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:11:05.143484   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:11:05.174691   86402 cri.go:89] found id: ""
	I1104 12:11:05.174717   86402 logs.go:282] 0 containers: []
	W1104 12:11:05.174730   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:11:05.174737   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:11:05.174802   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:11:05.210005   86402 cri.go:89] found id: ""
	I1104 12:11:05.210025   86402 logs.go:282] 0 containers: []
	W1104 12:11:05.210033   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:11:05.210041   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:11:05.210085   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:11:05.244874   86402 cri.go:89] found id: ""
	I1104 12:11:05.244899   86402 logs.go:282] 0 containers: []
	W1104 12:11:05.244908   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:11:05.244913   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:11:05.244956   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:11:05.276517   86402 cri.go:89] found id: ""
	I1104 12:11:05.276547   86402 logs.go:282] 0 containers: []
	W1104 12:11:05.276557   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:11:05.276568   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:11:05.276581   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:11:05.354057   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:11:05.354087   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:11:05.390848   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:11:05.390887   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:11:05.442659   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:11:05.442692   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:11:05.456290   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:11:05.456315   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:11:05.530310   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:11:06.207301   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:08.208333   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:06.056314   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:08.556910   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:06.350241   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:08.350774   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:10.351274   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:08.030545   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:11:08.043598   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:11:08.043654   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:11:08.081604   86402 cri.go:89] found id: ""
	I1104 12:11:08.081634   86402 logs.go:282] 0 containers: []
	W1104 12:11:08.081644   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:11:08.081652   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:11:08.081712   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:11:08.135357   86402 cri.go:89] found id: ""
	I1104 12:11:08.135388   86402 logs.go:282] 0 containers: []
	W1104 12:11:08.135398   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:11:08.135405   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:11:08.135470   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:11:08.173275   86402 cri.go:89] found id: ""
	I1104 12:11:08.173298   86402 logs.go:282] 0 containers: []
	W1104 12:11:08.173306   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:11:08.173311   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:11:08.173371   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:11:08.213415   86402 cri.go:89] found id: ""
	I1104 12:11:08.213439   86402 logs.go:282] 0 containers: []
	W1104 12:11:08.213448   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:11:08.213454   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:11:08.213507   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:11:08.244759   86402 cri.go:89] found id: ""
	I1104 12:11:08.244791   86402 logs.go:282] 0 containers: []
	W1104 12:11:08.244802   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:11:08.244809   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:11:08.244870   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:11:08.276643   86402 cri.go:89] found id: ""
	I1104 12:11:08.276666   86402 logs.go:282] 0 containers: []
	W1104 12:11:08.276675   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:11:08.276682   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:11:08.276751   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:11:08.308425   86402 cri.go:89] found id: ""
	I1104 12:11:08.308451   86402 logs.go:282] 0 containers: []
	W1104 12:11:08.308462   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:11:08.308469   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:11:08.308527   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:11:08.340645   86402 cri.go:89] found id: ""
	I1104 12:11:08.340675   86402 logs.go:282] 0 containers: []
	W1104 12:11:08.340687   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:11:08.340698   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:11:08.340712   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:11:08.413171   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:11:08.413196   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:11:08.413214   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:11:08.496208   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:11:08.496246   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:11:08.534527   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:11:08.534560   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:11:08.583515   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:11:08.583550   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:11:11.099000   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:11:11.112158   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:11:11.112236   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:11:11.145718   86402 cri.go:89] found id: ""
	I1104 12:11:11.145748   86402 logs.go:282] 0 containers: []
	W1104 12:11:11.145758   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:11:11.145765   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:11:11.145958   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:11:11.177270   86402 cri.go:89] found id: ""
	I1104 12:11:11.177301   86402 logs.go:282] 0 containers: []
	W1104 12:11:11.177317   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:11:11.177325   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:11:11.177396   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:11:11.209696   86402 cri.go:89] found id: ""
	I1104 12:11:11.209722   86402 logs.go:282] 0 containers: []
	W1104 12:11:11.209737   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:11:11.209742   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:11:11.209789   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:11:11.244034   86402 cri.go:89] found id: ""
	I1104 12:11:11.244061   86402 logs.go:282] 0 containers: []
	W1104 12:11:11.244069   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:11:11.244078   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:11:11.244135   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:11:11.276437   86402 cri.go:89] found id: ""
	I1104 12:11:11.276462   86402 logs.go:282] 0 containers: []
	W1104 12:11:11.276470   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:11:11.276476   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:11:11.276530   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:11:11.308954   86402 cri.go:89] found id: ""
	I1104 12:11:11.308980   86402 logs.go:282] 0 containers: []
	W1104 12:11:11.308988   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:11:11.308994   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:11:11.309057   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:11:11.342175   86402 cri.go:89] found id: ""
	I1104 12:11:11.342199   86402 logs.go:282] 0 containers: []
	W1104 12:11:11.342207   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:11:11.342211   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:11:11.342266   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:11:11.374810   86402 cri.go:89] found id: ""
	I1104 12:11:11.374839   86402 logs.go:282] 0 containers: []
	W1104 12:11:11.374851   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:11:11.374860   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:11:11.374875   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:11:11.443638   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:11:11.443667   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:11:11.443681   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:11:11.526996   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:11:11.527031   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:11:11.568297   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:11:11.568325   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:11:11.616229   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:11:11.616264   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
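The cycle above is minikube's log gatherer (logs.go) probing a control plane that never came up: every crictl query, for kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet and kubernetes-dashboard, returns an empty ID list, and the describe-nodes fallback is refused on localhost:8443. A minimal manual sketch of the same checks on the node (assuming shell access, e.g. via `minikube ssh`; the binary path and v1.20.0 version are copied from the log above):

	  # Ask the CRI runtime whether any apiserver container exists at all;
	  # the log above shows this returning an empty list (found id: "").
	  sudo crictl ps -a --quiet --name=kube-apiserver

	  # The describe-nodes call logs.go keeps retrying; it fails with
	  # "connection refused" while nothing is listening on localhost:8443.
	  sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
	    --kubeconfig=/var/lib/minikube/kubeconfig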
	I1104 12:11:10.707934   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:12.708053   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:11.055469   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:13.055645   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:15.057348   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:12.851253   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:15.350857   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
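Interleaved with that gather loop, three other runs (PIDs 85500, 86301 and 85759) keep polling their metrics-server pods, which never report Ready. A rough manual equivalent of the condition pod_ready.go:103 is checking, using one of the pod names from the log, would be:

	  # Print the pod's Ready condition; "False" matches the status logged above.
	  kubectl -n kube-system get pod metrics-server-6867b74b74-2lxlg \
	    -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'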
	I1104 12:11:14.130707   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:11:14.143045   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:11:14.143116   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:11:14.185422   86402 cri.go:89] found id: ""
	I1104 12:11:14.185461   86402 logs.go:282] 0 containers: []
	W1104 12:11:14.185471   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:11:14.185477   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:11:14.185525   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:11:14.219890   86402 cri.go:89] found id: ""
	I1104 12:11:14.219918   86402 logs.go:282] 0 containers: []
	W1104 12:11:14.219928   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:11:14.219938   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:11:14.219985   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:11:14.253256   86402 cri.go:89] found id: ""
	I1104 12:11:14.253286   86402 logs.go:282] 0 containers: []
	W1104 12:11:14.253296   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:11:14.253304   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:11:14.253364   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:11:14.286228   86402 cri.go:89] found id: ""
	I1104 12:11:14.286259   86402 logs.go:282] 0 containers: []
	W1104 12:11:14.286271   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:11:14.286279   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:11:14.286342   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:11:14.317065   86402 cri.go:89] found id: ""
	I1104 12:11:14.317091   86402 logs.go:282] 0 containers: []
	W1104 12:11:14.317101   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:11:14.317106   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:11:14.317168   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:11:14.348540   86402 cri.go:89] found id: ""
	I1104 12:11:14.348575   86402 logs.go:282] 0 containers: []
	W1104 12:11:14.348583   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:11:14.348589   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:11:14.348647   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:11:14.380824   86402 cri.go:89] found id: ""
	I1104 12:11:14.380849   86402 logs.go:282] 0 containers: []
	W1104 12:11:14.380858   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:11:14.380863   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:11:14.380924   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:11:14.413757   86402 cri.go:89] found id: ""
	I1104 12:11:14.413785   86402 logs.go:282] 0 containers: []
	W1104 12:11:14.413796   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:11:14.413806   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:11:14.413822   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:11:14.479311   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:11:14.479336   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:11:14.479349   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:11:14.572923   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:11:14.572959   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:11:14.620277   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:11:14.620359   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:11:14.674276   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:11:14.674310   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:11:15.208704   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:17.708523   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:17.555941   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:19.556233   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:17.351751   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:19.851087   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:17.187062   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:11:17.200179   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:11:17.200260   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:11:17.232208   86402 cri.go:89] found id: ""
	I1104 12:11:17.232231   86402 logs.go:282] 0 containers: []
	W1104 12:11:17.232238   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:11:17.232244   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:11:17.232298   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:11:17.266224   86402 cri.go:89] found id: ""
	I1104 12:11:17.266248   86402 logs.go:282] 0 containers: []
	W1104 12:11:17.266257   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:11:17.266262   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:11:17.266320   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:11:17.301909   86402 cri.go:89] found id: ""
	I1104 12:11:17.301940   86402 logs.go:282] 0 containers: []
	W1104 12:11:17.301948   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:11:17.301953   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:11:17.302005   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:11:17.339493   86402 cri.go:89] found id: ""
	I1104 12:11:17.339517   86402 logs.go:282] 0 containers: []
	W1104 12:11:17.339530   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:11:17.339537   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:11:17.339600   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:11:17.373879   86402 cri.go:89] found id: ""
	I1104 12:11:17.373927   86402 logs.go:282] 0 containers: []
	W1104 12:11:17.373938   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:11:17.373945   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:11:17.373996   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:11:17.405533   86402 cri.go:89] found id: ""
	I1104 12:11:17.405562   86402 logs.go:282] 0 containers: []
	W1104 12:11:17.405573   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:11:17.405583   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:11:17.405645   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:11:17.439421   86402 cri.go:89] found id: ""
	I1104 12:11:17.439451   86402 logs.go:282] 0 containers: []
	W1104 12:11:17.439460   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:11:17.439468   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:11:17.439532   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:11:17.474573   86402 cri.go:89] found id: ""
	I1104 12:11:17.474602   86402 logs.go:282] 0 containers: []
	W1104 12:11:17.474613   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:11:17.474623   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:11:17.474636   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:11:17.524497   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:11:17.524536   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:11:17.538421   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:11:17.538460   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:11:17.607299   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:11:17.607323   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:11:17.607337   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:11:17.684181   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:11:17.684224   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
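The "container status" step quoted above uses a fallback chain: it runs crictl (by full path when `which` finds it) and only falls back to `docker ps -a` if that command fails, so the same line works on Docker-based and CRI-O-based nodes:

	  # Run crictl if available; if the crictl invocation fails, list containers with docker instead.
	  sudo `which crictl || echo crictl` ps -a || sudo docker ps -a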
	I1104 12:11:20.223600   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:11:20.237793   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:11:20.237865   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:11:20.279656   86402 cri.go:89] found id: ""
	I1104 12:11:20.279682   86402 logs.go:282] 0 containers: []
	W1104 12:11:20.279693   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:11:20.279700   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:11:20.279767   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:11:20.337980   86402 cri.go:89] found id: ""
	I1104 12:11:20.338009   86402 logs.go:282] 0 containers: []
	W1104 12:11:20.338020   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:11:20.338027   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:11:20.338087   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:11:20.383183   86402 cri.go:89] found id: ""
	I1104 12:11:20.383217   86402 logs.go:282] 0 containers: []
	W1104 12:11:20.383226   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:11:20.383231   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:11:20.383282   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:11:20.416470   86402 cri.go:89] found id: ""
	I1104 12:11:20.416495   86402 logs.go:282] 0 containers: []
	W1104 12:11:20.416505   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:11:20.416512   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:11:20.416570   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:11:20.451968   86402 cri.go:89] found id: ""
	I1104 12:11:20.452000   86402 logs.go:282] 0 containers: []
	W1104 12:11:20.452011   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:11:20.452017   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:11:20.452074   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:11:20.484800   86402 cri.go:89] found id: ""
	I1104 12:11:20.484823   86402 logs.go:282] 0 containers: []
	W1104 12:11:20.484831   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:11:20.484837   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:11:20.484893   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:11:20.516263   86402 cri.go:89] found id: ""
	I1104 12:11:20.516292   86402 logs.go:282] 0 containers: []
	W1104 12:11:20.516300   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:11:20.516306   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:11:20.516364   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:11:20.548616   86402 cri.go:89] found id: ""
	I1104 12:11:20.548640   86402 logs.go:282] 0 containers: []
	W1104 12:11:20.548651   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:11:20.548661   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:11:20.548674   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:11:20.599338   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:11:20.599368   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:11:20.613116   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:11:20.613148   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:11:20.678898   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:11:20.678924   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:11:20.678936   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:11:20.757570   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:11:20.757606   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:11:20.206649   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:22.207379   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:24.207579   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:22.056670   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:24.555101   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:22.350891   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:24.351318   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:23.293912   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:11:23.307037   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:11:23.307110   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:11:23.341161   86402 cri.go:89] found id: ""
	I1104 12:11:23.341186   86402 logs.go:282] 0 containers: []
	W1104 12:11:23.341195   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:11:23.341200   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:11:23.341277   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:11:23.373462   86402 cri.go:89] found id: ""
	I1104 12:11:23.373491   86402 logs.go:282] 0 containers: []
	W1104 12:11:23.373503   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:11:23.373510   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:11:23.373568   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:11:23.404439   86402 cri.go:89] found id: ""
	I1104 12:11:23.404471   86402 logs.go:282] 0 containers: []
	W1104 12:11:23.404482   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:11:23.404489   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:11:23.404548   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:11:23.435224   86402 cri.go:89] found id: ""
	I1104 12:11:23.435256   86402 logs.go:282] 0 containers: []
	W1104 12:11:23.435267   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:11:23.435274   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:11:23.435336   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:11:23.472593   86402 cri.go:89] found id: ""
	I1104 12:11:23.472622   86402 logs.go:282] 0 containers: []
	W1104 12:11:23.472633   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:11:23.472641   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:11:23.472693   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:11:23.503413   86402 cri.go:89] found id: ""
	I1104 12:11:23.503438   86402 logs.go:282] 0 containers: []
	W1104 12:11:23.503447   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:11:23.503454   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:11:23.503516   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:11:23.537582   86402 cri.go:89] found id: ""
	I1104 12:11:23.537610   86402 logs.go:282] 0 containers: []
	W1104 12:11:23.537621   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:11:23.537628   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:11:23.537689   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:11:23.573799   86402 cri.go:89] found id: ""
	I1104 12:11:23.573824   86402 logs.go:282] 0 containers: []
	W1104 12:11:23.573831   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:11:23.573838   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:11:23.573851   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:11:23.649239   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:11:23.649273   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:11:23.686518   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:11:23.686548   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:11:23.738955   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:11:23.738987   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:11:23.751909   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:11:23.751935   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:11:23.827244   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:11:26.327902   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:11:26.339708   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:11:26.339784   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:11:26.369615   86402 cri.go:89] found id: ""
	I1104 12:11:26.369644   86402 logs.go:282] 0 containers: []
	W1104 12:11:26.369653   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:11:26.369659   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:11:26.369715   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:11:26.402027   86402 cri.go:89] found id: ""
	I1104 12:11:26.402056   86402 logs.go:282] 0 containers: []
	W1104 12:11:26.402065   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:11:26.402070   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:11:26.402123   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:11:26.433483   86402 cri.go:89] found id: ""
	I1104 12:11:26.433512   86402 logs.go:282] 0 containers: []
	W1104 12:11:26.433523   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:11:26.433529   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:11:26.433637   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:11:26.466403   86402 cri.go:89] found id: ""
	I1104 12:11:26.466442   86402 logs.go:282] 0 containers: []
	W1104 12:11:26.466453   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:11:26.466468   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:11:26.466524   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:11:26.499818   86402 cri.go:89] found id: ""
	I1104 12:11:26.499853   86402 logs.go:282] 0 containers: []
	W1104 12:11:26.499864   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:11:26.499871   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:11:26.499930   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:11:26.537782   86402 cri.go:89] found id: ""
	I1104 12:11:26.537809   86402 logs.go:282] 0 containers: []
	W1104 12:11:26.537822   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:11:26.537830   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:11:26.537890   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:11:26.574091   86402 cri.go:89] found id: ""
	I1104 12:11:26.574120   86402 logs.go:282] 0 containers: []
	W1104 12:11:26.574131   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:11:26.574138   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:11:26.574199   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:11:26.607554   86402 cri.go:89] found id: ""
	I1104 12:11:26.607584   86402 logs.go:282] 0 containers: []
	W1104 12:11:26.607596   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:11:26.607606   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:11:26.607620   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:11:26.657405   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:11:26.657443   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:11:26.670022   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:11:26.670046   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1104 12:11:26.707958   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:29.207380   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:26.556568   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:28.557276   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:26.852761   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:29.351303   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	W1104 12:11:26.736238   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:11:26.736266   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:11:26.736278   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:11:26.816277   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:11:26.816309   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:11:29.357639   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:11:29.371116   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:11:29.371204   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:11:29.405569   86402 cri.go:89] found id: ""
	I1104 12:11:29.405595   86402 logs.go:282] 0 containers: []
	W1104 12:11:29.405604   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:11:29.405611   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:11:29.405668   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:11:29.435669   86402 cri.go:89] found id: ""
	I1104 12:11:29.435697   86402 logs.go:282] 0 containers: []
	W1104 12:11:29.435709   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:11:29.435716   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:11:29.435781   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:11:29.476208   86402 cri.go:89] found id: ""
	I1104 12:11:29.476236   86402 logs.go:282] 0 containers: []
	W1104 12:11:29.476245   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:11:29.476251   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:11:29.476305   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:11:29.511446   86402 cri.go:89] found id: ""
	I1104 12:11:29.511474   86402 logs.go:282] 0 containers: []
	W1104 12:11:29.511483   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:11:29.511489   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:11:29.511541   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:11:29.543714   86402 cri.go:89] found id: ""
	I1104 12:11:29.543742   86402 logs.go:282] 0 containers: []
	W1104 12:11:29.543754   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:11:29.543761   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:11:29.543840   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:11:29.577429   86402 cri.go:89] found id: ""
	I1104 12:11:29.577456   86402 logs.go:282] 0 containers: []
	W1104 12:11:29.577466   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:11:29.577473   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:11:29.577534   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:11:29.608430   86402 cri.go:89] found id: ""
	I1104 12:11:29.608457   86402 logs.go:282] 0 containers: []
	W1104 12:11:29.608475   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:11:29.608483   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:11:29.608539   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:11:29.640029   86402 cri.go:89] found id: ""
	I1104 12:11:29.640057   86402 logs.go:282] 0 containers: []
	W1104 12:11:29.640068   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:11:29.640078   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:11:29.640092   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:11:29.691170   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:11:29.691202   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:11:29.704949   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:11:29.704987   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:11:29.766856   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:11:29.766884   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:11:29.766898   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:11:29.847487   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:11:29.847525   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:11:31.208725   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:33.709593   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:30.557500   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:33.056569   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:31.851101   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:34.350356   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:32.382925   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:11:32.395889   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:11:32.395943   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:11:32.428711   86402 cri.go:89] found id: ""
	I1104 12:11:32.428736   86402 logs.go:282] 0 containers: []
	W1104 12:11:32.428749   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:11:32.428755   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:11:32.428810   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:11:32.463269   86402 cri.go:89] found id: ""
	I1104 12:11:32.463295   86402 logs.go:282] 0 containers: []
	W1104 12:11:32.463307   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:11:32.463313   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:11:32.463372   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:11:32.496098   86402 cri.go:89] found id: ""
	I1104 12:11:32.496125   86402 logs.go:282] 0 containers: []
	W1104 12:11:32.496135   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:11:32.496142   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:11:32.496213   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:11:32.528729   86402 cri.go:89] found id: ""
	I1104 12:11:32.528760   86402 logs.go:282] 0 containers: []
	W1104 12:11:32.528771   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:11:32.528778   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:11:32.528860   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:11:32.567290   86402 cri.go:89] found id: ""
	I1104 12:11:32.567321   86402 logs.go:282] 0 containers: []
	W1104 12:11:32.567332   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:11:32.567338   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:11:32.567397   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:11:32.608932   86402 cri.go:89] found id: ""
	I1104 12:11:32.608962   86402 logs.go:282] 0 containers: []
	W1104 12:11:32.608973   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:11:32.608980   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:11:32.609037   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:11:32.641128   86402 cri.go:89] found id: ""
	I1104 12:11:32.641155   86402 logs.go:282] 0 containers: []
	W1104 12:11:32.641164   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:11:32.641171   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:11:32.641239   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:11:32.675651   86402 cri.go:89] found id: ""
	I1104 12:11:32.675682   86402 logs.go:282] 0 containers: []
	W1104 12:11:32.675694   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:11:32.675704   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:11:32.675719   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:11:32.742369   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:11:32.742406   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:11:32.742419   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:11:32.823371   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:11:32.823412   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:11:32.862243   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:11:32.862270   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:11:32.910961   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:11:32.910987   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:11:35.425742   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:11:35.438553   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:11:35.438615   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:11:35.475160   86402 cri.go:89] found id: ""
	I1104 12:11:35.475189   86402 logs.go:282] 0 containers: []
	W1104 12:11:35.475201   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:11:35.475209   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:11:35.475267   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:11:35.517193   86402 cri.go:89] found id: ""
	I1104 12:11:35.517239   86402 logs.go:282] 0 containers: []
	W1104 12:11:35.517252   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:11:35.517260   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:11:35.517329   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:11:35.552941   86402 cri.go:89] found id: ""
	I1104 12:11:35.552967   86402 logs.go:282] 0 containers: []
	W1104 12:11:35.552978   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:11:35.552985   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:11:35.553056   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:11:35.589960   86402 cri.go:89] found id: ""
	I1104 12:11:35.589983   86402 logs.go:282] 0 containers: []
	W1104 12:11:35.589994   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:11:35.590001   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:11:35.590063   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:11:35.624546   86402 cri.go:89] found id: ""
	I1104 12:11:35.624575   86402 logs.go:282] 0 containers: []
	W1104 12:11:35.624587   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:11:35.624595   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:11:35.624655   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:11:35.657855   86402 cri.go:89] found id: ""
	I1104 12:11:35.657885   86402 logs.go:282] 0 containers: []
	W1104 12:11:35.657896   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:11:35.657903   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:11:35.657957   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:11:35.691465   86402 cri.go:89] found id: ""
	I1104 12:11:35.691498   86402 logs.go:282] 0 containers: []
	W1104 12:11:35.691509   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:11:35.691516   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:11:35.691587   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:11:35.727520   86402 cri.go:89] found id: ""
	I1104 12:11:35.727548   86402 logs.go:282] 0 containers: []
	W1104 12:11:35.727558   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:11:35.727569   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:11:35.727584   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:11:35.777876   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:11:35.777912   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:11:35.790790   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:11:35.790817   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:11:35.856780   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:11:35.856805   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:11:35.856819   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:11:35.936769   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:11:35.936812   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:11:36.207096   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:38.707776   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:35.556694   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:38.055778   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:36.850946   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:39.350058   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:38.474827   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:11:38.488151   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:11:38.488221   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:11:38.523010   86402 cri.go:89] found id: ""
	I1104 12:11:38.523042   86402 logs.go:282] 0 containers: []
	W1104 12:11:38.523053   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:11:38.523061   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:11:38.523117   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:11:38.558065   86402 cri.go:89] found id: ""
	I1104 12:11:38.558093   86402 logs.go:282] 0 containers: []
	W1104 12:11:38.558102   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:11:38.558107   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:11:38.558153   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:11:38.590676   86402 cri.go:89] found id: ""
	I1104 12:11:38.590704   86402 logs.go:282] 0 containers: []
	W1104 12:11:38.590715   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:11:38.590723   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:11:38.590780   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:11:38.623762   86402 cri.go:89] found id: ""
	I1104 12:11:38.623793   86402 logs.go:282] 0 containers: []
	W1104 12:11:38.623804   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:11:38.623811   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:11:38.623870   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:11:38.655918   86402 cri.go:89] found id: ""
	I1104 12:11:38.655947   86402 logs.go:282] 0 containers: []
	W1104 12:11:38.655958   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:11:38.655966   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:11:38.656028   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:11:38.691200   86402 cri.go:89] found id: ""
	I1104 12:11:38.691228   86402 logs.go:282] 0 containers: []
	W1104 12:11:38.691238   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:11:38.691245   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:11:38.691302   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:11:38.724725   86402 cri.go:89] found id: ""
	I1104 12:11:38.724748   86402 logs.go:282] 0 containers: []
	W1104 12:11:38.724756   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:11:38.724761   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:11:38.724819   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:11:38.756333   86402 cri.go:89] found id: ""
	I1104 12:11:38.756360   86402 logs.go:282] 0 containers: []
	W1104 12:11:38.756370   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:11:38.756381   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:11:38.756395   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:11:38.807722   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:11:38.807756   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:11:38.821055   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:11:38.821079   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:11:38.886629   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:11:38.886656   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:11:38.886671   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:11:38.960958   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:11:38.960999   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:11:41.503471   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:11:41.515994   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:11:41.516065   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:11:41.549936   86402 cri.go:89] found id: ""
	I1104 12:11:41.549960   86402 logs.go:282] 0 containers: []
	W1104 12:11:41.549968   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:11:41.549975   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:11:41.550033   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:11:41.584565   86402 cri.go:89] found id: ""
	I1104 12:11:41.584590   86402 logs.go:282] 0 containers: []
	W1104 12:11:41.584602   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:11:41.584610   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:11:41.584660   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:11:41.616427   86402 cri.go:89] found id: ""
	I1104 12:11:41.616450   86402 logs.go:282] 0 containers: []
	W1104 12:11:41.616458   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:11:41.616463   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:11:41.616510   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:11:41.650835   86402 cri.go:89] found id: ""
	I1104 12:11:41.650864   86402 logs.go:282] 0 containers: []
	W1104 12:11:41.650875   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:11:41.650882   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:11:41.650946   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:11:40.707926   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:43.207969   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:40.555616   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:42.555839   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:44.556749   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:41.351131   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:43.851925   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:41.685899   86402 cri.go:89] found id: ""
	I1104 12:11:41.685921   86402 logs.go:282] 0 containers: []
	W1104 12:11:41.685928   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:11:41.685934   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:11:41.685979   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:11:41.718730   86402 cri.go:89] found id: ""
	I1104 12:11:41.718757   86402 logs.go:282] 0 containers: []
	W1104 12:11:41.718773   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:11:41.718782   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:11:41.718837   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:11:41.748843   86402 cri.go:89] found id: ""
	I1104 12:11:41.748875   86402 logs.go:282] 0 containers: []
	W1104 12:11:41.748887   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:11:41.748895   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:11:41.748963   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:11:41.780225   86402 cri.go:89] found id: ""
	I1104 12:11:41.780251   86402 logs.go:282] 0 containers: []
	W1104 12:11:41.780260   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:11:41.780268   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:11:41.780285   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:11:41.830864   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:11:41.830893   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:11:41.844252   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:11:41.844279   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:11:41.908514   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:11:41.908542   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:11:41.908554   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:11:41.988545   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:11:41.988582   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
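The cycle above is the diagnostic loop minikube falls into while waiting for the control plane of the cluster driven by process 86402 to come back: pgrep finds no kube-apiserver process, every "crictl ps -a --quiet --name=<component>" probe returns no IDs (found id: ""), so it collects kubelet, dmesg, describe-nodes, CRI-O and container-status output instead, and the describe-nodes step fails because nothing answers on localhost:8443. The v1.20.0 kubectl path makes this most likely the old-k8s-version cluster from the failure table at the top. A minimal sketch of reproducing the same probes by hand on the node; the profile name is illustrative, not taken from this run:

	# Hypothetical profile name; substitute the real one from "minikube profile list".
	minikube ssh -p old-k8s-version-000000
	# The same probes the log runs, copied from the lines above:
	sudo pgrep -xnf 'kube-apiserver.*minikube.*'        # no output -> apiserver process absent
	sudo crictl ps -a --quiet --name=kube-apiserver     # empty output, matching found id: ""
	sudo journalctl -u kubelet -n 400                    # why kubelet never starts the static pods
	sudo journalctl -u crio -n 400                       # CRI-O side of the same window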
	I1104 12:11:44.527641   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:11:44.540026   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:11:44.540108   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:11:44.574530   86402 cri.go:89] found id: ""
	I1104 12:11:44.574559   86402 logs.go:282] 0 containers: []
	W1104 12:11:44.574570   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:11:44.574577   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:11:44.574638   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:11:44.606073   86402 cri.go:89] found id: ""
	I1104 12:11:44.606103   86402 logs.go:282] 0 containers: []
	W1104 12:11:44.606114   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:11:44.606121   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:11:44.606185   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:11:44.639750   86402 cri.go:89] found id: ""
	I1104 12:11:44.639775   86402 logs.go:282] 0 containers: []
	W1104 12:11:44.639784   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:11:44.639792   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:11:44.639850   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:11:44.673528   86402 cri.go:89] found id: ""
	I1104 12:11:44.673557   86402 logs.go:282] 0 containers: []
	W1104 12:11:44.673565   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:11:44.673573   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:11:44.673625   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:11:44.705928   86402 cri.go:89] found id: ""
	I1104 12:11:44.705956   86402 logs.go:282] 0 containers: []
	W1104 12:11:44.705966   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:11:44.705973   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:11:44.706032   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:11:44.736779   86402 cri.go:89] found id: ""
	I1104 12:11:44.736811   86402 logs.go:282] 0 containers: []
	W1104 12:11:44.736822   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:11:44.736830   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:11:44.736886   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:11:44.769929   86402 cri.go:89] found id: ""
	I1104 12:11:44.769956   86402 logs.go:282] 0 containers: []
	W1104 12:11:44.769964   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:11:44.769970   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:11:44.770015   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:11:44.800818   86402 cri.go:89] found id: ""
	I1104 12:11:44.800846   86402 logs.go:282] 0 containers: []
	W1104 12:11:44.800855   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:11:44.800863   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:11:44.800873   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:11:44.853610   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:11:44.853641   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:11:44.866656   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:11:44.866683   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:11:44.936386   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:11:44.936412   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:11:44.936425   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:11:45.011789   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:11:45.011823   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:11:45.707030   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:47.707464   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:49.711329   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:46.557112   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:49.055647   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:46.351055   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:48.850134   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:50.851867   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:47.548672   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:11:47.563082   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:11:47.563157   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:11:47.598722   86402 cri.go:89] found id: ""
	I1104 12:11:47.598748   86402 logs.go:282] 0 containers: []
	W1104 12:11:47.598756   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:11:47.598762   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:11:47.598809   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:11:47.633376   86402 cri.go:89] found id: ""
	I1104 12:11:47.633412   86402 logs.go:282] 0 containers: []
	W1104 12:11:47.633421   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:11:47.633428   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:11:47.633486   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:11:47.666059   86402 cri.go:89] found id: ""
	I1104 12:11:47.666087   86402 logs.go:282] 0 containers: []
	W1104 12:11:47.666095   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:11:47.666101   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:11:47.666147   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:11:47.700659   86402 cri.go:89] found id: ""
	I1104 12:11:47.700690   86402 logs.go:282] 0 containers: []
	W1104 12:11:47.700704   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:11:47.700711   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:11:47.700771   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:11:47.732901   86402 cri.go:89] found id: ""
	I1104 12:11:47.732927   86402 logs.go:282] 0 containers: []
	W1104 12:11:47.732934   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:11:47.732940   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:11:47.732984   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:11:47.765371   86402 cri.go:89] found id: ""
	I1104 12:11:47.765398   86402 logs.go:282] 0 containers: []
	W1104 12:11:47.765418   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:11:47.765425   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:11:47.765487   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:11:47.797043   86402 cri.go:89] found id: ""
	I1104 12:11:47.797077   86402 logs.go:282] 0 containers: []
	W1104 12:11:47.797089   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:11:47.797096   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:11:47.797159   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:11:47.828140   86402 cri.go:89] found id: ""
	I1104 12:11:47.828172   86402 logs.go:282] 0 containers: []
	W1104 12:11:47.828184   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:11:47.828194   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:11:47.828208   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:11:47.911398   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:11:47.911434   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:11:47.948042   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:11:47.948071   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:11:47.999603   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:11:47.999638   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:11:48.013818   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:11:48.013856   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:11:48.082679   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:11:50.583325   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:11:50.595272   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:11:50.595346   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:11:50.630857   86402 cri.go:89] found id: ""
	I1104 12:11:50.630883   86402 logs.go:282] 0 containers: []
	W1104 12:11:50.630892   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:11:50.630899   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:11:50.630965   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:11:50.663025   86402 cri.go:89] found id: ""
	I1104 12:11:50.663049   86402 logs.go:282] 0 containers: []
	W1104 12:11:50.663058   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:11:50.663063   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:11:50.663109   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:11:50.695371   86402 cri.go:89] found id: ""
	I1104 12:11:50.695402   86402 logs.go:282] 0 containers: []
	W1104 12:11:50.695413   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:11:50.695421   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:11:50.695480   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:11:50.728805   86402 cri.go:89] found id: ""
	I1104 12:11:50.728827   86402 logs.go:282] 0 containers: []
	W1104 12:11:50.728836   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:11:50.728841   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:11:50.728902   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:11:50.762837   86402 cri.go:89] found id: ""
	I1104 12:11:50.762868   86402 logs.go:282] 0 containers: []
	W1104 12:11:50.762878   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:11:50.762885   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:11:50.762941   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:11:50.802531   86402 cri.go:89] found id: ""
	I1104 12:11:50.802556   86402 logs.go:282] 0 containers: []
	W1104 12:11:50.802564   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:11:50.802569   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:11:50.802613   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:11:50.835124   86402 cri.go:89] found id: ""
	I1104 12:11:50.835161   86402 logs.go:282] 0 containers: []
	W1104 12:11:50.835173   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:11:50.835180   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:11:50.835234   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:11:50.869265   86402 cri.go:89] found id: ""
	I1104 12:11:50.869295   86402 logs.go:282] 0 containers: []
	W1104 12:11:50.869308   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:11:50.869318   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:11:50.869330   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:11:50.919371   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:11:50.919405   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:11:50.932165   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:11:50.932195   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:11:50.993935   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:11:50.993959   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:11:50.993972   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:11:51.071816   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:11:51.071848   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:11:52.208101   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:54.707467   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:51.056129   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:53.057025   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:53.349902   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:55.350302   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:53.608347   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:11:53.620842   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:11:53.620902   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:11:53.652870   86402 cri.go:89] found id: ""
	I1104 12:11:53.652896   86402 logs.go:282] 0 containers: []
	W1104 12:11:53.652909   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:11:53.652917   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:11:53.652980   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:11:53.684842   86402 cri.go:89] found id: ""
	I1104 12:11:53.684878   86402 logs.go:282] 0 containers: []
	W1104 12:11:53.684889   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:11:53.684897   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:11:53.684956   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:11:53.722505   86402 cri.go:89] found id: ""
	I1104 12:11:53.722531   86402 logs.go:282] 0 containers: []
	W1104 12:11:53.722539   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:11:53.722544   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:11:53.722603   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:11:53.753831   86402 cri.go:89] found id: ""
	I1104 12:11:53.753858   86402 logs.go:282] 0 containers: []
	W1104 12:11:53.753866   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:11:53.753872   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:11:53.753918   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:11:53.786112   86402 cri.go:89] found id: ""
	I1104 12:11:53.786139   86402 logs.go:282] 0 containers: []
	W1104 12:11:53.786150   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:11:53.786157   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:11:53.786218   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:11:53.820446   86402 cri.go:89] found id: ""
	I1104 12:11:53.820472   86402 logs.go:282] 0 containers: []
	W1104 12:11:53.820487   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:11:53.820493   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:11:53.820552   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:11:53.855631   86402 cri.go:89] found id: ""
	I1104 12:11:53.855655   86402 logs.go:282] 0 containers: []
	W1104 12:11:53.855665   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:11:53.855673   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:11:53.855727   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:11:53.887953   86402 cri.go:89] found id: ""
	I1104 12:11:53.887983   86402 logs.go:282] 0 containers: []
	W1104 12:11:53.887994   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:11:53.888004   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:11:53.888023   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:11:53.954408   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:11:53.954430   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:11:53.954442   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:11:54.028549   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:11:54.028584   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:11:54.070869   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:11:54.070895   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:11:54.123676   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:11:54.123715   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:11:56.639480   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:11:56.652651   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:11:56.652709   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:11:56.708211   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:59.207617   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:55.555992   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:58.056271   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:57.350474   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:59.850830   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:56.689397   86402 cri.go:89] found id: ""
	I1104 12:11:56.689425   86402 logs.go:282] 0 containers: []
	W1104 12:11:56.689443   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:11:56.689452   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:11:56.689517   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:11:56.725197   86402 cri.go:89] found id: ""
	I1104 12:11:56.725234   86402 logs.go:282] 0 containers: []
	W1104 12:11:56.725246   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:11:56.725254   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:11:56.725308   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:11:56.759043   86402 cri.go:89] found id: ""
	I1104 12:11:56.759073   86402 logs.go:282] 0 containers: []
	W1104 12:11:56.759084   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:11:56.759090   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:11:56.759141   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:11:56.792268   86402 cri.go:89] found id: ""
	I1104 12:11:56.792296   86402 logs.go:282] 0 containers: []
	W1104 12:11:56.792307   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:11:56.792314   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:11:56.792375   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:11:56.823668   86402 cri.go:89] found id: ""
	I1104 12:11:56.823692   86402 logs.go:282] 0 containers: []
	W1104 12:11:56.823702   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:11:56.823709   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:11:56.823769   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:11:56.861812   86402 cri.go:89] found id: ""
	I1104 12:11:56.861837   86402 logs.go:282] 0 containers: []
	W1104 12:11:56.861845   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:11:56.861851   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:11:56.861902   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:11:56.894037   86402 cri.go:89] found id: ""
	I1104 12:11:56.894067   86402 logs.go:282] 0 containers: []
	W1104 12:11:56.894075   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:11:56.894080   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:11:56.894133   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:11:56.925603   86402 cri.go:89] found id: ""
	I1104 12:11:56.925634   86402 logs.go:282] 0 containers: []
	W1104 12:11:56.925646   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:11:56.925656   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:11:56.925669   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:11:56.961504   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:11:56.961530   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:11:57.012666   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:11:57.012700   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:11:57.025887   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:11:57.025921   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:11:57.097219   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:11:57.097257   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:11:57.097272   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:11:59.671179   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:11:59.684642   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:11:59.684718   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:11:59.721599   86402 cri.go:89] found id: ""
	I1104 12:11:59.721622   86402 logs.go:282] 0 containers: []
	W1104 12:11:59.721631   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:11:59.721640   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:11:59.721693   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:11:59.757423   86402 cri.go:89] found id: ""
	I1104 12:11:59.757453   86402 logs.go:282] 0 containers: []
	W1104 12:11:59.757461   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:11:59.757466   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:11:59.757525   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:11:59.794036   86402 cri.go:89] found id: ""
	I1104 12:11:59.794071   86402 logs.go:282] 0 containers: []
	W1104 12:11:59.794081   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:11:59.794089   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:11:59.794148   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:11:59.830098   86402 cri.go:89] found id: ""
	I1104 12:11:59.830123   86402 logs.go:282] 0 containers: []
	W1104 12:11:59.830134   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:11:59.830142   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:11:59.830207   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:11:59.867791   86402 cri.go:89] found id: ""
	I1104 12:11:59.867815   86402 logs.go:282] 0 containers: []
	W1104 12:11:59.867823   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:11:59.867828   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:11:59.867879   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:11:59.903579   86402 cri.go:89] found id: ""
	I1104 12:11:59.903607   86402 logs.go:282] 0 containers: []
	W1104 12:11:59.903614   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:11:59.903620   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:11:59.903667   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:11:59.940955   86402 cri.go:89] found id: ""
	I1104 12:11:59.940977   86402 logs.go:282] 0 containers: []
	W1104 12:11:59.940984   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:11:59.940989   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:11:59.941034   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:11:59.977626   86402 cri.go:89] found id: ""
	I1104 12:11:59.977653   86402 logs.go:282] 0 containers: []
	W1104 12:11:59.977663   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:11:59.977674   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:11:59.977687   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:12:00.032280   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:12:00.032312   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:12:00.045965   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:12:00.045991   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:12:00.123578   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:12:00.123608   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:12:00.123625   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:12:00.208309   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:12:00.208340   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:12:01.707661   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:04.207140   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:00.555683   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:02.555797   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:04.556558   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:01.851646   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:01.851680   85759 pod_ready.go:82] duration metric: took 4m0.007179751s for pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace to be "Ready" ...
	E1104 12:12:01.851691   85759 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1104 12:12:01.851701   85759 pod_ready.go:39] duration metric: took 4m4.052369029s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1104 12:12:01.851721   85759 api_server.go:52] waiting for apiserver process to appear ...
	I1104 12:12:01.851752   85759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:12:01.851805   85759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:12:01.891029   85759 cri.go:89] found id: "6e7999c6e5a24f7d8706b6722e1fe66996ecec4550d4a901729cac1f3f108f28"
	I1104 12:12:01.891056   85759 cri.go:89] found id: ""
	I1104 12:12:01.891066   85759 logs.go:282] 1 containers: [6e7999c6e5a24f7d8706b6722e1fe66996ecec4550d4a901729cac1f3f108f28]
	I1104 12:12:01.891128   85759 ssh_runner.go:195] Run: which crictl
	I1104 12:12:01.895134   85759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:12:01.895243   85759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:12:01.928058   85759 cri.go:89] found id: "5b575c045ea6e47b4e5a1384b6b473570efefa6f645c9c75d1eedf037230bc06"
	I1104 12:12:01.928081   85759 cri.go:89] found id: ""
	I1104 12:12:01.928089   85759 logs.go:282] 1 containers: [5b575c045ea6e47b4e5a1384b6b473570efefa6f645c9c75d1eedf037230bc06]
	I1104 12:12:01.928134   85759 ssh_runner.go:195] Run: which crictl
	I1104 12:12:01.931906   85759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:12:01.931974   85759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:12:01.972023   85759 cri.go:89] found id: "d1f0c1ed5e8911dccb619576be93e1623936a2b05ccc1e80d52ffe8ba1292d27"
	I1104 12:12:01.972052   85759 cri.go:89] found id: ""
	I1104 12:12:01.972062   85759 logs.go:282] 1 containers: [d1f0c1ed5e8911dccb619576be93e1623936a2b05ccc1e80d52ffe8ba1292d27]
	I1104 12:12:01.972116   85759 ssh_runner.go:195] Run: which crictl
	I1104 12:12:01.980811   85759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:12:01.980894   85759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:12:02.024013   85759 cri.go:89] found id: "a5a0cb5f09f9920c57a05bcc9c16cdcab6806c42458b625d4d075a9d1bc3f80f"
	I1104 12:12:02.024038   85759 cri.go:89] found id: ""
	I1104 12:12:02.024046   85759 logs.go:282] 1 containers: [a5a0cb5f09f9920c57a05bcc9c16cdcab6806c42458b625d4d075a9d1bc3f80f]
	I1104 12:12:02.024108   85759 ssh_runner.go:195] Run: which crictl
	I1104 12:12:02.028571   85759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:12:02.028641   85759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:12:02.063545   85759 cri.go:89] found id: "512d8563ff2ef5d1f8021fa8815d00d2705c8df3b079a23ee6fc909af1c980f0"
	I1104 12:12:02.063570   85759 cri.go:89] found id: ""
	I1104 12:12:02.063580   85759 logs.go:282] 1 containers: [512d8563ff2ef5d1f8021fa8815d00d2705c8df3b079a23ee6fc909af1c980f0]
	I1104 12:12:02.063635   85759 ssh_runner.go:195] Run: which crictl
	I1104 12:12:02.067582   85759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:12:02.067652   85759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:12:02.100954   85759 cri.go:89] found id: "5751adaa2cf784f7619b281e11898b312ecc8186b36029e9e8a3b8e484cd703b"
	I1104 12:12:02.100979   85759 cri.go:89] found id: ""
	I1104 12:12:02.100989   85759 logs.go:282] 1 containers: [5751adaa2cf784f7619b281e11898b312ecc8186b36029e9e8a3b8e484cd703b]
	I1104 12:12:02.101038   85759 ssh_runner.go:195] Run: which crictl
	I1104 12:12:02.105121   85759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:12:02.105182   85759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:12:02.137206   85759 cri.go:89] found id: ""
	I1104 12:12:02.137249   85759 logs.go:282] 0 containers: []
	W1104 12:12:02.137260   85759 logs.go:284] No container was found matching "kindnet"
	I1104 12:12:02.137268   85759 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1104 12:12:02.137317   85759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1104 12:12:02.171499   85759 cri.go:89] found id: "95a9eb50a127a39e6276205bf440bcbc662ba25b24e029dc1a667f48f8481dde"
	I1104 12:12:02.171520   85759 cri.go:89] found id: "c7558f4e108716f1710271ea75be748ffd928b609788942a3658f0d2237ebcc7"
	I1104 12:12:02.171526   85759 cri.go:89] found id: ""
	I1104 12:12:02.171535   85759 logs.go:282] 2 containers: [95a9eb50a127a39e6276205bf440bcbc662ba25b24e029dc1a667f48f8481dde c7558f4e108716f1710271ea75be748ffd928b609788942a3658f0d2237ebcc7]
	I1104 12:12:02.171587   85759 ssh_runner.go:195] Run: which crictl
	I1104 12:12:02.175646   85759 ssh_runner.go:195] Run: which crictl
	I1104 12:12:02.179066   85759 logs.go:123] Gathering logs for kubelet ...
	I1104 12:12:02.179084   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:12:02.249087   85759 logs.go:123] Gathering logs for dmesg ...
	I1104 12:12:02.249126   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:12:02.262666   85759 logs.go:123] Gathering logs for kube-apiserver [6e7999c6e5a24f7d8706b6722e1fe66996ecec4550d4a901729cac1f3f108f28] ...
	I1104 12:12:02.262692   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6e7999c6e5a24f7d8706b6722e1fe66996ecec4550d4a901729cac1f3f108f28"
	I1104 12:12:02.316826   85759 logs.go:123] Gathering logs for kube-scheduler [a5a0cb5f09f9920c57a05bcc9c16cdcab6806c42458b625d4d075a9d1bc3f80f] ...
	I1104 12:12:02.316856   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a5a0cb5f09f9920c57a05bcc9c16cdcab6806c42458b625d4d075a9d1bc3f80f"
	I1104 12:12:02.351741   85759 logs.go:123] Gathering logs for kube-controller-manager [5751adaa2cf784f7619b281e11898b312ecc8186b36029e9e8a3b8e484cd703b] ...
	I1104 12:12:02.351766   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5751adaa2cf784f7619b281e11898b312ecc8186b36029e9e8a3b8e484cd703b"
	I1104 12:12:02.400377   85759 logs.go:123] Gathering logs for storage-provisioner [c7558f4e108716f1710271ea75be748ffd928b609788942a3658f0d2237ebcc7] ...
	I1104 12:12:02.400409   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c7558f4e108716f1710271ea75be748ffd928b609788942a3658f0d2237ebcc7"
	I1104 12:12:02.448029   85759 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:12:02.448059   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:12:02.975331   85759 logs.go:123] Gathering logs for container status ...
	I1104 12:12:02.975367   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:12:03.026697   85759 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:12:03.026739   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1104 12:12:03.140704   85759 logs.go:123] Gathering logs for etcd [5b575c045ea6e47b4e5a1384b6b473570efefa6f645c9c75d1eedf037230bc06] ...
	I1104 12:12:03.140753   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b575c045ea6e47b4e5a1384b6b473570efefa6f645c9c75d1eedf037230bc06"
	I1104 12:12:03.192394   85759 logs.go:123] Gathering logs for coredns [d1f0c1ed5e8911dccb619576be93e1623936a2b05ccc1e80d52ffe8ba1292d27] ...
	I1104 12:12:03.192427   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d1f0c1ed5e8911dccb619576be93e1623936a2b05ccc1e80d52ffe8ba1292d27"
	I1104 12:12:03.236040   85759 logs.go:123] Gathering logs for kube-proxy [512d8563ff2ef5d1f8021fa8815d00d2705c8df3b079a23ee6fc909af1c980f0] ...
	I1104 12:12:03.236071   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 512d8563ff2ef5d1f8021fa8815d00d2705c8df3b079a23ee6fc909af1c980f0"
	I1104 12:12:03.275166   85759 logs.go:123] Gathering logs for storage-provisioner [95a9eb50a127a39e6276205bf440bcbc662ba25b24e029dc1a667f48f8481dde] ...
	I1104 12:12:03.275194   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 95a9eb50a127a39e6276205bf440bcbc662ba25b24e029dc1a667f48f8481dde"
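Process 85759 is in a different state: its control plane (kubectl binary v1.31.2) is running, so the crictl probes above do return container IDs for kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager and two storage-provisioner containers, and minikube tails each of them with "crictl logs --tail 400". What has not recovered is the metrics-server pod, whose Ready wait gave up after 4m0s at 12:12:01 (WaitExtra: context deadline exceeded), consistent with the metrics-server and addon failures listed in the table at the top of this report. A hedged sketch of pulling the same evidence manually; the container ID is copied from this log and is run-specific, and the metrics-server label is assumed to be the usual k8s-app=metrics-server:

	# On the node, tail the apiserver container exactly as the log does:
	sudo /usr/bin/crictl logs --tail 400 6e7999c6e5a24f7d8706b6722e1fe66996ecec4550d4a901729cac1f3f108f28
	# From the host, check why metrics-server never reports Ready (label assumed):
	kubectl -n kube-system get pods -l k8s-app=metrics-server -o wide
	kubectl -n kube-system describe pods -l k8s-app=metrics-server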
	I1104 12:12:05.813333   85759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:12:05.827697   85759 api_server.go:72] duration metric: took 4m15.741105379s to wait for apiserver process to appear ...
	I1104 12:12:05.827725   85759 api_server.go:88] waiting for apiserver healthz status ...
	I1104 12:12:05.827763   85759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:12:05.827826   85759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:12:05.869552   85759 cri.go:89] found id: "6e7999c6e5a24f7d8706b6722e1fe66996ecec4550d4a901729cac1f3f108f28"
	I1104 12:12:05.869580   85759 cri.go:89] found id: ""
	I1104 12:12:05.869590   85759 logs.go:282] 1 containers: [6e7999c6e5a24f7d8706b6722e1fe66996ecec4550d4a901729cac1f3f108f28]
	I1104 12:12:05.869642   85759 ssh_runner.go:195] Run: which crictl
	I1104 12:12:05.873890   85759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:12:05.873954   85759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:12:05.914131   85759 cri.go:89] found id: "5b575c045ea6e47b4e5a1384b6b473570efefa6f645c9c75d1eedf037230bc06"
	I1104 12:12:05.914153   85759 cri.go:89] found id: ""
	I1104 12:12:05.914161   85759 logs.go:282] 1 containers: [5b575c045ea6e47b4e5a1384b6b473570efefa6f645c9c75d1eedf037230bc06]
	I1104 12:12:05.914216   85759 ssh_runner.go:195] Run: which crictl
	I1104 12:12:05.920980   85759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:12:05.921042   85759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:12:05.960930   85759 cri.go:89] found id: "d1f0c1ed5e8911dccb619576be93e1623936a2b05ccc1e80d52ffe8ba1292d27"
	I1104 12:12:05.960953   85759 cri.go:89] found id: ""
	I1104 12:12:05.960962   85759 logs.go:282] 1 containers: [d1f0c1ed5e8911dccb619576be93e1623936a2b05ccc1e80d52ffe8ba1292d27]
	I1104 12:12:05.961018   85759 ssh_runner.go:195] Run: which crictl
	I1104 12:12:05.965591   85759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:12:05.965653   85759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:12:06.000500   85759 cri.go:89] found id: "a5a0cb5f09f9920c57a05bcc9c16cdcab6806c42458b625d4d075a9d1bc3f80f"
	I1104 12:12:06.000520   85759 cri.go:89] found id: ""
	I1104 12:12:06.000526   85759 logs.go:282] 1 containers: [a5a0cb5f09f9920c57a05bcc9c16cdcab6806c42458b625d4d075a9d1bc3f80f]
	I1104 12:12:06.000576   85759 ssh_runner.go:195] Run: which crictl
	I1104 12:12:06.004775   85759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:12:06.004835   85759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:12:06.042011   85759 cri.go:89] found id: "512d8563ff2ef5d1f8021fa8815d00d2705c8df3b079a23ee6fc909af1c980f0"
	I1104 12:12:06.042032   85759 cri.go:89] found id: ""
	I1104 12:12:06.042041   85759 logs.go:282] 1 containers: [512d8563ff2ef5d1f8021fa8815d00d2705c8df3b079a23ee6fc909af1c980f0]
	I1104 12:12:06.042102   85759 ssh_runner.go:195] Run: which crictl
	I1104 12:12:06.047885   85759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:12:06.047952   85759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:12:06.084318   85759 cri.go:89] found id: "5751adaa2cf784f7619b281e11898b312ecc8186b36029e9e8a3b8e484cd703b"
	I1104 12:12:06.084341   85759 cri.go:89] found id: ""
	I1104 12:12:06.084349   85759 logs.go:282] 1 containers: [5751adaa2cf784f7619b281e11898b312ecc8186b36029e9e8a3b8e484cd703b]
	I1104 12:12:06.084410   85759 ssh_runner.go:195] Run: which crictl
	I1104 12:12:06.088487   85759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:12:06.088564   85759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:12:06.127693   85759 cri.go:89] found id: ""
	I1104 12:12:06.127721   85759 logs.go:282] 0 containers: []
	W1104 12:12:06.127730   85759 logs.go:284] No container was found matching "kindnet"
	I1104 12:12:06.127736   85759 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1104 12:12:06.127780   85759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1104 12:12:06.165173   85759 cri.go:89] found id: "95a9eb50a127a39e6276205bf440bcbc662ba25b24e029dc1a667f48f8481dde"
	I1104 12:12:06.165199   85759 cri.go:89] found id: "c7558f4e108716f1710271ea75be748ffd928b609788942a3658f0d2237ebcc7"
	I1104 12:12:06.165206   85759 cri.go:89] found id: ""
	I1104 12:12:06.165215   85759 logs.go:282] 2 containers: [95a9eb50a127a39e6276205bf440bcbc662ba25b24e029dc1a667f48f8481dde c7558f4e108716f1710271ea75be748ffd928b609788942a3658f0d2237ebcc7]
	I1104 12:12:06.165302   85759 ssh_runner.go:195] Run: which crictl
	I1104 12:12:06.169479   85759 ssh_runner.go:195] Run: which crictl
	I1104 12:12:06.173154   85759 logs.go:123] Gathering logs for kubelet ...
	I1104 12:12:06.173177   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:12:02.746303   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:12:02.758892   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:12:02.758967   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:12:02.792775   86402 cri.go:89] found id: ""
	I1104 12:12:02.792803   86402 logs.go:282] 0 containers: []
	W1104 12:12:02.792815   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:12:02.792822   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:12:02.792878   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:12:02.831073   86402 cri.go:89] found id: ""
	I1104 12:12:02.831097   86402 logs.go:282] 0 containers: []
	W1104 12:12:02.831108   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:12:02.831115   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:12:02.831174   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:12:02.863530   86402 cri.go:89] found id: ""
	I1104 12:12:02.863557   86402 logs.go:282] 0 containers: []
	W1104 12:12:02.863568   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:12:02.863574   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:12:02.863641   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:12:02.894894   86402 cri.go:89] found id: ""
	I1104 12:12:02.894924   86402 logs.go:282] 0 containers: []
	W1104 12:12:02.894934   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:12:02.894942   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:12:02.894996   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:12:02.930052   86402 cri.go:89] found id: ""
	I1104 12:12:02.930081   86402 logs.go:282] 0 containers: []
	W1104 12:12:02.930092   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:12:02.930100   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:12:02.930160   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:12:02.964503   86402 cri.go:89] found id: ""
	I1104 12:12:02.964532   86402 logs.go:282] 0 containers: []
	W1104 12:12:02.964544   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:12:02.964551   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:12:02.964610   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:12:02.998065   86402 cri.go:89] found id: ""
	I1104 12:12:02.998088   86402 logs.go:282] 0 containers: []
	W1104 12:12:02.998096   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:12:02.998102   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:12:02.998148   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:12:03.033579   86402 cri.go:89] found id: ""
	I1104 12:12:03.033604   86402 logs.go:282] 0 containers: []
	W1104 12:12:03.033613   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:12:03.033621   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:12:03.033630   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:12:03.086215   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:12:03.086249   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:12:03.100100   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:12:03.100136   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:12:03.168116   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:12:03.168150   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:12:03.168165   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:12:03.253608   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:12:03.253642   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
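For process 86402 the probe-and-collect cycle keeps repeating on a roughly three-second cadence (12:11:41, 12:11:44, 12:11:47, ...) without ever finding a single container, while the other three clusters only keep reporting their metrics-server pod as not Ready. At this point the kubelet journal already being gathered above is the most useful signal; a sketch of narrowing it down on the node, with grep patterns that are an assumption rather than something taken from this run:

	# Filter the same kubelet window the log collects for the usual failure causes:
	sudo journalctl -u kubelet -n 400 | grep -Ei 'fail|error|static pod|apiserver'
	# Confirm the control-plane static pod manifests are present for kubelet to start:
	ls -l /etc/kubernetes/manifests/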
	I1104 12:12:05.792913   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:12:05.806494   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:12:05.806568   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:12:05.854379   86402 cri.go:89] found id: ""
	I1104 12:12:05.854406   86402 logs.go:282] 0 containers: []
	W1104 12:12:05.854417   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:12:05.854425   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:12:05.854503   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:12:05.886144   86402 cri.go:89] found id: ""
	I1104 12:12:05.886169   86402 logs.go:282] 0 containers: []
	W1104 12:12:05.886179   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:12:05.886186   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:12:05.886248   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:12:05.917462   86402 cri.go:89] found id: ""
	I1104 12:12:05.917482   86402 logs.go:282] 0 containers: []
	W1104 12:12:05.917492   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:12:05.917499   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:12:05.917550   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:12:05.954065   86402 cri.go:89] found id: ""
	I1104 12:12:05.954099   86402 logs.go:282] 0 containers: []
	W1104 12:12:05.954110   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:12:05.954120   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:12:05.954194   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:12:05.990935   86402 cri.go:89] found id: ""
	I1104 12:12:05.990966   86402 logs.go:282] 0 containers: []
	W1104 12:12:05.990977   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:12:05.990984   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:12:05.991050   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:12:06.032175   86402 cri.go:89] found id: ""
	I1104 12:12:06.032198   86402 logs.go:282] 0 containers: []
	W1104 12:12:06.032206   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:12:06.032211   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:12:06.032269   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:12:06.069215   86402 cri.go:89] found id: ""
	I1104 12:12:06.069262   86402 logs.go:282] 0 containers: []
	W1104 12:12:06.069275   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:12:06.069282   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:12:06.069340   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:12:06.103065   86402 cri.go:89] found id: ""
	I1104 12:12:06.103106   86402 logs.go:282] 0 containers: []
	W1104 12:12:06.103117   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:12:06.103127   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:12:06.103145   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:12:06.184111   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:12:06.184135   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:12:06.184149   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:12:06.272720   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:12:06.272760   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:12:06.315596   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:12:06.315636   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:12:06.376054   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:12:06.376110   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:12:06.214237   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:08.707098   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:07.056531   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:09.056763   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:06.252295   85759 logs.go:123] Gathering logs for kube-apiserver [6e7999c6e5a24f7d8706b6722e1fe66996ecec4550d4a901729cac1f3f108f28] ...
	I1104 12:12:06.252326   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6e7999c6e5a24f7d8706b6722e1fe66996ecec4550d4a901729cac1f3f108f28"
	I1104 12:12:06.302739   85759 logs.go:123] Gathering logs for etcd [5b575c045ea6e47b4e5a1384b6b473570efefa6f645c9c75d1eedf037230bc06] ...
	I1104 12:12:06.302769   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b575c045ea6e47b4e5a1384b6b473570efefa6f645c9c75d1eedf037230bc06"
	I1104 12:12:06.361279   85759 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:12:06.361307   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:12:06.811335   85759 logs.go:123] Gathering logs for container status ...
	I1104 12:12:06.811380   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:12:06.851356   85759 logs.go:123] Gathering logs for storage-provisioner [95a9eb50a127a39e6276205bf440bcbc662ba25b24e029dc1a667f48f8481dde] ...
	I1104 12:12:06.851387   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 95a9eb50a127a39e6276205bf440bcbc662ba25b24e029dc1a667f48f8481dde"
	I1104 12:12:06.888753   85759 logs.go:123] Gathering logs for storage-provisioner [c7558f4e108716f1710271ea75be748ffd928b609788942a3658f0d2237ebcc7] ...
	I1104 12:12:06.888789   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c7558f4e108716f1710271ea75be748ffd928b609788942a3658f0d2237ebcc7"
	I1104 12:12:06.922406   85759 logs.go:123] Gathering logs for dmesg ...
	I1104 12:12:06.922438   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:12:06.935028   85759 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:12:06.935057   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1104 12:12:07.033975   85759 logs.go:123] Gathering logs for coredns [d1f0c1ed5e8911dccb619576be93e1623936a2b05ccc1e80d52ffe8ba1292d27] ...
	I1104 12:12:07.034019   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d1f0c1ed5e8911dccb619576be93e1623936a2b05ccc1e80d52ffe8ba1292d27"
	I1104 12:12:07.068680   85759 logs.go:123] Gathering logs for kube-scheduler [a5a0cb5f09f9920c57a05bcc9c16cdcab6806c42458b625d4d075a9d1bc3f80f] ...
	I1104 12:12:07.068706   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a5a0cb5f09f9920c57a05bcc9c16cdcab6806c42458b625d4d075a9d1bc3f80f"
	I1104 12:12:07.105150   85759 logs.go:123] Gathering logs for kube-proxy [512d8563ff2ef5d1f8021fa8815d00d2705c8df3b079a23ee6fc909af1c980f0] ...
	I1104 12:12:07.105182   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 512d8563ff2ef5d1f8021fa8815d00d2705c8df3b079a23ee6fc909af1c980f0"
	I1104 12:12:07.139258   85759 logs.go:123] Gathering logs for kube-controller-manager [5751adaa2cf784f7619b281e11898b312ecc8186b36029e9e8a3b8e484cd703b] ...
	I1104 12:12:07.139290   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5751adaa2cf784f7619b281e11898b312ecc8186b36029e9e8a3b8e484cd703b"
	I1104 12:12:09.695630   85759 api_server.go:253] Checking apiserver healthz at https://192.168.39.47:8443/healthz ...
	I1104 12:12:09.701156   85759 api_server.go:279] https://192.168.39.47:8443/healthz returned 200:
	ok
	I1104 12:12:09.702414   85759 api_server.go:141] control plane version: v1.31.2
	I1104 12:12:09.702441   85759 api_server.go:131] duration metric: took 3.874707524s to wait for apiserver health ...
	I1104 12:12:09.702451   85759 system_pods.go:43] waiting for kube-system pods to appear ...
	I1104 12:12:09.702475   85759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:12:09.702530   85759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:12:09.736867   85759 cri.go:89] found id: "6e7999c6e5a24f7d8706b6722e1fe66996ecec4550d4a901729cac1f3f108f28"
	I1104 12:12:09.736891   85759 cri.go:89] found id: ""
	I1104 12:12:09.736901   85759 logs.go:282] 1 containers: [6e7999c6e5a24f7d8706b6722e1fe66996ecec4550d4a901729cac1f3f108f28]
	I1104 12:12:09.736956   85759 ssh_runner.go:195] Run: which crictl
	I1104 12:12:09.741108   85759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:12:09.741176   85759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:12:09.780460   85759 cri.go:89] found id: "5b575c045ea6e47b4e5a1384b6b473570efefa6f645c9c75d1eedf037230bc06"
	I1104 12:12:09.780483   85759 cri.go:89] found id: ""
	I1104 12:12:09.780490   85759 logs.go:282] 1 containers: [5b575c045ea6e47b4e5a1384b6b473570efefa6f645c9c75d1eedf037230bc06]
	I1104 12:12:09.780535   85759 ssh_runner.go:195] Run: which crictl
	I1104 12:12:09.784698   85759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:12:09.784756   85759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:12:09.823042   85759 cri.go:89] found id: "d1f0c1ed5e8911dccb619576be93e1623936a2b05ccc1e80d52ffe8ba1292d27"
	I1104 12:12:09.823059   85759 cri.go:89] found id: ""
	I1104 12:12:09.823068   85759 logs.go:282] 1 containers: [d1f0c1ed5e8911dccb619576be93e1623936a2b05ccc1e80d52ffe8ba1292d27]
	I1104 12:12:09.823123   85759 ssh_runner.go:195] Run: which crictl
	I1104 12:12:09.826750   85759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:12:09.826803   85759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:12:09.859148   85759 cri.go:89] found id: "a5a0cb5f09f9920c57a05bcc9c16cdcab6806c42458b625d4d075a9d1bc3f80f"
	I1104 12:12:09.859175   85759 cri.go:89] found id: ""
	I1104 12:12:09.859185   85759 logs.go:282] 1 containers: [a5a0cb5f09f9920c57a05bcc9c16cdcab6806c42458b625d4d075a9d1bc3f80f]
	I1104 12:12:09.859245   85759 ssh_runner.go:195] Run: which crictl
	I1104 12:12:09.863676   85759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:12:09.863739   85759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:12:09.901737   85759 cri.go:89] found id: "512d8563ff2ef5d1f8021fa8815d00d2705c8df3b079a23ee6fc909af1c980f0"
	I1104 12:12:09.901766   85759 cri.go:89] found id: ""
	I1104 12:12:09.901783   85759 logs.go:282] 1 containers: [512d8563ff2ef5d1f8021fa8815d00d2705c8df3b079a23ee6fc909af1c980f0]
	I1104 12:12:09.901843   85759 ssh_runner.go:195] Run: which crictl
	I1104 12:12:09.905931   85759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:12:09.905993   85759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:12:09.942617   85759 cri.go:89] found id: "5751adaa2cf784f7619b281e11898b312ecc8186b36029e9e8a3b8e484cd703b"
	I1104 12:12:09.942637   85759 cri.go:89] found id: ""
	I1104 12:12:09.942644   85759 logs.go:282] 1 containers: [5751adaa2cf784f7619b281e11898b312ecc8186b36029e9e8a3b8e484cd703b]
	I1104 12:12:09.942687   85759 ssh_runner.go:195] Run: which crictl
	I1104 12:12:09.946420   85759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:12:09.946481   85759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:12:09.984891   85759 cri.go:89] found id: ""
	I1104 12:12:09.984921   85759 logs.go:282] 0 containers: []
	W1104 12:12:09.984932   85759 logs.go:284] No container was found matching "kindnet"
	I1104 12:12:09.984939   85759 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1104 12:12:09.985000   85759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1104 12:12:10.018332   85759 cri.go:89] found id: "95a9eb50a127a39e6276205bf440bcbc662ba25b24e029dc1a667f48f8481dde"
	I1104 12:12:10.018357   85759 cri.go:89] found id: "c7558f4e108716f1710271ea75be748ffd928b609788942a3658f0d2237ebcc7"
	I1104 12:12:10.018363   85759 cri.go:89] found id: ""
	I1104 12:12:10.018374   85759 logs.go:282] 2 containers: [95a9eb50a127a39e6276205bf440bcbc662ba25b24e029dc1a667f48f8481dde c7558f4e108716f1710271ea75be748ffd928b609788942a3658f0d2237ebcc7]
	I1104 12:12:10.018434   85759 ssh_runner.go:195] Run: which crictl
	I1104 12:12:10.022995   85759 ssh_runner.go:195] Run: which crictl
	I1104 12:12:10.026853   85759 logs.go:123] Gathering logs for etcd [5b575c045ea6e47b4e5a1384b6b473570efefa6f645c9c75d1eedf037230bc06] ...
	I1104 12:12:10.026878   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b575c045ea6e47b4e5a1384b6b473570efefa6f645c9c75d1eedf037230bc06"
	I1104 12:12:10.083384   85759 logs.go:123] Gathering logs for kube-controller-manager [5751adaa2cf784f7619b281e11898b312ecc8186b36029e9e8a3b8e484cd703b] ...
	I1104 12:12:10.083421   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5751adaa2cf784f7619b281e11898b312ecc8186b36029e9e8a3b8e484cd703b"
	I1104 12:12:10.136576   85759 logs.go:123] Gathering logs for storage-provisioner [95a9eb50a127a39e6276205bf440bcbc662ba25b24e029dc1a667f48f8481dde] ...
	I1104 12:12:10.136608   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 95a9eb50a127a39e6276205bf440bcbc662ba25b24e029dc1a667f48f8481dde"
	I1104 12:12:10.182808   85759 logs.go:123] Gathering logs for storage-provisioner [c7558f4e108716f1710271ea75be748ffd928b609788942a3658f0d2237ebcc7] ...
	I1104 12:12:10.182837   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c7558f4e108716f1710271ea75be748ffd928b609788942a3658f0d2237ebcc7"
	I1104 12:12:10.217017   85759 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:12:10.217047   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:12:10.598972   85759 logs.go:123] Gathering logs for container status ...
	I1104 12:12:10.599010   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:12:10.638827   85759 logs.go:123] Gathering logs for dmesg ...
	I1104 12:12:10.638868   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:12:10.652880   85759 logs.go:123] Gathering logs for kube-apiserver [6e7999c6e5a24f7d8706b6722e1fe66996ecec4550d4a901729cac1f3f108f28] ...
	I1104 12:12:10.652923   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6e7999c6e5a24f7d8706b6722e1fe66996ecec4550d4a901729cac1f3f108f28"
	I1104 12:12:10.700645   85759 logs.go:123] Gathering logs for coredns [d1f0c1ed5e8911dccb619576be93e1623936a2b05ccc1e80d52ffe8ba1292d27] ...
	I1104 12:12:10.700675   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d1f0c1ed5e8911dccb619576be93e1623936a2b05ccc1e80d52ffe8ba1292d27"
	I1104 12:12:10.734860   85759 logs.go:123] Gathering logs for kube-scheduler [a5a0cb5f09f9920c57a05bcc9c16cdcab6806c42458b625d4d075a9d1bc3f80f] ...
	I1104 12:12:10.734890   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a5a0cb5f09f9920c57a05bcc9c16cdcab6806c42458b625d4d075a9d1bc3f80f"
	I1104 12:12:10.774613   85759 logs.go:123] Gathering logs for kube-proxy [512d8563ff2ef5d1f8021fa8815d00d2705c8df3b079a23ee6fc909af1c980f0] ...
	I1104 12:12:10.774647   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 512d8563ff2ef5d1f8021fa8815d00d2705c8df3b079a23ee6fc909af1c980f0"
	I1104 12:12:10.808375   85759 logs.go:123] Gathering logs for kubelet ...
	I1104 12:12:10.808403   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:12:10.876130   85759 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:12:10.876165   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1104 12:12:08.890463   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:12:08.904272   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:12:08.904354   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:12:08.935677   86402 cri.go:89] found id: ""
	I1104 12:12:08.935701   86402 logs.go:282] 0 containers: []
	W1104 12:12:08.935710   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:12:08.935715   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:12:08.935761   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:12:08.966969   86402 cri.go:89] found id: ""
	I1104 12:12:08.966993   86402 logs.go:282] 0 containers: []
	W1104 12:12:08.967004   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:12:08.967011   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:12:08.967072   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:12:08.998753   86402 cri.go:89] found id: ""
	I1104 12:12:08.998778   86402 logs.go:282] 0 containers: []
	W1104 12:12:08.998786   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:12:08.998790   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:12:08.998852   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:12:09.031901   86402 cri.go:89] found id: ""
	I1104 12:12:09.031925   86402 logs.go:282] 0 containers: []
	W1104 12:12:09.031934   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:12:09.031940   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:12:09.032000   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:12:09.071478   86402 cri.go:89] found id: ""
	I1104 12:12:09.071500   86402 logs.go:282] 0 containers: []
	W1104 12:12:09.071508   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:12:09.071513   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:12:09.071564   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:12:09.107593   86402 cri.go:89] found id: ""
	I1104 12:12:09.107621   86402 logs.go:282] 0 containers: []
	W1104 12:12:09.107629   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:12:09.107635   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:12:09.107693   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:12:09.140899   86402 cri.go:89] found id: ""
	I1104 12:12:09.140923   86402 logs.go:282] 0 containers: []
	W1104 12:12:09.140934   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:12:09.140942   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:12:09.141000   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:12:09.174279   86402 cri.go:89] found id: ""
	I1104 12:12:09.174307   86402 logs.go:282] 0 containers: []
	W1104 12:12:09.174318   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:12:09.174330   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:12:09.174405   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:12:09.226340   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:12:09.226371   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:12:09.239573   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:12:09.239600   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:12:09.306180   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:12:09.306201   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:12:09.306212   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:12:09.385039   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:12:09.385072   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:12:13.475909   85759 system_pods.go:59] 8 kube-system pods found
	I1104 12:12:13.475946   85759 system_pods.go:61] "coredns-7c65d6cfc9-mf8xg" [c0162005-7971-4161-9575-9f36c13d54f2] Running
	I1104 12:12:13.475954   85759 system_pods.go:61] "etcd-embed-certs-325116" [4cfeeefb-d7e4-48b6-bea0-e9d967750770] Running
	I1104 12:12:13.475960   85759 system_pods.go:61] "kube-apiserver-embed-certs-325116" [69ad8209-af11-4479-b4f7-9991f98d74b9] Running
	I1104 12:12:13.475965   85759 system_pods.go:61] "kube-controller-manager-embed-certs-325116" [1ba1fbaf-e1e2-4ca7-aef5-84c4410143c4] Running
	I1104 12:12:13.475970   85759 system_pods.go:61] "kube-proxy-phzgx" [4ea64f2c-7568-486d-9941-f89ed4221f35] Running
	I1104 12:12:13.475975   85759 system_pods.go:61] "kube-scheduler-embed-certs-325116" [168359e4-eda1-4fb6-ab45-03e888466702] Running
	I1104 12:12:13.475985   85759 system_pods.go:61] "metrics-server-6867b74b74-knfd4" [5b3ef856-5b69-44b1-ae29-4a58bf235e41] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1104 12:12:13.475994   85759 system_pods.go:61] "storage-provisioner" [0dabcf5a-028b-4ab6-8af4-be25abaeb9b5] Running
	I1104 12:12:13.476008   85759 system_pods.go:74] duration metric: took 3.773548162s to wait for pod list to return data ...
	I1104 12:12:13.476020   85759 default_sa.go:34] waiting for default service account to be created ...
	I1104 12:12:13.478598   85759 default_sa.go:45] found service account: "default"
	I1104 12:12:13.478618   85759 default_sa.go:55] duration metric: took 2.591186ms for default service account to be created ...
	I1104 12:12:13.478628   85759 system_pods.go:116] waiting for k8s-apps to be running ...
	I1104 12:12:13.483285   85759 system_pods.go:86] 8 kube-system pods found
	I1104 12:12:13.483308   85759 system_pods.go:89] "coredns-7c65d6cfc9-mf8xg" [c0162005-7971-4161-9575-9f36c13d54f2] Running
	I1104 12:12:13.483314   85759 system_pods.go:89] "etcd-embed-certs-325116" [4cfeeefb-d7e4-48b6-bea0-e9d967750770] Running
	I1104 12:12:13.483318   85759 system_pods.go:89] "kube-apiserver-embed-certs-325116" [69ad8209-af11-4479-b4f7-9991f98d74b9] Running
	I1104 12:12:13.483322   85759 system_pods.go:89] "kube-controller-manager-embed-certs-325116" [1ba1fbaf-e1e2-4ca7-aef5-84c4410143c4] Running
	I1104 12:12:13.483325   85759 system_pods.go:89] "kube-proxy-phzgx" [4ea64f2c-7568-486d-9941-f89ed4221f35] Running
	I1104 12:12:13.483329   85759 system_pods.go:89] "kube-scheduler-embed-certs-325116" [168359e4-eda1-4fb6-ab45-03e888466702] Running
	I1104 12:12:13.483336   85759 system_pods.go:89] "metrics-server-6867b74b74-knfd4" [5b3ef856-5b69-44b1-ae29-4a58bf235e41] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1104 12:12:13.483340   85759 system_pods.go:89] "storage-provisioner" [0dabcf5a-028b-4ab6-8af4-be25abaeb9b5] Running
	I1104 12:12:13.483347   85759 system_pods.go:126] duration metric: took 4.713256ms to wait for k8s-apps to be running ...
	I1104 12:12:13.483355   85759 system_svc.go:44] waiting for kubelet service to be running ....
	I1104 12:12:13.483398   85759 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1104 12:12:13.497748   85759 system_svc.go:56] duration metric: took 14.381722ms WaitForService to wait for kubelet
	I1104 12:12:13.497812   85759 kubeadm.go:582] duration metric: took 4m23.411218278s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1104 12:12:13.497843   85759 node_conditions.go:102] verifying NodePressure condition ...
	I1104 12:12:13.500813   85759 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1104 12:12:13.500833   85759 node_conditions.go:123] node cpu capacity is 2
	I1104 12:12:13.500843   85759 node_conditions.go:105] duration metric: took 2.993771ms to run NodePressure ...
	I1104 12:12:13.500854   85759 start.go:241] waiting for startup goroutines ...
	I1104 12:12:13.500860   85759 start.go:246] waiting for cluster config update ...
	I1104 12:12:13.500870   85759 start.go:255] writing updated cluster config ...
	I1104 12:12:13.501122   85759 ssh_runner.go:195] Run: rm -f paused
	I1104 12:12:13.548293   85759 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1104 12:12:13.550203   85759 out.go:177] * Done! kubectl is now configured to use "embed-certs-325116" cluster and "default" namespace by default
	I1104 12:12:10.707746   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:12.708477   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:11.555266   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:13.555498   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:11.924105   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:12:11.936623   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:12:11.936685   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:12:11.968026   86402 cri.go:89] found id: ""
	I1104 12:12:11.968056   86402 logs.go:282] 0 containers: []
	W1104 12:12:11.968067   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:12:11.968074   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:12:11.968139   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:12:12.001193   86402 cri.go:89] found id: ""
	I1104 12:12:12.001218   86402 logs.go:282] 0 containers: []
	W1104 12:12:12.001245   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:12:12.001252   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:12:12.001311   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:12:12.035167   86402 cri.go:89] found id: ""
	I1104 12:12:12.035190   86402 logs.go:282] 0 containers: []
	W1104 12:12:12.035199   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:12:12.035204   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:12:12.035250   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:12:12.068412   86402 cri.go:89] found id: ""
	I1104 12:12:12.068440   86402 logs.go:282] 0 containers: []
	W1104 12:12:12.068450   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:12:12.068458   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:12:12.068515   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:12:12.099965   86402 cri.go:89] found id: ""
	I1104 12:12:12.099991   86402 logs.go:282] 0 containers: []
	W1104 12:12:12.100002   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:12:12.100009   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:12:12.100066   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:12:12.133413   86402 cri.go:89] found id: ""
	I1104 12:12:12.133442   86402 logs.go:282] 0 containers: []
	W1104 12:12:12.133453   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:12:12.133460   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:12:12.133520   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:12:12.169007   86402 cri.go:89] found id: ""
	I1104 12:12:12.169036   86402 logs.go:282] 0 containers: []
	W1104 12:12:12.169046   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:12:12.169053   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:12:12.169112   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:12:12.200592   86402 cri.go:89] found id: ""
	I1104 12:12:12.200621   86402 logs.go:282] 0 containers: []
	W1104 12:12:12.200635   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:12:12.200643   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:12:12.200657   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:12:12.244609   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:12:12.244644   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:12:12.299770   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:12:12.299804   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:12:12.324354   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:12:12.324395   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:12:12.385605   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:12:12.385632   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:12:12.385661   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:12:14.964867   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:12:14.977918   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:12:14.977991   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:12:15.012865   86402 cri.go:89] found id: ""
	I1104 12:12:15.012894   86402 logs.go:282] 0 containers: []
	W1104 12:12:15.012906   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:12:15.012913   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:12:15.012977   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:12:15.046548   86402 cri.go:89] found id: ""
	I1104 12:12:15.046574   86402 logs.go:282] 0 containers: []
	W1104 12:12:15.046583   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:12:15.046589   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:12:15.046636   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:12:15.079310   86402 cri.go:89] found id: ""
	I1104 12:12:15.079336   86402 logs.go:282] 0 containers: []
	W1104 12:12:15.079347   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:12:15.079353   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:12:15.079412   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:12:15.110595   86402 cri.go:89] found id: ""
	I1104 12:12:15.110625   86402 logs.go:282] 0 containers: []
	W1104 12:12:15.110636   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:12:15.110648   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:12:15.110716   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:12:15.143362   86402 cri.go:89] found id: ""
	I1104 12:12:15.143391   86402 logs.go:282] 0 containers: []
	W1104 12:12:15.143403   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:12:15.143410   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:12:15.143533   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:12:15.173973   86402 cri.go:89] found id: ""
	I1104 12:12:15.174000   86402 logs.go:282] 0 containers: []
	W1104 12:12:15.174009   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:12:15.174017   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:12:15.174081   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:12:15.205021   86402 cri.go:89] found id: ""
	I1104 12:12:15.205049   86402 logs.go:282] 0 containers: []
	W1104 12:12:15.205060   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:12:15.205067   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:12:15.205113   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:12:15.240190   86402 cri.go:89] found id: ""
	I1104 12:12:15.240220   86402 logs.go:282] 0 containers: []
	W1104 12:12:15.240231   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:12:15.240249   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:12:15.240263   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:12:15.290208   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:12:15.290241   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:12:15.305216   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:12:15.305258   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:12:15.375713   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:12:15.375735   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:12:15.375746   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:12:15.456517   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:12:15.456552   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:12:15.209380   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:17.708299   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:16.056359   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:18.556166   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:20.050834   86301 pod_ready.go:82] duration metric: took 4m0.001048639s for pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace to be "Ready" ...
	E1104 12:12:20.050863   86301 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1104 12:12:20.050874   86301 pod_ready.go:39] duration metric: took 4m5.585310983s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1104 12:12:20.050889   86301 api_server.go:52] waiting for apiserver process to appear ...
	I1104 12:12:20.050919   86301 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:12:20.050968   86301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:12:20.088440   86301 cri.go:89] found id: "2e1787441f88b7be6fbfb08abfa621dbc26e984bbd40544c92bf02bae7c7709a"
	I1104 12:12:20.088466   86301 cri.go:89] found id: ""
	I1104 12:12:20.088476   86301 logs.go:282] 1 containers: [2e1787441f88b7be6fbfb08abfa621dbc26e984bbd40544c92bf02bae7c7709a]
	I1104 12:12:20.088523   86301 ssh_runner.go:195] Run: which crictl
	I1104 12:12:20.092502   86301 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:12:20.092575   86301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:12:20.126599   86301 cri.go:89] found id: "1bc906f9e4e940157a2bb10397ebc3ba90f93123792d5de597633f6c5b3c64d7"
	I1104 12:12:20.126621   86301 cri.go:89] found id: ""
	I1104 12:12:20.126629   86301 logs.go:282] 1 containers: [1bc906f9e4e940157a2bb10397ebc3ba90f93123792d5de597633f6c5b3c64d7]
	I1104 12:12:20.126687   86301 ssh_runner.go:195] Run: which crictl
	I1104 12:12:20.130617   86301 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:12:20.130686   86301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:12:20.169664   86301 cri.go:89] found id: "51442200af1bb501b83cd7280eeee93590e5cb241e91cf5af1608a6eccfdf5a1"
	I1104 12:12:20.169687   86301 cri.go:89] found id: ""
	I1104 12:12:20.169696   86301 logs.go:282] 1 containers: [51442200af1bb501b83cd7280eeee93590e5cb241e91cf5af1608a6eccfdf5a1]
	I1104 12:12:20.169750   86301 ssh_runner.go:195] Run: which crictl
	I1104 12:12:20.173881   86301 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:12:20.173920   86301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:12:20.209271   86301 cri.go:89] found id: "c33ea99d25624abd85dcac43e17e1dd6dcea3a7b6333e91e8dc99ad02c037e07"
	I1104 12:12:20.209292   86301 cri.go:89] found id: ""
	I1104 12:12:20.209299   86301 logs.go:282] 1 containers: [c33ea99d25624abd85dcac43e17e1dd6dcea3a7b6333e91e8dc99ad02c037e07]
	I1104 12:12:20.209354   86301 ssh_runner.go:195] Run: which crictl
	I1104 12:12:20.214187   86301 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:12:20.214254   86301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:12:20.248683   86301 cri.go:89] found id: "9e60ae78d5610ae63ce7016b58d60da05791a055324eea67efd0feb374bdd4b4"
	I1104 12:12:20.248702   86301 cri.go:89] found id: ""
	I1104 12:12:20.248709   86301 logs.go:282] 1 containers: [9e60ae78d5610ae63ce7016b58d60da05791a055324eea67efd0feb374bdd4b4]
	I1104 12:12:20.248757   86301 ssh_runner.go:195] Run: which crictl
	I1104 12:12:20.252501   86301 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:12:20.252574   86301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:12:20.286367   86301 cri.go:89] found id: "1346cefb5059440d0c41bba9a5526748e6c783b8f935f34dcf419f728abcd35e"
	I1104 12:12:20.286406   86301 cri.go:89] found id: ""
	I1104 12:12:20.286415   86301 logs.go:282] 1 containers: [1346cefb5059440d0c41bba9a5526748e6c783b8f935f34dcf419f728abcd35e]
	I1104 12:12:20.286491   86301 ssh_runner.go:195] Run: which crictl
	I1104 12:12:17.992855   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:12:18.011370   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:12:18.011446   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:12:18.054937   86402 cri.go:89] found id: ""
	I1104 12:12:18.054961   86402 logs.go:282] 0 containers: []
	W1104 12:12:18.054968   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:12:18.054974   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:12:18.055026   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:12:18.107769   86402 cri.go:89] found id: ""
	I1104 12:12:18.107802   86402 logs.go:282] 0 containers: []
	W1104 12:12:18.107814   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:12:18.107821   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:12:18.107887   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:12:18.141932   86402 cri.go:89] found id: ""
	I1104 12:12:18.141959   86402 logs.go:282] 0 containers: []
	W1104 12:12:18.141968   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:12:18.141974   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:12:18.142021   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:12:18.174322   86402 cri.go:89] found id: ""
	I1104 12:12:18.174345   86402 logs.go:282] 0 containers: []
	W1104 12:12:18.174353   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:12:18.174361   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:12:18.174514   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:12:18.206742   86402 cri.go:89] found id: ""
	I1104 12:12:18.206766   86402 logs.go:282] 0 containers: []
	W1104 12:12:18.206776   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:12:18.206782   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:12:18.206840   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:12:18.240322   86402 cri.go:89] found id: ""
	I1104 12:12:18.240345   86402 logs.go:282] 0 containers: []
	W1104 12:12:18.240358   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:12:18.240363   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:12:18.240420   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:12:18.272081   86402 cri.go:89] found id: ""
	I1104 12:12:18.272110   86402 logs.go:282] 0 containers: []
	W1104 12:12:18.272121   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:12:18.272128   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:12:18.272211   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:12:18.308604   86402 cri.go:89] found id: ""
	I1104 12:12:18.308629   86402 logs.go:282] 0 containers: []
	W1104 12:12:18.308637   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:12:18.308646   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:12:18.308655   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:12:18.392854   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:12:18.392892   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:12:18.429632   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:12:18.429665   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:12:18.481082   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:12:18.481120   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:12:18.494730   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:12:18.494758   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:12:18.562098   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:12:21.063223   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:12:21.075655   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:12:21.075714   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:12:21.117762   86402 cri.go:89] found id: ""
	I1104 12:12:21.117794   86402 logs.go:282] 0 containers: []
	W1104 12:12:21.117807   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:12:21.117817   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:12:21.117881   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:12:21.153256   86402 cri.go:89] found id: ""
	I1104 12:12:21.153281   86402 logs.go:282] 0 containers: []
	W1104 12:12:21.153289   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:12:21.153295   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:12:21.153355   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:12:21.191477   86402 cri.go:89] found id: ""
	I1104 12:12:21.191519   86402 logs.go:282] 0 containers: []
	W1104 12:12:21.191539   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:12:21.191547   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:12:21.191618   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:12:21.228378   86402 cri.go:89] found id: ""
	I1104 12:12:21.228411   86402 logs.go:282] 0 containers: []
	W1104 12:12:21.228424   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:12:21.228431   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:12:21.228495   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:12:21.265452   86402 cri.go:89] found id: ""
	I1104 12:12:21.265483   86402 logs.go:282] 0 containers: []
	W1104 12:12:21.265493   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:12:21.265501   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:12:21.265561   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:12:21.301073   86402 cri.go:89] found id: ""
	I1104 12:12:21.301099   86402 logs.go:282] 0 containers: []
	W1104 12:12:21.301108   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:12:21.301114   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:12:21.301182   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:12:21.337952   86402 cri.go:89] found id: ""
	I1104 12:12:21.337977   86402 logs.go:282] 0 containers: []
	W1104 12:12:21.337986   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:12:21.337996   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:12:21.338053   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:12:21.371895   86402 cri.go:89] found id: ""
	I1104 12:12:21.371920   86402 logs.go:282] 0 containers: []
	W1104 12:12:21.371929   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:12:21.371937   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:12:21.371950   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:12:21.429757   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:12:21.429789   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:12:21.444365   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:12:21.444418   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:12:21.510971   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:12:21.510990   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:12:21.511002   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:12:21.593605   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:12:21.593639   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:12:20.208004   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:22.706901   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:24.708795   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:20.290832   86301 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:12:20.290885   86301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:12:20.324359   86301 cri.go:89] found id: ""
	I1104 12:12:20.324383   86301 logs.go:282] 0 containers: []
	W1104 12:12:20.324391   86301 logs.go:284] No container was found matching "kindnet"
	I1104 12:12:20.324397   86301 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1104 12:12:20.324442   86301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1104 12:12:20.364466   86301 cri.go:89] found id: "9e9ecf7280a07a47b274097b461b9f3467e388751471e9da1adfae3166380516"
	I1104 12:12:20.364488   86301 cri.go:89] found id: "f8d8096ede6a877ac87491c90844f6cccb8fad4309f4a01e86bf1f7e3a4c9823"
	I1104 12:12:20.364492   86301 cri.go:89] found id: ""
	I1104 12:12:20.364500   86301 logs.go:282] 2 containers: [9e9ecf7280a07a47b274097b461b9f3467e388751471e9da1adfae3166380516 f8d8096ede6a877ac87491c90844f6cccb8fad4309f4a01e86bf1f7e3a4c9823]
	I1104 12:12:20.364557   86301 ssh_runner.go:195] Run: which crictl
	I1104 12:12:20.368440   86301 ssh_runner.go:195] Run: which crictl
	I1104 12:12:20.371967   86301 logs.go:123] Gathering logs for kube-scheduler [c33ea99d25624abd85dcac43e17e1dd6dcea3a7b6333e91e8dc99ad02c037e07] ...
	I1104 12:12:20.371991   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c33ea99d25624abd85dcac43e17e1dd6dcea3a7b6333e91e8dc99ad02c037e07"
	I1104 12:12:20.405547   86301 logs.go:123] Gathering logs for kube-proxy [9e60ae78d5610ae63ce7016b58d60da05791a055324eea67efd0feb374bdd4b4] ...
	I1104 12:12:20.405572   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e60ae78d5610ae63ce7016b58d60da05791a055324eea67efd0feb374bdd4b4"
	I1104 12:12:20.446936   86301 logs.go:123] Gathering logs for storage-provisioner [9e9ecf7280a07a47b274097b461b9f3467e388751471e9da1adfae3166380516] ...
	I1104 12:12:20.446962   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e9ecf7280a07a47b274097b461b9f3467e388751471e9da1adfae3166380516"
	I1104 12:12:20.485811   86301 logs.go:123] Gathering logs for container status ...
	I1104 12:12:20.485838   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:12:20.530775   86301 logs.go:123] Gathering logs for kubelet ...
	I1104 12:12:20.530803   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:12:20.599495   86301 logs.go:123] Gathering logs for dmesg ...
	I1104 12:12:20.599542   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:12:20.614511   86301 logs.go:123] Gathering logs for kube-apiserver [2e1787441f88b7be6fbfb08abfa621dbc26e984bbd40544c92bf02bae7c7709a] ...
	I1104 12:12:20.614543   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2e1787441f88b7be6fbfb08abfa621dbc26e984bbd40544c92bf02bae7c7709a"
	I1104 12:12:20.659277   86301 logs.go:123] Gathering logs for coredns [51442200af1bb501b83cd7280eeee93590e5cb241e91cf5af1608a6eccfdf5a1] ...
	I1104 12:12:20.659316   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 51442200af1bb501b83cd7280eeee93590e5cb241e91cf5af1608a6eccfdf5a1"
	I1104 12:12:20.694675   86301 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:12:20.694707   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:12:21.187670   86301 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:12:21.187705   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1104 12:12:21.308477   86301 logs.go:123] Gathering logs for etcd [1bc906f9e4e940157a2bb10397ebc3ba90f93123792d5de597633f6c5b3c64d7] ...
	I1104 12:12:21.308501   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1bc906f9e4e940157a2bb10397ebc3ba90f93123792d5de597633f6c5b3c64d7"
	I1104 12:12:21.365526   86301 logs.go:123] Gathering logs for kube-controller-manager [1346cefb5059440d0c41bba9a5526748e6c783b8f935f34dcf419f728abcd35e] ...
	I1104 12:12:21.365562   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1346cefb5059440d0c41bba9a5526748e6c783b8f935f34dcf419f728abcd35e"
	I1104 12:12:21.431350   86301 logs.go:123] Gathering logs for storage-provisioner [f8d8096ede6a877ac87491c90844f6cccb8fad4309f4a01e86bf1f7e3a4c9823] ...
	I1104 12:12:21.431381   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f8d8096ede6a877ac87491c90844f6cccb8fad4309f4a01e86bf1f7e3a4c9823"
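
Each "Gathering logs for ..." entry above is backed by the remote command on the following Run line: container components are read with crictl logs --tail 400 <container-id>, while kubelet and CRI-O come from journald. A minimal Go sketch of that dispatch, assuming a local exec call (minikube actually runs these over SSH via ssh_runner) and a hypothetical gatherLogs helper:

package main

import (
	"fmt"
	"os/exec"
)

// gatherLogs mirrors the pattern in the log above: journald units are read
// with journalctl, anything else is treated as a container ID and read with
// crictl. Sketch only; error handling and the SSH transport are omitted.
func gatherLogs(target string) (string, error) {
	var cmd string
	switch target {
	case "kubelet", "crio":
		cmd = fmt.Sprintf("sudo journalctl -u %s -n 400", target)
	default:
		cmd = fmt.Sprintf("sudo /usr/bin/crictl logs --tail 400 %s", target)
	}
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	return string(out), err
}

func main() {
	out, err := gatherLogs("kubelet")
	if err != nil {
		fmt.Println("gathering failed:", err)
	}
	fmt.Print(out)
}
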
	I1104 12:12:23.969966   86301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:12:23.984866   86301 api_server.go:72] duration metric: took 4m16.75797908s to wait for apiserver process to appear ...
	I1104 12:12:23.984895   86301 api_server.go:88] waiting for apiserver healthz status ...
	I1104 12:12:23.984937   86301 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:12:23.984989   86301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:12:24.022326   86301 cri.go:89] found id: "2e1787441f88b7be6fbfb08abfa621dbc26e984bbd40544c92bf02bae7c7709a"
	I1104 12:12:24.022348   86301 cri.go:89] found id: ""
	I1104 12:12:24.022357   86301 logs.go:282] 1 containers: [2e1787441f88b7be6fbfb08abfa621dbc26e984bbd40544c92bf02bae7c7709a]
	I1104 12:12:24.022428   86301 ssh_runner.go:195] Run: which crictl
	I1104 12:12:24.027288   86301 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:12:24.027377   86301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:12:24.064963   86301 cri.go:89] found id: "1bc906f9e4e940157a2bb10397ebc3ba90f93123792d5de597633f6c5b3c64d7"
	I1104 12:12:24.064986   86301 cri.go:89] found id: ""
	I1104 12:12:24.064993   86301 logs.go:282] 1 containers: [1bc906f9e4e940157a2bb10397ebc3ba90f93123792d5de597633f6c5b3c64d7]
	I1104 12:12:24.065045   86301 ssh_runner.go:195] Run: which crictl
	I1104 12:12:24.072027   86301 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:12:24.072089   86301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:12:24.106618   86301 cri.go:89] found id: "51442200af1bb501b83cd7280eeee93590e5cb241e91cf5af1608a6eccfdf5a1"
	I1104 12:12:24.106648   86301 cri.go:89] found id: ""
	I1104 12:12:24.106659   86301 logs.go:282] 1 containers: [51442200af1bb501b83cd7280eeee93590e5cb241e91cf5af1608a6eccfdf5a1]
	I1104 12:12:24.106719   86301 ssh_runner.go:195] Run: which crictl
	I1104 12:12:24.110696   86301 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:12:24.110762   86301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:12:24.148575   86301 cri.go:89] found id: "c33ea99d25624abd85dcac43e17e1dd6dcea3a7b6333e91e8dc99ad02c037e07"
	I1104 12:12:24.148600   86301 cri.go:89] found id: ""
	I1104 12:12:24.148621   86301 logs.go:282] 1 containers: [c33ea99d25624abd85dcac43e17e1dd6dcea3a7b6333e91e8dc99ad02c037e07]
	I1104 12:12:24.148687   86301 ssh_runner.go:195] Run: which crictl
	I1104 12:12:24.152673   86301 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:12:24.152741   86301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:12:24.187739   86301 cri.go:89] found id: "9e60ae78d5610ae63ce7016b58d60da05791a055324eea67efd0feb374bdd4b4"
	I1104 12:12:24.187763   86301 cri.go:89] found id: ""
	I1104 12:12:24.187771   86301 logs.go:282] 1 containers: [9e60ae78d5610ae63ce7016b58d60da05791a055324eea67efd0feb374bdd4b4]
	I1104 12:12:24.187817   86301 ssh_runner.go:195] Run: which crictl
	I1104 12:12:24.191551   86301 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:12:24.191610   86301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:12:24.229634   86301 cri.go:89] found id: "1346cefb5059440d0c41bba9a5526748e6c783b8f935f34dcf419f728abcd35e"
	I1104 12:12:24.229656   86301 cri.go:89] found id: ""
	I1104 12:12:24.229667   86301 logs.go:282] 1 containers: [1346cefb5059440d0c41bba9a5526748e6c783b8f935f34dcf419f728abcd35e]
	I1104 12:12:24.229720   86301 ssh_runner.go:195] Run: which crictl
	I1104 12:12:24.234342   86301 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:12:24.234426   86301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:12:24.268339   86301 cri.go:89] found id: ""
	I1104 12:12:24.268363   86301 logs.go:282] 0 containers: []
	W1104 12:12:24.268370   86301 logs.go:284] No container was found matching "kindnet"
	I1104 12:12:24.268375   86301 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1104 12:12:24.268426   86301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1104 12:12:24.302347   86301 cri.go:89] found id: "9e9ecf7280a07a47b274097b461b9f3467e388751471e9da1adfae3166380516"
	I1104 12:12:24.302369   86301 cri.go:89] found id: "f8d8096ede6a877ac87491c90844f6cccb8fad4309f4a01e86bf1f7e3a4c9823"
	I1104 12:12:24.302374   86301 cri.go:89] found id: ""
	I1104 12:12:24.302382   86301 logs.go:282] 2 containers: [9e9ecf7280a07a47b274097b461b9f3467e388751471e9da1adfae3166380516 f8d8096ede6a877ac87491c90844f6cccb8fad4309f4a01e86bf1f7e3a4c9823]
	I1104 12:12:24.302446   86301 ssh_runner.go:195] Run: which crictl
	I1104 12:12:24.306761   86301 ssh_runner.go:195] Run: which crictl
	I1104 12:12:24.310867   86301 logs.go:123] Gathering logs for coredns [51442200af1bb501b83cd7280eeee93590e5cb241e91cf5af1608a6eccfdf5a1] ...
	I1104 12:12:24.310888   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 51442200af1bb501b83cd7280eeee93590e5cb241e91cf5af1608a6eccfdf5a1"
	I1104 12:12:24.353396   86301 logs.go:123] Gathering logs for kube-controller-manager [1346cefb5059440d0c41bba9a5526748e6c783b8f935f34dcf419f728abcd35e] ...
	I1104 12:12:24.353421   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1346cefb5059440d0c41bba9a5526748e6c783b8f935f34dcf419f728abcd35e"
	I1104 12:12:24.408025   86301 logs.go:123] Gathering logs for storage-provisioner [9e9ecf7280a07a47b274097b461b9f3467e388751471e9da1adfae3166380516] ...
	I1104 12:12:24.408054   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e9ecf7280a07a47b274097b461b9f3467e388751471e9da1adfae3166380516"
	I1104 12:12:24.446150   86301 logs.go:123] Gathering logs for container status ...
	I1104 12:12:24.446177   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:12:24.495479   86301 logs.go:123] Gathering logs for kubelet ...
	I1104 12:12:24.495505   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:12:24.568973   86301 logs.go:123] Gathering logs for dmesg ...
	I1104 12:12:24.569008   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:12:24.585522   86301 logs.go:123] Gathering logs for kube-apiserver [2e1787441f88b7be6fbfb08abfa621dbc26e984bbd40544c92bf02bae7c7709a] ...
	I1104 12:12:24.585552   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2e1787441f88b7be6fbfb08abfa621dbc26e984bbd40544c92bf02bae7c7709a"
	I1104 12:12:24.630483   86301 logs.go:123] Gathering logs for etcd [1bc906f9e4e940157a2bb10397ebc3ba90f93123792d5de597633f6c5b3c64d7] ...
	I1104 12:12:24.630516   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1bc906f9e4e940157a2bb10397ebc3ba90f93123792d5de597633f6c5b3c64d7"
	I1104 12:12:24.675828   86301 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:12:24.675865   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:12:25.094412   86301 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:12:25.094457   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1104 12:12:25.191547   86301 logs.go:123] Gathering logs for kube-scheduler [c33ea99d25624abd85dcac43e17e1dd6dcea3a7b6333e91e8dc99ad02c037e07] ...
	I1104 12:12:25.191576   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c33ea99d25624abd85dcac43e17e1dd6dcea3a7b6333e91e8dc99ad02c037e07"
	I1104 12:12:25.227482   86301 logs.go:123] Gathering logs for kube-proxy [9e60ae78d5610ae63ce7016b58d60da05791a055324eea67efd0feb374bdd4b4] ...
	I1104 12:12:25.227509   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e60ae78d5610ae63ce7016b58d60da05791a055324eea67efd0feb374bdd4b4"
	I1104 12:12:25.261150   86301 logs.go:123] Gathering logs for storage-provisioner [f8d8096ede6a877ac87491c90844f6cccb8fad4309f4a01e86bf1f7e3a4c9823] ...
	I1104 12:12:25.261184   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f8d8096ede6a877ac87491c90844f6cccb8fad4309f4a01e86bf1f7e3a4c9823"
	I1104 12:12:24.130961   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:12:24.143387   86402 kubeadm.go:597] duration metric: took 4m4.25221988s to restartPrimaryControlPlane
	W1104 12:12:24.143472   86402 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1104 12:12:24.143499   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1104 12:12:27.207964   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:29.208705   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:27.799329   86301 api_server.go:253] Checking apiserver healthz at https://192.168.72.130:8444/healthz ...
	I1104 12:12:27.803543   86301 api_server.go:279] https://192.168.72.130:8444/healthz returned 200:
	ok
	I1104 12:12:27.804545   86301 api_server.go:141] control plane version: v1.31.2
	I1104 12:12:27.804568   86301 api_server.go:131] duration metric: took 3.819666619s to wait for apiserver health ...
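
The healthz check above polls the apiserver endpoint over HTTPS until it answers 200. A minimal, self-contained Go sketch of such a poll, assuming a hypothetical waitForHealthz helper, a 2-second retry interval, and skipped TLS verification (the real minikube code validates against the cluster CA):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver's /healthz endpoint until it returns
// HTTP 200 or the deadline passes, mirroring the api_server.go lines above.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		// The test cluster uses a self-signed CA; this sketch skips
		// certificate verification instead of loading that CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // corresponds to "healthz returned 200: ok"
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("apiserver at %s not healthy within %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.72.130:8444/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
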
	I1104 12:12:27.804576   86301 system_pods.go:43] waiting for kube-system pods to appear ...
	I1104 12:12:27.804596   86301 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:12:27.804639   86301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:12:27.842317   86301 cri.go:89] found id: "2e1787441f88b7be6fbfb08abfa621dbc26e984bbd40544c92bf02bae7c7709a"
	I1104 12:12:27.842339   86301 cri.go:89] found id: ""
	I1104 12:12:27.842348   86301 logs.go:282] 1 containers: [2e1787441f88b7be6fbfb08abfa621dbc26e984bbd40544c92bf02bae7c7709a]
	I1104 12:12:27.842403   86301 ssh_runner.go:195] Run: which crictl
	I1104 12:12:27.846107   86301 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:12:27.846167   86301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:12:27.878833   86301 cri.go:89] found id: "1bc906f9e4e940157a2bb10397ebc3ba90f93123792d5de597633f6c5b3c64d7"
	I1104 12:12:27.878854   86301 cri.go:89] found id: ""
	I1104 12:12:27.878864   86301 logs.go:282] 1 containers: [1bc906f9e4e940157a2bb10397ebc3ba90f93123792d5de597633f6c5b3c64d7]
	I1104 12:12:27.878923   86301 ssh_runner.go:195] Run: which crictl
	I1104 12:12:27.882562   86301 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:12:27.882614   86301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:12:27.914077   86301 cri.go:89] found id: "51442200af1bb501b83cd7280eeee93590e5cb241e91cf5af1608a6eccfdf5a1"
	I1104 12:12:27.914098   86301 cri.go:89] found id: ""
	I1104 12:12:27.914106   86301 logs.go:282] 1 containers: [51442200af1bb501b83cd7280eeee93590e5cb241e91cf5af1608a6eccfdf5a1]
	I1104 12:12:27.914150   86301 ssh_runner.go:195] Run: which crictl
	I1104 12:12:27.917756   86301 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:12:27.917807   86301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:12:27.949534   86301 cri.go:89] found id: "c33ea99d25624abd85dcac43e17e1dd6dcea3a7b6333e91e8dc99ad02c037e07"
	I1104 12:12:27.949555   86301 cri.go:89] found id: ""
	I1104 12:12:27.949562   86301 logs.go:282] 1 containers: [c33ea99d25624abd85dcac43e17e1dd6dcea3a7b6333e91e8dc99ad02c037e07]
	I1104 12:12:27.949606   86301 ssh_runner.go:195] Run: which crictl
	I1104 12:12:27.953176   86301 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:12:27.953235   86301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:12:27.984491   86301 cri.go:89] found id: "9e60ae78d5610ae63ce7016b58d60da05791a055324eea67efd0feb374bdd4b4"
	I1104 12:12:27.984509   86301 cri.go:89] found id: ""
	I1104 12:12:27.984516   86301 logs.go:282] 1 containers: [9e60ae78d5610ae63ce7016b58d60da05791a055324eea67efd0feb374bdd4b4]
	I1104 12:12:27.984566   86301 ssh_runner.go:195] Run: which crictl
	I1104 12:12:27.988283   86301 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:12:27.988342   86301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:12:28.022752   86301 cri.go:89] found id: "1346cefb5059440d0c41bba9a5526748e6c783b8f935f34dcf419f728abcd35e"
	I1104 12:12:28.022775   86301 cri.go:89] found id: ""
	I1104 12:12:28.022783   86301 logs.go:282] 1 containers: [1346cefb5059440d0c41bba9a5526748e6c783b8f935f34dcf419f728abcd35e]
	I1104 12:12:28.022829   86301 ssh_runner.go:195] Run: which crictl
	I1104 12:12:28.026702   86301 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:12:28.026767   86301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:12:28.062501   86301 cri.go:89] found id: ""
	I1104 12:12:28.062534   86301 logs.go:282] 0 containers: []
	W1104 12:12:28.062545   86301 logs.go:284] No container was found matching "kindnet"
	I1104 12:12:28.062556   86301 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1104 12:12:28.062608   86301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1104 12:12:28.097167   86301 cri.go:89] found id: "9e9ecf7280a07a47b274097b461b9f3467e388751471e9da1adfae3166380516"
	I1104 12:12:28.097195   86301 cri.go:89] found id: "f8d8096ede6a877ac87491c90844f6cccb8fad4309f4a01e86bf1f7e3a4c9823"
	I1104 12:12:28.097201   86301 cri.go:89] found id: ""
	I1104 12:12:28.097211   86301 logs.go:282] 2 containers: [9e9ecf7280a07a47b274097b461b9f3467e388751471e9da1adfae3166380516 f8d8096ede6a877ac87491c90844f6cccb8fad4309f4a01e86bf1f7e3a4c9823]
	I1104 12:12:28.097276   86301 ssh_runner.go:195] Run: which crictl
	I1104 12:12:28.101192   86301 ssh_runner.go:195] Run: which crictl
	I1104 12:12:28.104712   86301 logs.go:123] Gathering logs for dmesg ...
	I1104 12:12:28.104731   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:12:28.118886   86301 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:12:28.118911   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1104 12:12:28.220480   86301 logs.go:123] Gathering logs for etcd [1bc906f9e4e940157a2bb10397ebc3ba90f93123792d5de597633f6c5b3c64d7] ...
	I1104 12:12:28.220512   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1bc906f9e4e940157a2bb10397ebc3ba90f93123792d5de597633f6c5b3c64d7"
	I1104 12:12:28.264205   86301 logs.go:123] Gathering logs for coredns [51442200af1bb501b83cd7280eeee93590e5cb241e91cf5af1608a6eccfdf5a1] ...
	I1104 12:12:28.264239   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 51442200af1bb501b83cd7280eeee93590e5cb241e91cf5af1608a6eccfdf5a1"
	I1104 12:12:28.299241   86301 logs.go:123] Gathering logs for kube-scheduler [c33ea99d25624abd85dcac43e17e1dd6dcea3a7b6333e91e8dc99ad02c037e07] ...
	I1104 12:12:28.299274   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c33ea99d25624abd85dcac43e17e1dd6dcea3a7b6333e91e8dc99ad02c037e07"
	I1104 12:12:28.339817   86301 logs.go:123] Gathering logs for kube-proxy [9e60ae78d5610ae63ce7016b58d60da05791a055324eea67efd0feb374bdd4b4] ...
	I1104 12:12:28.339847   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e60ae78d5610ae63ce7016b58d60da05791a055324eea67efd0feb374bdd4b4"
	I1104 12:12:28.377987   86301 logs.go:123] Gathering logs for container status ...
	I1104 12:12:28.378014   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:12:28.416746   86301 logs.go:123] Gathering logs for kubelet ...
	I1104 12:12:28.416772   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:12:28.484743   86301 logs.go:123] Gathering logs for kube-apiserver [2e1787441f88b7be6fbfb08abfa621dbc26e984bbd40544c92bf02bae7c7709a] ...
	I1104 12:12:28.484777   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2e1787441f88b7be6fbfb08abfa621dbc26e984bbd40544c92bf02bae7c7709a"
	I1104 12:12:28.532089   86301 logs.go:123] Gathering logs for kube-controller-manager [1346cefb5059440d0c41bba9a5526748e6c783b8f935f34dcf419f728abcd35e] ...
	I1104 12:12:28.532128   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1346cefb5059440d0c41bba9a5526748e6c783b8f935f34dcf419f728abcd35e"
	I1104 12:12:28.589039   86301 logs.go:123] Gathering logs for storage-provisioner [9e9ecf7280a07a47b274097b461b9f3467e388751471e9da1adfae3166380516] ...
	I1104 12:12:28.589072   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e9ecf7280a07a47b274097b461b9f3467e388751471e9da1adfae3166380516"
	I1104 12:12:28.623955   86301 logs.go:123] Gathering logs for storage-provisioner [f8d8096ede6a877ac87491c90844f6cccb8fad4309f4a01e86bf1f7e3a4c9823] ...
	I1104 12:12:28.623987   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f8d8096ede6a877ac87491c90844f6cccb8fad4309f4a01e86bf1f7e3a4c9823"
	I1104 12:12:28.657953   86301 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:12:28.657986   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:12:31.547595   86301 system_pods.go:59] 8 kube-system pods found
	I1104 12:12:31.547624   86301 system_pods.go:61] "coredns-7c65d6cfc9-zw2tv" [71ce75a4-f051-4014-9ed0-7b275ea940a9] Running
	I1104 12:12:31.547629   86301 system_pods.go:61] "etcd-default-k8s-diff-port-036892" [7e46d97c-96b5-4301-b98a-f33b69937411] Running
	I1104 12:12:31.547633   86301 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-036892" [483cebd0-7ceb-4bf4-b1f9-e33be61b136e] Running
	I1104 12:12:31.547637   86301 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-036892" [c2dc4343-177a-4a4c-8a25-47078ec664f1] Running
	I1104 12:12:31.547640   86301 system_pods.go:61] "kube-proxy-j2srm" [9450cebd-aefb-4f1a-bb99-7d1dab054dd7] Running
	I1104 12:12:31.547643   86301 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-036892" [505d8202-5e02-4abd-9eff-163810a91eb2] Running
	I1104 12:12:31.547649   86301 system_pods.go:61] "metrics-server-6867b74b74-2wl94" [7f7cc9c1-420c-480e-b6b7-1a2027bf2f9d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1104 12:12:31.547653   86301 system_pods.go:61] "storage-provisioner" [18745f89-fc15-4a4c-b68b-7e80cd4f393b] Running
	I1104 12:12:31.547661   86301 system_pods.go:74] duration metric: took 3.743079115s to wait for pod list to return data ...
	I1104 12:12:31.547667   86301 default_sa.go:34] waiting for default service account to be created ...
	I1104 12:12:31.550088   86301 default_sa.go:45] found service account: "default"
	I1104 12:12:31.550108   86301 default_sa.go:55] duration metric: took 2.435317ms for default service account to be created ...
	I1104 12:12:31.550114   86301 system_pods.go:116] waiting for k8s-apps to be running ...
	I1104 12:12:31.554898   86301 system_pods.go:86] 8 kube-system pods found
	I1104 12:12:31.554924   86301 system_pods.go:89] "coredns-7c65d6cfc9-zw2tv" [71ce75a4-f051-4014-9ed0-7b275ea940a9] Running
	I1104 12:12:31.554929   86301 system_pods.go:89] "etcd-default-k8s-diff-port-036892" [7e46d97c-96b5-4301-b98a-f33b69937411] Running
	I1104 12:12:31.554933   86301 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-036892" [483cebd0-7ceb-4bf4-b1f9-e33be61b136e] Running
	I1104 12:12:31.554937   86301 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-036892" [c2dc4343-177a-4a4c-8a25-47078ec664f1] Running
	I1104 12:12:31.554941   86301 system_pods.go:89] "kube-proxy-j2srm" [9450cebd-aefb-4f1a-bb99-7d1dab054dd7] Running
	I1104 12:12:31.554945   86301 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-036892" [505d8202-5e02-4abd-9eff-163810a91eb2] Running
	I1104 12:12:31.554952   86301 system_pods.go:89] "metrics-server-6867b74b74-2wl94" [7f7cc9c1-420c-480e-b6b7-1a2027bf2f9d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1104 12:12:31.554955   86301 system_pods.go:89] "storage-provisioner" [18745f89-fc15-4a4c-b68b-7e80cd4f393b] Running
	I1104 12:12:31.554962   86301 system_pods.go:126] duration metric: took 4.842911ms to wait for k8s-apps to be running ...
	I1104 12:12:31.554968   86301 system_svc.go:44] waiting for kubelet service to be running ....
	I1104 12:12:31.555008   86301 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1104 12:12:31.568927   86301 system_svc.go:56] duration metric: took 13.948557ms WaitForService to wait for kubelet
	I1104 12:12:31.568958   86301 kubeadm.go:582] duration metric: took 4m24.342075873s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1104 12:12:31.568987   86301 node_conditions.go:102] verifying NodePressure condition ...
	I1104 12:12:31.571962   86301 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1104 12:12:31.571983   86301 node_conditions.go:123] node cpu capacity is 2
	I1104 12:12:31.571993   86301 node_conditions.go:105] duration metric: took 3.000591ms to run NodePressure ...
	I1104 12:12:31.572004   86301 start.go:241] waiting for startup goroutines ...
	I1104 12:12:31.572010   86301 start.go:246] waiting for cluster config update ...
	I1104 12:12:31.572019   86301 start.go:255] writing updated cluster config ...
	I1104 12:12:31.572277   86301 ssh_runner.go:195] Run: rm -f paused
	I1104 12:12:31.620935   86301 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1104 12:12:31.623672   86301 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-036892" cluster and "default" namespace by default
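
The "(minor skew: 0)" note above compares the minor version of the local kubectl with the cluster's control-plane version before the start is declared done. A rough sketch of how such a skew could be computed; the minorSkew helper and the string parsing are assumptions, not minikube's actual implementation:

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minorSkew returns the absolute difference between the minor components of
// two "major.minor.patch" version strings, e.g. "1.31.2" vs "1.31.2" -> 0.
// A real implementation would use a semver library and handle pre-release
// suffixes.
func minorSkew(kubectlVer, clusterVer string) (int, error) {
	minor := func(v string) (int, error) {
		parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
		if len(parts) < 2 {
			return 0, fmt.Errorf("unexpected version %q", v)
		}
		return strconv.Atoi(parts[1])
	}
	a, err := minor(kubectlVer)
	if err != nil {
		return 0, err
	}
	b, err := minor(clusterVer)
	if err != nil {
		return 0, err
	}
	if a > b {
		return a - b, nil
	}
	return b - a, nil
}

func main() {
	skew, _ := minorSkew("1.31.2", "1.31.2")
	fmt.Println("minor skew:", skew) // 0
}
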
	I1104 12:12:28.876306   86402 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.732783523s)
	I1104 12:12:28.876377   86402 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1104 12:12:28.890455   86402 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1104 12:12:28.899660   86402 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1104 12:12:28.908658   86402 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1104 12:12:28.908675   86402 kubeadm.go:157] found existing configuration files:
	
	I1104 12:12:28.908715   86402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1104 12:12:28.916955   86402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1104 12:12:28.917013   86402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1104 12:12:28.927198   86402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1104 12:12:28.936868   86402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1104 12:12:28.936924   86402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1104 12:12:28.947246   86402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1104 12:12:28.956962   86402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1104 12:12:28.957015   86402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1104 12:12:28.967293   86402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1104 12:12:28.976975   86402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1104 12:12:28.977030   86402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
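
The sequence above is the stale-config check before kubeadm init: each kubeconfig under /etc/kubernetes is grepped for the expected control-plane endpoint and removed when that endpoint is absent (here every grep exits with status 2 because the files do not exist). A small Go sketch of that loop, assuming local exec calls (minikube issues them over SSH) and a hypothetical cleanStaleKubeconfigs helper:

package main

import (
	"fmt"
	"os/exec"
)

// cleanStaleKubeconfigs keeps a kubeconfig only if it already references the
// expected control-plane endpoint; otherwise it removes the file so kubeadm
// can regenerate it, matching the grep/rm pairs in the log above.
func cleanStaleKubeconfigs(endpoint string) {
	confs := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, conf := range confs {
		// grep exits non-zero when the endpoint is absent or the file is
		// missing (status 2 in the log above), which triggers removal.
		if err := exec.Command("sudo", "grep", endpoint, conf).Run(); err != nil {
			_ = exec.Command("sudo", "rm", "-f", conf).Run()
		}
	}
}

func main() {
	cleanStaleKubeconfigs("https://control-plane.minikube.internal:8443")
	fmt.Println("stale kubeconfigs removed where the endpoint was missing")
}
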
	I1104 12:12:28.988547   86402 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1104 12:12:29.198333   86402 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1104 12:12:31.709511   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:34.207341   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:36.707962   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:39.208138   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:41.208806   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:43.707896   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:46.207316   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:48.707107   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:50.707644   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:52.708268   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:54.708517   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:57.206564   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:59.207122   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:13:01.207195   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:13:03.207617   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:13:05.707763   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:13:07.708314   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:13:09.708374   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:13:10.702085   85500 pod_ready.go:82] duration metric: took 4m0.000587313s for pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace to be "Ready" ...
	E1104 12:13:10.702115   85500 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1104 12:13:10.702126   85500 pod_ready.go:39] duration metric: took 4m5.542549912s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
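
The pod_ready.go lines above are a poll of the metrics-server pod's Ready condition that finally hits its 4-minute deadline, surfacing as "context deadline exceeded". A hedged client-go sketch of such a wait, assuming a hypothetical waitPodReady helper, a 2-second interval, and wait.PollUntilContextTimeout (minikube's own helper lives in pod_ready.go and may differ):

package readiness

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitPodReady polls a pod's Ready condition until it is True or the timeout
// elapses; on timeout the caller sees a "context deadline exceeded" error
// like the one logged above.
func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // keep retrying on transient errors
			}
			for _, cond := range pod.Status.Conditions {
				if cond.Type == corev1.PodReady {
					return cond.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

With a clientset built from the node's kubeconfig, a call like waitPodReady(ctx, cs, "kube-system", "metrics-server-6867b74b74-2lxlg", 4*time.Minute) would time out here, since the pod never reports Ready during the wait shown above.
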
	I1104 12:13:10.702144   85500 api_server.go:52] waiting for apiserver process to appear ...
	I1104 12:13:10.702191   85500 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:13:10.702246   85500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:13:10.743079   85500 cri.go:89] found id: "e74398c77b3ca0adb85872c2f97209b51f4c13968147f500a41590f72d758dea"
	I1104 12:13:10.743102   85500 cri.go:89] found id: ""
	I1104 12:13:10.743110   85500 logs.go:282] 1 containers: [e74398c77b3ca0adb85872c2f97209b51f4c13968147f500a41590f72d758dea]
	I1104 12:13:10.743176   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:13:10.747213   85500 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:13:10.747275   85500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:13:10.781435   85500 cri.go:89] found id: "1390676564c7e4039f73139668fc6a3c321c186b612f1f1837e21788a5c0aa82"
	I1104 12:13:10.781465   85500 cri.go:89] found id: ""
	I1104 12:13:10.781474   85500 logs.go:282] 1 containers: [1390676564c7e4039f73139668fc6a3c321c186b612f1f1837e21788a5c0aa82]
	I1104 12:13:10.781597   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:13:10.785383   85500 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:13:10.785453   85500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:13:10.825927   85500 cri.go:89] found id: "6dcd13443296397949a6314fd6007ac667c05b70bd43f14e2e9b54f3313440de"
	I1104 12:13:10.825956   85500 cri.go:89] found id: ""
	I1104 12:13:10.825965   85500 logs.go:282] 1 containers: [6dcd13443296397949a6314fd6007ac667c05b70bd43f14e2e9b54f3313440de]
	I1104 12:13:10.826023   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:13:10.829834   85500 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:13:10.829899   85500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:13:10.872447   85500 cri.go:89] found id: "5546d06c4d51e3fa5699a7351286904174305e9c759397f8d2f640c6ce17d456"
	I1104 12:13:10.872468   85500 cri.go:89] found id: ""
	I1104 12:13:10.872475   85500 logs.go:282] 1 containers: [5546d06c4d51e3fa5699a7351286904174305e9c759397f8d2f640c6ce17d456]
	I1104 12:13:10.872524   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:13:10.876428   85500 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:13:10.876483   85500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:13:10.911092   85500 cri.go:89] found id: "33418a9cb2f8aea69efe66db3f38e3a66b699fea5455b72e6b94484067b704a3"
	I1104 12:13:10.911125   85500 cri.go:89] found id: ""
	I1104 12:13:10.911134   85500 logs.go:282] 1 containers: [33418a9cb2f8aea69efe66db3f38e3a66b699fea5455b72e6b94484067b704a3]
	I1104 12:13:10.911190   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:13:10.915021   85500 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:13:10.915076   85500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:13:10.950838   85500 cri.go:89] found id: "9c3fa7870c72468d3a6283be42fb2724275d8c5937f6728c8bc97103c58b2ebd"
	I1104 12:13:10.950863   85500 cri.go:89] found id: ""
	I1104 12:13:10.950873   85500 logs.go:282] 1 containers: [9c3fa7870c72468d3a6283be42fb2724275d8c5937f6728c8bc97103c58b2ebd]
	I1104 12:13:10.950935   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:13:10.954889   85500 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:13:10.954938   85500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:13:10.991580   85500 cri.go:89] found id: ""
	I1104 12:13:10.991609   85500 logs.go:282] 0 containers: []
	W1104 12:13:10.991618   85500 logs.go:284] No container was found matching "kindnet"
	I1104 12:13:10.991625   85500 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1104 12:13:10.991689   85500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1104 12:13:11.031428   85500 cri.go:89] found id: "d4f6c824f92eecdb4ebc13e783585c25292f412e0f0a0d7b9fd0eab092fa8e41"
	I1104 12:13:11.031469   85500 cri.go:89] found id: "162e3330ff77f795d15e11b3abf1d3019490bff78148c8d6b470ef1507dcb67d"
	I1104 12:13:11.031474   85500 cri.go:89] found id: ""
	I1104 12:13:11.031484   85500 logs.go:282] 2 containers: [d4f6c824f92eecdb4ebc13e783585c25292f412e0f0a0d7b9fd0eab092fa8e41 162e3330ff77f795d15e11b3abf1d3019490bff78148c8d6b470ef1507dcb67d]
	I1104 12:13:11.031557   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:13:11.035810   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:13:11.039555   85500 logs.go:123] Gathering logs for coredns [6dcd13443296397949a6314fd6007ac667c05b70bd43f14e2e9b54f3313440de] ...
	I1104 12:13:11.039582   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6dcd13443296397949a6314fd6007ac667c05b70bd43f14e2e9b54f3313440de"
	I1104 12:13:11.076837   85500 logs.go:123] Gathering logs for kube-scheduler [5546d06c4d51e3fa5699a7351286904174305e9c759397f8d2f640c6ce17d456] ...
	I1104 12:13:11.076865   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5546d06c4d51e3fa5699a7351286904174305e9c759397f8d2f640c6ce17d456"
	I1104 12:13:11.114534   85500 logs.go:123] Gathering logs for kube-proxy [33418a9cb2f8aea69efe66db3f38e3a66b699fea5455b72e6b94484067b704a3] ...
	I1104 12:13:11.114561   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33418a9cb2f8aea69efe66db3f38e3a66b699fea5455b72e6b94484067b704a3"
	I1104 12:13:11.148897   85500 logs.go:123] Gathering logs for storage-provisioner [d4f6c824f92eecdb4ebc13e783585c25292f412e0f0a0d7b9fd0eab092fa8e41] ...
	I1104 12:13:11.148935   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d4f6c824f92eecdb4ebc13e783585c25292f412e0f0a0d7b9fd0eab092fa8e41"
	I1104 12:13:11.184480   85500 logs.go:123] Gathering logs for kubelet ...
	I1104 12:13:11.184511   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:13:11.256197   85500 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:13:11.256237   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1104 12:13:11.368984   85500 logs.go:123] Gathering logs for kube-apiserver [e74398c77b3ca0adb85872c2f97209b51f4c13968147f500a41590f72d758dea] ...
	I1104 12:13:11.369014   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e74398c77b3ca0adb85872c2f97209b51f4c13968147f500a41590f72d758dea"
	I1104 12:13:11.414219   85500 logs.go:123] Gathering logs for etcd [1390676564c7e4039f73139668fc6a3c321c186b612f1f1837e21788a5c0aa82] ...
	I1104 12:13:11.414253   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1390676564c7e4039f73139668fc6a3c321c186b612f1f1837e21788a5c0aa82"
	I1104 12:13:11.455746   85500 logs.go:123] Gathering logs for storage-provisioner [162e3330ff77f795d15e11b3abf1d3019490bff78148c8d6b470ef1507dcb67d] ...
	I1104 12:13:11.455776   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 162e3330ff77f795d15e11b3abf1d3019490bff78148c8d6b470ef1507dcb67d"
	I1104 12:13:11.491699   85500 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:13:11.491726   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:13:11.962368   85500 logs.go:123] Gathering logs for dmesg ...
	I1104 12:13:11.962400   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:13:11.975564   85500 logs.go:123] Gathering logs for kube-controller-manager [9c3fa7870c72468d3a6283be42fb2724275d8c5937f6728c8bc97103c58b2ebd] ...
	I1104 12:13:11.975590   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9c3fa7870c72468d3a6283be42fb2724275d8c5937f6728c8bc97103c58b2ebd"
	I1104 12:13:12.031427   85500 logs.go:123] Gathering logs for container status ...
	I1104 12:13:12.031461   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:13:14.572933   85500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:13:14.588140   85500 api_server.go:72] duration metric: took 4m17.141131339s to wait for apiserver process to appear ...
	I1104 12:13:14.588168   85500 api_server.go:88] waiting for apiserver healthz status ...
	I1104 12:13:14.588196   85500 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:13:14.588243   85500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:13:14.621509   85500 cri.go:89] found id: "e74398c77b3ca0adb85872c2f97209b51f4c13968147f500a41590f72d758dea"
	I1104 12:13:14.621534   85500 cri.go:89] found id: ""
	I1104 12:13:14.621543   85500 logs.go:282] 1 containers: [e74398c77b3ca0adb85872c2f97209b51f4c13968147f500a41590f72d758dea]
	I1104 12:13:14.621601   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:13:14.626328   85500 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:13:14.626384   85500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:13:14.662052   85500 cri.go:89] found id: "1390676564c7e4039f73139668fc6a3c321c186b612f1f1837e21788a5c0aa82"
	I1104 12:13:14.662079   85500 cri.go:89] found id: ""
	I1104 12:13:14.662115   85500 logs.go:282] 1 containers: [1390676564c7e4039f73139668fc6a3c321c186b612f1f1837e21788a5c0aa82]
	I1104 12:13:14.662174   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:13:14.666018   85500 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:13:14.666089   85500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:13:14.702872   85500 cri.go:89] found id: "6dcd13443296397949a6314fd6007ac667c05b70bd43f14e2e9b54f3313440de"
	I1104 12:13:14.702897   85500 cri.go:89] found id: ""
	I1104 12:13:14.702910   85500 logs.go:282] 1 containers: [6dcd13443296397949a6314fd6007ac667c05b70bd43f14e2e9b54f3313440de]
	I1104 12:13:14.702968   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:13:14.706809   85500 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:13:14.706883   85500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:13:14.744985   85500 cri.go:89] found id: "5546d06c4d51e3fa5699a7351286904174305e9c759397f8d2f640c6ce17d456"
	I1104 12:13:14.745005   85500 cri.go:89] found id: ""
	I1104 12:13:14.745012   85500 logs.go:282] 1 containers: [5546d06c4d51e3fa5699a7351286904174305e9c759397f8d2f640c6ce17d456]
	I1104 12:13:14.745058   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:13:14.749441   85500 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:13:14.749497   85500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:13:14.781617   85500 cri.go:89] found id: "33418a9cb2f8aea69efe66db3f38e3a66b699fea5455b72e6b94484067b704a3"
	I1104 12:13:14.781644   85500 cri.go:89] found id: ""
	I1104 12:13:14.781653   85500 logs.go:282] 1 containers: [33418a9cb2f8aea69efe66db3f38e3a66b699fea5455b72e6b94484067b704a3]
	I1104 12:13:14.781709   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:13:14.785971   85500 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:13:14.786046   85500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:13:14.819002   85500 cri.go:89] found id: "9c3fa7870c72468d3a6283be42fb2724275d8c5937f6728c8bc97103c58b2ebd"
	I1104 12:13:14.819029   85500 cri.go:89] found id: ""
	I1104 12:13:14.819038   85500 logs.go:282] 1 containers: [9c3fa7870c72468d3a6283be42fb2724275d8c5937f6728c8bc97103c58b2ebd]
	I1104 12:13:14.819101   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:13:14.823075   85500 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:13:14.823143   85500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:13:14.858936   85500 cri.go:89] found id: ""
	I1104 12:13:14.858965   85500 logs.go:282] 0 containers: []
	W1104 12:13:14.858977   85500 logs.go:284] No container was found matching "kindnet"
	I1104 12:13:14.858984   85500 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1104 12:13:14.859048   85500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1104 12:13:14.898303   85500 cri.go:89] found id: "d4f6c824f92eecdb4ebc13e783585c25292f412e0f0a0d7b9fd0eab092fa8e41"
	I1104 12:13:14.898327   85500 cri.go:89] found id: "162e3330ff77f795d15e11b3abf1d3019490bff78148c8d6b470ef1507dcb67d"
	I1104 12:13:14.898333   85500 cri.go:89] found id: ""
	I1104 12:13:14.898341   85500 logs.go:282] 2 containers: [d4f6c824f92eecdb4ebc13e783585c25292f412e0f0a0d7b9fd0eab092fa8e41 162e3330ff77f795d15e11b3abf1d3019490bff78148c8d6b470ef1507dcb67d]
	I1104 12:13:14.898402   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:13:14.902325   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:13:14.905855   85500 logs.go:123] Gathering logs for kubelet ...
	I1104 12:13:14.905880   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:13:14.973356   85500 logs.go:123] Gathering logs for dmesg ...
	I1104 12:13:14.973389   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:13:14.988655   85500 logs.go:123] Gathering logs for kube-scheduler [5546d06c4d51e3fa5699a7351286904174305e9c759397f8d2f640c6ce17d456] ...
	I1104 12:13:14.988696   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5546d06c4d51e3fa5699a7351286904174305e9c759397f8d2f640c6ce17d456"
	I1104 12:13:15.023407   85500 logs.go:123] Gathering logs for kube-controller-manager [9c3fa7870c72468d3a6283be42fb2724275d8c5937f6728c8bc97103c58b2ebd] ...
	I1104 12:13:15.023443   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9c3fa7870c72468d3a6283be42fb2724275d8c5937f6728c8bc97103c58b2ebd"
	I1104 12:13:15.078974   85500 logs.go:123] Gathering logs for storage-provisioner [162e3330ff77f795d15e11b3abf1d3019490bff78148c8d6b470ef1507dcb67d] ...
	I1104 12:13:15.079007   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 162e3330ff77f795d15e11b3abf1d3019490bff78148c8d6b470ef1507dcb67d"
	I1104 12:13:15.114147   85500 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:13:15.114180   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:13:15.559434   85500 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:13:15.559477   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1104 12:13:15.666481   85500 logs.go:123] Gathering logs for kube-apiserver [e74398c77b3ca0adb85872c2f97209b51f4c13968147f500a41590f72d758dea] ...
	I1104 12:13:15.666509   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e74398c77b3ca0adb85872c2f97209b51f4c13968147f500a41590f72d758dea"
	I1104 12:13:15.728066   85500 logs.go:123] Gathering logs for etcd [1390676564c7e4039f73139668fc6a3c321c186b612f1f1837e21788a5c0aa82] ...
	I1104 12:13:15.728101   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1390676564c7e4039f73139668fc6a3c321c186b612f1f1837e21788a5c0aa82"
	I1104 12:13:15.769721   85500 logs.go:123] Gathering logs for coredns [6dcd13443296397949a6314fd6007ac667c05b70bd43f14e2e9b54f3313440de] ...
	I1104 12:13:15.769759   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6dcd13443296397949a6314fd6007ac667c05b70bd43f14e2e9b54f3313440de"
	I1104 12:13:15.802131   85500 logs.go:123] Gathering logs for kube-proxy [33418a9cb2f8aea69efe66db3f38e3a66b699fea5455b72e6b94484067b704a3] ...
	I1104 12:13:15.802170   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33418a9cb2f8aea69efe66db3f38e3a66b699fea5455b72e6b94484067b704a3"
	I1104 12:13:15.837613   85500 logs.go:123] Gathering logs for storage-provisioner [d4f6c824f92eecdb4ebc13e783585c25292f412e0f0a0d7b9fd0eab092fa8e41] ...
	I1104 12:13:15.837639   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d4f6c824f92eecdb4ebc13e783585c25292f412e0f0a0d7b9fd0eab092fa8e41"
	I1104 12:13:15.874374   85500 logs.go:123] Gathering logs for container status ...
	I1104 12:13:15.874407   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:13:18.413199   85500 api_server.go:253] Checking apiserver healthz at https://192.168.61.91:8443/healthz ...
	I1104 12:13:18.418522   85500 api_server.go:279] https://192.168.61.91:8443/healthz returned 200:
	ok
	I1104 12:13:18.419487   85500 api_server.go:141] control plane version: v1.31.2
	I1104 12:13:18.419512   85500 api_server.go:131] duration metric: took 3.831337085s to wait for apiserver health ...
	I1104 12:13:18.419521   85500 system_pods.go:43] waiting for kube-system pods to appear ...
	I1104 12:13:18.419549   85500 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:13:18.419605   85500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:13:18.453835   85500 cri.go:89] found id: "e74398c77b3ca0adb85872c2f97209b51f4c13968147f500a41590f72d758dea"
	I1104 12:13:18.453856   85500 cri.go:89] found id: ""
	I1104 12:13:18.453865   85500 logs.go:282] 1 containers: [e74398c77b3ca0adb85872c2f97209b51f4c13968147f500a41590f72d758dea]
	I1104 12:13:18.453927   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:13:18.458136   85500 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:13:18.458198   85500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:13:18.496587   85500 cri.go:89] found id: "1390676564c7e4039f73139668fc6a3c321c186b612f1f1837e21788a5c0aa82"
	I1104 12:13:18.496623   85500 cri.go:89] found id: ""
	I1104 12:13:18.496634   85500 logs.go:282] 1 containers: [1390676564c7e4039f73139668fc6a3c321c186b612f1f1837e21788a5c0aa82]
	I1104 12:13:18.496691   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:13:18.500451   85500 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:13:18.500523   85500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:13:18.532756   85500 cri.go:89] found id: "6dcd13443296397949a6314fd6007ac667c05b70bd43f14e2e9b54f3313440de"
	I1104 12:13:18.532785   85500 cri.go:89] found id: ""
	I1104 12:13:18.532795   85500 logs.go:282] 1 containers: [6dcd13443296397949a6314fd6007ac667c05b70bd43f14e2e9b54f3313440de]
	I1104 12:13:18.532857   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:13:18.537239   85500 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:13:18.537293   85500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:13:18.569348   85500 cri.go:89] found id: "5546d06c4d51e3fa5699a7351286904174305e9c759397f8d2f640c6ce17d456"
	I1104 12:13:18.569374   85500 cri.go:89] found id: ""
	I1104 12:13:18.569382   85500 logs.go:282] 1 containers: [5546d06c4d51e3fa5699a7351286904174305e9c759397f8d2f640c6ce17d456]
	I1104 12:13:18.569440   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:13:18.573491   85500 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:13:18.573563   85500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:13:18.606857   85500 cri.go:89] found id: "33418a9cb2f8aea69efe66db3f38e3a66b699fea5455b72e6b94484067b704a3"
	I1104 12:13:18.606886   85500 cri.go:89] found id: ""
	I1104 12:13:18.606896   85500 logs.go:282] 1 containers: [33418a9cb2f8aea69efe66db3f38e3a66b699fea5455b72e6b94484067b704a3]
	I1104 12:13:18.606951   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:13:18.611158   85500 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:13:18.611229   85500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:13:18.645448   85500 cri.go:89] found id: "9c3fa7870c72468d3a6283be42fb2724275d8c5937f6728c8bc97103c58b2ebd"
	I1104 12:13:18.645467   85500 cri.go:89] found id: ""
	I1104 12:13:18.645474   85500 logs.go:282] 1 containers: [9c3fa7870c72468d3a6283be42fb2724275d8c5937f6728c8bc97103c58b2ebd]
	I1104 12:13:18.645527   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:13:18.649014   85500 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:13:18.649062   85500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:13:18.693641   85500 cri.go:89] found id: ""
	I1104 12:13:18.693668   85500 logs.go:282] 0 containers: []
	W1104 12:13:18.693676   85500 logs.go:284] No container was found matching "kindnet"
	I1104 12:13:18.693681   85500 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1104 12:13:18.693728   85500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1104 12:13:18.733668   85500 cri.go:89] found id: "d4f6c824f92eecdb4ebc13e783585c25292f412e0f0a0d7b9fd0eab092fa8e41"
	I1104 12:13:18.733690   85500 cri.go:89] found id: "162e3330ff77f795d15e11b3abf1d3019490bff78148c8d6b470ef1507dcb67d"
	I1104 12:13:18.733695   85500 cri.go:89] found id: ""
	I1104 12:13:18.733702   85500 logs.go:282] 2 containers: [d4f6c824f92eecdb4ebc13e783585c25292f412e0f0a0d7b9fd0eab092fa8e41 162e3330ff77f795d15e11b3abf1d3019490bff78148c8d6b470ef1507dcb67d]
	I1104 12:13:18.733745   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:13:18.737419   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:13:18.740993   85500 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:13:18.741014   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:13:19.135942   85500 logs.go:123] Gathering logs for kubelet ...
	I1104 12:13:19.135980   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:13:19.206586   85500 logs.go:123] Gathering logs for dmesg ...
	I1104 12:13:19.206623   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:13:19.222135   85500 logs.go:123] Gathering logs for etcd [1390676564c7e4039f73139668fc6a3c321c186b612f1f1837e21788a5c0aa82] ...
	I1104 12:13:19.222164   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1390676564c7e4039f73139668fc6a3c321c186b612f1f1837e21788a5c0aa82"
	I1104 12:13:19.262746   85500 logs.go:123] Gathering logs for kube-scheduler [5546d06c4d51e3fa5699a7351286904174305e9c759397f8d2f640c6ce17d456] ...
	I1104 12:13:19.262774   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5546d06c4d51e3fa5699a7351286904174305e9c759397f8d2f640c6ce17d456"
	I1104 12:13:19.298259   85500 logs.go:123] Gathering logs for kube-proxy [33418a9cb2f8aea69efe66db3f38e3a66b699fea5455b72e6b94484067b704a3] ...
	I1104 12:13:19.298287   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33418a9cb2f8aea69efe66db3f38e3a66b699fea5455b72e6b94484067b704a3"
	I1104 12:13:19.338304   85500 logs.go:123] Gathering logs for storage-provisioner [d4f6c824f92eecdb4ebc13e783585c25292f412e0f0a0d7b9fd0eab092fa8e41] ...
	I1104 12:13:19.338332   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d4f6c824f92eecdb4ebc13e783585c25292f412e0f0a0d7b9fd0eab092fa8e41"
	I1104 12:13:19.375163   85500 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:13:19.375195   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1104 12:13:19.478206   85500 logs.go:123] Gathering logs for kube-apiserver [e74398c77b3ca0adb85872c2f97209b51f4c13968147f500a41590f72d758dea] ...
	I1104 12:13:19.478234   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e74398c77b3ca0adb85872c2f97209b51f4c13968147f500a41590f72d758dea"
	I1104 12:13:19.526261   85500 logs.go:123] Gathering logs for coredns [6dcd13443296397949a6314fd6007ac667c05b70bd43f14e2e9b54f3313440de] ...
	I1104 12:13:19.526291   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6dcd13443296397949a6314fd6007ac667c05b70bd43f14e2e9b54f3313440de"
	I1104 12:13:19.559922   85500 logs.go:123] Gathering logs for kube-controller-manager [9c3fa7870c72468d3a6283be42fb2724275d8c5937f6728c8bc97103c58b2ebd] ...
	I1104 12:13:19.559954   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9c3fa7870c72468d3a6283be42fb2724275d8c5937f6728c8bc97103c58b2ebd"
	I1104 12:13:19.609848   85500 logs.go:123] Gathering logs for storage-provisioner [162e3330ff77f795d15e11b3abf1d3019490bff78148c8d6b470ef1507dcb67d] ...
	I1104 12:13:19.609879   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 162e3330ff77f795d15e11b3abf1d3019490bff78148c8d6b470ef1507dcb67d"
	I1104 12:13:19.648804   85500 logs.go:123] Gathering logs for container status ...
	I1104 12:13:19.648829   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:13:22.210690   85500 system_pods.go:59] 8 kube-system pods found
	I1104 12:13:22.210718   85500 system_pods.go:61] "coredns-7c65d6cfc9-vv4kq" [f2518f86-9653-4e98-9193-9d2a76838117] Running
	I1104 12:13:22.210723   85500 system_pods.go:61] "etcd-no-preload-908370" [cc23ebc2-c49f-403c-8128-98bb08459592] Running
	I1104 12:13:22.210727   85500 system_pods.go:61] "kube-apiserver-no-preload-908370" [37532b3e-f683-4420-a5e4-280744f2bdf9] Running
	I1104 12:13:22.210730   85500 system_pods.go:61] "kube-controller-manager-no-preload-908370" [81d30255-758e-4661-bec2-c6aa6773923a] Running
	I1104 12:13:22.210733   85500 system_pods.go:61] "kube-proxy-w9hbz" [9d494697-ff2b-4600-9c11-b704de9be2a3] Running
	I1104 12:13:22.210737   85500 system_pods.go:61] "kube-scheduler-no-preload-908370" [9b0ff34e-1795-4f7c-b511-822a02c4af7b] Running
	I1104 12:13:22.210752   85500 system_pods.go:61] "metrics-server-6867b74b74-2lxlg" [bf328856-ad19-47b3-a40d-282cd4fdec4b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1104 12:13:22.210758   85500 system_pods.go:61] "storage-provisioner" [d11c9416-6236-4c81-9626-d5e040acea8a] Running
	I1104 12:13:22.210768   85500 system_pods.go:74] duration metric: took 3.791240483s to wait for pod list to return data ...
	I1104 12:13:22.210780   85500 default_sa.go:34] waiting for default service account to be created ...
	I1104 12:13:22.213688   85500 default_sa.go:45] found service account: "default"
	I1104 12:13:22.213709   85500 default_sa.go:55] duration metric: took 2.921691ms for default service account to be created ...
	I1104 12:13:22.213717   85500 system_pods.go:116] waiting for k8s-apps to be running ...
	I1104 12:13:22.219436   85500 system_pods.go:86] 8 kube-system pods found
	I1104 12:13:22.219466   85500 system_pods.go:89] "coredns-7c65d6cfc9-vv4kq" [f2518f86-9653-4e98-9193-9d2a76838117] Running
	I1104 12:13:22.219475   85500 system_pods.go:89] "etcd-no-preload-908370" [cc23ebc2-c49f-403c-8128-98bb08459592] Running
	I1104 12:13:22.219480   85500 system_pods.go:89] "kube-apiserver-no-preload-908370" [37532b3e-f683-4420-a5e4-280744f2bdf9] Running
	I1104 12:13:22.219489   85500 system_pods.go:89] "kube-controller-manager-no-preload-908370" [81d30255-758e-4661-bec2-c6aa6773923a] Running
	I1104 12:13:22.219495   85500 system_pods.go:89] "kube-proxy-w9hbz" [9d494697-ff2b-4600-9c11-b704de9be2a3] Running
	I1104 12:13:22.219501   85500 system_pods.go:89] "kube-scheduler-no-preload-908370" [9b0ff34e-1795-4f7c-b511-822a02c4af7b] Running
	I1104 12:13:22.219512   85500 system_pods.go:89] "metrics-server-6867b74b74-2lxlg" [bf328856-ad19-47b3-a40d-282cd4fdec4b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1104 12:13:22.219523   85500 system_pods.go:89] "storage-provisioner" [d11c9416-6236-4c81-9626-d5e040acea8a] Running
	I1104 12:13:22.219537   85500 system_pods.go:126] duration metric: took 5.813462ms to wait for k8s-apps to be running ...
	I1104 12:13:22.219551   85500 system_svc.go:44] waiting for kubelet service to be running ....
	I1104 12:13:22.219612   85500 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1104 12:13:22.232887   85500 system_svc.go:56] duration metric: took 13.328078ms WaitForService to wait for kubelet
	I1104 12:13:22.232918   85500 kubeadm.go:582] duration metric: took 4m24.785911082s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1104 12:13:22.232941   85500 node_conditions.go:102] verifying NodePressure condition ...
	I1104 12:13:22.235641   85500 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1104 12:13:22.235662   85500 node_conditions.go:123] node cpu capacity is 2
	I1104 12:13:22.235675   85500 node_conditions.go:105] duration metric: took 2.728232ms to run NodePressure ...
	I1104 12:13:22.235687   85500 start.go:241] waiting for startup goroutines ...
	I1104 12:13:22.235695   85500 start.go:246] waiting for cluster config update ...
	I1104 12:13:22.235707   85500 start.go:255] writing updated cluster config ...
	I1104 12:13:22.235962   85500 ssh_runner.go:195] Run: rm -f paused
	I1104 12:13:22.284583   85500 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1104 12:13:22.287448   85500 out.go:177] * Done! kubectl is now configured to use "no-preload-908370" cluster and "default" namespace by default
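	A minimal follow-up check, assuming the kubeconfig context created above is named after the profile (no-preload-908370) as the log states, would be:

	# Hypothetical verification of the cluster the log just reported as ready.
	kubectl --context no-preload-908370 get nodes
	kubectl --context no-preload-908370 -n kube-system get pods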
	I1104 12:14:25.090113   86402 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1104 12:14:25.090254   86402 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1104 12:14:25.091997   86402 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1104 12:14:25.092065   86402 kubeadm.go:310] [preflight] Running pre-flight checks
	I1104 12:14:25.092204   86402 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1104 12:14:25.092341   86402 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1104 12:14:25.092480   86402 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1104 12:14:25.092569   86402 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1104 12:14:25.094485   86402 out.go:235]   - Generating certificates and keys ...
	I1104 12:14:25.094582   86402 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1104 12:14:25.094664   86402 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1104 12:14:25.094799   86402 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1104 12:14:25.094891   86402 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1104 12:14:25.095003   86402 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1104 12:14:25.095086   86402 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1104 12:14:25.095186   86402 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1104 12:14:25.095240   86402 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1104 12:14:25.095319   86402 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1104 12:14:25.095403   86402 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1104 12:14:25.095481   86402 kubeadm.go:310] [certs] Using the existing "sa" key
	I1104 12:14:25.095554   86402 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1104 12:14:25.095614   86402 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1104 12:14:25.095676   86402 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1104 12:14:25.095752   86402 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1104 12:14:25.095828   86402 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1104 12:14:25.095970   86402 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1104 12:14:25.096102   86402 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1104 12:14:25.096169   86402 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1104 12:14:25.096262   86402 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1104 12:14:25.097799   86402 out.go:235]   - Booting up control plane ...
	I1104 12:14:25.097920   86402 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1104 12:14:25.098018   86402 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1104 12:14:25.098126   86402 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1104 12:14:25.098211   86402 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1104 12:14:25.098333   86402 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1104 12:14:25.098393   86402 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1104 12:14:25.098487   86402 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1104 12:14:25.098633   86402 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1104 12:14:25.098690   86402 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1104 12:14:25.098940   86402 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1104 12:14:25.099074   86402 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1104 12:14:25.099307   86402 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1104 12:14:25.099370   86402 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1104 12:14:25.099528   86402 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1104 12:14:25.099582   86402 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1104 12:14:25.099740   86402 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1104 12:14:25.099758   86402 kubeadm.go:310] 
	I1104 12:14:25.099815   86402 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1104 12:14:25.099880   86402 kubeadm.go:310] 		timed out waiting for the condition
	I1104 12:14:25.099889   86402 kubeadm.go:310] 
	I1104 12:14:25.099923   86402 kubeadm.go:310] 	This error is likely caused by:
	I1104 12:14:25.099952   86402 kubeadm.go:310] 		- The kubelet is not running
	I1104 12:14:25.100036   86402 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1104 12:14:25.100044   86402 kubeadm.go:310] 
	I1104 12:14:25.100197   86402 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1104 12:14:25.100237   86402 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1104 12:14:25.100267   86402 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1104 12:14:25.100273   86402 kubeadm.go:310] 
	I1104 12:14:25.100367   86402 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1104 12:14:25.100454   86402 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1104 12:14:25.100468   86402 kubeadm.go:310] 
	I1104 12:14:25.100600   86402 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1104 12:14:25.100718   86402 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1104 12:14:25.100821   86402 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1104 12:14:25.100903   86402 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1104 12:14:25.100970   86402 kubeadm.go:310] 
	W1104 12:14:25.101033   86402 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1104 12:14:25.101071   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1104 12:14:25.536184   86402 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1104 12:14:25.550453   86402 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1104 12:14:25.560308   86402 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1104 12:14:25.560327   86402 kubeadm.go:157] found existing configuration files:
	
	I1104 12:14:25.560368   86402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1104 12:14:25.569106   86402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1104 12:14:25.569189   86402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1104 12:14:25.578395   86402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1104 12:14:25.587402   86402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1104 12:14:25.587473   86402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1104 12:14:25.596827   86402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1104 12:14:25.605359   86402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1104 12:14:25.605420   86402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1104 12:14:25.614266   86402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1104 12:14:25.622522   86402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1104 12:14:25.622582   86402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1104 12:14:25.631876   86402 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1104 12:14:25.701080   86402 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1104 12:14:25.701168   86402 kubeadm.go:310] [preflight] Running pre-flight checks
	I1104 12:14:25.833997   86402 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1104 12:14:25.834138   86402 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1104 12:14:25.834258   86402 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1104 12:14:26.009165   86402 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1104 12:14:26.011976   86402 out.go:235]   - Generating certificates and keys ...
	I1104 12:14:26.012090   86402 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1104 12:14:26.012183   86402 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1104 12:14:26.012333   86402 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1104 12:14:26.012422   86402 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1104 12:14:26.012532   86402 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1104 12:14:26.012619   86402 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1104 12:14:26.012689   86402 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1104 12:14:26.012748   86402 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1104 12:14:26.012851   86402 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1104 12:14:26.012978   86402 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1104 12:14:26.013025   86402 kubeadm.go:310] [certs] Using the existing "sa" key
	I1104 12:14:26.013102   86402 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1104 12:14:26.399153   86402 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1104 12:14:26.470449   86402 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1104 12:14:27.078991   86402 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1104 12:14:27.181622   86402 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1104 12:14:27.205149   86402 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1104 12:14:27.205300   86402 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1104 12:14:27.205383   86402 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1104 12:14:27.355614   86402 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1104 12:14:27.357678   86402 out.go:235]   - Booting up control plane ...
	I1104 12:14:27.357840   86402 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1104 12:14:27.363942   86402 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1104 12:14:27.365004   86402 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1104 12:14:27.367237   86402 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1104 12:14:27.368087   86402 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1104 12:15:07.369845   86402 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1104 12:15:07.370222   86402 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1104 12:15:07.370464   86402 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1104 12:15:12.370802   86402 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1104 12:15:12.371041   86402 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1104 12:15:22.371417   86402 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1104 12:15:22.371584   86402 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1104 12:15:42.371725   86402 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1104 12:15:42.371932   86402 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1104 12:16:22.370871   86402 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1104 12:16:22.371150   86402 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1104 12:16:22.371181   86402 kubeadm.go:310] 
	I1104 12:16:22.371222   86402 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1104 12:16:22.371297   86402 kubeadm.go:310] 		timed out waiting for the condition
	I1104 12:16:22.371309   86402 kubeadm.go:310] 
	I1104 12:16:22.371371   86402 kubeadm.go:310] 	This error is likely caused by:
	I1104 12:16:22.371435   86402 kubeadm.go:310] 		- The kubelet is not running
	I1104 12:16:22.371576   86402 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1104 12:16:22.371588   86402 kubeadm.go:310] 
	I1104 12:16:22.371726   86402 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1104 12:16:22.371784   86402 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1104 12:16:22.371863   86402 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1104 12:16:22.371879   86402 kubeadm.go:310] 
	I1104 12:16:22.372004   86402 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1104 12:16:22.372155   86402 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1104 12:16:22.372172   86402 kubeadm.go:310] 
	I1104 12:16:22.372338   86402 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1104 12:16:22.372435   86402 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1104 12:16:22.372566   86402 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1104 12:16:22.372680   86402 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1104 12:16:22.372718   86402 kubeadm.go:310] 
	I1104 12:16:22.372948   86402 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1104 12:16:22.373110   86402 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1104 12:16:22.373289   86402 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1104 12:16:22.373328   86402 kubeadm.go:394] duration metric: took 8m2.53443537s to StartCluster
	I1104 12:16:22.373379   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:16:22.373431   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:16:22.410373   86402 cri.go:89] found id: ""
	I1104 12:16:22.410409   86402 logs.go:282] 0 containers: []
	W1104 12:16:22.410418   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:16:22.410424   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:16:22.410485   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:16:22.447939   86402 cri.go:89] found id: ""
	I1104 12:16:22.447963   86402 logs.go:282] 0 containers: []
	W1104 12:16:22.447971   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:16:22.447977   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:16:22.448021   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:16:22.479234   86402 cri.go:89] found id: ""
	I1104 12:16:22.479263   86402 logs.go:282] 0 containers: []
	W1104 12:16:22.479274   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:16:22.479280   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:16:22.479341   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:16:22.512783   86402 cri.go:89] found id: ""
	I1104 12:16:22.512814   86402 logs.go:282] 0 containers: []
	W1104 12:16:22.512825   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:16:22.512832   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:16:22.512895   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:16:22.549483   86402 cri.go:89] found id: ""
	I1104 12:16:22.549510   86402 logs.go:282] 0 containers: []
	W1104 12:16:22.549520   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:16:22.549527   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:16:22.549593   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:16:22.582339   86402 cri.go:89] found id: ""
	I1104 12:16:22.582382   86402 logs.go:282] 0 containers: []
	W1104 12:16:22.582393   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:16:22.582402   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:16:22.582471   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:16:22.613545   86402 cri.go:89] found id: ""
	I1104 12:16:22.613574   86402 logs.go:282] 0 containers: []
	W1104 12:16:22.613585   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:16:22.613593   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:16:22.613656   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:16:22.644488   86402 cri.go:89] found id: ""
	I1104 12:16:22.644517   86402 logs.go:282] 0 containers: []
	W1104 12:16:22.644528   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:16:22.644539   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:16:22.644551   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:16:22.681138   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:16:22.681169   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:16:22.734551   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:16:22.734586   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:16:22.750140   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:16:22.750178   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:16:22.837631   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:16:22.837657   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:16:22.837673   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W1104 12:16:22.961154   86402 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1104 12:16:22.961221   86402 out.go:270] * 
	W1104 12:16:22.961295   86402 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1104 12:16:22.961310   86402 out.go:270] * 
	W1104 12:16:22.962053   86402 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1104 12:16:22.965021   86402 out.go:201] 
	W1104 12:16:22.966262   86402 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1104 12:16:22.966326   86402 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1104 12:16:22.966377   86402 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1104 12:16:22.967953   86402 out.go:201] 
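	The exit above points at kubelet health; a sketch of the follow-up it suggests, using only commands already quoted in this log (the cgroup-driver override is the suggested remedy from the log, not something verified here):

	# Inspect kubelet and container state on the node, per the kubeadm hints above.
	systemctl status kubelet
	journalctl -xeu kubelet
	crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# Collect full logs for a bug report, then retry with the suggested kubelet flag.
	minikube logs --file=logs.txt
	minikube start --extra-config=kubelet.cgroup-driver=systemd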
	
	
	==> CRI-O <==
	Nov 04 12:21:15 embed-certs-325116 crio[700]: time="2024-11-04 12:21:15.593801825Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3d9f7556-4e29-4a4d-8220-7280999a6d3a name=/runtime.v1.RuntimeService/Version
	Nov 04 12:21:15 embed-certs-325116 crio[700]: time="2024-11-04 12:21:15.594680913Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8649750f-5de6-42cc-8fce-bff2cf7a7856 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 04 12:21:15 embed-certs-325116 crio[700]: time="2024-11-04 12:21:15.595200449Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730722875595120396,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8649750f-5de6-42cc-8fce-bff2cf7a7856 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 04 12:21:15 embed-certs-325116 crio[700]: time="2024-11-04 12:21:15.595694052Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1ad76914-b2db-48d5-9b09-b7e4a1e05255 name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 12:21:15 embed-certs-325116 crio[700]: time="2024-11-04 12:21:15.595764292Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1ad76914-b2db-48d5-9b09-b7e4a1e05255 name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 12:21:15 embed-certs-325116 crio[700]: time="2024-11-04 12:21:15.595965072Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:95a9eb50a127a39e6276205bf440bcbc662ba25b24e029dc1a667f48f8481dde,PodSandboxId:336518a304965b369441b5169d9fa9f4497228136703b26edd53c087aee1b3ad,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730722099106749149,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0dabcf5a-028b-4ab6-8af4-be25abaeb9b5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17930a8c9f8feb57100ebdda160aeff0994c0ea14c95c6a20b8274d3fb3353c7,PodSandboxId:253c7105adc503a8f3b09c0483c61da8474fc59d515f9eda1bababbc055c7042,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1730722078227013809,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: faedbe05-e667-443f-9df2-18bb9bf19f99,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1f0c1ed5e8911dccb619576be93e1623936a2b05ccc1e80d52ffe8ba1292d27,PodSandboxId:586d31f23777792aee21d5492feb154dfd04c1e307d27b64490b656e37921d93,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730722075872483907,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-mf8xg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0162005-7971-4161-9575-9f36c13d54f2,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7558f4e108716f1710271ea75be748ffd928b609788942a3658f0d2237ebcc7,PodSandboxId:336518a304965b369441b5169d9fa9f4497228136703b26edd53c087aee1b3ad,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1730722068303069673,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
0dabcf5a-028b-4ab6-8af4-be25abaeb9b5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:512d8563ff2ef5d1f8021fa8815d00d2705c8df3b079a23ee6fc909af1c980f0,PodSandboxId:aca6b94caae07b5a74ad36f9c57730f991bec959a1ccd9c1c56e745dca69115a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730722068259990465,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-phzgx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ea64f2c-7568-486d-9941-f89ed4221
f35,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b575c045ea6e47b4e5a1384b6b473570efefa6f645c9c75d1eedf037230bc06,PodSandboxId:68350a02deb9f96554682e48c2d4afb346b74aa306b27cc9bc532880c812da53,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730722063732871849,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-325116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1dc053128fa3b82a73e126c6c1d3a428,},Annotations:map[string]string{io.kub
ernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5a0cb5f09f9920c57a05bcc9c16cdcab6806c42458b625d4d075a9d1bc3f80f,PodSandboxId:0b2c49eb7440715520d33371c0a313d168f992bed024b5b71d1cc12b2b7b61a8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730722063720504983,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-325116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05da92dcb57907443316e8d42e4f92f6,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e7999c6e5a24f7d8706b6722e1fe66996ecec4550d4a901729cac1f3f108f28,PodSandboxId:9ae27a866cd677f921ca52e40a7502b004e363809577197176b61038e3645206,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730722063722106945,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-325116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2734426f909645ac2df56eef2ee66f9,},Annotations:map[string]string{io.kubernetes.container.hash:
c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5751adaa2cf784f7619b281e11898b312ecc8186b36029e9e8a3b8e484cd703b,PodSandboxId:61b4c93a5104c14282a46db42c284d53e0810b86bf3875378a3bef79d4690984,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730722063702577217,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-325116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e356f340fd1b91ab3c1748076b1b8c75,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1ad76914-b2db-48d5-9b09-b7e4a1e05255 name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 12:21:15 embed-certs-325116 crio[700]: time="2024-11-04 12:21:15.601195657Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=a4c411c4-0b7a-40f7-aec6-9fd328ebbb31 name=/runtime.v1.RuntimeService/ListPodSandbox
	Nov 04 12:21:15 embed-certs-325116 crio[700]: time="2024-11-04 12:21:15.601417706Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:253c7105adc503a8f3b09c0483c61da8474fc59d515f9eda1bababbc055c7042,Metadata:&PodSandboxMetadata{Name:busybox,Uid:faedbe05-e667-443f-9df2-18bb9bf19f99,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1730722075694828195,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: faedbe05-e667-443f-9df2-18bb9bf19f99,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-11-04T12:07:47.818698038Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:586d31f23777792aee21d5492feb154dfd04c1e307d27b64490b656e37921d93,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-mf8xg,Uid:c0162005-7971-4161-9575-9f36c13d54f2,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1730722075597183
268,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-mf8xg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0162005-7971-4161-9575-9f36c13d54f2,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-11-04T12:07:47.818743269Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2f19fe0600c083a72986d7a4012e850ad00dc9dbd8a51efa5f384b6cc7382869,Metadata:&PodSandboxMetadata{Name:metrics-server-6867b74b74-knfd4,Uid:5b3ef856-5b69-44b1-ae29-4a58bf235e41,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1730722073897454248,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-6867b74b74-knfd4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b3ef856-5b69-44b1-ae29-4a58bf235e41,k8s-app: metrics-server,pod-template-hash: 6867b74b74,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-11-04T12:07:47.
818693207Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:aca6b94caae07b5a74ad36f9c57730f991bec959a1ccd9c1c56e745dca69115a,Metadata:&PodSandboxMetadata{Name:kube-proxy-phzgx,Uid:4ea64f2c-7568-486d-9941-f89ed4221f35,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1730722068131535157,Labels:map[string]string{controller-revision-hash: 77987969cc,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-phzgx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ea64f2c-7568-486d-9941-f89ed4221f35,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-11-04T12:07:47.818745955Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:336518a304965b369441b5169d9fa9f4497228136703b26edd53c087aee1b3ad,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:0dabcf5a-028b-4ab6-8af4-be25abaeb9b5,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1730722068127073847,Labels:map[string
]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0dabcf5a-028b-4ab6-8af4-be25abaeb9b5,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.i
o/config.seen: 2024-11-04T12:07:47.818748840Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:68350a02deb9f96554682e48c2d4afb346b74aa306b27cc9bc532880c812da53,Metadata:&PodSandboxMetadata{Name:etcd-embed-certs-325116,Uid:1dc053128fa3b82a73e126c6c1d3a428,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1730722062328743897,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-embed-certs-325116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1dc053128fa3b82a73e126c6c1d3a428,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.47:2379,kubernetes.io/config.hash: 1dc053128fa3b82a73e126c6c1d3a428,kubernetes.io/config.seen: 2024-11-04T12:07:41.869744617Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:0b2c49eb7440715520d33371c0a313d168f992bed024b5b71d1cc12b2b7b61a8,Metadata:&PodSandboxMetadata{Name:kube-scheduler-embed-certs-3251
16,Uid:05da92dcb57907443316e8d42e4f92f6,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1730722062323934763,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-embed-certs-325116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05da92dcb57907443316e8d42e4f92f6,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 05da92dcb57907443316e8d42e4f92f6,kubernetes.io/config.seen: 2024-11-04T12:07:41.819255692Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:9ae27a866cd677f921ca52e40a7502b004e363809577197176b61038e3645206,Metadata:&PodSandboxMetadata{Name:kube-apiserver-embed-certs-325116,Uid:c2734426f909645ac2df56eef2ee66f9,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1730722062322300320,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-embed-certs-325116,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: c2734426f909645ac2df56eef2ee66f9,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.47:8443,kubernetes.io/config.hash: c2734426f909645ac2df56eef2ee66f9,kubernetes.io/config.seen: 2024-11-04T12:07:41.819249651Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:61b4c93a5104c14282a46db42c284d53e0810b86bf3875378a3bef79d4690984,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-embed-certs-325116,Uid:e356f340fd1b91ab3c1748076b1b8c75,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1730722062316826738,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-embed-certs-325116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e356f340fd1b91ab3c1748076b1b8c75,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: e356f340fd1b91ab3c1748076b1b
8c75,kubernetes.io/config.seen: 2024-11-04T12:07:41.819254208Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=a4c411c4-0b7a-40f7-aec6-9fd328ebbb31 name=/runtime.v1.RuntimeService/ListPodSandbox
	Nov 04 12:21:15 embed-certs-325116 crio[700]: time="2024-11-04 12:21:15.601985251Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=66817f84-7681-495a-86d3-f41a523fc841 name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 12:21:15 embed-certs-325116 crio[700]: time="2024-11-04 12:21:15.602048057Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=66817f84-7681-495a-86d3-f41a523fc841 name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 12:21:15 embed-certs-325116 crio[700]: time="2024-11-04 12:21:15.602269357Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:95a9eb50a127a39e6276205bf440bcbc662ba25b24e029dc1a667f48f8481dde,PodSandboxId:336518a304965b369441b5169d9fa9f4497228136703b26edd53c087aee1b3ad,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730722099106749149,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0dabcf5a-028b-4ab6-8af4-be25abaeb9b5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17930a8c9f8feb57100ebdda160aeff0994c0ea14c95c6a20b8274d3fb3353c7,PodSandboxId:253c7105adc503a8f3b09c0483c61da8474fc59d515f9eda1bababbc055c7042,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1730722078227013809,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: faedbe05-e667-443f-9df2-18bb9bf19f99,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1f0c1ed5e8911dccb619576be93e1623936a2b05ccc1e80d52ffe8ba1292d27,PodSandboxId:586d31f23777792aee21d5492feb154dfd04c1e307d27b64490b656e37921d93,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730722075872483907,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-mf8xg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0162005-7971-4161-9575-9f36c13d54f2,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7558f4e108716f1710271ea75be748ffd928b609788942a3658f0d2237ebcc7,PodSandboxId:336518a304965b369441b5169d9fa9f4497228136703b26edd53c087aee1b3ad,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1730722068303069673,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
0dabcf5a-028b-4ab6-8af4-be25abaeb9b5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:512d8563ff2ef5d1f8021fa8815d00d2705c8df3b079a23ee6fc909af1c980f0,PodSandboxId:aca6b94caae07b5a74ad36f9c57730f991bec959a1ccd9c1c56e745dca69115a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730722068259990465,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-phzgx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ea64f2c-7568-486d-9941-f89ed4221
f35,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b575c045ea6e47b4e5a1384b6b473570efefa6f645c9c75d1eedf037230bc06,PodSandboxId:68350a02deb9f96554682e48c2d4afb346b74aa306b27cc9bc532880c812da53,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730722063732871849,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-325116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1dc053128fa3b82a73e126c6c1d3a428,},Annotations:map[string]string{io.kub
ernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5a0cb5f09f9920c57a05bcc9c16cdcab6806c42458b625d4d075a9d1bc3f80f,PodSandboxId:0b2c49eb7440715520d33371c0a313d168f992bed024b5b71d1cc12b2b7b61a8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730722063720504983,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-325116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05da92dcb57907443316e8d42e4f92f6,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e7999c6e5a24f7d8706b6722e1fe66996ecec4550d4a901729cac1f3f108f28,PodSandboxId:9ae27a866cd677f921ca52e40a7502b004e363809577197176b61038e3645206,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730722063722106945,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-325116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2734426f909645ac2df56eef2ee66f9,},Annotations:map[string]string{io.kubernetes.container.hash:
c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5751adaa2cf784f7619b281e11898b312ecc8186b36029e9e8a3b8e484cd703b,PodSandboxId:61b4c93a5104c14282a46db42c284d53e0810b86bf3875378a3bef79d4690984,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730722063702577217,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-325116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e356f340fd1b91ab3c1748076b1b8c75,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=66817f84-7681-495a-86d3-f41a523fc841 name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 12:21:15 embed-certs-325116 crio[700]: time="2024-11-04 12:21:15.631799161Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=968e67d1-3a96-4360-a4a7-49273ea23996 name=/runtime.v1.RuntimeService/Version
	Nov 04 12:21:15 embed-certs-325116 crio[700]: time="2024-11-04 12:21:15.631884555Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=968e67d1-3a96-4360-a4a7-49273ea23996 name=/runtime.v1.RuntimeService/Version
	Nov 04 12:21:15 embed-certs-325116 crio[700]: time="2024-11-04 12:21:15.633308240Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3a5598c9-cb8d-4c49-979c-cfac779eb9f4 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 04 12:21:15 embed-certs-325116 crio[700]: time="2024-11-04 12:21:15.633684706Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730722875633664501,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3a5598c9-cb8d-4c49-979c-cfac779eb9f4 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 04 12:21:15 embed-certs-325116 crio[700]: time="2024-11-04 12:21:15.634220786Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c37ad7bb-f789-4dec-8196-651fe90d8653 name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 12:21:15 embed-certs-325116 crio[700]: time="2024-11-04 12:21:15.634284214Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c37ad7bb-f789-4dec-8196-651fe90d8653 name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 12:21:15 embed-certs-325116 crio[700]: time="2024-11-04 12:21:15.634480617Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:95a9eb50a127a39e6276205bf440bcbc662ba25b24e029dc1a667f48f8481dde,PodSandboxId:336518a304965b369441b5169d9fa9f4497228136703b26edd53c087aee1b3ad,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730722099106749149,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0dabcf5a-028b-4ab6-8af4-be25abaeb9b5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17930a8c9f8feb57100ebdda160aeff0994c0ea14c95c6a20b8274d3fb3353c7,PodSandboxId:253c7105adc503a8f3b09c0483c61da8474fc59d515f9eda1bababbc055c7042,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1730722078227013809,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: faedbe05-e667-443f-9df2-18bb9bf19f99,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1f0c1ed5e8911dccb619576be93e1623936a2b05ccc1e80d52ffe8ba1292d27,PodSandboxId:586d31f23777792aee21d5492feb154dfd04c1e307d27b64490b656e37921d93,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730722075872483907,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-mf8xg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0162005-7971-4161-9575-9f36c13d54f2,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7558f4e108716f1710271ea75be748ffd928b609788942a3658f0d2237ebcc7,PodSandboxId:336518a304965b369441b5169d9fa9f4497228136703b26edd53c087aee1b3ad,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1730722068303069673,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
0dabcf5a-028b-4ab6-8af4-be25abaeb9b5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:512d8563ff2ef5d1f8021fa8815d00d2705c8df3b079a23ee6fc909af1c980f0,PodSandboxId:aca6b94caae07b5a74ad36f9c57730f991bec959a1ccd9c1c56e745dca69115a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730722068259990465,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-phzgx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ea64f2c-7568-486d-9941-f89ed4221
f35,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b575c045ea6e47b4e5a1384b6b473570efefa6f645c9c75d1eedf037230bc06,PodSandboxId:68350a02deb9f96554682e48c2d4afb346b74aa306b27cc9bc532880c812da53,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730722063732871849,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-325116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1dc053128fa3b82a73e126c6c1d3a428,},Annotations:map[string]string{io.kub
ernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5a0cb5f09f9920c57a05bcc9c16cdcab6806c42458b625d4d075a9d1bc3f80f,PodSandboxId:0b2c49eb7440715520d33371c0a313d168f992bed024b5b71d1cc12b2b7b61a8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730722063720504983,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-325116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05da92dcb57907443316e8d42e4f92f6,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e7999c6e5a24f7d8706b6722e1fe66996ecec4550d4a901729cac1f3f108f28,PodSandboxId:9ae27a866cd677f921ca52e40a7502b004e363809577197176b61038e3645206,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730722063722106945,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-325116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2734426f909645ac2df56eef2ee66f9,},Annotations:map[string]string{io.kubernetes.container.hash:
c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5751adaa2cf784f7619b281e11898b312ecc8186b36029e9e8a3b8e484cd703b,PodSandboxId:61b4c93a5104c14282a46db42c284d53e0810b86bf3875378a3bef79d4690984,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730722063702577217,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-325116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e356f340fd1b91ab3c1748076b1b8c75,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c37ad7bb-f789-4dec-8196-651fe90d8653 name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 12:21:15 embed-certs-325116 crio[700]: time="2024-11-04 12:21:15.667009489Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3dfb7cae-74d1-457e-bd77-424862e6e60c name=/runtime.v1.RuntimeService/Version
	Nov 04 12:21:15 embed-certs-325116 crio[700]: time="2024-11-04 12:21:15.667107183Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3dfb7cae-74d1-457e-bd77-424862e6e60c name=/runtime.v1.RuntimeService/Version
	Nov 04 12:21:15 embed-certs-325116 crio[700]: time="2024-11-04 12:21:15.668513450Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f82567eb-6dcb-4558-8a29-80cb0c61bff0 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 04 12:21:15 embed-certs-325116 crio[700]: time="2024-11-04 12:21:15.669066992Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730722875669032407,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f82567eb-6dcb-4558-8a29-80cb0c61bff0 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 04 12:21:15 embed-certs-325116 crio[700]: time="2024-11-04 12:21:15.669703359Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=777bdf2c-4b0b-4456-aa6e-3eb90ea42b6a name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 12:21:15 embed-certs-325116 crio[700]: time="2024-11-04 12:21:15.669797266Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=777bdf2c-4b0b-4456-aa6e-3eb90ea42b6a name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 12:21:15 embed-certs-325116 crio[700]: time="2024-11-04 12:21:15.670019879Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:95a9eb50a127a39e6276205bf440bcbc662ba25b24e029dc1a667f48f8481dde,PodSandboxId:336518a304965b369441b5169d9fa9f4497228136703b26edd53c087aee1b3ad,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730722099106749149,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0dabcf5a-028b-4ab6-8af4-be25abaeb9b5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17930a8c9f8feb57100ebdda160aeff0994c0ea14c95c6a20b8274d3fb3353c7,PodSandboxId:253c7105adc503a8f3b09c0483c61da8474fc59d515f9eda1bababbc055c7042,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1730722078227013809,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: faedbe05-e667-443f-9df2-18bb9bf19f99,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1f0c1ed5e8911dccb619576be93e1623936a2b05ccc1e80d52ffe8ba1292d27,PodSandboxId:586d31f23777792aee21d5492feb154dfd04c1e307d27b64490b656e37921d93,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730722075872483907,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-mf8xg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0162005-7971-4161-9575-9f36c13d54f2,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7558f4e108716f1710271ea75be748ffd928b609788942a3658f0d2237ebcc7,PodSandboxId:336518a304965b369441b5169d9fa9f4497228136703b26edd53c087aee1b3ad,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1730722068303069673,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
0dabcf5a-028b-4ab6-8af4-be25abaeb9b5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:512d8563ff2ef5d1f8021fa8815d00d2705c8df3b079a23ee6fc909af1c980f0,PodSandboxId:aca6b94caae07b5a74ad36f9c57730f991bec959a1ccd9c1c56e745dca69115a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730722068259990465,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-phzgx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ea64f2c-7568-486d-9941-f89ed4221
f35,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b575c045ea6e47b4e5a1384b6b473570efefa6f645c9c75d1eedf037230bc06,PodSandboxId:68350a02deb9f96554682e48c2d4afb346b74aa306b27cc9bc532880c812da53,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730722063732871849,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-325116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1dc053128fa3b82a73e126c6c1d3a428,},Annotations:map[string]string{io.kub
ernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5a0cb5f09f9920c57a05bcc9c16cdcab6806c42458b625d4d075a9d1bc3f80f,PodSandboxId:0b2c49eb7440715520d33371c0a313d168f992bed024b5b71d1cc12b2b7b61a8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730722063720504983,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-325116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05da92dcb57907443316e8d42e4f92f6,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e7999c6e5a24f7d8706b6722e1fe66996ecec4550d4a901729cac1f3f108f28,PodSandboxId:9ae27a866cd677f921ca52e40a7502b004e363809577197176b61038e3645206,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730722063722106945,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-325116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2734426f909645ac2df56eef2ee66f9,},Annotations:map[string]string{io.kubernetes.container.hash:
c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5751adaa2cf784f7619b281e11898b312ecc8186b36029e9e8a3b8e484cd703b,PodSandboxId:61b4c93a5104c14282a46db42c284d53e0810b86bf3875378a3bef79d4690984,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730722063702577217,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-325116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e356f340fd1b91ab3c1748076b1b8c75,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=777bdf2c-4b0b-4456-aa6e-3eb90ea42b6a name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	95a9eb50a127a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 minutes ago      Running             storage-provisioner       2                   336518a304965       storage-provisioner
	17930a8c9f8fe       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   253c7105adc50       busybox
	d1f0c1ed5e891       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      13 minutes ago      Running             coredns                   1                   586d31f237777       coredns-7c65d6cfc9-mf8xg
	c7558f4e10871       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       1                   336518a304965       storage-provisioner
	512d8563ff2ef       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                      13 minutes ago      Running             kube-proxy                1                   aca6b94caae07       kube-proxy-phzgx
	5b575c045ea6e       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      13 minutes ago      Running             etcd                      1                   68350a02deb9f       etcd-embed-certs-325116
	6e7999c6e5a24       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                      13 minutes ago      Running             kube-apiserver            1                   9ae27a866cd67       kube-apiserver-embed-certs-325116
	a5a0cb5f09f99       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                      13 minutes ago      Running             kube-scheduler            1                   0b2c49eb74407       kube-scheduler-embed-certs-325116
	5751adaa2cf78       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                      13 minutes ago      Running             kube-controller-manager   1                   61b4c93a5104c       kube-controller-manager-embed-certs-325116
	
	
	==> coredns [d1f0c1ed5e8911dccb619576be93e1623936a2b05ccc1e80d52ffe8ba1292d27] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:40946 - 37698 "HINFO IN 7585893187643998144.4477262375756637392. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.020423582s
	
	
	==> describe nodes <==
	Name:               embed-certs-325116
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-325116
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c03dd974d73f8853a1a57928c124797a5ae24dc4
	                    minikube.k8s.io/name=embed-certs-325116
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_11_04T11_59_54_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 04 Nov 2024 11:59:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-325116
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 04 Nov 2024 12:21:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 04 Nov 2024 12:18:31 +0000   Mon, 04 Nov 2024 11:59:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 04 Nov 2024 12:18:31 +0000   Mon, 04 Nov 2024 11:59:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 04 Nov 2024 12:18:31 +0000   Mon, 04 Nov 2024 11:59:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 04 Nov 2024 12:18:31 +0000   Mon, 04 Nov 2024 12:07:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.47
	  Hostname:    embed-certs-325116
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 14a53ffef4d24b9fac22919b5bf74740
	  System UUID:                14a53ffe-f4d2-4b9f-ac22-919b5bf74740
	  Boot ID:                    ce287235-6473-48ce-bd28-1f33727daed3
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 coredns-7c65d6cfc9-mf8xg                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 etcd-embed-certs-325116                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         21m
	  kube-system                 kube-apiserver-embed-certs-325116             250m (12%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-controller-manager-embed-certs-325116    200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-proxy-phzgx                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-embed-certs-325116             100m (5%)     0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 metrics-server-6867b74b74-knfd4               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         20m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 21m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeHasSufficientPID     21m                kubelet          Node embed-certs-325116 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  21m                kubelet          Node embed-certs-325116 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m                kubelet          Node embed-certs-325116 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  NodeReady                21m                kubelet          Node embed-certs-325116 status is now: NodeReady
	  Normal  RegisteredNode           21m                node-controller  Node embed-certs-325116 event: Registered Node embed-certs-325116 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node embed-certs-325116 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node embed-certs-325116 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node embed-certs-325116 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node embed-certs-325116 event: Registered Node embed-certs-325116 in Controller
	
	
	==> dmesg <==
	[Nov 4 12:07] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.047803] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.036594] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.786556] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.902099] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.528907] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.035515] systemd-fstab-generator[621]: Ignoring "noauto" option for root device
	[  +0.054885] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.053207] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +0.187720] systemd-fstab-generator[648]: Ignoring "noauto" option for root device
	[  +0.131084] systemd-fstab-generator[660]: Ignoring "noauto" option for root device
	[  +0.271873] systemd-fstab-generator[690]: Ignoring "noauto" option for root device
	[  +3.915390] systemd-fstab-generator[781]: Ignoring "noauto" option for root device
	[  +1.600210] systemd-fstab-generator[899]: Ignoring "noauto" option for root device
	[  +0.059958] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.486181] kauditd_printk_skb: 69 callbacks suppressed
	[  +1.986083] systemd-fstab-generator[1529]: Ignoring "noauto" option for root device
	[  +3.751816] kauditd_printk_skb: 64 callbacks suppressed
	[Nov 4 12:08] kauditd_printk_skb: 49 callbacks suppressed
	
	
	==> etcd [5b575c045ea6e47b4e5a1384b6b473570efefa6f645c9c75d1eedf037230bc06] <==
	{"level":"warn","ts":"2024-11-04T12:08:20.284473Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-11-04T12:08:19.268987Z","time spent":"1.015450337s","remote":"127.0.0.1:50250","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4106,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/storage-provisioner\" mod_revision:540 > success:<request_put:<key:\"/registry/pods/kube-system/storage-provisioner\" value_size:4052 >> failure:<request_range:<key:\"/registry/pods/kube-system/storage-provisioner\" > >"}
	{"level":"info","ts":"2024-11-04T12:08:20.284708Z","caller":"traceutil/trace.go:171","msg":"trace[225642940] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:590; }","duration":"893.916381ms","start":"2024-11-04T12:08:19.390786Z","end":"2024-11-04T12:08:20.284702Z","steps":["trace[225642940] 'agreement among raft nodes before linearized reading'  (duration: 893.835384ms)"],"step_count":1}
	{"level":"warn","ts":"2024-11-04T12:08:20.284896Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-11-04T12:08:19.390740Z","time spent":"894.144851ms","remote":"127.0.0.1:50078","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2024-11-04T12:08:20.284986Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"588.383471ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1118"}
	{"level":"info","ts":"2024-11-04T12:08:20.285027Z","caller":"traceutil/trace.go:171","msg":"trace[1743712310] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:590; }","duration":"588.425986ms","start":"2024-11-04T12:08:19.696594Z","end":"2024-11-04T12:08:20.285020Z","steps":["trace[1743712310] 'agreement among raft nodes before linearized reading'  (duration: 588.363276ms)"],"step_count":1}
	{"level":"warn","ts":"2024-11-04T12:08:20.285063Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-11-04T12:08:19.696553Z","time spent":"588.504074ms","remote":"127.0.0.1:50236","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":1,"response size":1142,"request content":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" "}
	{"level":"warn","ts":"2024-11-04T12:08:20.285245Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"625.294499ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/kube-system/storage-provisioner.1804c28d2aa51540\" ","response":"range_response_count:1 size:766"}
	{"level":"info","ts":"2024-11-04T12:08:20.285284Z","caller":"traceutil/trace.go:171","msg":"trace[567260375] range","detail":"{range_begin:/registry/events/kube-system/storage-provisioner.1804c28d2aa51540; range_end:; response_count:1; response_revision:590; }","duration":"625.375971ms","start":"2024-11-04T12:08:19.659901Z","end":"2024-11-04T12:08:20.285277Z","steps":["trace[567260375] 'agreement among raft nodes before linearized reading'  (duration: 625.211838ms)"],"step_count":1}
	{"level":"warn","ts":"2024-11-04T12:08:20.285304Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-11-04T12:08:19.659887Z","time spent":"625.411945ms","remote":"127.0.0.1:50152","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":1,"response size":790,"request content":"key:\"/registry/events/kube-system/storage-provisioner.1804c28d2aa51540\" "}
	{"level":"warn","ts":"2024-11-04T12:08:20.681378Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"266.414124ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13046526760410608442 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/storage-provisioner.1804c28d2aa51540\" mod_revision:505 > success:<request_put:<key:\"/registry/events/kube-system/storage-provisioner.1804c28d2aa51540\" value_size:668 lease:3823154723555831928 >> failure:<request_range:<key:\"/registry/events/kube-system/storage-provisioner.1804c28d2aa51540\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-11-04T12:08:20.681536Z","caller":"traceutil/trace.go:171","msg":"trace[1578497151] linearizableReadLoop","detail":"{readStateIndex:630; appliedIndex:629; }","duration":"390.360247ms","start":"2024-11-04T12:08:20.291162Z","end":"2024-11-04T12:08:20.681522Z","steps":["trace[1578497151] 'read index received'  (duration: 123.674841ms)","trace[1578497151] 'applied index is now lower than readState.Index'  (duration: 266.684382ms)"],"step_count":2}
	{"level":"info","ts":"2024-11-04T12:08:20.681569Z","caller":"traceutil/trace.go:171","msg":"trace[610576868] transaction","detail":"{read_only:false; response_revision:591; number_of_response:1; }","duration":"391.236232ms","start":"2024-11-04T12:08:20.290320Z","end":"2024-11-04T12:08:20.681556Z","steps":["trace[610576868] 'process raft request'  (duration: 124.592168ms)","trace[610576868] 'compare'  (duration: 266.270817ms)"],"step_count":2}
	{"level":"warn","ts":"2024-11-04T12:08:20.681666Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-11-04T12:08:20.290299Z","time spent":"391.320764ms","remote":"127.0.0.1:50152","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":751,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/events/kube-system/storage-provisioner.1804c28d2aa51540\" mod_revision:505 > success:<request_put:<key:\"/registry/events/kube-system/storage-provisioner.1804c28d2aa51540\" value_size:668 lease:3823154723555831928 >> failure:<request_range:<key:\"/registry/events/kube-system/storage-provisioner.1804c28d2aa51540\" > >"}
	{"level":"warn","ts":"2024-11-04T12:08:20.681740Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"390.572432ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-11-04T12:08:20.681777Z","caller":"traceutil/trace.go:171","msg":"trace[1183697477] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:591; }","duration":"390.61325ms","start":"2024-11-04T12:08:20.291158Z","end":"2024-11-04T12:08:20.681771Z","steps":["trace[1183697477] 'agreement among raft nodes before linearized reading'  (duration: 390.479384ms)"],"step_count":1}
	{"level":"warn","ts":"2024-11-04T12:08:20.681846Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-11-04T12:08:20.291093Z","time spent":"390.746632ms","remote":"127.0.0.1:50084","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2024-11-04T12:08:20.681957Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"390.743758ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/embed-certs-325116\" ","response":"range_response_count:1 size:5752"}
	{"level":"info","ts":"2024-11-04T12:08:20.682943Z","caller":"traceutil/trace.go:171","msg":"trace[1992447961] range","detail":"{range_begin:/registry/minions/embed-certs-325116; range_end:; response_count:1; response_revision:591; }","duration":"391.725677ms","start":"2024-11-04T12:08:20.291207Z","end":"2024-11-04T12:08:20.682932Z","steps":["trace[1992447961] 'agreement among raft nodes before linearized reading'  (duration: 390.692407ms)"],"step_count":1}
	{"level":"warn","ts":"2024-11-04T12:08:20.683881Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-11-04T12:08:20.291186Z","time spent":"392.68281ms","remote":"127.0.0.1:50240","response type":"/etcdserverpb.KV/Range","request count":0,"request size":38,"response count":1,"response size":5776,"request content":"key:\"/registry/minions/embed-certs-325116\" "}
	{"level":"warn","ts":"2024-11-04T12:08:20.683031Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"391.789202ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-6867b74b74-knfd4\" ","response":"range_response_count:1 size:4340"}
	{"level":"info","ts":"2024-11-04T12:08:20.684311Z","caller":"traceutil/trace.go:171","msg":"trace[1716411033] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-6867b74b74-knfd4; range_end:; response_count:1; response_revision:591; }","duration":"393.070573ms","start":"2024-11-04T12:08:20.291231Z","end":"2024-11-04T12:08:20.684301Z","steps":["trace[1716411033] 'agreement among raft nodes before linearized reading'  (duration: 391.21946ms)"],"step_count":1}
	{"level":"warn","ts":"2024-11-04T12:08:20.684839Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-11-04T12:08:20.291215Z","time spent":"393.611864ms","remote":"127.0.0.1:50250","response type":"/etcdserverpb.KV/Range","request count":0,"request size":60,"response count":1,"response size":4364,"request content":"key:\"/registry/pods/kube-system/metrics-server-6867b74b74-knfd4\" "}
	{"level":"info","ts":"2024-11-04T12:17:45.942110Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":820}
	{"level":"info","ts":"2024-11-04T12:17:45.951356Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":820,"took":"8.984362ms","hash":2746580650,"current-db-size-bytes":2609152,"current-db-size":"2.6 MB","current-db-size-in-use-bytes":2609152,"current-db-size-in-use":"2.6 MB"}
	{"level":"info","ts":"2024-11-04T12:17:45.951414Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2746580650,"revision":820,"compact-revision":-1}
	
	
	==> kernel <==
	 12:21:15 up 13 min,  0 users,  load average: 0.03, 0.10, 0.10
	Linux embed-certs-325116 5.10.207 #1 SMP Wed Oct 30 13:38:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [6e7999c6e5a24f7d8706b6722e1fe66996ecec4550d4a901729cac1f3f108f28] <==
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1104 12:17:48.119956       1 handler_proxy.go:99] no RequestInfo found in the context
	E1104 12:17:48.120036       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1104 12:17:48.121014       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1104 12:17:48.121085       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1104 12:18:48.121827       1 handler_proxy.go:99] no RequestInfo found in the context
	W1104 12:18:48.121825       1 handler_proxy.go:99] no RequestInfo found in the context
	E1104 12:18:48.122003       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E1104 12:18:48.122051       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1104 12:18:48.123211       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1104 12:18:48.123243       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1104 12:20:48.124214       1 handler_proxy.go:99] no RequestInfo found in the context
	E1104 12:20:48.124287       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W1104 12:20:48.124422       1 handler_proxy.go:99] no RequestInfo found in the context
	E1104 12:20:48.124542       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1104 12:20:48.125452       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1104 12:20:48.125679       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [5751adaa2cf784f7619b281e11898b312ecc8186b36029e9e8a3b8e484cd703b] <==
	E1104 12:15:50.780981       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1104 12:15:51.241423       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1104 12:16:20.785615       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1104 12:16:21.248588       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1104 12:16:50.792233       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1104 12:16:51.255761       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1104 12:17:20.801024       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1104 12:17:21.263022       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1104 12:17:50.807366       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1104 12:17:51.269769       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1104 12:18:20.812660       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1104 12:18:21.276558       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1104 12:18:31.308285       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="embed-certs-325116"
	I1104 12:18:40.902545       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="173.143µs"
	E1104 12:18:50.817796       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1104 12:18:51.282739       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1104 12:18:53.903556       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="91.889µs"
	E1104 12:19:20.824953       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1104 12:19:21.290551       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1104 12:19:50.831102       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1104 12:19:51.297382       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1104 12:20:20.836667       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1104 12:20:21.304891       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1104 12:20:50.842763       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1104 12:20:51.311435       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [512d8563ff2ef5d1f8021fa8815d00d2705c8df3b079a23ee6fc909af1c980f0] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1104 12:07:48.514943       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1104 12:07:48.528426       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.47"]
	E1104 12:07:48.528639       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1104 12:07:48.603457       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1104 12:07:48.603577       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1104 12:07:48.603656       1 server_linux.go:169] "Using iptables Proxier"
	I1104 12:07:48.607344       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1104 12:07:48.607632       1 server.go:483] "Version info" version="v1.31.2"
	I1104 12:07:48.607643       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1104 12:07:48.608451       1 config.go:199] "Starting service config controller"
	I1104 12:07:48.608537       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1104 12:07:48.608502       1 config.go:105] "Starting endpoint slice config controller"
	I1104 12:07:48.608660       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1104 12:07:48.608791       1 config.go:328] "Starting node config controller"
	I1104 12:07:48.608812       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1104 12:07:48.709462       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1104 12:07:48.709504       1 shared_informer.go:320] Caches are synced for service config
	I1104 12:07:48.709576       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [a5a0cb5f09f9920c57a05bcc9c16cdcab6806c42458b625d4d075a9d1bc3f80f] <==
	I1104 12:07:44.389836       1 serving.go:386] Generated self-signed cert in-memory
	W1104 12:07:47.075348       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1104 12:07:47.075394       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1104 12:07:47.075435       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1104 12:07:47.075443       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1104 12:07:47.118024       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.2"
	I1104 12:07:47.118059       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1104 12:07:47.120160       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1104 12:07:47.120289       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1104 12:07:47.120411       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1104 12:07:47.120612       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1104 12:07:47.221791       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 04 12:20:02 embed-certs-325116 kubelet[906]: E1104 12:20:02.048314     906 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730722802047860988,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 04 12:20:12 embed-certs-325116 kubelet[906]: E1104 12:20:12.049941     906 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730722812049603704,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 04 12:20:12 embed-certs-325116 kubelet[906]: E1104 12:20:12.049978     906 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730722812049603704,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 04 12:20:12 embed-certs-325116 kubelet[906]: E1104 12:20:12.889391     906 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-knfd4" podUID="5b3ef856-5b69-44b1-ae29-4a58bf235e41"
	Nov 04 12:20:22 embed-certs-325116 kubelet[906]: E1104 12:20:22.051726     906 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730722822051502654,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 04 12:20:22 embed-certs-325116 kubelet[906]: E1104 12:20:22.051802     906 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730722822051502654,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 04 12:20:24 embed-certs-325116 kubelet[906]: E1104 12:20:24.888979     906 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-knfd4" podUID="5b3ef856-5b69-44b1-ae29-4a58bf235e41"
	Nov 04 12:20:32 embed-certs-325116 kubelet[906]: E1104 12:20:32.052682     906 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730722832052468410,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 04 12:20:32 embed-certs-325116 kubelet[906]: E1104 12:20:32.052722     906 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730722832052468410,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 04 12:20:39 embed-certs-325116 kubelet[906]: E1104 12:20:39.889530     906 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-knfd4" podUID="5b3ef856-5b69-44b1-ae29-4a58bf235e41"
	Nov 04 12:20:41 embed-certs-325116 kubelet[906]: E1104 12:20:41.915646     906 iptables.go:577] "Could not set up iptables canary" err=<
	Nov 04 12:20:41 embed-certs-325116 kubelet[906]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Nov 04 12:20:41 embed-certs-325116 kubelet[906]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 04 12:20:41 embed-certs-325116 kubelet[906]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 04 12:20:41 embed-certs-325116 kubelet[906]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Nov 04 12:20:42 embed-certs-325116 kubelet[906]: E1104 12:20:42.055623     906 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730722842055194235,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 04 12:20:42 embed-certs-325116 kubelet[906]: E1104 12:20:42.055648     906 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730722842055194235,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 04 12:20:52 embed-certs-325116 kubelet[906]: E1104 12:20:52.056688     906 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730722852056464825,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 04 12:20:52 embed-certs-325116 kubelet[906]: E1104 12:20:52.056731     906 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730722852056464825,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 04 12:20:52 embed-certs-325116 kubelet[906]: E1104 12:20:52.889409     906 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-knfd4" podUID="5b3ef856-5b69-44b1-ae29-4a58bf235e41"
	Nov 04 12:21:02 embed-certs-325116 kubelet[906]: E1104 12:21:02.058234     906 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730722862057879826,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 04 12:21:02 embed-certs-325116 kubelet[906]: E1104 12:21:02.058664     906 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730722862057879826,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 04 12:21:03 embed-certs-325116 kubelet[906]: E1104 12:21:03.890454     906 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-knfd4" podUID="5b3ef856-5b69-44b1-ae29-4a58bf235e41"
	Nov 04 12:21:12 embed-certs-325116 kubelet[906]: E1104 12:21:12.060878     906 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730722872060015782,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 04 12:21:12 embed-certs-325116 kubelet[906]: E1104 12:21:12.061392     906 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730722872060015782,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [95a9eb50a127a39e6276205bf440bcbc662ba25b24e029dc1a667f48f8481dde] <==
	I1104 12:08:19.682687       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1104 12:08:19.694503       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1104 12:08:19.694582       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1104 12:08:37.687106       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1104 12:08:37.687318       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-325116_2b58ea8e-9e9e-47f4-91d4-f8a31f78c568!
	I1104 12:08:37.687310       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ad2eac65-348b-49fe-a8c6-4504e588ecb5", APIVersion:"v1", ResourceVersion:"604", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-325116_2b58ea8e-9e9e-47f4-91d4-f8a31f78c568 became leader
	I1104 12:08:37.788414       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-325116_2b58ea8e-9e9e-47f4-91d4-f8a31f78c568!
	
	
	==> storage-provisioner [c7558f4e108716f1710271ea75be748ffd928b609788942a3658f0d2237ebcc7] <==
	I1104 12:07:48.416484       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1104 12:08:18.423015       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-325116 -n embed-certs-325116
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-325116 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-knfd4
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-325116 describe pod metrics-server-6867b74b74-knfd4
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-325116 describe pod metrics-server-6867b74b74-knfd4: exit status 1 (66.855435ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-knfd4" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-325116 describe pod metrics-server-6867b74b74-knfd4: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.18s)
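Note: the repeated metrics-server ImagePullBackOff entries in the kubelet log above are expected in this suite: the addon is enabled with its image registry deliberately redirected to the unreachable fake.domain (see the matching "addons enable metrics-server ... --registries=MetricsServer=fake.domain" entries in the Audit table below), so the image can never be pulled. A rough sketch of that invocation, with <profile> standing in for the cluster profile name:

    minikube addons enable metrics-server -p <profile> \
      --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
      --registries=MetricsServer=fake.domain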

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.2s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1104 12:12:46.500843   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/flannel-528108/client.crt: no such file or directory" logger="UnhandledError"
E1104 12:12:50.483713   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/addons-746456/client.crt: no such file or directory" logger="UnhandledError"
E1104 12:13:01.267627   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/custom-flannel-528108/client.crt: no such file or directory" logger="UnhandledError"
E1104 12:13:20.020115   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/bridge-528108/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-036892 -n default-k8s-diff-port-036892
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-11-04 12:21:32.169688474 +0000 UTC m=+6283.524747262
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
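For reference, the condition this test polls for can be checked manually against the same profile; a minimal sketch, assuming the default-k8s-diff-port-036892 cluster is still running:

    kubectl --context default-k8s-diff-port-036892 -n kubernetes-dashboard \
      get pods -l k8s-app=kubernetes-dashboard
    kubectl --context default-k8s-diff-port-036892 -n kubernetes-dashboard \
      wait --for=condition=ready pod -l k8s-app=kubernetes-dashboard --timeout=9m0s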
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-036892 -n default-k8s-diff-port-036892
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-036892 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-036892 logs -n 25: (1.947797461s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p calico-528108 sudo                                  | calico-528108                | jenkins | v1.34.0 | 04 Nov 24 12:00 UTC | 04 Nov 24 12:00 UTC |
	|         | cri-dockerd --version                                  |                              |         |         |                     |                     |
	| ssh     | -p calico-528108 sudo                                  | calico-528108                | jenkins | v1.34.0 | 04 Nov 24 12:00 UTC |                     |
	|         | systemctl status containerd                            |                              |         |         |                     |                     |
	|         | --all --full --no-pager                                |                              |         |         |                     |                     |
	| ssh     | -p calico-528108 sudo                                  | calico-528108                | jenkins | v1.34.0 | 04 Nov 24 12:00 UTC | 04 Nov 24 12:00 UTC |
	|         | systemctl cat containerd                               |                              |         |         |                     |                     |
	|         | --no-pager                                             |                              |         |         |                     |                     |
	| ssh     | -p calico-528108 sudo cat                              | calico-528108                | jenkins | v1.34.0 | 04 Nov 24 12:00 UTC | 04 Nov 24 12:00 UTC |
	|         | /lib/systemd/system/containerd.service                 |                              |         |         |                     |                     |
	| ssh     | -p calico-528108 sudo cat                              | calico-528108                | jenkins | v1.34.0 | 04 Nov 24 12:00 UTC | 04 Nov 24 12:00 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p calico-528108 sudo                                  | calico-528108                | jenkins | v1.34.0 | 04 Nov 24 12:00 UTC | 04 Nov 24 12:00 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p calico-528108 sudo                                  | calico-528108                | jenkins | v1.34.0 | 04 Nov 24 12:00 UTC | 04 Nov 24 12:00 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p calico-528108 sudo                                  | calico-528108                | jenkins | v1.34.0 | 04 Nov 24 12:00 UTC | 04 Nov 24 12:00 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p calico-528108 sudo find                             | calico-528108                | jenkins | v1.34.0 | 04 Nov 24 12:00 UTC | 04 Nov 24 12:00 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p calico-528108 sudo crio                             | calico-528108                | jenkins | v1.34.0 | 04 Nov 24 12:00 UTC | 04 Nov 24 12:00 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p calico-528108                                       | calico-528108                | jenkins | v1.34.0 | 04 Nov 24 12:00 UTC | 04 Nov 24 12:00 UTC |
	| delete  | -p                                                     | disable-driver-mounts-457408 | jenkins | v1.34.0 | 04 Nov 24 12:00 UTC | 04 Nov 24 12:00 UTC |
	|         | disable-driver-mounts-457408                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-036892 | jenkins | v1.34.0 | 04 Nov 24 12:00 UTC | 04 Nov 24 12:01 UTC |
	|         | default-k8s-diff-port-036892                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-036892  | default-k8s-diff-port-036892 | jenkins | v1.34.0 | 04 Nov 24 12:01 UTC | 04 Nov 24 12:01 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-036892 | jenkins | v1.34.0 | 04 Nov 24 12:01 UTC |                     |
	|         | default-k8s-diff-port-036892                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-908370                  | no-preload-908370            | jenkins | v1.34.0 | 04 Nov 24 12:02 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-908370                                   | no-preload-908370            | jenkins | v1.34.0 | 04 Nov 24 12:02 UTC | 04 Nov 24 12:13 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-325116                 | embed-certs-325116           | jenkins | v1.34.0 | 04 Nov 24 12:02 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-589257        | old-k8s-version-589257       | jenkins | v1.34.0 | 04 Nov 24 12:02 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p embed-certs-325116                                  | embed-certs-325116           | jenkins | v1.34.0 | 04 Nov 24 12:02 UTC | 04 Nov 24 12:12 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-036892       | default-k8s-diff-port-036892 | jenkins | v1.34.0 | 04 Nov 24 12:04 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-589257                              | old-k8s-version-589257       | jenkins | v1.34.0 | 04 Nov 24 12:04 UTC | 04 Nov 24 12:04 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-036892 | jenkins | v1.34.0 | 04 Nov 24 12:04 UTC | 04 Nov 24 12:12 UTC |
	|         | default-k8s-diff-port-036892                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-589257             | old-k8s-version-589257       | jenkins | v1.34.0 | 04 Nov 24 12:04 UTC | 04 Nov 24 12:04 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-589257                              | old-k8s-version-589257       | jenkins | v1.34.0 | 04 Nov 24 12:04 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/11/04 12:04:21
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1104 12:04:21.684777   86402 out.go:345] Setting OutFile to fd 1 ...
	I1104 12:04:21.684885   86402 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1104 12:04:21.684893   86402 out.go:358] Setting ErrFile to fd 2...
	I1104 12:04:21.684897   86402 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1104 12:04:21.685085   86402 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19906-19898/.minikube/bin
	I1104 12:04:21.685618   86402 out.go:352] Setting JSON to false
	I1104 12:04:21.686501   86402 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":10013,"bootTime":1730711849,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1104 12:04:21.686603   86402 start.go:139] virtualization: kvm guest
	I1104 12:04:21.688652   86402 out.go:177] * [old-k8s-version-589257] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1104 12:04:21.690121   86402 notify.go:220] Checking for updates...
	I1104 12:04:21.690173   86402 out.go:177]   - MINIKUBE_LOCATION=19906
	I1104 12:04:21.691712   86402 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1104 12:04:21.693100   86402 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19906-19898/kubeconfig
	I1104 12:04:21.694334   86402 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19906-19898/.minikube
	I1104 12:04:21.695431   86402 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1104 12:04:21.696680   86402 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1104 12:04:21.698271   86402 config.go:182] Loaded profile config "old-k8s-version-589257": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1104 12:04:21.698697   86402 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:04:21.698738   86402 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:04:21.713382   86402 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46731
	I1104 12:04:21.713861   86402 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:04:21.714357   86402 main.go:141] libmachine: Using API Version  1
	I1104 12:04:21.714378   86402 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:04:21.714696   86402 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:04:21.714872   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .DriverName
	I1104 12:04:21.716711   86402 out.go:177] * Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	I1104 12:04:21.718136   86402 driver.go:394] Setting default libvirt URI to qemu:///system
	I1104 12:04:21.718573   86402 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:04:21.718617   86402 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:04:21.733074   86402 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45605
	I1104 12:04:21.733525   86402 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:04:21.733939   86402 main.go:141] libmachine: Using API Version  1
	I1104 12:04:21.733955   86402 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:04:21.734252   86402 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:04:21.734410   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .DriverName
	I1104 12:04:21.770049   86402 out.go:177] * Using the kvm2 driver based on existing profile
	I1104 12:04:21.771735   86402 start.go:297] selected driver: kvm2
	I1104 12:04:21.771748   86402 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-589257 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-589257 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.180 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1104 12:04:21.771851   86402 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1104 12:04:21.772615   86402 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1104 12:04:21.772709   86402 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19906-19898/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1104 12:04:21.787662   86402 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1104 12:04:21.788158   86402 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1104 12:04:21.788201   86402 cni.go:84] Creating CNI manager for ""
	I1104 12:04:21.788238   86402 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1104 12:04:21.788282   86402 start.go:340] cluster config:
	{Name:old-k8s-version-589257 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-589257 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.180 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1104 12:04:21.788422   86402 iso.go:125] acquiring lock: {Name:mk00c7dd6e02d348844f079f7574057e15cae010 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1104 12:04:21.790364   86402 out.go:177] * Starting "old-k8s-version-589257" primary control-plane node in "old-k8s-version-589257" cluster
	I1104 12:04:20.849476   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:04:20.393451   86301 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1104 12:04:20.393484   86301 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1104 12:04:20.393492   86301 cache.go:56] Caching tarball of preloaded images
	I1104 12:04:20.393580   86301 preload.go:172] Found /home/jenkins/minikube-integration/19906-19898/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1104 12:04:20.393594   86301 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1104 12:04:20.393670   86301 profile.go:143] Saving config to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/default-k8s-diff-port-036892/config.json ...
	I1104 12:04:20.393863   86301 start.go:360] acquireMachinesLock for default-k8s-diff-port-036892: {Name:mka4ed965095cfb58c9b0f5916edff39b4236a2f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1104 12:04:21.791568   86402 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1104 12:04:21.791599   86402 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1104 12:04:21.791608   86402 cache.go:56] Caching tarball of preloaded images
	I1104 12:04:21.791668   86402 preload.go:172] Found /home/jenkins/minikube-integration/19906-19898/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1104 12:04:21.791678   86402 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1104 12:04:21.791755   86402 profile.go:143] Saving config to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/old-k8s-version-589257/config.json ...
	I1104 12:04:21.791918   86402 start.go:360] acquireMachinesLock for old-k8s-version-589257: {Name:mka4ed965095cfb58c9b0f5916edff39b4236a2f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1104 12:04:26.929512   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:04:30.001546   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:04:36.081486   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:04:39.153496   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:04:45.233535   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:04:48.305510   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:04:54.385555   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:04:57.457513   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:05:03.537513   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:05:06.609487   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:05:12.689475   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:05:15.761508   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:05:21.841502   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:05:24.913609   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:05:30.993499   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:05:34.065502   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:05:40.145511   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:05:43.217478   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:05:49.297518   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:05:52.369526   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:05:58.449509   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:06:01.521498   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:06:07.601506   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:06:10.673509   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:06:16.753487   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:06:19.825549   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:06:25.905526   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:06:28.977526   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:06:35.057466   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:06:38.129670   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:06:44.209517   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:06:47.281541   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:06:53.361542   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:06:56.433564   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:07:02.513462   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:07:05.585513   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:07:11.665480   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:07:14.737542   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:07:17.742001   85759 start.go:364] duration metric: took 4m26.438155925s to acquireMachinesLock for "embed-certs-325116"
	I1104 12:07:17.742060   85759 start.go:96] Skipping create...Using existing machine configuration
	I1104 12:07:17.742068   85759 fix.go:54] fixHost starting: 
	I1104 12:07:17.742418   85759 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:07:17.742470   85759 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:07:17.758611   85759 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43597
	I1104 12:07:17.759173   85759 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:07:17.759750   85759 main.go:141] libmachine: Using API Version  1
	I1104 12:07:17.759774   85759 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:07:17.760116   85759 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:07:17.760326   85759 main.go:141] libmachine: (embed-certs-325116) Calling .DriverName
	I1104 12:07:17.760498   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetState
	I1104 12:07:17.762313   85759 fix.go:112] recreateIfNeeded on embed-certs-325116: state=Stopped err=<nil>
	I1104 12:07:17.762335   85759 main.go:141] libmachine: (embed-certs-325116) Calling .DriverName
	W1104 12:07:17.762469   85759 fix.go:138] unexpected machine state, will restart: <nil>
	I1104 12:07:17.764411   85759 out.go:177] * Restarting existing kvm2 VM for "embed-certs-325116" ...
	I1104 12:07:17.739255   85500 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1104 12:07:17.739306   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetMachineName
	I1104 12:07:17.739691   85500 buildroot.go:166] provisioning hostname "no-preload-908370"
	I1104 12:07:17.739718   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetMachineName
	I1104 12:07:17.739888   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHHostname
	I1104 12:07:17.741864   85500 machine.go:96] duration metric: took 4m37.421766695s to provisionDockerMachine
	I1104 12:07:17.741908   85500 fix.go:56] duration metric: took 4m37.442993443s for fixHost
	I1104 12:07:17.741918   85500 start.go:83] releasing machines lock for "no-preload-908370", held for 4m37.443015642s
	W1104 12:07:17.741938   85500 start.go:714] error starting host: provision: host is not running
	W1104 12:07:17.742034   85500 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I1104 12:07:17.742044   85500 start.go:729] Will try again in 5 seconds ...
	I1104 12:07:17.765958   85759 main.go:141] libmachine: (embed-certs-325116) Calling .Start
	I1104 12:07:17.766220   85759 main.go:141] libmachine: (embed-certs-325116) Ensuring networks are active...
	I1104 12:07:17.767191   85759 main.go:141] libmachine: (embed-certs-325116) Ensuring network default is active
	I1104 12:07:17.767589   85759 main.go:141] libmachine: (embed-certs-325116) Ensuring network mk-embed-certs-325116 is active
	I1104 12:07:17.767984   85759 main.go:141] libmachine: (embed-certs-325116) Getting domain xml...
	I1104 12:07:17.768804   85759 main.go:141] libmachine: (embed-certs-325116) Creating domain...
	I1104 12:07:18.996135   85759 main.go:141] libmachine: (embed-certs-325116) Waiting to get IP...
	I1104 12:07:18.997002   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:18.997542   85759 main.go:141] libmachine: (embed-certs-325116) DBG | unable to find current IP address of domain embed-certs-325116 in network mk-embed-certs-325116
	I1104 12:07:18.997615   85759 main.go:141] libmachine: (embed-certs-325116) DBG | I1104 12:07:18.997513   87021 retry.go:31] will retry after 239.606839ms: waiting for machine to come up
	I1104 12:07:19.239054   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:19.239579   85759 main.go:141] libmachine: (embed-certs-325116) DBG | unable to find current IP address of domain embed-certs-325116 in network mk-embed-certs-325116
	I1104 12:07:19.239602   85759 main.go:141] libmachine: (embed-certs-325116) DBG | I1104 12:07:19.239528   87021 retry.go:31] will retry after 303.459257ms: waiting for machine to come up
	I1104 12:07:19.545134   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:19.545597   85759 main.go:141] libmachine: (embed-certs-325116) DBG | unable to find current IP address of domain embed-certs-325116 in network mk-embed-certs-325116
	I1104 12:07:19.545633   85759 main.go:141] libmachine: (embed-certs-325116) DBG | I1104 12:07:19.545544   87021 retry.go:31] will retry after 394.511523ms: waiting for machine to come up
	I1104 12:07:19.942226   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:19.942607   85759 main.go:141] libmachine: (embed-certs-325116) DBG | unable to find current IP address of domain embed-certs-325116 in network mk-embed-certs-325116
	I1104 12:07:19.942630   85759 main.go:141] libmachine: (embed-certs-325116) DBG | I1104 12:07:19.942576   87021 retry.go:31] will retry after 381.618515ms: waiting for machine to come up
	I1104 12:07:20.326265   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:20.326707   85759 main.go:141] libmachine: (embed-certs-325116) DBG | unable to find current IP address of domain embed-certs-325116 in network mk-embed-certs-325116
	I1104 12:07:20.326738   85759 main.go:141] libmachine: (embed-certs-325116) DBG | I1104 12:07:20.326651   87021 retry.go:31] will retry after 584.226748ms: waiting for machine to come up
	I1104 12:07:20.912117   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:20.912575   85759 main.go:141] libmachine: (embed-certs-325116) DBG | unable to find current IP address of domain embed-certs-325116 in network mk-embed-certs-325116
	I1104 12:07:20.912607   85759 main.go:141] libmachine: (embed-certs-325116) DBG | I1104 12:07:20.912524   87021 retry.go:31] will retry after 770.080519ms: waiting for machine to come up
	I1104 12:07:22.742250   85500 start.go:360] acquireMachinesLock for no-preload-908370: {Name:mka4ed965095cfb58c9b0f5916edff39b4236a2f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1104 12:07:21.684620   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:21.685074   85759 main.go:141] libmachine: (embed-certs-325116) DBG | unable to find current IP address of domain embed-certs-325116 in network mk-embed-certs-325116
	I1104 12:07:21.685103   85759 main.go:141] libmachine: (embed-certs-325116) DBG | I1104 12:07:21.685026   87021 retry.go:31] will retry after 1.170018806s: waiting for machine to come up
	I1104 12:07:22.856736   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:22.857104   85759 main.go:141] libmachine: (embed-certs-325116) DBG | unable to find current IP address of domain embed-certs-325116 in network mk-embed-certs-325116
	I1104 12:07:22.857132   85759 main.go:141] libmachine: (embed-certs-325116) DBG | I1104 12:07:22.857048   87021 retry.go:31] will retry after 1.467304538s: waiting for machine to come up
	I1104 12:07:24.326735   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:24.327197   85759 main.go:141] libmachine: (embed-certs-325116) DBG | unable to find current IP address of domain embed-certs-325116 in network mk-embed-certs-325116
	I1104 12:07:24.327220   85759 main.go:141] libmachine: (embed-certs-325116) DBG | I1104 12:07:24.327148   87021 retry.go:31] will retry after 1.676202737s: waiting for machine to come up
	I1104 12:07:26.006035   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:26.006515   85759 main.go:141] libmachine: (embed-certs-325116) DBG | unable to find current IP address of domain embed-certs-325116 in network mk-embed-certs-325116
	I1104 12:07:26.006538   85759 main.go:141] libmachine: (embed-certs-325116) DBG | I1104 12:07:26.006460   87021 retry.go:31] will retry after 1.8778328s: waiting for machine to come up
	I1104 12:07:27.886226   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:27.886634   85759 main.go:141] libmachine: (embed-certs-325116) DBG | unable to find current IP address of domain embed-certs-325116 in network mk-embed-certs-325116
	I1104 12:07:27.886656   85759 main.go:141] libmachine: (embed-certs-325116) DBG | I1104 12:07:27.886579   87021 retry.go:31] will retry after 2.886548821s: waiting for machine to come up
	I1104 12:07:30.776677   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:30.777080   85759 main.go:141] libmachine: (embed-certs-325116) DBG | unable to find current IP address of domain embed-certs-325116 in network mk-embed-certs-325116
	I1104 12:07:30.777102   85759 main.go:141] libmachine: (embed-certs-325116) DBG | I1104 12:07:30.777039   87021 retry.go:31] will retry after 3.108966144s: waiting for machine to come up
	I1104 12:07:35.049920   86301 start.go:364] duration metric: took 3m14.656022924s to acquireMachinesLock for "default-k8s-diff-port-036892"
	I1104 12:07:35.050007   86301 start.go:96] Skipping create...Using existing machine configuration
	I1104 12:07:35.050019   86301 fix.go:54] fixHost starting: 
	I1104 12:07:35.050381   86301 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:07:35.050436   86301 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:07:35.067928   86301 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38865
	I1104 12:07:35.068445   86301 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:07:35.068953   86301 main.go:141] libmachine: Using API Version  1
	I1104 12:07:35.068976   86301 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:07:35.069353   86301 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:07:35.069560   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .DriverName
	I1104 12:07:35.069692   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetState
	I1104 12:07:35.071231   86301 fix.go:112] recreateIfNeeded on default-k8s-diff-port-036892: state=Stopped err=<nil>
	I1104 12:07:35.071252   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .DriverName
	W1104 12:07:35.071401   86301 fix.go:138] unexpected machine state, will restart: <nil>
	I1104 12:07:35.073762   86301 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-036892" ...
	I1104 12:07:35.075114   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .Start
	I1104 12:07:35.075311   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Ensuring networks are active...
	I1104 12:07:35.076105   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Ensuring network default is active
	I1104 12:07:35.076534   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Ensuring network mk-default-k8s-diff-port-036892 is active
	I1104 12:07:35.076946   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Getting domain xml...
	I1104 12:07:35.077641   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Creating domain...
	I1104 12:07:33.887738   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:33.888147   85759 main.go:141] libmachine: (embed-certs-325116) Found IP for machine: 192.168.39.47
	I1104 12:07:33.888176   85759 main.go:141] libmachine: (embed-certs-325116) Reserving static IP address...
	I1104 12:07:33.888206   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has current primary IP address 192.168.39.47 and MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:33.888737   85759 main.go:141] libmachine: (embed-certs-325116) DBG | found host DHCP lease matching {name: "embed-certs-325116", mac: "52:54:00:bd:ab:49", ip: "192.168.39.47"} in network mk-embed-certs-325116: {Iface:virbr1 ExpiryTime:2024-11-04 13:07:28 +0000 UTC Type:0 Mac:52:54:00:bd:ab:49 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:embed-certs-325116 Clientid:01:52:54:00:bd:ab:49}
	I1104 12:07:33.888769   85759 main.go:141] libmachine: (embed-certs-325116) DBG | skip adding static IP to network mk-embed-certs-325116 - found existing host DHCP lease matching {name: "embed-certs-325116", mac: "52:54:00:bd:ab:49", ip: "192.168.39.47"}
	I1104 12:07:33.888783   85759 main.go:141] libmachine: (embed-certs-325116) Reserved static IP address: 192.168.39.47
	I1104 12:07:33.888795   85759 main.go:141] libmachine: (embed-certs-325116) Waiting for SSH to be available...
	I1104 12:07:33.888812   85759 main.go:141] libmachine: (embed-certs-325116) DBG | Getting to WaitForSSH function...
	I1104 12:07:33.891130   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:33.891493   85759 main.go:141] libmachine: (embed-certs-325116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:ab:49", ip: ""} in network mk-embed-certs-325116: {Iface:virbr1 ExpiryTime:2024-11-04 13:07:28 +0000 UTC Type:0 Mac:52:54:00:bd:ab:49 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:embed-certs-325116 Clientid:01:52:54:00:bd:ab:49}
	I1104 12:07:33.891520   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined IP address 192.168.39.47 and MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:33.891670   85759 main.go:141] libmachine: (embed-certs-325116) DBG | Using SSH client type: external
	I1104 12:07:33.891693   85759 main.go:141] libmachine: (embed-certs-325116) DBG | Using SSH private key: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/embed-certs-325116/id_rsa (-rw-------)
	I1104 12:07:33.891732   85759 main.go:141] libmachine: (embed-certs-325116) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.47 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19906-19898/.minikube/machines/embed-certs-325116/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1104 12:07:33.891748   85759 main.go:141] libmachine: (embed-certs-325116) DBG | About to run SSH command:
	I1104 12:07:33.891773   85759 main.go:141] libmachine: (embed-certs-325116) DBG | exit 0
	I1104 12:07:34.012989   85759 main.go:141] libmachine: (embed-certs-325116) DBG | SSH cmd err, output: <nil>: 
	I1104 12:07:34.013457   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetConfigRaw
	I1104 12:07:34.014162   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetIP
	I1104 12:07:34.016645   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:34.017028   85759 main.go:141] libmachine: (embed-certs-325116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:ab:49", ip: ""} in network mk-embed-certs-325116: {Iface:virbr1 ExpiryTime:2024-11-04 13:07:28 +0000 UTC Type:0 Mac:52:54:00:bd:ab:49 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:embed-certs-325116 Clientid:01:52:54:00:bd:ab:49}
	I1104 12:07:34.017062   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined IP address 192.168.39.47 and MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:34.017347   85759 profile.go:143] Saving config to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/embed-certs-325116/config.json ...
	I1104 12:07:34.017577   85759 machine.go:93] provisionDockerMachine start ...
	I1104 12:07:34.017596   85759 main.go:141] libmachine: (embed-certs-325116) Calling .DriverName
	I1104 12:07:34.017824   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHHostname
	I1104 12:07:34.020134   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:34.020416   85759 main.go:141] libmachine: (embed-certs-325116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:ab:49", ip: ""} in network mk-embed-certs-325116: {Iface:virbr1 ExpiryTime:2024-11-04 13:07:28 +0000 UTC Type:0 Mac:52:54:00:bd:ab:49 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:embed-certs-325116 Clientid:01:52:54:00:bd:ab:49}
	I1104 12:07:34.020449   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined IP address 192.168.39.47 and MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:34.020570   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHPort
	I1104 12:07:34.020745   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHKeyPath
	I1104 12:07:34.020905   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHKeyPath
	I1104 12:07:34.021059   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHUsername
	I1104 12:07:34.021313   85759 main.go:141] libmachine: Using SSH client type: native
	I1104 12:07:34.021505   85759 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.47 22 <nil> <nil>}
	I1104 12:07:34.021515   85759 main.go:141] libmachine: About to run SSH command:
	hostname
	I1104 12:07:34.125266   85759 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1104 12:07:34.125305   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetMachineName
	I1104 12:07:34.125556   85759 buildroot.go:166] provisioning hostname "embed-certs-325116"
	I1104 12:07:34.125583   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetMachineName
	I1104 12:07:34.125781   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHHostname
	I1104 12:07:34.128180   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:34.128486   85759 main.go:141] libmachine: (embed-certs-325116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:ab:49", ip: ""} in network mk-embed-certs-325116: {Iface:virbr1 ExpiryTime:2024-11-04 13:07:28 +0000 UTC Type:0 Mac:52:54:00:bd:ab:49 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:embed-certs-325116 Clientid:01:52:54:00:bd:ab:49}
	I1104 12:07:34.128514   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined IP address 192.168.39.47 and MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:34.128603   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHPort
	I1104 12:07:34.128758   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHKeyPath
	I1104 12:07:34.128890   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHKeyPath
	I1104 12:07:34.128996   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHUsername
	I1104 12:07:34.129166   85759 main.go:141] libmachine: Using SSH client type: native
	I1104 12:07:34.129371   85759 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.47 22 <nil> <nil>}
	I1104 12:07:34.129394   85759 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-325116 && echo "embed-certs-325116" | sudo tee /etc/hostname
	I1104 12:07:34.242027   85759 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-325116
	
	I1104 12:07:34.242054   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHHostname
	I1104 12:07:34.244671   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:34.244984   85759 main.go:141] libmachine: (embed-certs-325116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:ab:49", ip: ""} in network mk-embed-certs-325116: {Iface:virbr1 ExpiryTime:2024-11-04 13:07:28 +0000 UTC Type:0 Mac:52:54:00:bd:ab:49 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:embed-certs-325116 Clientid:01:52:54:00:bd:ab:49}
	I1104 12:07:34.245019   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined IP address 192.168.39.47 and MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:34.245159   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHPort
	I1104 12:07:34.245337   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHKeyPath
	I1104 12:07:34.245514   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHKeyPath
	I1104 12:07:34.245661   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHUsername
	I1104 12:07:34.245810   85759 main.go:141] libmachine: Using SSH client type: native
	I1104 12:07:34.245971   85759 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.47 22 <nil> <nil>}
	I1104 12:07:34.245986   85759 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-325116' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-325116/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-325116' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1104 12:07:34.357178   85759 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1104 12:07:34.357204   85759 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19906-19898/.minikube CaCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19906-19898/.minikube}
	I1104 12:07:34.357220   85759 buildroot.go:174] setting up certificates
	I1104 12:07:34.357241   85759 provision.go:84] configureAuth start
	I1104 12:07:34.357250   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetMachineName
	I1104 12:07:34.357533   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetIP
	I1104 12:07:34.359993   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:34.360308   85759 main.go:141] libmachine: (embed-certs-325116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:ab:49", ip: ""} in network mk-embed-certs-325116: {Iface:virbr1 ExpiryTime:2024-11-04 13:07:28 +0000 UTC Type:0 Mac:52:54:00:bd:ab:49 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:embed-certs-325116 Clientid:01:52:54:00:bd:ab:49}
	I1104 12:07:34.360327   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined IP address 192.168.39.47 and MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:34.360533   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHHostname
	I1104 12:07:34.362459   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:34.362750   85759 main.go:141] libmachine: (embed-certs-325116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:ab:49", ip: ""} in network mk-embed-certs-325116: {Iface:virbr1 ExpiryTime:2024-11-04 13:07:28 +0000 UTC Type:0 Mac:52:54:00:bd:ab:49 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:embed-certs-325116 Clientid:01:52:54:00:bd:ab:49}
	I1104 12:07:34.362786   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined IP address 192.168.39.47 and MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:34.362932   85759 provision.go:143] copyHostCerts
	I1104 12:07:34.362986   85759 exec_runner.go:144] found /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem, removing ...
	I1104 12:07:34.363022   85759 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem
	I1104 12:07:34.363109   85759 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem (1123 bytes)
	I1104 12:07:34.363231   85759 exec_runner.go:144] found /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem, removing ...
	I1104 12:07:34.363242   85759 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem
	I1104 12:07:34.363282   85759 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem (1679 bytes)
	I1104 12:07:34.363357   85759 exec_runner.go:144] found /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem, removing ...
	I1104 12:07:34.363368   85759 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem
	I1104 12:07:34.363399   85759 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem (1078 bytes)
	I1104 12:07:34.363503   85759 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem org=jenkins.embed-certs-325116 san=[127.0.0.1 192.168.39.47 embed-certs-325116 localhost minikube]
	I1104 12:07:34.453223   85759 provision.go:177] copyRemoteCerts
	I1104 12:07:34.453295   85759 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1104 12:07:34.453317   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHHostname
	I1104 12:07:34.455736   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:34.456022   85759 main.go:141] libmachine: (embed-certs-325116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:ab:49", ip: ""} in network mk-embed-certs-325116: {Iface:virbr1 ExpiryTime:2024-11-04 13:07:28 +0000 UTC Type:0 Mac:52:54:00:bd:ab:49 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:embed-certs-325116 Clientid:01:52:54:00:bd:ab:49}
	I1104 12:07:34.456054   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined IP address 192.168.39.47 and MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:34.456230   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHPort
	I1104 12:07:34.456406   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHKeyPath
	I1104 12:07:34.456539   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHUsername
	I1104 12:07:34.456631   85759 sshutil.go:53] new ssh client: &{IP:192.168.39.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/embed-certs-325116/id_rsa Username:docker}
	I1104 12:07:34.539172   85759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1104 12:07:34.561889   85759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1104 12:07:34.585111   85759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1104 12:07:34.607449   85759 provision.go:87] duration metric: took 250.195255ms to configureAuth
	I1104 12:07:34.607495   85759 buildroot.go:189] setting minikube options for container-runtime
	I1104 12:07:34.607809   85759 config.go:182] Loaded profile config "embed-certs-325116": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1104 12:07:34.607952   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHHostname
	I1104 12:07:34.610672   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:34.611009   85759 main.go:141] libmachine: (embed-certs-325116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:ab:49", ip: ""} in network mk-embed-certs-325116: {Iface:virbr1 ExpiryTime:2024-11-04 13:07:28 +0000 UTC Type:0 Mac:52:54:00:bd:ab:49 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:embed-certs-325116 Clientid:01:52:54:00:bd:ab:49}
	I1104 12:07:34.611032   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined IP address 192.168.39.47 and MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:34.611253   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHPort
	I1104 12:07:34.611444   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHKeyPath
	I1104 12:07:34.611600   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHKeyPath
	I1104 12:07:34.611739   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHUsername
	I1104 12:07:34.611917   85759 main.go:141] libmachine: Using SSH client type: native
	I1104 12:07:34.612086   85759 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.47 22 <nil> <nil>}
	I1104 12:07:34.612101   85759 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1104 12:07:34.823086   85759 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1104 12:07:34.823114   85759 machine.go:96] duration metric: took 805.522353ms to provisionDockerMachine
	I1104 12:07:34.823128   85759 start.go:293] postStartSetup for "embed-certs-325116" (driver="kvm2")
	I1104 12:07:34.823138   85759 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1104 12:07:34.823174   85759 main.go:141] libmachine: (embed-certs-325116) Calling .DriverName
	I1104 12:07:34.823451   85759 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1104 12:07:34.823489   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHHostname
	I1104 12:07:34.826112   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:34.826453   85759 main.go:141] libmachine: (embed-certs-325116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:ab:49", ip: ""} in network mk-embed-certs-325116: {Iface:virbr1 ExpiryTime:2024-11-04 13:07:28 +0000 UTC Type:0 Mac:52:54:00:bd:ab:49 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:embed-certs-325116 Clientid:01:52:54:00:bd:ab:49}
	I1104 12:07:34.826482   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined IP address 192.168.39.47 and MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:34.826581   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHPort
	I1104 12:07:34.826756   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHKeyPath
	I1104 12:07:34.826886   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHUsername
	I1104 12:07:34.826998   85759 sshutil.go:53] new ssh client: &{IP:192.168.39.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/embed-certs-325116/id_rsa Username:docker}
	I1104 12:07:34.907354   85759 ssh_runner.go:195] Run: cat /etc/os-release
	I1104 12:07:34.911229   85759 info.go:137] Remote host: Buildroot 2023.02.9
	I1104 12:07:34.911246   85759 filesync.go:126] Scanning /home/jenkins/minikube-integration/19906-19898/.minikube/addons for local assets ...
	I1104 12:07:34.911316   85759 filesync.go:126] Scanning /home/jenkins/minikube-integration/19906-19898/.minikube/files for local assets ...
	I1104 12:07:34.911402   85759 filesync.go:149] local asset: /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem -> 272182.pem in /etc/ssl/certs
	I1104 12:07:34.911516   85759 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1104 12:07:34.920149   85759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem --> /etc/ssl/certs/272182.pem (1708 bytes)
	I1104 12:07:34.942468   85759 start.go:296] duration metric: took 119.32654ms for postStartSetup
	I1104 12:07:34.942517   85759 fix.go:56] duration metric: took 17.200448721s for fixHost
	I1104 12:07:34.942540   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHHostname
	I1104 12:07:34.945295   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:34.945659   85759 main.go:141] libmachine: (embed-certs-325116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:ab:49", ip: ""} in network mk-embed-certs-325116: {Iface:virbr1 ExpiryTime:2024-11-04 13:07:28 +0000 UTC Type:0 Mac:52:54:00:bd:ab:49 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:embed-certs-325116 Clientid:01:52:54:00:bd:ab:49}
	I1104 12:07:34.945685   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined IP address 192.168.39.47 and MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:34.945847   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHPort
	I1104 12:07:34.946006   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHKeyPath
	I1104 12:07:34.946173   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHKeyPath
	I1104 12:07:34.946311   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHUsername
	I1104 12:07:34.946442   85759 main.go:141] libmachine: Using SSH client type: native
	I1104 12:07:34.946583   85759 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.47 22 <nil> <nil>}
	I1104 12:07:34.946592   85759 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1104 12:07:35.049767   85759 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730722055.017047529
	
	I1104 12:07:35.049790   85759 fix.go:216] guest clock: 1730722055.017047529
	I1104 12:07:35.049797   85759 fix.go:229] Guest: 2024-11-04 12:07:35.017047529 +0000 UTC Remote: 2024-11-04 12:07:34.942522008 +0000 UTC m=+283.781167350 (delta=74.525521ms)
	I1104 12:07:35.049829   85759 fix.go:200] guest clock delta is within tolerance: 74.525521ms
	I1104 12:07:35.049834   85759 start.go:83] releasing machines lock for "embed-certs-325116", held for 17.307794416s
	I1104 12:07:35.049859   85759 main.go:141] libmachine: (embed-certs-325116) Calling .DriverName
	I1104 12:07:35.050137   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetIP
	I1104 12:07:35.052845   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:35.053238   85759 main.go:141] libmachine: (embed-certs-325116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:ab:49", ip: ""} in network mk-embed-certs-325116: {Iface:virbr1 ExpiryTime:2024-11-04 13:07:28 +0000 UTC Type:0 Mac:52:54:00:bd:ab:49 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:embed-certs-325116 Clientid:01:52:54:00:bd:ab:49}
	I1104 12:07:35.053269   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined IP address 192.168.39.47 and MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:35.053454   85759 main.go:141] libmachine: (embed-certs-325116) Calling .DriverName
	I1104 12:07:35.054050   85759 main.go:141] libmachine: (embed-certs-325116) Calling .DriverName
	I1104 12:07:35.054239   85759 main.go:141] libmachine: (embed-certs-325116) Calling .DriverName
	I1104 12:07:35.054337   85759 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1104 12:07:35.054383   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHHostname
	I1104 12:07:35.054502   85759 ssh_runner.go:195] Run: cat /version.json
	I1104 12:07:35.054539   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHHostname
	I1104 12:07:35.057289   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:35.057391   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:35.057733   85759 main.go:141] libmachine: (embed-certs-325116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:ab:49", ip: ""} in network mk-embed-certs-325116: {Iface:virbr1 ExpiryTime:2024-11-04 13:07:28 +0000 UTC Type:0 Mac:52:54:00:bd:ab:49 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:embed-certs-325116 Clientid:01:52:54:00:bd:ab:49}
	I1104 12:07:35.057778   85759 main.go:141] libmachine: (embed-certs-325116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:ab:49", ip: ""} in network mk-embed-certs-325116: {Iface:virbr1 ExpiryTime:2024-11-04 13:07:28 +0000 UTC Type:0 Mac:52:54:00:bd:ab:49 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:embed-certs-325116 Clientid:01:52:54:00:bd:ab:49}
	I1104 12:07:35.057802   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined IP address 192.168.39.47 and MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:35.057820   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined IP address 192.168.39.47 and MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:35.057959   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHPort
	I1104 12:07:35.057996   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHPort
	I1104 12:07:35.058110   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHKeyPath
	I1104 12:07:35.058296   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHKeyPath
	I1104 12:07:35.058316   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHUsername
	I1104 12:07:35.058485   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHUsername
	I1104 12:07:35.058485   85759 sshutil.go:53] new ssh client: &{IP:192.168.39.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/embed-certs-325116/id_rsa Username:docker}
	I1104 12:07:35.058658   85759 sshutil.go:53] new ssh client: &{IP:192.168.39.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/embed-certs-325116/id_rsa Username:docker}
	I1104 12:07:35.134602   85759 ssh_runner.go:195] Run: systemctl --version
	I1104 12:07:35.158961   85759 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1104 12:07:35.303038   85759 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1104 12:07:35.309611   85759 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1104 12:07:35.309674   85759 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1104 12:07:35.325082   85759 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1104 12:07:35.325142   85759 start.go:495] detecting cgroup driver to use...
	I1104 12:07:35.325211   85759 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1104 12:07:35.341673   85759 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1104 12:07:35.355506   85759 docker.go:217] disabling cri-docker service (if available) ...
	I1104 12:07:35.355569   85759 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1104 12:07:35.369017   85759 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1104 12:07:35.382745   85759 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1104 12:07:35.498985   85759 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1104 12:07:35.648628   85759 docker.go:233] disabling docker service ...
	I1104 12:07:35.648702   85759 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1104 12:07:35.666912   85759 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1104 12:07:35.679786   85759 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1104 12:07:35.799284   85759 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1104 12:07:35.931842   85759 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1104 12:07:35.945707   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1104 12:07:35.965183   85759 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1104 12:07:35.965269   85759 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:07:35.975446   85759 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1104 12:07:35.975514   85759 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:07:35.985968   85759 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:07:35.996462   85759 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:07:36.006840   85759 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1104 12:07:36.017174   85759 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:07:36.027013   85759 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:07:36.044572   85759 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:07:36.054046   85759 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1104 12:07:36.063355   85759 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1104 12:07:36.063399   85759 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1104 12:07:36.075157   85759 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1104 12:07:36.084713   85759 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1104 12:07:36.205088   85759 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1104 12:07:36.299330   85759 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1104 12:07:36.299423   85759 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1104 12:07:36.304194   85759 start.go:563] Will wait 60s for crictl version
	I1104 12:07:36.304248   85759 ssh_runner.go:195] Run: which crictl
	I1104 12:07:36.308041   85759 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1104 12:07:36.349114   85759 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1104 12:07:36.349264   85759 ssh_runner.go:195] Run: crio --version
	I1104 12:07:36.378677   85759 ssh_runner.go:195] Run: crio --version
	I1104 12:07:36.406751   85759 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
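After restarting crio, the log waits up to 60s for the socket at /var/run/crio/crio.sock and then up to 60s more for `crictl version` to answer. A rough local sketch of that two-stage readiness wait; the one-second polling interval is an assumption, and the real checks are executed through ssh_runner on the guest rather than locally:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"time"
	)

	// waitFor polls check once per second until it succeeds or the timeout elapses.
	func waitFor(timeout time.Duration, what string, check func() error) error {
		deadline := time.Now().Add(timeout)
		for {
			err := check()
			if err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out after %v waiting for %s: %w", timeout, what, err)
			}
			time.Sleep(time.Second)
		}
	}

	func main() {
		const sock = "/var/run/crio/crio.sock"

		// Stage 1: the socket file must exist ("Will wait 60s for socket path").
		if err := waitFor(60*time.Second, "crio socket", func() error {
			_, err := os.Stat(sock)
			return err
		}); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}

		// Stage 2: crictl must be able to talk to the runtime ("Will wait 60s for crictl version").
		if err := waitFor(60*time.Second, "crictl version", func() error {
			return exec.Command("sudo", "/usr/bin/crictl", "version").Run()
		}); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
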
	I1104 12:07:36.335603   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Waiting to get IP...
	I1104 12:07:36.336431   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:36.336921   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | unable to find current IP address of domain default-k8s-diff-port-036892 in network mk-default-k8s-diff-port-036892
	I1104 12:07:36.337007   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | I1104 12:07:36.336911   87142 retry.go:31] will retry after 289.750795ms: waiting for machine to come up
	I1104 12:07:36.628712   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:36.629301   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | unable to find current IP address of domain default-k8s-diff-port-036892 in network mk-default-k8s-diff-port-036892
	I1104 12:07:36.629419   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | I1104 12:07:36.629345   87142 retry.go:31] will retry after 356.596321ms: waiting for machine to come up
	I1104 12:07:36.988173   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:36.988663   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | unable to find current IP address of domain default-k8s-diff-port-036892 in network mk-default-k8s-diff-port-036892
	I1104 12:07:36.988713   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | I1104 12:07:36.988626   87142 retry.go:31] will retry after 446.62367ms: waiting for machine to come up
	I1104 12:07:37.437529   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:37.438120   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | unable to find current IP address of domain default-k8s-diff-port-036892 in network mk-default-k8s-diff-port-036892
	I1104 12:07:37.438174   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | I1104 12:07:37.438023   87142 retry.go:31] will retry after 482.072639ms: waiting for machine to come up
	I1104 12:07:37.921514   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:37.922025   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | unable to find current IP address of domain default-k8s-diff-port-036892 in network mk-default-k8s-diff-port-036892
	I1104 12:07:37.922056   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | I1104 12:07:37.921983   87142 retry.go:31] will retry after 645.10615ms: waiting for machine to come up
	I1104 12:07:38.569009   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:38.569524   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | unable to find current IP address of domain default-k8s-diff-port-036892 in network mk-default-k8s-diff-port-036892
	I1104 12:07:38.569566   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | I1104 12:07:38.569432   87142 retry.go:31] will retry after 841.352802ms: waiting for machine to come up
	I1104 12:07:39.412662   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:39.413091   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | unable to find current IP address of domain default-k8s-diff-port-036892 in network mk-default-k8s-diff-port-036892
	I1104 12:07:39.413112   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | I1104 12:07:39.413047   87142 retry.go:31] will retry after 878.218722ms: waiting for machine to come up
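The interleaved default-k8s-diff-port-036892 lines show libmachine repeatedly polling for the VM's DHCP lease and sleeping a little longer on each attempt. A small sketch of that retry-with-growing-backoff pattern; the lookup function, starting delay, and jitter here are placeholders, since the real code queries libvirt for the lease:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	var errNoLease = errors.New("unable to find current IP address")

	// lookupIP stands in for the libvirt DHCP-lease query; it fails a few times
	// before "finding" an address, just to exercise the retry loop.
	func lookupIP(attempt int) (string, error) {
		if attempt < 5 {
			return "", errNoLease
		}
		return "192.168.50.10", nil // placeholder address
	}

	func main() {
		backoff := 250 * time.Millisecond
		for attempt := 0; ; attempt++ {
			ip, err := lookupIP(attempt)
			if err == nil {
				fmt.Println("machine is up at", ip)
				return
			}
			// Grow the wait each round and add jitter, roughly matching the
			// increasing "will retry after ..." durations in the log.
			wait := backoff + time.Duration(rand.Int63n(int64(backoff)/2))
			fmt.Printf("retry %d: %v, will retry after %v\n", attempt, err, wait)
			time.Sleep(wait)
			backoff += backoff / 3
		}
	}
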
	I1104 12:07:36.407939   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetIP
	I1104 12:07:36.411021   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:36.411378   85759 main.go:141] libmachine: (embed-certs-325116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:ab:49", ip: ""} in network mk-embed-certs-325116: {Iface:virbr1 ExpiryTime:2024-11-04 13:07:28 +0000 UTC Type:0 Mac:52:54:00:bd:ab:49 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:embed-certs-325116 Clientid:01:52:54:00:bd:ab:49}
	I1104 12:07:36.411408   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined IP address 192.168.39.47 and MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:36.411599   85759 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1104 12:07:36.415528   85759 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1104 12:07:36.427484   85759 kubeadm.go:883] updating cluster {Name:embed-certs-325116 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.2 ClusterName:embed-certs-325116 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.47 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1104 12:07:36.427616   85759 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1104 12:07:36.427684   85759 ssh_runner.go:195] Run: sudo crictl images --output json
	I1104 12:07:36.460332   85759 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1104 12:07:36.460406   85759 ssh_runner.go:195] Run: which lz4
	I1104 12:07:36.464187   85759 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1104 12:07:36.468140   85759 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1104 12:07:36.468177   85759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1104 12:07:37.703067   85759 crio.go:462] duration metric: took 1.238901186s to copy over tarball
	I1104 12:07:37.703136   85759 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1104 12:07:39.803761   85759 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.100578378s)
	I1104 12:07:39.803795   85759 crio.go:469] duration metric: took 2.100697698s to extract the tarball
	I1104 12:07:39.803805   85759 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1104 12:07:39.840536   85759 ssh_runner.go:195] Run: sudo crictl images --output json
	I1104 12:07:39.883410   85759 crio.go:514] all images are preloaded for cri-o runtime.
	I1104 12:07:39.883431   85759 cache_images.go:84] Images are preloaded, skipping loading
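The preload step above first asks crictl whether the expected control-plane images are already present, and only when they are missing does it copy the ~392 MB preloaded-images tarball into the guest, unpack it with tar + lz4, and delete it. A condensed local sketch of that decision; the required-image check is simplified to one tag, and the real commands all run through ssh_runner on the guest:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
		"strings"
	)

	// crictlImages is the subset of `crictl images --output json` we care about.
	type crictlImages struct {
		Images []struct {
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}

	// hasImage reports whether any loaded image carries the given tag.
	func hasImage(out []byte, tag string) (bool, error) {
		var imgs crictlImages
		if err := json.Unmarshal(out, &imgs); err != nil {
			return false, err
		}
		for _, img := range imgs.Images {
			for _, t := range img.RepoTags {
				if strings.Contains(t, tag) {
					return true, nil
				}
			}
		}
		return false, nil
	}

	func main() {
		out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
		if err != nil {
			fmt.Println("crictl not available here:", err)
			return
		}
		ok, err := hasImage(out, "kube-apiserver:v1.31.2")
		if err != nil || !ok {
			fmt.Println("images not preloaded; would copy and extract the tarball:")
			fmt.Println(`  sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4`)
			return
		}
		fmt.Println("all images are preloaded, skipping loading")
	}
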
	I1104 12:07:39.883438   85759 kubeadm.go:934] updating node { 192.168.39.47 8443 v1.31.2 crio true true} ...
	I1104 12:07:39.883531   85759 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-325116 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.47
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:embed-certs-325116 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1104 12:07:39.883608   85759 ssh_runner.go:195] Run: crio config
	I1104 12:07:39.928280   85759 cni.go:84] Creating CNI manager for ""
	I1104 12:07:39.928303   85759 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1104 12:07:39.928313   85759 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1104 12:07:39.928333   85759 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.47 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-325116 NodeName:embed-certs-325116 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.47"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.47 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1104 12:07:39.928440   85759 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.47
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-325116"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.47"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.47"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1104 12:07:39.928495   85759 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1104 12:07:39.938496   85759 binaries.go:44] Found k8s binaries, skipping transfer
	I1104 12:07:39.938568   85759 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1104 12:07:39.947809   85759 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1104 12:07:39.963319   85759 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1104 12:07:39.978789   85759 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2295 bytes)
	I1104 12:07:39.994910   85759 ssh_runner.go:195] Run: grep 192.168.39.47	control-plane.minikube.internal$ /etc/hosts
	I1104 12:07:39.998355   85759 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.47	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1104 12:07:40.010097   85759 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1104 12:07:40.118679   85759 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1104 12:07:40.134369   85759 certs.go:68] Setting up /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/embed-certs-325116 for IP: 192.168.39.47
	I1104 12:07:40.134391   85759 certs.go:194] generating shared ca certs ...
	I1104 12:07:40.134429   85759 certs.go:226] acquiring lock for ca certs: {Name:mk4fb0469da6697654d0290fd25edddd463f47c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 12:07:40.134612   85759 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.key
	I1104 12:07:40.134666   85759 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.key
	I1104 12:07:40.134680   85759 certs.go:256] generating profile certs ...
	I1104 12:07:40.134782   85759 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/embed-certs-325116/client.key
	I1104 12:07:40.134880   85759 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/embed-certs-325116/apiserver.key.36f6fb66
	I1104 12:07:40.134929   85759 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/embed-certs-325116/proxy-client.key
	I1104 12:07:40.135083   85759 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218.pem (1338 bytes)
	W1104 12:07:40.135124   85759 certs.go:480] ignoring /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218_empty.pem, impossibly tiny 0 bytes
	I1104 12:07:40.135140   85759 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem (1675 bytes)
	I1104 12:07:40.135225   85759 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem (1078 bytes)
	I1104 12:07:40.135281   85759 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem (1123 bytes)
	I1104 12:07:40.135315   85759 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem (1679 bytes)
	I1104 12:07:40.135380   85759 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem (1708 bytes)
	I1104 12:07:40.136240   85759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1104 12:07:40.179608   85759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1104 12:07:40.227851   85759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1104 12:07:40.255791   85759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1104 12:07:40.281672   85759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/embed-certs-325116/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1104 12:07:40.305960   85759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/embed-certs-325116/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1104 12:07:40.332465   85759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/embed-certs-325116/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1104 12:07:40.354950   85759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/embed-certs-325116/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1104 12:07:40.377476   85759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218.pem --> /usr/share/ca-certificates/27218.pem (1338 bytes)
	I1104 12:07:40.399291   85759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem --> /usr/share/ca-certificates/272182.pem (1708 bytes)
	I1104 12:07:40.420689   85759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1104 12:07:40.443610   85759 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1104 12:07:40.459706   85759 ssh_runner.go:195] Run: openssl version
	I1104 12:07:40.465244   85759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/27218.pem && ln -fs /usr/share/ca-certificates/27218.pem /etc/ssl/certs/27218.pem"
	I1104 12:07:40.475375   85759 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/27218.pem
	I1104 12:07:40.479676   85759 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  4 10:49 /usr/share/ca-certificates/27218.pem
	I1104 12:07:40.479748   85759 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/27218.pem
	I1104 12:07:40.485523   85759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/27218.pem /etc/ssl/certs/51391683.0"
	I1104 12:07:40.497163   85759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/272182.pem && ln -fs /usr/share/ca-certificates/272182.pem /etc/ssl/certs/272182.pem"
	I1104 12:07:40.509090   85759 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/272182.pem
	I1104 12:07:40.513617   85759 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  4 10:49 /usr/share/ca-certificates/272182.pem
	I1104 12:07:40.513685   85759 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/272182.pem
	I1104 12:07:40.519372   85759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/272182.pem /etc/ssl/certs/3ec20f2e.0"
	I1104 12:07:40.530944   85759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1104 12:07:40.542569   85759 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1104 12:07:40.546965   85759 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  4 10:38 /usr/share/ca-certificates/minikubeCA.pem
	I1104 12:07:40.547019   85759 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1104 12:07:40.552470   85759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1104 12:07:40.562456   85759 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1104 12:07:40.566967   85759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1104 12:07:40.572778   85759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1104 12:07:40.578409   85759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1104 12:07:40.584134   85759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1104 12:07:40.589880   85759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1104 12:07:40.595604   85759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
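The certificate plumbing above does two things with openssl: it links each CA bundle into /etc/ssl/certs under its subject-name hash (the 51391683.0 and b5213941.0 names) so OpenSSL's directory lookup can find it, and it runs `-checkend 86400` against the kubeadm-issued certs to confirm none expires within the next day. A small sketch of both checks; the certificate path is illustrative and creating the symlink for real needs root:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// subjectHash returns the short OpenSSL subject-name hash used to name
	// symlinks under /etc/ssl/certs (e.g. "b5213941").
	func subjectHash(certPath string) (string, error) {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	// expiresWithin reports whether the certificate expires within the next
	// `seconds` seconds; `openssl x509 -checkend` exits non-zero in that case.
	func expiresWithin(certPath string, seconds int) bool {
		err := exec.Command("openssl", "x509", "-noout", "-in", certPath,
			"-checkend", fmt.Sprint(seconds)).Run()
		return err != nil
	}

	func main() {
		cert := "/usr/share/ca-certificates/minikubeCA.pem" // illustrative path from the log

		if hash, err := subjectHash(cert); err == nil {
			link := filepath.Join("/etc/ssl/certs", hash+".0")
			fmt.Printf("would run: ln -fs %s %s\n", cert, link)
		} else {
			fmt.Fprintln(os.Stderr, err)
		}

		if expiresWithin(cert, 86400) {
			fmt.Println("certificate expires within 24h, would regenerate")
		}
	}
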
	I1104 12:07:40.601191   85759 kubeadm.go:392] StartCluster: {Name:embed-certs-325116 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.2 ClusterName:embed-certs-325116 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.47 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1104 12:07:40.601329   85759 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1104 12:07:40.601385   85759 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1104 12:07:40.642970   85759 cri.go:89] found id: ""
	I1104 12:07:40.643034   85759 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1104 12:07:40.653420   85759 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1104 12:07:40.653446   85759 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1104 12:07:40.653496   85759 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1104 12:07:40.663023   85759 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1104 12:07:40.664008   85759 kubeconfig.go:125] found "embed-certs-325116" server: "https://192.168.39.47:8443"
	I1104 12:07:40.665967   85759 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1104 12:07:40.675296   85759 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.47
	I1104 12:07:40.675324   85759 kubeadm.go:1160] stopping kube-system containers ...
	I1104 12:07:40.675336   85759 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1104 12:07:40.675384   85759 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1104 12:07:40.718457   85759 cri.go:89] found id: ""
	I1104 12:07:40.718543   85759 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1104 12:07:40.733875   85759 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1104 12:07:40.743811   85759 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1104 12:07:40.743835   85759 kubeadm.go:157] found existing configuration files:
	
	I1104 12:07:40.743889   85759 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1104 12:07:40.752987   85759 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1104 12:07:40.753048   85759 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1104 12:07:40.762296   85759 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1104 12:07:40.771048   85759 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1104 12:07:40.771112   85759 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1104 12:07:40.780163   85759 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1104 12:07:40.789500   85759 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1104 12:07:40.789566   85759 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1104 12:07:40.799200   85759 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1104 12:07:40.808061   85759 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1104 12:07:40.808121   85759 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1104 12:07:40.817445   85759 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1104 12:07:40.826803   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1104 12:07:40.934345   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1104 12:07:40.292591   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:40.293050   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | unable to find current IP address of domain default-k8s-diff-port-036892 in network mk-default-k8s-diff-port-036892
	I1104 12:07:40.293084   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | I1104 12:07:40.292988   87142 retry.go:31] will retry after 1.110341741s: waiting for machine to come up
	I1104 12:07:41.405407   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:41.405858   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | unable to find current IP address of domain default-k8s-diff-port-036892 in network mk-default-k8s-diff-port-036892
	I1104 12:07:41.405885   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | I1104 12:07:41.405800   87142 retry.go:31] will retry after 1.311587036s: waiting for machine to come up
	I1104 12:07:42.719157   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:42.719540   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | unable to find current IP address of domain default-k8s-diff-port-036892 in network mk-default-k8s-diff-port-036892
	I1104 12:07:42.719591   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | I1104 12:07:42.719530   87142 retry.go:31] will retry after 1.999866716s: waiting for machine to come up
	I1104 12:07:44.721872   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:44.722324   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | unable to find current IP address of domain default-k8s-diff-port-036892 in network mk-default-k8s-diff-port-036892
	I1104 12:07:44.722351   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | I1104 12:07:44.722278   87142 retry.go:31] will retry after 2.895241769s: waiting for machine to come up
	I1104 12:07:41.512710   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1104 12:07:41.729355   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1104 12:07:41.807064   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
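Rather than a full `kubeadm init`, the restart path above replays individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the generated /var/tmp/minikube/kubeadm.yaml. A sketch of that loop, with the binary and config paths copied from the log; running it for real requires root and the versioned kubeadm binary:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		const (
			kubeadm = "/var/lib/minikube/binaries/v1.31.2/kubeadm"
			config  = "/var/tmp/minikube/kubeadm.yaml"
		)

		// The same phase order the log above replays on restart.
		phases := [][]string{
			{"certs", "all"},
			{"kubeconfig", "all"},
			{"kubelet-start"},
			{"control-plane", "all"},
			{"etcd", "local"},
		}

		for _, phase := range phases {
			args := append([]string{"init", "phase"}, phase...)
			args = append(args, "--config", config)
			cmd := exec.Command(kubeadm, args...)
			cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
			fmt.Println("running:", kubeadm, args)
			if err := cmd.Run(); err != nil {
				fmt.Fprintf(os.Stderr, "phase %v failed: %v\n", phase, err)
				return
			}
		}
	}
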
	I1104 12:07:41.888493   85759 api_server.go:52] waiting for apiserver process to appear ...
	I1104 12:07:41.888593   85759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:07:42.389674   85759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:07:42.889373   85759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:07:43.389705   85759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:07:43.889548   85759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:07:43.924248   85759 api_server.go:72] duration metric: took 2.035753888s to wait for apiserver process to appear ...
	I1104 12:07:43.924277   85759 api_server.go:88] waiting for apiserver healthz status ...
	I1104 12:07:43.924320   85759 api_server.go:253] Checking apiserver healthz at https://192.168.39.47:8443/healthz ...
	I1104 12:07:43.924831   85759 api_server.go:269] stopped: https://192.168.39.47:8443/healthz: Get "https://192.168.39.47:8443/healthz": dial tcp 192.168.39.47:8443: connect: connection refused
	I1104 12:07:44.424651   85759 api_server.go:253] Checking apiserver healthz at https://192.168.39.47:8443/healthz ...
	I1104 12:07:47.043002   85759 api_server.go:279] https://192.168.39.47:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1104 12:07:47.043037   85759 api_server.go:103] status: https://192.168.39.47:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1104 12:07:47.043054   85759 api_server.go:253] Checking apiserver healthz at https://192.168.39.47:8443/healthz ...
	I1104 12:07:47.104246   85759 api_server.go:279] https://192.168.39.47:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1104 12:07:47.104276   85759 api_server.go:103] status: https://192.168.39.47:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1104 12:07:47.424506   85759 api_server.go:253] Checking apiserver healthz at https://192.168.39.47:8443/healthz ...
	I1104 12:07:47.430506   85759 api_server.go:279] https://192.168.39.47:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1104 12:07:47.430544   85759 api_server.go:103] status: https://192.168.39.47:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1104 12:07:47.924409   85759 api_server.go:253] Checking apiserver healthz at https://192.168.39.47:8443/healthz ...
	I1104 12:07:47.937055   85759 api_server.go:279] https://192.168.39.47:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1104 12:07:47.937083   85759 api_server.go:103] status: https://192.168.39.47:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1104 12:07:48.424568   85759 api_server.go:253] Checking apiserver healthz at https://192.168.39.47:8443/healthz ...
	I1104 12:07:48.428850   85759 api_server.go:279] https://192.168.39.47:8443/healthz returned 200:
	ok
	I1104 12:07:48.436388   85759 api_server.go:141] control plane version: v1.31.2
	I1104 12:07:48.436411   85759 api_server.go:131] duration metric: took 4.512127349s to wait for apiserver health ...
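The /healthz probe above keeps retrying while the apiserver works through its post-start hooks, treating the early 403 (anonymous access before the RBAC bootstrap roles exist) and 500 (rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes still failing) responses as transient, and stops at the first 200 ok. A minimal poller in the same spirit; the insecure TLS setting mirrors an unauthenticated probe of a self-signed endpoint, and the interval and overall timeout are assumptions:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz polls the apiserver's /healthz endpoint until it returns 200
	// or the deadline passes; any other outcome (403, 500, connection refused) is
	// treated as "not ready yet".
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // self-signed apiserver cert
			},
		}
		deadline := time.Now().Add(timeout)
		for {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, string(body))
			} else {
				fmt.Println("healthz not reachable yet:", err)
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("apiserver never became healthy within %v", timeout)
			}
			time.Sleep(500 * time.Millisecond)
		}
	}

	func main() {
		if err := waitForHealthz("https://192.168.39.47:8443/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}
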
	I1104 12:07:48.436420   85759 cni.go:84] Creating CNI manager for ""
	I1104 12:07:48.436427   85759 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1104 12:07:48.438220   85759 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1104 12:07:48.439495   85759 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1104 12:07:48.449650   85759 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1104 12:07:48.467313   85759 system_pods.go:43] waiting for kube-system pods to appear ...
	I1104 12:07:48.480777   85759 system_pods.go:59] 8 kube-system pods found
	I1104 12:07:48.480823   85759 system_pods.go:61] "coredns-7c65d6cfc9-mf8xg" [c0162005-7971-4161-9575-9f36c13d54f2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1104 12:07:48.480834   85759 system_pods.go:61] "etcd-embed-certs-325116" [4cfeeefb-d7e4-48b6-bea0-e9d967750770] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1104 12:07:48.480845   85759 system_pods.go:61] "kube-apiserver-embed-certs-325116" [69ad8209-af11-4479-b4f7-9991f98d74b9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1104 12:07:48.480859   85759 system_pods.go:61] "kube-controller-manager-embed-certs-325116" [1ba1fbaf-e1e2-4ca7-aef5-84c4410143c4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1104 12:07:48.480876   85759 system_pods.go:61] "kube-proxy-phzgx" [4ea64f2c-7568-486d-9941-f89ed4221f35] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1104 12:07:48.480893   85759 system_pods.go:61] "kube-scheduler-embed-certs-325116" [168359e4-eda1-4fb6-ab45-03e888466702] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1104 12:07:48.480907   85759 system_pods.go:61] "metrics-server-6867b74b74-knfd4" [5b3ef856-5b69-44b1-ae29-4a58bf235e41] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1104 12:07:48.480916   85759 system_pods.go:61] "storage-provisioner" [0dabcf5a-028b-4ab6-8af4-be25abaeb9b5] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1104 12:07:48.480928   85759 system_pods.go:74] duration metric: took 13.592864ms to wait for pod list to return data ...
	I1104 12:07:48.480947   85759 node_conditions.go:102] verifying NodePressure condition ...
	I1104 12:07:48.487234   85759 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1104 12:07:48.487271   85759 node_conditions.go:123] node cpu capacity is 2
	I1104 12:07:48.487284   85759 node_conditions.go:105] duration metric: took 6.331259ms to run NodePressure ...
	I1104 12:07:48.487313   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1104 12:07:48.756654   85759 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1104 12:07:48.764840   85759 kubeadm.go:739] kubelet initialised
	I1104 12:07:48.764863   85759 kubeadm.go:740] duration metric: took 8.175857ms waiting for restarted kubelet to initialise ...
	I1104 12:07:48.764871   85759 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1104 12:07:48.772653   85759 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-mf8xg" in "kube-system" namespace to be "Ready" ...
	I1104 12:07:48.784158   85759 pod_ready.go:98] node "embed-certs-325116" hosting pod "coredns-7c65d6cfc9-mf8xg" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-325116" has status "Ready":"False"
	I1104 12:07:48.784198   85759 pod_ready.go:82] duration metric: took 11.515605ms for pod "coredns-7c65d6cfc9-mf8xg" in "kube-system" namespace to be "Ready" ...
	E1104 12:07:48.784211   85759 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-325116" hosting pod "coredns-7c65d6cfc9-mf8xg" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-325116" has status "Ready":"False"
	I1104 12:07:48.784220   85759 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-325116" in "kube-system" namespace to be "Ready" ...
	I1104 12:07:48.791264   85759 pod_ready.go:98] node "embed-certs-325116" hosting pod "etcd-embed-certs-325116" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-325116" has status "Ready":"False"
	I1104 12:07:48.791297   85759 pod_ready.go:82] duration metric: took 7.066247ms for pod "etcd-embed-certs-325116" in "kube-system" namespace to be "Ready" ...
	E1104 12:07:48.791310   85759 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-325116" hosting pod "etcd-embed-certs-325116" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-325116" has status "Ready":"False"
	I1104 12:07:48.791326   85759 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-325116" in "kube-system" namespace to be "Ready" ...
	I1104 12:07:48.798259   85759 pod_ready.go:98] node "embed-certs-325116" hosting pod "kube-apiserver-embed-certs-325116" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-325116" has status "Ready":"False"
	I1104 12:07:48.798294   85759 pod_ready.go:82] duration metric: took 6.954559ms for pod "kube-apiserver-embed-certs-325116" in "kube-system" namespace to be "Ready" ...
	E1104 12:07:48.798304   85759 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-325116" hosting pod "kube-apiserver-embed-certs-325116" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-325116" has status "Ready":"False"
	I1104 12:07:48.798312   85759 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-325116" in "kube-system" namespace to be "Ready" ...
	I1104 12:07:48.872019   85759 pod_ready.go:98] node "embed-certs-325116" hosting pod "kube-controller-manager-embed-certs-325116" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-325116" has status "Ready":"False"
	I1104 12:07:48.872058   85759 pod_ready.go:82] duration metric: took 73.723761ms for pod "kube-controller-manager-embed-certs-325116" in "kube-system" namespace to be "Ready" ...
	E1104 12:07:48.872069   85759 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-325116" hosting pod "kube-controller-manager-embed-certs-325116" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-325116" has status "Ready":"False"
	I1104 12:07:48.872075   85759 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-phzgx" in "kube-system" namespace to be "Ready" ...
	I1104 12:07:49.271210   85759 pod_ready.go:98] node "embed-certs-325116" hosting pod "kube-proxy-phzgx" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-325116" has status "Ready":"False"
	I1104 12:07:49.271252   85759 pod_ready.go:82] duration metric: took 399.167509ms for pod "kube-proxy-phzgx" in "kube-system" namespace to be "Ready" ...
	E1104 12:07:49.271264   85759 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-325116" hosting pod "kube-proxy-phzgx" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-325116" has status "Ready":"False"
	I1104 12:07:49.271272   85759 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-325116" in "kube-system" namespace to be "Ready" ...
	I1104 12:07:49.671430   85759 pod_ready.go:98] node "embed-certs-325116" hosting pod "kube-scheduler-embed-certs-325116" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-325116" has status "Ready":"False"
	I1104 12:07:49.671453   85759 pod_ready.go:82] duration metric: took 400.174495ms for pod "kube-scheduler-embed-certs-325116" in "kube-system" namespace to be "Ready" ...
	E1104 12:07:49.671469   85759 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-325116" hosting pod "kube-scheduler-embed-certs-325116" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-325116" has status "Ready":"False"
	I1104 12:07:49.671475   85759 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace to be "Ready" ...
	I1104 12:07:50.070546   85759 pod_ready.go:98] node "embed-certs-325116" hosting pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-325116" has status "Ready":"False"
	I1104 12:07:50.070576   85759 pod_ready.go:82] duration metric: took 399.092108ms for pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace to be "Ready" ...
	E1104 12:07:50.070587   85759 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-325116" hosting pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-325116" has status "Ready":"False"
	I1104 12:07:50.070596   85759 pod_ready.go:39] duration metric: took 1.305717298s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
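Every system pod above is skipped because the node itself has not yet reported Ready after the restart. A minimal sketch for watching the same condition from outside the test, assuming minikube has written a kubeconfig context named after the profile, would be:

	kubectl --context embed-certs-325116 get nodes
	kubectl --context embed-certs-325116 -n kube-system get pods -o wide
	kubectl --context embed-certs-325116 wait --for=condition=Ready node/embed-certs-325116 --timeout=4m0s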
	I1104 12:07:50.070615   85759 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1104 12:07:50.082815   85759 ops.go:34] apiserver oom_adj: -16
	I1104 12:07:50.082839   85759 kubeadm.go:597] duration metric: took 9.429385589s to restartPrimaryControlPlane
	I1104 12:07:50.082850   85759 kubeadm.go:394] duration metric: took 9.481667011s to StartCluster
	I1104 12:07:50.082871   85759 settings.go:142] acquiring lock: {Name:mk5550833c1f1a4ab4fbb2bf42ff8bd7f6341220 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 12:07:50.082952   85759 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19906-19898/kubeconfig
	I1104 12:07:50.086014   85759 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19906-19898/kubeconfig: {Name:mk164657c178e6abe086acb5c1cadb969cb2b0cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 12:07:50.086562   85759 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.47 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1104 12:07:50.086628   85759 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1104 12:07:50.086740   85759 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-325116"
	I1104 12:07:50.086763   85759 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-325116"
	I1104 12:07:50.086765   85759 config.go:182] Loaded profile config "embed-certs-325116": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	W1104 12:07:50.086776   85759 addons.go:243] addon storage-provisioner should already be in state true
	I1104 12:07:50.086774   85759 addons.go:69] Setting default-storageclass=true in profile "embed-certs-325116"
	I1104 12:07:50.086803   85759 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-325116"
	I1104 12:07:50.086787   85759 addons.go:69] Setting metrics-server=true in profile "embed-certs-325116"
	I1104 12:07:50.086812   85759 host.go:66] Checking if "embed-certs-325116" exists ...
	I1104 12:07:50.086825   85759 addons.go:234] Setting addon metrics-server=true in "embed-certs-325116"
	W1104 12:07:50.086837   85759 addons.go:243] addon metrics-server should already be in state true
	I1104 12:07:50.086866   85759 host.go:66] Checking if "embed-certs-325116" exists ...
	I1104 12:07:50.087120   85759 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:07:50.087148   85759 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:07:50.087160   85759 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:07:50.087178   85759 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:07:50.087247   85759 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:07:50.087286   85759 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:07:50.088320   85759 out.go:177] * Verifying Kubernetes components...
	I1104 12:07:50.089814   85759 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1104 12:07:50.102796   85759 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44903
	I1104 12:07:50.102976   85759 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36761
	I1104 12:07:50.103076   85759 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42697
	I1104 12:07:50.103462   85759 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:07:50.103491   85759 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:07:50.103566   85759 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:07:50.103990   85759 main.go:141] libmachine: Using API Version  1
	I1104 12:07:50.104014   85759 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:07:50.104085   85759 main.go:141] libmachine: Using API Version  1
	I1104 12:07:50.104101   85759 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:07:50.104199   85759 main.go:141] libmachine: Using API Version  1
	I1104 12:07:50.104223   85759 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:07:50.104368   85759 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:07:50.104402   85759 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:07:50.104545   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetState
	I1104 12:07:50.104559   85759 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:07:50.104949   85759 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:07:50.104987   85759 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:07:50.105081   85759 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:07:50.105116   85759 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:07:50.108134   85759 addons.go:234] Setting addon default-storageclass=true in "embed-certs-325116"
	W1104 12:07:50.108161   85759 addons.go:243] addon default-storageclass should already be in state true
	I1104 12:07:50.108193   85759 host.go:66] Checking if "embed-certs-325116" exists ...
	I1104 12:07:50.108597   85759 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:07:50.108648   85759 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:07:50.121556   85759 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39445
	I1104 12:07:50.122038   85759 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:07:50.122504   85759 main.go:141] libmachine: Using API Version  1
	I1104 12:07:50.122527   85759 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:07:50.122869   85759 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:07:50.123107   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetState
	I1104 12:07:50.125142   85759 main.go:141] libmachine: (embed-certs-325116) Calling .DriverName
	I1104 12:07:50.125294   85759 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44771
	I1104 12:07:50.125613   85759 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:07:50.125972   85759 main.go:141] libmachine: Using API Version  1
	I1104 12:07:50.125988   85759 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:07:50.126279   85759 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:07:50.126399   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetState
	I1104 12:07:50.127256   85759 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1104 12:07:50.127993   85759 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40487
	I1104 12:07:50.128235   85759 main.go:141] libmachine: (embed-certs-325116) Calling .DriverName
	I1104 12:07:50.128597   85759 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:07:50.128843   85759 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1104 12:07:50.128864   85759 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1104 12:07:50.128883   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHHostname
	I1104 12:07:50.129066   85759 main.go:141] libmachine: Using API Version  1
	I1104 12:07:50.129088   85759 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:07:50.129389   85759 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:07:50.129882   85759 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1104 12:07:47.619516   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:47.620045   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | unable to find current IP address of domain default-k8s-diff-port-036892 in network mk-default-k8s-diff-port-036892
	I1104 12:07:47.620072   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | I1104 12:07:47.620000   87142 retry.go:31] will retry after 3.554669963s: waiting for machine to come up
	I1104 12:07:50.130127   85759 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:07:50.130187   85759 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:07:50.131115   85759 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1104 12:07:50.131134   85759 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1104 12:07:50.131154   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHHostname
	I1104 12:07:50.131899   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:50.132352   85759 main.go:141] libmachine: (embed-certs-325116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:ab:49", ip: ""} in network mk-embed-certs-325116: {Iface:virbr1 ExpiryTime:2024-11-04 13:07:28 +0000 UTC Type:0 Mac:52:54:00:bd:ab:49 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:embed-certs-325116 Clientid:01:52:54:00:bd:ab:49}
	I1104 12:07:50.132375   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined IP address 192.168.39.47 and MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:50.132664   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHPort
	I1104 12:07:50.132830   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHKeyPath
	I1104 12:07:50.132986   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHUsername
	I1104 12:07:50.133099   85759 sshutil.go:53] new ssh client: &{IP:192.168.39.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/embed-certs-325116/id_rsa Username:docker}
	I1104 12:07:50.134698   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:50.135217   85759 main.go:141] libmachine: (embed-certs-325116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:ab:49", ip: ""} in network mk-embed-certs-325116: {Iface:virbr1 ExpiryTime:2024-11-04 13:07:28 +0000 UTC Type:0 Mac:52:54:00:bd:ab:49 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:embed-certs-325116 Clientid:01:52:54:00:bd:ab:49}
	I1104 12:07:50.135246   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined IP address 192.168.39.47 and MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:50.135454   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHPort
	I1104 12:07:50.135629   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHKeyPath
	I1104 12:07:50.135765   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHUsername
	I1104 12:07:50.135908   85759 sshutil.go:53] new ssh client: &{IP:192.168.39.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/embed-certs-325116/id_rsa Username:docker}
	I1104 12:07:50.146618   85759 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37765
	I1104 12:07:50.147639   85759 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:07:50.148281   85759 main.go:141] libmachine: Using API Version  1
	I1104 12:07:50.148307   85759 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:07:50.148617   85759 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:07:50.148860   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetState
	I1104 12:07:50.150751   85759 main.go:141] libmachine: (embed-certs-325116) Calling .DriverName
	I1104 12:07:50.151010   85759 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1104 12:07:50.151028   85759 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1104 12:07:50.151050   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHHostname
	I1104 12:07:50.153947   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:50.154385   85759 main.go:141] libmachine: (embed-certs-325116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:ab:49", ip: ""} in network mk-embed-certs-325116: {Iface:virbr1 ExpiryTime:2024-11-04 13:07:28 +0000 UTC Type:0 Mac:52:54:00:bd:ab:49 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:embed-certs-325116 Clientid:01:52:54:00:bd:ab:49}
	I1104 12:07:50.154418   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined IP address 192.168.39.47 and MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:50.154560   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHPort
	I1104 12:07:50.154749   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHKeyPath
	I1104 12:07:50.154886   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHUsername
	I1104 12:07:50.155028   85759 sshutil.go:53] new ssh client: &{IP:192.168.39.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/embed-certs-325116/id_rsa Username:docker}
	I1104 12:07:50.278380   85759 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1104 12:07:50.294682   85759 node_ready.go:35] waiting up to 6m0s for node "embed-certs-325116" to be "Ready" ...
	I1104 12:07:50.355769   85759 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1104 12:07:50.355790   85759 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1104 12:07:50.375818   85759 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1104 12:07:50.404741   85759 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1104 12:07:50.404766   85759 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1104 12:07:50.466718   85759 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1104 12:07:50.466748   85759 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1104 12:07:50.493662   85759 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1104 12:07:50.503255   85759 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1104 12:07:50.799735   85759 main.go:141] libmachine: Making call to close driver server
	I1104 12:07:50.799772   85759 main.go:141] libmachine: (embed-certs-325116) Calling .Close
	I1104 12:07:50.800039   85759 main.go:141] libmachine: Successfully made call to close driver server
	I1104 12:07:50.800086   85759 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 12:07:50.800094   85759 main.go:141] libmachine: (embed-certs-325116) DBG | Closing plugin on server side
	I1104 12:07:50.800107   85759 main.go:141] libmachine: Making call to close driver server
	I1104 12:07:50.800159   85759 main.go:141] libmachine: (embed-certs-325116) Calling .Close
	I1104 12:07:50.800382   85759 main.go:141] libmachine: Successfully made call to close driver server
	I1104 12:07:50.800394   85759 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 12:07:50.810559   85759 main.go:141] libmachine: Making call to close driver server
	I1104 12:07:50.810586   85759 main.go:141] libmachine: (embed-certs-325116) Calling .Close
	I1104 12:07:50.810857   85759 main.go:141] libmachine: Successfully made call to close driver server
	I1104 12:07:50.810876   85759 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 12:07:50.810893   85759 main.go:141] libmachine: (embed-certs-325116) DBG | Closing plugin on server side
	I1104 12:07:51.484326   85759 main.go:141] libmachine: Making call to close driver server
	I1104 12:07:51.484356   85759 main.go:141] libmachine: (embed-certs-325116) Calling .Close
	I1104 12:07:51.484671   85759 main.go:141] libmachine: Successfully made call to close driver server
	I1104 12:07:51.484687   85759 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 12:07:51.484695   85759 main.go:141] libmachine: Making call to close driver server
	I1104 12:07:51.484702   85759 main.go:141] libmachine: (embed-certs-325116) Calling .Close
	I1104 12:07:51.484899   85759 main.go:141] libmachine: Successfully made call to close driver server
	I1104 12:07:51.484938   85759 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 12:07:51.484950   85759 addons.go:475] Verifying addon metrics-server=true in "embed-certs-325116"
	I1104 12:07:51.549507   85759 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.046214827s)
	I1104 12:07:51.549559   85759 main.go:141] libmachine: Making call to close driver server
	I1104 12:07:51.549569   85759 main.go:141] libmachine: (embed-certs-325116) Calling .Close
	I1104 12:07:51.549886   85759 main.go:141] libmachine: Successfully made call to close driver server
	I1104 12:07:51.549906   85759 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 12:07:51.549909   85759 main.go:141] libmachine: (embed-certs-325116) DBG | Closing plugin on server side
	I1104 12:07:51.549916   85759 main.go:141] libmachine: Making call to close driver server
	I1104 12:07:51.549923   85759 main.go:141] libmachine: (embed-certs-325116) Calling .Close
	I1104 12:07:51.550143   85759 main.go:141] libmachine: Successfully made call to close driver server
	I1104 12:07:51.550164   85759 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 12:07:51.552039   85759 out.go:177] * Enabled addons: default-storageclass, metrics-server, storage-provisioner
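With the three addons reported as enabled, a quick sanity check that metrics-server actually comes up (it is still Pending in the pod list above) could look like the following; the deployment name is inferred from the pod name metrics-server-6867b74b74-knfd4, and the commands are illustrative, not part of this run:

	minikube -p embed-certs-325116 addons list
	kubectl --context embed-certs-325116 -n kube-system rollout status deployment/metrics-server --timeout=2m
	kubectl --context embed-certs-325116 top nodes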
	I1104 12:07:52.573915   86402 start.go:364] duration metric: took 3m30.781955626s to acquireMachinesLock for "old-k8s-version-589257"
	I1104 12:07:52.573984   86402 start.go:96] Skipping create...Using existing machine configuration
	I1104 12:07:52.573996   86402 fix.go:54] fixHost starting: 
	I1104 12:07:52.574443   86402 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:07:52.574500   86402 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:07:52.594310   86402 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33975
	I1104 12:07:52.594822   86402 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:07:52.595317   86402 main.go:141] libmachine: Using API Version  1
	I1104 12:07:52.595347   86402 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:07:52.595727   86402 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:07:52.595924   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .DriverName
	I1104 12:07:52.596093   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetState
	I1104 12:07:52.597578   86402 fix.go:112] recreateIfNeeded on old-k8s-version-589257: state=Stopped err=<nil>
	I1104 12:07:52.597615   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .DriverName
	W1104 12:07:52.597752   86402 fix.go:138] unexpected machine state, will restart: <nil>
	I1104 12:07:52.599659   86402 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-589257" ...
	I1104 12:07:51.176791   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:51.177282   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Found IP for machine: 192.168.72.130
	I1104 12:07:51.177313   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has current primary IP address 192.168.72.130 and MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:51.177325   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Reserving static IP address...
	I1104 12:07:51.177817   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-036892", mac: "52:54:00:da:02:d6", ip: "192.168.72.130"} in network mk-default-k8s-diff-port-036892: {Iface:virbr4 ExpiryTime:2024-11-04 13:07:45 +0000 UTC Type:0 Mac:52:54:00:da:02:d6 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:default-k8s-diff-port-036892 Clientid:01:52:54:00:da:02:d6}
	I1104 12:07:51.177863   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | skip adding static IP to network mk-default-k8s-diff-port-036892 - found existing host DHCP lease matching {name: "default-k8s-diff-port-036892", mac: "52:54:00:da:02:d6", ip: "192.168.72.130"}
	I1104 12:07:51.177876   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Reserved static IP address: 192.168.72.130
	I1104 12:07:51.177890   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Waiting for SSH to be available...
	I1104 12:07:51.177897   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | Getting to WaitForSSH function...
	I1104 12:07:51.180038   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:51.180440   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:02:d6", ip: ""} in network mk-default-k8s-diff-port-036892: {Iface:virbr4 ExpiryTime:2024-11-04 13:07:45 +0000 UTC Type:0 Mac:52:54:00:da:02:d6 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:default-k8s-diff-port-036892 Clientid:01:52:54:00:da:02:d6}
	I1104 12:07:51.180466   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined IP address 192.168.72.130 and MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:51.180581   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | Using SSH client type: external
	I1104 12:07:51.180611   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | Using SSH private key: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/default-k8s-diff-port-036892/id_rsa (-rw-------)
	I1104 12:07:51.180747   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.130 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19906-19898/.minikube/machines/default-k8s-diff-port-036892/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1104 12:07:51.180777   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | About to run SSH command:
	I1104 12:07:51.180795   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | exit 0
	I1104 12:07:51.309075   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | SSH cmd err, output: <nil>: 
	I1104 12:07:51.309445   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetConfigRaw
	I1104 12:07:51.310162   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetIP
	I1104 12:07:51.312651   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:51.313061   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:02:d6", ip: ""} in network mk-default-k8s-diff-port-036892: {Iface:virbr4 ExpiryTime:2024-11-04 13:07:45 +0000 UTC Type:0 Mac:52:54:00:da:02:d6 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:default-k8s-diff-port-036892 Clientid:01:52:54:00:da:02:d6}
	I1104 12:07:51.313090   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined IP address 192.168.72.130 and MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:51.313460   86301 profile.go:143] Saving config to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/default-k8s-diff-port-036892/config.json ...
	I1104 12:07:51.313720   86301 machine.go:93] provisionDockerMachine start ...
	I1104 12:07:51.313747   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .DriverName
	I1104 12:07:51.313926   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHHostname
	I1104 12:07:51.316269   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:51.316782   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:02:d6", ip: ""} in network mk-default-k8s-diff-port-036892: {Iface:virbr4 ExpiryTime:2024-11-04 13:07:45 +0000 UTC Type:0 Mac:52:54:00:da:02:d6 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:default-k8s-diff-port-036892 Clientid:01:52:54:00:da:02:d6}
	I1104 12:07:51.316829   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined IP address 192.168.72.130 and MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:51.316937   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHPort
	I1104 12:07:51.317162   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHKeyPath
	I1104 12:07:51.317335   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHKeyPath
	I1104 12:07:51.317598   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHUsername
	I1104 12:07:51.317777   86301 main.go:141] libmachine: Using SSH client type: native
	I1104 12:07:51.317981   86301 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.130 22 <nil> <nil>}
	I1104 12:07:51.317994   86301 main.go:141] libmachine: About to run SSH command:
	hostname
	I1104 12:07:51.441588   86301 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1104 12:07:51.441626   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetMachineName
	I1104 12:07:51.441876   86301 buildroot.go:166] provisioning hostname "default-k8s-diff-port-036892"
	I1104 12:07:51.441902   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetMachineName
	I1104 12:07:51.442097   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHHostname
	I1104 12:07:51.445155   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:51.445637   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:02:d6", ip: ""} in network mk-default-k8s-diff-port-036892: {Iface:virbr4 ExpiryTime:2024-11-04 13:07:45 +0000 UTC Type:0 Mac:52:54:00:da:02:d6 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:default-k8s-diff-port-036892 Clientid:01:52:54:00:da:02:d6}
	I1104 12:07:51.445670   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined IP address 192.168.72.130 and MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:51.445820   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHPort
	I1104 12:07:51.446013   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHKeyPath
	I1104 12:07:51.446186   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHKeyPath
	I1104 12:07:51.446352   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHUsername
	I1104 12:07:51.446539   86301 main.go:141] libmachine: Using SSH client type: native
	I1104 12:07:51.446753   86301 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.130 22 <nil> <nil>}
	I1104 12:07:51.446773   86301 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-036892 && echo "default-k8s-diff-port-036892" | sudo tee /etc/hostname
	I1104 12:07:51.578973   86301 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-036892
	
	I1104 12:07:51.579004   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHHostname
	I1104 12:07:51.581759   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:51.582105   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:02:d6", ip: ""} in network mk-default-k8s-diff-port-036892: {Iface:virbr4 ExpiryTime:2024-11-04 13:07:45 +0000 UTC Type:0 Mac:52:54:00:da:02:d6 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:default-k8s-diff-port-036892 Clientid:01:52:54:00:da:02:d6}
	I1104 12:07:51.582135   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined IP address 192.168.72.130 and MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:51.582299   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHPort
	I1104 12:07:51.582455   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHKeyPath
	I1104 12:07:51.582582   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHKeyPath
	I1104 12:07:51.582712   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHUsername
	I1104 12:07:51.582834   86301 main.go:141] libmachine: Using SSH client type: native
	I1104 12:07:51.583006   86301 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.130 22 <nil> <nil>}
	I1104 12:07:51.583022   86301 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-036892' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-036892/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-036892' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1104 12:07:51.702410   86301 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1104 12:07:51.702441   86301 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19906-19898/.minikube CaCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19906-19898/.minikube}
	I1104 12:07:51.702471   86301 buildroot.go:174] setting up certificates
	I1104 12:07:51.702483   86301 provision.go:84] configureAuth start
	I1104 12:07:51.702492   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetMachineName
	I1104 12:07:51.702789   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetIP
	I1104 12:07:51.705067   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:51.705419   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:02:d6", ip: ""} in network mk-default-k8s-diff-port-036892: {Iface:virbr4 ExpiryTime:2024-11-04 13:07:45 +0000 UTC Type:0 Mac:52:54:00:da:02:d6 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:default-k8s-diff-port-036892 Clientid:01:52:54:00:da:02:d6}
	I1104 12:07:51.705449   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined IP address 192.168.72.130 and MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:51.705567   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHHostname
	I1104 12:07:51.707341   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:51.707627   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:02:d6", ip: ""} in network mk-default-k8s-diff-port-036892: {Iface:virbr4 ExpiryTime:2024-11-04 13:07:45 +0000 UTC Type:0 Mac:52:54:00:da:02:d6 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:default-k8s-diff-port-036892 Clientid:01:52:54:00:da:02:d6}
	I1104 12:07:51.707658   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined IP address 192.168.72.130 and MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:51.707748   86301 provision.go:143] copyHostCerts
	I1104 12:07:51.707805   86301 exec_runner.go:144] found /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem, removing ...
	I1104 12:07:51.707818   86301 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem
	I1104 12:07:51.707870   86301 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem (1078 bytes)
	I1104 12:07:51.707969   86301 exec_runner.go:144] found /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem, removing ...
	I1104 12:07:51.707978   86301 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem
	I1104 12:07:51.707999   86301 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem (1123 bytes)
	I1104 12:07:51.708061   86301 exec_runner.go:144] found /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem, removing ...
	I1104 12:07:51.708067   86301 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem
	I1104 12:07:51.708085   86301 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem (1679 bytes)
	I1104 12:07:51.708132   86301 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-036892 san=[127.0.0.1 192.168.72.130 default-k8s-diff-port-036892 localhost minikube]
	I1104 12:07:51.935898   86301 provision.go:177] copyRemoteCerts
	I1104 12:07:51.935973   86301 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1104 12:07:51.936008   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHHostname
	I1104 12:07:51.938722   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:51.939100   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:02:d6", ip: ""} in network mk-default-k8s-diff-port-036892: {Iface:virbr4 ExpiryTime:2024-11-04 13:07:45 +0000 UTC Type:0 Mac:52:54:00:da:02:d6 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:default-k8s-diff-port-036892 Clientid:01:52:54:00:da:02:d6}
	I1104 12:07:51.939134   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined IP address 192.168.72.130 and MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:51.939266   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHPort
	I1104 12:07:51.939462   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHKeyPath
	I1104 12:07:51.939609   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHUsername
	I1104 12:07:51.939786   86301 sshutil.go:53] new ssh client: &{IP:192.168.72.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/default-k8s-diff-port-036892/id_rsa Username:docker}
	I1104 12:07:52.027147   86301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1104 12:07:52.054828   86301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1104 12:07:52.078755   86301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1104 12:07:52.101312   86301 provision.go:87] duration metric: took 398.817409ms to configureAuth
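configureAuth only logs the byte counts of the copied PEM files. To confirm that the regenerated server certificate carries the SANs listed above (127.0.0.1, 192.168.72.130, the hostname, localhost, minikube), one could inspect it on the guest; this is a sketch under the same profile-name assumption, not part of the log:

	minikube ssh -p default-k8s-diff-port-036892 -- sudo openssl x509 -in /etc/docker/server.pem -noout -text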
	I1104 12:07:52.101338   86301 buildroot.go:189] setting minikube options for container-runtime
	I1104 12:07:52.101523   86301 config.go:182] Loaded profile config "default-k8s-diff-port-036892": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1104 12:07:52.101608   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHHostname
	I1104 12:07:52.104187   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:52.104549   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:02:d6", ip: ""} in network mk-default-k8s-diff-port-036892: {Iface:virbr4 ExpiryTime:2024-11-04 13:07:45 +0000 UTC Type:0 Mac:52:54:00:da:02:d6 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:default-k8s-diff-port-036892 Clientid:01:52:54:00:da:02:d6}
	I1104 12:07:52.104581   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined IP address 192.168.72.130 and MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:52.104700   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHPort
	I1104 12:07:52.104890   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHKeyPath
	I1104 12:07:52.105028   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHKeyPath
	I1104 12:07:52.105157   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHUsername
	I1104 12:07:52.105319   86301 main.go:141] libmachine: Using SSH client type: native
	I1104 12:07:52.105490   86301 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.130 22 <nil> <nil>}
	I1104 12:07:52.105514   86301 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1104 12:07:52.331840   86301 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1104 12:07:52.331865   86301 machine.go:96] duration metric: took 1.018128337s to provisionDockerMachine
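The insecure-registry range is written to /etc/sysconfig/crio.minikube and CRI-O is restarted in a single SSH command just above. A minimal sketch for verifying the result on the guest, again assuming the profile name from this run, would be:

	minikube ssh -p default-k8s-diff-port-036892 -- cat /etc/sysconfig/crio.minikube
	minikube ssh -p default-k8s-diff-port-036892 -- sudo systemctl is-active crio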
	I1104 12:07:52.331875   86301 start.go:293] postStartSetup for "default-k8s-diff-port-036892" (driver="kvm2")
	I1104 12:07:52.331884   86301 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1104 12:07:52.331898   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .DriverName
	I1104 12:07:52.332229   86301 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1104 12:07:52.332261   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHHostname
	I1104 12:07:52.334710   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:52.335005   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:02:d6", ip: ""} in network mk-default-k8s-diff-port-036892: {Iface:virbr4 ExpiryTime:2024-11-04 13:07:45 +0000 UTC Type:0 Mac:52:54:00:da:02:d6 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:default-k8s-diff-port-036892 Clientid:01:52:54:00:da:02:d6}
	I1104 12:07:52.335036   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined IP address 192.168.72.130 and MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:52.335176   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHPort
	I1104 12:07:52.335342   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHKeyPath
	I1104 12:07:52.335447   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHUsername
	I1104 12:07:52.335547   86301 sshutil.go:53] new ssh client: &{IP:192.168.72.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/default-k8s-diff-port-036892/id_rsa Username:docker}
	I1104 12:07:52.419392   86301 ssh_runner.go:195] Run: cat /etc/os-release
	I1104 12:07:52.423306   86301 info.go:137] Remote host: Buildroot 2023.02.9
	I1104 12:07:52.423335   86301 filesync.go:126] Scanning /home/jenkins/minikube-integration/19906-19898/.minikube/addons for local assets ...
	I1104 12:07:52.423396   86301 filesync.go:126] Scanning /home/jenkins/minikube-integration/19906-19898/.minikube/files for local assets ...
	I1104 12:07:52.423483   86301 filesync.go:149] local asset: /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem -> 272182.pem in /etc/ssl/certs
	I1104 12:07:52.423575   86301 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1104 12:07:52.432625   86301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem --> /etc/ssl/certs/272182.pem (1708 bytes)
	I1104 12:07:52.456616   86301 start.go:296] duration metric: took 124.726284ms for postStartSetup
	I1104 12:07:52.456664   86301 fix.go:56] duration metric: took 17.406645021s for fixHost
	I1104 12:07:52.456689   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHHostname
	I1104 12:07:52.459189   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:52.459540   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:02:d6", ip: ""} in network mk-default-k8s-diff-port-036892: {Iface:virbr4 ExpiryTime:2024-11-04 13:07:45 +0000 UTC Type:0 Mac:52:54:00:da:02:d6 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:default-k8s-diff-port-036892 Clientid:01:52:54:00:da:02:d6}
	I1104 12:07:52.459573   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined IP address 192.168.72.130 and MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:52.459777   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHPort
	I1104 12:07:52.459967   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHKeyPath
	I1104 12:07:52.460093   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHKeyPath
	I1104 12:07:52.460218   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHUsername
	I1104 12:07:52.460349   86301 main.go:141] libmachine: Using SSH client type: native
	I1104 12:07:52.460521   86301 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.130 22 <nil> <nil>}
	I1104 12:07:52.460533   86301 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1104 12:07:52.573760   86301 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730722072.546172571
	
	I1104 12:07:52.573781   86301 fix.go:216] guest clock: 1730722072.546172571
	I1104 12:07:52.573787   86301 fix.go:229] Guest: 2024-11-04 12:07:52.546172571 +0000 UTC Remote: 2024-11-04 12:07:52.45666981 +0000 UTC m=+212.207079326 (delta=89.502761ms)
	I1104 12:07:52.573827   86301 fix.go:200] guest clock delta is within tolerance: 89.502761ms
	I1104 12:07:52.573832   86301 start.go:83] releasing machines lock for "default-k8s-diff-port-036892", held for 17.523849814s
	I1104 12:07:52.573856   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .DriverName
	I1104 12:07:52.574109   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetIP
	I1104 12:07:52.576773   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:52.577125   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:02:d6", ip: ""} in network mk-default-k8s-diff-port-036892: {Iface:virbr4 ExpiryTime:2024-11-04 13:07:45 +0000 UTC Type:0 Mac:52:54:00:da:02:d6 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:default-k8s-diff-port-036892 Clientid:01:52:54:00:da:02:d6}
	I1104 12:07:52.577151   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined IP address 192.168.72.130 and MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:52.577304   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .DriverName
	I1104 12:07:52.577776   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .DriverName
	I1104 12:07:52.577950   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .DriverName
	I1104 12:07:52.578043   86301 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1104 12:07:52.578079   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHHostname
	I1104 12:07:52.578133   86301 ssh_runner.go:195] Run: cat /version.json
	I1104 12:07:52.578159   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHHostname
	I1104 12:07:52.580773   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:52.580909   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:52.581128   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:02:d6", ip: ""} in network mk-default-k8s-diff-port-036892: {Iface:virbr4 ExpiryTime:2024-11-04 13:07:45 +0000 UTC Type:0 Mac:52:54:00:da:02:d6 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:default-k8s-diff-port-036892 Clientid:01:52:54:00:da:02:d6}
	I1104 12:07:52.581154   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined IP address 192.168.72.130 and MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:52.581179   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:02:d6", ip: ""} in network mk-default-k8s-diff-port-036892: {Iface:virbr4 ExpiryTime:2024-11-04 13:07:45 +0000 UTC Type:0 Mac:52:54:00:da:02:d6 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:default-k8s-diff-port-036892 Clientid:01:52:54:00:da:02:d6}
	I1104 12:07:52.581196   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined IP address 192.168.72.130 and MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:52.581286   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHPort
	I1104 12:07:52.581488   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHPort
	I1104 12:07:52.581529   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHKeyPath
	I1104 12:07:52.581660   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHUsername
	I1104 12:07:52.581677   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHKeyPath
	I1104 12:07:52.581770   86301 sshutil.go:53] new ssh client: &{IP:192.168.72.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/default-k8s-diff-port-036892/id_rsa Username:docker}
	I1104 12:07:52.581823   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHUsername
	I1104 12:07:52.581946   86301 sshutil.go:53] new ssh client: &{IP:192.168.72.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/default-k8s-diff-port-036892/id_rsa Username:docker}
	I1104 12:07:52.683801   86301 ssh_runner.go:195] Run: systemctl --version
	I1104 12:07:52.689498   86301 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1104 12:07:52.830236   86301 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1104 12:07:52.835868   86301 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1104 12:07:52.835951   86301 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1104 12:07:52.851557   86301 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1104 12:07:52.851586   86301 start.go:495] detecting cgroup driver to use...
	I1104 12:07:52.851656   86301 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1104 12:07:52.868648   86301 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1104 12:07:52.883434   86301 docker.go:217] disabling cri-docker service (if available) ...
	I1104 12:07:52.883507   86301 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1104 12:07:52.898233   86301 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1104 12:07:52.912615   86301 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1104 12:07:53.036342   86301 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1104 12:07:53.183326   86301 docker.go:233] disabling docker service ...
	I1104 12:07:53.183407   86301 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1104 12:07:53.197465   86301 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1104 12:07:53.210118   86301 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1104 12:07:53.354857   86301 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1104 12:07:53.490760   86301 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1104 12:07:53.506829   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1104 12:07:53.526401   86301 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1104 12:07:53.526464   86301 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:07:53.537264   86301 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1104 12:07:53.537339   86301 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:07:53.547882   86301 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:07:53.558039   86301 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:07:53.569347   86301 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1104 12:07:53.579931   86301 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:07:53.589594   86301 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:07:53.606753   86301 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
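
For reference, the sequence of sed edits above effectively leaves /etc/crio/crio.conf.d/02-crio.conf with settings along these lines (a reconstruction from the commands shown, not a capture of the actual file on the VM):

    pause_image = "registry.k8s.io/pause:3.10"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]

The crictl.yaml written just before it simply points crictl at the same CRI-O socket (unix:///var/run/crio/crio.sock).
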
	I1104 12:07:53.623316   86301 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1104 12:07:53.638183   86301 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1104 12:07:53.638245   86301 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1104 12:07:53.656452   86301 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1104 12:07:53.666343   86301 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1104 12:07:53.784882   86301 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1104 12:07:53.879727   86301 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1104 12:07:53.879790   86301 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1104 12:07:53.884438   86301 start.go:563] Will wait 60s for crictl version
	I1104 12:07:53.884494   86301 ssh_runner.go:195] Run: which crictl
	I1104 12:07:53.887785   86301 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1104 12:07:53.926395   86301 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1104 12:07:53.926496   86301 ssh_runner.go:195] Run: crio --version
	I1104 12:07:53.963049   86301 ssh_runner.go:195] Run: crio --version
	I1104 12:07:53.996513   86301 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1104 12:07:53.997774   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetIP
	I1104 12:07:54.000829   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:54.001214   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:02:d6", ip: ""} in network mk-default-k8s-diff-port-036892: {Iface:virbr4 ExpiryTime:2024-11-04 13:07:45 +0000 UTC Type:0 Mac:52:54:00:da:02:d6 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:default-k8s-diff-port-036892 Clientid:01:52:54:00:da:02:d6}
	I1104 12:07:54.001300   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined IP address 192.168.72.130 and MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:54.001469   86301 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1104 12:07:54.005521   86301 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1104 12:07:54.021723   86301 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-036892 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.31.2 ClusterName:default-k8s-diff-port-036892 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.130 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1104 12:07:54.021915   86301 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1104 12:07:54.021979   86301 ssh_runner.go:195] Run: sudo crictl images --output json
	I1104 12:07:54.072114   86301 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1104 12:07:54.072178   86301 ssh_runner.go:195] Run: which lz4
	I1104 12:07:54.077106   86301 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1104 12:07:54.081979   86301 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1104 12:07:54.082018   86301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1104 12:07:51.553141   85759 addons.go:510] duration metric: took 1.466523338s for enable addons: enabled=[default-storageclass metrics-server storage-provisioner]
	I1104 12:07:52.298494   85759 node_ready.go:53] node "embed-certs-325116" has status "Ready":"False"
	I1104 12:07:54.299895   85759 node_ready.go:53] node "embed-certs-325116" has status "Ready":"False"
	I1104 12:07:52.600997   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .Start
	I1104 12:07:52.601180   86402 main.go:141] libmachine: (old-k8s-version-589257) Ensuring networks are active...
	I1104 12:07:52.602131   86402 main.go:141] libmachine: (old-k8s-version-589257) Ensuring network default is active
	I1104 12:07:52.602560   86402 main.go:141] libmachine: (old-k8s-version-589257) Ensuring network mk-old-k8s-version-589257 is active
	I1104 12:07:52.603030   86402 main.go:141] libmachine: (old-k8s-version-589257) Getting domain xml...
	I1104 12:07:52.603859   86402 main.go:141] libmachine: (old-k8s-version-589257) Creating domain...
	I1104 12:07:53.855214   86402 main.go:141] libmachine: (old-k8s-version-589257) Waiting to get IP...
	I1104 12:07:53.856063   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:07:53.856539   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | unable to find current IP address of domain old-k8s-version-589257 in network mk-old-k8s-version-589257
	I1104 12:07:53.856594   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | I1104 12:07:53.856513   87367 retry.go:31] will retry after 268.725451ms: waiting for machine to come up
	I1104 12:07:54.127094   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:07:54.127584   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | unable to find current IP address of domain old-k8s-version-589257 in network mk-old-k8s-version-589257
	I1104 12:07:54.127612   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | I1104 12:07:54.127560   87367 retry.go:31] will retry after 239.665225ms: waiting for machine to come up
	I1104 12:07:54.369139   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:07:54.369777   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | unable to find current IP address of domain old-k8s-version-589257 in network mk-old-k8s-version-589257
	I1104 12:07:54.369798   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | I1104 12:07:54.369710   87367 retry.go:31] will retry after 386.228261ms: waiting for machine to come up
	I1104 12:07:54.757191   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:07:54.757637   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | unable to find current IP address of domain old-k8s-version-589257 in network mk-old-k8s-version-589257
	I1104 12:07:54.757665   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | I1104 12:07:54.757591   87367 retry.go:31] will retry after 571.244573ms: waiting for machine to come up
	I1104 12:07:55.330439   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:07:55.331187   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | unable to find current IP address of domain old-k8s-version-589257 in network mk-old-k8s-version-589257
	I1104 12:07:55.331216   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | I1104 12:07:55.331144   87367 retry.go:31] will retry after 539.328185ms: waiting for machine to come up
	I1104 12:07:55.871869   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:07:55.872373   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | unable to find current IP address of domain old-k8s-version-589257 in network mk-old-k8s-version-589257
	I1104 12:07:55.872403   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | I1104 12:07:55.872335   87367 retry.go:31] will retry after 879.285089ms: waiting for machine to come up
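
The "will retry after …" lines come from a polling loop that repeatedly asks libvirt for the domain's DHCP lease until the guest reports an IP, sleeping a randomized, growing interval between attempts. A minimal sketch of that pattern in Go (illustrative only; waitForIP is a hypothetical name, not minikube's actual retry package, and the delays and demo address are placeholders):

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // waitForIP calls lookup until it returns an address or the timeout
    // elapses, doubling (and jittering) the delay between attempts.
    func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
    	deadline := time.Now().Add(timeout)
    	delay := 200 * time.Millisecond
    	for time.Now().Before(deadline) {
    		if ip, err := lookup(); err == nil {
    			return ip, nil
    		}
    		// Jitter so concurrent waiters don't poll libvirt in lockstep.
    		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
    		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
    		time.Sleep(sleep)
    		if delay < 2*time.Second {
    			delay *= 2
    		}
    	}
    	return "", errors.New("timed out waiting for machine IP")
    }

    func main() {
    	attempts := 0
    	ip, err := waitForIP(func() (string, error) {
    		attempts++
    		if attempts < 4 {
    			return "", errors.New("no DHCP lease yet")
    		}
    		return "192.168.61.10", nil // placeholder address for the demo
    	}, 30*time.Second)
    	fmt.Println(ip, err)
    }
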
	I1104 12:07:55.376802   86301 crio.go:462] duration metric: took 1.299729399s to copy over tarball
	I1104 12:07:55.376881   86301 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1104 12:07:57.716230   86301 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.339307666s)
	I1104 12:07:57.716268   86301 crio.go:469] duration metric: took 2.339436958s to extract the tarball
	I1104 12:07:57.716277   86301 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1104 12:07:57.753216   86301 ssh_runner.go:195] Run: sudo crictl images --output json
	I1104 12:07:57.799042   86301 crio.go:514] all images are preloaded for cri-o runtime.
	I1104 12:07:57.799145   86301 cache_images.go:84] Images are preloaded, skipping loading
	I1104 12:07:57.799161   86301 kubeadm.go:934] updating node { 192.168.72.130 8444 v1.31.2 crio true true} ...
	I1104 12:07:57.799273   86301 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-036892 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.130
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-036892 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1104 12:07:57.799347   86301 ssh_runner.go:195] Run: crio config
	I1104 12:07:57.851871   86301 cni.go:84] Creating CNI manager for ""
	I1104 12:07:57.851892   86301 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1104 12:07:57.851900   86301 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1104 12:07:57.851919   86301 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.130 APIServerPort:8444 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-036892 NodeName:default-k8s-diff-port-036892 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.130"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.130 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1104 12:07:57.852056   86301 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.130
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-036892"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.130"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.130"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1104 12:07:57.852116   86301 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1104 12:07:57.862269   86301 binaries.go:44] Found k8s binaries, skipping transfer
	I1104 12:07:57.862343   86301 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1104 12:07:57.872253   86301 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1104 12:07:57.889328   86301 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1104 12:07:57.908250   86301 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2308 bytes)
	I1104 12:07:57.926081   86301 ssh_runner.go:195] Run: grep 192.168.72.130	control-plane.minikube.internal$ /etc/hosts
	I1104 12:07:57.929870   86301 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.130	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1104 12:07:57.943872   86301 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1104 12:07:58.070141   86301 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1104 12:07:58.089370   86301 certs.go:68] Setting up /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/default-k8s-diff-port-036892 for IP: 192.168.72.130
	I1104 12:07:58.089397   86301 certs.go:194] generating shared ca certs ...
	I1104 12:07:58.089423   86301 certs.go:226] acquiring lock for ca certs: {Name:mk4fb0469da6697654d0290fd25edddd463f47c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 12:07:58.089596   86301 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.key
	I1104 12:07:58.089647   86301 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.key
	I1104 12:07:58.089659   86301 certs.go:256] generating profile certs ...
	I1104 12:07:58.089765   86301 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/default-k8s-diff-port-036892/client.key
	I1104 12:07:58.089831   86301 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/default-k8s-diff-port-036892/apiserver.key.713851b2
	I1104 12:07:58.089889   86301 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/default-k8s-diff-port-036892/proxy-client.key
	I1104 12:07:58.090054   86301 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218.pem (1338 bytes)
	W1104 12:07:58.090100   86301 certs.go:480] ignoring /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218_empty.pem, impossibly tiny 0 bytes
	I1104 12:07:58.090116   86301 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem (1675 bytes)
	I1104 12:07:58.090149   86301 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem (1078 bytes)
	I1104 12:07:58.090184   86301 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem (1123 bytes)
	I1104 12:07:58.090219   86301 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem (1679 bytes)
	I1104 12:07:58.090279   86301 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem (1708 bytes)
	I1104 12:07:58.090977   86301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1104 12:07:58.125282   86301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1104 12:07:58.168289   86301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1104 12:07:58.210967   86301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1104 12:07:58.253986   86301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/default-k8s-diff-port-036892/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1104 12:07:58.280769   86301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/default-k8s-diff-port-036892/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1104 12:07:58.308406   86301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/default-k8s-diff-port-036892/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1104 12:07:58.334250   86301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/default-k8s-diff-port-036892/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1104 12:07:58.363224   86301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem --> /usr/share/ca-certificates/272182.pem (1708 bytes)
	I1104 12:07:58.391795   86301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1104 12:07:58.420782   86301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218.pem --> /usr/share/ca-certificates/27218.pem (1338 bytes)
	I1104 12:07:58.446611   86301 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1104 12:07:58.465895   86301 ssh_runner.go:195] Run: openssl version
	I1104 12:07:58.471614   86301 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/27218.pem && ln -fs /usr/share/ca-certificates/27218.pem /etc/ssl/certs/27218.pem"
	I1104 12:07:58.482139   86301 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/27218.pem
	I1104 12:07:58.486533   86301 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  4 10:49 /usr/share/ca-certificates/27218.pem
	I1104 12:07:58.486591   86301 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/27218.pem
	I1104 12:07:58.492217   86301 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/27218.pem /etc/ssl/certs/51391683.0"
	I1104 12:07:58.502724   86301 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/272182.pem && ln -fs /usr/share/ca-certificates/272182.pem /etc/ssl/certs/272182.pem"
	I1104 12:07:58.514146   86301 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/272182.pem
	I1104 12:07:58.518243   86301 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  4 10:49 /usr/share/ca-certificates/272182.pem
	I1104 12:07:58.518303   86301 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/272182.pem
	I1104 12:07:58.523579   86301 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/272182.pem /etc/ssl/certs/3ec20f2e.0"
	I1104 12:07:58.533993   86301 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1104 12:07:58.544137   86301 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1104 12:07:58.548190   86301 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  4 10:38 /usr/share/ca-certificates/minikubeCA.pem
	I1104 12:07:58.548250   86301 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1104 12:07:58.553714   86301 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1104 12:07:58.564221   86301 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1104 12:07:58.568445   86301 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1104 12:07:58.574072   86301 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1104 12:07:58.579551   86301 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1104 12:07:58.584909   86301 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1104 12:07:58.590102   86301 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1104 12:07:58.595227   86301 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
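
Each of the openssl invocations above is an expiry check: "-checkend 86400" succeeds only if the certificate will still be valid 24 hours from now, which is how the existing profile certificates are judged reusable. The same check expressed in Go, as a rough sketch (certExpiresWithin is a hypothetical helper, not minikube's code):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // certExpiresWithin reports whether the first certificate in pemBytes
    // expires within d, mirroring `openssl x509 -checkend`.
    func certExpiresWithin(pemBytes []byte, d time.Duration) (bool, error) {
    	block, _ := pem.Decode(pemBytes)
    	if block == nil {
    		return false, fmt.Errorf("no PEM block found")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	expiring, err := certExpiresWithin(data, 24*time.Hour)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("expires within 24h:", expiring)
    }
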
	I1104 12:07:58.600338   86301 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-036892 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.31.2 ClusterName:default-k8s-diff-port-036892 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.130 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1104 12:07:58.600445   86301 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1104 12:07:58.600492   86301 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1104 12:07:58.634282   86301 cri.go:89] found id: ""
	I1104 12:07:58.634352   86301 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1104 12:07:58.644578   86301 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1104 12:07:58.644597   86301 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1104 12:07:58.644635   86301 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1104 12:07:58.654412   86301 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1104 12:07:58.655638   86301 kubeconfig.go:125] found "default-k8s-diff-port-036892" server: "https://192.168.72.130:8444"
	I1104 12:07:58.658639   86301 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1104 12:07:58.667867   86301 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.130
	I1104 12:07:58.667900   86301 kubeadm.go:1160] stopping kube-system containers ...
	I1104 12:07:58.667913   86301 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1104 12:07:58.667971   86301 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1104 12:07:58.702765   86301 cri.go:89] found id: ""
	I1104 12:07:58.702844   86301 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1104 12:07:58.718368   86301 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1104 12:07:58.727671   86301 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1104 12:07:58.727690   86301 kubeadm.go:157] found existing configuration files:
	
	I1104 12:07:58.727750   86301 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1104 12:07:58.736350   86301 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1104 12:07:58.736424   86301 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1104 12:07:58.745441   86301 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1104 12:07:58.753945   86301 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1104 12:07:58.754011   86301 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1104 12:07:58.763134   86301 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1104 12:07:58.771588   86301 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1104 12:07:58.771651   86301 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1104 12:07:58.780623   86301 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1104 12:07:58.788962   86301 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1104 12:07:58.789036   86301 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1104 12:07:58.798472   86301 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1104 12:07:58.808209   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1104 12:07:58.919153   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1104 12:07:59.679355   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1104 12:07:59.889628   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1104 12:07:59.958981   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1104 12:08:00.048061   86301 api_server.go:52] waiting for apiserver process to appear ...
	I1104 12:08:00.048158   86301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:07:56.798747   85759 node_ready.go:53] node "embed-certs-325116" has status "Ready":"False"
	I1104 12:07:57.799286   85759 node_ready.go:49] node "embed-certs-325116" has status "Ready":"True"
	I1104 12:07:57.799308   85759 node_ready.go:38] duration metric: took 7.504592975s for node "embed-certs-325116" to be "Ready" ...
	I1104 12:07:57.799319   85759 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1104 12:07:57.805595   85759 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-mf8xg" in "kube-system" namespace to be "Ready" ...
	I1104 12:07:57.812394   85759 pod_ready.go:93] pod "coredns-7c65d6cfc9-mf8xg" in "kube-system" namespace has status "Ready":"True"
	I1104 12:07:57.812421   85759 pod_ready.go:82] duration metric: took 6.791823ms for pod "coredns-7c65d6cfc9-mf8xg" in "kube-system" namespace to be "Ready" ...
	I1104 12:07:57.812434   85759 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-325116" in "kube-system" namespace to be "Ready" ...
	I1104 12:07:57.818338   85759 pod_ready.go:93] pod "etcd-embed-certs-325116" in "kube-system" namespace has status "Ready":"True"
	I1104 12:07:57.818359   85759 pod_ready.go:82] duration metric: took 5.916571ms for pod "etcd-embed-certs-325116" in "kube-system" namespace to be "Ready" ...
	I1104 12:07:57.818400   85759 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-325116" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:00.015222   85759 pod_ready.go:103] pod "kube-apiserver-embed-certs-325116" in "kube-system" namespace has status "Ready":"False"
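
The pod_ready waits above poll each system-critical pod until its PodReady condition reports True. A rough client-go equivalent is sketched below (illustrative only; the kubeconfig location and 2-second poll interval are assumptions, and the pod name is just the one appearing in this run):

    package main

    import (
    	"context"
    	"fmt"
    	"os"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the pod's PodReady condition is True.
    func isPodReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
    	if err != nil {
    		panic(err)
    	}
    	client, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	name := "etcd-embed-certs-325116" // pod name taken from the log above
    	for {
    		pod, err := client.CoreV1().Pods("kube-system").Get(context.Background(), name, metav1.GetOptions{})
    		if err == nil && isPodReady(pod) {
    			fmt.Println("pod is Ready")
    			return
    		}
    		time.Sleep(2 * time.Second)
    	}
    }
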
	I1104 12:07:56.752983   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:07:56.753577   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | unable to find current IP address of domain old-k8s-version-589257 in network mk-old-k8s-version-589257
	I1104 12:07:56.753613   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | I1104 12:07:56.753542   87367 retry.go:31] will retry after 1.081359862s: waiting for machine to come up
	I1104 12:07:57.836518   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:07:57.836963   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | unable to find current IP address of domain old-k8s-version-589257 in network mk-old-k8s-version-589257
	I1104 12:07:57.836990   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | I1104 12:07:57.836914   87367 retry.go:31] will retry after 1.149571097s: waiting for machine to come up
	I1104 12:07:58.987694   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:07:58.988125   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | unable to find current IP address of domain old-k8s-version-589257 in network mk-old-k8s-version-589257
	I1104 12:07:58.988152   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | I1104 12:07:58.988077   87367 retry.go:31] will retry after 1.247311806s: waiting for machine to come up
	I1104 12:08:00.237634   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:00.238147   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | unable to find current IP address of domain old-k8s-version-589257 in network mk-old-k8s-version-589257
	I1104 12:08:00.238217   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | I1104 12:08:00.238109   87367 retry.go:31] will retry after 2.058125339s: waiting for machine to come up
	I1104 12:08:00.549003   86301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:01.048325   86301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:01.548502   86301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:01.563976   86301 api_server.go:72] duration metric: took 1.515915725s to wait for apiserver process to appear ...
	I1104 12:08:01.564003   86301 api_server.go:88] waiting for apiserver healthz status ...
	I1104 12:08:01.564021   86301 api_server.go:253] Checking apiserver healthz at https://192.168.72.130:8444/healthz ...
	I1104 12:08:04.008662   86301 api_server.go:279] https://192.168.72.130:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1104 12:08:04.008689   86301 api_server.go:103] status: https://192.168.72.130:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1104 12:08:04.008701   86301 api_server.go:253] Checking apiserver healthz at https://192.168.72.130:8444/healthz ...
	I1104 12:08:04.033053   86301 api_server.go:279] https://192.168.72.130:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1104 12:08:04.033085   86301 api_server.go:103] status: https://192.168.72.130:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1104 12:08:04.064261   86301 api_server.go:253] Checking apiserver healthz at https://192.168.72.130:8444/healthz ...
	I1104 12:08:04.084034   86301 api_server.go:279] https://192.168.72.130:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1104 12:08:04.084062   86301 api_server.go:103] status: https://192.168.72.130:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1104 12:08:04.564564   86301 api_server.go:253] Checking apiserver healthz at https://192.168.72.130:8444/healthz ...
	I1104 12:08:04.570062   86301 api_server.go:279] https://192.168.72.130:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1104 12:08:04.570090   86301 api_server.go:103] status: https://192.168.72.130:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1104 12:08:05.064688   86301 api_server.go:253] Checking apiserver healthz at https://192.168.72.130:8444/healthz ...
	I1104 12:08:05.069572   86301 api_server.go:279] https://192.168.72.130:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1104 12:08:05.069600   86301 api_server.go:103] status: https://192.168.72.130:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1104 12:08:05.564628   86301 api_server.go:253] Checking apiserver healthz at https://192.168.72.130:8444/healthz ...
	I1104 12:08:05.570537   86301 api_server.go:279] https://192.168.72.130:8444/healthz returned 200:
	ok
	I1104 12:08:05.577335   86301 api_server.go:141] control plane version: v1.31.2
	I1104 12:08:05.577360   86301 api_server.go:131] duration metric: took 4.01335048s to wait for apiserver health ...
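The api_server.go lines above poll the apiserver's /healthz endpoint until it stops answering 500 and returns 200 "ok"; each 500 body simply enumerates the poststarthooks (rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes) that have not finished yet, which is expected during a control-plane restart. A minimal sketch of that polling pattern, assuming a self-signed apiserver certificate and illustrative names (this is not minikube's actual helper):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    // waitForHealthz polls url until it returns 200 or the timeout expires.
    func waitForHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		// The test apiserver presents a self-signed cert, so verification is
    		// skipped for the probe only (assumption made for this sketch).
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil // body is simply "ok"
    			}
    			// A 500 body lists the poststarthooks that are still failing.
    			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("apiserver not healthy after %s", timeout)
    }

    func main() {
    	if err := waitForHealthz("https://192.168.72.130:8444/healthz", time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }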
	I1104 12:08:05.577371   86301 cni.go:84] Creating CNI manager for ""
	I1104 12:08:05.577379   86301 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1104 12:08:05.578990   86301 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1104 12:08:01.824677   85759 pod_ready.go:93] pod "kube-apiserver-embed-certs-325116" in "kube-system" namespace has status "Ready":"True"
	I1104 12:08:01.824703   85759 pod_ready.go:82] duration metric: took 4.006292816s for pod "kube-apiserver-embed-certs-325116" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:01.824717   85759 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-325116" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:01.833386   85759 pod_ready.go:93] pod "kube-controller-manager-embed-certs-325116" in "kube-system" namespace has status "Ready":"True"
	I1104 12:08:01.833415   85759 pod_ready.go:82] duration metric: took 8.688522ms for pod "kube-controller-manager-embed-certs-325116" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:01.833428   85759 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-phzgx" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:01.839346   85759 pod_ready.go:93] pod "kube-proxy-phzgx" in "kube-system" namespace has status "Ready":"True"
	I1104 12:08:01.839370   85759 pod_ready.go:82] duration metric: took 5.933971ms for pod "kube-proxy-phzgx" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:01.839379   85759 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-325116" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:01.844449   85759 pod_ready.go:93] pod "kube-scheduler-embed-certs-325116" in "kube-system" namespace has status "Ready":"True"
	I1104 12:08:01.844476   85759 pod_ready.go:82] duration metric: took 5.08969ms for pod "kube-scheduler-embed-certs-325116" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:01.844490   85759 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:03.852871   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:02.298631   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:02.299046   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | unable to find current IP address of domain old-k8s-version-589257 in network mk-old-k8s-version-589257
	I1104 12:08:02.299079   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | I1104 12:08:02.298978   87367 retry.go:31] will retry after 2.664667046s: waiting for machine to come up
	I1104 12:08:04.964700   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:04.965185   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | unable to find current IP address of domain old-k8s-version-589257 in network mk-old-k8s-version-589257
	I1104 12:08:04.965209   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | I1104 12:08:04.965135   87367 retry.go:31] will retry after 2.716802395s: waiting for machine to come up
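The libmachine DBG lines above show the retry helper waiting for the old-k8s-version VM to obtain a DHCP lease, sleeping a growing, jittered interval between attempts ("will retry after 2.664667046s"). A rough sketch of that pattern with made-up names and a made-up backoff schedule (the real retry.go may differ):

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retryUntil calls fn until it succeeds or timeout elapses, sleeping a
    // jittered, growing delay between attempts.
    func retryUntil(timeout time.Duration, fn func() error) error {
    	start := time.Now()
    	base := time.Second
    	for {
    		err := fn()
    		if err == nil {
    			return nil
    		}
    		if time.Since(start) > timeout {
    			return err
    		}
    		sleep := base + time.Duration(rand.Int63n(int64(base)))
    		fmt.Printf("will retry after %s: %v\n", sleep, err)
    		time.Sleep(sleep)
    		base += time.Second
    	}
    }

    func main() {
    	_ = retryUntil(5*time.Second, func() error {
    		return errors.New("waiting for machine to come up") // placeholder check
    	})
    }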
	I1104 12:08:05.580188   86301 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1104 12:08:05.591930   86301 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
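The two lines above create /etc/cni/net.d and copy a 496-byte bridge conflist into it. The exact file contents are not shown in the log; the sketch below writes a representative bridge + portmap configuration of the kind the bridge CNI plugin expects (field values are illustrative, not the file minikube ships):

    package main

    import "os"

    // Representative bridge CNI config; not the exact 496-byte file from the log.
    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}
        },
        {"type": "portmap", "capabilities": {"portMappings": true}}
      ]
    }`

    func main() {
    	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
    		panic(err)
    	}
    	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
    		panic(err)
    	}
    }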
	I1104 12:08:05.609969   86301 system_pods.go:43] waiting for kube-system pods to appear ...
	I1104 12:08:05.621524   86301 system_pods.go:59] 8 kube-system pods found
	I1104 12:08:05.621559   86301 system_pods.go:61] "coredns-7c65d6cfc9-zw2tv" [71ce75a4-f051-4014-9ed0-7b275ea940a9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1104 12:08:05.621579   86301 system_pods.go:61] "etcd-default-k8s-diff-port-036892" [7e46d97c-96b5-4301-b98a-f33b69937411] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1104 12:08:05.621590   86301 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-036892" [483cebd0-7ceb-4bf4-b1f9-e33be61b136e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1104 12:08:05.621599   86301 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-036892" [c2dc4343-177a-4a4c-8a25-47078ec664f1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1104 12:08:05.621609   86301 system_pods.go:61] "kube-proxy-j2srm" [9450cebd-aefb-4f1a-bb99-7d1dab054dd7] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1104 12:08:05.621623   86301 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-036892" [505d8202-5e02-4abd-9eff-163810a91eb2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1104 12:08:05.621637   86301 system_pods.go:61] "metrics-server-6867b74b74-2wl94" [7f7cc9c1-420c-480e-b6b7-1a2027bf2f9d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1104 12:08:05.621646   86301 system_pods.go:61] "storage-provisioner" [18745f89-fc15-4a4c-b68b-7e80cd4f393b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1104 12:08:05.621656   86301 system_pods.go:74] duration metric: took 11.668493ms to wait for pod list to return data ...
	I1104 12:08:05.621669   86301 node_conditions.go:102] verifying NodePressure condition ...
	I1104 12:08:05.626555   86301 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1104 12:08:05.626583   86301 node_conditions.go:123] node cpu capacity is 2
	I1104 12:08:05.626600   86301 node_conditions.go:105] duration metric: took 4.924748ms to run NodePressure ...
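node_conditions.go above reads the node's ephemeral-storage and CPU capacity before declaring the NodePressure check passed. Listing those same values with client-go looks roughly like this (kubeconfig path and error handling are illustrative):

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, n := range nodes.Items {
    		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name,
    			n.Status.Capacity.Cpu().String(),
    			n.Status.Capacity.StorageEphemeral().String())
    	}
    }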
	I1104 12:08:05.626620   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1104 12:08:05.899159   86301 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1104 12:08:05.905004   86301 kubeadm.go:739] kubelet initialised
	I1104 12:08:05.905027   86301 kubeadm.go:740] duration metric: took 5.831926ms waiting for restarted kubelet to initialise ...
	I1104 12:08:05.905035   86301 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1104 12:08:05.910301   86301 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-zw2tv" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:05.917517   86301 pod_ready.go:98] node "default-k8s-diff-port-036892" hosting pod "coredns-7c65d6cfc9-zw2tv" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-036892" has status "Ready":"False"
	I1104 12:08:05.917552   86301 pod_ready.go:82] duration metric: took 7.223252ms for pod "coredns-7c65d6cfc9-zw2tv" in "kube-system" namespace to be "Ready" ...
	E1104 12:08:05.917564   86301 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-036892" hosting pod "coredns-7c65d6cfc9-zw2tv" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-036892" has status "Ready":"False"
	I1104 12:08:05.917577   86301 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-036892" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:05.924077   86301 pod_ready.go:98] node "default-k8s-diff-port-036892" hosting pod "etcd-default-k8s-diff-port-036892" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-036892" has status "Ready":"False"
	I1104 12:08:05.924108   86301 pod_ready.go:82] duration metric: took 6.519268ms for pod "etcd-default-k8s-diff-port-036892" in "kube-system" namespace to be "Ready" ...
	E1104 12:08:05.924123   86301 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-036892" hosting pod "etcd-default-k8s-diff-port-036892" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-036892" has status "Ready":"False"
	I1104 12:08:05.924133   86301 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-036892" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:05.929584   86301 pod_ready.go:98] node "default-k8s-diff-port-036892" hosting pod "kube-apiserver-default-k8s-diff-port-036892" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-036892" has status "Ready":"False"
	I1104 12:08:05.929611   86301 pod_ready.go:82] duration metric: took 5.464108ms for pod "kube-apiserver-default-k8s-diff-port-036892" in "kube-system" namespace to be "Ready" ...
	E1104 12:08:05.929625   86301 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-036892" hosting pod "kube-apiserver-default-k8s-diff-port-036892" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-036892" has status "Ready":"False"
	I1104 12:08:05.929640   86301 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-036892" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:06.013629   86301 pod_ready.go:98] node "default-k8s-diff-port-036892" hosting pod "kube-controller-manager-default-k8s-diff-port-036892" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-036892" has status "Ready":"False"
	I1104 12:08:06.013655   86301 pod_ready.go:82] duration metric: took 84.003349ms for pod "kube-controller-manager-default-k8s-diff-port-036892" in "kube-system" namespace to be "Ready" ...
	E1104 12:08:06.013666   86301 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-036892" hosting pod "kube-controller-manager-default-k8s-diff-port-036892" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-036892" has status "Ready":"False"
	I1104 12:08:06.013674   86301 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-j2srm" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:06.413337   86301 pod_ready.go:98] node "default-k8s-diff-port-036892" hosting pod "kube-proxy-j2srm" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-036892" has status "Ready":"False"
	I1104 12:08:06.413362   86301 pod_ready.go:82] duration metric: took 399.676932ms for pod "kube-proxy-j2srm" in "kube-system" namespace to be "Ready" ...
	E1104 12:08:06.413372   86301 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-036892" hosting pod "kube-proxy-j2srm" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-036892" has status "Ready":"False"
	I1104 12:08:06.413379   86301 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-036892" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:06.813910   86301 pod_ready.go:98] node "default-k8s-diff-port-036892" hosting pod "kube-scheduler-default-k8s-diff-port-036892" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-036892" has status "Ready":"False"
	I1104 12:08:06.813948   86301 pod_ready.go:82] duration metric: took 400.558541ms for pod "kube-scheduler-default-k8s-diff-port-036892" in "kube-system" namespace to be "Ready" ...
	E1104 12:08:06.813962   86301 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-036892" hosting pod "kube-scheduler-default-k8s-diff-port-036892" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-036892" has status "Ready":"False"
	I1104 12:08:06.813971   86301 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:07.213603   86301 pod_ready.go:98] node "default-k8s-diff-port-036892" hosting pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-036892" has status "Ready":"False"
	I1104 12:08:07.213632   86301 pod_ready.go:82] duration metric: took 399.645898ms for pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace to be "Ready" ...
	E1104 12:08:07.213642   86301 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-036892" hosting pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-036892" has status "Ready":"False"
	I1104 12:08:07.213650   86301 pod_ready.go:39] duration metric: took 1.308606058s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
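The pod_ready.go sequence above treats a pod as "Ready" only when its PodReady condition is True, and it skips the wait (the "(skipping!)" errors) while the hosting node's own Ready condition is still False. A hedged sketch of those two condition checks, not minikube's exact code:

    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    )

    // podReady reports whether the pod's PodReady condition is True.
    func podReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    // nodeReady reports whether the node's Ready condition is True; while it is
    // False, the per-pod wait above is skipped rather than failed.
    func nodeReady(node *corev1.Node) bool {
    	for _, c := range node.Status.Conditions {
    		if c.Type == corev1.NodeReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	fmt.Println(podReady(&corev1.Pod{}), nodeReady(&corev1.Node{}))
    }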
	I1104 12:08:07.213664   86301 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1104 12:08:07.224946   86301 ops.go:34] apiserver oom_adj: -16
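The ops.go probe above shells out to read /proc/<apiserver pid>/oom_adj; the strongly negative -16 indicates the apiserver has been deprioritized for OOM killing. A standalone Go equivalent of the logged bash pipeline (pgrep plus cat), for illustration only:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    func main() {
    	out, err := exec.Command("pgrep", "kube-apiserver").Output()
    	if err != nil {
    		panic(err)
    	}
    	pid := strings.Fields(string(out))[0]
    	adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("apiserver oom_adj: %s\n", strings.TrimSpace(string(adj)))
    }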
	I1104 12:08:07.224972   86301 kubeadm.go:597] duration metric: took 8.580368331s to restartPrimaryControlPlane
	I1104 12:08:07.224984   86301 kubeadm.go:394] duration metric: took 8.624649305s to StartCluster
	I1104 12:08:07.225005   86301 settings.go:142] acquiring lock: {Name:mk5550833c1f1a4ab4fbb2bf42ff8bd7f6341220 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 12:08:07.225093   86301 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19906-19898/kubeconfig
	I1104 12:08:07.226601   86301 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19906-19898/kubeconfig: {Name:mk164657c178e6abe086acb5c1cadb969cb2b0cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 12:08:07.226848   86301 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.130 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1104 12:08:07.226980   86301 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1104 12:08:07.227075   86301 config.go:182] Loaded profile config "default-k8s-diff-port-036892": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1104 12:08:07.227096   86301 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-036892"
	I1104 12:08:07.227115   86301 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-036892"
	I1104 12:08:07.227110   86301 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-036892"
	W1104 12:08:07.227128   86301 addons.go:243] addon metrics-server should already be in state true
	I1104 12:08:07.227145   86301 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-036892"
	I1104 12:08:07.227161   86301 host.go:66] Checking if "default-k8s-diff-port-036892" exists ...
	I1104 12:08:07.227082   86301 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-036892"
	I1104 12:08:07.227275   86301 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-036892"
	W1104 12:08:07.227291   86301 addons.go:243] addon storage-provisioner should already be in state true
	I1104 12:08:07.227316   86301 host.go:66] Checking if "default-k8s-diff-port-036892" exists ...
	I1104 12:08:07.227494   86301 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:08:07.227529   86301 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:08:07.227592   86301 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:08:07.227620   86301 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:08:07.227634   86301 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:08:07.227655   86301 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:08:07.228583   86301 out.go:177] * Verifying Kubernetes components...
	I1104 12:08:07.229927   86301 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1104 12:08:07.242580   86301 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41275
	I1104 12:08:07.243096   86301 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:08:07.243659   86301 main.go:141] libmachine: Using API Version  1
	I1104 12:08:07.243678   86301 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:08:07.243954   86301 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45813
	I1104 12:08:07.244058   86301 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:08:07.244513   86301 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:08:07.244634   86301 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:08:07.244679   86301 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:08:07.245015   86301 main.go:141] libmachine: Using API Version  1
	I1104 12:08:07.245035   86301 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:08:07.245437   86301 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:08:07.245905   86301 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:08:07.245942   86301 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:08:07.245963   86301 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43217
	I1104 12:08:07.246281   86301 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:08:07.246725   86301 main.go:141] libmachine: Using API Version  1
	I1104 12:08:07.246748   86301 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:08:07.247084   86301 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:08:07.247294   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetState
	I1104 12:08:07.250833   86301 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-036892"
	W1104 12:08:07.250857   86301 addons.go:243] addon default-storageclass should already be in state true
	I1104 12:08:07.250884   86301 host.go:66] Checking if "default-k8s-diff-port-036892" exists ...
	I1104 12:08:07.251243   86301 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:08:07.251285   86301 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:08:07.261670   86301 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34265
	I1104 12:08:07.261736   86301 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36543
	I1104 12:08:07.262154   86301 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:08:07.262283   86301 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:08:07.262803   86301 main.go:141] libmachine: Using API Version  1
	I1104 12:08:07.262821   86301 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:08:07.262916   86301 main.go:141] libmachine: Using API Version  1
	I1104 12:08:07.262927   86301 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:08:07.263218   86301 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:08:07.263282   86301 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:08:07.263411   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetState
	I1104 12:08:07.263457   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetState
	I1104 12:08:07.265067   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .DriverName
	I1104 12:08:07.265574   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .DriverName
	I1104 12:08:07.267307   86301 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1104 12:08:07.267336   86301 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1104 12:08:07.268853   86301 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1104 12:08:07.268874   86301 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1104 12:08:07.268895   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHHostname
	I1104 12:08:07.268976   86301 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1104 12:08:07.268994   86301 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1104 12:08:07.269011   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHHostname
	I1104 12:08:07.271584   86301 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39607
	I1104 12:08:07.272047   86301 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:08:07.272347   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:08:07.272377   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:08:07.272688   86301 main.go:141] libmachine: Using API Version  1
	I1104 12:08:07.272707   86301 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:08:07.272933   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:02:d6", ip: ""} in network mk-default-k8s-diff-port-036892: {Iface:virbr4 ExpiryTime:2024-11-04 13:07:45 +0000 UTC Type:0 Mac:52:54:00:da:02:d6 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:default-k8s-diff-port-036892 Clientid:01:52:54:00:da:02:d6}
	I1104 12:08:07.272959   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined IP address 192.168.72.130 and MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:08:07.272990   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:02:d6", ip: ""} in network mk-default-k8s-diff-port-036892: {Iface:virbr4 ExpiryTime:2024-11-04 13:07:45 +0000 UTC Type:0 Mac:52:54:00:da:02:d6 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:default-k8s-diff-port-036892 Clientid:01:52:54:00:da:02:d6}
	I1104 12:08:07.273007   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined IP address 192.168.72.130 and MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:08:07.273065   86301 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:08:07.273149   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHPort
	I1104 12:08:07.273564   86301 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:08:07.273597   86301 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:08:07.273765   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHPort
	I1104 12:08:07.273767   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHKeyPath
	I1104 12:08:07.273925   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHKeyPath
	I1104 12:08:07.273966   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHUsername
	I1104 12:08:07.274049   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHUsername
	I1104 12:08:07.274098   86301 sshutil.go:53] new ssh client: &{IP:192.168.72.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/default-k8s-diff-port-036892/id_rsa Username:docker}
	I1104 12:08:07.274179   86301 sshutil.go:53] new ssh client: &{IP:192.168.72.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/default-k8s-diff-port-036892/id_rsa Username:docker}
	I1104 12:08:07.288474   86301 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36605
	I1104 12:08:07.288955   86301 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:08:07.289555   86301 main.go:141] libmachine: Using API Version  1
	I1104 12:08:07.289580   86301 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:08:07.289915   86301 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:08:07.290128   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetState
	I1104 12:08:07.291744   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .DriverName
	I1104 12:08:07.291944   86301 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1104 12:08:07.291958   86301 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1104 12:08:07.291972   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHHostname
	I1104 12:08:07.294477   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:08:07.294793   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:02:d6", ip: ""} in network mk-default-k8s-diff-port-036892: {Iface:virbr4 ExpiryTime:2024-11-04 13:07:45 +0000 UTC Type:0 Mac:52:54:00:da:02:d6 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:default-k8s-diff-port-036892 Clientid:01:52:54:00:da:02:d6}
	I1104 12:08:07.294824   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined IP address 192.168.72.130 and MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:08:07.295009   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHPort
	I1104 12:08:07.295178   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHKeyPath
	I1104 12:08:07.295326   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHUsername
	I1104 12:08:07.295444   86301 sshutil.go:53] new ssh client: &{IP:192.168.72.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/default-k8s-diff-port-036892/id_rsa Username:docker}
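Each sshutil.go line above corresponds to opening an SSH connection to the VM with the per-machine RSA key before the addon manifests are copied across. A hedged sketch of such a client using golang.org/x/crypto/ssh (the field names mirror the logged struct; minikube's own sshutil may differ):

    package main

    import (
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    // newSSHClient dials the VM using the machine's private key, as reflected in
    // the sshutil.go "new ssh client" log lines.
    func newSSHClient(ip, port, keyPath, user string) (*ssh.Client, error) {
    	key, err := os.ReadFile(keyPath)
    	if err != nil {
    		return nil, err
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		return nil, err
    	}
    	cfg := &ssh.ClientConfig{
    		User:            user,
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VMs
    	}
    	return ssh.Dial("tcp", ip+":"+port, cfg)
    }

    func main() {
    	client, err := newSSHClient("192.168.72.130", "22",
    		"/home/jenkins/minikube-integration/19906-19898/.minikube/machines/default-k8s-diff-port-036892/id_rsa",
    		"docker")
    	if err != nil {
    		panic(err)
    	}
    	defer client.Close()
    }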
	I1104 12:08:07.430295   86301 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1104 12:08:07.461396   86301 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-036892" to be "Ready" ...
	I1104 12:08:07.523117   86301 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1104 12:08:07.542339   86301 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1104 12:08:07.542361   86301 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1104 12:08:07.566207   86301 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1104 12:08:07.566232   86301 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1104 12:08:07.580871   86301 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1104 12:08:07.596309   86301 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1104 12:08:07.596338   86301 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1104 12:08:07.626662   86301 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1104 12:08:08.553268   86301 main.go:141] libmachine: Making call to close driver server
	I1104 12:08:08.553295   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .Close
	I1104 12:08:08.553315   86301 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.030165078s)
	I1104 12:08:08.553352   86301 main.go:141] libmachine: Making call to close driver server
	I1104 12:08:08.553373   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .Close
	I1104 12:08:08.553656   86301 main.go:141] libmachine: Successfully made call to close driver server
	I1104 12:08:08.553673   86301 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 12:08:08.553683   86301 main.go:141] libmachine: Making call to close driver server
	I1104 12:08:08.553692   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .Close
	I1104 12:08:08.553739   86301 main.go:141] libmachine: Successfully made call to close driver server
	I1104 12:08:08.553759   86301 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 12:08:08.553767   86301 main.go:141] libmachine: Making call to close driver server
	I1104 12:08:08.553780   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .Close
	I1104 12:08:08.553925   86301 main.go:141] libmachine: Successfully made call to close driver server
	I1104 12:08:08.553942   86301 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 12:08:08.554106   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | Closing plugin on server side
	I1104 12:08:08.554138   86301 main.go:141] libmachine: Successfully made call to close driver server
	I1104 12:08:08.554155   86301 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 12:08:08.559615   86301 main.go:141] libmachine: Making call to close driver server
	I1104 12:08:08.559635   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .Close
	I1104 12:08:08.559944   86301 main.go:141] libmachine: Successfully made call to close driver server
	I1104 12:08:08.559961   86301 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 12:08:08.563833   86301 main.go:141] libmachine: Making call to close driver server
	I1104 12:08:08.563848   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .Close
	I1104 12:08:08.564072   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | Closing plugin on server side
	I1104 12:08:08.564636   86301 main.go:141] libmachine: Successfully made call to close driver server
	I1104 12:08:08.564653   86301 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 12:08:08.564666   86301 main.go:141] libmachine: Making call to close driver server
	I1104 12:08:08.564671   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .Close
	I1104 12:08:08.564894   86301 main.go:141] libmachine: Successfully made call to close driver server
	I1104 12:08:08.564906   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | Closing plugin on server side
	I1104 12:08:08.564912   86301 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 12:08:08.564940   86301 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-036892"
	I1104 12:08:08.566838   86301 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1104 12:08:08.568165   86301 addons.go:510] duration metric: took 1.341200959s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1104 12:08:09.465405   86301 node_ready.go:53] node "default-k8s-diff-port-036892" has status "Ready":"False"
	I1104 12:08:06.350759   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:08.850563   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:10.851315   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:07.683582   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:07.684143   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | unable to find current IP address of domain old-k8s-version-589257 in network mk-old-k8s-version-589257
	I1104 12:08:07.684172   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | I1104 12:08:07.684093   87367 retry.go:31] will retry after 2.880856513s: waiting for machine to come up
	I1104 12:08:10.566197   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:10.566657   86402 main.go:141] libmachine: (old-k8s-version-589257) Found IP for machine: 192.168.50.180
	I1104 12:08:10.566675   86402 main.go:141] libmachine: (old-k8s-version-589257) Reserving static IP address...
	I1104 12:08:10.566687   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has current primary IP address 192.168.50.180 and MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:10.567139   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | found host DHCP lease matching {name: "old-k8s-version-589257", mac: "52:54:00:6b:6c:11", ip: "192.168.50.180"} in network mk-old-k8s-version-589257: {Iface:virbr2 ExpiryTime:2024-11-04 13:08:03 +0000 UTC Type:0 Mac:52:54:00:6b:6c:11 Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:old-k8s-version-589257 Clientid:01:52:54:00:6b:6c:11}
	I1104 12:08:10.567166   86402 main.go:141] libmachine: (old-k8s-version-589257) Reserved static IP address: 192.168.50.180
	I1104 12:08:10.567186   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | skip adding static IP to network mk-old-k8s-version-589257 - found existing host DHCP lease matching {name: "old-k8s-version-589257", mac: "52:54:00:6b:6c:11", ip: "192.168.50.180"}
	I1104 12:08:10.567199   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | Getting to WaitForSSH function...
	I1104 12:08:10.567213   86402 main.go:141] libmachine: (old-k8s-version-589257) Waiting for SSH to be available...
	I1104 12:08:10.569500   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:10.569816   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:6c:11", ip: ""} in network mk-old-k8s-version-589257: {Iface:virbr2 ExpiryTime:2024-11-04 13:08:03 +0000 UTC Type:0 Mac:52:54:00:6b:6c:11 Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:old-k8s-version-589257 Clientid:01:52:54:00:6b:6c:11}
	I1104 12:08:10.569846   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined IP address 192.168.50.180 and MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:10.569982   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | Using SSH client type: external
	I1104 12:08:10.570004   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | Using SSH private key: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/old-k8s-version-589257/id_rsa (-rw-------)
	I1104 12:08:10.570025   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.180 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19906-19898/.minikube/machines/old-k8s-version-589257/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1104 12:08:10.570033   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | About to run SSH command:
	I1104 12:08:10.570041   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | exit 0
	I1104 12:08:10.697114   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | SSH cmd err, output: <nil>: 
	I1104 12:08:10.697552   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetConfigRaw
	I1104 12:08:10.698196   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetIP
	I1104 12:08:10.700982   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:10.701369   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:6c:11", ip: ""} in network mk-old-k8s-version-589257: {Iface:virbr2 ExpiryTime:2024-11-04 13:08:03 +0000 UTC Type:0 Mac:52:54:00:6b:6c:11 Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:old-k8s-version-589257 Clientid:01:52:54:00:6b:6c:11}
	I1104 12:08:10.701403   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined IP address 192.168.50.180 and MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:10.701649   86402 profile.go:143] Saving config to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/old-k8s-version-589257/config.json ...
	I1104 12:08:10.701875   86402 machine.go:93] provisionDockerMachine start ...
	I1104 12:08:10.701898   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .DriverName
	I1104 12:08:10.702099   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHHostname
	I1104 12:08:10.704605   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:10.704977   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:6c:11", ip: ""} in network mk-old-k8s-version-589257: {Iface:virbr2 ExpiryTime:2024-11-04 13:08:03 +0000 UTC Type:0 Mac:52:54:00:6b:6c:11 Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:old-k8s-version-589257 Clientid:01:52:54:00:6b:6c:11}
	I1104 12:08:10.705006   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined IP address 192.168.50.180 and MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:10.705151   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHPort
	I1104 12:08:10.705342   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHKeyPath
	I1104 12:08:10.705486   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHKeyPath
	I1104 12:08:10.705602   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHUsername
	I1104 12:08:10.705703   86402 main.go:141] libmachine: Using SSH client type: native
	I1104 12:08:10.705907   86402 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.180 22 <nil> <nil>}
	I1104 12:08:10.705918   86402 main.go:141] libmachine: About to run SSH command:
	hostname
	I1104 12:08:10.813494   86402 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1104 12:08:10.813544   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetMachineName
	I1104 12:08:10.813816   86402 buildroot.go:166] provisioning hostname "old-k8s-version-589257"
	I1104 12:08:10.813847   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetMachineName
	I1104 12:08:10.814034   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHHostname
	I1104 12:08:10.816782   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:10.817186   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:6c:11", ip: ""} in network mk-old-k8s-version-589257: {Iface:virbr2 ExpiryTime:2024-11-04 13:08:03 +0000 UTC Type:0 Mac:52:54:00:6b:6c:11 Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:old-k8s-version-589257 Clientid:01:52:54:00:6b:6c:11}
	I1104 12:08:10.817245   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined IP address 192.168.50.180 and MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:10.817394   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHPort
	I1104 12:08:10.817589   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHKeyPath
	I1104 12:08:10.817760   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHKeyPath
	I1104 12:08:10.817882   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHUsername
	I1104 12:08:10.818027   86402 main.go:141] libmachine: Using SSH client type: native
	I1104 12:08:10.818227   86402 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.180 22 <nil> <nil>}
	I1104 12:08:10.818245   86402 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-589257 && echo "old-k8s-version-589257" | sudo tee /etc/hostname
	I1104 12:08:10.940779   86402 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-589257
	
	I1104 12:08:10.940803   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHHostname
	I1104 12:08:10.943694   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:10.944062   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:6c:11", ip: ""} in network mk-old-k8s-version-589257: {Iface:virbr2 ExpiryTime:2024-11-04 13:08:03 +0000 UTC Type:0 Mac:52:54:00:6b:6c:11 Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:old-k8s-version-589257 Clientid:01:52:54:00:6b:6c:11}
	I1104 12:08:10.944090   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined IP address 192.168.50.180 and MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:10.944263   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHPort
	I1104 12:08:10.944452   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHKeyPath
	I1104 12:08:10.944627   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHKeyPath
	I1104 12:08:10.944767   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHUsername
	I1104 12:08:10.944910   86402 main.go:141] libmachine: Using SSH client type: native
	I1104 12:08:10.945093   86402 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.180 22 <nil> <nil>}
	I1104 12:08:10.945110   86402 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-589257' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-589257/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-589257' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1104 12:08:11.061924   86402 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1104 12:08:11.061966   86402 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19906-19898/.minikube CaCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19906-19898/.minikube}
	I1104 12:08:11.062007   86402 buildroot.go:174] setting up certificates
	I1104 12:08:11.062021   86402 provision.go:84] configureAuth start
	I1104 12:08:11.062033   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetMachineName
	I1104 12:08:11.062293   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetIP
	I1104 12:08:11.065165   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:11.065559   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:6c:11", ip: ""} in network mk-old-k8s-version-589257: {Iface:virbr2 ExpiryTime:2024-11-04 13:08:03 +0000 UTC Type:0 Mac:52:54:00:6b:6c:11 Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:old-k8s-version-589257 Clientid:01:52:54:00:6b:6c:11}
	I1104 12:08:11.065594   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined IP address 192.168.50.180 and MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:11.065834   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHHostname
	I1104 12:08:11.068257   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:11.068620   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:6c:11", ip: ""} in network mk-old-k8s-version-589257: {Iface:virbr2 ExpiryTime:2024-11-04 13:08:03 +0000 UTC Type:0 Mac:52:54:00:6b:6c:11 Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:old-k8s-version-589257 Clientid:01:52:54:00:6b:6c:11}
	I1104 12:08:11.068646   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined IP address 192.168.50.180 and MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:11.068787   86402 provision.go:143] copyHostCerts
	I1104 12:08:11.068842   86402 exec_runner.go:144] found /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem, removing ...
	I1104 12:08:11.068854   86402 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem
	I1104 12:08:11.068904   86402 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem (1078 bytes)
	I1104 12:08:11.068993   86402 exec_runner.go:144] found /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem, removing ...
	I1104 12:08:11.069000   86402 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem
	I1104 12:08:11.069019   86402 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem (1123 bytes)
	I1104 12:08:11.069072   86402 exec_runner.go:144] found /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem, removing ...
	I1104 12:08:11.069079   86402 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem
	I1104 12:08:11.069097   86402 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem (1679 bytes)
	I1104 12:08:11.069191   86402 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-589257 san=[127.0.0.1 192.168.50.180 localhost minikube old-k8s-version-589257]
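provision.go above generates a server certificate whose SANs cover localhost, the VM IP and both hostnames, signed with the cluster CA's key. A generic crypto/x509 sketch of that step; a throwaway CA is generated inline so the example is self-contained, whereas minikube loads the existing ca.pem/ca-key.pem:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// Throwaway CA (illustrative; minikube reuses its existing CA key pair).
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().AddDate(10, 0, 0),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
    		BasicConstraintsValid: true,
    	}
    	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	caCert, _ := x509.ParseCertificate(caDER)

    	// Server cert carrying the SANs listed in the provision.go line above.
    	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	srvTmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-589257"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().AddDate(1, 0, 0),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		DNSNames:     []string{"localhost", "minikube", "old-k8s-version-589257"},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.180")},
    	}
    	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
    }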
	I1104 12:08:11.271880   86402 provision.go:177] copyRemoteCerts
	I1104 12:08:11.271946   86402 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1104 12:08:11.271988   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHHostname
	I1104 12:08:11.275023   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:11.275396   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:6c:11", ip: ""} in network mk-old-k8s-version-589257: {Iface:virbr2 ExpiryTime:2024-11-04 13:08:03 +0000 UTC Type:0 Mac:52:54:00:6b:6c:11 Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:old-k8s-version-589257 Clientid:01:52:54:00:6b:6c:11}
	I1104 12:08:11.275428   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined IP address 192.168.50.180 and MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:11.275701   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHPort
	I1104 12:08:11.275905   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHKeyPath
	I1104 12:08:11.276048   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHUsername
	I1104 12:08:11.276182   86402 sshutil.go:53] new ssh client: &{IP:192.168.50.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/old-k8s-version-589257/id_rsa Username:docker}
	I1104 12:08:11.362968   86402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1104 12:08:11.388401   86402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1104 12:08:11.417180   86402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1104 12:08:11.439810   86402 provision.go:87] duration metric: took 377.778325ms to configureAuth
	I1104 12:08:11.439841   86402 buildroot.go:189] setting minikube options for container-runtime
	I1104 12:08:11.440043   86402 config.go:182] Loaded profile config "old-k8s-version-589257": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1104 12:08:11.440110   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHHostname
	I1104 12:08:11.442476   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:11.442783   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:6c:11", ip: ""} in network mk-old-k8s-version-589257: {Iface:virbr2 ExpiryTime:2024-11-04 13:08:03 +0000 UTC Type:0 Mac:52:54:00:6b:6c:11 Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:old-k8s-version-589257 Clientid:01:52:54:00:6b:6c:11}
	I1104 12:08:11.442818   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined IP address 192.168.50.180 and MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:11.443005   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHPort
	I1104 12:08:11.443204   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHKeyPath
	I1104 12:08:11.443329   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHKeyPath
	I1104 12:08:11.443492   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHUsername
	I1104 12:08:11.443665   86402 main.go:141] libmachine: Using SSH client type: native
	I1104 12:08:11.443822   86402 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.180 22 <nil> <nil>}
	I1104 12:08:11.443837   86402 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1104 12:08:11.662212   86402 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
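The SSH command above writes CRIO_MINIKUBE_OPTIONS into /etc/sysconfig/crio.minikube and restarts CRI-O. A minimal way to confirm it took effect on the guest (a sketch, assuming the crio unit actually sources that file):
    cat /etc/sysconfig/crio.minikube   # should show CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
    systemctl cat crio                 # shows the unit and any EnvironmentFile= line that would pick the option up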
	I1104 12:08:11.662241   86402 machine.go:96] duration metric: took 960.351823ms to provisionDockerMachine
	I1104 12:08:11.662256   86402 start.go:293] postStartSetup for "old-k8s-version-589257" (driver="kvm2")
	I1104 12:08:11.662269   86402 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1104 12:08:11.662289   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .DriverName
	I1104 12:08:11.662613   86402 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1104 12:08:11.662642   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHHostname
	I1104 12:08:11.665028   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:11.665391   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:6c:11", ip: ""} in network mk-old-k8s-version-589257: {Iface:virbr2 ExpiryTime:2024-11-04 13:08:03 +0000 UTC Type:0 Mac:52:54:00:6b:6c:11 Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:old-k8s-version-589257 Clientid:01:52:54:00:6b:6c:11}
	I1104 12:08:11.665420   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined IP address 192.168.50.180 and MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:11.665598   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHPort
	I1104 12:08:11.665776   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHKeyPath
	I1104 12:08:11.665942   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHUsername
	I1104 12:08:11.666064   86402 sshutil.go:53] new ssh client: &{IP:192.168.50.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/old-k8s-version-589257/id_rsa Username:docker}
	I1104 12:08:11.889727   85500 start.go:364] duration metric: took 49.147423989s to acquireMachinesLock for "no-preload-908370"
	I1104 12:08:11.889796   85500 start.go:96] Skipping create...Using existing machine configuration
	I1104 12:08:11.889806   85500 fix.go:54] fixHost starting: 
	I1104 12:08:11.890201   85500 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:08:11.890229   85500 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:08:11.906978   85500 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40931
	I1104 12:08:11.907524   85500 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:08:11.907916   85500 main.go:141] libmachine: Using API Version  1
	I1104 12:08:11.907939   85500 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:08:11.908319   85500 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:08:11.908518   85500 main.go:141] libmachine: (no-preload-908370) Calling .DriverName
	I1104 12:08:11.908672   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetState
	I1104 12:08:11.910182   85500 fix.go:112] recreateIfNeeded on no-preload-908370: state=Stopped err=<nil>
	I1104 12:08:11.910224   85500 main.go:141] libmachine: (no-preload-908370) Calling .DriverName
	W1104 12:08:11.910353   85500 fix.go:138] unexpected machine state, will restart: <nil>
	I1104 12:08:11.912457   85500 out.go:177] * Restarting existing kvm2 VM for "no-preload-908370" ...
	I1104 12:08:11.747199   86402 ssh_runner.go:195] Run: cat /etc/os-release
	I1104 12:08:11.751253   86402 info.go:137] Remote host: Buildroot 2023.02.9
	I1104 12:08:11.751279   86402 filesync.go:126] Scanning /home/jenkins/minikube-integration/19906-19898/.minikube/addons for local assets ...
	I1104 12:08:11.751356   86402 filesync.go:126] Scanning /home/jenkins/minikube-integration/19906-19898/.minikube/files for local assets ...
	I1104 12:08:11.751465   86402 filesync.go:149] local asset: /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem -> 272182.pem in /etc/ssl/certs
	I1104 12:08:11.751591   86402 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1104 12:08:11.760409   86402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem --> /etc/ssl/certs/272182.pem (1708 bytes)
	I1104 12:08:11.781890   86402 start.go:296] duration metric: took 119.620604ms for postStartSetup
	I1104 12:08:11.781934   86402 fix.go:56] duration metric: took 19.207938878s for fixHost
	I1104 12:08:11.781960   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHHostname
	I1104 12:08:11.784767   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:11.785058   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:6c:11", ip: ""} in network mk-old-k8s-version-589257: {Iface:virbr2 ExpiryTime:2024-11-04 13:08:03 +0000 UTC Type:0 Mac:52:54:00:6b:6c:11 Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:old-k8s-version-589257 Clientid:01:52:54:00:6b:6c:11}
	I1104 12:08:11.785084   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined IP address 192.168.50.180 and MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:11.785300   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHPort
	I1104 12:08:11.785500   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHKeyPath
	I1104 12:08:11.785644   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHKeyPath
	I1104 12:08:11.785750   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHUsername
	I1104 12:08:11.785877   86402 main.go:141] libmachine: Using SSH client type: native
	I1104 12:08:11.786047   86402 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.180 22 <nil> <nil>}
	I1104 12:08:11.786059   86402 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1104 12:08:11.889540   86402 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730722091.863405264
	
	I1104 12:08:11.889568   86402 fix.go:216] guest clock: 1730722091.863405264
	I1104 12:08:11.889578   86402 fix.go:229] Guest: 2024-11-04 12:08:11.863405264 +0000 UTC Remote: 2024-11-04 12:08:11.781939603 +0000 UTC m=+230.132769870 (delta=81.465661ms)
	I1104 12:08:11.889631   86402 fix.go:200] guest clock delta is within tolerance: 81.465661ms
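The reported delta is simply the guest clock minus the remote timestamp; the arithmetic checks out:
    echo '1730722091.863405264 - 1730722091.781939603' | bc
    # .081465661  -> the 81.465661ms delta logged above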
	I1104 12:08:11.889641   86402 start.go:83] releasing machines lock for "old-k8s-version-589257", held for 19.315682928s
	I1104 12:08:11.889677   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .DriverName
	I1104 12:08:11.889975   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetIP
	I1104 12:08:11.892654   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:11.892982   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:6c:11", ip: ""} in network mk-old-k8s-version-589257: {Iface:virbr2 ExpiryTime:2024-11-04 13:08:03 +0000 UTC Type:0 Mac:52:54:00:6b:6c:11 Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:old-k8s-version-589257 Clientid:01:52:54:00:6b:6c:11}
	I1104 12:08:11.893012   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined IP address 192.168.50.180 and MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:11.893212   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .DriverName
	I1104 12:08:11.893706   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .DriverName
	I1104 12:08:11.893888   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .DriverName
	I1104 12:08:11.893989   86402 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1104 12:08:11.894031   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHHostname
	I1104 12:08:11.894074   86402 ssh_runner.go:195] Run: cat /version.json
	I1104 12:08:11.894094   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHHostname
	I1104 12:08:11.896812   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:11.897020   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:11.897192   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:6c:11", ip: ""} in network mk-old-k8s-version-589257: {Iface:virbr2 ExpiryTime:2024-11-04 13:08:03 +0000 UTC Type:0 Mac:52:54:00:6b:6c:11 Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:old-k8s-version-589257 Clientid:01:52:54:00:6b:6c:11}
	I1104 12:08:11.897217   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined IP address 192.168.50.180 and MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:11.897454   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:6c:11", ip: ""} in network mk-old-k8s-version-589257: {Iface:virbr2 ExpiryTime:2024-11-04 13:08:03 +0000 UTC Type:0 Mac:52:54:00:6b:6c:11 Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:old-k8s-version-589257 Clientid:01:52:54:00:6b:6c:11}
	I1104 12:08:11.897478   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined IP address 192.168.50.180 and MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:11.897492   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHPort
	I1104 12:08:11.897631   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHPort
	I1104 12:08:11.897646   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHKeyPath
	I1104 12:08:11.897778   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHKeyPath
	I1104 12:08:11.897911   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHUsername
	I1104 12:08:11.897989   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHUsername
	I1104 12:08:11.898083   86402 sshutil.go:53] new ssh client: &{IP:192.168.50.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/old-k8s-version-589257/id_rsa Username:docker}
	I1104 12:08:11.898120   86402 sshutil.go:53] new ssh client: &{IP:192.168.50.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/old-k8s-version-589257/id_rsa Username:docker}
	I1104 12:08:11.998704   86402 ssh_runner.go:195] Run: systemctl --version
	I1104 12:08:12.004820   86402 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1104 12:08:12.148742   86402 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1104 12:08:12.155015   86402 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1104 12:08:12.155089   86402 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1104 12:08:12.171054   86402 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1104 12:08:12.171085   86402 start.go:495] detecting cgroup driver to use...
	I1104 12:08:12.171154   86402 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1104 12:08:12.189977   86402 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1104 12:08:12.204622   86402 docker.go:217] disabling cri-docker service (if available) ...
	I1104 12:08:12.204679   86402 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1104 12:08:12.218808   86402 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1104 12:08:12.232276   86402 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1104 12:08:12.341220   86402 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1104 12:08:12.512813   86402 docker.go:233] disabling docker service ...
	I1104 12:08:12.512893   86402 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1104 12:08:12.526784   86402 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1104 12:08:12.539774   86402 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1104 12:08:12.666162   86402 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1104 12:08:12.788317   86402 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1104 12:08:12.802703   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1104 12:08:12.820915   86402 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1104 12:08:12.820985   86402 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:08:12.831311   86402 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1104 12:08:12.831400   86402 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:08:12.841625   86402 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:08:12.852548   86402 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:08:12.864683   86402 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1104 12:08:12.876794   86402 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1104 12:08:12.886878   86402 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1104 12:08:12.886943   86402 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1104 12:08:12.902476   86402 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1104 12:08:12.914565   86402 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1104 12:08:13.044125   86402 ssh_runner.go:195] Run: sudo systemctl restart crio
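The sed edits above pin the CRI-O pause image, switch the cgroup manager to cgroupfs, and set conmon_cgroup before the daemon-reload and restart. A quick check of the resulting drop-in (a sketch; the exact layout of 02-crio.conf may differ per image):
    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
    # expected, roughly:
    #   pause_image = "registry.k8s.io/pause:3.2"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"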
	I1104 12:08:13.149816   86402 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1104 12:08:13.149893   86402 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1104 12:08:13.154639   86402 start.go:563] Will wait 60s for crictl version
	I1104 12:08:13.154706   86402 ssh_runner.go:195] Run: which crictl
	I1104 12:08:13.158788   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1104 12:08:13.200038   86402 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1104 12:08:13.200117   86402 ssh_runner.go:195] Run: crio --version
	I1104 12:08:13.233501   86402 ssh_runner.go:195] Run: crio --version
	I1104 12:08:13.264558   86402 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1104 12:08:11.913730   85500 main.go:141] libmachine: (no-preload-908370) Calling .Start
	I1104 12:08:11.913915   85500 main.go:141] libmachine: (no-preload-908370) Ensuring networks are active...
	I1104 12:08:11.914653   85500 main.go:141] libmachine: (no-preload-908370) Ensuring network default is active
	I1104 12:08:11.915111   85500 main.go:141] libmachine: (no-preload-908370) Ensuring network mk-no-preload-908370 is active
	I1104 12:08:11.915575   85500 main.go:141] libmachine: (no-preload-908370) Getting domain xml...
	I1104 12:08:11.916375   85500 main.go:141] libmachine: (no-preload-908370) Creating domain...
	I1104 12:08:13.289793   85500 main.go:141] libmachine: (no-preload-908370) Waiting to get IP...
	I1104 12:08:13.290880   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:13.291498   85500 main.go:141] libmachine: (no-preload-908370) DBG | unable to find current IP address of domain no-preload-908370 in network mk-no-preload-908370
	I1104 12:08:13.291631   85500 main.go:141] libmachine: (no-preload-908370) DBG | I1104 12:08:13.291463   87562 retry.go:31] will retry after 277.090671ms: waiting for machine to come up
	I1104 12:08:13.570141   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:13.570726   85500 main.go:141] libmachine: (no-preload-908370) DBG | unable to find current IP address of domain no-preload-908370 in network mk-no-preload-908370
	I1104 12:08:13.570749   85500 main.go:141] libmachine: (no-preload-908370) DBG | I1104 12:08:13.570623   87562 retry.go:31] will retry after 259.985785ms: waiting for machine to come up
	I1104 12:08:13.832172   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:13.832855   85500 main.go:141] libmachine: (no-preload-908370) DBG | unable to find current IP address of domain no-preload-908370 in network mk-no-preload-908370
	I1104 12:08:13.832898   85500 main.go:141] libmachine: (no-preload-908370) DBG | I1104 12:08:13.832809   87562 retry.go:31] will retry after 473.426945ms: waiting for machine to come up
	I1104 12:08:14.308725   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:14.309273   85500 main.go:141] libmachine: (no-preload-908370) DBG | unable to find current IP address of domain no-preload-908370 in network mk-no-preload-908370
	I1104 12:08:14.309302   85500 main.go:141] libmachine: (no-preload-908370) DBG | I1104 12:08:14.309249   87562 retry.go:31] will retry after 417.466134ms: waiting for machine to come up
	I1104 12:08:14.727927   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:14.728388   85500 main.go:141] libmachine: (no-preload-908370) DBG | unable to find current IP address of domain no-preload-908370 in network mk-no-preload-908370
	I1104 12:08:14.728413   85500 main.go:141] libmachine: (no-preload-908370) DBG | I1104 12:08:14.728366   87562 retry.go:31] will retry after 734.894622ms: waiting for machine to come up
	I1104 12:08:11.465894   86301 node_ready.go:53] node "default-k8s-diff-port-036892" has status "Ready":"False"
	I1104 12:08:13.966921   86301 node_ready.go:53] node "default-k8s-diff-port-036892" has status "Ready":"False"
	I1104 12:08:14.465523   86301 node_ready.go:49] node "default-k8s-diff-port-036892" has status "Ready":"True"
	I1104 12:08:14.465545   86301 node_ready.go:38] duration metric: took 7.004111382s for node "default-k8s-diff-port-036892" to be "Ready" ...
	I1104 12:08:14.465554   86301 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1104 12:08:14.473334   86301 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-zw2tv" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:14.482486   86301 pod_ready.go:93] pod "coredns-7c65d6cfc9-zw2tv" in "kube-system" namespace has status "Ready":"True"
	I1104 12:08:14.482508   86301 pod_ready.go:82] duration metric: took 9.145998ms for pod "coredns-7c65d6cfc9-zw2tv" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:14.482518   86301 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-036892" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:13.351753   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:15.851818   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:13.266087   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetIP
	I1104 12:08:13.269660   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:13.270200   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:6c:11", ip: ""} in network mk-old-k8s-version-589257: {Iface:virbr2 ExpiryTime:2024-11-04 13:08:03 +0000 UTC Type:0 Mac:52:54:00:6b:6c:11 Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:old-k8s-version-589257 Clientid:01:52:54:00:6b:6c:11}
	I1104 12:08:13.270233   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined IP address 192.168.50.180 and MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:13.270520   86402 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1104 12:08:13.274751   86402 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1104 12:08:13.290348   86402 kubeadm.go:883] updating cluster {Name:old-k8s-version-589257 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-589257 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.180 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1104 12:08:13.290483   86402 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1104 12:08:13.290547   86402 ssh_runner.go:195] Run: sudo crictl images --output json
	I1104 12:08:13.340338   86402 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1104 12:08:13.340426   86402 ssh_runner.go:195] Run: which lz4
	I1104 12:08:13.345147   86402 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1104 12:08:13.349792   86402 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1104 12:08:13.349872   86402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1104 12:08:14.842720   86402 crio.go:462] duration metric: took 1.497615031s to copy over tarball
	I1104 12:08:14.842791   86402 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1104 12:08:15.464914   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:15.465510   85500 main.go:141] libmachine: (no-preload-908370) DBG | unable to find current IP address of domain no-preload-908370 in network mk-no-preload-908370
	I1104 12:08:15.465541   85500 main.go:141] libmachine: (no-preload-908370) DBG | I1104 12:08:15.465478   87562 retry.go:31] will retry after 578.01955ms: waiting for machine to come up
	I1104 12:08:16.044861   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:16.045354   85500 main.go:141] libmachine: (no-preload-908370) DBG | unable to find current IP address of domain no-preload-908370 in network mk-no-preload-908370
	I1104 12:08:16.045380   85500 main.go:141] libmachine: (no-preload-908370) DBG | I1104 12:08:16.045313   87562 retry.go:31] will retry after 1.136035438s: waiting for machine to come up
	I1104 12:08:17.182829   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:17.183255   85500 main.go:141] libmachine: (no-preload-908370) DBG | unable to find current IP address of domain no-preload-908370 in network mk-no-preload-908370
	I1104 12:08:17.183282   85500 main.go:141] libmachine: (no-preload-908370) DBG | I1104 12:08:17.183233   87562 retry.go:31] will retry after 1.070971462s: waiting for machine to come up
	I1104 12:08:18.255532   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:18.256051   85500 main.go:141] libmachine: (no-preload-908370) DBG | unable to find current IP address of domain no-preload-908370 in network mk-no-preload-908370
	I1104 12:08:18.256078   85500 main.go:141] libmachine: (no-preload-908370) DBG | I1104 12:08:18.256007   87562 retry.go:31] will retry after 1.542250267s: waiting for machine to come up
	I1104 12:08:19.800851   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:19.801298   85500 main.go:141] libmachine: (no-preload-908370) DBG | unable to find current IP address of domain no-preload-908370 in network mk-no-preload-908370
	I1104 12:08:19.801324   85500 main.go:141] libmachine: (no-preload-908370) DBG | I1104 12:08:19.801276   87562 retry.go:31] will retry after 2.127250885s: waiting for machine to come up
	I1104 12:08:16.489394   86301 pod_ready.go:103] pod "etcd-default-k8s-diff-port-036892" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:16.994480   86301 pod_ready.go:93] pod "etcd-default-k8s-diff-port-036892" in "kube-system" namespace has status "Ready":"True"
	I1104 12:08:16.994502   86301 pod_ready.go:82] duration metric: took 2.511977586s for pod "etcd-default-k8s-diff-port-036892" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:16.994512   86301 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-036892" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:17.502472   86301 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-036892" in "kube-system" namespace has status "Ready":"True"
	I1104 12:08:17.502499   86301 pod_ready.go:82] duration metric: took 507.979218ms for pod "kube-apiserver-default-k8s-diff-port-036892" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:17.502513   86301 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-036892" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:17.507763   86301 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-036892" in "kube-system" namespace has status "Ready":"True"
	I1104 12:08:17.507785   86301 pod_ready.go:82] duration metric: took 5.264185ms for pod "kube-controller-manager-default-k8s-diff-port-036892" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:17.507795   86301 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-j2srm" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:17.514017   86301 pod_ready.go:93] pod "kube-proxy-j2srm" in "kube-system" namespace has status "Ready":"True"
	I1104 12:08:17.514045   86301 pod_ready.go:82] duration metric: took 6.241799ms for pod "kube-proxy-j2srm" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:17.514058   86301 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-036892" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:19.683083   86301 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-036892" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:20.049735   86301 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-036892" in "kube-system" namespace has status "Ready":"True"
	I1104 12:08:20.049759   86301 pod_ready.go:82] duration metric: took 2.535691306s for pod "kube-scheduler-default-k8s-diff-port-036892" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:20.049772   86301 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:18.749494   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:20.853448   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:17.837381   86402 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.994557811s)
	I1104 12:08:17.837410   86402 crio.go:469] duration metric: took 2.994665886s to extract the tarball
	I1104 12:08:17.837420   86402 ssh_runner.go:146] rm: /preloaded.tar.lz4
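The preload tarball is copied to /preloaded.tar.lz4 and unpacked into /var, which is where the CRI-O image store lives. The manual equivalent of the extract-and-verify step, using the same flags as the log above:
    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
    sudo crictl images    # lists whatever image store the tarball actually contained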
	I1104 12:08:17.882418   86402 ssh_runner.go:195] Run: sudo crictl images --output json
	I1104 12:08:17.917035   86402 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1104 12:08:17.917064   86402 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1104 12:08:17.917195   86402 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1104 12:08:17.917277   86402 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1104 12:08:17.917169   86402 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1104 12:08:17.917164   86402 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1104 12:08:17.917150   86402 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1104 12:08:17.917277   86402 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1104 12:08:17.917283   86402 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1104 12:08:17.917254   86402 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1104 12:08:17.918929   86402 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1104 12:08:17.918943   86402 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1104 12:08:17.918929   86402 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1104 12:08:17.918929   86402 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1104 12:08:17.918930   86402 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1104 12:08:17.918930   86402 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1104 12:08:17.919014   86402 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1104 12:08:17.919025   86402 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1104 12:08:18.070119   86402 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1104 12:08:18.076604   86402 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1104 12:08:18.078712   86402 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1104 12:08:18.083777   86402 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1104 12:08:18.087827   86402 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1104 12:08:18.092838   86402 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1104 12:08:18.110359   86402 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1104 12:08:18.165523   86402 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1104 12:08:18.165569   86402 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1104 12:08:18.165617   86402 ssh_runner.go:195] Run: which crictl
	I1104 12:08:18.213723   86402 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1104 12:08:18.213784   86402 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1104 12:08:18.213833   86402 ssh_runner.go:195] Run: which crictl
	I1104 12:08:18.252171   86402 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1104 12:08:18.252221   86402 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1104 12:08:18.252270   86402 ssh_runner.go:195] Run: which crictl
	I1104 12:08:18.256482   86402 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1104 12:08:18.256522   86402 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1104 12:08:18.256567   86402 ssh_runner.go:195] Run: which crictl
	I1104 12:08:18.256606   86402 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1104 12:08:18.256564   86402 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1104 12:08:18.256631   86402 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1104 12:08:18.256632   86402 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1104 12:08:18.256632   86402 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1104 12:08:18.256690   86402 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1104 12:08:18.256657   86402 ssh_runner.go:195] Run: which crictl
	I1104 12:08:18.256703   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1104 12:08:18.256691   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1104 12:08:18.256738   86402 ssh_runner.go:195] Run: which crictl
	I1104 12:08:18.256658   86402 ssh_runner.go:195] Run: which crictl
	I1104 12:08:18.264837   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1104 12:08:18.265836   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1104 12:08:18.349896   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1104 12:08:18.349935   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1104 12:08:18.350014   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1104 12:08:18.350077   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1104 12:08:18.368533   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1104 12:08:18.371302   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1104 12:08:18.371393   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1104 12:08:18.496042   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1104 12:08:18.496121   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1104 12:08:18.509196   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1104 12:08:18.509339   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1104 12:08:18.509247   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1104 12:08:18.509348   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1104 12:08:18.513943   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1104 12:08:18.645867   86402 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1104 12:08:18.649173   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1104 12:08:18.649276   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1104 12:08:18.656159   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1104 12:08:18.656193   86402 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1104 12:08:18.660309   86402 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1104 12:08:18.660384   86402 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1104 12:08:18.719995   86402 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1104 12:08:18.720033   86402 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1104 12:08:18.728304   86402 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1104 12:08:18.867880   86402 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1104 12:08:19.009342   86402 cache_images.go:92] duration metric: took 1.092257593s to LoadCachedImages
	W1104 12:08:19.009448   86402 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	I1104 12:08:19.009469   86402 kubeadm.go:934] updating node { 192.168.50.180 8443 v1.20.0 crio true true} ...
	I1104 12:08:19.009590   86402 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-589257 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.180
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-589257 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
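The ExecStart line above becomes the kubelet systemd drop-in, scp'd below as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. To see exactly what the node ends up running (a sketch):
    systemctl cat kubelet                  # unit plus the 10-kubeadm.conf drop-in
    systemctl show -p ExecStart kubelet    # the effective kubelet command line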
	I1104 12:08:19.009671   86402 ssh_runner.go:195] Run: crio config
	I1104 12:08:19.054831   86402 cni.go:84] Creating CNI manager for ""
	I1104 12:08:19.054850   86402 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1104 12:08:19.054863   86402 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1104 12:08:19.054880   86402 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.180 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-589257 NodeName:old-k8s-version-589257 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.180"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.180 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1104 12:08:19.055049   86402 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.180
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-589257"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.180
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.180"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
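The generated kubeadm config above is written to the guest as /var/tmp/minikube/kubeadm.yaml.new (the 2123-byte scp below). One way to sanity-check it by hand, assuming the binary path used elsewhere in this log:
    sudo cat /var/tmp/minikube/kubeadm.yaml.new
    sudo /var/lib/minikube/binaries/v1.20.0/kubeadm init phase preflight --config /var/tmp/minikube/kubeadm.yaml.new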
	I1104 12:08:19.055125   86402 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1104 12:08:19.065804   86402 binaries.go:44] Found k8s binaries, skipping transfer
	I1104 12:08:19.065888   86402 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1104 12:08:19.075491   86402 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1104 12:08:19.092371   86402 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1104 12:08:19.108896   86402 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1104 12:08:19.127622   86402 ssh_runner.go:195] Run: grep 192.168.50.180	control-plane.minikube.internal$ /etc/hosts
	I1104 12:08:19.131597   86402 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.180	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1104 12:08:19.145142   86402 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1104 12:08:19.284780   86402 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1104 12:08:19.303843   86402 certs.go:68] Setting up /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/old-k8s-version-589257 for IP: 192.168.50.180
	I1104 12:08:19.303872   86402 certs.go:194] generating shared ca certs ...
	I1104 12:08:19.303894   86402 certs.go:226] acquiring lock for ca certs: {Name:mk4fb0469da6697654d0290fd25edddd463f47c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 12:08:19.304084   86402 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.key
	I1104 12:08:19.304148   86402 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.key
	I1104 12:08:19.304161   86402 certs.go:256] generating profile certs ...
	I1104 12:08:19.304280   86402 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/old-k8s-version-589257/client.key
	I1104 12:08:19.304347   86402 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/old-k8s-version-589257/apiserver.key.b78bafdb
	I1104 12:08:19.304401   86402 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/old-k8s-version-589257/proxy-client.key
	I1104 12:08:19.304549   86402 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218.pem (1338 bytes)
	W1104 12:08:19.304590   86402 certs.go:480] ignoring /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218_empty.pem, impossibly tiny 0 bytes
	I1104 12:08:19.304608   86402 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem (1675 bytes)
	I1104 12:08:19.304659   86402 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem (1078 bytes)
	I1104 12:08:19.304702   86402 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem (1123 bytes)
	I1104 12:08:19.304729   86402 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem (1679 bytes)
	I1104 12:08:19.304794   86402 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem (1708 bytes)
	I1104 12:08:19.305479   86402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1104 12:08:19.341333   86402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1104 12:08:19.375179   86402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1104 12:08:19.410128   86402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1104 12:08:19.452565   86402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/old-k8s-version-589257/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1104 12:08:19.493404   86402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/old-k8s-version-589257/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1104 12:08:19.521178   86402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/old-k8s-version-589257/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1104 12:08:19.550524   86402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/old-k8s-version-589257/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1104 12:08:19.574903   86402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1104 12:08:19.599308   86402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218.pem --> /usr/share/ca-certificates/27218.pem (1338 bytes)
	I1104 12:08:19.627107   86402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem --> /usr/share/ca-certificates/272182.pem (1708 bytes)
	I1104 12:08:19.657121   86402 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1104 12:08:19.679087   86402 ssh_runner.go:195] Run: openssl version
	I1104 12:08:19.687115   86402 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1104 12:08:19.702537   86402 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1104 12:08:19.707340   86402 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  4 10:38 /usr/share/ca-certificates/minikubeCA.pem
	I1104 12:08:19.707408   86402 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1104 12:08:19.714955   86402 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1104 12:08:19.727883   86402 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/27218.pem && ln -fs /usr/share/ca-certificates/27218.pem /etc/ssl/certs/27218.pem"
	I1104 12:08:19.739690   86402 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/27218.pem
	I1104 12:08:19.744600   86402 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  4 10:49 /usr/share/ca-certificates/27218.pem
	I1104 12:08:19.744656   86402 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/27218.pem
	I1104 12:08:19.750324   86402 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/27218.pem /etc/ssl/certs/51391683.0"
	I1104 12:08:19.760988   86402 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/272182.pem && ln -fs /usr/share/ca-certificates/272182.pem /etc/ssl/certs/272182.pem"
	I1104 12:08:19.772634   86402 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/272182.pem
	I1104 12:08:19.777504   86402 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  4 10:49 /usr/share/ca-certificates/272182.pem
	I1104 12:08:19.777580   86402 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/272182.pem
	I1104 12:08:19.783660   86402 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/272182.pem /etc/ssl/certs/3ec20f2e.0"
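The *.0 names above are OpenSSL subject-hash symlinks: the hash printed by openssl x509 -hash becomes the link name under /etc/ssl/certs. For example, matching the b5213941.0 link created above:
    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
    ls -l /etc/ssl/certs/b5213941.0                                           # symlink created for minikubeCA.pem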
	I1104 12:08:19.795483   86402 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1104 12:08:19.800327   86402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1104 12:08:19.806346   86402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1104 12:08:19.813920   86402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1104 12:08:19.820358   86402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1104 12:08:19.826359   86402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1104 12:08:19.832467   86402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
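	The openssl calls above install each CA into the hashed certs directory (a hash-named symlink per certificate) and then verify that none of the cluster certificates expires within the next 24 hours (-checkend 86400). Below is a minimal Go sketch of both steps; the helper names installCA and expiresSoon are hypothetical, and it assumes openssl and ln are on the guest's PATH.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// installCA mirrors the ssh_runner steps above: compute the OpenSSL subject
	// hash of a PEM file and symlink <hash>.0 in the hashed certs directory so
	// TLS libraries that scan /etc/ssl/certs can find the CA.
	func installCA(pemPath, certsDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return fmt.Errorf("hashing %s: %w", pemPath, err)
		}
		hash := strings.TrimSpace(string(out))
		return exec.Command("ln", "-fs", pemPath, certsDir+"/"+hash+".0").Run()
	}

	// expiresSoon mirrors the `-checkend 86400` runs above: openssl exits
	// non-zero when the certificate will expire within the given window
	// (or cannot be read at all).
	func expiresSoon(certPath string, seconds int) bool {
		err := exec.Command("openssl", "x509", "-noout", "-in", certPath,
			"-checkend", fmt.Sprint(seconds)).Run()
		return err != nil
	}

	func main() {
		if err := installCA("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
			fmt.Println("install failed:", err)
		}
		fmt.Println("cert expiring within 24h:",
			expiresSoon("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 86400))
	}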
	I1104 12:08:19.838902   86402 kubeadm.go:392] StartCluster: {Name:old-k8s-version-589257 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-589257 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.180 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1104 12:08:19.839018   86402 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1104 12:08:19.839075   86402 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1104 12:08:19.880407   86402 cri.go:89] found id: ""
	I1104 12:08:19.880486   86402 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1104 12:08:19.891135   86402 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1104 12:08:19.891156   86402 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1104 12:08:19.891219   86402 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1104 12:08:19.901437   86402 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1104 12:08:19.902325   86402 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-589257" does not appear in /home/jenkins/minikube-integration/19906-19898/kubeconfig
	I1104 12:08:19.902941   86402 kubeconfig.go:62] /home/jenkins/minikube-integration/19906-19898/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-589257" cluster setting kubeconfig missing "old-k8s-version-589257" context setting]
	I1104 12:08:19.903879   86402 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19906-19898/kubeconfig: {Name:mk164657c178e6abe086acb5c1cadb969cb2b0cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 12:08:19.937877   86402 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1104 12:08:19.948669   86402 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.180
	I1104 12:08:19.948701   86402 kubeadm.go:1160] stopping kube-system containers ...
	I1104 12:08:19.948711   86402 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1104 12:08:19.948752   86402 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1104 12:08:19.988249   86402 cri.go:89] found id: ""
	I1104 12:08:19.988344   86402 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1104 12:08:20.006949   86402 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1104 12:08:20.020677   86402 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1104 12:08:20.020700   86402 kubeadm.go:157] found existing configuration files:
	
	I1104 12:08:20.020747   86402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1104 12:08:20.031509   86402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1104 12:08:20.031566   86402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1104 12:08:20.042229   86402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1104 12:08:20.054695   86402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1104 12:08:20.054810   86402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1104 12:08:20.067410   86402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1104 12:08:20.078639   86402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1104 12:08:20.078711   86402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1104 12:08:20.091357   86402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1104 12:08:20.100986   86402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1104 12:08:20.101071   86402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1104 12:08:20.110345   86402 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1104 12:08:20.119778   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1104 12:08:20.281637   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1104 12:08:21.006838   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1104 12:08:21.234671   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1104 12:08:21.335720   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1104 12:08:21.437522   86402 api_server.go:52] waiting for apiserver process to appear ...
	I1104 12:08:21.437615   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:21.929963   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:21.930522   85500 main.go:141] libmachine: (no-preload-908370) DBG | unable to find current IP address of domain no-preload-908370 in network mk-no-preload-908370
	I1104 12:08:21.930552   85500 main.go:141] libmachine: (no-preload-908370) DBG | I1104 12:08:21.930461   87562 retry.go:31] will retry after 2.171964123s: waiting for machine to come up
	I1104 12:08:24.103844   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:24.104303   85500 main.go:141] libmachine: (no-preload-908370) DBG | unable to find current IP address of domain no-preload-908370 in network mk-no-preload-908370
	I1104 12:08:24.104326   85500 main.go:141] libmachine: (no-preload-908370) DBG | I1104 12:08:24.104257   87562 retry.go:31] will retry after 2.838813818s: waiting for machine to come up
	I1104 12:08:22.056858   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:24.057127   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:23.351405   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:25.850834   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:21.938086   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:22.438198   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:22.938624   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:23.438021   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:23.938119   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:24.438470   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:24.937687   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:25.438045   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:25.937696   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:26.438585   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
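	The repeated pgrep runs above are a fixed-interval wait for the kube-apiserver process to appear after the control-plane phases were re-run. A rough Go sketch of such a loop follows; the waitForAPIServer helper and the 2-minute timeout are illustrative assumptions, not minikube's actual api_server.go code.

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForAPIServer re-runs the pgrep check from the log roughly every
	// 500ms until it exits 0 (process found) or the deadline passes.
	func waitForAPIServer(timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
				return nil // apiserver process has appeared
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("kube-apiserver did not appear within %s", timeout)
	}

	func main() {
		if err := waitForAPIServer(2 * time.Minute); err != nil {
			fmt.Println(err)
		}
	}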
	I1104 12:08:26.944977   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:26.945367   85500 main.go:141] libmachine: (no-preload-908370) DBG | unable to find current IP address of domain no-preload-908370 in network mk-no-preload-908370
	I1104 12:08:26.945395   85500 main.go:141] libmachine: (no-preload-908370) DBG | I1104 12:08:26.945349   87562 retry.go:31] will retry after 2.799785534s: waiting for machine to come up
	I1104 12:08:29.746349   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:29.746747   85500 main.go:141] libmachine: (no-preload-908370) Found IP for machine: 192.168.61.91
	I1104 12:08:29.746774   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has current primary IP address 192.168.61.91 and MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:29.746779   85500 main.go:141] libmachine: (no-preload-908370) Reserving static IP address...
	I1104 12:08:29.747195   85500 main.go:141] libmachine: (no-preload-908370) DBG | found host DHCP lease matching {name: "no-preload-908370", mac: "52:54:00:f8:66:d5", ip: "192.168.61.91"} in network mk-no-preload-908370: {Iface:virbr3 ExpiryTime:2024-11-04 13:08:23 +0000 UTC Type:0 Mac:52:54:00:f8:66:d5 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:no-preload-908370 Clientid:01:52:54:00:f8:66:d5}
	I1104 12:08:29.747218   85500 main.go:141] libmachine: (no-preload-908370) Reserved static IP address: 192.168.61.91
	I1104 12:08:29.747234   85500 main.go:141] libmachine: (no-preload-908370) DBG | skip adding static IP to network mk-no-preload-908370 - found existing host DHCP lease matching {name: "no-preload-908370", mac: "52:54:00:f8:66:d5", ip: "192.168.61.91"}
	I1104 12:08:29.747248   85500 main.go:141] libmachine: (no-preload-908370) DBG | Getting to WaitForSSH function...
	I1104 12:08:29.747258   85500 main.go:141] libmachine: (no-preload-908370) Waiting for SSH to be available...
	I1104 12:08:29.749405   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:29.749694   85500 main.go:141] libmachine: (no-preload-908370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:66:d5", ip: ""} in network mk-no-preload-908370: {Iface:virbr3 ExpiryTime:2024-11-04 13:08:23 +0000 UTC Type:0 Mac:52:54:00:f8:66:d5 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:no-preload-908370 Clientid:01:52:54:00:f8:66:d5}
	I1104 12:08:29.749728   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined IP address 192.168.61.91 and MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:29.749887   85500 main.go:141] libmachine: (no-preload-908370) DBG | Using SSH client type: external
	I1104 12:08:29.749908   85500 main.go:141] libmachine: (no-preload-908370) DBG | Using SSH private key: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/no-preload-908370/id_rsa (-rw-------)
	I1104 12:08:29.749933   85500 main.go:141] libmachine: (no-preload-908370) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.91 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19906-19898/.minikube/machines/no-preload-908370/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1104 12:08:29.749951   85500 main.go:141] libmachine: (no-preload-908370) DBG | About to run SSH command:
	I1104 12:08:29.749966   85500 main.go:141] libmachine: (no-preload-908370) DBG | exit 0
	I1104 12:08:29.873121   85500 main.go:141] libmachine: (no-preload-908370) DBG | SSH cmd err, output: <nil>: 
	I1104 12:08:29.873472   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetConfigRaw
	I1104 12:08:29.874081   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetIP
	I1104 12:08:29.876737   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:29.877127   85500 main.go:141] libmachine: (no-preload-908370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:66:d5", ip: ""} in network mk-no-preload-908370: {Iface:virbr3 ExpiryTime:2024-11-04 13:08:23 +0000 UTC Type:0 Mac:52:54:00:f8:66:d5 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:no-preload-908370 Clientid:01:52:54:00:f8:66:d5}
	I1104 12:08:29.877155   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined IP address 192.168.61.91 and MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:29.877473   85500 profile.go:143] Saving config to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/no-preload-908370/config.json ...
	I1104 12:08:29.877717   85500 machine.go:93] provisionDockerMachine start ...
	I1104 12:08:29.877740   85500 main.go:141] libmachine: (no-preload-908370) Calling .DriverName
	I1104 12:08:29.877936   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHHostname
	I1104 12:08:29.880272   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:29.880522   85500 main.go:141] libmachine: (no-preload-908370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:66:d5", ip: ""} in network mk-no-preload-908370: {Iface:virbr3 ExpiryTime:2024-11-04 13:08:23 +0000 UTC Type:0 Mac:52:54:00:f8:66:d5 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:no-preload-908370 Clientid:01:52:54:00:f8:66:d5}
	I1104 12:08:29.880543   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined IP address 192.168.61.91 and MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:29.880718   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHPort
	I1104 12:08:29.880883   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHKeyPath
	I1104 12:08:29.881048   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHKeyPath
	I1104 12:08:29.881186   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHUsername
	I1104 12:08:29.881338   85500 main.go:141] libmachine: Using SSH client type: native
	I1104 12:08:29.881511   85500 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.91 22 <nil> <nil>}
	I1104 12:08:29.881524   85500 main.go:141] libmachine: About to run SSH command:
	hostname
	I1104 12:08:29.989431   85500 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1104 12:08:29.989460   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetMachineName
	I1104 12:08:29.989725   85500 buildroot.go:166] provisioning hostname "no-preload-908370"
	I1104 12:08:29.989757   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetMachineName
	I1104 12:08:29.989974   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHHostname
	I1104 12:08:29.992679   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:29.993028   85500 main.go:141] libmachine: (no-preload-908370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:66:d5", ip: ""} in network mk-no-preload-908370: {Iface:virbr3 ExpiryTime:2024-11-04 13:08:23 +0000 UTC Type:0 Mac:52:54:00:f8:66:d5 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:no-preload-908370 Clientid:01:52:54:00:f8:66:d5}
	I1104 12:08:29.993057   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined IP address 192.168.61.91 and MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:29.993222   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHPort
	I1104 12:08:29.993425   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHKeyPath
	I1104 12:08:29.993553   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHKeyPath
	I1104 12:08:29.993683   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHUsername
	I1104 12:08:29.993817   85500 main.go:141] libmachine: Using SSH client type: native
	I1104 12:08:29.994000   85500 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.91 22 <nil> <nil>}
	I1104 12:08:29.994016   85500 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-908370 && echo "no-preload-908370" | sudo tee /etc/hostname
	I1104 12:08:30.118321   85500 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-908370
	
	I1104 12:08:30.118361   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHHostname
	I1104 12:08:30.121095   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:30.121475   85500 main.go:141] libmachine: (no-preload-908370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:66:d5", ip: ""} in network mk-no-preload-908370: {Iface:virbr3 ExpiryTime:2024-11-04 13:08:23 +0000 UTC Type:0 Mac:52:54:00:f8:66:d5 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:no-preload-908370 Clientid:01:52:54:00:f8:66:d5}
	I1104 12:08:30.121509   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined IP address 192.168.61.91 and MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:30.121697   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHPort
	I1104 12:08:30.121866   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHKeyPath
	I1104 12:08:30.122040   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHKeyPath
	I1104 12:08:30.122176   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHUsername
	I1104 12:08:30.122343   85500 main.go:141] libmachine: Using SSH client type: native
	I1104 12:08:30.122525   85500 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.91 22 <nil> <nil>}
	I1104 12:08:30.122547   85500 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-908370' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-908370/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-908370' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1104 12:08:26.557368   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:29.056377   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:28.349510   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:30.350431   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:26.937831   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:27.438442   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:27.938240   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:28.438463   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:28.937958   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:29.437676   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:29.938298   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:30.438423   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:30.937953   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:31.438075   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:30.237340   85500 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1104 12:08:30.237370   85500 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19906-19898/.minikube CaCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19906-19898/.minikube}
	I1104 12:08:30.237413   85500 buildroot.go:174] setting up certificates
	I1104 12:08:30.237429   85500 provision.go:84] configureAuth start
	I1104 12:08:30.237446   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetMachineName
	I1104 12:08:30.237725   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetIP
	I1104 12:08:30.240026   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:30.240350   85500 main.go:141] libmachine: (no-preload-908370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:66:d5", ip: ""} in network mk-no-preload-908370: {Iface:virbr3 ExpiryTime:2024-11-04 13:08:23 +0000 UTC Type:0 Mac:52:54:00:f8:66:d5 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:no-preload-908370 Clientid:01:52:54:00:f8:66:d5}
	I1104 12:08:30.240380   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined IP address 192.168.61.91 and MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:30.240472   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHHostname
	I1104 12:08:30.242777   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:30.243101   85500 main.go:141] libmachine: (no-preload-908370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:66:d5", ip: ""} in network mk-no-preload-908370: {Iface:virbr3 ExpiryTime:2024-11-04 13:08:23 +0000 UTC Type:0 Mac:52:54:00:f8:66:d5 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:no-preload-908370 Clientid:01:52:54:00:f8:66:d5}
	I1104 12:08:30.243119   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined IP address 192.168.61.91 and MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:30.243302   85500 provision.go:143] copyHostCerts
	I1104 12:08:30.243358   85500 exec_runner.go:144] found /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem, removing ...
	I1104 12:08:30.243368   85500 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem
	I1104 12:08:30.243427   85500 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem (1078 bytes)
	I1104 12:08:30.243532   85500 exec_runner.go:144] found /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem, removing ...
	I1104 12:08:30.243542   85500 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem
	I1104 12:08:30.243565   85500 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem (1123 bytes)
	I1104 12:08:30.243635   85500 exec_runner.go:144] found /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem, removing ...
	I1104 12:08:30.243643   85500 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem
	I1104 12:08:30.243661   85500 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem (1679 bytes)
	I1104 12:08:30.243719   85500 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem org=jenkins.no-preload-908370 san=[127.0.0.1 192.168.61.91 localhost minikube no-preload-908370]
	I1104 12:08:30.515270   85500 provision.go:177] copyRemoteCerts
	I1104 12:08:30.515350   85500 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1104 12:08:30.515381   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHHostname
	I1104 12:08:30.518651   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:30.519188   85500 main.go:141] libmachine: (no-preload-908370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:66:d5", ip: ""} in network mk-no-preload-908370: {Iface:virbr3 ExpiryTime:2024-11-04 13:08:23 +0000 UTC Type:0 Mac:52:54:00:f8:66:d5 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:no-preload-908370 Clientid:01:52:54:00:f8:66:d5}
	I1104 12:08:30.519218   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined IP address 192.168.61.91 and MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:30.519420   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHPort
	I1104 12:08:30.519600   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHKeyPath
	I1104 12:08:30.519777   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHUsername
	I1104 12:08:30.519896   85500 sshutil.go:53] new ssh client: &{IP:192.168.61.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/no-preload-908370/id_rsa Username:docker}
	I1104 12:08:30.603170   85500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1104 12:08:30.626226   85500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1104 12:08:30.649353   85500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1104 12:08:30.684759   85500 provision.go:87] duration metric: took 447.313588ms to configureAuth
	I1104 12:08:30.684789   85500 buildroot.go:189] setting minikube options for container-runtime
	I1104 12:08:30.684962   85500 config.go:182] Loaded profile config "no-preload-908370": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1104 12:08:30.685029   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHHostname
	I1104 12:08:30.687429   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:30.687815   85500 main.go:141] libmachine: (no-preload-908370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:66:d5", ip: ""} in network mk-no-preload-908370: {Iface:virbr3 ExpiryTime:2024-11-04 13:08:23 +0000 UTC Type:0 Mac:52:54:00:f8:66:d5 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:no-preload-908370 Clientid:01:52:54:00:f8:66:d5}
	I1104 12:08:30.687840   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined IP address 192.168.61.91 and MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:30.688015   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHPort
	I1104 12:08:30.688192   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHKeyPath
	I1104 12:08:30.688325   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHKeyPath
	I1104 12:08:30.688471   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHUsername
	I1104 12:08:30.688640   85500 main.go:141] libmachine: Using SSH client type: native
	I1104 12:08:30.688830   85500 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.91 22 <nil> <nil>}
	I1104 12:08:30.688848   85500 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1104 12:08:30.919118   85500 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1104 12:08:30.919142   85500 machine.go:96] duration metric: took 1.041410402s to provisionDockerMachine
	I1104 12:08:30.919156   85500 start.go:293] postStartSetup for "no-preload-908370" (driver="kvm2")
	I1104 12:08:30.919169   85500 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1104 12:08:30.919200   85500 main.go:141] libmachine: (no-preload-908370) Calling .DriverName
	I1104 12:08:30.919513   85500 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1104 12:08:30.919538   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHHostname
	I1104 12:08:30.922075   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:30.922485   85500 main.go:141] libmachine: (no-preload-908370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:66:d5", ip: ""} in network mk-no-preload-908370: {Iface:virbr3 ExpiryTime:2024-11-04 13:08:23 +0000 UTC Type:0 Mac:52:54:00:f8:66:d5 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:no-preload-908370 Clientid:01:52:54:00:f8:66:d5}
	I1104 12:08:30.922510   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined IP address 192.168.61.91 and MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:30.922615   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHPort
	I1104 12:08:30.922823   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHKeyPath
	I1104 12:08:30.922991   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHUsername
	I1104 12:08:30.923107   85500 sshutil.go:53] new ssh client: &{IP:192.168.61.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/no-preload-908370/id_rsa Username:docker}
	I1104 12:08:31.007598   85500 ssh_runner.go:195] Run: cat /etc/os-release
	I1104 12:08:31.011558   85500 info.go:137] Remote host: Buildroot 2023.02.9
	I1104 12:08:31.011588   85500 filesync.go:126] Scanning /home/jenkins/minikube-integration/19906-19898/.minikube/addons for local assets ...
	I1104 12:08:31.011665   85500 filesync.go:126] Scanning /home/jenkins/minikube-integration/19906-19898/.minikube/files for local assets ...
	I1104 12:08:31.011766   85500 filesync.go:149] local asset: /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem -> 272182.pem in /etc/ssl/certs
	I1104 12:08:31.011859   85500 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1104 12:08:31.020788   85500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem --> /etc/ssl/certs/272182.pem (1708 bytes)
	I1104 12:08:31.044379   85500 start.go:296] duration metric: took 125.209775ms for postStartSetup
	I1104 12:08:31.044414   85500 fix.go:56] duration metric: took 19.154609071s for fixHost
	I1104 12:08:31.044442   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHHostname
	I1104 12:08:31.047152   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:31.047426   85500 main.go:141] libmachine: (no-preload-908370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:66:d5", ip: ""} in network mk-no-preload-908370: {Iface:virbr3 ExpiryTime:2024-11-04 13:08:23 +0000 UTC Type:0 Mac:52:54:00:f8:66:d5 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:no-preload-908370 Clientid:01:52:54:00:f8:66:d5}
	I1104 12:08:31.047461   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined IP address 192.168.61.91 and MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:31.047639   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHPort
	I1104 12:08:31.047829   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHKeyPath
	I1104 12:08:31.047976   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHKeyPath
	I1104 12:08:31.048138   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHUsername
	I1104 12:08:31.048296   85500 main.go:141] libmachine: Using SSH client type: native
	I1104 12:08:31.048464   85500 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.91 22 <nil> <nil>}
	I1104 12:08:31.048474   85500 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1104 12:08:31.157723   85500 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730722111.115015995
	
	I1104 12:08:31.157747   85500 fix.go:216] guest clock: 1730722111.115015995
	I1104 12:08:31.157758   85500 fix.go:229] Guest: 2024-11-04 12:08:31.115015995 +0000 UTC Remote: 2024-11-04 12:08:31.044427312 +0000 UTC m=+350.890212897 (delta=70.588683ms)
	I1104 12:08:31.157829   85500 fix.go:200] guest clock delta is within tolerance: 70.588683ms
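	The guest/host clock comparison above reads `date +%s.%N` over SSH and accepts the host if the resulting skew stays within a small tolerance. A simplified Go sketch is below; clockDelta is a hypothetical helper (not minikube's fix.go), and float parsing is precise enough for the millisecond-level check shown in the log.

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// clockDelta parses the guest's `date +%s.%N` output and reports how far
	// the guest clock is from the supplied host time.
	func clockDelta(guestOutput string, host time.Time) (time.Duration, error) {
		secs, err := strconv.ParseFloat(strings.TrimSpace(guestOutput), 64)
		if err != nil {
			return 0, err
		}
		guest := time.Unix(0, int64(secs*float64(time.Second)))
		return host.Sub(guest), nil
	}

	func main() {
		// Timestamp taken from the SSH output in the log above.
		delta, err := clockDelta("1730722111.115015995", time.Now())
		fmt.Println("guest clock delta:", delta, err)
	}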
	I1104 12:08:31.157841   85500 start.go:83] releasing machines lock for "no-preload-908370", held for 19.268070408s
	I1104 12:08:31.157875   85500 main.go:141] libmachine: (no-preload-908370) Calling .DriverName
	I1104 12:08:31.158131   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetIP
	I1104 12:08:31.160806   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:31.161159   85500 main.go:141] libmachine: (no-preload-908370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:66:d5", ip: ""} in network mk-no-preload-908370: {Iface:virbr3 ExpiryTime:2024-11-04 13:08:23 +0000 UTC Type:0 Mac:52:54:00:f8:66:d5 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:no-preload-908370 Clientid:01:52:54:00:f8:66:d5}
	I1104 12:08:31.161191   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined IP address 192.168.61.91 and MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:31.161371   85500 main.go:141] libmachine: (no-preload-908370) Calling .DriverName
	I1104 12:08:31.161907   85500 main.go:141] libmachine: (no-preload-908370) Calling .DriverName
	I1104 12:08:31.162092   85500 main.go:141] libmachine: (no-preload-908370) Calling .DriverName
	I1104 12:08:31.162174   85500 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1104 12:08:31.162217   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHHostname
	I1104 12:08:31.162444   85500 ssh_runner.go:195] Run: cat /version.json
	I1104 12:08:31.162470   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHHostname
	I1104 12:08:31.165069   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:31.165316   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:31.165505   85500 main.go:141] libmachine: (no-preload-908370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:66:d5", ip: ""} in network mk-no-preload-908370: {Iface:virbr3 ExpiryTime:2024-11-04 13:08:23 +0000 UTC Type:0 Mac:52:54:00:f8:66:d5 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:no-preload-908370 Clientid:01:52:54:00:f8:66:d5}
	I1104 12:08:31.165532   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined IP address 192.168.61.91 and MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:31.165656   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHPort
	I1104 12:08:31.165771   85500 main.go:141] libmachine: (no-preload-908370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:66:d5", ip: ""} in network mk-no-preload-908370: {Iface:virbr3 ExpiryTime:2024-11-04 13:08:23 +0000 UTC Type:0 Mac:52:54:00:f8:66:d5 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:no-preload-908370 Clientid:01:52:54:00:f8:66:d5}
	I1104 12:08:31.165795   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined IP address 192.168.61.91 and MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:31.165842   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHKeyPath
	I1104 12:08:31.166006   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHPort
	I1104 12:08:31.166024   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHUsername
	I1104 12:08:31.166186   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHKeyPath
	I1104 12:08:31.166183   85500 sshutil.go:53] new ssh client: &{IP:192.168.61.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/no-preload-908370/id_rsa Username:docker}
	I1104 12:08:31.166327   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHUsername
	I1104 12:08:31.166449   85500 sshutil.go:53] new ssh client: &{IP:192.168.61.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/no-preload-908370/id_rsa Username:docker}
	I1104 12:08:31.267746   85500 ssh_runner.go:195] Run: systemctl --version
	I1104 12:08:31.273307   85500 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1104 12:08:31.410198   85500 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1104 12:08:31.416652   85500 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1104 12:08:31.416726   85500 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1104 12:08:31.432260   85500 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1104 12:08:31.432288   85500 start.go:495] detecting cgroup driver to use...
	I1104 12:08:31.432345   85500 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1104 12:08:31.453134   85500 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1104 12:08:31.467457   85500 docker.go:217] disabling cri-docker service (if available) ...
	I1104 12:08:31.467516   85500 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1104 12:08:31.481392   85500 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1104 12:08:31.495740   85500 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1104 12:08:31.617549   85500 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1104 12:08:31.802455   85500 docker.go:233] disabling docker service ...
	I1104 12:08:31.802511   85500 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1104 12:08:31.815534   85500 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1104 12:08:31.827495   85500 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1104 12:08:31.938344   85500 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1104 12:08:32.042827   85500 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1104 12:08:32.056126   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1104 12:08:32.074274   85500 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1104 12:08:32.074337   85500 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:08:32.084061   85500 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1104 12:08:32.084138   85500 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:08:32.093533   85500 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:08:32.104351   85500 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:08:32.113753   85500 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1104 12:08:32.123391   85500 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:08:32.133089   85500 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:08:32.149073   85500 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:08:32.159888   85500 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1104 12:08:32.169208   85500 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1104 12:08:32.169279   85500 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1104 12:08:32.181319   85500 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1104 12:08:32.192472   85500 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1104 12:08:32.300710   85500 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1104 12:08:32.386906   85500 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1104 12:08:32.386980   85500 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1104 12:08:32.391498   85500 start.go:563] Will wait 60s for crictl version
	I1104 12:08:32.391554   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:08:32.395471   85500 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1104 12:08:32.439094   85500 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1104 12:08:32.439168   85500 ssh_runner.go:195] Run: crio --version
	I1104 12:08:32.466609   85500 ssh_runner.go:195] Run: crio --version
	I1104 12:08:32.499305   85500 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1104 12:08:32.500825   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetIP
	I1104 12:08:32.503461   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:32.503827   85500 main.go:141] libmachine: (no-preload-908370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:66:d5", ip: ""} in network mk-no-preload-908370: {Iface:virbr3 ExpiryTime:2024-11-04 13:08:23 +0000 UTC Type:0 Mac:52:54:00:f8:66:d5 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:no-preload-908370 Clientid:01:52:54:00:f8:66:d5}
	I1104 12:08:32.503857   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined IP address 192.168.61.91 and MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:32.504039   85500 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1104 12:08:32.508082   85500 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1104 12:08:32.520202   85500 kubeadm.go:883] updating cluster {Name:no-preload-908370 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-908370 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.91 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1104 12:08:32.520359   85500 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1104 12:08:32.520402   85500 ssh_runner.go:195] Run: sudo crictl images --output json
	I1104 12:08:32.553752   85500 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1104 12:08:32.553781   85500 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.2 registry.k8s.io/kube-controller-manager:v1.31.2 registry.k8s.io/kube-scheduler:v1.31.2 registry.k8s.io/kube-proxy:v1.31.2 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1104 12:08:32.553844   85500 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1104 12:08:32.553844   85500 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1104 12:08:32.553868   85500 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.2
	I1104 12:08:32.553853   85500 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.2
	I1104 12:08:32.553886   85500 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1104 12:08:32.553925   85500 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1104 12:08:32.553969   85500 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1104 12:08:32.553978   85500 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.2
	I1104 12:08:32.555506   85500 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1104 12:08:32.555518   85500 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.2
	I1104 12:08:32.555510   85500 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1104 12:08:32.555513   85500 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.2
	I1104 12:08:32.555591   85500 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1104 12:08:32.555601   85500 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.2
	I1104 12:08:32.555514   85500 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I1104 12:08:32.555658   85500 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I1104 12:08:32.706982   85500 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I1104 12:08:32.707334   85500 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.2
	I1104 12:08:32.712904   85500 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.2
	I1104 12:08:32.721917   85500 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I1104 12:08:32.727829   85500 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.2
	I1104 12:08:32.741130   85500 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I1104 12:08:32.743716   85500 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.2
	I1104 12:08:32.796406   85500 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I1104 12:08:32.796448   85500 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I1104 12:08:32.796502   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:08:32.814658   85500 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.2" does not exist at hash "9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173" in container runtime
	I1104 12:08:32.814697   85500 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.2
	I1104 12:08:32.814735   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:08:32.828308   85500 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.2" does not exist at hash "847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856" in container runtime
	I1104 12:08:32.828362   85500 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.2
	I1104 12:08:32.828416   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:08:32.882090   85500 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I1104 12:08:32.882140   85500 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I1104 12:08:32.882205   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:08:32.886473   85500 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.2" does not exist at hash "0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503" in container runtime
	I1104 12:08:32.886518   85500 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1104 12:08:32.886567   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:08:32.956331   85500 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.2" needs transfer: "registry.k8s.io/kube-proxy:v1.31.2" does not exist at hash "505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38" in container runtime
	I1104 12:08:32.956394   85500 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.2
	I1104 12:08:32.956414   85500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1104 12:08:32.956462   85500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1104 12:08:32.956427   85500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1104 12:08:32.956521   85500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1104 12:08:32.956425   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:08:32.956506   85500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1104 12:08:33.061683   85500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1104 12:08:33.061723   85500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1104 12:08:33.061752   85500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1104 12:08:33.061790   85500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1104 12:08:33.061836   85500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1104 12:08:33.061893   85500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1104 12:08:33.168519   85500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1104 12:08:33.168596   85500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1104 12:08:33.187540   85500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1104 12:08:33.188933   85500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1104 12:08:33.189015   85500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1104 12:08:33.199281   85500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1104 12:08:33.285086   85500 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2
	I1104 12:08:33.285145   85500 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2
	I1104 12:08:33.285245   85500 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1104 12:08:33.285247   85500 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1104 12:08:33.307647   85500 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I1104 12:08:33.307769   85500 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I1104 12:08:33.307784   85500 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2
	I1104 12:08:33.307818   85500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1104 12:08:33.307869   85500 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1104 12:08:33.312697   85500 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I1104 12:08:33.312808   85500 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I1104 12:08:33.314341   85500 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.2 (exists)
	I1104 12:08:33.314358   85500 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1104 12:08:33.314396   85500 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1104 12:08:33.314535   85500 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.2 (exists)
	I1104 12:08:33.319449   85500 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.2 (exists)
	I1104 12:08:33.319604   85500 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I1104 12:08:33.356390   85500 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I1104 12:08:33.356478   85500 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2
	I1104 12:08:33.356569   85500 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.2
	I1104 12:08:33.512915   85500 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1104 12:08:31.057314   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:33.059599   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:32.350656   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:34.352338   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:31.938577   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:32.438561   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:32.938188   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:33.437856   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:33.938433   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:34.438381   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:34.938164   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:35.438120   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:35.937802   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:36.438365   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:35.736963   85500 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2: (2.42254522s)
	I1104 12:08:35.736994   85500 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2 from cache
	I1104 12:08:35.737014   85500 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1104 12:08:35.737027   85500 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.2: (2.380435224s)
	I1104 12:08:35.737058   85500 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.2 (exists)
	I1104 12:08:35.737063   85500 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1104 12:08:35.737104   85500 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.224165247s)
	I1104 12:08:35.737156   85500 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1104 12:08:35.737191   85500 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1104 12:08:35.737267   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:08:37.693026   85500 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2: (1.955928101s)
	I1104 12:08:37.693065   85500 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2 from cache
	I1104 12:08:37.693086   85500 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1104 12:08:37.693047   85500 ssh_runner.go:235] Completed: which crictl: (1.955763498s)
	I1104 12:08:37.693168   85500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1104 12:08:37.693131   85500 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1104 12:08:39.156860   85500 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2: (1.463570619s)
	I1104 12:08:39.156894   85500 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2 from cache
	I1104 12:08:39.156922   85500 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I1104 12:08:39.156930   85500 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.463741565s)
	I1104 12:08:39.156980   85500 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I1104 12:08:39.156998   85500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1104 12:08:35.625930   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:38.057567   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:36.850619   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:38.851157   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:40.852272   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:36.938295   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:37.437646   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:37.937807   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:38.438623   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:38.938662   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:39.438288   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:39.938048   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:40.438404   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:40.938494   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:41.437875   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:42.701724   85500 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.544718982s)
	I1104 12:08:42.701751   85500 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I1104 12:08:42.701771   85500 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I1104 12:08:42.701810   85500 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I1104 12:08:42.701826   85500 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.544784275s)
	I1104 12:08:42.701912   85500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1104 12:08:44.666599   85500 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.964646885s)
	I1104 12:08:44.666653   85500 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1104 12:08:44.666723   85500 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (1.964896366s)
	I1104 12:08:44.666744   85500 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I1104 12:08:44.666748   85500 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1104 12:08:44.666765   85500 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.2
	I1104 12:08:44.666807   85500 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2
	I1104 12:08:44.671475   85500 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1104 12:08:40.556827   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:42.557662   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:45.058481   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:43.351505   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:45.851360   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:41.938001   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:42.438702   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:42.938239   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:43.438469   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:43.938465   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:44.437744   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:44.938478   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:45.437757   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:45.938035   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:46.438173   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:46.627407   85500 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2: (1.960571593s)
	I1104 12:08:46.627437   85500 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2 from cache
	I1104 12:08:46.627473   85500 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1104 12:08:46.627537   85500 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1104 12:08:47.273537   85500 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1104 12:08:47.273578   85500 cache_images.go:123] Successfully loaded all cached images
	I1104 12:08:47.273583   85500 cache_images.go:92] duration metric: took 14.719789832s to LoadCachedImages
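	Note: the LoadCachedImages phase above reduces to one flow per image: ask the runtime for the image ID, drop the stale reference if the expected hash is missing, then stream the cached archive in. A condensed shell sketch of that flow, using the same commands and /var/lib/minikube/images paths this log records (the IMG/ARCHIVE pair is one example from this run; minikube runs these steps concurrently per image):
	# Per-image load path as recorded above.
	IMG=registry.k8s.io/kube-apiserver:v1.31.2
	ARCHIVE=/var/lib/minikube/images/kube-apiserver_v1.31.2
	# 1. Ask the runtime (via podman) whether the image already exists at the expected ID.
	sudo podman image inspect --format '{{.Id}}' "$IMG" || true
	# 2. If not, remove any stale reference so the load cannot collide with an old tag.
	sudo /usr/bin/crictl rmi "$IMG" || true
	# 3. Stream the cached tarball from the minikube image cache into the runtime.
	sudo podman load -i "$ARCHIVE"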
	I1104 12:08:47.273594   85500 kubeadm.go:934] updating node { 192.168.61.91 8443 v1.31.2 crio true true} ...
	I1104 12:08:47.273686   85500 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-908370 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.91
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:no-preload-908370 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
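	Note: the kubelet unit fragment above is not applied in place here; as later entries show, minikube writes it out over SSH as a systemd unit plus a 10-kubeadm.conf drop-in and then restarts the service. A minimal sketch of that sequence, with tee standing in for the scp step so the sketch is self-contained (the exact split between kubelet.service and the drop-in is handled by minikube):
	# Install the generated kubelet flags as a drop-in and (re)start the service.
	sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null <<'EOF'
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-908370 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.91
	EOF
	sudo systemctl daemon-reload
	sudo systemctl start kubelet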
	I1104 12:08:47.273747   85500 ssh_runner.go:195] Run: crio config
	I1104 12:08:47.319888   85500 cni.go:84] Creating CNI manager for ""
	I1104 12:08:47.319916   85500 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1104 12:08:47.319929   85500 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1104 12:08:47.319952   85500 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.91 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-908370 NodeName:no-preload-908370 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.91"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.91 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1104 12:08:47.320098   85500 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.91
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-908370"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.91"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.91"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1104 12:08:47.320185   85500 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1104 12:08:47.330284   85500 binaries.go:44] Found k8s binaries, skipping transfer
	I1104 12:08:47.330352   85500 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1104 12:08:47.340015   85500 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I1104 12:08:47.356601   85500 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1104 12:08:47.371327   85500 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2294 bytes)
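	Note: the kubeadm config generated above is staged as kubeadm.yaml.new and, because an existing installation is detected, applied phase by phase rather than through a full kubeadm init; the exact sequence appears further down in this log. Condensed into a sketch, with the binaries path this report uses:
	# Restart path: promote the staged config and run the individual kubeadm init phases.
	K8S_BIN=/var/lib/minikube/binaries/v1.31.2
	sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	sudo env PATH="$K8S_BIN:$PATH" kubeadm init phase certs all         --config /var/tmp/minikube/kubeadm.yaml
	sudo env PATH="$K8S_BIN:$PATH" kubeadm init phase kubeconfig all    --config /var/tmp/minikube/kubeadm.yaml
	sudo env PATH="$K8S_BIN:$PATH" kubeadm init phase kubelet-start     --config /var/tmp/minikube/kubeadm.yaml
	sudo env PATH="$K8S_BIN:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml
	sudo env PATH="$K8S_BIN:$PATH" kubeadm init phase etcd local        --config /var/tmp/minikube/kubeadm.yaml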
	I1104 12:08:47.387251   85500 ssh_runner.go:195] Run: grep 192.168.61.91	control-plane.minikube.internal$ /etc/hosts
	I1104 12:08:47.391041   85500 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.91	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1104 12:08:47.402283   85500 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1104 12:08:47.527723   85500 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1104 12:08:47.544017   85500 certs.go:68] Setting up /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/no-preload-908370 for IP: 192.168.61.91
	I1104 12:08:47.544041   85500 certs.go:194] generating shared ca certs ...
	I1104 12:08:47.544060   85500 certs.go:226] acquiring lock for ca certs: {Name:mk4fb0469da6697654d0290fd25edddd463f47c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 12:08:47.544244   85500 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.key
	I1104 12:08:47.544309   85500 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.key
	I1104 12:08:47.544322   85500 certs.go:256] generating profile certs ...
	I1104 12:08:47.544412   85500 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/no-preload-908370/client.key
	I1104 12:08:47.544485   85500 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/no-preload-908370/apiserver.key.890cb7f7
	I1104 12:08:47.544522   85500 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/no-preload-908370/proxy-client.key
	I1104 12:08:47.544626   85500 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218.pem (1338 bytes)
	W1104 12:08:47.544654   85500 certs.go:480] ignoring /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218_empty.pem, impossibly tiny 0 bytes
	I1104 12:08:47.544663   85500 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem (1675 bytes)
	I1104 12:08:47.544685   85500 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem (1078 bytes)
	I1104 12:08:47.544706   85500 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem (1123 bytes)
	I1104 12:08:47.544726   85500 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem (1679 bytes)
	I1104 12:08:47.544774   85500 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem (1708 bytes)
	I1104 12:08:47.545439   85500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1104 12:08:47.588488   85500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1104 12:08:47.631341   85500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1104 12:08:47.666571   85500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1104 12:08:47.698703   85500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/no-preload-908370/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1104 12:08:47.725285   85500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/no-preload-908370/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1104 12:08:47.748890   85500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/no-preload-908370/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1104 12:08:47.775589   85500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/no-preload-908370/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1104 12:08:47.799507   85500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1104 12:08:47.823383   85500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218.pem --> /usr/share/ca-certificates/27218.pem (1338 bytes)
	I1104 12:08:47.847515   85500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem --> /usr/share/ca-certificates/272182.pem (1708 bytes)
	I1104 12:08:47.869937   85500 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1104 12:08:47.886413   85500 ssh_runner.go:195] Run: openssl version
	I1104 12:08:47.892041   85500 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/272182.pem && ln -fs /usr/share/ca-certificates/272182.pem /etc/ssl/certs/272182.pem"
	I1104 12:08:47.901942   85500 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/272182.pem
	I1104 12:08:47.906128   85500 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  4 10:49 /usr/share/ca-certificates/272182.pem
	I1104 12:08:47.906182   85500 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/272182.pem
	I1104 12:08:47.911506   85500 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/272182.pem /etc/ssl/certs/3ec20f2e.0"
	I1104 12:08:47.921614   85500 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1104 12:08:47.932358   85500 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1104 12:08:47.936742   85500 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  4 10:38 /usr/share/ca-certificates/minikubeCA.pem
	I1104 12:08:47.936801   85500 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1104 12:08:47.942544   85500 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1104 12:08:47.953063   85500 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/27218.pem && ln -fs /usr/share/ca-certificates/27218.pem /etc/ssl/certs/27218.pem"
	I1104 12:08:47.963293   85500 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/27218.pem
	I1104 12:08:47.967487   85500 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  4 10:49 /usr/share/ca-certificates/27218.pem
	I1104 12:08:47.967547   85500 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/27218.pem
	I1104 12:08:47.972898   85500 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/27218.pem /etc/ssl/certs/51391683.0"
	I1104 12:08:47.983089   85500 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1104 12:08:47.987532   85500 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1104 12:08:47.993296   85500 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1104 12:08:47.999021   85500 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1104 12:08:48.004741   85500 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1104 12:08:48.010227   85500 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1104 12:08:48.015795   85500 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
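	Note: the certificate handling above uses two standard openssl idioms: hashing a CA so it can be symlinked under the name OpenSSL expects in /etc/ssl/certs, and -checkend to confirm at least 24 hours of remaining validity. A short sketch of both, against the same files this log touches:
	# Hash a CA certificate and link it under its hash name (as done above for minikubeCA.pem).
	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"
	# Verify a client certificate stays valid for the next 86400 seconds (24 h); exit status 1 means it expires sooner.
	openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	  && echo "valid for at least 24h" || echo "expires within 24h"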
	I1104 12:08:48.021356   85500 kubeadm.go:392] StartCluster: {Name:no-preload-908370 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-908370 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.91 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1104 12:08:48.021431   85500 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1104 12:08:48.021471   85500 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1104 12:08:48.057729   85500 cri.go:89] found id: ""
	I1104 12:08:48.057805   85500 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1104 12:08:48.067591   85500 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1104 12:08:48.067610   85500 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1104 12:08:48.067663   85500 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1104 12:08:48.076604   85500 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1104 12:08:48.077987   85500 kubeconfig.go:125] found "no-preload-908370" server: "https://192.168.61.91:8443"
	I1104 12:08:48.080042   85500 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1104 12:08:48.089796   85500 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.91
	I1104 12:08:48.089826   85500 kubeadm.go:1160] stopping kube-system containers ...
	I1104 12:08:48.089838   85500 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1104 12:08:48.089886   85500 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1104 12:08:48.126920   85500 cri.go:89] found id: ""
	I1104 12:08:48.126998   85500 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1104 12:08:48.143409   85500 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1104 12:08:48.152783   85500 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1104 12:08:48.152809   85500 kubeadm.go:157] found existing configuration files:
	
	I1104 12:08:48.152858   85500 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1104 12:08:48.161458   85500 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1104 12:08:48.161542   85500 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1104 12:08:48.170361   85500 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1104 12:08:48.179217   85500 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1104 12:08:48.179272   85500 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1104 12:08:48.187834   85500 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1104 12:08:48.196025   85500 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1104 12:08:48.196079   85500 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1104 12:08:48.204809   85500 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1104 12:08:48.213280   85500 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1104 12:08:48.213338   85500 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
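	Note: the stale-config cleanup above applies one rule per kubeconfig file: keep it only if it already points at the expected control-plane endpoint, otherwise remove it so the kubeconfig phase can regenerate it. A compact sketch of that loop, using the same grep/rm commands the log records:
	# Remove any kubeconfig that does not reference the expected control-plane endpoint.
	ENDPOINT=https://control-plane.minikube.internal:8443
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  if ! sudo grep -q "$ENDPOINT" "/etc/kubernetes/$f"; then
	    sudo rm -f "/etc/kubernetes/$f"
	  fi
	done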
	I1104 12:08:48.222672   85500 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1104 12:08:48.232374   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1104 12:08:48.328999   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1104 12:08:49.920988   85500 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.591954434s)
	I1104 12:08:49.921028   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1104 12:08:50.121679   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1104 12:08:50.181412   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1104 12:08:47.558137   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:49.559576   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:48.349974   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:50.350855   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:46.938016   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:47.438229   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:47.938447   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:48.437950   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:48.938450   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:49.437785   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:49.938444   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:50.438413   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:50.938514   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:51.438658   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:50.253614   85500 api_server.go:52] waiting for apiserver process to appear ...
	I1104 12:08:50.253693   85500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:50.754467   85500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:51.254553   85500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:51.271229   85500 api_server.go:72] duration metric: took 1.017613016s to wait for apiserver process to appear ...
	I1104 12:08:51.271255   85500 api_server.go:88] waiting for apiserver healthz status ...
	I1104 12:08:51.271278   85500 api_server.go:253] Checking apiserver healthz at https://192.168.61.91:8443/healthz ...
	I1104 12:08:51.271794   85500 api_server.go:269] stopped: https://192.168.61.91:8443/healthz: Get "https://192.168.61.91:8443/healthz": dial tcp 192.168.61.91:8443: connect: connection refused
	I1104 12:08:51.771551   85500 api_server.go:253] Checking apiserver healthz at https://192.168.61.91:8443/healthz ...
	I1104 12:08:54.499268   85500 api_server.go:279] https://192.168.61.91:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1104 12:08:54.499296   85500 api_server.go:103] status: https://192.168.61.91:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1104 12:08:54.499310   85500 api_server.go:253] Checking apiserver healthz at https://192.168.61.91:8443/healthz ...
	I1104 12:08:54.617672   85500 api_server.go:279] https://192.168.61.91:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1104 12:08:54.617699   85500 api_server.go:103] status: https://192.168.61.91:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1104 12:08:54.771942   85500 api_server.go:253] Checking apiserver healthz at https://192.168.61.91:8443/healthz ...
	I1104 12:08:54.776588   85500 api_server.go:279] https://192.168.61.91:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1104 12:08:54.776615   85500 api_server.go:103] status: https://192.168.61.91:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1104 12:08:52.056678   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:54.057081   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:55.272332   85500 api_server.go:253] Checking apiserver healthz at https://192.168.61.91:8443/healthz ...
	I1104 12:08:55.276594   85500 api_server.go:279] https://192.168.61.91:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1104 12:08:55.276621   85500 api_server.go:103] status: https://192.168.61.91:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1104 12:08:55.771423   85500 api_server.go:253] Checking apiserver healthz at https://192.168.61.91:8443/healthz ...
	I1104 12:08:55.776881   85500 api_server.go:279] https://192.168.61.91:8443/healthz returned 200:
	ok
	I1104 12:08:55.783842   85500 api_server.go:141] control plane version: v1.31.2
	I1104 12:08:55.783869   85500 api_server.go:131] duration metric: took 4.512606898s to wait for apiserver health ...
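	Note: the wait above polls https://192.168.61.91:8443/healthz until it returns 200; the early 403 is the anonymous user being denied, and the 500 responses list post-start hooks that have not finished yet. Outside the test harness the same endpoint can be queried by hand, for example with an authenticated raw request (a sketch; the verbose form mirrors the per-check output shown above):
	# Query the health endpoint with cluster credentials instead of anonymously.
	kubectl --kubeconfig /etc/kubernetes/admin.conf get --raw '/healthz?verbose'
	# Or hit it directly; -k skips TLS verification, and an unauthenticated call may return 403 as seen above.
	curl -k https://192.168.61.91:8443/healthz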
	I1104 12:08:55.783877   85500 cni.go:84] Creating CNI manager for ""
	I1104 12:08:55.783883   85500 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1104 12:08:55.785665   85500 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1104 12:08:52.351019   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:54.850354   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:51.938323   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:52.438464   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:52.937754   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:53.438442   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:53.938586   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:54.438288   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:54.938444   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:55.438391   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:55.938546   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:56.438433   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:55.787083   85500 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1104 12:08:55.801764   85500 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
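The 496-byte conflist itself is not reproduced in the log; the sketch below writes a generic bridge CNI configuration of the same general shape (the subnet, bridge name and plugin list are assumptions, not the exact file minikube generates):

    package main

    import (
        "log"
        "os"
    )

    // A generic bridge + portmap CNI chain; all values here are illustrative only.
    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}
        },
        {"type": "portmap", "capabilities": {"portMappings": true}}
      ]
    }`

    func main() {
        if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
            log.Fatal(err)
        }
        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
            log.Fatal(err)
        }
    }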
	I1104 12:08:55.828371   85500 system_pods.go:43] waiting for kube-system pods to appear ...
	I1104 12:08:55.847602   85500 system_pods.go:59] 8 kube-system pods found
	I1104 12:08:55.847653   85500 system_pods.go:61] "coredns-7c65d6cfc9-vv4kq" [f2518f86-9653-4e98-9193-9d2a76838117] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1104 12:08:55.847666   85500 system_pods.go:61] "etcd-no-preload-908370" [cc23ebc2-c49f-403c-8128-98bb08459592] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1104 12:08:55.847679   85500 system_pods.go:61] "kube-apiserver-no-preload-908370" [37532b3e-f683-4420-a5e4-280744f2bdf9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1104 12:08:55.847695   85500 system_pods.go:61] "kube-controller-manager-no-preload-908370" [81d30255-758e-4661-bec2-c6aa6773923a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1104 12:08:55.847707   85500 system_pods.go:61] "kube-proxy-w9hbz" [9d494697-ff2b-4600-9c11-b704de9be2a3] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1104 12:08:55.847724   85500 system_pods.go:61] "kube-scheduler-no-preload-908370" [9b0ff34e-1795-4f7c-b511-822a02c4af7b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1104 12:08:55.847733   85500 system_pods.go:61] "metrics-server-6867b74b74-2lxlg" [bf328856-ad19-47b3-a40d-282cd4fdec4b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1104 12:08:55.847743   85500 system_pods.go:61] "storage-provisioner" [d11c9416-6236-4c81-9626-d5e040acea8a] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1104 12:08:55.847753   85500 system_pods.go:74] duration metric: took 19.357387ms to wait for pod list to return data ...
	I1104 12:08:55.847762   85500 node_conditions.go:102] verifying NodePressure condition ...
	I1104 12:08:55.856783   85500 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1104 12:08:55.856820   85500 node_conditions.go:123] node cpu capacity is 2
	I1104 12:08:55.856834   85500 node_conditions.go:105] duration metric: took 9.065755ms to run NodePressure ...
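The capacity and pressure figures above are read from the node status; a minimal client-go sketch that inspects the same fields is shown below (the kubeconfig path is a placeholder, and error handling is kept deliberately terse):

    package main

    import (
        "context"
        "fmt"
        "log"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Placeholder path; the test run uses its own per-profile kubeconfig.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            log.Fatal(err)
        }
        for _, n := range nodes.Items {
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
            // The "NodePressure" style checks reduce to these node conditions.
            for _, c := range n.Status.Conditions {
                switch c.Type {
                case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
                    fmt.Printf("  %s=%s\n", c.Type, c.Status)
                }
            }
        }
    }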
	I1104 12:08:55.856856   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1104 12:08:56.143012   85500 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1104 12:08:56.148006   85500 kubeadm.go:739] kubelet initialised
	I1104 12:08:56.148026   85500 kubeadm.go:740] duration metric: took 4.987292ms waiting for restarted kubelet to initialise ...
	I1104 12:08:56.148034   85500 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1104 12:08:56.152359   85500 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-vv4kq" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:56.156700   85500 pod_ready.go:98] node "no-preload-908370" hosting pod "coredns-7c65d6cfc9-vv4kq" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-908370" has status "Ready":"False"
	I1104 12:08:56.156725   85500 pod_ready.go:82] duration metric: took 4.341093ms for pod "coredns-7c65d6cfc9-vv4kq" in "kube-system" namespace to be "Ready" ...
	E1104 12:08:56.156734   85500 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-908370" hosting pod "coredns-7c65d6cfc9-vv4kq" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-908370" has status "Ready":"False"
	I1104 12:08:56.156741   85500 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-908370" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:56.161402   85500 pod_ready.go:98] node "no-preload-908370" hosting pod "etcd-no-preload-908370" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-908370" has status "Ready":"False"
	I1104 12:08:56.161431   85500 pod_ready.go:82] duration metric: took 4.681838ms for pod "etcd-no-preload-908370" in "kube-system" namespace to be "Ready" ...
	E1104 12:08:56.161440   85500 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-908370" hosting pod "etcd-no-preload-908370" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-908370" has status "Ready":"False"
	I1104 12:08:56.161447   85500 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-908370" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:56.165738   85500 pod_ready.go:98] node "no-preload-908370" hosting pod "kube-apiserver-no-preload-908370" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-908370" has status "Ready":"False"
	I1104 12:08:56.165756   85500 pod_ready.go:82] duration metric: took 4.301197ms for pod "kube-apiserver-no-preload-908370" in "kube-system" namespace to be "Ready" ...
	E1104 12:08:56.165764   85500 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-908370" hosting pod "kube-apiserver-no-preload-908370" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-908370" has status "Ready":"False"
	I1104 12:08:56.165770   85500 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-908370" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:56.232568   85500 pod_ready.go:98] node "no-preload-908370" hosting pod "kube-controller-manager-no-preload-908370" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-908370" has status "Ready":"False"
	I1104 12:08:56.232598   85500 pod_ready.go:82] duration metric: took 66.818411ms for pod "kube-controller-manager-no-preload-908370" in "kube-system" namespace to be "Ready" ...
	E1104 12:08:56.232610   85500 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-908370" hosting pod "kube-controller-manager-no-preload-908370" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-908370" has status "Ready":"False"
	I1104 12:08:56.232620   85500 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-w9hbz" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:56.633774   85500 pod_ready.go:98] node "no-preload-908370" hosting pod "kube-proxy-w9hbz" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-908370" has status "Ready":"False"
	I1104 12:08:56.633804   85500 pod_ready.go:82] duration metric: took 401.173552ms for pod "kube-proxy-w9hbz" in "kube-system" namespace to be "Ready" ...
	E1104 12:08:56.633815   85500 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-908370" hosting pod "kube-proxy-w9hbz" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-908370" has status "Ready":"False"
	I1104 12:08:56.633824   85500 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-908370" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:57.032392   85500 pod_ready.go:98] node "no-preload-908370" hosting pod "kube-scheduler-no-preload-908370" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-908370" has status "Ready":"False"
	I1104 12:08:57.032419   85500 pod_ready.go:82] duration metric: took 398.58729ms for pod "kube-scheduler-no-preload-908370" in "kube-system" namespace to be "Ready" ...
	E1104 12:08:57.032431   85500 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-908370" hosting pod "kube-scheduler-no-preload-908370" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-908370" has status "Ready":"False"
	I1104 12:08:57.032439   85500 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:57.431940   85500 pod_ready.go:98] node "no-preload-908370" hosting pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-908370" has status "Ready":"False"
	I1104 12:08:57.431976   85500 pod_ready.go:82] duration metric: took 399.525162ms for pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace to be "Ready" ...
	E1104 12:08:57.431987   85500 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-908370" hosting pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-908370" has status "Ready":"False"
	I1104 12:08:57.431997   85500 pod_ready.go:39] duration metric: took 1.283953089s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
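For reference, each of the per-pod waits above boils down to polling the pod's Ready condition until it flips to True or the node-not-Ready short-circuit fires. A small helper-package sketch of that loop, assuming a client-go clientset built as in the previous sketch (function name, namespace handling and interval are illustrative):

    package readiness

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // WaitPodReady polls a single pod until its Ready condition is True or the
    // timeout expires. It deliberately keeps retrying on transient Get errors.
    func WaitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
            if err == nil {
                for _, c := range pod.Status.Conditions {
                    if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                        return nil
                    }
                }
            }
            time.Sleep(400 * time.Millisecond)
        }
        return fmt.Errorf("pod %s/%s not Ready within %s", ns, name, timeout)
    }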
	I1104 12:08:57.432014   85500 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1104 12:08:57.444821   85500 ops.go:34] apiserver oom_adj: -16
	I1104 12:08:57.444845   85500 kubeadm.go:597] duration metric: took 9.377227288s to restartPrimaryControlPlane
	I1104 12:08:57.444857   85500 kubeadm.go:394] duration metric: took 9.423506415s to StartCluster
	I1104 12:08:57.444879   85500 settings.go:142] acquiring lock: {Name:mk5550833c1f1a4ab4fbb2bf42ff8bd7f6341220 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 12:08:57.444965   85500 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19906-19898/kubeconfig
	I1104 12:08:57.446715   85500 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19906-19898/kubeconfig: {Name:mk164657c178e6abe086acb5c1cadb969cb2b0cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 12:08:57.446981   85500 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.91 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1104 12:08:57.447059   85500 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1104 12:08:57.447172   85500 addons.go:69] Setting storage-provisioner=true in profile "no-preload-908370"
	I1104 12:08:57.447193   85500 addons.go:234] Setting addon storage-provisioner=true in "no-preload-908370"
	W1104 12:08:57.447202   85500 addons.go:243] addon storage-provisioner should already be in state true
	I1104 12:08:57.447207   85500 config.go:182] Loaded profile config "no-preload-908370": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1104 12:08:57.447237   85500 host.go:66] Checking if "no-preload-908370" exists ...
	I1104 12:08:57.447234   85500 addons.go:69] Setting default-storageclass=true in profile "no-preload-908370"
	I1104 12:08:57.447321   85500 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-908370"
	I1104 12:08:57.447222   85500 addons.go:69] Setting metrics-server=true in profile "no-preload-908370"
	I1104 12:08:57.447418   85500 addons.go:234] Setting addon metrics-server=true in "no-preload-908370"
	W1104 12:08:57.447431   85500 addons.go:243] addon metrics-server should already be in state true
	I1104 12:08:57.447461   85500 host.go:66] Checking if "no-preload-908370" exists ...
	I1104 12:08:57.447708   85500 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:08:57.447792   85500 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:08:57.447813   85500 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:08:57.447748   85500 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:08:57.447896   85500 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:08:57.447853   85500 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:08:57.449013   85500 out.go:177] * Verifying Kubernetes components...
	I1104 12:08:57.450774   85500 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1104 12:08:57.469657   85500 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42565
	I1104 12:08:57.470180   85500 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:08:57.470801   85500 main.go:141] libmachine: Using API Version  1
	I1104 12:08:57.470830   85500 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:08:57.471277   85500 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:08:57.471873   85500 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:08:57.471924   85500 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:08:57.485026   85500 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33323
	I1104 12:08:57.485330   85500 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43999
	I1104 12:08:57.485604   85500 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:08:57.485772   85500 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:08:57.486328   85500 main.go:141] libmachine: Using API Version  1
	I1104 12:08:57.486363   85500 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:08:57.486442   85500 main.go:141] libmachine: Using API Version  1
	I1104 12:08:57.486471   85500 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:08:57.486735   85500 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:08:57.486847   85500 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:08:57.487059   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetState
	I1104 12:08:57.487337   85500 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:08:57.487401   85500 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:08:57.490138   85500 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45761
	I1104 12:08:57.490611   85500 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:08:57.490705   85500 addons.go:234] Setting addon default-storageclass=true in "no-preload-908370"
	W1104 12:08:57.490724   85500 addons.go:243] addon default-storageclass should already be in state true
	I1104 12:08:57.490748   85500 host.go:66] Checking if "no-preload-908370" exists ...
	I1104 12:08:57.491098   85500 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:08:57.491140   85500 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:08:57.491153   85500 main.go:141] libmachine: Using API Version  1
	I1104 12:08:57.491177   85500 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:08:57.491549   85500 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:08:57.491718   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetState
	I1104 12:08:57.493600   85500 main.go:141] libmachine: (no-preload-908370) Calling .DriverName
	I1104 12:08:57.495883   85500 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1104 12:08:57.497200   85500 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1104 12:08:57.497217   85500 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1104 12:08:57.497245   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHHostname
	I1104 12:08:57.500402   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:57.500934   85500 main.go:141] libmachine: (no-preload-908370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:66:d5", ip: ""} in network mk-no-preload-908370: {Iface:virbr3 ExpiryTime:2024-11-04 13:08:23 +0000 UTC Type:0 Mac:52:54:00:f8:66:d5 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:no-preload-908370 Clientid:01:52:54:00:f8:66:d5}
	I1104 12:08:57.500960   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined IP address 192.168.61.91 and MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:57.501276   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHPort
	I1104 12:08:57.501483   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHKeyPath
	I1104 12:08:57.501626   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHUsername
	I1104 12:08:57.501775   85500 sshutil.go:53] new ssh client: &{IP:192.168.61.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/no-preload-908370/id_rsa Username:docker}
	I1104 12:08:57.508615   85500 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37243
	I1104 12:08:57.509102   85500 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:08:57.509582   85500 main.go:141] libmachine: Using API Version  1
	I1104 12:08:57.509606   85500 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:08:57.509948   85500 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:08:57.510115   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetState
	I1104 12:08:57.510809   85500 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40519
	I1104 12:08:57.511134   85500 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:08:57.511818   85500 main.go:141] libmachine: Using API Version  1
	I1104 12:08:57.511836   85500 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:08:57.511868   85500 main.go:141] libmachine: (no-preload-908370) Calling .DriverName
	I1104 12:08:57.512486   85500 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:08:57.513456   85500 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:08:57.513500   85500 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:08:57.513921   85500 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1104 12:08:57.515417   85500 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1104 12:08:57.515434   85500 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1104 12:08:57.515461   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHHostname
	I1104 12:08:57.518867   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:57.519216   85500 main.go:141] libmachine: (no-preload-908370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:66:d5", ip: ""} in network mk-no-preload-908370: {Iface:virbr3 ExpiryTime:2024-11-04 13:08:23 +0000 UTC Type:0 Mac:52:54:00:f8:66:d5 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:no-preload-908370 Clientid:01:52:54:00:f8:66:d5}
	I1104 12:08:57.519241   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined IP address 192.168.61.91 and MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:57.519334   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHPort
	I1104 12:08:57.519523   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHKeyPath
	I1104 12:08:57.519662   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHUsername
	I1104 12:08:57.520124   85500 sshutil.go:53] new ssh client: &{IP:192.168.61.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/no-preload-908370/id_rsa Username:docker}
	I1104 12:08:57.529448   85500 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42427
	I1104 12:08:57.529979   85500 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:08:57.530374   85500 main.go:141] libmachine: Using API Version  1
	I1104 12:08:57.530389   85500 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:08:57.530756   85500 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:08:57.530889   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetState
	I1104 12:08:57.532430   85500 main.go:141] libmachine: (no-preload-908370) Calling .DriverName
	I1104 12:08:57.532832   85500 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1104 12:08:57.532843   85500 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1104 12:08:57.532857   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHHostname
	I1104 12:08:57.535429   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:57.535783   85500 main.go:141] libmachine: (no-preload-908370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:66:d5", ip: ""} in network mk-no-preload-908370: {Iface:virbr3 ExpiryTime:2024-11-04 13:08:23 +0000 UTC Type:0 Mac:52:54:00:f8:66:d5 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:no-preload-908370 Clientid:01:52:54:00:f8:66:d5}
	I1104 12:08:57.535809   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined IP address 192.168.61.91 and MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:57.535953   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHPort
	I1104 12:08:57.536148   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHKeyPath
	I1104 12:08:57.536245   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHUsername
	I1104 12:08:57.536388   85500 sshutil.go:53] new ssh client: &{IP:192.168.61.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/no-preload-908370/id_rsa Username:docker}
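The ssh clients created above are plain key-authenticated connections to the test VM; a minimal golang.org/x/crypto/ssh equivalent that runs one command over such a connection (like the `sudo systemctl start kubelet` that follows) looks roughly like this. Host, user and key path are taken from the log; ignoring the host key is an assumption made for brevity:

    package main

    import (
        "fmt"
        "log"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile("/home/jenkins/minikube-integration/19906-19898/.minikube/machines/no-preload-908370/id_rsa")
        if err != nil {
            log.Fatal(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            log.Fatal(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for a throwaway test VM
        }
        client, err := ssh.Dial("tcp", "192.168.61.91:22", cfg)
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            log.Fatal(err)
        }
        defer sess.Close()
        out, err := sess.CombinedOutput("sudo systemctl start kubelet")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Print(string(out))
    }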
	I1104 12:08:57.635571   85500 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1104 12:08:57.654984   85500 node_ready.go:35] waiting up to 6m0s for node "no-preload-908370" to be "Ready" ...
	I1104 12:08:57.722564   85500 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1104 12:08:57.768850   85500 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1104 12:08:57.791069   85500 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1104 12:08:57.791090   85500 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1104 12:08:57.875966   85500 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1104 12:08:57.875997   85500 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1104 12:08:57.929834   85500 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1104 12:08:57.929867   85500 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1104 12:08:58.017927   85500 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1104 12:08:58.732204   85500 main.go:141] libmachine: Making call to close driver server
	I1104 12:08:58.732235   85500 main.go:141] libmachine: (no-preload-908370) Calling .Close
	I1104 12:08:58.732586   85500 main.go:141] libmachine: Successfully made call to close driver server
	I1104 12:08:58.732614   85500 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 12:08:58.732624   85500 main.go:141] libmachine: Making call to close driver server
	I1104 12:08:58.732635   85500 main.go:141] libmachine: (no-preload-908370) Calling .Close
	I1104 12:08:58.732640   85500 main.go:141] libmachine: (no-preload-908370) DBG | Closing plugin on server side
	I1104 12:08:58.733045   85500 main.go:141] libmachine: Successfully made call to close driver server
	I1104 12:08:58.733108   85500 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 12:08:58.733084   85500 main.go:141] libmachine: (no-preload-908370) DBG | Closing plugin on server side
	I1104 12:08:58.736737   85500 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.014142064s)
	I1104 12:08:58.736783   85500 main.go:141] libmachine: Making call to close driver server
	I1104 12:08:58.736793   85500 main.go:141] libmachine: (no-preload-908370) Calling .Close
	I1104 12:08:58.737035   85500 main.go:141] libmachine: Successfully made call to close driver server
	I1104 12:08:58.737077   85500 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 12:08:58.737090   85500 main.go:141] libmachine: Making call to close driver server
	I1104 12:08:58.737100   85500 main.go:141] libmachine: (no-preload-908370) Calling .Close
	I1104 12:08:58.737737   85500 main.go:141] libmachine: (no-preload-908370) DBG | Closing plugin on server side
	I1104 12:08:58.737756   85500 main.go:141] libmachine: Successfully made call to close driver server
	I1104 12:08:58.737770   85500 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 12:08:58.740716   85500 main.go:141] libmachine: Making call to close driver server
	I1104 12:08:58.740735   85500 main.go:141] libmachine: (no-preload-908370) Calling .Close
	I1104 12:08:58.740963   85500 main.go:141] libmachine: (no-preload-908370) DBG | Closing plugin on server side
	I1104 12:08:58.740974   85500 main.go:141] libmachine: Successfully made call to close driver server
	I1104 12:08:58.740985   85500 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 12:08:58.987200   85500 main.go:141] libmachine: Making call to close driver server
	I1104 12:08:58.987227   85500 main.go:141] libmachine: (no-preload-908370) Calling .Close
	I1104 12:08:58.987657   85500 main.go:141] libmachine: Successfully made call to close driver server
	I1104 12:08:58.987667   85500 main.go:141] libmachine: (no-preload-908370) DBG | Closing plugin on server side
	I1104 12:08:58.987676   85500 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 12:08:58.987685   85500 main.go:141] libmachine: Making call to close driver server
	I1104 12:08:58.987708   85500 main.go:141] libmachine: (no-preload-908370) Calling .Close
	I1104 12:08:58.987991   85500 main.go:141] libmachine: Successfully made call to close driver server
	I1104 12:08:58.988006   85500 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 12:08:58.988018   85500 addons.go:475] Verifying addon metrics-server=true in "no-preload-908370"
	I1104 12:08:58.989756   85500 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1104 12:08:58.991022   85500 addons.go:510] duration metric: took 1.54397104s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
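Each addon enabled above is ultimately a kubectl apply of a manifest that was copied onto the node. Run locally on the VM, the equivalent invocation can be scripted as below; the paths mirror the log, but the exec wrapper is only an illustration, not minikube's ssh_runner:

    package main

    import (
        "fmt"
        "log"
        "os/exec"
    )

    func main() {
        // sudo accepts VAR=value arguments, which is exactly how the log invokes kubectl.
        cmd := exec.Command("sudo",
            "KUBECONFIG=/var/lib/minikube/kubeconfig",
            "/var/lib/minikube/binaries/v1.31.2/kubectl", "apply",
            "-f", "/etc/kubernetes/addons/storage-provisioner.yaml")
        out, err := cmd.CombinedOutput()
        if err != nil {
            log.Fatalf("%v\n%s", err, out)
        }
        fmt.Print(string(out))
    }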
	I1104 12:08:59.659284   85500 node_ready.go:53] node "no-preload-908370" has status "Ready":"False"
	I1104 12:08:56.057497   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:58.057767   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:56.850793   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:58.852058   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:56.938312   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:57.437920   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:57.937779   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:58.438511   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:58.938464   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:59.438423   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:59.938450   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:00.438108   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:00.938053   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:01.438356   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:02.158318   85500 node_ready.go:53] node "no-preload-908370" has status "Ready":"False"
	I1104 12:09:04.658719   85500 node_ready.go:53] node "no-preload-908370" has status "Ready":"False"
	I1104 12:09:05.159526   85500 node_ready.go:49] node "no-preload-908370" has status "Ready":"True"
	I1104 12:09:05.159553   85500 node_ready.go:38] duration metric: took 7.504528904s for node "no-preload-908370" to be "Ready" ...
	I1104 12:09:05.159564   85500 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1104 12:09:05.164838   85500 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-vv4kq" in "kube-system" namespace to be "Ready" ...
	I1104 12:09:05.173888   85500 pod_ready.go:93] pod "coredns-7c65d6cfc9-vv4kq" in "kube-system" namespace has status "Ready":"True"
	I1104 12:09:05.173909   85500 pod_ready.go:82] duration metric: took 9.046581ms for pod "coredns-7c65d6cfc9-vv4kq" in "kube-system" namespace to be "Ready" ...
	I1104 12:09:05.173919   85500 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-908370" in "kube-system" namespace to be "Ready" ...
	I1104 12:09:00.556225   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:02.556893   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:05.055827   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:01.351472   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:03.851990   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:01.938447   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:02.438441   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:02.938694   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:03.438467   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:03.938445   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:04.438137   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:04.937941   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:05.438441   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:05.937760   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:06.438704   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:05.680754   85500 pod_ready.go:93] pod "etcd-no-preload-908370" in "kube-system" namespace has status "Ready":"True"
	I1104 12:09:05.680778   85500 pod_ready.go:82] duration metric: took 506.849735ms for pod "etcd-no-preload-908370" in "kube-system" namespace to be "Ready" ...
	I1104 12:09:05.680804   85500 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-908370" in "kube-system" namespace to be "Ready" ...
	I1104 12:09:07.687108   85500 pod_ready.go:103] pod "kube-apiserver-no-preload-908370" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:09.687377   85500 pod_ready.go:103] pod "kube-apiserver-no-preload-908370" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:07.556024   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:10.055613   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:06.351230   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:08.351640   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:10.850364   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:06.937956   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:07.438323   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:07.938465   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:08.438437   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:08.937675   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:09.437868   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:09.938053   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:10.438467   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:10.938703   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:11.438436   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:10.687315   85500 pod_ready.go:93] pod "kube-apiserver-no-preload-908370" in "kube-system" namespace has status "Ready":"True"
	I1104 12:09:10.687338   85500 pod_ready.go:82] duration metric: took 5.006527478s for pod "kube-apiserver-no-preload-908370" in "kube-system" namespace to be "Ready" ...
	I1104 12:09:10.687348   85500 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-908370" in "kube-system" namespace to be "Ready" ...
	I1104 12:09:10.692554   85500 pod_ready.go:93] pod "kube-controller-manager-no-preload-908370" in "kube-system" namespace has status "Ready":"True"
	I1104 12:09:10.692583   85500 pod_ready.go:82] duration metric: took 5.227048ms for pod "kube-controller-manager-no-preload-908370" in "kube-system" namespace to be "Ready" ...
	I1104 12:09:10.692597   85500 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-w9hbz" in "kube-system" namespace to be "Ready" ...
	I1104 12:09:10.697109   85500 pod_ready.go:93] pod "kube-proxy-w9hbz" in "kube-system" namespace has status "Ready":"True"
	I1104 12:09:10.697132   85500 pod_ready.go:82] duration metric: took 4.525205ms for pod "kube-proxy-w9hbz" in "kube-system" namespace to be "Ready" ...
	I1104 12:09:10.697153   85500 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-908370" in "kube-system" namespace to be "Ready" ...
	I1104 12:09:10.701450   85500 pod_ready.go:93] pod "kube-scheduler-no-preload-908370" in "kube-system" namespace has status "Ready":"True"
	I1104 12:09:10.701472   85500 pod_ready.go:82] duration metric: took 4.310973ms for pod "kube-scheduler-no-preload-908370" in "kube-system" namespace to be "Ready" ...
	I1104 12:09:10.701483   85500 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace to be "Ready" ...
	I1104 12:09:12.708631   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:14.708772   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:12.056161   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:14.556380   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:12.850721   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:14.851608   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:11.938465   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:12.437963   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:12.938515   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:13.437754   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:13.937856   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:14.438729   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:14.938439   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:15.438421   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:15.938044   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:16.438456   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:17.209025   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:19.707595   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:17.056226   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:19.555918   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:17.350266   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:19.350329   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:16.937807   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:17.438266   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:17.938153   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:18.437829   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:18.938469   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:19.438336   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:19.938284   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:20.438073   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:20.937894   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:21.438135   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:09:21.438238   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:09:21.471463   86402 cri.go:89] found id: ""
	I1104 12:09:21.471495   86402 logs.go:282] 0 containers: []
	W1104 12:09:21.471507   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:09:21.471515   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:09:21.471568   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:09:21.509336   86402 cri.go:89] found id: ""
	I1104 12:09:21.509363   86402 logs.go:282] 0 containers: []
	W1104 12:09:21.509373   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:09:21.509381   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:09:21.509441   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:09:21.545963   86402 cri.go:89] found id: ""
	I1104 12:09:21.545987   86402 logs.go:282] 0 containers: []
	W1104 12:09:21.545995   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:09:21.546000   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:09:21.546059   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:09:21.580707   86402 cri.go:89] found id: ""
	I1104 12:09:21.580737   86402 logs.go:282] 0 containers: []
	W1104 12:09:21.580748   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:09:21.580755   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:09:21.580820   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:09:21.613763   86402 cri.go:89] found id: ""
	I1104 12:09:21.613791   86402 logs.go:282] 0 containers: []
	W1104 12:09:21.613801   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:09:21.613809   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:09:21.613872   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:09:21.646559   86402 cri.go:89] found id: ""
	I1104 12:09:21.646583   86402 logs.go:282] 0 containers: []
	W1104 12:09:21.646591   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:09:21.646597   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:09:21.646643   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:09:21.681439   86402 cri.go:89] found id: ""
	I1104 12:09:21.681467   86402 logs.go:282] 0 containers: []
	W1104 12:09:21.681479   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:09:21.681486   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:09:21.681554   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:09:21.708045   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:24.207686   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:22.055637   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:24.056458   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:21.350636   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:23.850852   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:21.713875   86402 cri.go:89] found id: ""
	I1104 12:09:21.713899   86402 logs.go:282] 0 containers: []
	W1104 12:09:21.713907   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:09:21.713915   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:09:21.713925   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:09:21.763882   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:09:21.763918   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:09:21.778590   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:09:21.778615   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:09:21.892208   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:09:21.892235   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:09:21.892250   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:09:21.965946   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:09:21.965984   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
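The container lookups in this retry loop come down to a quiet crictl listing filtered by name; a small sketch of that call is below. An empty result, which the log records as `found id: ""`, simply means no matching container has been created yet:

    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "strings"
    )

    func main() {
        // --quiet prints only container IDs, one per line; -a includes exited containers.
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name=kube-apiserver").Output()
        if err != nil {
            log.Fatal(err)
        }
        ids := strings.Fields(strings.TrimSpace(string(out)))
        fmt.Printf("%d kube-apiserver container(s): %v\n", len(ids), ids)
    }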
	I1104 12:09:24.502992   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:24.514899   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:09:24.514960   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:09:24.554466   86402 cri.go:89] found id: ""
	I1104 12:09:24.554491   86402 logs.go:282] 0 containers: []
	W1104 12:09:24.554501   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:09:24.554510   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:09:24.554567   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:09:24.591532   86402 cri.go:89] found id: ""
	I1104 12:09:24.591560   86402 logs.go:282] 0 containers: []
	W1104 12:09:24.591572   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:09:24.591580   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:09:24.591638   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:09:24.625436   86402 cri.go:89] found id: ""
	I1104 12:09:24.625467   86402 logs.go:282] 0 containers: []
	W1104 12:09:24.625478   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:09:24.625485   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:09:24.625544   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:09:24.658317   86402 cri.go:89] found id: ""
	I1104 12:09:24.658346   86402 logs.go:282] 0 containers: []
	W1104 12:09:24.658357   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:09:24.658364   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:09:24.658426   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:09:24.692811   86402 cri.go:89] found id: ""
	I1104 12:09:24.692839   86402 logs.go:282] 0 containers: []
	W1104 12:09:24.692850   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:09:24.692857   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:09:24.692917   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:09:24.729677   86402 cri.go:89] found id: ""
	I1104 12:09:24.729708   86402 logs.go:282] 0 containers: []
	W1104 12:09:24.729719   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:09:24.729726   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:09:24.729773   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:09:24.768575   86402 cri.go:89] found id: ""
	I1104 12:09:24.768598   86402 logs.go:282] 0 containers: []
	W1104 12:09:24.768608   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:09:24.768615   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:09:24.768681   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:09:24.802344   86402 cri.go:89] found id: ""
	I1104 12:09:24.802368   86402 logs.go:282] 0 containers: []
	W1104 12:09:24.802375   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:09:24.802383   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:09:24.802394   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:09:24.855882   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:09:24.855915   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:09:24.869199   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:09:24.869243   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:09:24.940720   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:09:24.940744   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:09:24.940758   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:09:25.016139   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:09:25.016177   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:09:26.208422   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:28.208568   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:26.557513   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:29.055769   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:26.350171   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:28.353001   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:30.851153   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:27.553297   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:27.566857   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:09:27.566913   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:09:27.599606   86402 cri.go:89] found id: ""
	I1104 12:09:27.599641   86402 logs.go:282] 0 containers: []
	W1104 12:09:27.599653   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:09:27.599661   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:09:27.599721   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:09:27.633818   86402 cri.go:89] found id: ""
	I1104 12:09:27.633841   86402 logs.go:282] 0 containers: []
	W1104 12:09:27.633849   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:09:27.633854   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:09:27.633907   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:09:27.668088   86402 cri.go:89] found id: ""
	I1104 12:09:27.668120   86402 logs.go:282] 0 containers: []
	W1104 12:09:27.668129   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:09:27.668135   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:09:27.668185   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:09:27.699401   86402 cri.go:89] found id: ""
	I1104 12:09:27.699433   86402 logs.go:282] 0 containers: []
	W1104 12:09:27.699445   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:09:27.699453   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:09:27.699511   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:09:27.731422   86402 cri.go:89] found id: ""
	I1104 12:09:27.731448   86402 logs.go:282] 0 containers: []
	W1104 12:09:27.731459   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:09:27.731466   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:09:27.731528   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:09:27.762808   86402 cri.go:89] found id: ""
	I1104 12:09:27.762839   86402 logs.go:282] 0 containers: []
	W1104 12:09:27.762850   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:09:27.762857   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:09:27.762917   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:09:27.794729   86402 cri.go:89] found id: ""
	I1104 12:09:27.794757   86402 logs.go:282] 0 containers: []
	W1104 12:09:27.794765   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:09:27.794771   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:09:27.794826   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:09:27.825694   86402 cri.go:89] found id: ""
	I1104 12:09:27.825716   86402 logs.go:282] 0 containers: []
	W1104 12:09:27.825724   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:09:27.825731   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:09:27.825742   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:09:27.862111   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:09:27.862140   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:09:27.911169   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:09:27.911204   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:09:27.924207   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:09:27.924232   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:09:27.995123   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:09:27.995153   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:09:27.995167   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:09:30.580831   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:30.594901   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:09:30.594959   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:09:30.630936   86402 cri.go:89] found id: ""
	I1104 12:09:30.630961   86402 logs.go:282] 0 containers: []
	W1104 12:09:30.630971   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:09:30.630979   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:09:30.631034   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:09:30.669288   86402 cri.go:89] found id: ""
	I1104 12:09:30.669311   86402 logs.go:282] 0 containers: []
	W1104 12:09:30.669320   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:09:30.669328   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:09:30.669388   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:09:30.706288   86402 cri.go:89] found id: ""
	I1104 12:09:30.706312   86402 logs.go:282] 0 containers: []
	W1104 12:09:30.706319   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:09:30.706325   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:09:30.706384   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:09:30.739027   86402 cri.go:89] found id: ""
	I1104 12:09:30.739057   86402 logs.go:282] 0 containers: []
	W1104 12:09:30.739069   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:09:30.739078   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:09:30.739137   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:09:30.772247   86402 cri.go:89] found id: ""
	I1104 12:09:30.772272   86402 logs.go:282] 0 containers: []
	W1104 12:09:30.772280   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:09:30.772286   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:09:30.772338   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:09:30.810327   86402 cri.go:89] found id: ""
	I1104 12:09:30.810360   86402 logs.go:282] 0 containers: []
	W1104 12:09:30.810370   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:09:30.810375   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:09:30.810426   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:09:30.842241   86402 cri.go:89] found id: ""
	I1104 12:09:30.842271   86402 logs.go:282] 0 containers: []
	W1104 12:09:30.842279   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:09:30.842285   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:09:30.842332   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:09:30.877003   86402 cri.go:89] found id: ""
	I1104 12:09:30.877032   86402 logs.go:282] 0 containers: []
	W1104 12:09:30.877043   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:09:30.877052   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:09:30.877077   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:09:30.925783   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:09:30.925816   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:09:30.939651   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:09:30.939680   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:09:31.029176   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:09:31.029210   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:09:31.029244   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:09:31.116311   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:09:31.116348   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
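	Every retry in the 86402 stream above repeats the same diagnostic pass: pgrep for a kube-apiserver process, per-component crictl listings (all of which come back empty), then gathering kubelet, dmesg, describe-nodes and CRI-O logs. A minimal sketch of running that pass by hand over minikube ssh follows; the profile name is a hypothetical placeholder, and the commands themselves are the ones shown in the Run: lines of this log.

	# PROFILE is a hypothetical placeholder; substitute the failing cluster's minikube profile.
	PROFILE=old-k8s-version-000000
	# Is any kube-apiserver process alive inside the node?
	minikube -p "$PROFILE" ssh -- 'sudo pgrep -xnf kube-apiserver.*minikube.* || echo "no apiserver process"'
	# Are there control-plane containers known to CRI-O, even exited ones?
	minikube -p "$PROFILE" ssh -- 'sudo crictl ps -a --quiet --name=kube-apiserver'
	minikube -p "$PROFILE" ssh -- 'sudo crictl ps -a --quiet --name=etcd'
	# The same log sources the test falls back to when those listings are empty.
	minikube -p "$PROFILE" ssh -- 'sudo journalctl -u kubelet -n 400'
	minikube -p "$PROFILE" ssh -- 'sudo journalctl -u crio -n 400'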
	I1104 12:09:30.708451   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:32.708661   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:31.056627   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:33.056743   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:35.057986   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:33.350420   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:35.351206   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
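	The three interleaved streams (85500, 86301, 85759) belong to separate parallel tests, each polling the Ready condition of its own metrics-server pod every few seconds. A one-shot equivalent of that poll with kubectl, using a pod name taken from this log and a hypothetical context name, would be:

	# The context name is a placeholder; each parallel test owns its own kubeconfig context.
	kubectl --context <profile-context> -n kube-system get pod metrics-server-6867b74b74-knfd4 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
	# Prints "False" while the pod is unready, matching the pod_ready.go:103 lines above.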
	I1104 12:09:33.653267   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:33.665813   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:09:33.665878   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:09:33.701812   86402 cri.go:89] found id: ""
	I1104 12:09:33.701839   86402 logs.go:282] 0 containers: []
	W1104 12:09:33.701852   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:09:33.701860   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:09:33.701922   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:09:33.738816   86402 cri.go:89] found id: ""
	I1104 12:09:33.738850   86402 logs.go:282] 0 containers: []
	W1104 12:09:33.738861   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:09:33.738868   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:09:33.738928   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:09:33.773936   86402 cri.go:89] found id: ""
	I1104 12:09:33.773960   86402 logs.go:282] 0 containers: []
	W1104 12:09:33.773968   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:09:33.773976   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:09:33.774031   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:09:33.808049   86402 cri.go:89] found id: ""
	I1104 12:09:33.808079   86402 logs.go:282] 0 containers: []
	W1104 12:09:33.808091   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:09:33.808098   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:09:33.808154   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:09:33.844276   86402 cri.go:89] found id: ""
	I1104 12:09:33.844303   86402 logs.go:282] 0 containers: []
	W1104 12:09:33.844314   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:09:33.844322   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:09:33.844443   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:09:33.879736   86402 cri.go:89] found id: ""
	I1104 12:09:33.879772   86402 logs.go:282] 0 containers: []
	W1104 12:09:33.879782   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:09:33.879788   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:09:33.879843   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:09:33.913717   86402 cri.go:89] found id: ""
	I1104 12:09:33.913750   86402 logs.go:282] 0 containers: []
	W1104 12:09:33.913761   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:09:33.913769   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:09:33.913832   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:09:33.949632   86402 cri.go:89] found id: ""
	I1104 12:09:33.949658   86402 logs.go:282] 0 containers: []
	W1104 12:09:33.949667   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:09:33.949677   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:09:33.949691   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:09:34.019770   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:09:34.019790   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:09:34.019806   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:09:34.101493   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:09:34.101524   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:09:34.146723   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:09:34.146751   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:09:34.196295   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:09:34.196338   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:09:35.207223   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:37.207576   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:39.208091   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:37.556228   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:39.556548   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:37.850907   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:39.852870   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:36.709951   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:36.724723   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:09:36.724782   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:09:36.777406   86402 cri.go:89] found id: ""
	I1104 12:09:36.777440   86402 logs.go:282] 0 containers: []
	W1104 12:09:36.777451   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:09:36.777459   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:09:36.777520   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:09:36.834486   86402 cri.go:89] found id: ""
	I1104 12:09:36.834516   86402 logs.go:282] 0 containers: []
	W1104 12:09:36.834527   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:09:36.834535   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:09:36.834641   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:09:36.868828   86402 cri.go:89] found id: ""
	I1104 12:09:36.868853   86402 logs.go:282] 0 containers: []
	W1104 12:09:36.868861   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:09:36.868867   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:09:36.868912   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:09:36.900942   86402 cri.go:89] found id: ""
	I1104 12:09:36.900972   86402 logs.go:282] 0 containers: []
	W1104 12:09:36.900980   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:09:36.900986   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:09:36.901043   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:09:36.933215   86402 cri.go:89] found id: ""
	I1104 12:09:36.933265   86402 logs.go:282] 0 containers: []
	W1104 12:09:36.933276   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:09:36.933282   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:09:36.933330   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:09:36.966753   86402 cri.go:89] found id: ""
	I1104 12:09:36.966776   86402 logs.go:282] 0 containers: []
	W1104 12:09:36.966784   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:09:36.966789   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:09:36.966850   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:09:37.000050   86402 cri.go:89] found id: ""
	I1104 12:09:37.000074   86402 logs.go:282] 0 containers: []
	W1104 12:09:37.000082   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:09:37.000087   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:09:37.000144   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:09:37.033252   86402 cri.go:89] found id: ""
	I1104 12:09:37.033283   86402 logs.go:282] 0 containers: []
	W1104 12:09:37.033295   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:09:37.033305   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:09:37.033328   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:09:37.085351   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:09:37.085383   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:09:37.098556   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:09:37.098582   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:09:37.167489   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:09:37.167512   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:09:37.167525   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:09:37.243292   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:09:37.243325   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:09:39.781468   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:39.795630   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:09:39.795756   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:09:39.833745   86402 cri.go:89] found id: ""
	I1104 12:09:39.833779   86402 logs.go:282] 0 containers: []
	W1104 12:09:39.833791   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:09:39.833798   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:09:39.833862   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:09:39.870075   86402 cri.go:89] found id: ""
	I1104 12:09:39.870096   86402 logs.go:282] 0 containers: []
	W1104 12:09:39.870106   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:09:39.870119   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:09:39.870173   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:09:39.905807   86402 cri.go:89] found id: ""
	I1104 12:09:39.905836   86402 logs.go:282] 0 containers: []
	W1104 12:09:39.905846   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:09:39.905854   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:09:39.905916   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:09:39.941890   86402 cri.go:89] found id: ""
	I1104 12:09:39.941914   86402 logs.go:282] 0 containers: []
	W1104 12:09:39.941922   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:09:39.941932   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:09:39.941978   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:09:39.979123   86402 cri.go:89] found id: ""
	I1104 12:09:39.979150   86402 logs.go:282] 0 containers: []
	W1104 12:09:39.979159   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:09:39.979165   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:09:39.979220   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:09:40.014748   86402 cri.go:89] found id: ""
	I1104 12:09:40.014777   86402 logs.go:282] 0 containers: []
	W1104 12:09:40.014785   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:09:40.014791   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:09:40.014882   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:09:40.049977   86402 cri.go:89] found id: ""
	I1104 12:09:40.050004   86402 logs.go:282] 0 containers: []
	W1104 12:09:40.050014   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:09:40.050021   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:09:40.050100   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:09:40.085630   86402 cri.go:89] found id: ""
	I1104 12:09:40.085663   86402 logs.go:282] 0 containers: []
	W1104 12:09:40.085674   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:09:40.085685   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:09:40.085701   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:09:40.166611   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:09:40.166650   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:09:40.203117   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:09:40.203155   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:09:40.256233   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:09:40.256267   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:09:40.270009   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:09:40.270042   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:09:40.338672   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:09:41.707618   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:43.708915   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:42.055555   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:44.060949   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:42.351562   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:44.851599   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:42.839402   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:42.852881   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:09:42.852947   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:09:42.884587   86402 cri.go:89] found id: ""
	I1104 12:09:42.884614   86402 logs.go:282] 0 containers: []
	W1104 12:09:42.884624   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:09:42.884631   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:09:42.884690   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:09:42.915286   86402 cri.go:89] found id: ""
	I1104 12:09:42.915316   86402 logs.go:282] 0 containers: []
	W1104 12:09:42.915327   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:09:42.915337   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:09:42.915399   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:09:42.945827   86402 cri.go:89] found id: ""
	I1104 12:09:42.945857   86402 logs.go:282] 0 containers: []
	W1104 12:09:42.945868   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:09:42.945875   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:09:42.945934   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:09:42.982662   86402 cri.go:89] found id: ""
	I1104 12:09:42.982693   86402 logs.go:282] 0 containers: []
	W1104 12:09:42.982703   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:09:42.982712   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:09:42.982788   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:09:43.015337   86402 cri.go:89] found id: ""
	I1104 12:09:43.015371   86402 logs.go:282] 0 containers: []
	W1104 12:09:43.015382   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:09:43.015390   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:09:43.015453   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:09:43.048235   86402 cri.go:89] found id: ""
	I1104 12:09:43.048262   86402 logs.go:282] 0 containers: []
	W1104 12:09:43.048270   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:09:43.048276   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:09:43.048351   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:09:43.080636   86402 cri.go:89] found id: ""
	I1104 12:09:43.080668   86402 logs.go:282] 0 containers: []
	W1104 12:09:43.080679   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:09:43.080687   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:09:43.080746   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:09:43.113986   86402 cri.go:89] found id: ""
	I1104 12:09:43.114011   86402 logs.go:282] 0 containers: []
	W1104 12:09:43.114019   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:09:43.114027   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:09:43.114038   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:09:43.165356   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:09:43.165390   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:09:43.179167   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:09:43.179200   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:09:43.250054   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:09:43.250083   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:09:43.250098   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:09:43.328970   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:09:43.329002   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:09:45.869879   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:45.883262   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:09:45.883359   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:09:45.921978   86402 cri.go:89] found id: ""
	I1104 12:09:45.922003   86402 logs.go:282] 0 containers: []
	W1104 12:09:45.922011   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:09:45.922016   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:09:45.922076   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:09:45.954668   86402 cri.go:89] found id: ""
	I1104 12:09:45.954697   86402 logs.go:282] 0 containers: []
	W1104 12:09:45.954710   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:09:45.954717   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:09:45.954787   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:09:45.987793   86402 cri.go:89] found id: ""
	I1104 12:09:45.987826   86402 logs.go:282] 0 containers: []
	W1104 12:09:45.987837   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:09:45.987845   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:09:45.987906   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:09:46.028517   86402 cri.go:89] found id: ""
	I1104 12:09:46.028550   86402 logs.go:282] 0 containers: []
	W1104 12:09:46.028558   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:09:46.028563   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:09:46.028621   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:09:46.063832   86402 cri.go:89] found id: ""
	I1104 12:09:46.063859   86402 logs.go:282] 0 containers: []
	W1104 12:09:46.063870   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:09:46.063878   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:09:46.063942   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:09:46.099981   86402 cri.go:89] found id: ""
	I1104 12:09:46.100011   86402 logs.go:282] 0 containers: []
	W1104 12:09:46.100027   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:09:46.100036   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:09:46.100169   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:09:46.133060   86402 cri.go:89] found id: ""
	I1104 12:09:46.133083   86402 logs.go:282] 0 containers: []
	W1104 12:09:46.133092   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:09:46.133099   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:09:46.133165   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:09:46.170559   86402 cri.go:89] found id: ""
	I1104 12:09:46.170583   86402 logs.go:282] 0 containers: []
	W1104 12:09:46.170591   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:09:46.170599   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:09:46.170610   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:09:46.253202   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:09:46.253253   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:09:46.288468   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:09:46.288498   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:09:46.339322   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:09:46.339354   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:09:46.353020   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:09:46.353049   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:09:46.420328   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
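	The describe-nodes step keeps failing with the same connection-refused error because nothing is serving on localhost:8443 inside the node, which is consistent with the empty kube-apiserver listings. A quick way to confirm that directly, as a sketch (ss is assumed to be available in the node image; PROFILE is the same hypothetical placeholder as above):

	# Check whether anything is listening on the apiserver port 8443 inside the node.
	minikube -p "$PROFILE" ssh -- 'sudo ss -tlnp | grep 8443 || echo "apiserver port not listening"'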
	I1104 12:09:46.208695   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:48.708268   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:46.556598   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:49.057461   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:47.351225   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:49.352737   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:48.920709   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:48.933443   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:09:48.933507   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:09:48.964736   86402 cri.go:89] found id: ""
	I1104 12:09:48.964759   86402 logs.go:282] 0 containers: []
	W1104 12:09:48.964770   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:09:48.964777   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:09:48.964837   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:09:48.996646   86402 cri.go:89] found id: ""
	I1104 12:09:48.996670   86402 logs.go:282] 0 containers: []
	W1104 12:09:48.996679   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:09:48.996684   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:09:48.996734   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:09:49.028899   86402 cri.go:89] found id: ""
	I1104 12:09:49.028942   86402 logs.go:282] 0 containers: []
	W1104 12:09:49.028951   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:09:49.028957   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:09:49.029015   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:09:49.065032   86402 cri.go:89] found id: ""
	I1104 12:09:49.065056   86402 logs.go:282] 0 containers: []
	W1104 12:09:49.065064   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:09:49.065075   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:09:49.065120   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:09:49.097159   86402 cri.go:89] found id: ""
	I1104 12:09:49.097183   86402 logs.go:282] 0 containers: []
	W1104 12:09:49.097191   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:09:49.097196   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:09:49.097269   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:09:49.131578   86402 cri.go:89] found id: ""
	I1104 12:09:49.131608   86402 logs.go:282] 0 containers: []
	W1104 12:09:49.131619   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:09:49.131626   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:09:49.131684   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:09:49.164307   86402 cri.go:89] found id: ""
	I1104 12:09:49.164339   86402 logs.go:282] 0 containers: []
	W1104 12:09:49.164358   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:09:49.164367   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:09:49.164430   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:09:49.197171   86402 cri.go:89] found id: ""
	I1104 12:09:49.197199   86402 logs.go:282] 0 containers: []
	W1104 12:09:49.197210   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:09:49.197220   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:09:49.197251   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:09:49.210327   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:09:49.210355   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:09:49.280226   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:09:49.280251   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:09:49.280262   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:09:49.367655   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:09:49.367691   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:09:49.408424   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:09:49.408452   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:09:50.708963   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:53.207337   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:51.555800   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:54.055622   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:51.850949   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:54.350551   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:51.958148   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:51.970451   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:09:51.970521   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:09:52.000896   86402 cri.go:89] found id: ""
	I1104 12:09:52.000929   86402 logs.go:282] 0 containers: []
	W1104 12:09:52.000940   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:09:52.000948   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:09:52.001023   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:09:52.034122   86402 cri.go:89] found id: ""
	I1104 12:09:52.034150   86402 logs.go:282] 0 containers: []
	W1104 12:09:52.034161   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:09:52.034168   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:09:52.034227   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:09:52.070834   86402 cri.go:89] found id: ""
	I1104 12:09:52.070872   86402 logs.go:282] 0 containers: []
	W1104 12:09:52.070884   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:09:52.070891   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:09:52.070950   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:09:52.103730   86402 cri.go:89] found id: ""
	I1104 12:09:52.103758   86402 logs.go:282] 0 containers: []
	W1104 12:09:52.103766   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:09:52.103772   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:09:52.103832   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:09:52.135980   86402 cri.go:89] found id: ""
	I1104 12:09:52.136006   86402 logs.go:282] 0 containers: []
	W1104 12:09:52.136014   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:09:52.136020   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:09:52.136081   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:09:52.168903   86402 cri.go:89] found id: ""
	I1104 12:09:52.168928   86402 logs.go:282] 0 containers: []
	W1104 12:09:52.168936   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:09:52.168942   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:09:52.169001   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:09:52.199499   86402 cri.go:89] found id: ""
	I1104 12:09:52.199529   86402 logs.go:282] 0 containers: []
	W1104 12:09:52.199539   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:09:52.199546   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:09:52.199610   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:09:52.232566   86402 cri.go:89] found id: ""
	I1104 12:09:52.232603   86402 logs.go:282] 0 containers: []
	W1104 12:09:52.232615   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:09:52.232626   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:09:52.232640   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:09:52.282140   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:09:52.282180   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:09:52.295079   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:09:52.295110   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:09:52.364061   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:09:52.364087   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:09:52.364102   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:09:52.437868   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:09:52.437901   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:09:54.978182   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:54.991002   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:09:54.991068   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:09:55.023628   86402 cri.go:89] found id: ""
	I1104 12:09:55.023656   86402 logs.go:282] 0 containers: []
	W1104 12:09:55.023663   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:09:55.023669   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:09:55.023715   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:09:55.058524   86402 cri.go:89] found id: ""
	I1104 12:09:55.058548   86402 logs.go:282] 0 containers: []
	W1104 12:09:55.058557   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:09:55.058564   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:09:55.058634   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:09:55.095730   86402 cri.go:89] found id: ""
	I1104 12:09:55.095760   86402 logs.go:282] 0 containers: []
	W1104 12:09:55.095772   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:09:55.095779   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:09:55.095837   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:09:55.128341   86402 cri.go:89] found id: ""
	I1104 12:09:55.128365   86402 logs.go:282] 0 containers: []
	W1104 12:09:55.128373   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:09:55.128379   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:09:55.128438   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:09:55.160655   86402 cri.go:89] found id: ""
	I1104 12:09:55.160681   86402 logs.go:282] 0 containers: []
	W1104 12:09:55.160693   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:09:55.160700   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:09:55.160754   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:09:55.194050   86402 cri.go:89] found id: ""
	I1104 12:09:55.194077   86402 logs.go:282] 0 containers: []
	W1104 12:09:55.194086   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:09:55.194091   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:09:55.194138   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:09:55.227655   86402 cri.go:89] found id: ""
	I1104 12:09:55.227694   86402 logs.go:282] 0 containers: []
	W1104 12:09:55.227705   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:09:55.227712   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:09:55.227810   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:09:55.261106   86402 cri.go:89] found id: ""
	I1104 12:09:55.261137   86402 logs.go:282] 0 containers: []
	W1104 12:09:55.261147   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:09:55.261157   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:09:55.261171   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:09:55.335577   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:09:55.335598   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:09:55.335610   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:09:55.421339   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:09:55.421375   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:09:55.459936   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:09:55.459967   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:09:55.509346   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:09:55.509382   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:09:55.208869   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:57.707576   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:59.708019   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:56.555996   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:58.556335   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:56.851071   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:58.851254   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:58.023608   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:58.036540   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:09:58.036599   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:09:58.075104   86402 cri.go:89] found id: ""
	I1104 12:09:58.075182   86402 logs.go:282] 0 containers: []
	W1104 12:09:58.075198   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:09:58.075207   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:09:58.075271   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:09:58.109910   86402 cri.go:89] found id: ""
	I1104 12:09:58.109949   86402 logs.go:282] 0 containers: []
	W1104 12:09:58.109961   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:09:58.109968   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:09:58.110038   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:09:58.142829   86402 cri.go:89] found id: ""
	I1104 12:09:58.142854   86402 logs.go:282] 0 containers: []
	W1104 12:09:58.142865   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:09:58.142873   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:09:58.142924   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:09:58.178125   86402 cri.go:89] found id: ""
	I1104 12:09:58.178153   86402 logs.go:282] 0 containers: []
	W1104 12:09:58.178161   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:09:58.178168   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:09:58.178239   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:09:58.214117   86402 cri.go:89] found id: ""
	I1104 12:09:58.214146   86402 logs.go:282] 0 containers: []
	W1104 12:09:58.214156   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:09:58.214162   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:09:58.214213   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:09:58.244728   86402 cri.go:89] found id: ""
	I1104 12:09:58.244751   86402 logs.go:282] 0 containers: []
	W1104 12:09:58.244759   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:09:58.244765   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:09:58.244809   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:09:58.275542   86402 cri.go:89] found id: ""
	I1104 12:09:58.275568   86402 logs.go:282] 0 containers: []
	W1104 12:09:58.275576   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:09:58.275582   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:09:58.275630   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:09:58.314909   86402 cri.go:89] found id: ""
	I1104 12:09:58.314935   86402 logs.go:282] 0 containers: []
	W1104 12:09:58.314943   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:09:58.314952   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:09:58.314962   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:09:58.364361   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:09:58.364390   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:09:58.378483   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:09:58.378517   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:09:58.442012   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:09:58.442033   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:09:58.442045   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:09:58.517260   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:09:58.517298   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
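The cycle from process 86402 above repeats for the rest of this log: minikube's log collector asks CRI-O for each control-plane container by name, finds none, and then falls back to the kubelet/CRI-O journals, dmesg and container status. The same checks can be reproduced by hand on the node; this is a minimal sketch using only the commands already shown in the log (shell access to the node, e.g. via minikube ssh, is assumed):

    # List any control-plane containers CRI-O knows about, running or exited;
    # an empty result matches the "0 containers" lines above.
    sudo crictl ps -a --quiet --name=kube-apiserver
    sudo crictl ps -a --quiet --name=etcd

    # Pull the most recent kubelet and CRI-O journal entries to see why the
    # static pods were never started.
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u crio -n 400

An empty result from every crictl query, as here, is consistent with the kubelet never having created the static control-plane pods, which is why the later kubectl calls in this log cannot reach an API server.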
	I1104 12:10:01.057203   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:10:01.069937   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:10:01.070008   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:10:01.101672   86402 cri.go:89] found id: ""
	I1104 12:10:01.101698   86402 logs.go:282] 0 containers: []
	W1104 12:10:01.101709   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:10:01.101716   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:10:01.101779   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:10:01.134672   86402 cri.go:89] found id: ""
	I1104 12:10:01.134701   86402 logs.go:282] 0 containers: []
	W1104 12:10:01.134712   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:10:01.134719   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:10:01.134789   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:10:01.167784   86402 cri.go:89] found id: ""
	I1104 12:10:01.167833   86402 logs.go:282] 0 containers: []
	W1104 12:10:01.167845   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:10:01.167853   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:10:01.167945   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:10:01.201218   86402 cri.go:89] found id: ""
	I1104 12:10:01.201260   86402 logs.go:282] 0 containers: []
	W1104 12:10:01.201271   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:10:01.201281   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:10:01.201338   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:10:01.234964   86402 cri.go:89] found id: ""
	I1104 12:10:01.234991   86402 logs.go:282] 0 containers: []
	W1104 12:10:01.235000   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:10:01.235007   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:10:01.235069   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:10:01.267809   86402 cri.go:89] found id: ""
	I1104 12:10:01.267848   86402 logs.go:282] 0 containers: []
	W1104 12:10:01.267881   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:10:01.267890   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:10:01.267942   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:10:01.303567   86402 cri.go:89] found id: ""
	I1104 12:10:01.303590   86402 logs.go:282] 0 containers: []
	W1104 12:10:01.303598   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:10:01.303604   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:10:01.303648   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:10:01.342059   86402 cri.go:89] found id: ""
	I1104 12:10:01.342088   86402 logs.go:282] 0 containers: []
	W1104 12:10:01.342099   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:10:01.342109   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:10:01.342142   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:10:01.354845   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:10:01.354867   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:10:01.423426   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:10:01.423447   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:10:01.423459   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:10:01.498979   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:10:01.499018   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:10:01.537658   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:10:01.537691   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:10:02.208192   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:04.209058   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:01.055266   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:03.056457   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:01.350820   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:03.850435   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:04.088653   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:10:04.103506   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:10:04.103576   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:10:04.137574   86402 cri.go:89] found id: ""
	I1104 12:10:04.137602   86402 logs.go:282] 0 containers: []
	W1104 12:10:04.137612   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:10:04.137620   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:10:04.137684   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:10:04.177624   86402 cri.go:89] found id: ""
	I1104 12:10:04.177662   86402 logs.go:282] 0 containers: []
	W1104 12:10:04.177673   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:10:04.177681   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:10:04.177750   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:10:04.213829   86402 cri.go:89] found id: ""
	I1104 12:10:04.213850   86402 logs.go:282] 0 containers: []
	W1104 12:10:04.213862   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:10:04.213870   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:10:04.213929   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:10:04.251112   86402 cri.go:89] found id: ""
	I1104 12:10:04.251143   86402 logs.go:282] 0 containers: []
	W1104 12:10:04.251154   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:10:04.251162   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:10:04.251227   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:10:04.286005   86402 cri.go:89] found id: ""
	I1104 12:10:04.286036   86402 logs.go:282] 0 containers: []
	W1104 12:10:04.286046   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:10:04.286053   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:10:04.286118   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:10:04.317628   86402 cri.go:89] found id: ""
	I1104 12:10:04.317656   86402 logs.go:282] 0 containers: []
	W1104 12:10:04.317667   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:10:04.317674   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:10:04.317742   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:10:04.351663   86402 cri.go:89] found id: ""
	I1104 12:10:04.351687   86402 logs.go:282] 0 containers: []
	W1104 12:10:04.351695   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:10:04.351700   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:10:04.351755   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:10:04.385818   86402 cri.go:89] found id: ""
	I1104 12:10:04.385842   86402 logs.go:282] 0 containers: []
	W1104 12:10:04.385850   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:10:04.385858   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:10:04.385880   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:10:04.467141   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:10:04.467179   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:10:04.503669   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:10:04.503700   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:10:04.557237   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:10:04.557303   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:10:04.570484   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:10:04.570520   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:10:04.635099   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
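Every "describe nodes" attempt in this log fails the same way: nothing answers on localhost:8443, so kubectl is refused before it can reach the cluster. One way to confirm that by hand, assuming shell access to the node (the ss check is an addition for illustration; the kubectl invocation is copied from the log above):

    # With the apiserver container missing, nothing should be listening on 8443.
    sudo ss -ltnp | grep 8443 || echo "nothing listening on 8443"

    # The exact command minikube runs; with no apiserver it exits 1 with
    # "The connection to the server localhost:8443 was refused".
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
        --kubeconfig=/var/lib/minikube/kubeconfig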
	I1104 12:10:06.708483   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:09.207171   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:05.556612   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:08.056976   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:06.350422   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:08.351537   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:10.351962   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:07.135741   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:10:07.148039   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:10:07.148132   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:10:07.185171   86402 cri.go:89] found id: ""
	I1104 12:10:07.185196   86402 logs.go:282] 0 containers: []
	W1104 12:10:07.185205   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:10:07.185211   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:10:07.185280   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:10:07.217097   86402 cri.go:89] found id: ""
	I1104 12:10:07.217126   86402 logs.go:282] 0 containers: []
	W1104 12:10:07.217137   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:10:07.217144   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:10:07.217204   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:10:07.250079   86402 cri.go:89] found id: ""
	I1104 12:10:07.250108   86402 logs.go:282] 0 containers: []
	W1104 12:10:07.250116   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:10:07.250121   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:10:07.250169   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:10:07.283423   86402 cri.go:89] found id: ""
	I1104 12:10:07.283463   86402 logs.go:282] 0 containers: []
	W1104 12:10:07.283475   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:10:07.283482   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:10:07.283554   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:10:07.316461   86402 cri.go:89] found id: ""
	I1104 12:10:07.316490   86402 logs.go:282] 0 containers: []
	W1104 12:10:07.316507   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:10:07.316513   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:10:07.316569   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:10:07.361981   86402 cri.go:89] found id: ""
	I1104 12:10:07.362010   86402 logs.go:282] 0 containers: []
	W1104 12:10:07.362018   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:10:07.362024   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:10:07.362087   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:10:07.397834   86402 cri.go:89] found id: ""
	I1104 12:10:07.397867   86402 logs.go:282] 0 containers: []
	W1104 12:10:07.397878   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:10:07.397886   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:10:07.397948   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:10:07.429379   86402 cri.go:89] found id: ""
	I1104 12:10:07.429407   86402 logs.go:282] 0 containers: []
	W1104 12:10:07.429416   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:10:07.429425   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:10:07.429438   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:10:07.495294   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:10:07.495322   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:10:07.495334   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:10:07.578504   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:10:07.578546   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:10:07.617172   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:10:07.617201   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:10:07.667168   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:10:07.667204   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:10:10.181802   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:10:10.196017   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:10:10.196084   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:10:10.228243   86402 cri.go:89] found id: ""
	I1104 12:10:10.228272   86402 logs.go:282] 0 containers: []
	W1104 12:10:10.228282   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:10:10.228289   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:10:10.228347   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:10:10.262110   86402 cri.go:89] found id: ""
	I1104 12:10:10.262143   86402 logs.go:282] 0 containers: []
	W1104 12:10:10.262152   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:10:10.262161   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:10:10.262218   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:10:10.297776   86402 cri.go:89] found id: ""
	I1104 12:10:10.297812   86402 logs.go:282] 0 containers: []
	W1104 12:10:10.297823   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:10:10.297830   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:10:10.297877   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:10:10.332645   86402 cri.go:89] found id: ""
	I1104 12:10:10.332672   86402 logs.go:282] 0 containers: []
	W1104 12:10:10.332680   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:10:10.332685   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:10:10.332730   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:10:10.366703   86402 cri.go:89] found id: ""
	I1104 12:10:10.366735   86402 logs.go:282] 0 containers: []
	W1104 12:10:10.366746   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:10:10.366754   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:10:10.366809   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:10:10.399500   86402 cri.go:89] found id: ""
	I1104 12:10:10.399526   86402 logs.go:282] 0 containers: []
	W1104 12:10:10.399534   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:10:10.399539   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:10:10.399634   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:10:10.434898   86402 cri.go:89] found id: ""
	I1104 12:10:10.434932   86402 logs.go:282] 0 containers: []
	W1104 12:10:10.434943   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:10:10.434951   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:10:10.435022   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:10:10.472159   86402 cri.go:89] found id: ""
	I1104 12:10:10.472189   86402 logs.go:282] 0 containers: []
	W1104 12:10:10.472201   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:10:10.472225   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:10:10.472246   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:10:10.528710   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:10:10.528769   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:10:10.541943   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:10:10.541973   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:10:10.621819   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:10:10.621843   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:10:10.621855   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:10:10.698301   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:10:10.698335   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:10:11.208069   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:13.707594   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:10.556520   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:13.056160   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:15.056984   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:12.851001   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:14.851591   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:13.235151   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:10:13.247511   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:10:13.247585   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:10:13.278546   86402 cri.go:89] found id: ""
	I1104 12:10:13.278576   86402 logs.go:282] 0 containers: []
	W1104 12:10:13.278586   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:10:13.278592   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:10:13.278655   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:10:13.310297   86402 cri.go:89] found id: ""
	I1104 12:10:13.310325   86402 logs.go:282] 0 containers: []
	W1104 12:10:13.310334   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:10:13.310340   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:10:13.310394   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:10:13.344110   86402 cri.go:89] found id: ""
	I1104 12:10:13.344139   86402 logs.go:282] 0 containers: []
	W1104 12:10:13.344150   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:10:13.344158   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:10:13.344210   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:10:13.379778   86402 cri.go:89] found id: ""
	I1104 12:10:13.379806   86402 logs.go:282] 0 containers: []
	W1104 12:10:13.379817   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:10:13.379824   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:10:13.379872   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:10:13.411763   86402 cri.go:89] found id: ""
	I1104 12:10:13.411795   86402 logs.go:282] 0 containers: []
	W1104 12:10:13.411806   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:10:13.411813   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:10:13.411872   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:10:13.445192   86402 cri.go:89] found id: ""
	I1104 12:10:13.445217   86402 logs.go:282] 0 containers: []
	W1104 12:10:13.445235   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:10:13.445243   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:10:13.445297   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:10:13.478518   86402 cri.go:89] found id: ""
	I1104 12:10:13.478549   86402 logs.go:282] 0 containers: []
	W1104 12:10:13.478561   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:10:13.478569   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:10:13.478710   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:10:13.513852   86402 cri.go:89] found id: ""
	I1104 12:10:13.513878   86402 logs.go:282] 0 containers: []
	W1104 12:10:13.513886   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:10:13.513895   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:10:13.513909   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:10:13.590413   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:10:13.590439   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:10:13.590454   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:10:13.664575   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:10:13.664608   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:10:13.700616   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:10:13.700644   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:10:13.751113   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:10:13.751147   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:10:16.264311   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:10:16.277443   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:10:16.277508   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:10:16.309983   86402 cri.go:89] found id: ""
	I1104 12:10:16.310010   86402 logs.go:282] 0 containers: []
	W1104 12:10:16.310020   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:10:16.310025   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:10:16.310073   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:10:16.358281   86402 cri.go:89] found id: ""
	I1104 12:10:16.358305   86402 logs.go:282] 0 containers: []
	W1104 12:10:16.358312   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:10:16.358317   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:10:16.358376   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:10:16.394455   86402 cri.go:89] found id: ""
	I1104 12:10:16.394485   86402 logs.go:282] 0 containers: []
	W1104 12:10:16.394497   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:10:16.394503   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:10:16.394571   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:10:16.430606   86402 cri.go:89] found id: ""
	I1104 12:10:16.430638   86402 logs.go:282] 0 containers: []
	W1104 12:10:16.430648   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:10:16.430655   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:10:16.430716   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:10:16.464402   86402 cri.go:89] found id: ""
	I1104 12:10:16.464439   86402 logs.go:282] 0 containers: []
	W1104 12:10:16.464450   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:10:16.464458   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:10:16.464517   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:10:16.497985   86402 cri.go:89] found id: ""
	I1104 12:10:16.498009   86402 logs.go:282] 0 containers: []
	W1104 12:10:16.498017   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:10:16.498022   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:10:16.498076   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:10:16.531255   86402 cri.go:89] found id: ""
	I1104 12:10:16.531289   86402 logs.go:282] 0 containers: []
	W1104 12:10:16.531301   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:10:16.531309   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:10:16.531372   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:10:16.566176   86402 cri.go:89] found id: ""
	I1104 12:10:16.566204   86402 logs.go:282] 0 containers: []
	W1104 12:10:16.566213   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:10:16.566228   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:10:16.566243   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:10:16.634157   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:10:16.634196   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:10:16.634218   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:10:16.206939   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:18.208360   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:17.555513   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:19.556105   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:17.351026   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:19.351294   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:16.710518   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:10:16.710550   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:10:16.746572   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:10:16.746608   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:10:16.797146   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:10:16.797179   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:10:19.310286   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:10:19.323409   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:10:19.323473   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:10:19.360864   86402 cri.go:89] found id: ""
	I1104 12:10:19.360893   86402 logs.go:282] 0 containers: []
	W1104 12:10:19.360902   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:10:19.360907   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:10:19.360962   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:10:19.400127   86402 cri.go:89] found id: ""
	I1104 12:10:19.400155   86402 logs.go:282] 0 containers: []
	W1104 12:10:19.400167   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:10:19.400174   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:10:19.400230   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:10:19.433023   86402 cri.go:89] found id: ""
	I1104 12:10:19.433049   86402 logs.go:282] 0 containers: []
	W1104 12:10:19.433057   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:10:19.433062   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:10:19.433123   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:10:19.467786   86402 cri.go:89] found id: ""
	I1104 12:10:19.467810   86402 logs.go:282] 0 containers: []
	W1104 12:10:19.467819   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:10:19.467825   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:10:19.467875   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:10:19.498411   86402 cri.go:89] found id: ""
	I1104 12:10:19.498436   86402 logs.go:282] 0 containers: []
	W1104 12:10:19.498444   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:10:19.498455   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:10:19.498502   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:10:19.532146   86402 cri.go:89] found id: ""
	I1104 12:10:19.532171   86402 logs.go:282] 0 containers: []
	W1104 12:10:19.532179   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:10:19.532184   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:10:19.532234   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:10:19.567271   86402 cri.go:89] found id: ""
	I1104 12:10:19.567294   86402 logs.go:282] 0 containers: []
	W1104 12:10:19.567302   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:10:19.567308   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:10:19.567369   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:10:19.608233   86402 cri.go:89] found id: ""
	I1104 12:10:19.608265   86402 logs.go:282] 0 containers: []
	W1104 12:10:19.608279   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:10:19.608289   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:10:19.608304   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:10:19.649039   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:10:19.649071   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:10:19.702129   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:10:19.702168   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:10:19.716749   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:10:19.716776   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:10:19.787538   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:10:19.787560   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:10:19.787572   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:10:20.208694   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:22.708289   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:21.556715   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:23.557173   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:21.851010   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:23.852944   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:22.368982   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:10:22.382889   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:10:22.382962   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:10:22.418672   86402 cri.go:89] found id: ""
	I1104 12:10:22.418698   86402 logs.go:282] 0 containers: []
	W1104 12:10:22.418709   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:10:22.418716   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:10:22.418782   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:10:22.451675   86402 cri.go:89] found id: ""
	I1104 12:10:22.451704   86402 logs.go:282] 0 containers: []
	W1104 12:10:22.451715   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:10:22.451723   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:10:22.451785   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:10:22.488520   86402 cri.go:89] found id: ""
	I1104 12:10:22.488549   86402 logs.go:282] 0 containers: []
	W1104 12:10:22.488561   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:10:22.488567   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:10:22.488631   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:10:22.530288   86402 cri.go:89] found id: ""
	I1104 12:10:22.530312   86402 logs.go:282] 0 containers: []
	W1104 12:10:22.530321   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:10:22.530326   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:10:22.530382   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:10:22.564929   86402 cri.go:89] found id: ""
	I1104 12:10:22.564958   86402 logs.go:282] 0 containers: []
	W1104 12:10:22.564970   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:10:22.564977   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:10:22.565036   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:10:22.598015   86402 cri.go:89] found id: ""
	I1104 12:10:22.598042   86402 logs.go:282] 0 containers: []
	W1104 12:10:22.598051   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:10:22.598056   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:10:22.598160   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:10:22.632894   86402 cri.go:89] found id: ""
	I1104 12:10:22.632921   86402 logs.go:282] 0 containers: []
	W1104 12:10:22.632930   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:10:22.632935   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:10:22.633001   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:10:22.665194   86402 cri.go:89] found id: ""
	I1104 12:10:22.665218   86402 logs.go:282] 0 containers: []
	W1104 12:10:22.665245   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:10:22.665257   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:10:22.665272   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:10:22.717731   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:10:22.717763   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:10:22.732671   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:10:22.732698   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:10:22.823908   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:10:22.823946   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:10:22.823963   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:10:22.907812   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:10:22.907848   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:10:25.449308   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:10:25.461694   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:10:25.461751   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:10:25.493036   86402 cri.go:89] found id: ""
	I1104 12:10:25.493061   86402 logs.go:282] 0 containers: []
	W1104 12:10:25.493068   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:10:25.493075   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:10:25.493122   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:10:25.525084   86402 cri.go:89] found id: ""
	I1104 12:10:25.525116   86402 logs.go:282] 0 containers: []
	W1104 12:10:25.525128   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:10:25.525135   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:10:25.525196   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:10:25.561380   86402 cri.go:89] found id: ""
	I1104 12:10:25.561424   86402 logs.go:282] 0 containers: []
	W1104 12:10:25.561436   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:10:25.561444   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:10:25.561499   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:10:25.595429   86402 cri.go:89] found id: ""
	I1104 12:10:25.595453   86402 logs.go:282] 0 containers: []
	W1104 12:10:25.595468   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:10:25.595474   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:10:25.595521   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:10:25.627409   86402 cri.go:89] found id: ""
	I1104 12:10:25.627436   86402 logs.go:282] 0 containers: []
	W1104 12:10:25.627445   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:10:25.627450   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:10:25.627497   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:10:25.661048   86402 cri.go:89] found id: ""
	I1104 12:10:25.661073   86402 logs.go:282] 0 containers: []
	W1104 12:10:25.661082   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:10:25.661088   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:10:25.661135   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:10:25.698882   86402 cri.go:89] found id: ""
	I1104 12:10:25.698912   86402 logs.go:282] 0 containers: []
	W1104 12:10:25.698920   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:10:25.698926   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:10:25.698978   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:10:25.733355   86402 cri.go:89] found id: ""
	I1104 12:10:25.733397   86402 logs.go:282] 0 containers: []
	W1104 12:10:25.733409   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:10:25.733420   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:10:25.733435   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:10:25.784871   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:10:25.784908   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:10:25.798715   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:10:25.798740   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:10:25.870362   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:10:25.870383   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:10:25.870397   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:10:25.950565   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:10:25.950598   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:10:25.209496   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:27.706991   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:29.708209   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:26.055597   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:28.055845   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:30.056584   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:26.351027   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:28.851204   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:28.488258   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:10:28.506058   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:10:28.506114   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:10:28.566325   86402 cri.go:89] found id: ""
	I1104 12:10:28.566351   86402 logs.go:282] 0 containers: []
	W1104 12:10:28.566358   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:10:28.566364   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:10:28.566413   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:10:28.612753   86402 cri.go:89] found id: ""
	I1104 12:10:28.612781   86402 logs.go:282] 0 containers: []
	W1104 12:10:28.612790   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:10:28.612796   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:10:28.612854   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:10:28.647082   86402 cri.go:89] found id: ""
	I1104 12:10:28.647109   86402 logs.go:282] 0 containers: []
	W1104 12:10:28.647120   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:10:28.647128   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:10:28.647205   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:10:28.683197   86402 cri.go:89] found id: ""
	I1104 12:10:28.683227   86402 logs.go:282] 0 containers: []
	W1104 12:10:28.683239   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:10:28.683247   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:10:28.683299   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:10:28.718139   86402 cri.go:89] found id: ""
	I1104 12:10:28.718175   86402 logs.go:282] 0 containers: []
	W1104 12:10:28.718186   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:10:28.718194   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:10:28.718253   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:10:28.749689   86402 cri.go:89] found id: ""
	I1104 12:10:28.749721   86402 logs.go:282] 0 containers: []
	W1104 12:10:28.749732   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:10:28.749739   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:10:28.749803   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:10:28.786824   86402 cri.go:89] found id: ""
	I1104 12:10:28.786851   86402 logs.go:282] 0 containers: []
	W1104 12:10:28.786859   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:10:28.786864   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:10:28.786925   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:10:28.822833   86402 cri.go:89] found id: ""
	I1104 12:10:28.822856   86402 logs.go:282] 0 containers: []
	W1104 12:10:28.822865   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:10:28.822872   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:10:28.822884   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:10:28.835267   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:10:28.835298   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:10:28.900051   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:10:28.900076   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:10:28.900089   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:10:28.979867   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:10:28.979912   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:10:29.017294   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:10:29.017327   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:10:31.569559   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:10:31.582065   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:10:31.582136   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:10:31.614924   86402 cri.go:89] found id: ""
	I1104 12:10:31.614952   86402 logs.go:282] 0 containers: []
	W1104 12:10:31.614960   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:10:31.614966   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:10:31.615029   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:10:31.647178   86402 cri.go:89] found id: ""
	I1104 12:10:31.647204   86402 logs.go:282] 0 containers: []
	W1104 12:10:31.647212   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:10:31.647218   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:10:31.647277   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:10:31.678723   86402 cri.go:89] found id: ""
	I1104 12:10:31.678749   86402 logs.go:282] 0 containers: []
	W1104 12:10:31.678761   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:10:31.678769   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:10:31.678819   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:10:31.709787   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:34.208234   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:32.555978   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:34.557026   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:31.351700   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:33.850976   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:35.851636   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:31.713013   86402 cri.go:89] found id: ""
	I1104 12:10:31.713036   86402 logs.go:282] 0 containers: []
	W1104 12:10:31.713043   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:10:31.713048   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:10:31.713092   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:10:31.746564   86402 cri.go:89] found id: ""
	I1104 12:10:31.746591   86402 logs.go:282] 0 containers: []
	W1104 12:10:31.746600   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:10:31.746605   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:10:31.746658   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:10:31.779559   86402 cri.go:89] found id: ""
	I1104 12:10:31.779586   86402 logs.go:282] 0 containers: []
	W1104 12:10:31.779594   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:10:31.779601   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:10:31.779652   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:10:31.812047   86402 cri.go:89] found id: ""
	I1104 12:10:31.812076   86402 logs.go:282] 0 containers: []
	W1104 12:10:31.812087   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:10:31.812094   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:10:31.812163   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:10:31.845479   86402 cri.go:89] found id: ""
	I1104 12:10:31.845510   86402 logs.go:282] 0 containers: []
	W1104 12:10:31.845522   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:10:31.845532   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:10:31.845551   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:10:31.909399   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:10:31.909423   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:10:31.909434   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:10:31.985994   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:10:31.986031   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:10:32.023222   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:10:32.023255   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:10:32.074429   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:10:32.074467   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:10:34.588202   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:10:34.600925   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:10:34.600994   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:10:34.632718   86402 cri.go:89] found id: ""
	I1104 12:10:34.632743   86402 logs.go:282] 0 containers: []
	W1104 12:10:34.632754   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:10:34.632763   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:10:34.632813   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:10:34.665553   86402 cri.go:89] found id: ""
	I1104 12:10:34.665576   86402 logs.go:282] 0 containers: []
	W1104 12:10:34.665585   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:10:34.665590   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:10:34.665641   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:10:34.700059   86402 cri.go:89] found id: ""
	I1104 12:10:34.700081   86402 logs.go:282] 0 containers: []
	W1104 12:10:34.700089   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:10:34.700094   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:10:34.700141   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:10:34.732940   86402 cri.go:89] found id: ""
	I1104 12:10:34.732962   86402 logs.go:282] 0 containers: []
	W1104 12:10:34.732970   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:10:34.732978   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:10:34.733023   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:10:34.764580   86402 cri.go:89] found id: ""
	I1104 12:10:34.764610   86402 logs.go:282] 0 containers: []
	W1104 12:10:34.764618   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:10:34.764624   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:10:34.764680   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:10:34.798030   86402 cri.go:89] found id: ""
	I1104 12:10:34.798053   86402 logs.go:282] 0 containers: []
	W1104 12:10:34.798061   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:10:34.798067   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:10:34.798115   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:10:34.829847   86402 cri.go:89] found id: ""
	I1104 12:10:34.829876   86402 logs.go:282] 0 containers: []
	W1104 12:10:34.829884   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:10:34.829889   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:10:34.829946   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:10:34.862764   86402 cri.go:89] found id: ""
	I1104 12:10:34.862792   86402 logs.go:282] 0 containers: []
	W1104 12:10:34.862804   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:10:34.862815   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:10:34.862828   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:10:34.912367   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:10:34.912397   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:10:34.925347   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:10:34.925383   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:10:34.990459   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:10:34.990486   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:10:34.990502   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:10:35.066765   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:10:35.066796   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:10:36.706912   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:38.707144   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:37.056279   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:39.555433   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:38.349986   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:40.354694   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:37.602696   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:10:37.615041   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:10:37.615115   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:10:37.646872   86402 cri.go:89] found id: ""
	I1104 12:10:37.646900   86402 logs.go:282] 0 containers: []
	W1104 12:10:37.646911   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:10:37.646918   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:10:37.646977   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:10:37.679770   86402 cri.go:89] found id: ""
	I1104 12:10:37.679797   86402 logs.go:282] 0 containers: []
	W1104 12:10:37.679805   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:10:37.679810   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:10:37.679867   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:10:37.711693   86402 cri.go:89] found id: ""
	I1104 12:10:37.711720   86402 logs.go:282] 0 containers: []
	W1104 12:10:37.711733   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:10:37.711743   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:10:37.711803   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:10:37.746605   86402 cri.go:89] found id: ""
	I1104 12:10:37.746636   86402 logs.go:282] 0 containers: []
	W1104 12:10:37.746648   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:10:37.746656   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:10:37.746716   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:10:37.778983   86402 cri.go:89] found id: ""
	I1104 12:10:37.779010   86402 logs.go:282] 0 containers: []
	W1104 12:10:37.779020   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:10:37.779026   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:10:37.779086   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:10:37.813293   86402 cri.go:89] found id: ""
	I1104 12:10:37.813321   86402 logs.go:282] 0 containers: []
	W1104 12:10:37.813330   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:10:37.813335   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:10:37.813387   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:10:37.846181   86402 cri.go:89] found id: ""
	I1104 12:10:37.846209   86402 logs.go:282] 0 containers: []
	W1104 12:10:37.846219   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:10:37.846226   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:10:37.846287   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:10:37.877485   86402 cri.go:89] found id: ""
	I1104 12:10:37.877520   86402 logs.go:282] 0 containers: []
	W1104 12:10:37.877531   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:10:37.877541   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:10:37.877558   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:10:37.926704   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:10:37.926733   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:10:37.939771   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:10:37.939796   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:10:38.003762   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:10:38.003783   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:10:38.003800   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:10:38.085419   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:10:38.085456   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:10:40.625351   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:10:40.637380   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:10:40.637459   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:10:40.670274   86402 cri.go:89] found id: ""
	I1104 12:10:40.670303   86402 logs.go:282] 0 containers: []
	W1104 12:10:40.670315   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:10:40.670322   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:10:40.670382   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:10:40.703383   86402 cri.go:89] found id: ""
	I1104 12:10:40.703414   86402 logs.go:282] 0 containers: []
	W1104 12:10:40.703427   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:10:40.703434   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:10:40.703481   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:10:40.739549   86402 cri.go:89] found id: ""
	I1104 12:10:40.739576   86402 logs.go:282] 0 containers: []
	W1104 12:10:40.739586   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:10:40.739594   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:10:40.739651   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:10:40.775466   86402 cri.go:89] found id: ""
	I1104 12:10:40.775492   86402 logs.go:282] 0 containers: []
	W1104 12:10:40.775502   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:10:40.775513   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:10:40.775567   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:10:40.810486   86402 cri.go:89] found id: ""
	I1104 12:10:40.810515   86402 logs.go:282] 0 containers: []
	W1104 12:10:40.810525   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:10:40.810533   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:10:40.810593   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:10:40.844277   86402 cri.go:89] found id: ""
	I1104 12:10:40.844309   86402 logs.go:282] 0 containers: []
	W1104 12:10:40.844321   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:10:40.844329   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:10:40.844391   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:10:40.878699   86402 cri.go:89] found id: ""
	I1104 12:10:40.878728   86402 logs.go:282] 0 containers: []
	W1104 12:10:40.878739   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:10:40.878746   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:10:40.878804   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:10:40.913888   86402 cri.go:89] found id: ""
	I1104 12:10:40.913913   86402 logs.go:282] 0 containers: []
	W1104 12:10:40.913921   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:10:40.913929   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:10:40.913939   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:10:40.966854   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:10:40.966892   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:10:40.980483   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:10:40.980510   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:10:41.046059   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:10:41.046085   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:10:41.046100   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:10:41.129746   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:10:41.129779   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:10:40.707964   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:43.207804   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:42.057019   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:44.555947   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:42.850057   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:44.851467   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:43.667029   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:10:43.680024   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:10:43.680092   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:10:43.714185   86402 cri.go:89] found id: ""
	I1104 12:10:43.714218   86402 logs.go:282] 0 containers: []
	W1104 12:10:43.714227   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:10:43.714235   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:10:43.714294   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:10:43.749493   86402 cri.go:89] found id: ""
	I1104 12:10:43.749515   86402 logs.go:282] 0 containers: []
	W1104 12:10:43.749523   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:10:43.749529   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:10:43.749588   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:10:43.785400   86402 cri.go:89] found id: ""
	I1104 12:10:43.785426   86402 logs.go:282] 0 containers: []
	W1104 12:10:43.785437   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:10:43.785444   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:10:43.785507   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:10:43.818465   86402 cri.go:89] found id: ""
	I1104 12:10:43.818505   86402 logs.go:282] 0 containers: []
	W1104 12:10:43.818517   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:10:43.818524   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:10:43.818573   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:10:43.850232   86402 cri.go:89] found id: ""
	I1104 12:10:43.850262   86402 logs.go:282] 0 containers: []
	W1104 12:10:43.850272   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:10:43.850279   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:10:43.850337   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:10:43.882806   86402 cri.go:89] found id: ""
	I1104 12:10:43.882840   86402 logs.go:282] 0 containers: []
	W1104 12:10:43.882851   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:10:43.882859   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:10:43.882920   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:10:43.919449   86402 cri.go:89] found id: ""
	I1104 12:10:43.919476   86402 logs.go:282] 0 containers: []
	W1104 12:10:43.919486   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:10:43.919493   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:10:43.919556   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:10:43.953761   86402 cri.go:89] found id: ""
	I1104 12:10:43.953791   86402 logs.go:282] 0 containers: []
	W1104 12:10:43.953801   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:10:43.953812   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:10:43.953825   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:10:44.005559   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:10:44.005594   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:10:44.019431   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:10:44.019456   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:10:44.094436   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:10:44.094457   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:10:44.094470   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:10:44.174026   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:10:44.174061   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:10:45.707449   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:47.709901   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:46.557050   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:48.557552   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:46.851720   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:49.350269   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:46.712021   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:10:46.724258   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:10:46.724318   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:10:46.754472   86402 cri.go:89] found id: ""
	I1104 12:10:46.754501   86402 logs.go:282] 0 containers: []
	W1104 12:10:46.754510   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:10:46.754515   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:10:46.754563   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:10:46.790184   86402 cri.go:89] found id: ""
	I1104 12:10:46.790209   86402 logs.go:282] 0 containers: []
	W1104 12:10:46.790219   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:10:46.790226   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:10:46.790284   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:10:46.824840   86402 cri.go:89] found id: ""
	I1104 12:10:46.824865   86402 logs.go:282] 0 containers: []
	W1104 12:10:46.824875   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:10:46.824882   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:10:46.824952   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:10:46.857295   86402 cri.go:89] found id: ""
	I1104 12:10:46.857329   86402 logs.go:282] 0 containers: []
	W1104 12:10:46.857360   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:10:46.857369   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:10:46.857430   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:10:46.889540   86402 cri.go:89] found id: ""
	I1104 12:10:46.889571   86402 logs.go:282] 0 containers: []
	W1104 12:10:46.889582   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:10:46.889588   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:10:46.889652   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:10:46.930165   86402 cri.go:89] found id: ""
	I1104 12:10:46.930195   86402 logs.go:282] 0 containers: []
	W1104 12:10:46.930204   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:10:46.930210   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:10:46.930266   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:10:46.965964   86402 cri.go:89] found id: ""
	I1104 12:10:46.965994   86402 logs.go:282] 0 containers: []
	W1104 12:10:46.966006   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:10:46.966013   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:10:46.966060   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:10:47.002700   86402 cri.go:89] found id: ""
	I1104 12:10:47.002732   86402 logs.go:282] 0 containers: []
	W1104 12:10:47.002741   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:10:47.002749   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:10:47.002760   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:10:47.056362   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:10:47.056392   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:10:47.070447   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:10:47.070472   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:10:47.143207   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:10:47.143240   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:10:47.143256   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:10:47.223985   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:10:47.224015   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:10:49.765870   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:10:49.778288   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:10:49.778352   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:10:49.812012   86402 cri.go:89] found id: ""
	I1104 12:10:49.812044   86402 logs.go:282] 0 containers: []
	W1104 12:10:49.812054   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:10:49.812064   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:10:49.812115   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:10:49.847260   86402 cri.go:89] found id: ""
	I1104 12:10:49.847290   86402 logs.go:282] 0 containers: []
	W1104 12:10:49.847301   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:10:49.847308   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:10:49.847361   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:10:49.877397   86402 cri.go:89] found id: ""
	I1104 12:10:49.877419   86402 logs.go:282] 0 containers: []
	W1104 12:10:49.877427   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:10:49.877432   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:10:49.877486   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:10:49.912453   86402 cri.go:89] found id: ""
	I1104 12:10:49.912484   86402 logs.go:282] 0 containers: []
	W1104 12:10:49.912499   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:10:49.912506   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:10:49.912572   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:10:49.948374   86402 cri.go:89] found id: ""
	I1104 12:10:49.948404   86402 logs.go:282] 0 containers: []
	W1104 12:10:49.948416   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:10:49.948422   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:10:49.948488   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:10:49.982190   86402 cri.go:89] found id: ""
	I1104 12:10:49.982216   86402 logs.go:282] 0 containers: []
	W1104 12:10:49.982228   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:10:49.982236   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:10:49.982294   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:10:50.014396   86402 cri.go:89] found id: ""
	I1104 12:10:50.014426   86402 logs.go:282] 0 containers: []
	W1104 12:10:50.014437   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:10:50.014445   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:10:50.014507   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:10:50.051770   86402 cri.go:89] found id: ""
	I1104 12:10:50.051793   86402 logs.go:282] 0 containers: []
	W1104 12:10:50.051801   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:10:50.051809   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:10:50.051820   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:10:50.116158   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:10:50.116185   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:10:50.116202   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:10:50.194382   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:10:50.194431   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:10:50.235957   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:10:50.235983   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:10:50.290720   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:10:50.290750   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:10:50.207837   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:52.207972   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:54.208026   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:51.055965   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:53.056014   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:55.056318   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:51.850513   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:54.351193   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:52.805144   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:10:52.817686   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:10:52.817753   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:10:52.852470   86402 cri.go:89] found id: ""
	I1104 12:10:52.852492   86402 logs.go:282] 0 containers: []
	W1104 12:10:52.852546   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:10:52.852559   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:10:52.852603   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:10:52.889682   86402 cri.go:89] found id: ""
	I1104 12:10:52.889705   86402 logs.go:282] 0 containers: []
	W1104 12:10:52.889714   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:10:52.889720   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:10:52.889773   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:10:52.924490   86402 cri.go:89] found id: ""
	I1104 12:10:52.924525   86402 logs.go:282] 0 containers: []
	W1104 12:10:52.924537   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:10:52.924544   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:10:52.924604   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:10:52.957055   86402 cri.go:89] found id: ""
	I1104 12:10:52.957085   86402 logs.go:282] 0 containers: []
	W1104 12:10:52.957094   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:10:52.957099   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:10:52.957143   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:10:52.993379   86402 cri.go:89] found id: ""
	I1104 12:10:52.993411   86402 logs.go:282] 0 containers: []
	W1104 12:10:52.993423   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:10:52.993430   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:10:52.993493   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:10:53.027365   86402 cri.go:89] found id: ""
	I1104 12:10:53.027398   86402 logs.go:282] 0 containers: []
	W1104 12:10:53.027407   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:10:53.027412   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:10:53.027488   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:10:53.061048   86402 cri.go:89] found id: ""
	I1104 12:10:53.061074   86402 logs.go:282] 0 containers: []
	W1104 12:10:53.061082   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:10:53.061089   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:10:53.061163   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:10:53.101867   86402 cri.go:89] found id: ""
	I1104 12:10:53.101894   86402 logs.go:282] 0 containers: []
	W1104 12:10:53.101904   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:10:53.101915   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:10:53.101927   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:10:53.152314   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:10:53.152351   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:10:53.165630   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:10:53.165657   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:10:53.239717   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:10:53.239739   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:10:53.239753   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:10:53.318140   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:10:53.318186   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:10:55.857443   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:10:55.869524   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:10:55.869608   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:10:55.900719   86402 cri.go:89] found id: ""
	I1104 12:10:55.900743   86402 logs.go:282] 0 containers: []
	W1104 12:10:55.900753   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:10:55.900761   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:10:55.900821   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:10:55.932699   86402 cri.go:89] found id: ""
	I1104 12:10:55.932724   86402 logs.go:282] 0 containers: []
	W1104 12:10:55.932734   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:10:55.932741   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:10:55.932798   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:10:55.964729   86402 cri.go:89] found id: ""
	I1104 12:10:55.964758   86402 logs.go:282] 0 containers: []
	W1104 12:10:55.964767   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:10:55.964775   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:10:55.964823   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:10:55.997870   86402 cri.go:89] found id: ""
	I1104 12:10:55.997897   86402 logs.go:282] 0 containers: []
	W1104 12:10:55.997907   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:10:55.997915   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:10:55.997977   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:10:56.031707   86402 cri.go:89] found id: ""
	I1104 12:10:56.031736   86402 logs.go:282] 0 containers: []
	W1104 12:10:56.031744   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:10:56.031749   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:10:56.031805   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:10:56.070839   86402 cri.go:89] found id: ""
	I1104 12:10:56.070863   86402 logs.go:282] 0 containers: []
	W1104 12:10:56.070871   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:10:56.070877   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:10:56.070922   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:10:56.109364   86402 cri.go:89] found id: ""
	I1104 12:10:56.109393   86402 logs.go:282] 0 containers: []
	W1104 12:10:56.109404   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:10:56.109412   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:10:56.109474   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:10:56.143369   86402 cri.go:89] found id: ""
	I1104 12:10:56.143402   86402 logs.go:282] 0 containers: []
	W1104 12:10:56.143414   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:10:56.143424   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:10:56.143437   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:10:56.156924   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:10:56.156952   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:10:56.223624   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:10:56.223647   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:10:56.223659   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:10:56.302040   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:10:56.302082   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:10:56.343102   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:10:56.343150   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:10:56.209085   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:58.712250   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:57.056463   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:59.555744   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:56.850242   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:58.850955   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:58.896551   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:10:58.909034   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:10:58.909110   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:10:58.944520   86402 cri.go:89] found id: ""
	I1104 12:10:58.944550   86402 logs.go:282] 0 containers: []
	W1104 12:10:58.944559   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:10:58.944565   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:10:58.944612   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:10:58.980137   86402 cri.go:89] found id: ""
	I1104 12:10:58.980167   86402 logs.go:282] 0 containers: []
	W1104 12:10:58.980176   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:10:58.980181   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:10:58.980231   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:10:59.014505   86402 cri.go:89] found id: ""
	I1104 12:10:59.014536   86402 logs.go:282] 0 containers: []
	W1104 12:10:59.014545   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:10:59.014551   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:10:59.014602   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:10:59.050616   86402 cri.go:89] found id: ""
	I1104 12:10:59.050642   86402 logs.go:282] 0 containers: []
	W1104 12:10:59.050652   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:10:59.050659   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:10:59.050718   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:10:59.084328   86402 cri.go:89] found id: ""
	I1104 12:10:59.084358   86402 logs.go:282] 0 containers: []
	W1104 12:10:59.084369   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:10:59.084376   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:10:59.084449   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:10:59.116607   86402 cri.go:89] found id: ""
	I1104 12:10:59.116633   86402 logs.go:282] 0 containers: []
	W1104 12:10:59.116642   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:10:59.116649   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:10:59.116711   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:10:59.149727   86402 cri.go:89] found id: ""
	I1104 12:10:59.149754   86402 logs.go:282] 0 containers: []
	W1104 12:10:59.149765   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:10:59.149773   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:10:59.149832   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:10:59.182992   86402 cri.go:89] found id: ""
	I1104 12:10:59.183023   86402 logs.go:282] 0 containers: []
	W1104 12:10:59.183035   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:10:59.183045   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:10:59.183059   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:10:59.234826   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:10:59.234862   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:10:59.248401   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:10:59.248427   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:10:59.317143   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:10:59.317171   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:10:59.317186   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:10:59.397294   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:10:59.397336   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:11:01.208022   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:03.707297   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:01.556680   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:04.055902   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:01.350865   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:03.850510   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:01.933617   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:11:01.946458   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:11:01.946537   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:11:01.981652   86402 cri.go:89] found id: ""
	I1104 12:11:01.981682   86402 logs.go:282] 0 containers: []
	W1104 12:11:01.981693   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:11:01.981701   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:11:01.981757   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:11:02.014245   86402 cri.go:89] found id: ""
	I1104 12:11:02.014273   86402 logs.go:282] 0 containers: []
	W1104 12:11:02.014282   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:11:02.014287   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:11:02.014350   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:11:02.047386   86402 cri.go:89] found id: ""
	I1104 12:11:02.047409   86402 logs.go:282] 0 containers: []
	W1104 12:11:02.047420   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:11:02.047427   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:11:02.047488   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:11:02.086427   86402 cri.go:89] found id: ""
	I1104 12:11:02.086464   86402 logs.go:282] 0 containers: []
	W1104 12:11:02.086475   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:11:02.086483   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:11:02.086544   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:11:02.120219   86402 cri.go:89] found id: ""
	I1104 12:11:02.120246   86402 logs.go:282] 0 containers: []
	W1104 12:11:02.120255   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:11:02.120260   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:11:02.120318   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:11:02.153832   86402 cri.go:89] found id: ""
	I1104 12:11:02.153864   86402 logs.go:282] 0 containers: []
	W1104 12:11:02.153876   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:11:02.153884   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:11:02.153950   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:11:02.186237   86402 cri.go:89] found id: ""
	I1104 12:11:02.186266   86402 logs.go:282] 0 containers: []
	W1104 12:11:02.186278   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:11:02.186285   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:11:02.186351   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:11:02.219238   86402 cri.go:89] found id: ""
	I1104 12:11:02.219269   86402 logs.go:282] 0 containers: []
	W1104 12:11:02.219280   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:11:02.219290   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:11:02.219301   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:11:02.301062   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:11:02.301099   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:11:02.358585   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:11:02.358617   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:11:02.414153   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:11:02.414200   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:11:02.428429   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:11:02.428456   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:11:02.497040   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:11:04.998089   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:11:05.010890   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:11:05.010947   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:11:05.046483   86402 cri.go:89] found id: ""
	I1104 12:11:05.046513   86402 logs.go:282] 0 containers: []
	W1104 12:11:05.046523   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:11:05.046534   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:11:05.046594   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:11:05.079487   86402 cri.go:89] found id: ""
	I1104 12:11:05.079516   86402 logs.go:282] 0 containers: []
	W1104 12:11:05.079527   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:11:05.079535   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:11:05.079595   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:11:05.110968   86402 cri.go:89] found id: ""
	I1104 12:11:05.110997   86402 logs.go:282] 0 containers: []
	W1104 12:11:05.111004   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:11:05.111010   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:11:05.111057   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:11:05.143372   86402 cri.go:89] found id: ""
	I1104 12:11:05.143398   86402 logs.go:282] 0 containers: []
	W1104 12:11:05.143408   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:11:05.143415   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:11:05.143484   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:11:05.174691   86402 cri.go:89] found id: ""
	I1104 12:11:05.174717   86402 logs.go:282] 0 containers: []
	W1104 12:11:05.174730   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:11:05.174737   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:11:05.174802   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:11:05.210005   86402 cri.go:89] found id: ""
	I1104 12:11:05.210025   86402 logs.go:282] 0 containers: []
	W1104 12:11:05.210033   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:11:05.210041   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:11:05.210085   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:11:05.244874   86402 cri.go:89] found id: ""
	I1104 12:11:05.244899   86402 logs.go:282] 0 containers: []
	W1104 12:11:05.244908   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:11:05.244913   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:11:05.244956   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:11:05.276517   86402 cri.go:89] found id: ""
	I1104 12:11:05.276547   86402 logs.go:282] 0 containers: []
	W1104 12:11:05.276557   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:11:05.276568   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:11:05.276581   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:11:05.354057   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:11:05.354087   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:11:05.390848   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:11:05.390887   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:11:05.442659   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:11:05.442692   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:11:05.456290   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:11:05.456315   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:11:05.530310   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
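The cycle ending above is minikube's log collector: it probes CRI-O for each control-plane container, finds none, and falls back to node-level logs once the apiserver on localhost:8443 stops answering. A minimal sketch of reproducing the same checks by hand from a shell on the node, using only commands already shown in the log (the kubectl path under /var/lib/minikube/binaries/v1.20.0 is taken from the lines above):

    # probe for the control-plane containers the collector looks for (all empty in this run)
    sudo crictl ps -a --quiet --name=kube-apiserver
    sudo crictl ps -a --quiet --name=etcd

    # node-level fallbacks gathered when no containers are found
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u crio -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400

    # fails with "connection refused" while the apiserver is down, matching the stderr blocks above
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig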
	I1104 12:11:06.207301   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:08.208333   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:06.056314   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:08.556910   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:06.350241   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:08.350774   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:10.351274   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:08.030545   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:11:08.043598   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:11:08.043654   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:11:08.081604   86402 cri.go:89] found id: ""
	I1104 12:11:08.081634   86402 logs.go:282] 0 containers: []
	W1104 12:11:08.081644   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:11:08.081652   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:11:08.081712   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:11:08.135357   86402 cri.go:89] found id: ""
	I1104 12:11:08.135388   86402 logs.go:282] 0 containers: []
	W1104 12:11:08.135398   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:11:08.135405   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:11:08.135470   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:11:08.173275   86402 cri.go:89] found id: ""
	I1104 12:11:08.173298   86402 logs.go:282] 0 containers: []
	W1104 12:11:08.173306   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:11:08.173311   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:11:08.173371   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:11:08.213415   86402 cri.go:89] found id: ""
	I1104 12:11:08.213439   86402 logs.go:282] 0 containers: []
	W1104 12:11:08.213448   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:11:08.213454   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:11:08.213507   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:11:08.244759   86402 cri.go:89] found id: ""
	I1104 12:11:08.244791   86402 logs.go:282] 0 containers: []
	W1104 12:11:08.244802   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:11:08.244809   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:11:08.244870   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:11:08.276643   86402 cri.go:89] found id: ""
	I1104 12:11:08.276666   86402 logs.go:282] 0 containers: []
	W1104 12:11:08.276675   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:11:08.276682   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:11:08.276751   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:11:08.308425   86402 cri.go:89] found id: ""
	I1104 12:11:08.308451   86402 logs.go:282] 0 containers: []
	W1104 12:11:08.308462   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:11:08.308469   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:11:08.308527   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:11:08.340645   86402 cri.go:89] found id: ""
	I1104 12:11:08.340675   86402 logs.go:282] 0 containers: []
	W1104 12:11:08.340687   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:11:08.340698   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:11:08.340712   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:11:08.413171   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:11:08.413196   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:11:08.413214   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:11:08.496208   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:11:08.496246   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:11:08.534527   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:11:08.534560   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:11:08.583515   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:11:08.583550   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:11:11.099000   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:11:11.112158   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:11:11.112236   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:11:11.145718   86402 cri.go:89] found id: ""
	I1104 12:11:11.145748   86402 logs.go:282] 0 containers: []
	W1104 12:11:11.145758   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:11:11.145765   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:11:11.145958   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:11:11.177270   86402 cri.go:89] found id: ""
	I1104 12:11:11.177301   86402 logs.go:282] 0 containers: []
	W1104 12:11:11.177317   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:11:11.177325   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:11:11.177396   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:11:11.209696   86402 cri.go:89] found id: ""
	I1104 12:11:11.209722   86402 logs.go:282] 0 containers: []
	W1104 12:11:11.209737   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:11:11.209742   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:11:11.209789   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:11:11.244034   86402 cri.go:89] found id: ""
	I1104 12:11:11.244061   86402 logs.go:282] 0 containers: []
	W1104 12:11:11.244069   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:11:11.244078   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:11:11.244135   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:11:11.276437   86402 cri.go:89] found id: ""
	I1104 12:11:11.276462   86402 logs.go:282] 0 containers: []
	W1104 12:11:11.276470   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:11:11.276476   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:11:11.276530   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:11:11.308954   86402 cri.go:89] found id: ""
	I1104 12:11:11.308980   86402 logs.go:282] 0 containers: []
	W1104 12:11:11.308988   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:11:11.308994   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:11:11.309057   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:11:11.342175   86402 cri.go:89] found id: ""
	I1104 12:11:11.342199   86402 logs.go:282] 0 containers: []
	W1104 12:11:11.342207   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:11:11.342211   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:11:11.342266   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:11:11.374810   86402 cri.go:89] found id: ""
	I1104 12:11:11.374839   86402 logs.go:282] 0 containers: []
	W1104 12:11:11.374851   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:11:11.374860   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:11:11.374875   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:11:11.443638   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:11:11.443667   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:11:11.443681   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:11:11.526996   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:11:11.527031   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:11:11.568297   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:11:11.568325   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:11:11.616229   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:11:11.616264   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:11:10.707934   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:12.708053   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:11.055469   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:13.055645   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:15.057348   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:12.851253   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:15.350857   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:14.130707   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:11:14.143045   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:11:14.143116   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:11:14.185422   86402 cri.go:89] found id: ""
	I1104 12:11:14.185461   86402 logs.go:282] 0 containers: []
	W1104 12:11:14.185471   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:11:14.185477   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:11:14.185525   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:11:14.219890   86402 cri.go:89] found id: ""
	I1104 12:11:14.219918   86402 logs.go:282] 0 containers: []
	W1104 12:11:14.219928   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:11:14.219938   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:11:14.219985   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:11:14.253256   86402 cri.go:89] found id: ""
	I1104 12:11:14.253286   86402 logs.go:282] 0 containers: []
	W1104 12:11:14.253296   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:11:14.253304   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:11:14.253364   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:11:14.286228   86402 cri.go:89] found id: ""
	I1104 12:11:14.286259   86402 logs.go:282] 0 containers: []
	W1104 12:11:14.286271   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:11:14.286279   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:11:14.286342   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:11:14.317065   86402 cri.go:89] found id: ""
	I1104 12:11:14.317091   86402 logs.go:282] 0 containers: []
	W1104 12:11:14.317101   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:11:14.317106   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:11:14.317168   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:11:14.348540   86402 cri.go:89] found id: ""
	I1104 12:11:14.348575   86402 logs.go:282] 0 containers: []
	W1104 12:11:14.348583   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:11:14.348589   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:11:14.348647   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:11:14.380824   86402 cri.go:89] found id: ""
	I1104 12:11:14.380849   86402 logs.go:282] 0 containers: []
	W1104 12:11:14.380858   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:11:14.380863   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:11:14.380924   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:11:14.413757   86402 cri.go:89] found id: ""
	I1104 12:11:14.413785   86402 logs.go:282] 0 containers: []
	W1104 12:11:14.413796   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:11:14.413806   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:11:14.413822   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:11:14.479311   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:11:14.479336   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:11:14.479349   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:11:14.572923   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:11:14.572959   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:11:14.620277   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:11:14.620359   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:11:14.674276   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:11:14.674310   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:11:15.208704   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:17.708523   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:17.555941   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:19.556233   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:17.351751   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:19.851087   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:17.187062   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:11:17.200179   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:11:17.200260   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:11:17.232208   86402 cri.go:89] found id: ""
	I1104 12:11:17.232231   86402 logs.go:282] 0 containers: []
	W1104 12:11:17.232238   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:11:17.232244   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:11:17.232298   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:11:17.266224   86402 cri.go:89] found id: ""
	I1104 12:11:17.266248   86402 logs.go:282] 0 containers: []
	W1104 12:11:17.266257   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:11:17.266262   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:11:17.266320   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:11:17.301909   86402 cri.go:89] found id: ""
	I1104 12:11:17.301940   86402 logs.go:282] 0 containers: []
	W1104 12:11:17.301948   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:11:17.301953   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:11:17.302005   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:11:17.339493   86402 cri.go:89] found id: ""
	I1104 12:11:17.339517   86402 logs.go:282] 0 containers: []
	W1104 12:11:17.339530   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:11:17.339537   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:11:17.339600   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:11:17.373879   86402 cri.go:89] found id: ""
	I1104 12:11:17.373927   86402 logs.go:282] 0 containers: []
	W1104 12:11:17.373938   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:11:17.373945   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:11:17.373996   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:11:17.405533   86402 cri.go:89] found id: ""
	I1104 12:11:17.405562   86402 logs.go:282] 0 containers: []
	W1104 12:11:17.405573   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:11:17.405583   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:11:17.405645   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:11:17.439421   86402 cri.go:89] found id: ""
	I1104 12:11:17.439451   86402 logs.go:282] 0 containers: []
	W1104 12:11:17.439460   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:11:17.439468   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:11:17.439532   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:11:17.474573   86402 cri.go:89] found id: ""
	I1104 12:11:17.474602   86402 logs.go:282] 0 containers: []
	W1104 12:11:17.474613   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:11:17.474623   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:11:17.474636   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:11:17.524497   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:11:17.524536   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:11:17.538421   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:11:17.538460   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:11:17.607299   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:11:17.607323   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:11:17.607337   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:11:17.684181   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:11:17.684224   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:11:20.223600   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:11:20.237793   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:11:20.237865   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:11:20.279656   86402 cri.go:89] found id: ""
	I1104 12:11:20.279682   86402 logs.go:282] 0 containers: []
	W1104 12:11:20.279693   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:11:20.279700   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:11:20.279767   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:11:20.337980   86402 cri.go:89] found id: ""
	I1104 12:11:20.338009   86402 logs.go:282] 0 containers: []
	W1104 12:11:20.338020   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:11:20.338027   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:11:20.338087   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:11:20.383183   86402 cri.go:89] found id: ""
	I1104 12:11:20.383217   86402 logs.go:282] 0 containers: []
	W1104 12:11:20.383226   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:11:20.383231   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:11:20.383282   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:11:20.416470   86402 cri.go:89] found id: ""
	I1104 12:11:20.416495   86402 logs.go:282] 0 containers: []
	W1104 12:11:20.416505   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:11:20.416512   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:11:20.416570   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:11:20.451968   86402 cri.go:89] found id: ""
	I1104 12:11:20.452000   86402 logs.go:282] 0 containers: []
	W1104 12:11:20.452011   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:11:20.452017   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:11:20.452074   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:11:20.484800   86402 cri.go:89] found id: ""
	I1104 12:11:20.484823   86402 logs.go:282] 0 containers: []
	W1104 12:11:20.484831   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:11:20.484837   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:11:20.484893   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:11:20.516263   86402 cri.go:89] found id: ""
	I1104 12:11:20.516292   86402 logs.go:282] 0 containers: []
	W1104 12:11:20.516300   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:11:20.516306   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:11:20.516364   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:11:20.548616   86402 cri.go:89] found id: ""
	I1104 12:11:20.548640   86402 logs.go:282] 0 containers: []
	W1104 12:11:20.548651   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:11:20.548661   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:11:20.548674   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:11:20.599338   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:11:20.599368   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:11:20.613116   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:11:20.613148   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:11:20.678898   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:11:20.678924   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:11:20.678936   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:11:20.757570   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:11:20.757606   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:11:20.206649   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:22.207379   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:24.207579   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:22.056670   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:24.555101   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:22.350891   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:24.351318   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
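The interleaved pod_ready lines come from the other test profiles polling their metrics-server pods, which never report Ready. A hedged sketch of running the same condition check by hand with plain kubectl (assuming the matching profile's kubeconfig/context is active; the pod name is copied from the log and will differ per run):

    # prints "True" once the pod's Ready condition is met; stays "False" in this run
    kubectl -n kube-system get pod metrics-server-6867b74b74-knfd4 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'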
	I1104 12:11:23.293912   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:11:23.307037   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:11:23.307110   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:11:23.341161   86402 cri.go:89] found id: ""
	I1104 12:11:23.341186   86402 logs.go:282] 0 containers: []
	W1104 12:11:23.341195   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:11:23.341200   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:11:23.341277   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:11:23.373462   86402 cri.go:89] found id: ""
	I1104 12:11:23.373491   86402 logs.go:282] 0 containers: []
	W1104 12:11:23.373503   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:11:23.373510   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:11:23.373568   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:11:23.404439   86402 cri.go:89] found id: ""
	I1104 12:11:23.404471   86402 logs.go:282] 0 containers: []
	W1104 12:11:23.404482   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:11:23.404489   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:11:23.404548   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:11:23.435224   86402 cri.go:89] found id: ""
	I1104 12:11:23.435256   86402 logs.go:282] 0 containers: []
	W1104 12:11:23.435267   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:11:23.435274   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:11:23.435336   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:11:23.472593   86402 cri.go:89] found id: ""
	I1104 12:11:23.472622   86402 logs.go:282] 0 containers: []
	W1104 12:11:23.472633   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:11:23.472641   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:11:23.472693   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:11:23.503413   86402 cri.go:89] found id: ""
	I1104 12:11:23.503438   86402 logs.go:282] 0 containers: []
	W1104 12:11:23.503447   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:11:23.503454   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:11:23.503516   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:11:23.537582   86402 cri.go:89] found id: ""
	I1104 12:11:23.537610   86402 logs.go:282] 0 containers: []
	W1104 12:11:23.537621   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:11:23.537628   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:11:23.537689   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:11:23.573799   86402 cri.go:89] found id: ""
	I1104 12:11:23.573824   86402 logs.go:282] 0 containers: []
	W1104 12:11:23.573831   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:11:23.573838   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:11:23.573851   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:11:23.649239   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:11:23.649273   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:11:23.686518   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:11:23.686548   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:11:23.738955   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:11:23.738987   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:11:23.751909   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:11:23.751935   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:11:23.827244   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:11:26.327902   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:11:26.339708   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:11:26.339784   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:11:26.369615   86402 cri.go:89] found id: ""
	I1104 12:11:26.369644   86402 logs.go:282] 0 containers: []
	W1104 12:11:26.369653   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:11:26.369659   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:11:26.369715   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:11:26.402027   86402 cri.go:89] found id: ""
	I1104 12:11:26.402056   86402 logs.go:282] 0 containers: []
	W1104 12:11:26.402065   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:11:26.402070   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:11:26.402123   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:11:26.433483   86402 cri.go:89] found id: ""
	I1104 12:11:26.433512   86402 logs.go:282] 0 containers: []
	W1104 12:11:26.433523   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:11:26.433529   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:11:26.433637   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:11:26.466403   86402 cri.go:89] found id: ""
	I1104 12:11:26.466442   86402 logs.go:282] 0 containers: []
	W1104 12:11:26.466453   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:11:26.466468   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:11:26.466524   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:11:26.499818   86402 cri.go:89] found id: ""
	I1104 12:11:26.499853   86402 logs.go:282] 0 containers: []
	W1104 12:11:26.499864   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:11:26.499871   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:11:26.499930   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:11:26.537782   86402 cri.go:89] found id: ""
	I1104 12:11:26.537809   86402 logs.go:282] 0 containers: []
	W1104 12:11:26.537822   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:11:26.537830   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:11:26.537890   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:11:26.574091   86402 cri.go:89] found id: ""
	I1104 12:11:26.574120   86402 logs.go:282] 0 containers: []
	W1104 12:11:26.574131   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:11:26.574138   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:11:26.574199   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:11:26.607554   86402 cri.go:89] found id: ""
	I1104 12:11:26.607584   86402 logs.go:282] 0 containers: []
	W1104 12:11:26.607596   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:11:26.607606   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:11:26.607620   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:11:26.657405   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:11:26.657443   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:11:26.670022   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:11:26.670046   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1104 12:11:26.707958   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:29.207380   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:26.556568   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:28.557276   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:26.852761   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:29.351303   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	W1104 12:11:26.736238   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:11:26.736266   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:11:26.736278   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:11:26.816277   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:11:26.816309   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:11:29.357639   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:11:29.371116   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:11:29.371204   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:11:29.405569   86402 cri.go:89] found id: ""
	I1104 12:11:29.405595   86402 logs.go:282] 0 containers: []
	W1104 12:11:29.405604   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:11:29.405611   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:11:29.405668   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:11:29.435669   86402 cri.go:89] found id: ""
	I1104 12:11:29.435697   86402 logs.go:282] 0 containers: []
	W1104 12:11:29.435709   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:11:29.435716   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:11:29.435781   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:11:29.476208   86402 cri.go:89] found id: ""
	I1104 12:11:29.476236   86402 logs.go:282] 0 containers: []
	W1104 12:11:29.476245   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:11:29.476251   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:11:29.476305   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:11:29.511446   86402 cri.go:89] found id: ""
	I1104 12:11:29.511474   86402 logs.go:282] 0 containers: []
	W1104 12:11:29.511483   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:11:29.511489   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:11:29.511541   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:11:29.543714   86402 cri.go:89] found id: ""
	I1104 12:11:29.543742   86402 logs.go:282] 0 containers: []
	W1104 12:11:29.543754   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:11:29.543761   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:11:29.543840   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:11:29.577429   86402 cri.go:89] found id: ""
	I1104 12:11:29.577456   86402 logs.go:282] 0 containers: []
	W1104 12:11:29.577466   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:11:29.577473   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:11:29.577534   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:11:29.608430   86402 cri.go:89] found id: ""
	I1104 12:11:29.608457   86402 logs.go:282] 0 containers: []
	W1104 12:11:29.608475   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:11:29.608483   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:11:29.608539   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:11:29.640029   86402 cri.go:89] found id: ""
	I1104 12:11:29.640057   86402 logs.go:282] 0 containers: []
	W1104 12:11:29.640068   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:11:29.640078   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:11:29.640092   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:11:29.691170   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:11:29.691202   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:11:29.704949   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:11:29.704987   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:11:29.766856   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:11:29.766884   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:11:29.766898   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:11:29.847487   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:11:29.847525   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:11:31.208725   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:33.709593   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:30.557500   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:33.056569   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:31.851101   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:34.350356   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:32.382925   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:11:32.395889   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:11:32.395943   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:11:32.428711   86402 cri.go:89] found id: ""
	I1104 12:11:32.428736   86402 logs.go:282] 0 containers: []
	W1104 12:11:32.428749   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:11:32.428755   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:11:32.428810   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:11:32.463269   86402 cri.go:89] found id: ""
	I1104 12:11:32.463295   86402 logs.go:282] 0 containers: []
	W1104 12:11:32.463307   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:11:32.463313   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:11:32.463372   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:11:32.496098   86402 cri.go:89] found id: ""
	I1104 12:11:32.496125   86402 logs.go:282] 0 containers: []
	W1104 12:11:32.496135   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:11:32.496142   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:11:32.496213   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:11:32.528729   86402 cri.go:89] found id: ""
	I1104 12:11:32.528760   86402 logs.go:282] 0 containers: []
	W1104 12:11:32.528771   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:11:32.528778   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:11:32.528860   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:11:32.567290   86402 cri.go:89] found id: ""
	I1104 12:11:32.567321   86402 logs.go:282] 0 containers: []
	W1104 12:11:32.567332   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:11:32.567338   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:11:32.567397   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:11:32.608932   86402 cri.go:89] found id: ""
	I1104 12:11:32.608962   86402 logs.go:282] 0 containers: []
	W1104 12:11:32.608973   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:11:32.608980   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:11:32.609037   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:11:32.641128   86402 cri.go:89] found id: ""
	I1104 12:11:32.641155   86402 logs.go:282] 0 containers: []
	W1104 12:11:32.641164   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:11:32.641171   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:11:32.641239   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:11:32.675651   86402 cri.go:89] found id: ""
	I1104 12:11:32.675682   86402 logs.go:282] 0 containers: []
	W1104 12:11:32.675694   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:11:32.675704   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:11:32.675719   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:11:32.742369   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:11:32.742406   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:11:32.742419   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:11:32.823371   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:11:32.823412   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:11:32.862243   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:11:32.862270   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:11:32.910961   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:11:32.910987   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:11:35.425742   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:11:35.438553   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:11:35.438615   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:11:35.475160   86402 cri.go:89] found id: ""
	I1104 12:11:35.475189   86402 logs.go:282] 0 containers: []
	W1104 12:11:35.475201   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:11:35.475209   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:11:35.475267   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:11:35.517193   86402 cri.go:89] found id: ""
	I1104 12:11:35.517239   86402 logs.go:282] 0 containers: []
	W1104 12:11:35.517252   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:11:35.517260   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:11:35.517329   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:11:35.552941   86402 cri.go:89] found id: ""
	I1104 12:11:35.552967   86402 logs.go:282] 0 containers: []
	W1104 12:11:35.552978   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:11:35.552985   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:11:35.553056   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:11:35.589960   86402 cri.go:89] found id: ""
	I1104 12:11:35.589983   86402 logs.go:282] 0 containers: []
	W1104 12:11:35.589994   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:11:35.590001   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:11:35.590063   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:11:35.624546   86402 cri.go:89] found id: ""
	I1104 12:11:35.624575   86402 logs.go:282] 0 containers: []
	W1104 12:11:35.624587   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:11:35.624595   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:11:35.624655   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:11:35.657855   86402 cri.go:89] found id: ""
	I1104 12:11:35.657885   86402 logs.go:282] 0 containers: []
	W1104 12:11:35.657896   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:11:35.657903   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:11:35.657957   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:11:35.691465   86402 cri.go:89] found id: ""
	I1104 12:11:35.691498   86402 logs.go:282] 0 containers: []
	W1104 12:11:35.691509   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:11:35.691516   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:11:35.691587   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:11:35.727520   86402 cri.go:89] found id: ""
	I1104 12:11:35.727548   86402 logs.go:282] 0 containers: []
	W1104 12:11:35.727558   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:11:35.727569   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:11:35.727584   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:11:35.777876   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:11:35.777912   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:11:35.790790   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:11:35.790817   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:11:35.856780   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:11:35.856805   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:11:35.856819   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:11:35.936769   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:11:35.936812   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:11:36.207096   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:38.707776   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:35.556694   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:38.055778   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:36.850946   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:39.350058   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:38.474827   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:11:38.488151   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:11:38.488221   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:11:38.523010   86402 cri.go:89] found id: ""
	I1104 12:11:38.523042   86402 logs.go:282] 0 containers: []
	W1104 12:11:38.523053   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:11:38.523061   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:11:38.523117   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:11:38.558065   86402 cri.go:89] found id: ""
	I1104 12:11:38.558093   86402 logs.go:282] 0 containers: []
	W1104 12:11:38.558102   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:11:38.558107   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:11:38.558153   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:11:38.590676   86402 cri.go:89] found id: ""
	I1104 12:11:38.590704   86402 logs.go:282] 0 containers: []
	W1104 12:11:38.590715   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:11:38.590723   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:11:38.590780   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:11:38.623762   86402 cri.go:89] found id: ""
	I1104 12:11:38.623793   86402 logs.go:282] 0 containers: []
	W1104 12:11:38.623804   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:11:38.623811   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:11:38.623870   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:11:38.655918   86402 cri.go:89] found id: ""
	I1104 12:11:38.655947   86402 logs.go:282] 0 containers: []
	W1104 12:11:38.655958   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:11:38.655966   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:11:38.656028   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:11:38.691200   86402 cri.go:89] found id: ""
	I1104 12:11:38.691228   86402 logs.go:282] 0 containers: []
	W1104 12:11:38.691238   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:11:38.691245   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:11:38.691302   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:11:38.724725   86402 cri.go:89] found id: ""
	I1104 12:11:38.724748   86402 logs.go:282] 0 containers: []
	W1104 12:11:38.724756   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:11:38.724761   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:11:38.724819   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:11:38.756333   86402 cri.go:89] found id: ""
	I1104 12:11:38.756360   86402 logs.go:282] 0 containers: []
	W1104 12:11:38.756370   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:11:38.756381   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:11:38.756395   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:11:38.807722   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:11:38.807756   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:11:38.821055   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:11:38.821079   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:11:38.886629   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:11:38.886656   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:11:38.886671   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:11:38.960958   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:11:38.960999   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:11:41.503471   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:11:41.515994   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:11:41.516065   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:11:41.549936   86402 cri.go:89] found id: ""
	I1104 12:11:41.549960   86402 logs.go:282] 0 containers: []
	W1104 12:11:41.549968   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:11:41.549975   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:11:41.550033   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:11:41.584565   86402 cri.go:89] found id: ""
	I1104 12:11:41.584590   86402 logs.go:282] 0 containers: []
	W1104 12:11:41.584602   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:11:41.584610   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:11:41.584660   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:11:41.616427   86402 cri.go:89] found id: ""
	I1104 12:11:41.616450   86402 logs.go:282] 0 containers: []
	W1104 12:11:41.616458   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:11:41.616463   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:11:41.616510   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:11:41.650835   86402 cri.go:89] found id: ""
	I1104 12:11:41.650864   86402 logs.go:282] 0 containers: []
	W1104 12:11:41.650875   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:11:41.650882   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:11:41.650946   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:11:40.707926   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:43.207969   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:40.555616   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:42.555839   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:44.556749   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:41.351131   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:43.851925   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:41.685899   86402 cri.go:89] found id: ""
	I1104 12:11:41.685921   86402 logs.go:282] 0 containers: []
	W1104 12:11:41.685928   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:11:41.685934   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:11:41.685979   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:11:41.718730   86402 cri.go:89] found id: ""
	I1104 12:11:41.718757   86402 logs.go:282] 0 containers: []
	W1104 12:11:41.718773   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:11:41.718782   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:11:41.718837   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:11:41.748843   86402 cri.go:89] found id: ""
	I1104 12:11:41.748875   86402 logs.go:282] 0 containers: []
	W1104 12:11:41.748887   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:11:41.748895   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:11:41.748963   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:11:41.780225   86402 cri.go:89] found id: ""
	I1104 12:11:41.780251   86402 logs.go:282] 0 containers: []
	W1104 12:11:41.780260   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:11:41.780268   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:11:41.780285   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:11:41.830864   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:11:41.830893   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:11:41.844252   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:11:41.844279   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:11:41.908514   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:11:41.908542   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:11:41.908554   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:11:41.988545   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:11:41.988582   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:11:44.527641   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:11:44.540026   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:11:44.540108   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:11:44.574530   86402 cri.go:89] found id: ""
	I1104 12:11:44.574559   86402 logs.go:282] 0 containers: []
	W1104 12:11:44.574570   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:11:44.574577   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:11:44.574638   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:11:44.606073   86402 cri.go:89] found id: ""
	I1104 12:11:44.606103   86402 logs.go:282] 0 containers: []
	W1104 12:11:44.606114   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:11:44.606121   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:11:44.606185   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:11:44.639750   86402 cri.go:89] found id: ""
	I1104 12:11:44.639775   86402 logs.go:282] 0 containers: []
	W1104 12:11:44.639784   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:11:44.639792   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:11:44.639850   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:11:44.673528   86402 cri.go:89] found id: ""
	I1104 12:11:44.673557   86402 logs.go:282] 0 containers: []
	W1104 12:11:44.673565   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:11:44.673573   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:11:44.673625   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:11:44.705928   86402 cri.go:89] found id: ""
	I1104 12:11:44.705956   86402 logs.go:282] 0 containers: []
	W1104 12:11:44.705966   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:11:44.705973   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:11:44.706032   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:11:44.736779   86402 cri.go:89] found id: ""
	I1104 12:11:44.736811   86402 logs.go:282] 0 containers: []
	W1104 12:11:44.736822   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:11:44.736830   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:11:44.736886   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:11:44.769929   86402 cri.go:89] found id: ""
	I1104 12:11:44.769956   86402 logs.go:282] 0 containers: []
	W1104 12:11:44.769964   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:11:44.769970   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:11:44.770015   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:11:44.800818   86402 cri.go:89] found id: ""
	I1104 12:11:44.800846   86402 logs.go:282] 0 containers: []
	W1104 12:11:44.800855   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:11:44.800863   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:11:44.800873   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:11:44.853610   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:11:44.853641   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:11:44.866656   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:11:44.866683   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:11:44.936386   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:11:44.936412   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:11:44.936425   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:11:45.011789   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:11:45.011823   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:11:45.707030   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:47.707464   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:49.711329   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:46.557112   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:49.055647   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:46.351055   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:48.850134   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:50.851867   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:47.548672   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:11:47.563082   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:11:47.563157   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:11:47.598722   86402 cri.go:89] found id: ""
	I1104 12:11:47.598748   86402 logs.go:282] 0 containers: []
	W1104 12:11:47.598756   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:11:47.598762   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:11:47.598809   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:11:47.633376   86402 cri.go:89] found id: ""
	I1104 12:11:47.633412   86402 logs.go:282] 0 containers: []
	W1104 12:11:47.633421   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:11:47.633428   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:11:47.633486   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:11:47.666059   86402 cri.go:89] found id: ""
	I1104 12:11:47.666087   86402 logs.go:282] 0 containers: []
	W1104 12:11:47.666095   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:11:47.666101   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:11:47.666147   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:11:47.700659   86402 cri.go:89] found id: ""
	I1104 12:11:47.700690   86402 logs.go:282] 0 containers: []
	W1104 12:11:47.700704   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:11:47.700711   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:11:47.700771   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:11:47.732901   86402 cri.go:89] found id: ""
	I1104 12:11:47.732927   86402 logs.go:282] 0 containers: []
	W1104 12:11:47.732934   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:11:47.732940   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:11:47.732984   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:11:47.765371   86402 cri.go:89] found id: ""
	I1104 12:11:47.765398   86402 logs.go:282] 0 containers: []
	W1104 12:11:47.765418   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:11:47.765425   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:11:47.765487   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:11:47.797043   86402 cri.go:89] found id: ""
	I1104 12:11:47.797077   86402 logs.go:282] 0 containers: []
	W1104 12:11:47.797089   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:11:47.797096   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:11:47.797159   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:11:47.828140   86402 cri.go:89] found id: ""
	I1104 12:11:47.828172   86402 logs.go:282] 0 containers: []
	W1104 12:11:47.828184   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:11:47.828194   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:11:47.828208   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:11:47.911398   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:11:47.911434   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:11:47.948042   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:11:47.948071   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:11:47.999603   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:11:47.999638   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:11:48.013818   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:11:48.013856   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:11:48.082679   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:11:50.583325   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:11:50.595272   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:11:50.595346   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:11:50.630857   86402 cri.go:89] found id: ""
	I1104 12:11:50.630883   86402 logs.go:282] 0 containers: []
	W1104 12:11:50.630892   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:11:50.630899   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:11:50.630965   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:11:50.663025   86402 cri.go:89] found id: ""
	I1104 12:11:50.663049   86402 logs.go:282] 0 containers: []
	W1104 12:11:50.663058   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:11:50.663063   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:11:50.663109   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:11:50.695371   86402 cri.go:89] found id: ""
	I1104 12:11:50.695402   86402 logs.go:282] 0 containers: []
	W1104 12:11:50.695413   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:11:50.695421   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:11:50.695480   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:11:50.728805   86402 cri.go:89] found id: ""
	I1104 12:11:50.728827   86402 logs.go:282] 0 containers: []
	W1104 12:11:50.728836   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:11:50.728841   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:11:50.728902   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:11:50.762837   86402 cri.go:89] found id: ""
	I1104 12:11:50.762868   86402 logs.go:282] 0 containers: []
	W1104 12:11:50.762878   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:11:50.762885   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:11:50.762941   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:11:50.802531   86402 cri.go:89] found id: ""
	I1104 12:11:50.802556   86402 logs.go:282] 0 containers: []
	W1104 12:11:50.802564   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:11:50.802569   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:11:50.802613   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:11:50.835124   86402 cri.go:89] found id: ""
	I1104 12:11:50.835161   86402 logs.go:282] 0 containers: []
	W1104 12:11:50.835173   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:11:50.835180   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:11:50.835234   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:11:50.869265   86402 cri.go:89] found id: ""
	I1104 12:11:50.869295   86402 logs.go:282] 0 containers: []
	W1104 12:11:50.869308   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:11:50.869318   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:11:50.869330   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:11:50.919371   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:11:50.919405   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:11:50.932165   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:11:50.932195   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:11:50.993935   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:11:50.993959   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:11:50.993972   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:11:51.071816   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:11:51.071848   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:11:52.208101   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:54.707467   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:51.056129   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:53.057025   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:53.349902   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:55.350302   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:53.608347   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:11:53.620842   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:11:53.620902   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:11:53.652870   86402 cri.go:89] found id: ""
	I1104 12:11:53.652896   86402 logs.go:282] 0 containers: []
	W1104 12:11:53.652909   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:11:53.652917   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:11:53.652980   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:11:53.684842   86402 cri.go:89] found id: ""
	I1104 12:11:53.684878   86402 logs.go:282] 0 containers: []
	W1104 12:11:53.684889   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:11:53.684897   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:11:53.684956   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:11:53.722505   86402 cri.go:89] found id: ""
	I1104 12:11:53.722531   86402 logs.go:282] 0 containers: []
	W1104 12:11:53.722539   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:11:53.722544   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:11:53.722603   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:11:53.753831   86402 cri.go:89] found id: ""
	I1104 12:11:53.753858   86402 logs.go:282] 0 containers: []
	W1104 12:11:53.753866   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:11:53.753872   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:11:53.753918   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:11:53.786112   86402 cri.go:89] found id: ""
	I1104 12:11:53.786139   86402 logs.go:282] 0 containers: []
	W1104 12:11:53.786150   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:11:53.786157   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:11:53.786218   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:11:53.820446   86402 cri.go:89] found id: ""
	I1104 12:11:53.820472   86402 logs.go:282] 0 containers: []
	W1104 12:11:53.820487   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:11:53.820493   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:11:53.820552   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:11:53.855631   86402 cri.go:89] found id: ""
	I1104 12:11:53.855655   86402 logs.go:282] 0 containers: []
	W1104 12:11:53.855665   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:11:53.855673   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:11:53.855727   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:11:53.887953   86402 cri.go:89] found id: ""
	I1104 12:11:53.887983   86402 logs.go:282] 0 containers: []
	W1104 12:11:53.887994   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:11:53.888004   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:11:53.888023   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:11:53.954408   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:11:53.954430   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:11:53.954442   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:11:54.028549   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:11:54.028584   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:11:54.070869   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:11:54.070895   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:11:54.123676   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:11:54.123715   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:11:56.639480   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:11:56.652651   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:11:56.652709   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:11:56.708211   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:59.207617   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:55.555992   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:58.056271   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:57.350474   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:59.850830   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:56.689397   86402 cri.go:89] found id: ""
	I1104 12:11:56.689425   86402 logs.go:282] 0 containers: []
	W1104 12:11:56.689443   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:11:56.689452   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:11:56.689517   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:11:56.725197   86402 cri.go:89] found id: ""
	I1104 12:11:56.725234   86402 logs.go:282] 0 containers: []
	W1104 12:11:56.725246   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:11:56.725254   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:11:56.725308   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:11:56.759043   86402 cri.go:89] found id: ""
	I1104 12:11:56.759073   86402 logs.go:282] 0 containers: []
	W1104 12:11:56.759084   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:11:56.759090   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:11:56.759141   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:11:56.792268   86402 cri.go:89] found id: ""
	I1104 12:11:56.792296   86402 logs.go:282] 0 containers: []
	W1104 12:11:56.792307   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:11:56.792314   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:11:56.792375   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:11:56.823668   86402 cri.go:89] found id: ""
	I1104 12:11:56.823692   86402 logs.go:282] 0 containers: []
	W1104 12:11:56.823702   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:11:56.823709   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:11:56.823769   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:11:56.861812   86402 cri.go:89] found id: ""
	I1104 12:11:56.861837   86402 logs.go:282] 0 containers: []
	W1104 12:11:56.861845   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:11:56.861851   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:11:56.861902   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:11:56.894037   86402 cri.go:89] found id: ""
	I1104 12:11:56.894067   86402 logs.go:282] 0 containers: []
	W1104 12:11:56.894075   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:11:56.894080   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:11:56.894133   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:11:56.925603   86402 cri.go:89] found id: ""
	I1104 12:11:56.925634   86402 logs.go:282] 0 containers: []
	W1104 12:11:56.925646   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:11:56.925656   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:11:56.925669   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:11:56.961504   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:11:56.961530   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:11:57.012666   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:11:57.012700   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:11:57.025887   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:11:57.025921   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:11:57.097219   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:11:57.097257   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:11:57.097272   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:11:59.671179   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:11:59.684642   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:11:59.684718   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:11:59.721599   86402 cri.go:89] found id: ""
	I1104 12:11:59.721622   86402 logs.go:282] 0 containers: []
	W1104 12:11:59.721631   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:11:59.721640   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:11:59.721693   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:11:59.757423   86402 cri.go:89] found id: ""
	I1104 12:11:59.757453   86402 logs.go:282] 0 containers: []
	W1104 12:11:59.757461   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:11:59.757466   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:11:59.757525   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:11:59.794036   86402 cri.go:89] found id: ""
	I1104 12:11:59.794071   86402 logs.go:282] 0 containers: []
	W1104 12:11:59.794081   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:11:59.794089   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:11:59.794148   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:11:59.830098   86402 cri.go:89] found id: ""
	I1104 12:11:59.830123   86402 logs.go:282] 0 containers: []
	W1104 12:11:59.830134   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:11:59.830142   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:11:59.830207   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:11:59.867791   86402 cri.go:89] found id: ""
	I1104 12:11:59.867815   86402 logs.go:282] 0 containers: []
	W1104 12:11:59.867823   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:11:59.867828   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:11:59.867879   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:11:59.903579   86402 cri.go:89] found id: ""
	I1104 12:11:59.903607   86402 logs.go:282] 0 containers: []
	W1104 12:11:59.903614   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:11:59.903620   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:11:59.903667   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:11:59.940955   86402 cri.go:89] found id: ""
	I1104 12:11:59.940977   86402 logs.go:282] 0 containers: []
	W1104 12:11:59.940984   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:11:59.940989   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:11:59.941034   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:11:59.977626   86402 cri.go:89] found id: ""
	I1104 12:11:59.977653   86402 logs.go:282] 0 containers: []
	W1104 12:11:59.977663   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:11:59.977674   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:11:59.977687   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:12:00.032280   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:12:00.032312   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:12:00.045965   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:12:00.045991   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:12:00.123578   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:12:00.123608   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:12:00.123625   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:12:00.208309   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:12:00.208340   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:12:01.707661   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:04.207140   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:00.555683   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:02.555797   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:04.556558   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:01.851646   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:01.851680   85759 pod_ready.go:82] duration metric: took 4m0.007179751s for pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace to be "Ready" ...
	E1104 12:12:01.851691   85759 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1104 12:12:01.851701   85759 pod_ready.go:39] duration metric: took 4m4.052369029s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1104 12:12:01.851721   85759 api_server.go:52] waiting for apiserver process to appear ...
	I1104 12:12:01.851752   85759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:12:01.851805   85759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:12:01.891029   85759 cri.go:89] found id: "6e7999c6e5a24f7d8706b6722e1fe66996ecec4550d4a901729cac1f3f108f28"
	I1104 12:12:01.891056   85759 cri.go:89] found id: ""
	I1104 12:12:01.891066   85759 logs.go:282] 1 containers: [6e7999c6e5a24f7d8706b6722e1fe66996ecec4550d4a901729cac1f3f108f28]
	I1104 12:12:01.891128   85759 ssh_runner.go:195] Run: which crictl
	I1104 12:12:01.895134   85759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:12:01.895243   85759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:12:01.928058   85759 cri.go:89] found id: "5b575c045ea6e47b4e5a1384b6b473570efefa6f645c9c75d1eedf037230bc06"
	I1104 12:12:01.928081   85759 cri.go:89] found id: ""
	I1104 12:12:01.928089   85759 logs.go:282] 1 containers: [5b575c045ea6e47b4e5a1384b6b473570efefa6f645c9c75d1eedf037230bc06]
	I1104 12:12:01.928134   85759 ssh_runner.go:195] Run: which crictl
	I1104 12:12:01.931906   85759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:12:01.931974   85759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:12:01.972023   85759 cri.go:89] found id: "d1f0c1ed5e8911dccb619576be93e1623936a2b05ccc1e80d52ffe8ba1292d27"
	I1104 12:12:01.972052   85759 cri.go:89] found id: ""
	I1104 12:12:01.972062   85759 logs.go:282] 1 containers: [d1f0c1ed5e8911dccb619576be93e1623936a2b05ccc1e80d52ffe8ba1292d27]
	I1104 12:12:01.972116   85759 ssh_runner.go:195] Run: which crictl
	I1104 12:12:01.980811   85759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:12:01.980894   85759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:12:02.024013   85759 cri.go:89] found id: "a5a0cb5f09f9920c57a05bcc9c16cdcab6806c42458b625d4d075a9d1bc3f80f"
	I1104 12:12:02.024038   85759 cri.go:89] found id: ""
	I1104 12:12:02.024046   85759 logs.go:282] 1 containers: [a5a0cb5f09f9920c57a05bcc9c16cdcab6806c42458b625d4d075a9d1bc3f80f]
	I1104 12:12:02.024108   85759 ssh_runner.go:195] Run: which crictl
	I1104 12:12:02.028571   85759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:12:02.028641   85759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:12:02.063545   85759 cri.go:89] found id: "512d8563ff2ef5d1f8021fa8815d00d2705c8df3b079a23ee6fc909af1c980f0"
	I1104 12:12:02.063570   85759 cri.go:89] found id: ""
	I1104 12:12:02.063580   85759 logs.go:282] 1 containers: [512d8563ff2ef5d1f8021fa8815d00d2705c8df3b079a23ee6fc909af1c980f0]
	I1104 12:12:02.063635   85759 ssh_runner.go:195] Run: which crictl
	I1104 12:12:02.067582   85759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:12:02.067652   85759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:12:02.100954   85759 cri.go:89] found id: "5751adaa2cf784f7619b281e11898b312ecc8186b36029e9e8a3b8e484cd703b"
	I1104 12:12:02.100979   85759 cri.go:89] found id: ""
	I1104 12:12:02.100989   85759 logs.go:282] 1 containers: [5751adaa2cf784f7619b281e11898b312ecc8186b36029e9e8a3b8e484cd703b]
	I1104 12:12:02.101038   85759 ssh_runner.go:195] Run: which crictl
	I1104 12:12:02.105121   85759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:12:02.105182   85759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:12:02.137206   85759 cri.go:89] found id: ""
	I1104 12:12:02.137249   85759 logs.go:282] 0 containers: []
	W1104 12:12:02.137260   85759 logs.go:284] No container was found matching "kindnet"
	I1104 12:12:02.137268   85759 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1104 12:12:02.137317   85759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1104 12:12:02.171499   85759 cri.go:89] found id: "95a9eb50a127a39e6276205bf440bcbc662ba25b24e029dc1a667f48f8481dde"
	I1104 12:12:02.171520   85759 cri.go:89] found id: "c7558f4e108716f1710271ea75be748ffd928b609788942a3658f0d2237ebcc7"
	I1104 12:12:02.171526   85759 cri.go:89] found id: ""
	I1104 12:12:02.171535   85759 logs.go:282] 2 containers: [95a9eb50a127a39e6276205bf440bcbc662ba25b24e029dc1a667f48f8481dde c7558f4e108716f1710271ea75be748ffd928b609788942a3658f0d2237ebcc7]
	I1104 12:12:02.171587   85759 ssh_runner.go:195] Run: which crictl
	I1104 12:12:02.175646   85759 ssh_runner.go:195] Run: which crictl
	I1104 12:12:02.179066   85759 logs.go:123] Gathering logs for kubelet ...
	I1104 12:12:02.179084   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:12:02.249087   85759 logs.go:123] Gathering logs for dmesg ...
	I1104 12:12:02.249126   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:12:02.262666   85759 logs.go:123] Gathering logs for kube-apiserver [6e7999c6e5a24f7d8706b6722e1fe66996ecec4550d4a901729cac1f3f108f28] ...
	I1104 12:12:02.262692   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6e7999c6e5a24f7d8706b6722e1fe66996ecec4550d4a901729cac1f3f108f28"
	I1104 12:12:02.316826   85759 logs.go:123] Gathering logs for kube-scheduler [a5a0cb5f09f9920c57a05bcc9c16cdcab6806c42458b625d4d075a9d1bc3f80f] ...
	I1104 12:12:02.316856   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a5a0cb5f09f9920c57a05bcc9c16cdcab6806c42458b625d4d075a9d1bc3f80f"
	I1104 12:12:02.351741   85759 logs.go:123] Gathering logs for kube-controller-manager [5751adaa2cf784f7619b281e11898b312ecc8186b36029e9e8a3b8e484cd703b] ...
	I1104 12:12:02.351766   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5751adaa2cf784f7619b281e11898b312ecc8186b36029e9e8a3b8e484cd703b"
	I1104 12:12:02.400377   85759 logs.go:123] Gathering logs for storage-provisioner [c7558f4e108716f1710271ea75be748ffd928b609788942a3658f0d2237ebcc7] ...
	I1104 12:12:02.400409   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c7558f4e108716f1710271ea75be748ffd928b609788942a3658f0d2237ebcc7"
	I1104 12:12:02.448029   85759 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:12:02.448059   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:12:02.975331   85759 logs.go:123] Gathering logs for container status ...
	I1104 12:12:02.975367   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:12:03.026697   85759 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:12:03.026739   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1104 12:12:03.140704   85759 logs.go:123] Gathering logs for etcd [5b575c045ea6e47b4e5a1384b6b473570efefa6f645c9c75d1eedf037230bc06] ...
	I1104 12:12:03.140753   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b575c045ea6e47b4e5a1384b6b473570efefa6f645c9c75d1eedf037230bc06"
	I1104 12:12:03.192394   85759 logs.go:123] Gathering logs for coredns [d1f0c1ed5e8911dccb619576be93e1623936a2b05ccc1e80d52ffe8ba1292d27] ...
	I1104 12:12:03.192427   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d1f0c1ed5e8911dccb619576be93e1623936a2b05ccc1e80d52ffe8ba1292d27"
	I1104 12:12:03.236040   85759 logs.go:123] Gathering logs for kube-proxy [512d8563ff2ef5d1f8021fa8815d00d2705c8df3b079a23ee6fc909af1c980f0] ...
	I1104 12:12:03.236071   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 512d8563ff2ef5d1f8021fa8815d00d2705c8df3b079a23ee6fc909af1c980f0"
	I1104 12:12:03.275166   85759 logs.go:123] Gathering logs for storage-provisioner [95a9eb50a127a39e6276205bf440bcbc662ba25b24e029dc1a667f48f8481dde] ...
	I1104 12:12:03.275194   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 95a9eb50a127a39e6276205bf440bcbc662ba25b24e029dc1a667f48f8481dde"
	I1104 12:12:05.813333   85759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:12:05.827697   85759 api_server.go:72] duration metric: took 4m15.741105379s to wait for apiserver process to appear ...
	I1104 12:12:05.827725   85759 api_server.go:88] waiting for apiserver healthz status ...
	I1104 12:12:05.827763   85759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:12:05.827826   85759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:12:05.869552   85759 cri.go:89] found id: "6e7999c6e5a24f7d8706b6722e1fe66996ecec4550d4a901729cac1f3f108f28"
	I1104 12:12:05.869580   85759 cri.go:89] found id: ""
	I1104 12:12:05.869590   85759 logs.go:282] 1 containers: [6e7999c6e5a24f7d8706b6722e1fe66996ecec4550d4a901729cac1f3f108f28]
	I1104 12:12:05.869642   85759 ssh_runner.go:195] Run: which crictl
	I1104 12:12:05.873890   85759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:12:05.873954   85759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:12:05.914131   85759 cri.go:89] found id: "5b575c045ea6e47b4e5a1384b6b473570efefa6f645c9c75d1eedf037230bc06"
	I1104 12:12:05.914153   85759 cri.go:89] found id: ""
	I1104 12:12:05.914161   85759 logs.go:282] 1 containers: [5b575c045ea6e47b4e5a1384b6b473570efefa6f645c9c75d1eedf037230bc06]
	I1104 12:12:05.914216   85759 ssh_runner.go:195] Run: which crictl
	I1104 12:12:05.920980   85759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:12:05.921042   85759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:12:05.960930   85759 cri.go:89] found id: "d1f0c1ed5e8911dccb619576be93e1623936a2b05ccc1e80d52ffe8ba1292d27"
	I1104 12:12:05.960953   85759 cri.go:89] found id: ""
	I1104 12:12:05.960962   85759 logs.go:282] 1 containers: [d1f0c1ed5e8911dccb619576be93e1623936a2b05ccc1e80d52ffe8ba1292d27]
	I1104 12:12:05.961018   85759 ssh_runner.go:195] Run: which crictl
	I1104 12:12:05.965591   85759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:12:05.965653   85759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:12:06.000500   85759 cri.go:89] found id: "a5a0cb5f09f9920c57a05bcc9c16cdcab6806c42458b625d4d075a9d1bc3f80f"
	I1104 12:12:06.000520   85759 cri.go:89] found id: ""
	I1104 12:12:06.000526   85759 logs.go:282] 1 containers: [a5a0cb5f09f9920c57a05bcc9c16cdcab6806c42458b625d4d075a9d1bc3f80f]
	I1104 12:12:06.000576   85759 ssh_runner.go:195] Run: which crictl
	I1104 12:12:06.004775   85759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:12:06.004835   85759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:12:06.042011   85759 cri.go:89] found id: "512d8563ff2ef5d1f8021fa8815d00d2705c8df3b079a23ee6fc909af1c980f0"
	I1104 12:12:06.042032   85759 cri.go:89] found id: ""
	I1104 12:12:06.042041   85759 logs.go:282] 1 containers: [512d8563ff2ef5d1f8021fa8815d00d2705c8df3b079a23ee6fc909af1c980f0]
	I1104 12:12:06.042102   85759 ssh_runner.go:195] Run: which crictl
	I1104 12:12:06.047885   85759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:12:06.047952   85759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:12:06.084318   85759 cri.go:89] found id: "5751adaa2cf784f7619b281e11898b312ecc8186b36029e9e8a3b8e484cd703b"
	I1104 12:12:06.084341   85759 cri.go:89] found id: ""
	I1104 12:12:06.084349   85759 logs.go:282] 1 containers: [5751adaa2cf784f7619b281e11898b312ecc8186b36029e9e8a3b8e484cd703b]
	I1104 12:12:06.084410   85759 ssh_runner.go:195] Run: which crictl
	I1104 12:12:06.088487   85759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:12:06.088564   85759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:12:06.127693   85759 cri.go:89] found id: ""
	I1104 12:12:06.127721   85759 logs.go:282] 0 containers: []
	W1104 12:12:06.127730   85759 logs.go:284] No container was found matching "kindnet"
	I1104 12:12:06.127736   85759 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1104 12:12:06.127780   85759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1104 12:12:06.165173   85759 cri.go:89] found id: "95a9eb50a127a39e6276205bf440bcbc662ba25b24e029dc1a667f48f8481dde"
	I1104 12:12:06.165199   85759 cri.go:89] found id: "c7558f4e108716f1710271ea75be748ffd928b609788942a3658f0d2237ebcc7"
	I1104 12:12:06.165206   85759 cri.go:89] found id: ""
	I1104 12:12:06.165215   85759 logs.go:282] 2 containers: [95a9eb50a127a39e6276205bf440bcbc662ba25b24e029dc1a667f48f8481dde c7558f4e108716f1710271ea75be748ffd928b609788942a3658f0d2237ebcc7]
	I1104 12:12:06.165302   85759 ssh_runner.go:195] Run: which crictl
	I1104 12:12:06.169479   85759 ssh_runner.go:195] Run: which crictl
	I1104 12:12:06.173154   85759 logs.go:123] Gathering logs for kubelet ...
	I1104 12:12:06.173177   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:12:02.746303   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:12:02.758892   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:12:02.758967   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:12:02.792775   86402 cri.go:89] found id: ""
	I1104 12:12:02.792803   86402 logs.go:282] 0 containers: []
	W1104 12:12:02.792815   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:12:02.792822   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:12:02.792878   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:12:02.831073   86402 cri.go:89] found id: ""
	I1104 12:12:02.831097   86402 logs.go:282] 0 containers: []
	W1104 12:12:02.831108   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:12:02.831115   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:12:02.831174   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:12:02.863530   86402 cri.go:89] found id: ""
	I1104 12:12:02.863557   86402 logs.go:282] 0 containers: []
	W1104 12:12:02.863568   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:12:02.863574   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:12:02.863641   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:12:02.894894   86402 cri.go:89] found id: ""
	I1104 12:12:02.894924   86402 logs.go:282] 0 containers: []
	W1104 12:12:02.894934   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:12:02.894942   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:12:02.894996   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:12:02.930052   86402 cri.go:89] found id: ""
	I1104 12:12:02.930081   86402 logs.go:282] 0 containers: []
	W1104 12:12:02.930092   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:12:02.930100   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:12:02.930160   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:12:02.964503   86402 cri.go:89] found id: ""
	I1104 12:12:02.964532   86402 logs.go:282] 0 containers: []
	W1104 12:12:02.964544   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:12:02.964551   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:12:02.964610   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:12:02.998065   86402 cri.go:89] found id: ""
	I1104 12:12:02.998088   86402 logs.go:282] 0 containers: []
	W1104 12:12:02.998096   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:12:02.998102   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:12:02.998148   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:12:03.033579   86402 cri.go:89] found id: ""
	I1104 12:12:03.033604   86402 logs.go:282] 0 containers: []
	W1104 12:12:03.033613   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:12:03.033621   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:12:03.033630   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:12:03.086215   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:12:03.086249   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:12:03.100100   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:12:03.100136   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:12:03.168116   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:12:03.168150   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:12:03.168165   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:12:03.253608   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:12:03.253642   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:12:05.792913   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:12:05.806494   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:12:05.806568   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:12:05.854379   86402 cri.go:89] found id: ""
	I1104 12:12:05.854406   86402 logs.go:282] 0 containers: []
	W1104 12:12:05.854417   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:12:05.854425   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:12:05.854503   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:12:05.886144   86402 cri.go:89] found id: ""
	I1104 12:12:05.886169   86402 logs.go:282] 0 containers: []
	W1104 12:12:05.886179   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:12:05.886186   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:12:05.886248   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:12:05.917462   86402 cri.go:89] found id: ""
	I1104 12:12:05.917482   86402 logs.go:282] 0 containers: []
	W1104 12:12:05.917492   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:12:05.917499   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:12:05.917550   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:12:05.954065   86402 cri.go:89] found id: ""
	I1104 12:12:05.954099   86402 logs.go:282] 0 containers: []
	W1104 12:12:05.954110   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:12:05.954120   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:12:05.954194   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:12:05.990935   86402 cri.go:89] found id: ""
	I1104 12:12:05.990966   86402 logs.go:282] 0 containers: []
	W1104 12:12:05.990977   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:12:05.990984   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:12:05.991050   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:12:06.032175   86402 cri.go:89] found id: ""
	I1104 12:12:06.032198   86402 logs.go:282] 0 containers: []
	W1104 12:12:06.032206   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:12:06.032211   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:12:06.032269   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:12:06.069215   86402 cri.go:89] found id: ""
	I1104 12:12:06.069262   86402 logs.go:282] 0 containers: []
	W1104 12:12:06.069275   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:12:06.069282   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:12:06.069340   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:12:06.103065   86402 cri.go:89] found id: ""
	I1104 12:12:06.103106   86402 logs.go:282] 0 containers: []
	W1104 12:12:06.103117   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:12:06.103127   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:12:06.103145   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:12:06.184111   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:12:06.184135   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:12:06.184149   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:12:06.272720   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:12:06.272760   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:12:06.315596   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:12:06.315636   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:12:06.376054   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:12:06.376110   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:12:06.214237   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:08.707098   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:07.056531   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:09.056763   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:06.252295   85759 logs.go:123] Gathering logs for kube-apiserver [6e7999c6e5a24f7d8706b6722e1fe66996ecec4550d4a901729cac1f3f108f28] ...
	I1104 12:12:06.252326   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6e7999c6e5a24f7d8706b6722e1fe66996ecec4550d4a901729cac1f3f108f28"
	I1104 12:12:06.302739   85759 logs.go:123] Gathering logs for etcd [5b575c045ea6e47b4e5a1384b6b473570efefa6f645c9c75d1eedf037230bc06] ...
	I1104 12:12:06.302769   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b575c045ea6e47b4e5a1384b6b473570efefa6f645c9c75d1eedf037230bc06"
	I1104 12:12:06.361279   85759 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:12:06.361307   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:12:06.811335   85759 logs.go:123] Gathering logs for container status ...
	I1104 12:12:06.811380   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:12:06.851356   85759 logs.go:123] Gathering logs for storage-provisioner [95a9eb50a127a39e6276205bf440bcbc662ba25b24e029dc1a667f48f8481dde] ...
	I1104 12:12:06.851387   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 95a9eb50a127a39e6276205bf440bcbc662ba25b24e029dc1a667f48f8481dde"
	I1104 12:12:06.888753   85759 logs.go:123] Gathering logs for storage-provisioner [c7558f4e108716f1710271ea75be748ffd928b609788942a3658f0d2237ebcc7] ...
	I1104 12:12:06.888789   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c7558f4e108716f1710271ea75be748ffd928b609788942a3658f0d2237ebcc7"
	I1104 12:12:06.922406   85759 logs.go:123] Gathering logs for dmesg ...
	I1104 12:12:06.922438   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:12:06.935028   85759 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:12:06.935057   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1104 12:12:07.033975   85759 logs.go:123] Gathering logs for coredns [d1f0c1ed5e8911dccb619576be93e1623936a2b05ccc1e80d52ffe8ba1292d27] ...
	I1104 12:12:07.034019   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d1f0c1ed5e8911dccb619576be93e1623936a2b05ccc1e80d52ffe8ba1292d27"
	I1104 12:12:07.068680   85759 logs.go:123] Gathering logs for kube-scheduler [a5a0cb5f09f9920c57a05bcc9c16cdcab6806c42458b625d4d075a9d1bc3f80f] ...
	I1104 12:12:07.068706   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a5a0cb5f09f9920c57a05bcc9c16cdcab6806c42458b625d4d075a9d1bc3f80f"
	I1104 12:12:07.105150   85759 logs.go:123] Gathering logs for kube-proxy [512d8563ff2ef5d1f8021fa8815d00d2705c8df3b079a23ee6fc909af1c980f0] ...
	I1104 12:12:07.105182   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 512d8563ff2ef5d1f8021fa8815d00d2705c8df3b079a23ee6fc909af1c980f0"
	I1104 12:12:07.139258   85759 logs.go:123] Gathering logs for kube-controller-manager [5751adaa2cf784f7619b281e11898b312ecc8186b36029e9e8a3b8e484cd703b] ...
	I1104 12:12:07.139290   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5751adaa2cf784f7619b281e11898b312ecc8186b36029e9e8a3b8e484cd703b"
	I1104 12:12:09.695630   85759 api_server.go:253] Checking apiserver healthz at https://192.168.39.47:8443/healthz ...
	I1104 12:12:09.701156   85759 api_server.go:279] https://192.168.39.47:8443/healthz returned 200:
	ok
	I1104 12:12:09.702414   85759 api_server.go:141] control plane version: v1.31.2
	I1104 12:12:09.702441   85759 api_server.go:131] duration metric: took 3.874707524s to wait for apiserver health ...
	I1104 12:12:09.702451   85759 system_pods.go:43] waiting for kube-system pods to appear ...
	I1104 12:12:09.702475   85759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:12:09.702530   85759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:12:09.736867   85759 cri.go:89] found id: "6e7999c6e5a24f7d8706b6722e1fe66996ecec4550d4a901729cac1f3f108f28"
	I1104 12:12:09.736891   85759 cri.go:89] found id: ""
	I1104 12:12:09.736901   85759 logs.go:282] 1 containers: [6e7999c6e5a24f7d8706b6722e1fe66996ecec4550d4a901729cac1f3f108f28]
	I1104 12:12:09.736956   85759 ssh_runner.go:195] Run: which crictl
	I1104 12:12:09.741108   85759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:12:09.741176   85759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:12:09.780460   85759 cri.go:89] found id: "5b575c045ea6e47b4e5a1384b6b473570efefa6f645c9c75d1eedf037230bc06"
	I1104 12:12:09.780483   85759 cri.go:89] found id: ""
	I1104 12:12:09.780490   85759 logs.go:282] 1 containers: [5b575c045ea6e47b4e5a1384b6b473570efefa6f645c9c75d1eedf037230bc06]
	I1104 12:12:09.780535   85759 ssh_runner.go:195] Run: which crictl
	I1104 12:12:09.784698   85759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:12:09.784756   85759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:12:09.823042   85759 cri.go:89] found id: "d1f0c1ed5e8911dccb619576be93e1623936a2b05ccc1e80d52ffe8ba1292d27"
	I1104 12:12:09.823059   85759 cri.go:89] found id: ""
	I1104 12:12:09.823068   85759 logs.go:282] 1 containers: [d1f0c1ed5e8911dccb619576be93e1623936a2b05ccc1e80d52ffe8ba1292d27]
	I1104 12:12:09.823123   85759 ssh_runner.go:195] Run: which crictl
	I1104 12:12:09.826750   85759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:12:09.826803   85759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:12:09.859148   85759 cri.go:89] found id: "a5a0cb5f09f9920c57a05bcc9c16cdcab6806c42458b625d4d075a9d1bc3f80f"
	I1104 12:12:09.859175   85759 cri.go:89] found id: ""
	I1104 12:12:09.859185   85759 logs.go:282] 1 containers: [a5a0cb5f09f9920c57a05bcc9c16cdcab6806c42458b625d4d075a9d1bc3f80f]
	I1104 12:12:09.859245   85759 ssh_runner.go:195] Run: which crictl
	I1104 12:12:09.863676   85759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:12:09.863739   85759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:12:09.901737   85759 cri.go:89] found id: "512d8563ff2ef5d1f8021fa8815d00d2705c8df3b079a23ee6fc909af1c980f0"
	I1104 12:12:09.901766   85759 cri.go:89] found id: ""
	I1104 12:12:09.901783   85759 logs.go:282] 1 containers: [512d8563ff2ef5d1f8021fa8815d00d2705c8df3b079a23ee6fc909af1c980f0]
	I1104 12:12:09.901843   85759 ssh_runner.go:195] Run: which crictl
	I1104 12:12:09.905931   85759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:12:09.905993   85759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:12:09.942617   85759 cri.go:89] found id: "5751adaa2cf784f7619b281e11898b312ecc8186b36029e9e8a3b8e484cd703b"
	I1104 12:12:09.942637   85759 cri.go:89] found id: ""
	I1104 12:12:09.942644   85759 logs.go:282] 1 containers: [5751adaa2cf784f7619b281e11898b312ecc8186b36029e9e8a3b8e484cd703b]
	I1104 12:12:09.942687   85759 ssh_runner.go:195] Run: which crictl
	I1104 12:12:09.946420   85759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:12:09.946481   85759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:12:09.984891   85759 cri.go:89] found id: ""
	I1104 12:12:09.984921   85759 logs.go:282] 0 containers: []
	W1104 12:12:09.984932   85759 logs.go:284] No container was found matching "kindnet"
	I1104 12:12:09.984939   85759 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1104 12:12:09.985000   85759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1104 12:12:10.018332   85759 cri.go:89] found id: "95a9eb50a127a39e6276205bf440bcbc662ba25b24e029dc1a667f48f8481dde"
	I1104 12:12:10.018357   85759 cri.go:89] found id: "c7558f4e108716f1710271ea75be748ffd928b609788942a3658f0d2237ebcc7"
	I1104 12:12:10.018363   85759 cri.go:89] found id: ""
	I1104 12:12:10.018374   85759 logs.go:282] 2 containers: [95a9eb50a127a39e6276205bf440bcbc662ba25b24e029dc1a667f48f8481dde c7558f4e108716f1710271ea75be748ffd928b609788942a3658f0d2237ebcc7]
	I1104 12:12:10.018434   85759 ssh_runner.go:195] Run: which crictl
	I1104 12:12:10.022995   85759 ssh_runner.go:195] Run: which crictl
	I1104 12:12:10.026853   85759 logs.go:123] Gathering logs for etcd [5b575c045ea6e47b4e5a1384b6b473570efefa6f645c9c75d1eedf037230bc06] ...
	I1104 12:12:10.026878   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b575c045ea6e47b4e5a1384b6b473570efefa6f645c9c75d1eedf037230bc06"
	I1104 12:12:10.083384   85759 logs.go:123] Gathering logs for kube-controller-manager [5751adaa2cf784f7619b281e11898b312ecc8186b36029e9e8a3b8e484cd703b] ...
	I1104 12:12:10.083421   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5751adaa2cf784f7619b281e11898b312ecc8186b36029e9e8a3b8e484cd703b"
	I1104 12:12:10.136576   85759 logs.go:123] Gathering logs for storage-provisioner [95a9eb50a127a39e6276205bf440bcbc662ba25b24e029dc1a667f48f8481dde] ...
	I1104 12:12:10.136608   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 95a9eb50a127a39e6276205bf440bcbc662ba25b24e029dc1a667f48f8481dde"
	I1104 12:12:10.182808   85759 logs.go:123] Gathering logs for storage-provisioner [c7558f4e108716f1710271ea75be748ffd928b609788942a3658f0d2237ebcc7] ...
	I1104 12:12:10.182837   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c7558f4e108716f1710271ea75be748ffd928b609788942a3658f0d2237ebcc7"
	I1104 12:12:10.217017   85759 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:12:10.217047   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:12:10.598972   85759 logs.go:123] Gathering logs for container status ...
	I1104 12:12:10.599010   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:12:10.638827   85759 logs.go:123] Gathering logs for dmesg ...
	I1104 12:12:10.638868   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:12:10.652880   85759 logs.go:123] Gathering logs for kube-apiserver [6e7999c6e5a24f7d8706b6722e1fe66996ecec4550d4a901729cac1f3f108f28] ...
	I1104 12:12:10.652923   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6e7999c6e5a24f7d8706b6722e1fe66996ecec4550d4a901729cac1f3f108f28"
	I1104 12:12:10.700645   85759 logs.go:123] Gathering logs for coredns [d1f0c1ed5e8911dccb619576be93e1623936a2b05ccc1e80d52ffe8ba1292d27] ...
	I1104 12:12:10.700675   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d1f0c1ed5e8911dccb619576be93e1623936a2b05ccc1e80d52ffe8ba1292d27"
	I1104 12:12:10.734860   85759 logs.go:123] Gathering logs for kube-scheduler [a5a0cb5f09f9920c57a05bcc9c16cdcab6806c42458b625d4d075a9d1bc3f80f] ...
	I1104 12:12:10.734890   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a5a0cb5f09f9920c57a05bcc9c16cdcab6806c42458b625d4d075a9d1bc3f80f"
	I1104 12:12:10.774613   85759 logs.go:123] Gathering logs for kube-proxy [512d8563ff2ef5d1f8021fa8815d00d2705c8df3b079a23ee6fc909af1c980f0] ...
	I1104 12:12:10.774647   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 512d8563ff2ef5d1f8021fa8815d00d2705c8df3b079a23ee6fc909af1c980f0"
	I1104 12:12:10.808375   85759 logs.go:123] Gathering logs for kubelet ...
	I1104 12:12:10.808403   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:12:10.876130   85759 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:12:10.876165   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1104 12:12:08.890463   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:12:08.904272   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:12:08.904354   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:12:08.935677   86402 cri.go:89] found id: ""
	I1104 12:12:08.935701   86402 logs.go:282] 0 containers: []
	W1104 12:12:08.935710   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:12:08.935715   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:12:08.935761   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:12:08.966969   86402 cri.go:89] found id: ""
	I1104 12:12:08.966993   86402 logs.go:282] 0 containers: []
	W1104 12:12:08.967004   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:12:08.967011   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:12:08.967072   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:12:08.998753   86402 cri.go:89] found id: ""
	I1104 12:12:08.998778   86402 logs.go:282] 0 containers: []
	W1104 12:12:08.998786   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:12:08.998790   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:12:08.998852   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:12:09.031901   86402 cri.go:89] found id: ""
	I1104 12:12:09.031925   86402 logs.go:282] 0 containers: []
	W1104 12:12:09.031934   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:12:09.031940   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:12:09.032000   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:12:09.071478   86402 cri.go:89] found id: ""
	I1104 12:12:09.071500   86402 logs.go:282] 0 containers: []
	W1104 12:12:09.071508   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:12:09.071513   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:12:09.071564   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:12:09.107593   86402 cri.go:89] found id: ""
	I1104 12:12:09.107621   86402 logs.go:282] 0 containers: []
	W1104 12:12:09.107629   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:12:09.107635   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:12:09.107693   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:12:09.140899   86402 cri.go:89] found id: ""
	I1104 12:12:09.140923   86402 logs.go:282] 0 containers: []
	W1104 12:12:09.140934   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:12:09.140942   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:12:09.141000   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:12:09.174279   86402 cri.go:89] found id: ""
	I1104 12:12:09.174307   86402 logs.go:282] 0 containers: []
	W1104 12:12:09.174318   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:12:09.174330   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:12:09.174405   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:12:09.226340   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:12:09.226371   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:12:09.239573   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:12:09.239600   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:12:09.306180   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:12:09.306201   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:12:09.306212   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:12:09.385039   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:12:09.385072   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:12:13.475909   85759 system_pods.go:59] 8 kube-system pods found
	I1104 12:12:13.475946   85759 system_pods.go:61] "coredns-7c65d6cfc9-mf8xg" [c0162005-7971-4161-9575-9f36c13d54f2] Running
	I1104 12:12:13.475954   85759 system_pods.go:61] "etcd-embed-certs-325116" [4cfeeefb-d7e4-48b6-bea0-e9d967750770] Running
	I1104 12:12:13.475960   85759 system_pods.go:61] "kube-apiserver-embed-certs-325116" [69ad8209-af11-4479-b4f7-9991f98d74b9] Running
	I1104 12:12:13.475965   85759 system_pods.go:61] "kube-controller-manager-embed-certs-325116" [1ba1fbaf-e1e2-4ca7-aef5-84c4410143c4] Running
	I1104 12:12:13.475970   85759 system_pods.go:61] "kube-proxy-phzgx" [4ea64f2c-7568-486d-9941-f89ed4221f35] Running
	I1104 12:12:13.475975   85759 system_pods.go:61] "kube-scheduler-embed-certs-325116" [168359e4-eda1-4fb6-ab45-03e888466702] Running
	I1104 12:12:13.475985   85759 system_pods.go:61] "metrics-server-6867b74b74-knfd4" [5b3ef856-5b69-44b1-ae29-4a58bf235e41] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1104 12:12:13.475994   85759 system_pods.go:61] "storage-provisioner" [0dabcf5a-028b-4ab6-8af4-be25abaeb9b5] Running
	I1104 12:12:13.476008   85759 system_pods.go:74] duration metric: took 3.773548162s to wait for pod list to return data ...
	I1104 12:12:13.476020   85759 default_sa.go:34] waiting for default service account to be created ...
	I1104 12:12:13.478598   85759 default_sa.go:45] found service account: "default"
	I1104 12:12:13.478618   85759 default_sa.go:55] duration metric: took 2.591186ms for default service account to be created ...
	I1104 12:12:13.478628   85759 system_pods.go:116] waiting for k8s-apps to be running ...
	I1104 12:12:13.483285   85759 system_pods.go:86] 8 kube-system pods found
	I1104 12:12:13.483308   85759 system_pods.go:89] "coredns-7c65d6cfc9-mf8xg" [c0162005-7971-4161-9575-9f36c13d54f2] Running
	I1104 12:12:13.483314   85759 system_pods.go:89] "etcd-embed-certs-325116" [4cfeeefb-d7e4-48b6-bea0-e9d967750770] Running
	I1104 12:12:13.483318   85759 system_pods.go:89] "kube-apiserver-embed-certs-325116" [69ad8209-af11-4479-b4f7-9991f98d74b9] Running
	I1104 12:12:13.483322   85759 system_pods.go:89] "kube-controller-manager-embed-certs-325116" [1ba1fbaf-e1e2-4ca7-aef5-84c4410143c4] Running
	I1104 12:12:13.483325   85759 system_pods.go:89] "kube-proxy-phzgx" [4ea64f2c-7568-486d-9941-f89ed4221f35] Running
	I1104 12:12:13.483329   85759 system_pods.go:89] "kube-scheduler-embed-certs-325116" [168359e4-eda1-4fb6-ab45-03e888466702] Running
	I1104 12:12:13.483336   85759 system_pods.go:89] "metrics-server-6867b74b74-knfd4" [5b3ef856-5b69-44b1-ae29-4a58bf235e41] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1104 12:12:13.483340   85759 system_pods.go:89] "storage-provisioner" [0dabcf5a-028b-4ab6-8af4-be25abaeb9b5] Running
	I1104 12:12:13.483347   85759 system_pods.go:126] duration metric: took 4.713256ms to wait for k8s-apps to be running ...
	I1104 12:12:13.483355   85759 system_svc.go:44] waiting for kubelet service to be running ....
	I1104 12:12:13.483398   85759 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1104 12:12:13.497748   85759 system_svc.go:56] duration metric: took 14.381722ms WaitForService to wait for kubelet
	I1104 12:12:13.497812   85759 kubeadm.go:582] duration metric: took 4m23.411218278s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1104 12:12:13.497843   85759 node_conditions.go:102] verifying NodePressure condition ...
	I1104 12:12:13.500813   85759 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1104 12:12:13.500833   85759 node_conditions.go:123] node cpu capacity is 2
	I1104 12:12:13.500843   85759 node_conditions.go:105] duration metric: took 2.993771ms to run NodePressure ...
	I1104 12:12:13.500854   85759 start.go:241] waiting for startup goroutines ...
	I1104 12:12:13.500860   85759 start.go:246] waiting for cluster config update ...
	I1104 12:12:13.500870   85759 start.go:255] writing updated cluster config ...
	I1104 12:12:13.501122   85759 ssh_runner.go:195] Run: rm -f paused
	I1104 12:12:13.548293   85759 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1104 12:12:13.550203   85759 out.go:177] * Done! kubectl is now configured to use "embed-certs-325116" cluster and "default" namespace by default
	I1104 12:12:10.707746   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:12.708477   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:11.555266   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:13.555498   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:11.924105   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:12:11.936623   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:12:11.936685   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:12:11.968026   86402 cri.go:89] found id: ""
	I1104 12:12:11.968056   86402 logs.go:282] 0 containers: []
	W1104 12:12:11.968067   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:12:11.968074   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:12:11.968139   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:12:12.001193   86402 cri.go:89] found id: ""
	I1104 12:12:12.001218   86402 logs.go:282] 0 containers: []
	W1104 12:12:12.001245   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:12:12.001252   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:12:12.001311   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:12:12.035167   86402 cri.go:89] found id: ""
	I1104 12:12:12.035190   86402 logs.go:282] 0 containers: []
	W1104 12:12:12.035199   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:12:12.035204   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:12:12.035250   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:12:12.068412   86402 cri.go:89] found id: ""
	I1104 12:12:12.068440   86402 logs.go:282] 0 containers: []
	W1104 12:12:12.068450   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:12:12.068458   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:12:12.068515   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:12:12.099965   86402 cri.go:89] found id: ""
	I1104 12:12:12.099991   86402 logs.go:282] 0 containers: []
	W1104 12:12:12.100002   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:12:12.100009   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:12:12.100066   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:12:12.133413   86402 cri.go:89] found id: ""
	I1104 12:12:12.133442   86402 logs.go:282] 0 containers: []
	W1104 12:12:12.133453   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:12:12.133460   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:12:12.133520   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:12:12.169007   86402 cri.go:89] found id: ""
	I1104 12:12:12.169036   86402 logs.go:282] 0 containers: []
	W1104 12:12:12.169046   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:12:12.169053   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:12:12.169112   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:12:12.200592   86402 cri.go:89] found id: ""
	I1104 12:12:12.200621   86402 logs.go:282] 0 containers: []
	W1104 12:12:12.200635   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:12:12.200643   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:12:12.200657   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:12:12.244609   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:12:12.244644   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:12:12.299770   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:12:12.299804   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:12:12.324354   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:12:12.324395   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:12:12.385605   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:12:12.385632   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:12:12.385661   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
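The repeated "connection refused" on localhost:8443 while gathering "describe nodes" is consistent with the empty crictl results: nothing is serving on the apiserver port. A quick manual confirmation from the node, assuming the default 8443 bind and that ss and curl are available in the guest image:

    # Check whether anything is bound to the apiserver port.
    sudo ss -ltnp | grep 8443 || echo "nothing listening on 8443"
    # Ask the healthz endpoint directly (-k: the serving certificate is self-signed).
    curl -k https://localhost:8443/healthz || echo "apiserver not reachable"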
	I1104 12:12:14.964867   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:12:14.977918   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:12:14.977991   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:12:15.012865   86402 cri.go:89] found id: ""
	I1104 12:12:15.012894   86402 logs.go:282] 0 containers: []
	W1104 12:12:15.012906   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:12:15.012913   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:12:15.012977   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:12:15.046548   86402 cri.go:89] found id: ""
	I1104 12:12:15.046574   86402 logs.go:282] 0 containers: []
	W1104 12:12:15.046583   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:12:15.046589   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:12:15.046636   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:12:15.079310   86402 cri.go:89] found id: ""
	I1104 12:12:15.079336   86402 logs.go:282] 0 containers: []
	W1104 12:12:15.079347   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:12:15.079353   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:12:15.079412   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:12:15.110595   86402 cri.go:89] found id: ""
	I1104 12:12:15.110625   86402 logs.go:282] 0 containers: []
	W1104 12:12:15.110636   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:12:15.110648   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:12:15.110716   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:12:15.143362   86402 cri.go:89] found id: ""
	I1104 12:12:15.143391   86402 logs.go:282] 0 containers: []
	W1104 12:12:15.143403   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:12:15.143410   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:12:15.143533   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:12:15.173973   86402 cri.go:89] found id: ""
	I1104 12:12:15.174000   86402 logs.go:282] 0 containers: []
	W1104 12:12:15.174009   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:12:15.174017   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:12:15.174081   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:12:15.205021   86402 cri.go:89] found id: ""
	I1104 12:12:15.205049   86402 logs.go:282] 0 containers: []
	W1104 12:12:15.205060   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:12:15.205067   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:12:15.205113   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:12:15.240190   86402 cri.go:89] found id: ""
	I1104 12:12:15.240220   86402 logs.go:282] 0 containers: []
	W1104 12:12:15.240231   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:12:15.240249   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:12:15.240263   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:12:15.290208   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:12:15.290241   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:12:15.305216   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:12:15.305258   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:12:15.375713   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:12:15.375735   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:12:15.375746   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:12:15.456517   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:12:15.456552   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:12:15.209380   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:17.708299   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:16.056359   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:18.556166   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:20.050834   86301 pod_ready.go:82] duration metric: took 4m0.001048639s for pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace to be "Ready" ...
	E1104 12:12:20.050863   86301 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1104 12:12:20.050874   86301 pod_ready.go:39] duration metric: took 4m5.585310983s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
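The wait above gives up after its full 4-minute budget because metrics-server never reports Ready. Outside the test harness an equivalent check could be made with kubectl directly; the k8s-app=metrics-server label is the usual one for the addon and the timeout value here is only illustrative:

    # Wait for the metrics-server pod to become Ready; if the wait times out,
    # dump its events to see why it is stuck (image pull, failing probes, ...).
    kubectl -n kube-system wait --for=condition=Ready \
      pod -l k8s-app=metrics-server --timeout=4m \
      || kubectl -n kube-system describe pod -l k8s-app=metrics-server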
	I1104 12:12:20.050889   86301 api_server.go:52] waiting for apiserver process to appear ...
	I1104 12:12:20.050919   86301 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:12:20.050968   86301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:12:20.088440   86301 cri.go:89] found id: "2e1787441f88b7be6fbfb08abfa621dbc26e984bbd40544c92bf02bae7c7709a"
	I1104 12:12:20.088466   86301 cri.go:89] found id: ""
	I1104 12:12:20.088476   86301 logs.go:282] 1 containers: [2e1787441f88b7be6fbfb08abfa621dbc26e984bbd40544c92bf02bae7c7709a]
	I1104 12:12:20.088523   86301 ssh_runner.go:195] Run: which crictl
	I1104 12:12:20.092502   86301 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:12:20.092575   86301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:12:20.126599   86301 cri.go:89] found id: "1bc906f9e4e940157a2bb10397ebc3ba90f93123792d5de597633f6c5b3c64d7"
	I1104 12:12:20.126621   86301 cri.go:89] found id: ""
	I1104 12:12:20.126629   86301 logs.go:282] 1 containers: [1bc906f9e4e940157a2bb10397ebc3ba90f93123792d5de597633f6c5b3c64d7]
	I1104 12:12:20.126687   86301 ssh_runner.go:195] Run: which crictl
	I1104 12:12:20.130617   86301 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:12:20.130686   86301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:12:20.169664   86301 cri.go:89] found id: "51442200af1bb501b83cd7280eeee93590e5cb241e91cf5af1608a6eccfdf5a1"
	I1104 12:12:20.169687   86301 cri.go:89] found id: ""
	I1104 12:12:20.169696   86301 logs.go:282] 1 containers: [51442200af1bb501b83cd7280eeee93590e5cb241e91cf5af1608a6eccfdf5a1]
	I1104 12:12:20.169750   86301 ssh_runner.go:195] Run: which crictl
	I1104 12:12:20.173881   86301 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:12:20.173920   86301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:12:20.209271   86301 cri.go:89] found id: "c33ea99d25624abd85dcac43e17e1dd6dcea3a7b6333e91e8dc99ad02c037e07"
	I1104 12:12:20.209292   86301 cri.go:89] found id: ""
	I1104 12:12:20.209299   86301 logs.go:282] 1 containers: [c33ea99d25624abd85dcac43e17e1dd6dcea3a7b6333e91e8dc99ad02c037e07]
	I1104 12:12:20.209354   86301 ssh_runner.go:195] Run: which crictl
	I1104 12:12:20.214187   86301 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:12:20.214254   86301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:12:20.248683   86301 cri.go:89] found id: "9e60ae78d5610ae63ce7016b58d60da05791a055324eea67efd0feb374bdd4b4"
	I1104 12:12:20.248702   86301 cri.go:89] found id: ""
	I1104 12:12:20.248709   86301 logs.go:282] 1 containers: [9e60ae78d5610ae63ce7016b58d60da05791a055324eea67efd0feb374bdd4b4]
	I1104 12:12:20.248757   86301 ssh_runner.go:195] Run: which crictl
	I1104 12:12:20.252501   86301 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:12:20.252574   86301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:12:20.286367   86301 cri.go:89] found id: "1346cefb5059440d0c41bba9a5526748e6c783b8f935f34dcf419f728abcd35e"
	I1104 12:12:20.286406   86301 cri.go:89] found id: ""
	I1104 12:12:20.286415   86301 logs.go:282] 1 containers: [1346cefb5059440d0c41bba9a5526748e6c783b8f935f34dcf419f728abcd35e]
	I1104 12:12:20.286491   86301 ssh_runner.go:195] Run: which crictl
	I1104 12:12:17.992855   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:12:18.011370   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:12:18.011446   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:12:18.054937   86402 cri.go:89] found id: ""
	I1104 12:12:18.054961   86402 logs.go:282] 0 containers: []
	W1104 12:12:18.054968   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:12:18.054974   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:12:18.055026   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:12:18.107769   86402 cri.go:89] found id: ""
	I1104 12:12:18.107802   86402 logs.go:282] 0 containers: []
	W1104 12:12:18.107814   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:12:18.107821   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:12:18.107887   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:12:18.141932   86402 cri.go:89] found id: ""
	I1104 12:12:18.141959   86402 logs.go:282] 0 containers: []
	W1104 12:12:18.141968   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:12:18.141974   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:12:18.142021   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:12:18.174322   86402 cri.go:89] found id: ""
	I1104 12:12:18.174345   86402 logs.go:282] 0 containers: []
	W1104 12:12:18.174353   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:12:18.174361   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:12:18.174514   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:12:18.206742   86402 cri.go:89] found id: ""
	I1104 12:12:18.206766   86402 logs.go:282] 0 containers: []
	W1104 12:12:18.206776   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:12:18.206782   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:12:18.206840   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:12:18.240322   86402 cri.go:89] found id: ""
	I1104 12:12:18.240345   86402 logs.go:282] 0 containers: []
	W1104 12:12:18.240358   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:12:18.240363   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:12:18.240420   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:12:18.272081   86402 cri.go:89] found id: ""
	I1104 12:12:18.272110   86402 logs.go:282] 0 containers: []
	W1104 12:12:18.272121   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:12:18.272128   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:12:18.272211   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:12:18.308604   86402 cri.go:89] found id: ""
	I1104 12:12:18.308629   86402 logs.go:282] 0 containers: []
	W1104 12:12:18.308637   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:12:18.308646   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:12:18.308655   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:12:18.392854   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:12:18.392892   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:12:18.429632   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:12:18.429665   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:12:18.481082   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:12:18.481120   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:12:18.494730   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:12:18.494758   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:12:18.562098   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:12:21.063223   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:12:21.075655   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:12:21.075714   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:12:21.117762   86402 cri.go:89] found id: ""
	I1104 12:12:21.117794   86402 logs.go:282] 0 containers: []
	W1104 12:12:21.117807   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:12:21.117817   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:12:21.117881   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:12:21.153256   86402 cri.go:89] found id: ""
	I1104 12:12:21.153281   86402 logs.go:282] 0 containers: []
	W1104 12:12:21.153289   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:12:21.153295   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:12:21.153355   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:12:21.191477   86402 cri.go:89] found id: ""
	I1104 12:12:21.191519   86402 logs.go:282] 0 containers: []
	W1104 12:12:21.191539   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:12:21.191547   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:12:21.191618   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:12:21.228378   86402 cri.go:89] found id: ""
	I1104 12:12:21.228411   86402 logs.go:282] 0 containers: []
	W1104 12:12:21.228424   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:12:21.228431   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:12:21.228495   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:12:21.265452   86402 cri.go:89] found id: ""
	I1104 12:12:21.265483   86402 logs.go:282] 0 containers: []
	W1104 12:12:21.265493   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:12:21.265501   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:12:21.265561   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:12:21.301073   86402 cri.go:89] found id: ""
	I1104 12:12:21.301099   86402 logs.go:282] 0 containers: []
	W1104 12:12:21.301108   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:12:21.301114   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:12:21.301182   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:12:21.337952   86402 cri.go:89] found id: ""
	I1104 12:12:21.337977   86402 logs.go:282] 0 containers: []
	W1104 12:12:21.337986   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:12:21.337996   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:12:21.338053   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:12:21.371895   86402 cri.go:89] found id: ""
	I1104 12:12:21.371920   86402 logs.go:282] 0 containers: []
	W1104 12:12:21.371929   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:12:21.371937   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:12:21.371950   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:12:21.429757   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:12:21.429789   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:12:21.444365   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:12:21.444418   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:12:21.510971   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:12:21.510990   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:12:21.511002   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:12:21.593605   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:12:21.593639   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:12:20.208004   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:22.706901   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:24.708795   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:20.290832   86301 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:12:20.290885   86301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:12:20.324359   86301 cri.go:89] found id: ""
	I1104 12:12:20.324383   86301 logs.go:282] 0 containers: []
	W1104 12:12:20.324391   86301 logs.go:284] No container was found matching "kindnet"
	I1104 12:12:20.324397   86301 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1104 12:12:20.324442   86301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1104 12:12:20.364466   86301 cri.go:89] found id: "9e9ecf7280a07a47b274097b461b9f3467e388751471e9da1adfae3166380516"
	I1104 12:12:20.364488   86301 cri.go:89] found id: "f8d8096ede6a877ac87491c90844f6cccb8fad4309f4a01e86bf1f7e3a4c9823"
	I1104 12:12:20.364492   86301 cri.go:89] found id: ""
	I1104 12:12:20.364500   86301 logs.go:282] 2 containers: [9e9ecf7280a07a47b274097b461b9f3467e388751471e9da1adfae3166380516 f8d8096ede6a877ac87491c90844f6cccb8fad4309f4a01e86bf1f7e3a4c9823]
	I1104 12:12:20.364557   86301 ssh_runner.go:195] Run: which crictl
	I1104 12:12:20.368440   86301 ssh_runner.go:195] Run: which crictl
	I1104 12:12:20.371967   86301 logs.go:123] Gathering logs for kube-scheduler [c33ea99d25624abd85dcac43e17e1dd6dcea3a7b6333e91e8dc99ad02c037e07] ...
	I1104 12:12:20.371991   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c33ea99d25624abd85dcac43e17e1dd6dcea3a7b6333e91e8dc99ad02c037e07"
	I1104 12:12:20.405547   86301 logs.go:123] Gathering logs for kube-proxy [9e60ae78d5610ae63ce7016b58d60da05791a055324eea67efd0feb374bdd4b4] ...
	I1104 12:12:20.405572   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e60ae78d5610ae63ce7016b58d60da05791a055324eea67efd0feb374bdd4b4"
	I1104 12:12:20.446936   86301 logs.go:123] Gathering logs for storage-provisioner [9e9ecf7280a07a47b274097b461b9f3467e388751471e9da1adfae3166380516] ...
	I1104 12:12:20.446962   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e9ecf7280a07a47b274097b461b9f3467e388751471e9da1adfae3166380516"
	I1104 12:12:20.485811   86301 logs.go:123] Gathering logs for container status ...
	I1104 12:12:20.485838   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:12:20.530775   86301 logs.go:123] Gathering logs for kubelet ...
	I1104 12:12:20.530803   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:12:20.599495   86301 logs.go:123] Gathering logs for dmesg ...
	I1104 12:12:20.599542   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:12:20.614511   86301 logs.go:123] Gathering logs for kube-apiserver [2e1787441f88b7be6fbfb08abfa621dbc26e984bbd40544c92bf02bae7c7709a] ...
	I1104 12:12:20.614543   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2e1787441f88b7be6fbfb08abfa621dbc26e984bbd40544c92bf02bae7c7709a"
	I1104 12:12:20.659277   86301 logs.go:123] Gathering logs for coredns [51442200af1bb501b83cd7280eeee93590e5cb241e91cf5af1608a6eccfdf5a1] ...
	I1104 12:12:20.659316   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 51442200af1bb501b83cd7280eeee93590e5cb241e91cf5af1608a6eccfdf5a1"
	I1104 12:12:20.694675   86301 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:12:20.694707   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:12:21.187670   86301 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:12:21.187705   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1104 12:12:21.308477   86301 logs.go:123] Gathering logs for etcd [1bc906f9e4e940157a2bb10397ebc3ba90f93123792d5de597633f6c5b3c64d7] ...
	I1104 12:12:21.308501   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1bc906f9e4e940157a2bb10397ebc3ba90f93123792d5de597633f6c5b3c64d7"
	I1104 12:12:21.365526   86301 logs.go:123] Gathering logs for kube-controller-manager [1346cefb5059440d0c41bba9a5526748e6c783b8f935f34dcf419f728abcd35e] ...
	I1104 12:12:21.365562   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1346cefb5059440d0c41bba9a5526748e6c783b8f935f34dcf419f728abcd35e"
	I1104 12:12:21.431350   86301 logs.go:123] Gathering logs for storage-provisioner [f8d8096ede6a877ac87491c90844f6cccb8fad4309f4a01e86bf1f7e3a4c9823] ...
	I1104 12:12:21.431381   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f8d8096ede6a877ac87491c90844f6cccb8fad4309f4a01e86bf1f7e3a4c9823"
	I1104 12:12:23.969966   86301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:12:23.984866   86301 api_server.go:72] duration metric: took 4m16.75797908s to wait for apiserver process to appear ...
	I1104 12:12:23.984895   86301 api_server.go:88] waiting for apiserver healthz status ...
	I1104 12:12:23.984937   86301 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:12:23.984989   86301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:12:24.022326   86301 cri.go:89] found id: "2e1787441f88b7be6fbfb08abfa621dbc26e984bbd40544c92bf02bae7c7709a"
	I1104 12:12:24.022348   86301 cri.go:89] found id: ""
	I1104 12:12:24.022357   86301 logs.go:282] 1 containers: [2e1787441f88b7be6fbfb08abfa621dbc26e984bbd40544c92bf02bae7c7709a]
	I1104 12:12:24.022428   86301 ssh_runner.go:195] Run: which crictl
	I1104 12:12:24.027288   86301 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:12:24.027377   86301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:12:24.064963   86301 cri.go:89] found id: "1bc906f9e4e940157a2bb10397ebc3ba90f93123792d5de597633f6c5b3c64d7"
	I1104 12:12:24.064986   86301 cri.go:89] found id: ""
	I1104 12:12:24.064993   86301 logs.go:282] 1 containers: [1bc906f9e4e940157a2bb10397ebc3ba90f93123792d5de597633f6c5b3c64d7]
	I1104 12:12:24.065045   86301 ssh_runner.go:195] Run: which crictl
	I1104 12:12:24.072027   86301 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:12:24.072089   86301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:12:24.106618   86301 cri.go:89] found id: "51442200af1bb501b83cd7280eeee93590e5cb241e91cf5af1608a6eccfdf5a1"
	I1104 12:12:24.106648   86301 cri.go:89] found id: ""
	I1104 12:12:24.106659   86301 logs.go:282] 1 containers: [51442200af1bb501b83cd7280eeee93590e5cb241e91cf5af1608a6eccfdf5a1]
	I1104 12:12:24.106719   86301 ssh_runner.go:195] Run: which crictl
	I1104 12:12:24.110696   86301 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:12:24.110762   86301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:12:24.148575   86301 cri.go:89] found id: "c33ea99d25624abd85dcac43e17e1dd6dcea3a7b6333e91e8dc99ad02c037e07"
	I1104 12:12:24.148600   86301 cri.go:89] found id: ""
	I1104 12:12:24.148621   86301 logs.go:282] 1 containers: [c33ea99d25624abd85dcac43e17e1dd6dcea3a7b6333e91e8dc99ad02c037e07]
	I1104 12:12:24.148687   86301 ssh_runner.go:195] Run: which crictl
	I1104 12:12:24.152673   86301 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:12:24.152741   86301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:12:24.187739   86301 cri.go:89] found id: "9e60ae78d5610ae63ce7016b58d60da05791a055324eea67efd0feb374bdd4b4"
	I1104 12:12:24.187763   86301 cri.go:89] found id: ""
	I1104 12:12:24.187771   86301 logs.go:282] 1 containers: [9e60ae78d5610ae63ce7016b58d60da05791a055324eea67efd0feb374bdd4b4]
	I1104 12:12:24.187817   86301 ssh_runner.go:195] Run: which crictl
	I1104 12:12:24.191551   86301 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:12:24.191610   86301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:12:24.229634   86301 cri.go:89] found id: "1346cefb5059440d0c41bba9a5526748e6c783b8f935f34dcf419f728abcd35e"
	I1104 12:12:24.229656   86301 cri.go:89] found id: ""
	I1104 12:12:24.229667   86301 logs.go:282] 1 containers: [1346cefb5059440d0c41bba9a5526748e6c783b8f935f34dcf419f728abcd35e]
	I1104 12:12:24.229720   86301 ssh_runner.go:195] Run: which crictl
	I1104 12:12:24.234342   86301 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:12:24.234426   86301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:12:24.268339   86301 cri.go:89] found id: ""
	I1104 12:12:24.268363   86301 logs.go:282] 0 containers: []
	W1104 12:12:24.268370   86301 logs.go:284] No container was found matching "kindnet"
	I1104 12:12:24.268375   86301 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1104 12:12:24.268426   86301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1104 12:12:24.302347   86301 cri.go:89] found id: "9e9ecf7280a07a47b274097b461b9f3467e388751471e9da1adfae3166380516"
	I1104 12:12:24.302369   86301 cri.go:89] found id: "f8d8096ede6a877ac87491c90844f6cccb8fad4309f4a01e86bf1f7e3a4c9823"
	I1104 12:12:24.302374   86301 cri.go:89] found id: ""
	I1104 12:12:24.302382   86301 logs.go:282] 2 containers: [9e9ecf7280a07a47b274097b461b9f3467e388751471e9da1adfae3166380516 f8d8096ede6a877ac87491c90844f6cccb8fad4309f4a01e86bf1f7e3a4c9823]
	I1104 12:12:24.302446   86301 ssh_runner.go:195] Run: which crictl
	I1104 12:12:24.306761   86301 ssh_runner.go:195] Run: which crictl
	I1104 12:12:24.310867   86301 logs.go:123] Gathering logs for coredns [51442200af1bb501b83cd7280eeee93590e5cb241e91cf5af1608a6eccfdf5a1] ...
	I1104 12:12:24.310888   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 51442200af1bb501b83cd7280eeee93590e5cb241e91cf5af1608a6eccfdf5a1"
	I1104 12:12:24.353396   86301 logs.go:123] Gathering logs for kube-controller-manager [1346cefb5059440d0c41bba9a5526748e6c783b8f935f34dcf419f728abcd35e] ...
	I1104 12:12:24.353421   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1346cefb5059440d0c41bba9a5526748e6c783b8f935f34dcf419f728abcd35e"
	I1104 12:12:24.408025   86301 logs.go:123] Gathering logs for storage-provisioner [9e9ecf7280a07a47b274097b461b9f3467e388751471e9da1adfae3166380516] ...
	I1104 12:12:24.408054   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e9ecf7280a07a47b274097b461b9f3467e388751471e9da1adfae3166380516"
	I1104 12:12:24.446150   86301 logs.go:123] Gathering logs for container status ...
	I1104 12:12:24.446177   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:12:24.495479   86301 logs.go:123] Gathering logs for kubelet ...
	I1104 12:12:24.495505   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:12:24.568973   86301 logs.go:123] Gathering logs for dmesg ...
	I1104 12:12:24.569008   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:12:24.585522   86301 logs.go:123] Gathering logs for kube-apiserver [2e1787441f88b7be6fbfb08abfa621dbc26e984bbd40544c92bf02bae7c7709a] ...
	I1104 12:12:24.585552   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2e1787441f88b7be6fbfb08abfa621dbc26e984bbd40544c92bf02bae7c7709a"
	I1104 12:12:24.630483   86301 logs.go:123] Gathering logs for etcd [1bc906f9e4e940157a2bb10397ebc3ba90f93123792d5de597633f6c5b3c64d7] ...
	I1104 12:12:24.630516   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1bc906f9e4e940157a2bb10397ebc3ba90f93123792d5de597633f6c5b3c64d7"
	I1104 12:12:24.675828   86301 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:12:24.675865   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:12:25.094412   86301 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:12:25.094457   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1104 12:12:25.191547   86301 logs.go:123] Gathering logs for kube-scheduler [c33ea99d25624abd85dcac43e17e1dd6dcea3a7b6333e91e8dc99ad02c037e07] ...
	I1104 12:12:25.191576   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c33ea99d25624abd85dcac43e17e1dd6dcea3a7b6333e91e8dc99ad02c037e07"
	I1104 12:12:25.227482   86301 logs.go:123] Gathering logs for kube-proxy [9e60ae78d5610ae63ce7016b58d60da05791a055324eea67efd0feb374bdd4b4] ...
	I1104 12:12:25.227509   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e60ae78d5610ae63ce7016b58d60da05791a055324eea67efd0feb374bdd4b4"
	I1104 12:12:25.261150   86301 logs.go:123] Gathering logs for storage-provisioner [f8d8096ede6a877ac87491c90844f6cccb8fad4309f4a01e86bf1f7e3a4c9823] ...
	I1104 12:12:25.261184   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f8d8096ede6a877ac87491c90844f6cccb8fad4309f4a01e86bf1f7e3a4c9823"
	I1104 12:12:24.130961   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:12:24.143387   86402 kubeadm.go:597] duration metric: took 4m4.25221988s to restartPrimaryControlPlane
	W1104 12:12:24.143472   86402 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1104 12:12:24.143499   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1104 12:12:27.207964   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:29.208705   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:27.799329   86301 api_server.go:253] Checking apiserver healthz at https://192.168.72.130:8444/healthz ...
	I1104 12:12:27.803543   86301 api_server.go:279] https://192.168.72.130:8444/healthz returned 200:
	ok
	I1104 12:12:27.804545   86301 api_server.go:141] control plane version: v1.31.2
	I1104 12:12:27.804568   86301 api_server.go:131] duration metric: took 3.819666619s to wait for apiserver health ...
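The healthz probe above is a plain HTTPS GET against the apiserver; reproduced by hand with the address from this run (-k skips verification of the cluster's self-signed certificate, and /healthz is normally readable without credentials):

    # Same check minikube performs: HTTP 200 with body "ok" means the apiserver is healthy.
    curl -k https://192.168.72.130:8444/healthz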
	I1104 12:12:27.804576   86301 system_pods.go:43] waiting for kube-system pods to appear ...
	I1104 12:12:27.804596   86301 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:12:27.804639   86301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:12:27.842317   86301 cri.go:89] found id: "2e1787441f88b7be6fbfb08abfa621dbc26e984bbd40544c92bf02bae7c7709a"
	I1104 12:12:27.842339   86301 cri.go:89] found id: ""
	I1104 12:12:27.842348   86301 logs.go:282] 1 containers: [2e1787441f88b7be6fbfb08abfa621dbc26e984bbd40544c92bf02bae7c7709a]
	I1104 12:12:27.842403   86301 ssh_runner.go:195] Run: which crictl
	I1104 12:12:27.846107   86301 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:12:27.846167   86301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:12:27.878833   86301 cri.go:89] found id: "1bc906f9e4e940157a2bb10397ebc3ba90f93123792d5de597633f6c5b3c64d7"
	I1104 12:12:27.878854   86301 cri.go:89] found id: ""
	I1104 12:12:27.878864   86301 logs.go:282] 1 containers: [1bc906f9e4e940157a2bb10397ebc3ba90f93123792d5de597633f6c5b3c64d7]
	I1104 12:12:27.878923   86301 ssh_runner.go:195] Run: which crictl
	I1104 12:12:27.882562   86301 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:12:27.882614   86301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:12:27.914077   86301 cri.go:89] found id: "51442200af1bb501b83cd7280eeee93590e5cb241e91cf5af1608a6eccfdf5a1"
	I1104 12:12:27.914098   86301 cri.go:89] found id: ""
	I1104 12:12:27.914106   86301 logs.go:282] 1 containers: [51442200af1bb501b83cd7280eeee93590e5cb241e91cf5af1608a6eccfdf5a1]
	I1104 12:12:27.914150   86301 ssh_runner.go:195] Run: which crictl
	I1104 12:12:27.917756   86301 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:12:27.917807   86301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:12:27.949534   86301 cri.go:89] found id: "c33ea99d25624abd85dcac43e17e1dd6dcea3a7b6333e91e8dc99ad02c037e07"
	I1104 12:12:27.949555   86301 cri.go:89] found id: ""
	I1104 12:12:27.949562   86301 logs.go:282] 1 containers: [c33ea99d25624abd85dcac43e17e1dd6dcea3a7b6333e91e8dc99ad02c037e07]
	I1104 12:12:27.949606   86301 ssh_runner.go:195] Run: which crictl
	I1104 12:12:27.953176   86301 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:12:27.953235   86301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:12:27.984491   86301 cri.go:89] found id: "9e60ae78d5610ae63ce7016b58d60da05791a055324eea67efd0feb374bdd4b4"
	I1104 12:12:27.984509   86301 cri.go:89] found id: ""
	I1104 12:12:27.984516   86301 logs.go:282] 1 containers: [9e60ae78d5610ae63ce7016b58d60da05791a055324eea67efd0feb374bdd4b4]
	I1104 12:12:27.984566   86301 ssh_runner.go:195] Run: which crictl
	I1104 12:12:27.988283   86301 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:12:27.988342   86301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:12:28.022752   86301 cri.go:89] found id: "1346cefb5059440d0c41bba9a5526748e6c783b8f935f34dcf419f728abcd35e"
	I1104 12:12:28.022775   86301 cri.go:89] found id: ""
	I1104 12:12:28.022783   86301 logs.go:282] 1 containers: [1346cefb5059440d0c41bba9a5526748e6c783b8f935f34dcf419f728abcd35e]
	I1104 12:12:28.022829   86301 ssh_runner.go:195] Run: which crictl
	I1104 12:12:28.026702   86301 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:12:28.026767   86301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:12:28.062501   86301 cri.go:89] found id: ""
	I1104 12:12:28.062534   86301 logs.go:282] 0 containers: []
	W1104 12:12:28.062545   86301 logs.go:284] No container was found matching "kindnet"
	I1104 12:12:28.062556   86301 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1104 12:12:28.062608   86301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1104 12:12:28.097167   86301 cri.go:89] found id: "9e9ecf7280a07a47b274097b461b9f3467e388751471e9da1adfae3166380516"
	I1104 12:12:28.097195   86301 cri.go:89] found id: "f8d8096ede6a877ac87491c90844f6cccb8fad4309f4a01e86bf1f7e3a4c9823"
	I1104 12:12:28.097201   86301 cri.go:89] found id: ""
	I1104 12:12:28.097211   86301 logs.go:282] 2 containers: [9e9ecf7280a07a47b274097b461b9f3467e388751471e9da1adfae3166380516 f8d8096ede6a877ac87491c90844f6cccb8fad4309f4a01e86bf1f7e3a4c9823]
	I1104 12:12:28.097276   86301 ssh_runner.go:195] Run: which crictl
	I1104 12:12:28.101192   86301 ssh_runner.go:195] Run: which crictl
	I1104 12:12:28.104712   86301 logs.go:123] Gathering logs for dmesg ...
	I1104 12:12:28.104731   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:12:28.118886   86301 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:12:28.118911   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1104 12:12:28.220480   86301 logs.go:123] Gathering logs for etcd [1bc906f9e4e940157a2bb10397ebc3ba90f93123792d5de597633f6c5b3c64d7] ...
	I1104 12:12:28.220512   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1bc906f9e4e940157a2bb10397ebc3ba90f93123792d5de597633f6c5b3c64d7"
	I1104 12:12:28.264205   86301 logs.go:123] Gathering logs for coredns [51442200af1bb501b83cd7280eeee93590e5cb241e91cf5af1608a6eccfdf5a1] ...
	I1104 12:12:28.264239   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 51442200af1bb501b83cd7280eeee93590e5cb241e91cf5af1608a6eccfdf5a1"
	I1104 12:12:28.299241   86301 logs.go:123] Gathering logs for kube-scheduler [c33ea99d25624abd85dcac43e17e1dd6dcea3a7b6333e91e8dc99ad02c037e07] ...
	I1104 12:12:28.299274   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c33ea99d25624abd85dcac43e17e1dd6dcea3a7b6333e91e8dc99ad02c037e07"
	I1104 12:12:28.339817   86301 logs.go:123] Gathering logs for kube-proxy [9e60ae78d5610ae63ce7016b58d60da05791a055324eea67efd0feb374bdd4b4] ...
	I1104 12:12:28.339847   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e60ae78d5610ae63ce7016b58d60da05791a055324eea67efd0feb374bdd4b4"
	I1104 12:12:28.377987   86301 logs.go:123] Gathering logs for container status ...
	I1104 12:12:28.378014   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:12:28.416746   86301 logs.go:123] Gathering logs for kubelet ...
	I1104 12:12:28.416772   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:12:28.484743   86301 logs.go:123] Gathering logs for kube-apiserver [2e1787441f88b7be6fbfb08abfa621dbc26e984bbd40544c92bf02bae7c7709a] ...
	I1104 12:12:28.484777   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2e1787441f88b7be6fbfb08abfa621dbc26e984bbd40544c92bf02bae7c7709a"
	I1104 12:12:28.532089   86301 logs.go:123] Gathering logs for kube-controller-manager [1346cefb5059440d0c41bba9a5526748e6c783b8f935f34dcf419f728abcd35e] ...
	I1104 12:12:28.532128   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1346cefb5059440d0c41bba9a5526748e6c783b8f935f34dcf419f728abcd35e"
	I1104 12:12:28.589039   86301 logs.go:123] Gathering logs for storage-provisioner [9e9ecf7280a07a47b274097b461b9f3467e388751471e9da1adfae3166380516] ...
	I1104 12:12:28.589072   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e9ecf7280a07a47b274097b461b9f3467e388751471e9da1adfae3166380516"
	I1104 12:12:28.623955   86301 logs.go:123] Gathering logs for storage-provisioner [f8d8096ede6a877ac87491c90844f6cccb8fad4309f4a01e86bf1f7e3a4c9823] ...
	I1104 12:12:28.623987   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f8d8096ede6a877ac87491c90844f6cccb8fad4309f4a01e86bf1f7e3a4c9823"
	I1104 12:12:28.657953   86301 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:12:28.657986   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:12:31.547595   86301 system_pods.go:59] 8 kube-system pods found
	I1104 12:12:31.547624   86301 system_pods.go:61] "coredns-7c65d6cfc9-zw2tv" [71ce75a4-f051-4014-9ed0-7b275ea940a9] Running
	I1104 12:12:31.547629   86301 system_pods.go:61] "etcd-default-k8s-diff-port-036892" [7e46d97c-96b5-4301-b98a-f33b69937411] Running
	I1104 12:12:31.547633   86301 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-036892" [483cebd0-7ceb-4bf4-b1f9-e33be61b136e] Running
	I1104 12:12:31.547637   86301 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-036892" [c2dc4343-177a-4a4c-8a25-47078ec664f1] Running
	I1104 12:12:31.547640   86301 system_pods.go:61] "kube-proxy-j2srm" [9450cebd-aefb-4f1a-bb99-7d1dab054dd7] Running
	I1104 12:12:31.547643   86301 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-036892" [505d8202-5e02-4abd-9eff-163810a91eb2] Running
	I1104 12:12:31.547649   86301 system_pods.go:61] "metrics-server-6867b74b74-2wl94" [7f7cc9c1-420c-480e-b6b7-1a2027bf2f9d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1104 12:12:31.547653   86301 system_pods.go:61] "storage-provisioner" [18745f89-fc15-4a4c-b68b-7e80cd4f393b] Running
	I1104 12:12:31.547661   86301 system_pods.go:74] duration metric: took 3.743079115s to wait for pod list to return data ...
	I1104 12:12:31.547667   86301 default_sa.go:34] waiting for default service account to be created ...
	I1104 12:12:31.550088   86301 default_sa.go:45] found service account: "default"
	I1104 12:12:31.550108   86301 default_sa.go:55] duration metric: took 2.435317ms for default service account to be created ...
	I1104 12:12:31.550114   86301 system_pods.go:116] waiting for k8s-apps to be running ...
	I1104 12:12:31.554898   86301 system_pods.go:86] 8 kube-system pods found
	I1104 12:12:31.554924   86301 system_pods.go:89] "coredns-7c65d6cfc9-zw2tv" [71ce75a4-f051-4014-9ed0-7b275ea940a9] Running
	I1104 12:12:31.554929   86301 system_pods.go:89] "etcd-default-k8s-diff-port-036892" [7e46d97c-96b5-4301-b98a-f33b69937411] Running
	I1104 12:12:31.554933   86301 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-036892" [483cebd0-7ceb-4bf4-b1f9-e33be61b136e] Running
	I1104 12:12:31.554937   86301 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-036892" [c2dc4343-177a-4a4c-8a25-47078ec664f1] Running
	I1104 12:12:31.554941   86301 system_pods.go:89] "kube-proxy-j2srm" [9450cebd-aefb-4f1a-bb99-7d1dab054dd7] Running
	I1104 12:12:31.554945   86301 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-036892" [505d8202-5e02-4abd-9eff-163810a91eb2] Running
	I1104 12:12:31.554952   86301 system_pods.go:89] "metrics-server-6867b74b74-2wl94" [7f7cc9c1-420c-480e-b6b7-1a2027bf2f9d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1104 12:12:31.554955   86301 system_pods.go:89] "storage-provisioner" [18745f89-fc15-4a4c-b68b-7e80cd4f393b] Running
	I1104 12:12:31.554962   86301 system_pods.go:126] duration metric: took 4.842911ms to wait for k8s-apps to be running ...
	I1104 12:12:31.554968   86301 system_svc.go:44] waiting for kubelet service to be running ....
	I1104 12:12:31.555008   86301 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1104 12:12:31.568927   86301 system_svc.go:56] duration metric: took 13.948557ms WaitForService to wait for kubelet
	I1104 12:12:31.568958   86301 kubeadm.go:582] duration metric: took 4m24.342075873s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1104 12:12:31.568987   86301 node_conditions.go:102] verifying NodePressure condition ...
	I1104 12:12:31.571962   86301 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1104 12:12:31.571983   86301 node_conditions.go:123] node cpu capacity is 2
	I1104 12:12:31.571993   86301 node_conditions.go:105] duration metric: took 3.000591ms to run NodePressure ...
	I1104 12:12:31.572004   86301 start.go:241] waiting for startup goroutines ...
	I1104 12:12:31.572010   86301 start.go:246] waiting for cluster config update ...
	I1104 12:12:31.572019   86301 start.go:255] writing updated cluster config ...
	I1104 12:12:31.572277   86301 ssh_runner.go:195] Run: rm -f paused
	I1104 12:12:31.620935   86301 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1104 12:12:31.623672   86301 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-036892" cluster and "default" namespace by default
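After the "Done!" line the kubeconfig current-context has been switched to the new profile, so a quick sanity check of the freshly started cluster could look like this (profile name taken from the log):

    # Confirm kubectl points at the new profile and the system pods are running.
    kubectl config current-context        # expected: default-k8s-diff-port-036892
    kubectl get pods -n kube-system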
	I1104 12:12:28.876306   86402 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.732783523s)
	I1104 12:12:28.876377   86402 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1104 12:12:28.890455   86402 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1104 12:12:28.899660   86402 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1104 12:12:28.908658   86402 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1104 12:12:28.908675   86402 kubeadm.go:157] found existing configuration files:
	
	I1104 12:12:28.908715   86402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1104 12:12:28.916955   86402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1104 12:12:28.917013   86402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1104 12:12:28.927198   86402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1104 12:12:28.936868   86402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1104 12:12:28.936924   86402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1104 12:12:28.947246   86402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1104 12:12:28.956962   86402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1104 12:12:28.957015   86402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1104 12:12:28.967293   86402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1104 12:12:28.976975   86402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1104 12:12:28.977030   86402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
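The block above is the stale-kubeconfig cleanup: each of the four kubeconfig files is grepped for the expected control-plane endpoint and removed when the endpoint is not found (here every grep fails simply because the files no longer exist after the reset, so the rm calls are no-ops). A condensed sketch of the same loop, with the endpoint from this run:

    # Remove any kubeconfig that does not point at the expected control-plane endpoint;
    # kubeadm init regenerates the files afterwards.
    endpoint="https://control-plane.minikube.internal:8443"
    for f in admin kubelet controller-manager scheduler; do
      cfg="/etc/kubernetes/${f}.conf"
      if ! sudo grep -q "$endpoint" "$cfg" 2>/dev/null; then
        sudo rm -f "$cfg"
      fi
    done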
	I1104 12:12:28.988547   86402 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1104 12:12:29.198333   86402 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
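The preflight WARNING above is standard kubeadm advice; if the node is expected to bring the kubelet back after a reboot, the fix is exactly what the message suggests:

    # Enable (and start) the kubelet unit so it survives reboots.
    sudo systemctl enable --now kubelet.service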
	I1104 12:12:31.709511   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:34.207341   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:36.707962   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:39.208138   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:41.208806   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:43.707896   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:46.207316   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:48.707107   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:50.707644   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:52.708268   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:54.708517   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:57.206564   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:59.207122   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:13:01.207195   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:13:03.207617   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:13:05.707763   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:13:07.708314   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:13:09.708374   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:13:10.702085   85500 pod_ready.go:82] duration metric: took 4m0.000587313s for pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace to be "Ready" ...
	E1104 12:13:10.702115   85500 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1104 12:13:10.702126   85500 pod_ready.go:39] duration metric: took 4m5.542549912s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1104 12:13:10.702144   85500 api_server.go:52] waiting for apiserver process to appear ...
	I1104 12:13:10.702191   85500 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:13:10.702246   85500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:13:10.743079   85500 cri.go:89] found id: "e74398c77b3ca0adb85872c2f97209b51f4c13968147f500a41590f72d758dea"
	I1104 12:13:10.743102   85500 cri.go:89] found id: ""
	I1104 12:13:10.743110   85500 logs.go:282] 1 containers: [e74398c77b3ca0adb85872c2f97209b51f4c13968147f500a41590f72d758dea]
	I1104 12:13:10.743176   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:13:10.747213   85500 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:13:10.747275   85500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:13:10.781435   85500 cri.go:89] found id: "1390676564c7e4039f73139668fc6a3c321c186b612f1f1837e21788a5c0aa82"
	I1104 12:13:10.781465   85500 cri.go:89] found id: ""
	I1104 12:13:10.781474   85500 logs.go:282] 1 containers: [1390676564c7e4039f73139668fc6a3c321c186b612f1f1837e21788a5c0aa82]
	I1104 12:13:10.781597   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:13:10.785383   85500 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:13:10.785453   85500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:13:10.825927   85500 cri.go:89] found id: "6dcd13443296397949a6314fd6007ac667c05b70bd43f14e2e9b54f3313440de"
	I1104 12:13:10.825956   85500 cri.go:89] found id: ""
	I1104 12:13:10.825965   85500 logs.go:282] 1 containers: [6dcd13443296397949a6314fd6007ac667c05b70bd43f14e2e9b54f3313440de]
	I1104 12:13:10.826023   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:13:10.829834   85500 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:13:10.829899   85500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:13:10.872447   85500 cri.go:89] found id: "5546d06c4d51e3fa5699a7351286904174305e9c759397f8d2f640c6ce17d456"
	I1104 12:13:10.872468   85500 cri.go:89] found id: ""
	I1104 12:13:10.872475   85500 logs.go:282] 1 containers: [5546d06c4d51e3fa5699a7351286904174305e9c759397f8d2f640c6ce17d456]
	I1104 12:13:10.872524   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:13:10.876428   85500 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:13:10.876483   85500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:13:10.911092   85500 cri.go:89] found id: "33418a9cb2f8aea69efe66db3f38e3a66b699fea5455b72e6b94484067b704a3"
	I1104 12:13:10.911125   85500 cri.go:89] found id: ""
	I1104 12:13:10.911134   85500 logs.go:282] 1 containers: [33418a9cb2f8aea69efe66db3f38e3a66b699fea5455b72e6b94484067b704a3]
	I1104 12:13:10.911190   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:13:10.915021   85500 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:13:10.915076   85500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:13:10.950838   85500 cri.go:89] found id: "9c3fa7870c72468d3a6283be42fb2724275d8c5937f6728c8bc97103c58b2ebd"
	I1104 12:13:10.950863   85500 cri.go:89] found id: ""
	I1104 12:13:10.950873   85500 logs.go:282] 1 containers: [9c3fa7870c72468d3a6283be42fb2724275d8c5937f6728c8bc97103c58b2ebd]
	I1104 12:13:10.950935   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:13:10.954889   85500 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:13:10.954938   85500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:13:10.991580   85500 cri.go:89] found id: ""
	I1104 12:13:10.991609   85500 logs.go:282] 0 containers: []
	W1104 12:13:10.991618   85500 logs.go:284] No container was found matching "kindnet"
	I1104 12:13:10.991625   85500 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1104 12:13:10.991689   85500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1104 12:13:11.031428   85500 cri.go:89] found id: "d4f6c824f92eecdb4ebc13e783585c25292f412e0f0a0d7b9fd0eab092fa8e41"
	I1104 12:13:11.031469   85500 cri.go:89] found id: "162e3330ff77f795d15e11b3abf1d3019490bff78148c8d6b470ef1507dcb67d"
	I1104 12:13:11.031474   85500 cri.go:89] found id: ""
	I1104 12:13:11.031484   85500 logs.go:282] 2 containers: [d4f6c824f92eecdb4ebc13e783585c25292f412e0f0a0d7b9fd0eab092fa8e41 162e3330ff77f795d15e11b3abf1d3019490bff78148c8d6b470ef1507dcb67d]
	I1104 12:13:11.031557   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:13:11.035810   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:13:11.039555   85500 logs.go:123] Gathering logs for coredns [6dcd13443296397949a6314fd6007ac667c05b70bd43f14e2e9b54f3313440de] ...
	I1104 12:13:11.039582   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6dcd13443296397949a6314fd6007ac667c05b70bd43f14e2e9b54f3313440de"
	I1104 12:13:11.076837   85500 logs.go:123] Gathering logs for kube-scheduler [5546d06c4d51e3fa5699a7351286904174305e9c759397f8d2f640c6ce17d456] ...
	I1104 12:13:11.076865   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5546d06c4d51e3fa5699a7351286904174305e9c759397f8d2f640c6ce17d456"
	I1104 12:13:11.114534   85500 logs.go:123] Gathering logs for kube-proxy [33418a9cb2f8aea69efe66db3f38e3a66b699fea5455b72e6b94484067b704a3] ...
	I1104 12:13:11.114561   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33418a9cb2f8aea69efe66db3f38e3a66b699fea5455b72e6b94484067b704a3"
	I1104 12:13:11.148897   85500 logs.go:123] Gathering logs for storage-provisioner [d4f6c824f92eecdb4ebc13e783585c25292f412e0f0a0d7b9fd0eab092fa8e41] ...
	I1104 12:13:11.148935   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d4f6c824f92eecdb4ebc13e783585c25292f412e0f0a0d7b9fd0eab092fa8e41"
	I1104 12:13:11.184480   85500 logs.go:123] Gathering logs for kubelet ...
	I1104 12:13:11.184511   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:13:11.256197   85500 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:13:11.256237   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1104 12:13:11.368984   85500 logs.go:123] Gathering logs for kube-apiserver [e74398c77b3ca0adb85872c2f97209b51f4c13968147f500a41590f72d758dea] ...
	I1104 12:13:11.369014   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e74398c77b3ca0adb85872c2f97209b51f4c13968147f500a41590f72d758dea"
	I1104 12:13:11.414219   85500 logs.go:123] Gathering logs for etcd [1390676564c7e4039f73139668fc6a3c321c186b612f1f1837e21788a5c0aa82] ...
	I1104 12:13:11.414253   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1390676564c7e4039f73139668fc6a3c321c186b612f1f1837e21788a5c0aa82"
	I1104 12:13:11.455746   85500 logs.go:123] Gathering logs for storage-provisioner [162e3330ff77f795d15e11b3abf1d3019490bff78148c8d6b470ef1507dcb67d] ...
	I1104 12:13:11.455776   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 162e3330ff77f795d15e11b3abf1d3019490bff78148c8d6b470ef1507dcb67d"
	I1104 12:13:11.491699   85500 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:13:11.491726   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:13:11.962368   85500 logs.go:123] Gathering logs for dmesg ...
	I1104 12:13:11.962400   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:13:11.975564   85500 logs.go:123] Gathering logs for kube-controller-manager [9c3fa7870c72468d3a6283be42fb2724275d8c5937f6728c8bc97103c58b2ebd] ...
	I1104 12:13:11.975590   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9c3fa7870c72468d3a6283be42fb2724275d8c5937f6728c8bc97103c58b2ebd"
	I1104 12:13:12.031427   85500 logs.go:123] Gathering logs for container status ...
	I1104 12:13:12.031461   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:13:14.572933   85500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:13:14.588140   85500 api_server.go:72] duration metric: took 4m17.141131339s to wait for apiserver process to appear ...
	I1104 12:13:14.588168   85500 api_server.go:88] waiting for apiserver healthz status ...
	I1104 12:13:14.588196   85500 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:13:14.588243   85500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:13:14.621509   85500 cri.go:89] found id: "e74398c77b3ca0adb85872c2f97209b51f4c13968147f500a41590f72d758dea"
	I1104 12:13:14.621534   85500 cri.go:89] found id: ""
	I1104 12:13:14.621543   85500 logs.go:282] 1 containers: [e74398c77b3ca0adb85872c2f97209b51f4c13968147f500a41590f72d758dea]
	I1104 12:13:14.621601   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:13:14.626328   85500 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:13:14.626384   85500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:13:14.662052   85500 cri.go:89] found id: "1390676564c7e4039f73139668fc6a3c321c186b612f1f1837e21788a5c0aa82"
	I1104 12:13:14.662079   85500 cri.go:89] found id: ""
	I1104 12:13:14.662115   85500 logs.go:282] 1 containers: [1390676564c7e4039f73139668fc6a3c321c186b612f1f1837e21788a5c0aa82]
	I1104 12:13:14.662174   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:13:14.666018   85500 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:13:14.666089   85500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:13:14.702872   85500 cri.go:89] found id: "6dcd13443296397949a6314fd6007ac667c05b70bd43f14e2e9b54f3313440de"
	I1104 12:13:14.702897   85500 cri.go:89] found id: ""
	I1104 12:13:14.702910   85500 logs.go:282] 1 containers: [6dcd13443296397949a6314fd6007ac667c05b70bd43f14e2e9b54f3313440de]
	I1104 12:13:14.702968   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:13:14.706809   85500 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:13:14.706883   85500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:13:14.744985   85500 cri.go:89] found id: "5546d06c4d51e3fa5699a7351286904174305e9c759397f8d2f640c6ce17d456"
	I1104 12:13:14.745005   85500 cri.go:89] found id: ""
	I1104 12:13:14.745012   85500 logs.go:282] 1 containers: [5546d06c4d51e3fa5699a7351286904174305e9c759397f8d2f640c6ce17d456]
	I1104 12:13:14.745058   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:13:14.749441   85500 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:13:14.749497   85500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:13:14.781617   85500 cri.go:89] found id: "33418a9cb2f8aea69efe66db3f38e3a66b699fea5455b72e6b94484067b704a3"
	I1104 12:13:14.781644   85500 cri.go:89] found id: ""
	I1104 12:13:14.781653   85500 logs.go:282] 1 containers: [33418a9cb2f8aea69efe66db3f38e3a66b699fea5455b72e6b94484067b704a3]
	I1104 12:13:14.781709   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:13:14.785971   85500 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:13:14.786046   85500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:13:14.819002   85500 cri.go:89] found id: "9c3fa7870c72468d3a6283be42fb2724275d8c5937f6728c8bc97103c58b2ebd"
	I1104 12:13:14.819029   85500 cri.go:89] found id: ""
	I1104 12:13:14.819038   85500 logs.go:282] 1 containers: [9c3fa7870c72468d3a6283be42fb2724275d8c5937f6728c8bc97103c58b2ebd]
	I1104 12:13:14.819101   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:13:14.823075   85500 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:13:14.823143   85500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:13:14.858936   85500 cri.go:89] found id: ""
	I1104 12:13:14.858965   85500 logs.go:282] 0 containers: []
	W1104 12:13:14.858977   85500 logs.go:284] No container was found matching "kindnet"
	I1104 12:13:14.858984   85500 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1104 12:13:14.859048   85500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1104 12:13:14.898303   85500 cri.go:89] found id: "d4f6c824f92eecdb4ebc13e783585c25292f412e0f0a0d7b9fd0eab092fa8e41"
	I1104 12:13:14.898327   85500 cri.go:89] found id: "162e3330ff77f795d15e11b3abf1d3019490bff78148c8d6b470ef1507dcb67d"
	I1104 12:13:14.898333   85500 cri.go:89] found id: ""
	I1104 12:13:14.898341   85500 logs.go:282] 2 containers: [d4f6c824f92eecdb4ebc13e783585c25292f412e0f0a0d7b9fd0eab092fa8e41 162e3330ff77f795d15e11b3abf1d3019490bff78148c8d6b470ef1507dcb67d]
	I1104 12:13:14.898402   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:13:14.902325   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:13:14.905855   85500 logs.go:123] Gathering logs for kubelet ...
	I1104 12:13:14.905880   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:13:14.973356   85500 logs.go:123] Gathering logs for dmesg ...
	I1104 12:13:14.973389   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:13:14.988655   85500 logs.go:123] Gathering logs for kube-scheduler [5546d06c4d51e3fa5699a7351286904174305e9c759397f8d2f640c6ce17d456] ...
	I1104 12:13:14.988696   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5546d06c4d51e3fa5699a7351286904174305e9c759397f8d2f640c6ce17d456"
	I1104 12:13:15.023407   85500 logs.go:123] Gathering logs for kube-controller-manager [9c3fa7870c72468d3a6283be42fb2724275d8c5937f6728c8bc97103c58b2ebd] ...
	I1104 12:13:15.023443   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9c3fa7870c72468d3a6283be42fb2724275d8c5937f6728c8bc97103c58b2ebd"
	I1104 12:13:15.078974   85500 logs.go:123] Gathering logs for storage-provisioner [162e3330ff77f795d15e11b3abf1d3019490bff78148c8d6b470ef1507dcb67d] ...
	I1104 12:13:15.079007   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 162e3330ff77f795d15e11b3abf1d3019490bff78148c8d6b470ef1507dcb67d"
	I1104 12:13:15.114147   85500 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:13:15.114180   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:13:15.559434   85500 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:13:15.559477   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1104 12:13:15.666481   85500 logs.go:123] Gathering logs for kube-apiserver [e74398c77b3ca0adb85872c2f97209b51f4c13968147f500a41590f72d758dea] ...
	I1104 12:13:15.666509   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e74398c77b3ca0adb85872c2f97209b51f4c13968147f500a41590f72d758dea"
	I1104 12:13:15.728066   85500 logs.go:123] Gathering logs for etcd [1390676564c7e4039f73139668fc6a3c321c186b612f1f1837e21788a5c0aa82] ...
	I1104 12:13:15.728101   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1390676564c7e4039f73139668fc6a3c321c186b612f1f1837e21788a5c0aa82"
	I1104 12:13:15.769721   85500 logs.go:123] Gathering logs for coredns [6dcd13443296397949a6314fd6007ac667c05b70bd43f14e2e9b54f3313440de] ...
	I1104 12:13:15.769759   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6dcd13443296397949a6314fd6007ac667c05b70bd43f14e2e9b54f3313440de"
	I1104 12:13:15.802131   85500 logs.go:123] Gathering logs for kube-proxy [33418a9cb2f8aea69efe66db3f38e3a66b699fea5455b72e6b94484067b704a3] ...
	I1104 12:13:15.802170   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33418a9cb2f8aea69efe66db3f38e3a66b699fea5455b72e6b94484067b704a3"
	I1104 12:13:15.837613   85500 logs.go:123] Gathering logs for storage-provisioner [d4f6c824f92eecdb4ebc13e783585c25292f412e0f0a0d7b9fd0eab092fa8e41] ...
	I1104 12:13:15.837639   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d4f6c824f92eecdb4ebc13e783585c25292f412e0f0a0d7b9fd0eab092fa8e41"
	I1104 12:13:15.874374   85500 logs.go:123] Gathering logs for container status ...
	I1104 12:13:15.874407   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:13:18.413199   85500 api_server.go:253] Checking apiserver healthz at https://192.168.61.91:8443/healthz ...
	I1104 12:13:18.418522   85500 api_server.go:279] https://192.168.61.91:8443/healthz returned 200:
	ok
	I1104 12:13:18.419487   85500 api_server.go:141] control plane version: v1.31.2
	I1104 12:13:18.419512   85500 api_server.go:131] duration metric: took 3.831337085s to wait for apiserver health ...
	I1104 12:13:18.419521   85500 system_pods.go:43] waiting for kube-system pods to appear ...
	I1104 12:13:18.419549   85500 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:13:18.419605   85500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:13:18.453835   85500 cri.go:89] found id: "e74398c77b3ca0adb85872c2f97209b51f4c13968147f500a41590f72d758dea"
	I1104 12:13:18.453856   85500 cri.go:89] found id: ""
	I1104 12:13:18.453865   85500 logs.go:282] 1 containers: [e74398c77b3ca0adb85872c2f97209b51f4c13968147f500a41590f72d758dea]
	I1104 12:13:18.453927   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:13:18.458136   85500 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:13:18.458198   85500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:13:18.496587   85500 cri.go:89] found id: "1390676564c7e4039f73139668fc6a3c321c186b612f1f1837e21788a5c0aa82"
	I1104 12:13:18.496623   85500 cri.go:89] found id: ""
	I1104 12:13:18.496634   85500 logs.go:282] 1 containers: [1390676564c7e4039f73139668fc6a3c321c186b612f1f1837e21788a5c0aa82]
	I1104 12:13:18.496691   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:13:18.500451   85500 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:13:18.500523   85500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:13:18.532756   85500 cri.go:89] found id: "6dcd13443296397949a6314fd6007ac667c05b70bd43f14e2e9b54f3313440de"
	I1104 12:13:18.532785   85500 cri.go:89] found id: ""
	I1104 12:13:18.532795   85500 logs.go:282] 1 containers: [6dcd13443296397949a6314fd6007ac667c05b70bd43f14e2e9b54f3313440de]
	I1104 12:13:18.532857   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:13:18.537239   85500 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:13:18.537293   85500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:13:18.569348   85500 cri.go:89] found id: "5546d06c4d51e3fa5699a7351286904174305e9c759397f8d2f640c6ce17d456"
	I1104 12:13:18.569374   85500 cri.go:89] found id: ""
	I1104 12:13:18.569382   85500 logs.go:282] 1 containers: [5546d06c4d51e3fa5699a7351286904174305e9c759397f8d2f640c6ce17d456]
	I1104 12:13:18.569440   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:13:18.573491   85500 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:13:18.573563   85500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:13:18.606857   85500 cri.go:89] found id: "33418a9cb2f8aea69efe66db3f38e3a66b699fea5455b72e6b94484067b704a3"
	I1104 12:13:18.606886   85500 cri.go:89] found id: ""
	I1104 12:13:18.606896   85500 logs.go:282] 1 containers: [33418a9cb2f8aea69efe66db3f38e3a66b699fea5455b72e6b94484067b704a3]
	I1104 12:13:18.606951   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:13:18.611158   85500 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:13:18.611229   85500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:13:18.645448   85500 cri.go:89] found id: "9c3fa7870c72468d3a6283be42fb2724275d8c5937f6728c8bc97103c58b2ebd"
	I1104 12:13:18.645467   85500 cri.go:89] found id: ""
	I1104 12:13:18.645474   85500 logs.go:282] 1 containers: [9c3fa7870c72468d3a6283be42fb2724275d8c5937f6728c8bc97103c58b2ebd]
	I1104 12:13:18.645527   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:13:18.649014   85500 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:13:18.649062   85500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:13:18.693641   85500 cri.go:89] found id: ""
	I1104 12:13:18.693668   85500 logs.go:282] 0 containers: []
	W1104 12:13:18.693676   85500 logs.go:284] No container was found matching "kindnet"
	I1104 12:13:18.693681   85500 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1104 12:13:18.693728   85500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1104 12:13:18.733668   85500 cri.go:89] found id: "d4f6c824f92eecdb4ebc13e783585c25292f412e0f0a0d7b9fd0eab092fa8e41"
	I1104 12:13:18.733690   85500 cri.go:89] found id: "162e3330ff77f795d15e11b3abf1d3019490bff78148c8d6b470ef1507dcb67d"
	I1104 12:13:18.733695   85500 cri.go:89] found id: ""
	I1104 12:13:18.733702   85500 logs.go:282] 2 containers: [d4f6c824f92eecdb4ebc13e783585c25292f412e0f0a0d7b9fd0eab092fa8e41 162e3330ff77f795d15e11b3abf1d3019490bff78148c8d6b470ef1507dcb67d]
	I1104 12:13:18.733745   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:13:18.737419   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:13:18.740993   85500 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:13:18.741014   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:13:19.135942   85500 logs.go:123] Gathering logs for kubelet ...
	I1104 12:13:19.135980   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:13:19.206586   85500 logs.go:123] Gathering logs for dmesg ...
	I1104 12:13:19.206623   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:13:19.222135   85500 logs.go:123] Gathering logs for etcd [1390676564c7e4039f73139668fc6a3c321c186b612f1f1837e21788a5c0aa82] ...
	I1104 12:13:19.222164   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1390676564c7e4039f73139668fc6a3c321c186b612f1f1837e21788a5c0aa82"
	I1104 12:13:19.262746   85500 logs.go:123] Gathering logs for kube-scheduler [5546d06c4d51e3fa5699a7351286904174305e9c759397f8d2f640c6ce17d456] ...
	I1104 12:13:19.262774   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5546d06c4d51e3fa5699a7351286904174305e9c759397f8d2f640c6ce17d456"
	I1104 12:13:19.298259   85500 logs.go:123] Gathering logs for kube-proxy [33418a9cb2f8aea69efe66db3f38e3a66b699fea5455b72e6b94484067b704a3] ...
	I1104 12:13:19.298287   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33418a9cb2f8aea69efe66db3f38e3a66b699fea5455b72e6b94484067b704a3"
	I1104 12:13:19.338304   85500 logs.go:123] Gathering logs for storage-provisioner [d4f6c824f92eecdb4ebc13e783585c25292f412e0f0a0d7b9fd0eab092fa8e41] ...
	I1104 12:13:19.338332   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d4f6c824f92eecdb4ebc13e783585c25292f412e0f0a0d7b9fd0eab092fa8e41"
	I1104 12:13:19.375163   85500 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:13:19.375195   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1104 12:13:19.478206   85500 logs.go:123] Gathering logs for kube-apiserver [e74398c77b3ca0adb85872c2f97209b51f4c13968147f500a41590f72d758dea] ...
	I1104 12:13:19.478234   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e74398c77b3ca0adb85872c2f97209b51f4c13968147f500a41590f72d758dea"
	I1104 12:13:19.526261   85500 logs.go:123] Gathering logs for coredns [6dcd13443296397949a6314fd6007ac667c05b70bd43f14e2e9b54f3313440de] ...
	I1104 12:13:19.526291   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6dcd13443296397949a6314fd6007ac667c05b70bd43f14e2e9b54f3313440de"
	I1104 12:13:19.559922   85500 logs.go:123] Gathering logs for kube-controller-manager [9c3fa7870c72468d3a6283be42fb2724275d8c5937f6728c8bc97103c58b2ebd] ...
	I1104 12:13:19.559954   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9c3fa7870c72468d3a6283be42fb2724275d8c5937f6728c8bc97103c58b2ebd"
	I1104 12:13:19.609848   85500 logs.go:123] Gathering logs for storage-provisioner [162e3330ff77f795d15e11b3abf1d3019490bff78148c8d6b470ef1507dcb67d] ...
	I1104 12:13:19.609879   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 162e3330ff77f795d15e11b3abf1d3019490bff78148c8d6b470ef1507dcb67d"
	I1104 12:13:19.648804   85500 logs.go:123] Gathering logs for container status ...
	I1104 12:13:19.648829   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:13:22.210690   85500 system_pods.go:59] 8 kube-system pods found
	I1104 12:13:22.210718   85500 system_pods.go:61] "coredns-7c65d6cfc9-vv4kq" [f2518f86-9653-4e98-9193-9d2a76838117] Running
	I1104 12:13:22.210723   85500 system_pods.go:61] "etcd-no-preload-908370" [cc23ebc2-c49f-403c-8128-98bb08459592] Running
	I1104 12:13:22.210727   85500 system_pods.go:61] "kube-apiserver-no-preload-908370" [37532b3e-f683-4420-a5e4-280744f2bdf9] Running
	I1104 12:13:22.210730   85500 system_pods.go:61] "kube-controller-manager-no-preload-908370" [81d30255-758e-4661-bec2-c6aa6773923a] Running
	I1104 12:13:22.210733   85500 system_pods.go:61] "kube-proxy-w9hbz" [9d494697-ff2b-4600-9c11-b704de9be2a3] Running
	I1104 12:13:22.210737   85500 system_pods.go:61] "kube-scheduler-no-preload-908370" [9b0ff34e-1795-4f7c-b511-822a02c4af7b] Running
	I1104 12:13:22.210752   85500 system_pods.go:61] "metrics-server-6867b74b74-2lxlg" [bf328856-ad19-47b3-a40d-282cd4fdec4b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1104 12:13:22.210758   85500 system_pods.go:61] "storage-provisioner" [d11c9416-6236-4c81-9626-d5e040acea8a] Running
	I1104 12:13:22.210768   85500 system_pods.go:74] duration metric: took 3.791240483s to wait for pod list to return data ...
	I1104 12:13:22.210780   85500 default_sa.go:34] waiting for default service account to be created ...
	I1104 12:13:22.213688   85500 default_sa.go:45] found service account: "default"
	I1104 12:13:22.213709   85500 default_sa.go:55] duration metric: took 2.921691ms for default service account to be created ...
	I1104 12:13:22.213717   85500 system_pods.go:116] waiting for k8s-apps to be running ...
	I1104 12:13:22.219436   85500 system_pods.go:86] 8 kube-system pods found
	I1104 12:13:22.219466   85500 system_pods.go:89] "coredns-7c65d6cfc9-vv4kq" [f2518f86-9653-4e98-9193-9d2a76838117] Running
	I1104 12:13:22.219475   85500 system_pods.go:89] "etcd-no-preload-908370" [cc23ebc2-c49f-403c-8128-98bb08459592] Running
	I1104 12:13:22.219480   85500 system_pods.go:89] "kube-apiserver-no-preload-908370" [37532b3e-f683-4420-a5e4-280744f2bdf9] Running
	I1104 12:13:22.219489   85500 system_pods.go:89] "kube-controller-manager-no-preload-908370" [81d30255-758e-4661-bec2-c6aa6773923a] Running
	I1104 12:13:22.219495   85500 system_pods.go:89] "kube-proxy-w9hbz" [9d494697-ff2b-4600-9c11-b704de9be2a3] Running
	I1104 12:13:22.219501   85500 system_pods.go:89] "kube-scheduler-no-preload-908370" [9b0ff34e-1795-4f7c-b511-822a02c4af7b] Running
	I1104 12:13:22.219512   85500 system_pods.go:89] "metrics-server-6867b74b74-2lxlg" [bf328856-ad19-47b3-a40d-282cd4fdec4b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1104 12:13:22.219523   85500 system_pods.go:89] "storage-provisioner" [d11c9416-6236-4c81-9626-d5e040acea8a] Running
	I1104 12:13:22.219537   85500 system_pods.go:126] duration metric: took 5.813462ms to wait for k8s-apps to be running ...
	I1104 12:13:22.219551   85500 system_svc.go:44] waiting for kubelet service to be running ....
	I1104 12:13:22.219612   85500 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1104 12:13:22.232887   85500 system_svc.go:56] duration metric: took 13.328078ms WaitForService to wait for kubelet
	I1104 12:13:22.232918   85500 kubeadm.go:582] duration metric: took 4m24.785911082s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1104 12:13:22.232941   85500 node_conditions.go:102] verifying NodePressure condition ...
	I1104 12:13:22.235641   85500 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1104 12:13:22.235662   85500 node_conditions.go:123] node cpu capacity is 2
	I1104 12:13:22.235675   85500 node_conditions.go:105] duration metric: took 2.728232ms to run NodePressure ...
	I1104 12:13:22.235687   85500 start.go:241] waiting for startup goroutines ...
	I1104 12:13:22.235695   85500 start.go:246] waiting for cluster config update ...
	I1104 12:13:22.235707   85500 start.go:255] writing updated cluster config ...
	I1104 12:13:22.235962   85500 ssh_runner.go:195] Run: rm -f paused
	I1104 12:13:22.284583   85500 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1104 12:13:22.287448   85500 out.go:177] * Done! kubectl is now configured to use "no-preload-908370" cluster and "default" namespace by default
	I1104 12:14:25.090113   86402 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1104 12:14:25.090254   86402 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1104 12:14:25.091997   86402 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1104 12:14:25.092065   86402 kubeadm.go:310] [preflight] Running pre-flight checks
	I1104 12:14:25.092204   86402 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1104 12:14:25.092341   86402 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1104 12:14:25.092480   86402 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1104 12:14:25.092569   86402 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1104 12:14:25.094485   86402 out.go:235]   - Generating certificates and keys ...
	I1104 12:14:25.094582   86402 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1104 12:14:25.094664   86402 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1104 12:14:25.094799   86402 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1104 12:14:25.094891   86402 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1104 12:14:25.095003   86402 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1104 12:14:25.095086   86402 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1104 12:14:25.095186   86402 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1104 12:14:25.095240   86402 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1104 12:14:25.095319   86402 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1104 12:14:25.095403   86402 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1104 12:14:25.095481   86402 kubeadm.go:310] [certs] Using the existing "sa" key
	I1104 12:14:25.095554   86402 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1104 12:14:25.095614   86402 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1104 12:14:25.095676   86402 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1104 12:14:25.095752   86402 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1104 12:14:25.095828   86402 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1104 12:14:25.095970   86402 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1104 12:14:25.096102   86402 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1104 12:14:25.096169   86402 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1104 12:14:25.096262   86402 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1104 12:14:25.097799   86402 out.go:235]   - Booting up control plane ...
	I1104 12:14:25.097920   86402 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1104 12:14:25.098018   86402 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1104 12:14:25.098126   86402 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1104 12:14:25.098211   86402 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1104 12:14:25.098333   86402 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1104 12:14:25.098393   86402 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1104 12:14:25.098487   86402 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1104 12:14:25.098633   86402 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1104 12:14:25.098690   86402 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1104 12:14:25.098940   86402 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1104 12:14:25.099074   86402 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1104 12:14:25.099307   86402 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1104 12:14:25.099370   86402 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1104 12:14:25.099528   86402 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1104 12:14:25.099582   86402 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1104 12:14:25.099740   86402 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1104 12:14:25.099758   86402 kubeadm.go:310] 
	I1104 12:14:25.099815   86402 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1104 12:14:25.099880   86402 kubeadm.go:310] 		timed out waiting for the condition
	I1104 12:14:25.099889   86402 kubeadm.go:310] 
	I1104 12:14:25.099923   86402 kubeadm.go:310] 	This error is likely caused by:
	I1104 12:14:25.099952   86402 kubeadm.go:310] 		- The kubelet is not running
	I1104 12:14:25.100036   86402 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1104 12:14:25.100044   86402 kubeadm.go:310] 
	I1104 12:14:25.100197   86402 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1104 12:14:25.100237   86402 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1104 12:14:25.100267   86402 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1104 12:14:25.100273   86402 kubeadm.go:310] 
	I1104 12:14:25.100367   86402 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1104 12:14:25.100454   86402 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1104 12:14:25.100468   86402 kubeadm.go:310] 
	I1104 12:14:25.100600   86402 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1104 12:14:25.100718   86402 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1104 12:14:25.100821   86402 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1104 12:14:25.100903   86402 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1104 12:14:25.100970   86402 kubeadm.go:310] 
	W1104 12:14:25.101033   86402 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1104 12:14:25.101071   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1104 12:14:25.536184   86402 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1104 12:14:25.550453   86402 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1104 12:14:25.560308   86402 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1104 12:14:25.560327   86402 kubeadm.go:157] found existing configuration files:
	
	I1104 12:14:25.560368   86402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1104 12:14:25.569106   86402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1104 12:14:25.569189   86402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1104 12:14:25.578395   86402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1104 12:14:25.587402   86402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1104 12:14:25.587473   86402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1104 12:14:25.596827   86402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1104 12:14:25.605359   86402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1104 12:14:25.605420   86402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1104 12:14:25.614266   86402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1104 12:14:25.622522   86402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1104 12:14:25.622582   86402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1104 12:14:25.631876   86402 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1104 12:14:25.701080   86402 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1104 12:14:25.701168   86402 kubeadm.go:310] [preflight] Running pre-flight checks
	I1104 12:14:25.833997   86402 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1104 12:14:25.834138   86402 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1104 12:14:25.834258   86402 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1104 12:14:26.009165   86402 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1104 12:14:26.011976   86402 out.go:235]   - Generating certificates and keys ...
	I1104 12:14:26.012090   86402 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1104 12:14:26.012183   86402 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1104 12:14:26.012333   86402 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1104 12:14:26.012422   86402 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1104 12:14:26.012532   86402 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1104 12:14:26.012619   86402 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1104 12:14:26.012689   86402 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1104 12:14:26.012748   86402 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1104 12:14:26.012851   86402 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1104 12:14:26.012978   86402 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1104 12:14:26.013025   86402 kubeadm.go:310] [certs] Using the existing "sa" key
	I1104 12:14:26.013102   86402 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1104 12:14:26.399153   86402 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1104 12:14:26.470449   86402 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1104 12:14:27.078991   86402 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1104 12:14:27.181622   86402 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1104 12:14:27.205149   86402 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1104 12:14:27.205300   86402 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1104 12:14:27.205383   86402 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1104 12:14:27.355614   86402 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1104 12:14:27.357678   86402 out.go:235]   - Booting up control plane ...
	I1104 12:14:27.357840   86402 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1104 12:14:27.363942   86402 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1104 12:14:27.365004   86402 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1104 12:14:27.367237   86402 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1104 12:14:27.368087   86402 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1104 12:15:07.369845   86402 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1104 12:15:07.370222   86402 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1104 12:15:07.370464   86402 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1104 12:15:12.370802   86402 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1104 12:15:12.371041   86402 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1104 12:15:22.371417   86402 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1104 12:15:22.371584   86402 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1104 12:15:42.371725   86402 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1104 12:15:42.371932   86402 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1104 12:16:22.370871   86402 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1104 12:16:22.371150   86402 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1104 12:16:22.371181   86402 kubeadm.go:310] 
	I1104 12:16:22.371222   86402 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1104 12:16:22.371297   86402 kubeadm.go:310] 		timed out waiting for the condition
	I1104 12:16:22.371309   86402 kubeadm.go:310] 
	I1104 12:16:22.371371   86402 kubeadm.go:310] 	This error is likely caused by:
	I1104 12:16:22.371435   86402 kubeadm.go:310] 		- The kubelet is not running
	I1104 12:16:22.371576   86402 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1104 12:16:22.371588   86402 kubeadm.go:310] 
	I1104 12:16:22.371726   86402 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1104 12:16:22.371784   86402 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1104 12:16:22.371863   86402 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1104 12:16:22.371879   86402 kubeadm.go:310] 
	I1104 12:16:22.372004   86402 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1104 12:16:22.372155   86402 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1104 12:16:22.372172   86402 kubeadm.go:310] 
	I1104 12:16:22.372338   86402 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1104 12:16:22.372435   86402 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1104 12:16:22.372566   86402 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1104 12:16:22.372680   86402 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1104 12:16:22.372718   86402 kubeadm.go:310] 
	I1104 12:16:22.372948   86402 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1104 12:16:22.373110   86402 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1104 12:16:22.373289   86402 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1104 12:16:22.373328   86402 kubeadm.go:394] duration metric: took 8m2.53443537s to StartCluster
	I1104 12:16:22.373379   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:16:22.373431   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:16:22.410373   86402 cri.go:89] found id: ""
	I1104 12:16:22.410409   86402 logs.go:282] 0 containers: []
	W1104 12:16:22.410418   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:16:22.410424   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:16:22.410485   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:16:22.447939   86402 cri.go:89] found id: ""
	I1104 12:16:22.447963   86402 logs.go:282] 0 containers: []
	W1104 12:16:22.447971   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:16:22.447977   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:16:22.448021   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:16:22.479234   86402 cri.go:89] found id: ""
	I1104 12:16:22.479263   86402 logs.go:282] 0 containers: []
	W1104 12:16:22.479274   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:16:22.479280   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:16:22.479341   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:16:22.512783   86402 cri.go:89] found id: ""
	I1104 12:16:22.512814   86402 logs.go:282] 0 containers: []
	W1104 12:16:22.512825   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:16:22.512832   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:16:22.512895   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:16:22.549483   86402 cri.go:89] found id: ""
	I1104 12:16:22.549510   86402 logs.go:282] 0 containers: []
	W1104 12:16:22.549520   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:16:22.549527   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:16:22.549593   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:16:22.582339   86402 cri.go:89] found id: ""
	I1104 12:16:22.582382   86402 logs.go:282] 0 containers: []
	W1104 12:16:22.582393   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:16:22.582402   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:16:22.582471   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:16:22.613545   86402 cri.go:89] found id: ""
	I1104 12:16:22.613574   86402 logs.go:282] 0 containers: []
	W1104 12:16:22.613585   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:16:22.613593   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:16:22.613656   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:16:22.644488   86402 cri.go:89] found id: ""
	I1104 12:16:22.644517   86402 logs.go:282] 0 containers: []
	W1104 12:16:22.644528   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:16:22.644539   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:16:22.644551   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:16:22.681138   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:16:22.681169   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:16:22.734551   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:16:22.734586   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:16:22.750140   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:16:22.750178   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:16:22.837631   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:16:22.837657   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:16:22.837673   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W1104 12:16:22.961154   86402 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1104 12:16:22.961221   86402 out.go:270] * 
	W1104 12:16:22.961295   86402 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1104 12:16:22.961310   86402 out.go:270] * 
	W1104 12:16:22.962053   86402 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1104 12:16:22.965021   86402 out.go:201] 
	W1104 12:16:22.966262   86402 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1104 12:16:22.966326   86402 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1104 12:16:22.966377   86402 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1104 12:16:22.967953   86402 out.go:201] 
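	Editor's note: the failure above is the kubelet never answering its healthz probe on 127.0.0.1:10248, so kubeadm's wait-control-plane phase times out and minikube exits with K8S_KUBELET_NOT_RUNNING. A minimal sketch of the triage sequence the output itself recommends is below; `<profile>` is a placeholder for the failing minikube profile from this run (it is not a value taken from the log), and every command shown is quoted from or consistent with the suggestions printed above.
	
		# Open a shell on the affected node (substitute the real profile name for <profile>)
		minikube ssh -p <profile>
	
		# Inside the node: check kubelet status and recent logs, as kubeadm suggests
		sudo systemctl status kubelet
		sudo journalctl -xeu kubelet | tail -n 100
	
		# Probe the same healthz endpoint kubeadm's kubelet-check polls
		curl -sSL http://localhost:10248/healthz
	
		# List control-plane containers via CRI-O, per the kubeadm hint
		sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	
		# If a cgroup-driver mismatch is suspected, retry the start as minikube's suggestion says
		minikube start -p <profile> --extra-config=kubelet.cgroup-driver=systemd
	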
	
	
	==> CRI-O <==
	Nov 04 12:21:33 default-k8s-diff-port-036892 crio[715]: time="2024-11-04 12:21:33.639150616Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730722893639130935,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cce1dd4e-2fbb-4f07-94db-0dc9f998266f name=/runtime.v1.ImageService/ImageFsInfo
	Nov 04 12:21:33 default-k8s-diff-port-036892 crio[715]: time="2024-11-04 12:21:33.639693209Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=048d8fb5-30e3-47cf-b515-64c69da758bd name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 12:21:33 default-k8s-diff-port-036892 crio[715]: time="2024-11-04 12:21:33.639748076Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=048d8fb5-30e3-47cf-b515-64c69da758bd name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 12:21:33 default-k8s-diff-port-036892 crio[715]: time="2024-11-04 12:21:33.639951429Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9e9ecf7280a07a47b274097b461b9f3467e388751471e9da1adfae3166380516,PodSandboxId:63dde0eedfb8d2dc8f1fef3fbb14464b019df60274a7b6baadc8d57e687012cb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730722116245829761,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18745f89-fc15-4a4c-b68b-7e80cd4f393b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2daa9e013a548a6a85a13d6376c8f84998afdea5203603471083f9888dd28723,PodSandboxId:715148c45c3ccdb0ca8f9eb3afec309ea7e06c18aa5e22c8cc1026dac37e6e77,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1730722095455175945,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ddc847de-e4e6-4c3d-b91d-835709a0fc1e,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51442200af1bb501b83cd7280eeee93590e5cb241e91cf5af1608a6eccfdf5a1,PodSandboxId:bdd6613591b3ac6bdb8f3bc3145cd2f9f793f9a128c14c90da944eea288da25b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730722093219795018,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-zw2tv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71ce75a4-f051-4014-9ed0-7b275ea940a9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e60ae78d5610ae63ce7016b58d60da05791a055324eea67efd0feb374bdd4b4,PodSandboxId:a0029a9d0f6992e93adae3e3901e285958292ff56d2ea538267b1812f994cdb7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730722085454886824,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j2srm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9450cebd-a
efb-4f1a-bb99-7d1dab054dd7,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8d8096ede6a877ac87491c90844f6cccb8fad4309f4a01e86bf1f7e3a4c9823,PodSandboxId:63dde0eedfb8d2dc8f1fef3fbb14464b019df60274a7b6baadc8d57e687012cb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1730722085420997112,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18745f89-fc15-4a4c-b68b
-7e80cd4f393b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c33ea99d25624abd85dcac43e17e1dd6dcea3a7b6333e91e8dc99ad02c037e07,PodSandboxId:da5c364a1d9a4546aad1aa3a3846f63c091adaa50442c5400adac188a78360ed,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730722081010046880,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-036892,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d35e6b1145643d0efcfc
d4f272e0a6f,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e1787441f88b7be6fbfb08abfa621dbc26e984bbd40544c92bf02bae7c7709a,PodSandboxId:a09666f80e3ece07b2519ef7517aa8ae9e7635c0a74127c95d2f2e28e7f92431,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730722080992915019,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-036892,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8278064e03f128ec447844
a988b7d9b,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1346cefb5059440d0c41bba9a5526748e6c783b8f935f34dcf419f728abcd35e,PodSandboxId:b461843050d213d7949ade519775a62037be2b31ff8de72478643015d7f9c4ee,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730722080983525413,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-036892,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: 5e254e23fc4144569eb1973ac1dd1e60,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bc906f9e4e940157a2bb10397ebc3ba90f93123792d5de597633f6c5b3c64d7,PodSandboxId:35aa8150d803368ef95b4a27e05df9c96245cdbcc529ead202eeade3475dda06,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730722081002731797,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-036892,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c538bf12a0f213511743ecaca4b746
f1,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=048d8fb5-30e3-47cf-b515-64c69da758bd name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 12:21:33 default-k8s-diff-port-036892 crio[715]: time="2024-11-04 12:21:33.676589961Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=dbdb24ab-f3bc-4daf-b767-6f26ea78aa64 name=/runtime.v1.RuntimeService/Version
	Nov 04 12:21:33 default-k8s-diff-port-036892 crio[715]: time="2024-11-04 12:21:33.676675178Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=dbdb24ab-f3bc-4daf-b767-6f26ea78aa64 name=/runtime.v1.RuntimeService/Version
	Nov 04 12:21:33 default-k8s-diff-port-036892 crio[715]: time="2024-11-04 12:21:33.677532303Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a9e35101-71cf-4f03-8d57-c44830513596 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 04 12:21:33 default-k8s-diff-port-036892 crio[715]: time="2024-11-04 12:21:33.678026820Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730722893677897278,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a9e35101-71cf-4f03-8d57-c44830513596 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 04 12:21:33 default-k8s-diff-port-036892 crio[715]: time="2024-11-04 12:21:33.678512732Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=61268fb2-63f1-4853-8217-4c39efe9c601 name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 12:21:33 default-k8s-diff-port-036892 crio[715]: time="2024-11-04 12:21:33.678591274Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=61268fb2-63f1-4853-8217-4c39efe9c601 name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 12:21:33 default-k8s-diff-port-036892 crio[715]: time="2024-11-04 12:21:33.678768219Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9e9ecf7280a07a47b274097b461b9f3467e388751471e9da1adfae3166380516,PodSandboxId:63dde0eedfb8d2dc8f1fef3fbb14464b019df60274a7b6baadc8d57e687012cb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730722116245829761,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18745f89-fc15-4a4c-b68b-7e80cd4f393b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2daa9e013a548a6a85a13d6376c8f84998afdea5203603471083f9888dd28723,PodSandboxId:715148c45c3ccdb0ca8f9eb3afec309ea7e06c18aa5e22c8cc1026dac37e6e77,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1730722095455175945,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ddc847de-e4e6-4c3d-b91d-835709a0fc1e,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51442200af1bb501b83cd7280eeee93590e5cb241e91cf5af1608a6eccfdf5a1,PodSandboxId:bdd6613591b3ac6bdb8f3bc3145cd2f9f793f9a128c14c90da944eea288da25b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730722093219795018,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-zw2tv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71ce75a4-f051-4014-9ed0-7b275ea940a9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e60ae78d5610ae63ce7016b58d60da05791a055324eea67efd0feb374bdd4b4,PodSandboxId:a0029a9d0f6992e93adae3e3901e285958292ff56d2ea538267b1812f994cdb7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730722085454886824,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j2srm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9450cebd-a
efb-4f1a-bb99-7d1dab054dd7,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8d8096ede6a877ac87491c90844f6cccb8fad4309f4a01e86bf1f7e3a4c9823,PodSandboxId:63dde0eedfb8d2dc8f1fef3fbb14464b019df60274a7b6baadc8d57e687012cb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1730722085420997112,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18745f89-fc15-4a4c-b68b
-7e80cd4f393b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c33ea99d25624abd85dcac43e17e1dd6dcea3a7b6333e91e8dc99ad02c037e07,PodSandboxId:da5c364a1d9a4546aad1aa3a3846f63c091adaa50442c5400adac188a78360ed,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730722081010046880,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-036892,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d35e6b1145643d0efcfc
d4f272e0a6f,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e1787441f88b7be6fbfb08abfa621dbc26e984bbd40544c92bf02bae7c7709a,PodSandboxId:a09666f80e3ece07b2519ef7517aa8ae9e7635c0a74127c95d2f2e28e7f92431,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730722080992915019,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-036892,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8278064e03f128ec447844
a988b7d9b,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1346cefb5059440d0c41bba9a5526748e6c783b8f935f34dcf419f728abcd35e,PodSandboxId:b461843050d213d7949ade519775a62037be2b31ff8de72478643015d7f9c4ee,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730722080983525413,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-036892,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: 5e254e23fc4144569eb1973ac1dd1e60,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bc906f9e4e940157a2bb10397ebc3ba90f93123792d5de597633f6c5b3c64d7,PodSandboxId:35aa8150d803368ef95b4a27e05df9c96245cdbcc529ead202eeade3475dda06,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730722081002731797,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-036892,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c538bf12a0f213511743ecaca4b746
f1,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=61268fb2-63f1-4853-8217-4c39efe9c601 name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 12:21:33 default-k8s-diff-port-036892 crio[715]: time="2024-11-04 12:21:33.712093882Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f9741c30-cca6-4ed1-a677-04cf52bd71c6 name=/runtime.v1.RuntimeService/Version
	Nov 04 12:21:33 default-k8s-diff-port-036892 crio[715]: time="2024-11-04 12:21:33.712163971Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f9741c30-cca6-4ed1-a677-04cf52bd71c6 name=/runtime.v1.RuntimeService/Version
	Nov 04 12:21:33 default-k8s-diff-port-036892 crio[715]: time="2024-11-04 12:21:33.713153151Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=af28fb13-fa80-4d60-9647-1495ce962d6f name=/runtime.v1.ImageService/ImageFsInfo
	Nov 04 12:21:33 default-k8s-diff-port-036892 crio[715]: time="2024-11-04 12:21:33.713568602Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730722893713546492,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=af28fb13-fa80-4d60-9647-1495ce962d6f name=/runtime.v1.ImageService/ImageFsInfo
	Nov 04 12:21:33 default-k8s-diff-port-036892 crio[715]: time="2024-11-04 12:21:33.714120443Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2deb23ad-a230-48ea-b5b4-51f821315240 name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 12:21:33 default-k8s-diff-port-036892 crio[715]: time="2024-11-04 12:21:33.714172621Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2deb23ad-a230-48ea-b5b4-51f821315240 name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 12:21:33 default-k8s-diff-port-036892 crio[715]: time="2024-11-04 12:21:33.714349074Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9e9ecf7280a07a47b274097b461b9f3467e388751471e9da1adfae3166380516,PodSandboxId:63dde0eedfb8d2dc8f1fef3fbb14464b019df60274a7b6baadc8d57e687012cb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730722116245829761,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18745f89-fc15-4a4c-b68b-7e80cd4f393b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2daa9e013a548a6a85a13d6376c8f84998afdea5203603471083f9888dd28723,PodSandboxId:715148c45c3ccdb0ca8f9eb3afec309ea7e06c18aa5e22c8cc1026dac37e6e77,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1730722095455175945,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ddc847de-e4e6-4c3d-b91d-835709a0fc1e,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51442200af1bb501b83cd7280eeee93590e5cb241e91cf5af1608a6eccfdf5a1,PodSandboxId:bdd6613591b3ac6bdb8f3bc3145cd2f9f793f9a128c14c90da944eea288da25b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730722093219795018,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-zw2tv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71ce75a4-f051-4014-9ed0-7b275ea940a9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e60ae78d5610ae63ce7016b58d60da05791a055324eea67efd0feb374bdd4b4,PodSandboxId:a0029a9d0f6992e93adae3e3901e285958292ff56d2ea538267b1812f994cdb7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730722085454886824,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j2srm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9450cebd-a
efb-4f1a-bb99-7d1dab054dd7,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8d8096ede6a877ac87491c90844f6cccb8fad4309f4a01e86bf1f7e3a4c9823,PodSandboxId:63dde0eedfb8d2dc8f1fef3fbb14464b019df60274a7b6baadc8d57e687012cb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1730722085420997112,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18745f89-fc15-4a4c-b68b
-7e80cd4f393b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c33ea99d25624abd85dcac43e17e1dd6dcea3a7b6333e91e8dc99ad02c037e07,PodSandboxId:da5c364a1d9a4546aad1aa3a3846f63c091adaa50442c5400adac188a78360ed,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730722081010046880,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-036892,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d35e6b1145643d0efcfc
d4f272e0a6f,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e1787441f88b7be6fbfb08abfa621dbc26e984bbd40544c92bf02bae7c7709a,PodSandboxId:a09666f80e3ece07b2519ef7517aa8ae9e7635c0a74127c95d2f2e28e7f92431,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730722080992915019,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-036892,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8278064e03f128ec447844
a988b7d9b,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1346cefb5059440d0c41bba9a5526748e6c783b8f935f34dcf419f728abcd35e,PodSandboxId:b461843050d213d7949ade519775a62037be2b31ff8de72478643015d7f9c4ee,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730722080983525413,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-036892,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: 5e254e23fc4144569eb1973ac1dd1e60,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bc906f9e4e940157a2bb10397ebc3ba90f93123792d5de597633f6c5b3c64d7,PodSandboxId:35aa8150d803368ef95b4a27e05df9c96245cdbcc529ead202eeade3475dda06,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730722081002731797,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-036892,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c538bf12a0f213511743ecaca4b746
f1,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2deb23ad-a230-48ea-b5b4-51f821315240 name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 12:21:33 default-k8s-diff-port-036892 crio[715]: time="2024-11-04 12:21:33.743161855Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8780feb5-9919-4224-a0cc-7e964c9668d1 name=/runtime.v1.RuntimeService/Version
	Nov 04 12:21:33 default-k8s-diff-port-036892 crio[715]: time="2024-11-04 12:21:33.743229699Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8780feb5-9919-4224-a0cc-7e964c9668d1 name=/runtime.v1.RuntimeService/Version
	Nov 04 12:21:33 default-k8s-diff-port-036892 crio[715]: time="2024-11-04 12:21:33.744183634Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=755a3d35-253a-4b17-8eb1-fd92d1b7491a name=/runtime.v1.ImageService/ImageFsInfo
	Nov 04 12:21:33 default-k8s-diff-port-036892 crio[715]: time="2024-11-04 12:21:33.744616862Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730722893744594400,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=755a3d35-253a-4b17-8eb1-fd92d1b7491a name=/runtime.v1.ImageService/ImageFsInfo
	Nov 04 12:21:33 default-k8s-diff-port-036892 crio[715]: time="2024-11-04 12:21:33.745076209Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a4175e56-f7d5-4ea4-8e50-92c1b9b00a85 name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 12:21:33 default-k8s-diff-port-036892 crio[715]: time="2024-11-04 12:21:33.745139578Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a4175e56-f7d5-4ea4-8e50-92c1b9b00a85 name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 12:21:33 default-k8s-diff-port-036892 crio[715]: time="2024-11-04 12:21:33.745333873Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9e9ecf7280a07a47b274097b461b9f3467e388751471e9da1adfae3166380516,PodSandboxId:63dde0eedfb8d2dc8f1fef3fbb14464b019df60274a7b6baadc8d57e687012cb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730722116245829761,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18745f89-fc15-4a4c-b68b-7e80cd4f393b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2daa9e013a548a6a85a13d6376c8f84998afdea5203603471083f9888dd28723,PodSandboxId:715148c45c3ccdb0ca8f9eb3afec309ea7e06c18aa5e22c8cc1026dac37e6e77,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1730722095455175945,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ddc847de-e4e6-4c3d-b91d-835709a0fc1e,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51442200af1bb501b83cd7280eeee93590e5cb241e91cf5af1608a6eccfdf5a1,PodSandboxId:bdd6613591b3ac6bdb8f3bc3145cd2f9f793f9a128c14c90da944eea288da25b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730722093219795018,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-zw2tv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71ce75a4-f051-4014-9ed0-7b275ea940a9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e60ae78d5610ae63ce7016b58d60da05791a055324eea67efd0feb374bdd4b4,PodSandboxId:a0029a9d0f6992e93adae3e3901e285958292ff56d2ea538267b1812f994cdb7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730722085454886824,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j2srm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9450cebd-a
efb-4f1a-bb99-7d1dab054dd7,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8d8096ede6a877ac87491c90844f6cccb8fad4309f4a01e86bf1f7e3a4c9823,PodSandboxId:63dde0eedfb8d2dc8f1fef3fbb14464b019df60274a7b6baadc8d57e687012cb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1730722085420997112,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18745f89-fc15-4a4c-b68b
-7e80cd4f393b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c33ea99d25624abd85dcac43e17e1dd6dcea3a7b6333e91e8dc99ad02c037e07,PodSandboxId:da5c364a1d9a4546aad1aa3a3846f63c091adaa50442c5400adac188a78360ed,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730722081010046880,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-036892,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d35e6b1145643d0efcfc
d4f272e0a6f,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e1787441f88b7be6fbfb08abfa621dbc26e984bbd40544c92bf02bae7c7709a,PodSandboxId:a09666f80e3ece07b2519ef7517aa8ae9e7635c0a74127c95d2f2e28e7f92431,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730722080992915019,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-036892,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8278064e03f128ec447844
a988b7d9b,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1346cefb5059440d0c41bba9a5526748e6c783b8f935f34dcf419f728abcd35e,PodSandboxId:b461843050d213d7949ade519775a62037be2b31ff8de72478643015d7f9c4ee,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730722080983525413,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-036892,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: 5e254e23fc4144569eb1973ac1dd1e60,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bc906f9e4e940157a2bb10397ebc3ba90f93123792d5de597633f6c5b3c64d7,PodSandboxId:35aa8150d803368ef95b4a27e05df9c96245cdbcc529ead202eeade3475dda06,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730722081002731797,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-036892,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c538bf12a0f213511743ecaca4b746
f1,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a4175e56-f7d5-4ea4-8e50-92c1b9b00a85 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	9e9ecf7280a07       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 minutes ago      Running             storage-provisioner       2                   63dde0eedfb8d       storage-provisioner
	2daa9e013a548       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   715148c45c3cc       busybox
	51442200af1bb       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      13 minutes ago      Running             coredns                   1                   bdd6613591b3a       coredns-7c65d6cfc9-zw2tv
	9e60ae78d5610       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                      13 minutes ago      Running             kube-proxy                1                   a0029a9d0f699       kube-proxy-j2srm
	f8d8096ede6a8       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       1                   63dde0eedfb8d       storage-provisioner
	c33ea99d25624       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                      13 minutes ago      Running             kube-scheduler            1                   da5c364a1d9a4       kube-scheduler-default-k8s-diff-port-036892
	1bc906f9e4e94       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      13 minutes ago      Running             etcd                      1                   35aa8150d8033       etcd-default-k8s-diff-port-036892
	2e1787441f88b       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                      13 minutes ago      Running             kube-apiserver            1                   a09666f80e3ec       kube-apiserver-default-k8s-diff-port-036892
	1346cefb50594       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                      13 minutes ago      Running             kube-controller-manager   1                   b461843050d21       kube-controller-manager-default-k8s-diff-port-036892
	
	
	==> coredns [51442200af1bb501b83cd7280eeee93590e5cb241e91cf5af1608a6eccfdf5a1] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = efc9280483fcc4efbcb768f0bb0eae8d655d9a6698f75ec2e812845ddebe357ede76757b8e27e6a6ace121d67e74d46126ca7de2feaa2fdd891a0e8e676dd4cb
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:52348 - 51558 "HINFO IN 9177553418246579717.8006546208789792964. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.071307243s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-036892
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-036892
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c03dd974d73f8853a1a57928c124797a5ae24dc4
	                    minikube.k8s.io/name=default-k8s-diff-port-036892
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_11_04T12_01_25_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 04 Nov 2024 12:01:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-036892
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 04 Nov 2024 12:21:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 04 Nov 2024 12:18:48 +0000   Mon, 04 Nov 2024 12:01:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 04 Nov 2024 12:18:48 +0000   Mon, 04 Nov 2024 12:01:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 04 Nov 2024 12:18:48 +0000   Mon, 04 Nov 2024 12:01:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 04 Nov 2024 12:18:48 +0000   Mon, 04 Nov 2024 12:08:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.130
	  Hostname:    default-k8s-diff-port-036892
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 6d4dd104d5b64dbfb562ff8f868b347e
	  System UUID:                6d4dd104-d5b6-4dbf-b562-ff8f868b347e
	  Boot ID:                    e89b510a-06e9-4ef5-83b8-ce13092721c7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 coredns-7c65d6cfc9-zw2tv                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     20m
	  kube-system                 etcd-default-k8s-diff-port-036892                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         20m
	  kube-system                 kube-apiserver-default-k8s-diff-port-036892             250m (12%)    0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-036892    200m (10%)    0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-proxy-j2srm                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-scheduler-default-k8s-diff-port-036892             100m (5%)     0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 metrics-server-6867b74b74-2wl94                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         19m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 20m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeHasSufficientPID     20m (x7 over 20m)  kubelet          Node default-k8s-diff-port-036892 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    20m (x8 over 20m)  kubelet          Node default-k8s-diff-port-036892 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  20m (x8 over 20m)  kubelet          Node default-k8s-diff-port-036892 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  20m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 20m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  20m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  20m                kubelet          Node default-k8s-diff-port-036892 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    20m                kubelet          Node default-k8s-diff-port-036892 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     20m                kubelet          Node default-k8s-diff-port-036892 status is now: NodeHasSufficientPID
	  Normal  NodeReady                20m                kubelet          Node default-k8s-diff-port-036892 status is now: NodeReady
	  Normal  RegisteredNode           20m                node-controller  Node default-k8s-diff-port-036892 event: Registered Node default-k8s-diff-port-036892 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node default-k8s-diff-port-036892 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node default-k8s-diff-port-036892 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node default-k8s-diff-port-036892 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node default-k8s-diff-port-036892 event: Registered Node default-k8s-diff-port-036892 in Controller
	
	
	==> dmesg <==
	[Nov 4 12:07] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.046872] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.038833] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.888579] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.778977] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.417030] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.067376] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +0.058345] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.064977] systemd-fstab-generator[650]: Ignoring "noauto" option for root device
	[  +0.172794] systemd-fstab-generator[664]: Ignoring "noauto" option for root device
	[  +0.155941] systemd-fstab-generator[677]: Ignoring "noauto" option for root device
	[  +0.301994] systemd-fstab-generator[706]: Ignoring "noauto" option for root device
	[  +4.279220] systemd-fstab-generator[797]: Ignoring "noauto" option for root device
	[  +0.062203] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.739363] systemd-fstab-generator[919]: Ignoring "noauto" option for root device
	[Nov 4 12:08] kauditd_printk_skb: 97 callbacks suppressed
	[  +1.942612] systemd-fstab-generator[1531]: Ignoring "noauto" option for root device
	[  +3.766955] kauditd_printk_skb: 64 callbacks suppressed
	[  +5.875778] kauditd_printk_skb: 46 callbacks suppressed
	
	
	==> etcd [1bc906f9e4e940157a2bb10397ebc3ba90f93123792d5de597633f6c5b3c64d7] <==
	{"level":"info","ts":"2024-11-04T12:08:18.949589Z","caller":"traceutil/trace.go:171","msg":"trace[730869104] transaction","detail":"{read_only:false; response_revision:613; number_of_response:1; }","duration":"258.82117ms","start":"2024-11-04T12:08:18.690755Z","end":"2024-11-04T12:08:18.949576Z","steps":["trace[730869104] 'process raft request'  (duration: 258.280867ms)"],"step_count":1}
	{"level":"warn","ts":"2024-11-04T12:08:19.529973Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"521.92767ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-11-04T12:08:19.530200Z","caller":"traceutil/trace.go:171","msg":"trace[101329791] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:613; }","duration":"522.168049ms","start":"2024-11-04T12:08:19.008015Z","end":"2024-11-04T12:08:19.530183Z","steps":["trace[101329791] 'range keys from in-memory index tree'  (duration: 521.91616ms)"],"step_count":1}
	{"level":"warn","ts":"2024-11-04T12:08:19.662639Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"614.514224ms","expected-duration":"100ms","prefix":"","request":"header:<ID:14102057923079962794 > lease_revoke:<id:43b492f70d09ca39>","response":"size:28"}
	{"level":"info","ts":"2024-11-04T12:08:19.662982Z","caller":"traceutil/trace.go:171","msg":"trace[1517470284] linearizableReadLoop","detail":"{readStateIndex:649; appliedIndex:648; }","duration":"709.35008ms","start":"2024-11-04T12:08:18.953613Z","end":"2024-11-04T12:08:19.662963Z","steps":["trace[1517470284] 'read index received'  (duration: 94.303688ms)","trace[1517470284] 'applied index is now lower than readState.Index'  (duration: 615.043939ms)"],"step_count":2}
	{"level":"warn","ts":"2024-11-04T12:08:19.663183Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"654.60317ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-scheduler-default-k8s-diff-port-036892\" ","response":"range_response_count:1 size:4578"}
	{"level":"info","ts":"2024-11-04T12:08:19.663246Z","caller":"traceutil/trace.go:171","msg":"trace[1602288727] range","detail":"{range_begin:/registry/pods/kube-system/kube-scheduler-default-k8s-diff-port-036892; range_end:; response_count:1; response_revision:613; }","duration":"654.668992ms","start":"2024-11-04T12:08:19.008566Z","end":"2024-11-04T12:08:19.663235Z","steps":["trace[1602288727] 'agreement among raft nodes before linearized reading'  (duration: 654.56292ms)"],"step_count":1}
	{"level":"warn","ts":"2024-11-04T12:08:19.663258Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"355.887564ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-11-04T12:08:19.663472Z","caller":"traceutil/trace.go:171","msg":"trace[669355052] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:613; }","duration":"356.093196ms","start":"2024-11-04T12:08:19.307366Z","end":"2024-11-04T12:08:19.663460Z","steps":["trace[669355052] 'agreement among raft nodes before linearized reading'  (duration: 355.875616ms)"],"step_count":1}
	{"level":"warn","ts":"2024-11-04T12:08:19.663508Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-11-04T12:08:19.307334Z","time spent":"356.162329ms","remote":"127.0.0.1:37206","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2024-11-04T12:08:19.663218Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"132.951617ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-11-04T12:08:19.663679Z","caller":"traceutil/trace.go:171","msg":"trace[1159867774] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:613; }","duration":"133.407507ms","start":"2024-11-04T12:08:19.530259Z","end":"2024-11-04T12:08:19.663666Z","steps":["trace[1159867774] 'agreement among raft nodes before linearized reading'  (duration: 132.944752ms)"],"step_count":1}
	{"level":"warn","ts":"2024-11-04T12:08:19.663183Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"709.56672ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-scheduler-default-k8s-diff-port-036892\" ","response":"range_response_count:1 size:4578"}
	{"level":"info","ts":"2024-11-04T12:08:19.663962Z","caller":"traceutil/trace.go:171","msg":"trace[104920269] range","detail":"{range_begin:/registry/pods/kube-system/kube-scheduler-default-k8s-diff-port-036892; range_end:; response_count:1; response_revision:613; }","duration":"710.348868ms","start":"2024-11-04T12:08:18.953596Z","end":"2024-11-04T12:08:19.663945Z","steps":["trace[104920269] 'agreement among raft nodes before linearized reading'  (duration: 709.4757ms)"],"step_count":1}
	{"level":"warn","ts":"2024-11-04T12:08:19.664012Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-11-04T12:08:18.953541Z","time spent":"710.457534ms","remote":"127.0.0.1:37412","response type":"/etcdserverpb.KV/Range","request count":0,"request size":72,"response count":1,"response size":4601,"request content":"key:\"/registry/pods/kube-system/kube-scheduler-default-k8s-diff-port-036892\" "}
	{"level":"warn","ts":"2024-11-04T12:08:19.663356Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-11-04T12:08:19.008528Z","time spent":"654.817963ms","remote":"127.0.0.1:37412","response type":"/etcdserverpb.KV/Range","request count":0,"request size":72,"response count":1,"response size":4601,"request content":"key:\"/registry/pods/kube-system/kube-scheduler-default-k8s-diff-port-036892\" "}
	{"level":"info","ts":"2024-11-04T12:08:20.031141Z","caller":"traceutil/trace.go:171","msg":"trace[1492177279] linearizableReadLoop","detail":"{readStateIndex:650; appliedIndex:649; }","duration":"351.894541ms","start":"2024-11-04T12:08:19.679231Z","end":"2024-11-04T12:08:20.031126Z","steps":["trace[1492177279] 'read index received'  (duration: 351.745552ms)","trace[1492177279] 'applied index is now lower than readState.Index'  (duration: 148.473µs)"],"step_count":2}
	{"level":"info","ts":"2024-11-04T12:08:20.031231Z","caller":"traceutil/trace.go:171","msg":"trace[912048882] transaction","detail":"{read_only:false; response_revision:614; number_of_response:1; }","duration":"355.185168ms","start":"2024-11-04T12:08:19.676038Z","end":"2024-11-04T12:08:20.031223Z","steps":["trace[912048882] 'process raft request'  (duration: 354.986624ms)"],"step_count":1}
	{"level":"warn","ts":"2024-11-04T12:08:20.031598Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-11-04T12:08:19.676023Z","time spent":"355.227757ms","remote":"127.0.0.1:37412","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4371,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-scheduler-default-k8s-diff-port-036892\" mod_revision:613 > success:<request_put:<key:\"/registry/pods/kube-system/kube-scheduler-default-k8s-diff-port-036892\" value_size:4293 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-scheduler-default-k8s-diff-port-036892\" > >"}
	{"level":"warn","ts":"2024-11-04T12:08:20.031921Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"352.681682ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/default-k8s-diff-port-036892\" ","response":"range_response_count:1 size:5537"}
	{"level":"info","ts":"2024-11-04T12:08:20.031970Z","caller":"traceutil/trace.go:171","msg":"trace[214770135] range","detail":"{range_begin:/registry/minions/default-k8s-diff-port-036892; range_end:; response_count:1; response_revision:614; }","duration":"352.734515ms","start":"2024-11-04T12:08:19.679228Z","end":"2024-11-04T12:08:20.031962Z","steps":["trace[214770135] 'agreement among raft nodes before linearized reading'  (duration: 352.624793ms)"],"step_count":1}
	{"level":"warn","ts":"2024-11-04T12:08:20.031995Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-11-04T12:08:19.679196Z","time spent":"352.794185ms","remote":"127.0.0.1:37402","response type":"/etcdserverpb.KV/Range","request count":0,"request size":48,"response count":1,"response size":5560,"request content":"key:\"/registry/minions/default-k8s-diff-port-036892\" "}
	{"level":"info","ts":"2024-11-04T12:18:02.822791Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":857}
	{"level":"info","ts":"2024-11-04T12:18:02.832012Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":857,"took":"8.555385ms","hash":1755921559,"current-db-size-bytes":2801664,"current-db-size":"2.8 MB","current-db-size-in-use-bytes":2801664,"current-db-size-in-use":"2.8 MB"}
	{"level":"info","ts":"2024-11-04T12:18:02.832125Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1755921559,"revision":857,"compact-revision":-1}
	
	
	==> kernel <==
	 12:21:34 up 13 min,  0 users,  load average: 0.17, 0.16, 0.09
	Linux default-k8s-diff-port-036892 5.10.207 #1 SMP Wed Oct 30 13:38:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [2e1787441f88b7be6fbfb08abfa621dbc26e984bbd40544c92bf02bae7c7709a] <==
	E1104 12:18:05.043010       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E1104 12:18:05.043105       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1104 12:18:05.044220       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1104 12:18:05.044259       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1104 12:19:05.044983       1 handler_proxy.go:99] no RequestInfo found in the context
	E1104 12:19:05.045128       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W1104 12:19:05.045182       1 handler_proxy.go:99] no RequestInfo found in the context
	E1104 12:19:05.045244       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1104 12:19:05.046281       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1104 12:19:05.046336       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1104 12:21:05.046901       1 handler_proxy.go:99] no RequestInfo found in the context
	E1104 12:21:05.047158       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1104 12:21:05.046913       1 handler_proxy.go:99] no RequestInfo found in the context
	E1104 12:21:05.047239       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1104 12:21:05.048449       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1104 12:21:05.048533       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [1346cefb5059440d0c41bba9a5526748e6c783b8f935f34dcf419f728abcd35e] <==
	E1104 12:16:07.745022       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1104 12:16:08.194635       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1104 12:16:37.750589       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1104 12:16:38.202372       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1104 12:17:07.756497       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1104 12:17:08.208838       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1104 12:17:37.763681       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1104 12:17:38.218549       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1104 12:18:07.769140       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1104 12:18:08.226244       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1104 12:18:37.774529       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1104 12:18:38.234724       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1104 12:18:48.043151       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-036892"
	I1104 12:18:57.061584       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="85.697µs"
	E1104 12:19:07.779746       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1104 12:19:08.062647       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="97.762µs"
	I1104 12:19:08.243508       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1104 12:19:37.786086       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1104 12:19:38.250664       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1104 12:20:07.792358       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1104 12:20:08.258054       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1104 12:20:37.797596       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1104 12:20:38.265519       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1104 12:21:07.803556       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1104 12:21:08.272883       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [9e60ae78d5610ae63ce7016b58d60da05791a055324eea67efd0feb374bdd4b4] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1104 12:08:05.657346       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1104 12:08:05.674182       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.72.130"]
	E1104 12:08:05.674247       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1104 12:08:05.747188       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1104 12:08:05.747217       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1104 12:08:05.747239       1 server_linux.go:169] "Using iptables Proxier"
	I1104 12:08:05.749572       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1104 12:08:05.749987       1 server.go:483] "Version info" version="v1.31.2"
	I1104 12:08:05.750203       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1104 12:08:05.752623       1 config.go:328] "Starting node config controller"
	I1104 12:08:05.752682       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1104 12:08:05.754355       1 config.go:199] "Starting service config controller"
	I1104 12:08:05.754378       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1104 12:08:05.754431       1 config.go:105] "Starting endpoint slice config controller"
	I1104 12:08:05.754436       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1104 12:08:05.853324       1 shared_informer.go:320] Caches are synced for node config
	I1104 12:08:05.854480       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1104 12:08:05.854524       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [c33ea99d25624abd85dcac43e17e1dd6dcea3a7b6333e91e8dc99ad02c037e07] <==
	I1104 12:08:01.910029       1 serving.go:386] Generated self-signed cert in-memory
	W1104 12:08:04.014787       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1104 12:08:04.014824       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1104 12:08:04.014834       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1104 12:08:04.014840       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1104 12:08:04.075552       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.2"
	I1104 12:08:04.075620       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1104 12:08:04.077999       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1104 12:08:04.078120       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1104 12:08:04.078156       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1104 12:08:04.078170       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1104 12:08:04.179044       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 04 12:20:23 default-k8s-diff-port-036892 kubelet[926]: E1104 12:20:23.048889     926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-2wl94" podUID="7f7cc9c1-420c-480e-b6b7-1a2027bf2f9d"
	Nov 04 12:20:30 default-k8s-diff-port-036892 kubelet[926]: E1104 12:20:30.239268     926 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730722830238886672,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 04 12:20:30 default-k8s-diff-port-036892 kubelet[926]: E1104 12:20:30.239310     926 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730722830238886672,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 04 12:20:35 default-k8s-diff-port-036892 kubelet[926]: E1104 12:20:35.048656     926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-2wl94" podUID="7f7cc9c1-420c-480e-b6b7-1a2027bf2f9d"
	Nov 04 12:20:40 default-k8s-diff-port-036892 kubelet[926]: E1104 12:20:40.240547     926 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730722840240266922,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 04 12:20:40 default-k8s-diff-port-036892 kubelet[926]: E1104 12:20:40.240600     926 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730722840240266922,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 04 12:20:49 default-k8s-diff-port-036892 kubelet[926]: E1104 12:20:49.049100     926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-2wl94" podUID="7f7cc9c1-420c-480e-b6b7-1a2027bf2f9d"
	Nov 04 12:20:50 default-k8s-diff-port-036892 kubelet[926]: E1104 12:20:50.241984     926 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730722850241666045,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 04 12:20:50 default-k8s-diff-port-036892 kubelet[926]: E1104 12:20:50.242040     926 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730722850241666045,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 04 12:21:00 default-k8s-diff-port-036892 kubelet[926]: E1104 12:21:00.076927     926 iptables.go:577] "Could not set up iptables canary" err=<
	Nov 04 12:21:00 default-k8s-diff-port-036892 kubelet[926]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Nov 04 12:21:00 default-k8s-diff-port-036892 kubelet[926]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 04 12:21:00 default-k8s-diff-port-036892 kubelet[926]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 04 12:21:00 default-k8s-diff-port-036892 kubelet[926]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Nov 04 12:21:00 default-k8s-diff-port-036892 kubelet[926]: E1104 12:21:00.244127     926 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730722860243888104,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 04 12:21:00 default-k8s-diff-port-036892 kubelet[926]: E1104 12:21:00.244155     926 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730722860243888104,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 04 12:21:01 default-k8s-diff-port-036892 kubelet[926]: E1104 12:21:01.049713     926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-2wl94" podUID="7f7cc9c1-420c-480e-b6b7-1a2027bf2f9d"
	Nov 04 12:21:10 default-k8s-diff-port-036892 kubelet[926]: E1104 12:21:10.245872     926 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730722870245482325,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 04 12:21:10 default-k8s-diff-port-036892 kubelet[926]: E1104 12:21:10.245917     926 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730722870245482325,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 04 12:21:12 default-k8s-diff-port-036892 kubelet[926]: E1104 12:21:12.050159     926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-2wl94" podUID="7f7cc9c1-420c-480e-b6b7-1a2027bf2f9d"
	Nov 04 12:21:20 default-k8s-diff-port-036892 kubelet[926]: E1104 12:21:20.247742     926 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730722880247301217,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 04 12:21:20 default-k8s-diff-port-036892 kubelet[926]: E1104 12:21:20.248060     926 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730722880247301217,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 04 12:21:25 default-k8s-diff-port-036892 kubelet[926]: E1104 12:21:25.048787     926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-2wl94" podUID="7f7cc9c1-420c-480e-b6b7-1a2027bf2f9d"
	Nov 04 12:21:30 default-k8s-diff-port-036892 kubelet[926]: E1104 12:21:30.249962     926 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730722890249742569,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 04 12:21:30 default-k8s-diff-port-036892 kubelet[926]: E1104 12:21:30.249999     926 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730722890249742569,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [9e9ecf7280a07a47b274097b461b9f3467e388751471e9da1adfae3166380516] <==
	I1104 12:08:36.317612       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1104 12:08:36.326812       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1104 12:08:36.326926       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1104 12:08:53.724287       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1104 12:08:53.724511       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-036892_8faba631-96c2-45db-944a-7948a126e32b!
	I1104 12:08:53.725980       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"36bca52f-4741-4cfb-b07f-d82a6fe85686", APIVersion:"v1", ResourceVersion:"637", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-036892_8faba631-96c2-45db-944a-7948a126e32b became leader
	I1104 12:08:53.825109       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-036892_8faba631-96c2-45db-944a-7948a126e32b!
	
	
	==> storage-provisioner [f8d8096ede6a877ac87491c90844f6cccb8fad4309f4a01e86bf1f7e3a4c9823] <==
	I1104 12:08:05.525458       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1104 12:08:35.529117       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-036892 -n default-k8s-diff-port-036892
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-036892 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-2wl94
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-036892 describe pod metrics-server-6867b74b74-2wl94
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-036892 describe pod metrics-server-6867b74b74-2wl94: exit status 1 (63.00414ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-2wl94" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-036892 describe pod metrics-server-6867b74b74-2wl94: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.20s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.16s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1104 12:14:47.409581   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/addons-746456/client.crt: no such file or directory" logger="UnhandledError"
E1104 12:14:47.537196   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/auto-528108/client.crt: no such file or directory" logger="UnhandledError"
E1104 12:15:14.952973   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/calico-528108/client.crt: no such file or directory" logger="UnhandledError"
E1104 12:15:50.828598   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/kindnet-528108/client.crt: no such file or directory" logger="UnhandledError"
E1104 12:16:10.600384   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/auto-528108/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-908370 -n no-preload-908370
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-11-04 12:22:22.819507281 +0000 UTC m=+6334.174566066
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-908370 -n no-preload-908370
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-908370 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-908370 logs -n 25: (1.953163777s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p calico-528108 sudo                                  | calico-528108                | jenkins | v1.34.0 | 04 Nov 24 12:00 UTC | 04 Nov 24 12:00 UTC |
	|         | cri-dockerd --version                                  |                              |         |         |                     |                     |
	| ssh     | -p calico-528108 sudo                                  | calico-528108                | jenkins | v1.34.0 | 04 Nov 24 12:00 UTC |                     |
	|         | systemctl status containerd                            |                              |         |         |                     |                     |
	|         | --all --full --no-pager                                |                              |         |         |                     |                     |
	| ssh     | -p calico-528108 sudo                                  | calico-528108                | jenkins | v1.34.0 | 04 Nov 24 12:00 UTC | 04 Nov 24 12:00 UTC |
	|         | systemctl cat containerd                               |                              |         |         |                     |                     |
	|         | --no-pager                                             |                              |         |         |                     |                     |
	| ssh     | -p calico-528108 sudo cat                              | calico-528108                | jenkins | v1.34.0 | 04 Nov 24 12:00 UTC | 04 Nov 24 12:00 UTC |
	|         | /lib/systemd/system/containerd.service                 |                              |         |         |                     |                     |
	| ssh     | -p calico-528108 sudo cat                              | calico-528108                | jenkins | v1.34.0 | 04 Nov 24 12:00 UTC | 04 Nov 24 12:00 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p calico-528108 sudo                                  | calico-528108                | jenkins | v1.34.0 | 04 Nov 24 12:00 UTC | 04 Nov 24 12:00 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p calico-528108 sudo                                  | calico-528108                | jenkins | v1.34.0 | 04 Nov 24 12:00 UTC | 04 Nov 24 12:00 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p calico-528108 sudo                                  | calico-528108                | jenkins | v1.34.0 | 04 Nov 24 12:00 UTC | 04 Nov 24 12:00 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p calico-528108 sudo find                             | calico-528108                | jenkins | v1.34.0 | 04 Nov 24 12:00 UTC | 04 Nov 24 12:00 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p calico-528108 sudo crio                             | calico-528108                | jenkins | v1.34.0 | 04 Nov 24 12:00 UTC | 04 Nov 24 12:00 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p calico-528108                                       | calico-528108                | jenkins | v1.34.0 | 04 Nov 24 12:00 UTC | 04 Nov 24 12:00 UTC |
	| delete  | -p                                                     | disable-driver-mounts-457408 | jenkins | v1.34.0 | 04 Nov 24 12:00 UTC | 04 Nov 24 12:00 UTC |
	|         | disable-driver-mounts-457408                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-036892 | jenkins | v1.34.0 | 04 Nov 24 12:00 UTC | 04 Nov 24 12:01 UTC |
	|         | default-k8s-diff-port-036892                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-036892  | default-k8s-diff-port-036892 | jenkins | v1.34.0 | 04 Nov 24 12:01 UTC | 04 Nov 24 12:01 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-036892 | jenkins | v1.34.0 | 04 Nov 24 12:01 UTC |                     |
	|         | default-k8s-diff-port-036892                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-908370                  | no-preload-908370            | jenkins | v1.34.0 | 04 Nov 24 12:02 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-908370                                   | no-preload-908370            | jenkins | v1.34.0 | 04 Nov 24 12:02 UTC | 04 Nov 24 12:13 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-325116                 | embed-certs-325116           | jenkins | v1.34.0 | 04 Nov 24 12:02 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-589257        | old-k8s-version-589257       | jenkins | v1.34.0 | 04 Nov 24 12:02 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p embed-certs-325116                                  | embed-certs-325116           | jenkins | v1.34.0 | 04 Nov 24 12:02 UTC | 04 Nov 24 12:12 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-036892       | default-k8s-diff-port-036892 | jenkins | v1.34.0 | 04 Nov 24 12:04 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-589257                              | old-k8s-version-589257       | jenkins | v1.34.0 | 04 Nov 24 12:04 UTC | 04 Nov 24 12:04 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-036892 | jenkins | v1.34.0 | 04 Nov 24 12:04 UTC | 04 Nov 24 12:12 UTC |
	|         | default-k8s-diff-port-036892                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-589257             | old-k8s-version-589257       | jenkins | v1.34.0 | 04 Nov 24 12:04 UTC | 04 Nov 24 12:04 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-589257                              | old-k8s-version-589257       | jenkins | v1.34.0 | 04 Nov 24 12:04 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/11/04 12:04:21
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1104 12:04:21.684777   86402 out.go:345] Setting OutFile to fd 1 ...
	I1104 12:04:21.684885   86402 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1104 12:04:21.684893   86402 out.go:358] Setting ErrFile to fd 2...
	I1104 12:04:21.684897   86402 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1104 12:04:21.685085   86402 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19906-19898/.minikube/bin
	I1104 12:04:21.685618   86402 out.go:352] Setting JSON to false
	I1104 12:04:21.686501   86402 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":10013,"bootTime":1730711849,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1104 12:04:21.686603   86402 start.go:139] virtualization: kvm guest
	I1104 12:04:21.688652   86402 out.go:177] * [old-k8s-version-589257] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1104 12:04:21.690121   86402 notify.go:220] Checking for updates...
	I1104 12:04:21.690173   86402 out.go:177]   - MINIKUBE_LOCATION=19906
	I1104 12:04:21.691712   86402 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1104 12:04:21.693100   86402 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19906-19898/kubeconfig
	I1104 12:04:21.694334   86402 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19906-19898/.minikube
	I1104 12:04:21.695431   86402 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1104 12:04:21.696680   86402 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1104 12:04:21.698271   86402 config.go:182] Loaded profile config "old-k8s-version-589257": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1104 12:04:21.698697   86402 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:04:21.698738   86402 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:04:21.713382   86402 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46731
	I1104 12:04:21.713861   86402 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:04:21.714357   86402 main.go:141] libmachine: Using API Version  1
	I1104 12:04:21.714378   86402 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:04:21.714696   86402 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:04:21.714872   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .DriverName
	I1104 12:04:21.716711   86402 out.go:177] * Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	I1104 12:04:21.718136   86402 driver.go:394] Setting default libvirt URI to qemu:///system
	I1104 12:04:21.718573   86402 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:04:21.718617   86402 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:04:21.733074   86402 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45605
	I1104 12:04:21.733525   86402 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:04:21.733939   86402 main.go:141] libmachine: Using API Version  1
	I1104 12:04:21.733955   86402 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:04:21.734252   86402 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:04:21.734410   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .DriverName
	I1104 12:04:21.770049   86402 out.go:177] * Using the kvm2 driver based on existing profile
	I1104 12:04:21.771735   86402 start.go:297] selected driver: kvm2
	I1104 12:04:21.771748   86402 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-589257 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-589257 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.180 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1104 12:04:21.771851   86402 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1104 12:04:21.772615   86402 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1104 12:04:21.772709   86402 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19906-19898/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1104 12:04:21.787662   86402 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1104 12:04:21.788158   86402 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1104 12:04:21.788201   86402 cni.go:84] Creating CNI manager for ""
	I1104 12:04:21.788238   86402 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1104 12:04:21.788282   86402 start.go:340] cluster config:
	{Name:old-k8s-version-589257 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-589257 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.180 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1104 12:04:21.788422   86402 iso.go:125] acquiring lock: {Name:mk00c7dd6e02d348844f079f7574057e15cae010 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1104 12:04:21.790364   86402 out.go:177] * Starting "old-k8s-version-589257" primary control-plane node in "old-k8s-version-589257" cluster
	I1104 12:04:20.849476   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:04:20.393451   86301 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1104 12:04:20.393484   86301 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1104 12:04:20.393492   86301 cache.go:56] Caching tarball of preloaded images
	I1104 12:04:20.393580   86301 preload.go:172] Found /home/jenkins/minikube-integration/19906-19898/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1104 12:04:20.393594   86301 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1104 12:04:20.393670   86301 profile.go:143] Saving config to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/default-k8s-diff-port-036892/config.json ...
	I1104 12:04:20.393863   86301 start.go:360] acquireMachinesLock for default-k8s-diff-port-036892: {Name:mka4ed965095cfb58c9b0f5916edff39b4236a2f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1104 12:04:21.791568   86402 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1104 12:04:21.791599   86402 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1104 12:04:21.791608   86402 cache.go:56] Caching tarball of preloaded images
	I1104 12:04:21.791668   86402 preload.go:172] Found /home/jenkins/minikube-integration/19906-19898/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1104 12:04:21.791678   86402 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1104 12:04:21.791755   86402 profile.go:143] Saving config to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/old-k8s-version-589257/config.json ...
	I1104 12:04:21.791918   86402 start.go:360] acquireMachinesLock for old-k8s-version-589257: {Name:mka4ed965095cfb58c9b0f5916edff39b4236a2f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1104 12:04:26.929512   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:04:30.001546   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:04:36.081486   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:04:39.153496   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:04:45.233535   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:04:48.305510   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:04:54.385555   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:04:57.457513   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:05:03.537513   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:05:06.609487   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:05:12.689475   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:05:15.761508   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:05:21.841502   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:05:24.913609   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:05:30.993499   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:05:34.065502   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:05:40.145511   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:05:43.217478   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:05:49.297518   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:05:52.369526   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:05:58.449509   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:06:01.521498   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:06:07.601506   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:06:10.673509   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:06:16.753487   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:06:19.825549   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:06:25.905526   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:06:28.977526   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:06:35.057466   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:06:38.129670   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:06:44.209517   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:06:47.281541   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:06:53.361542   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:06:56.433564   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:07:02.513462   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:07:05.585513   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:07:11.665480   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:07:14.737542   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:07:17.742001   85759 start.go:364] duration metric: took 4m26.438155925s to acquireMachinesLock for "embed-certs-325116"
	I1104 12:07:17.742060   85759 start.go:96] Skipping create...Using existing machine configuration
	I1104 12:07:17.742068   85759 fix.go:54] fixHost starting: 
	I1104 12:07:17.742418   85759 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:07:17.742470   85759 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:07:17.758611   85759 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43597
	I1104 12:07:17.759173   85759 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:07:17.759750   85759 main.go:141] libmachine: Using API Version  1
	I1104 12:07:17.759774   85759 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:07:17.760116   85759 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:07:17.760326   85759 main.go:141] libmachine: (embed-certs-325116) Calling .DriverName
	I1104 12:07:17.760498   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetState
	I1104 12:07:17.762313   85759 fix.go:112] recreateIfNeeded on embed-certs-325116: state=Stopped err=<nil>
	I1104 12:07:17.762335   85759 main.go:141] libmachine: (embed-certs-325116) Calling .DriverName
	W1104 12:07:17.762469   85759 fix.go:138] unexpected machine state, will restart: <nil>
	I1104 12:07:17.764411   85759 out.go:177] * Restarting existing kvm2 VM for "embed-certs-325116" ...
	I1104 12:07:17.739255   85500 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1104 12:07:17.739306   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetMachineName
	I1104 12:07:17.739691   85500 buildroot.go:166] provisioning hostname "no-preload-908370"
	I1104 12:07:17.739718   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetMachineName
	I1104 12:07:17.739888   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHHostname
	I1104 12:07:17.741864   85500 machine.go:96] duration metric: took 4m37.421766695s to provisionDockerMachine
	I1104 12:07:17.741908   85500 fix.go:56] duration metric: took 4m37.442993443s for fixHost
	I1104 12:07:17.741918   85500 start.go:83] releasing machines lock for "no-preload-908370", held for 4m37.443015642s
	W1104 12:07:17.741938   85500 start.go:714] error starting host: provision: host is not running
	W1104 12:07:17.742034   85500 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I1104 12:07:17.742044   85500 start.go:729] Will try again in 5 seconds ...
	I1104 12:07:17.765958   85759 main.go:141] libmachine: (embed-certs-325116) Calling .Start
	I1104 12:07:17.766220   85759 main.go:141] libmachine: (embed-certs-325116) Ensuring networks are active...
	I1104 12:07:17.767191   85759 main.go:141] libmachine: (embed-certs-325116) Ensuring network default is active
	I1104 12:07:17.767589   85759 main.go:141] libmachine: (embed-certs-325116) Ensuring network mk-embed-certs-325116 is active
	I1104 12:07:17.767984   85759 main.go:141] libmachine: (embed-certs-325116) Getting domain xml...
	I1104 12:07:17.768804   85759 main.go:141] libmachine: (embed-certs-325116) Creating domain...
	I1104 12:07:18.996135   85759 main.go:141] libmachine: (embed-certs-325116) Waiting to get IP...
	I1104 12:07:18.997002   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:18.997542   85759 main.go:141] libmachine: (embed-certs-325116) DBG | unable to find current IP address of domain embed-certs-325116 in network mk-embed-certs-325116
	I1104 12:07:18.997615   85759 main.go:141] libmachine: (embed-certs-325116) DBG | I1104 12:07:18.997513   87021 retry.go:31] will retry after 239.606839ms: waiting for machine to come up
	I1104 12:07:19.239054   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:19.239579   85759 main.go:141] libmachine: (embed-certs-325116) DBG | unable to find current IP address of domain embed-certs-325116 in network mk-embed-certs-325116
	I1104 12:07:19.239602   85759 main.go:141] libmachine: (embed-certs-325116) DBG | I1104 12:07:19.239528   87021 retry.go:31] will retry after 303.459257ms: waiting for machine to come up
	I1104 12:07:19.545134   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:19.545597   85759 main.go:141] libmachine: (embed-certs-325116) DBG | unable to find current IP address of domain embed-certs-325116 in network mk-embed-certs-325116
	I1104 12:07:19.545633   85759 main.go:141] libmachine: (embed-certs-325116) DBG | I1104 12:07:19.545544   87021 retry.go:31] will retry after 394.511523ms: waiting for machine to come up
	I1104 12:07:19.942226   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:19.942607   85759 main.go:141] libmachine: (embed-certs-325116) DBG | unable to find current IP address of domain embed-certs-325116 in network mk-embed-certs-325116
	I1104 12:07:19.942630   85759 main.go:141] libmachine: (embed-certs-325116) DBG | I1104 12:07:19.942576   87021 retry.go:31] will retry after 381.618515ms: waiting for machine to come up
	I1104 12:07:20.326265   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:20.326707   85759 main.go:141] libmachine: (embed-certs-325116) DBG | unable to find current IP address of domain embed-certs-325116 in network mk-embed-certs-325116
	I1104 12:07:20.326738   85759 main.go:141] libmachine: (embed-certs-325116) DBG | I1104 12:07:20.326651   87021 retry.go:31] will retry after 584.226748ms: waiting for machine to come up
	I1104 12:07:20.912117   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:20.912575   85759 main.go:141] libmachine: (embed-certs-325116) DBG | unable to find current IP address of domain embed-certs-325116 in network mk-embed-certs-325116
	I1104 12:07:20.912607   85759 main.go:141] libmachine: (embed-certs-325116) DBG | I1104 12:07:20.912524   87021 retry.go:31] will retry after 770.080519ms: waiting for machine to come up
	I1104 12:07:22.742250   85500 start.go:360] acquireMachinesLock for no-preload-908370: {Name:mka4ed965095cfb58c9b0f5916edff39b4236a2f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1104 12:07:21.684620   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:21.685074   85759 main.go:141] libmachine: (embed-certs-325116) DBG | unable to find current IP address of domain embed-certs-325116 in network mk-embed-certs-325116
	I1104 12:07:21.685103   85759 main.go:141] libmachine: (embed-certs-325116) DBG | I1104 12:07:21.685026   87021 retry.go:31] will retry after 1.170018806s: waiting for machine to come up
	I1104 12:07:22.856736   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:22.857104   85759 main.go:141] libmachine: (embed-certs-325116) DBG | unable to find current IP address of domain embed-certs-325116 in network mk-embed-certs-325116
	I1104 12:07:22.857132   85759 main.go:141] libmachine: (embed-certs-325116) DBG | I1104 12:07:22.857048   87021 retry.go:31] will retry after 1.467304538s: waiting for machine to come up
	I1104 12:07:24.326735   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:24.327197   85759 main.go:141] libmachine: (embed-certs-325116) DBG | unable to find current IP address of domain embed-certs-325116 in network mk-embed-certs-325116
	I1104 12:07:24.327220   85759 main.go:141] libmachine: (embed-certs-325116) DBG | I1104 12:07:24.327148   87021 retry.go:31] will retry after 1.676202737s: waiting for machine to come up
	I1104 12:07:26.006035   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:26.006515   85759 main.go:141] libmachine: (embed-certs-325116) DBG | unable to find current IP address of domain embed-certs-325116 in network mk-embed-certs-325116
	I1104 12:07:26.006538   85759 main.go:141] libmachine: (embed-certs-325116) DBG | I1104 12:07:26.006460   87021 retry.go:31] will retry after 1.8778328s: waiting for machine to come up
	I1104 12:07:27.886226   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:27.886634   85759 main.go:141] libmachine: (embed-certs-325116) DBG | unable to find current IP address of domain embed-certs-325116 in network mk-embed-certs-325116
	I1104 12:07:27.886656   85759 main.go:141] libmachine: (embed-certs-325116) DBG | I1104 12:07:27.886579   87021 retry.go:31] will retry after 2.886548821s: waiting for machine to come up
	I1104 12:07:30.776677   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:30.777080   85759 main.go:141] libmachine: (embed-certs-325116) DBG | unable to find current IP address of domain embed-certs-325116 in network mk-embed-certs-325116
	I1104 12:07:30.777102   85759 main.go:141] libmachine: (embed-certs-325116) DBG | I1104 12:07:30.777039   87021 retry.go:31] will retry after 3.108966144s: waiting for machine to come up
	I1104 12:07:35.049920   86301 start.go:364] duration metric: took 3m14.656022924s to acquireMachinesLock for "default-k8s-diff-port-036892"
	I1104 12:07:35.050007   86301 start.go:96] Skipping create...Using existing machine configuration
	I1104 12:07:35.050019   86301 fix.go:54] fixHost starting: 
	I1104 12:07:35.050381   86301 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:07:35.050436   86301 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:07:35.067928   86301 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38865
	I1104 12:07:35.068445   86301 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:07:35.068953   86301 main.go:141] libmachine: Using API Version  1
	I1104 12:07:35.068976   86301 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:07:35.069353   86301 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:07:35.069560   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .DriverName
	I1104 12:07:35.069692   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetState
	I1104 12:07:35.071231   86301 fix.go:112] recreateIfNeeded on default-k8s-diff-port-036892: state=Stopped err=<nil>
	I1104 12:07:35.071252   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .DriverName
	W1104 12:07:35.071401   86301 fix.go:138] unexpected machine state, will restart: <nil>
	I1104 12:07:35.073762   86301 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-036892" ...
	I1104 12:07:35.075114   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .Start
	I1104 12:07:35.075311   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Ensuring networks are active...
	I1104 12:07:35.076105   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Ensuring network default is active
	I1104 12:07:35.076534   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Ensuring network mk-default-k8s-diff-port-036892 is active
	I1104 12:07:35.076946   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Getting domain xml...
	I1104 12:07:35.077641   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Creating domain...
	I1104 12:07:33.887738   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:33.888147   85759 main.go:141] libmachine: (embed-certs-325116) Found IP for machine: 192.168.39.47
	I1104 12:07:33.888176   85759 main.go:141] libmachine: (embed-certs-325116) Reserving static IP address...
	I1104 12:07:33.888206   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has current primary IP address 192.168.39.47 and MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:33.888737   85759 main.go:141] libmachine: (embed-certs-325116) DBG | found host DHCP lease matching {name: "embed-certs-325116", mac: "52:54:00:bd:ab:49", ip: "192.168.39.47"} in network mk-embed-certs-325116: {Iface:virbr1 ExpiryTime:2024-11-04 13:07:28 +0000 UTC Type:0 Mac:52:54:00:bd:ab:49 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:embed-certs-325116 Clientid:01:52:54:00:bd:ab:49}
	I1104 12:07:33.888769   85759 main.go:141] libmachine: (embed-certs-325116) DBG | skip adding static IP to network mk-embed-certs-325116 - found existing host DHCP lease matching {name: "embed-certs-325116", mac: "52:54:00:bd:ab:49", ip: "192.168.39.47"}
	I1104 12:07:33.888783   85759 main.go:141] libmachine: (embed-certs-325116) Reserved static IP address: 192.168.39.47
	I1104 12:07:33.888795   85759 main.go:141] libmachine: (embed-certs-325116) Waiting for SSH to be available...
	I1104 12:07:33.888812   85759 main.go:141] libmachine: (embed-certs-325116) DBG | Getting to WaitForSSH function...
	I1104 12:07:33.891130   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:33.891493   85759 main.go:141] libmachine: (embed-certs-325116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:ab:49", ip: ""} in network mk-embed-certs-325116: {Iface:virbr1 ExpiryTime:2024-11-04 13:07:28 +0000 UTC Type:0 Mac:52:54:00:bd:ab:49 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:embed-certs-325116 Clientid:01:52:54:00:bd:ab:49}
	I1104 12:07:33.891520   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined IP address 192.168.39.47 and MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:33.891670   85759 main.go:141] libmachine: (embed-certs-325116) DBG | Using SSH client type: external
	I1104 12:07:33.891693   85759 main.go:141] libmachine: (embed-certs-325116) DBG | Using SSH private key: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/embed-certs-325116/id_rsa (-rw-------)
	I1104 12:07:33.891732   85759 main.go:141] libmachine: (embed-certs-325116) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.47 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19906-19898/.minikube/machines/embed-certs-325116/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1104 12:07:33.891748   85759 main.go:141] libmachine: (embed-certs-325116) DBG | About to run SSH command:
	I1104 12:07:33.891773   85759 main.go:141] libmachine: (embed-certs-325116) DBG | exit 0
	I1104 12:07:34.012989   85759 main.go:141] libmachine: (embed-certs-325116) DBG | SSH cmd err, output: <nil>: 
	I1104 12:07:34.013457   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetConfigRaw
	I1104 12:07:34.014162   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetIP
	I1104 12:07:34.016645   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:34.017028   85759 main.go:141] libmachine: (embed-certs-325116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:ab:49", ip: ""} in network mk-embed-certs-325116: {Iface:virbr1 ExpiryTime:2024-11-04 13:07:28 +0000 UTC Type:0 Mac:52:54:00:bd:ab:49 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:embed-certs-325116 Clientid:01:52:54:00:bd:ab:49}
	I1104 12:07:34.017062   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined IP address 192.168.39.47 and MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:34.017347   85759 profile.go:143] Saving config to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/embed-certs-325116/config.json ...
	I1104 12:07:34.017577   85759 machine.go:93] provisionDockerMachine start ...
	I1104 12:07:34.017596   85759 main.go:141] libmachine: (embed-certs-325116) Calling .DriverName
	I1104 12:07:34.017824   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHHostname
	I1104 12:07:34.020134   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:34.020416   85759 main.go:141] libmachine: (embed-certs-325116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:ab:49", ip: ""} in network mk-embed-certs-325116: {Iface:virbr1 ExpiryTime:2024-11-04 13:07:28 +0000 UTC Type:0 Mac:52:54:00:bd:ab:49 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:embed-certs-325116 Clientid:01:52:54:00:bd:ab:49}
	I1104 12:07:34.020449   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined IP address 192.168.39.47 and MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:34.020570   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHPort
	I1104 12:07:34.020745   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHKeyPath
	I1104 12:07:34.020905   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHKeyPath
	I1104 12:07:34.021059   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHUsername
	I1104 12:07:34.021313   85759 main.go:141] libmachine: Using SSH client type: native
	I1104 12:07:34.021505   85759 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.47 22 <nil> <nil>}
	I1104 12:07:34.021515   85759 main.go:141] libmachine: About to run SSH command:
	hostname
	I1104 12:07:34.125266   85759 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1104 12:07:34.125305   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetMachineName
	I1104 12:07:34.125556   85759 buildroot.go:166] provisioning hostname "embed-certs-325116"
	I1104 12:07:34.125583   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetMachineName
	I1104 12:07:34.125781   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHHostname
	I1104 12:07:34.128180   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:34.128486   85759 main.go:141] libmachine: (embed-certs-325116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:ab:49", ip: ""} in network mk-embed-certs-325116: {Iface:virbr1 ExpiryTime:2024-11-04 13:07:28 +0000 UTC Type:0 Mac:52:54:00:bd:ab:49 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:embed-certs-325116 Clientid:01:52:54:00:bd:ab:49}
	I1104 12:07:34.128514   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined IP address 192.168.39.47 and MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:34.128603   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHPort
	I1104 12:07:34.128758   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHKeyPath
	I1104 12:07:34.128890   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHKeyPath
	I1104 12:07:34.128996   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHUsername
	I1104 12:07:34.129166   85759 main.go:141] libmachine: Using SSH client type: native
	I1104 12:07:34.129371   85759 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.47 22 <nil> <nil>}
	I1104 12:07:34.129394   85759 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-325116 && echo "embed-certs-325116" | sudo tee /etc/hostname
	I1104 12:07:34.242027   85759 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-325116
	
	I1104 12:07:34.242054   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHHostname
	I1104 12:07:34.244671   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:34.244984   85759 main.go:141] libmachine: (embed-certs-325116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:ab:49", ip: ""} in network mk-embed-certs-325116: {Iface:virbr1 ExpiryTime:2024-11-04 13:07:28 +0000 UTC Type:0 Mac:52:54:00:bd:ab:49 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:embed-certs-325116 Clientid:01:52:54:00:bd:ab:49}
	I1104 12:07:34.245019   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined IP address 192.168.39.47 and MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:34.245159   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHPort
	I1104 12:07:34.245337   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHKeyPath
	I1104 12:07:34.245514   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHKeyPath
	I1104 12:07:34.245661   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHUsername
	I1104 12:07:34.245810   85759 main.go:141] libmachine: Using SSH client type: native
	I1104 12:07:34.245971   85759 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.47 22 <nil> <nil>}
	I1104 12:07:34.245986   85759 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-325116' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-325116/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-325116' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1104 12:07:34.357178   85759 main.go:141] libmachine: SSH cmd err, output: <nil>: 
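The /etc/hosts fix-up shown in the SSH command above is idempotent: it only rewrites the 127.0.1.1 entry when the new hostname is not already present. A minimal Go sketch that renders the same shell snippet for an arbitrary hostname; the helper name is illustrative and this is not minikube's own provisioner code:

```go
// Sketch: build the idempotent /etc/hosts fix-up seen in the log above.
package main

import "fmt"

func buildHostsFix(hostname string) string {
	return fmt.Sprintf(`
if ! grep -xq '.*\s%[1]s' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
  else
    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
  fi
fi`, hostname)
}

func main() {
	fmt.Println(buildHostsFix("embed-certs-325116"))
}
```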
	I1104 12:07:34.357204   85759 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19906-19898/.minikube CaCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19906-19898/.minikube}
	I1104 12:07:34.357220   85759 buildroot.go:174] setting up certificates
	I1104 12:07:34.357241   85759 provision.go:84] configureAuth start
	I1104 12:07:34.357250   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetMachineName
	I1104 12:07:34.357533   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetIP
	I1104 12:07:34.359993   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:34.360308   85759 main.go:141] libmachine: (embed-certs-325116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:ab:49", ip: ""} in network mk-embed-certs-325116: {Iface:virbr1 ExpiryTime:2024-11-04 13:07:28 +0000 UTC Type:0 Mac:52:54:00:bd:ab:49 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:embed-certs-325116 Clientid:01:52:54:00:bd:ab:49}
	I1104 12:07:34.360327   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined IP address 192.168.39.47 and MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:34.360533   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHHostname
	I1104 12:07:34.362459   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:34.362750   85759 main.go:141] libmachine: (embed-certs-325116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:ab:49", ip: ""} in network mk-embed-certs-325116: {Iface:virbr1 ExpiryTime:2024-11-04 13:07:28 +0000 UTC Type:0 Mac:52:54:00:bd:ab:49 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:embed-certs-325116 Clientid:01:52:54:00:bd:ab:49}
	I1104 12:07:34.362786   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined IP address 192.168.39.47 and MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:34.362932   85759 provision.go:143] copyHostCerts
	I1104 12:07:34.362986   85759 exec_runner.go:144] found /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem, removing ...
	I1104 12:07:34.363022   85759 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem
	I1104 12:07:34.363109   85759 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem (1123 bytes)
	I1104 12:07:34.363231   85759 exec_runner.go:144] found /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem, removing ...
	I1104 12:07:34.363242   85759 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem
	I1104 12:07:34.363282   85759 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem (1679 bytes)
	I1104 12:07:34.363357   85759 exec_runner.go:144] found /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem, removing ...
	I1104 12:07:34.363368   85759 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem
	I1104 12:07:34.363399   85759 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem (1078 bytes)
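The copyHostCerts step above follows a remove-then-copy pattern so a stale cert.pem/key.pem/ca.pem never lingers in the store path. A small sketch of that pattern; paths and the helper name are illustrative, not minikube's exec_runner implementation:

```go
// Sketch of the remove-then-copy step logged above: drop any existing
// destination file, then write a fresh copy with restrictive permissions.
package main

import (
	"fmt"
	"os"
)

func copyCert(src, dst string) error {
	data, err := os.ReadFile(src)
	if err != nil {
		return err
	}
	// Remove a stale copy first, mirroring the "found ..., removing ..." lines.
	if err := os.Remove(dst); err != nil && !os.IsNotExist(err) {
		return err
	}
	return os.WriteFile(dst, data, 0o600)
}

func main() {
	// Paths are placeholders for the .minikube cert files named in the log.
	if err := copyCert("certs/ca.pem", "ca.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```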
	I1104 12:07:34.363503   85759 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem org=jenkins.embed-certs-325116 san=[127.0.0.1 192.168.39.47 embed-certs-325116 localhost minikube]
	I1104 12:07:34.453223   85759 provision.go:177] copyRemoteCerts
	I1104 12:07:34.453295   85759 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1104 12:07:34.453317   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHHostname
	I1104 12:07:34.455736   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:34.456022   85759 main.go:141] libmachine: (embed-certs-325116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:ab:49", ip: ""} in network mk-embed-certs-325116: {Iface:virbr1 ExpiryTime:2024-11-04 13:07:28 +0000 UTC Type:0 Mac:52:54:00:bd:ab:49 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:embed-certs-325116 Clientid:01:52:54:00:bd:ab:49}
	I1104 12:07:34.456054   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined IP address 192.168.39.47 and MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:34.456230   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHPort
	I1104 12:07:34.456406   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHKeyPath
	I1104 12:07:34.456539   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHUsername
	I1104 12:07:34.456631   85759 sshutil.go:53] new ssh client: &{IP:192.168.39.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/embed-certs-325116/id_rsa Username:docker}
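The "new ssh client" line above carries everything needed to open the connection: IP, port, private-key path, and user. A minimal sketch of such a client using golang.org/x/crypto/ssh (requires that module); the key path is shortened and this is not minikube's sshutil code:

```go
// Sketch: open an SSH session from the IP, key path and username printed above.
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("machines/embed-certs-325116/id_rsa") // path shortened for the sketch
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // analogous to StrictHostKeyChecking=no above
	}
	client, err := ssh.Dial("tcp", "192.168.39.47:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()
	out, err := session.Output("hostname")
	if err != nil {
		panic(err)
	}
	fmt.Printf("remote hostname: %s", out)
}
```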
	I1104 12:07:34.539172   85759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1104 12:07:34.561889   85759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1104 12:07:34.585111   85759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1104 12:07:34.607449   85759 provision.go:87] duration metric: took 250.195255ms to configureAuth
	I1104 12:07:34.607495   85759 buildroot.go:189] setting minikube options for container-runtime
	I1104 12:07:34.607809   85759 config.go:182] Loaded profile config "embed-certs-325116": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1104 12:07:34.607952   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHHostname
	I1104 12:07:34.610672   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:34.611009   85759 main.go:141] libmachine: (embed-certs-325116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:ab:49", ip: ""} in network mk-embed-certs-325116: {Iface:virbr1 ExpiryTime:2024-11-04 13:07:28 +0000 UTC Type:0 Mac:52:54:00:bd:ab:49 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:embed-certs-325116 Clientid:01:52:54:00:bd:ab:49}
	I1104 12:07:34.611032   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined IP address 192.168.39.47 and MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:34.611253   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHPort
	I1104 12:07:34.611444   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHKeyPath
	I1104 12:07:34.611600   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHKeyPath
	I1104 12:07:34.611739   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHUsername
	I1104 12:07:34.611917   85759 main.go:141] libmachine: Using SSH client type: native
	I1104 12:07:34.612086   85759 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.47 22 <nil> <nil>}
	I1104 12:07:34.612101   85759 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1104 12:07:34.823086   85759 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1104 12:07:34.823114   85759 machine.go:96] duration metric: took 805.522353ms to provisionDockerMachine
	I1104 12:07:34.823128   85759 start.go:293] postStartSetup for "embed-certs-325116" (driver="kvm2")
	I1104 12:07:34.823138   85759 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1104 12:07:34.823174   85759 main.go:141] libmachine: (embed-certs-325116) Calling .DriverName
	I1104 12:07:34.823451   85759 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1104 12:07:34.823489   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHHostname
	I1104 12:07:34.826112   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:34.826453   85759 main.go:141] libmachine: (embed-certs-325116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:ab:49", ip: ""} in network mk-embed-certs-325116: {Iface:virbr1 ExpiryTime:2024-11-04 13:07:28 +0000 UTC Type:0 Mac:52:54:00:bd:ab:49 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:embed-certs-325116 Clientid:01:52:54:00:bd:ab:49}
	I1104 12:07:34.826482   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined IP address 192.168.39.47 and MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:34.826581   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHPort
	I1104 12:07:34.826756   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHKeyPath
	I1104 12:07:34.826886   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHUsername
	I1104 12:07:34.826998   85759 sshutil.go:53] new ssh client: &{IP:192.168.39.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/embed-certs-325116/id_rsa Username:docker}
	I1104 12:07:34.907354   85759 ssh_runner.go:195] Run: cat /etc/os-release
	I1104 12:07:34.911229   85759 info.go:137] Remote host: Buildroot 2023.02.9
	I1104 12:07:34.911246   85759 filesync.go:126] Scanning /home/jenkins/minikube-integration/19906-19898/.minikube/addons for local assets ...
	I1104 12:07:34.911316   85759 filesync.go:126] Scanning /home/jenkins/minikube-integration/19906-19898/.minikube/files for local assets ...
	I1104 12:07:34.911402   85759 filesync.go:149] local asset: /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem -> 272182.pem in /etc/ssl/certs
	I1104 12:07:34.911516   85759 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1104 12:07:34.920149   85759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem --> /etc/ssl/certs/272182.pem (1708 bytes)
	I1104 12:07:34.942468   85759 start.go:296] duration metric: took 119.32654ms for postStartSetup
	I1104 12:07:34.942517   85759 fix.go:56] duration metric: took 17.200448721s for fixHost
	I1104 12:07:34.942540   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHHostname
	I1104 12:07:34.945295   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:34.945659   85759 main.go:141] libmachine: (embed-certs-325116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:ab:49", ip: ""} in network mk-embed-certs-325116: {Iface:virbr1 ExpiryTime:2024-11-04 13:07:28 +0000 UTC Type:0 Mac:52:54:00:bd:ab:49 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:embed-certs-325116 Clientid:01:52:54:00:bd:ab:49}
	I1104 12:07:34.945685   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined IP address 192.168.39.47 and MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:34.945847   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHPort
	I1104 12:07:34.946006   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHKeyPath
	I1104 12:07:34.946173   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHKeyPath
	I1104 12:07:34.946311   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHUsername
	I1104 12:07:34.946442   85759 main.go:141] libmachine: Using SSH client type: native
	I1104 12:07:34.946583   85759 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.47 22 <nil> <nil>}
	I1104 12:07:34.946592   85759 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1104 12:07:35.049767   85759 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730722055.017047529
	
	I1104 12:07:35.049790   85759 fix.go:216] guest clock: 1730722055.017047529
	I1104 12:07:35.049797   85759 fix.go:229] Guest: 2024-11-04 12:07:35.017047529 +0000 UTC Remote: 2024-11-04 12:07:34.942522008 +0000 UTC m=+283.781167350 (delta=74.525521ms)
	I1104 12:07:35.049829   85759 fix.go:200] guest clock delta is within tolerance: 74.525521ms
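The fix.go lines above read the guest clock with `date +%s.%N` and compare it against the host clock, accepting the 74.5ms delta as within tolerance. A sketch of that delta check; the tolerance value here is assumed for illustration, not taken from minikube:

```go
// Sketch: parse a guest `date +%s.%N` value and compare it with the host clock.
package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

func clockDelta(guestEpoch string, host time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(guestEpoch, 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return guest.Sub(host), nil
}

func main() {
	host := time.Now()
	delta, err := clockDelta("1730722055.017047529", host) // guest value from the log above
	if err != nil {
		panic(err)
	}
	const tolerance = 2 * time.Second // assumed tolerance for this sketch
	if math.Abs(float64(delta)) <= float64(tolerance) {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance\n", delta)
	}
}
```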
	I1104 12:07:35.049834   85759 start.go:83] releasing machines lock for "embed-certs-325116", held for 17.307794416s
	I1104 12:07:35.049859   85759 main.go:141] libmachine: (embed-certs-325116) Calling .DriverName
	I1104 12:07:35.050137   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetIP
	I1104 12:07:35.052845   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:35.053238   85759 main.go:141] libmachine: (embed-certs-325116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:ab:49", ip: ""} in network mk-embed-certs-325116: {Iface:virbr1 ExpiryTime:2024-11-04 13:07:28 +0000 UTC Type:0 Mac:52:54:00:bd:ab:49 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:embed-certs-325116 Clientid:01:52:54:00:bd:ab:49}
	I1104 12:07:35.053269   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined IP address 192.168.39.47 and MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:35.053454   85759 main.go:141] libmachine: (embed-certs-325116) Calling .DriverName
	I1104 12:07:35.054050   85759 main.go:141] libmachine: (embed-certs-325116) Calling .DriverName
	I1104 12:07:35.054239   85759 main.go:141] libmachine: (embed-certs-325116) Calling .DriverName
	I1104 12:07:35.054337   85759 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1104 12:07:35.054383   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHHostname
	I1104 12:07:35.054502   85759 ssh_runner.go:195] Run: cat /version.json
	I1104 12:07:35.054539   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHHostname
	I1104 12:07:35.057289   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:35.057391   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:35.057733   85759 main.go:141] libmachine: (embed-certs-325116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:ab:49", ip: ""} in network mk-embed-certs-325116: {Iface:virbr1 ExpiryTime:2024-11-04 13:07:28 +0000 UTC Type:0 Mac:52:54:00:bd:ab:49 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:embed-certs-325116 Clientid:01:52:54:00:bd:ab:49}
	I1104 12:07:35.057778   85759 main.go:141] libmachine: (embed-certs-325116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:ab:49", ip: ""} in network mk-embed-certs-325116: {Iface:virbr1 ExpiryTime:2024-11-04 13:07:28 +0000 UTC Type:0 Mac:52:54:00:bd:ab:49 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:embed-certs-325116 Clientid:01:52:54:00:bd:ab:49}
	I1104 12:07:35.057802   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined IP address 192.168.39.47 and MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:35.057820   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined IP address 192.168.39.47 and MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:35.057959   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHPort
	I1104 12:07:35.057996   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHPort
	I1104 12:07:35.058110   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHKeyPath
	I1104 12:07:35.058296   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHKeyPath
	I1104 12:07:35.058316   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHUsername
	I1104 12:07:35.058485   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHUsername
	I1104 12:07:35.058485   85759 sshutil.go:53] new ssh client: &{IP:192.168.39.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/embed-certs-325116/id_rsa Username:docker}
	I1104 12:07:35.058658   85759 sshutil.go:53] new ssh client: &{IP:192.168.39.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/embed-certs-325116/id_rsa Username:docker}
	I1104 12:07:35.134602   85759 ssh_runner.go:195] Run: systemctl --version
	I1104 12:07:35.158961   85759 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1104 12:07:35.303038   85759 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1104 12:07:35.309611   85759 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1104 12:07:35.309674   85759 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1104 12:07:35.325082   85759 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1104 12:07:35.325142   85759 start.go:495] detecting cgroup driver to use...
	I1104 12:07:35.325211   85759 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1104 12:07:35.341673   85759 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1104 12:07:35.355506   85759 docker.go:217] disabling cri-docker service (if available) ...
	I1104 12:07:35.355569   85759 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1104 12:07:35.369017   85759 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1104 12:07:35.382745   85759 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1104 12:07:35.498985   85759 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1104 12:07:35.648628   85759 docker.go:233] disabling docker service ...
	I1104 12:07:35.648702   85759 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1104 12:07:35.666912   85759 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1104 12:07:35.679786   85759 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1104 12:07:35.799284   85759 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1104 12:07:35.931842   85759 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1104 12:07:35.945707   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1104 12:07:35.965183   85759 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1104 12:07:35.965269   85759 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:07:35.975446   85759 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1104 12:07:35.975514   85759 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:07:35.985968   85759 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:07:35.996462   85759 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:07:36.006840   85759 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1104 12:07:36.017174   85759 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:07:36.027013   85759 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:07:36.044572   85759 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
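The sed one-liners above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pause image, cgroup manager, conmon cgroup, and the unprivileged-port sysctl. A Go sketch of the first two rewrites as in-memory regexp replacements, purely to illustrate the substitutions; minikube applies them over SSH with sed as logged:

```go
// Sketch: the pause_image / cgroup_manager rewrites from the sed commands above.
package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := `pause_image = "registry.k8s.io/pause:3.9"
cgroup_manager = "systemd"
`
	pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)

	conf = pause.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
	conf = cgroup.ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)

	fmt.Print(conf) // on the node this content lives in /etc/crio/crio.conf.d/02-crio.conf
}
```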
	I1104 12:07:36.054046   85759 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1104 12:07:36.063355   85759 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1104 12:07:36.063399   85759 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1104 12:07:36.075157   85759 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1104 12:07:36.084713   85759 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1104 12:07:36.205088   85759 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1104 12:07:36.299330   85759 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1104 12:07:36.299423   85759 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1104 12:07:36.304194   85759 start.go:563] Will wait 60s for crictl version
	I1104 12:07:36.304248   85759 ssh_runner.go:195] Run: which crictl
	I1104 12:07:36.308041   85759 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1104 12:07:36.349114   85759 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1104 12:07:36.349264   85759 ssh_runner.go:195] Run: crio --version
	I1104 12:07:36.378677   85759 ssh_runner.go:195] Run: crio --version
	I1104 12:07:36.406751   85759 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1104 12:07:36.335603   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Waiting to get IP...
	I1104 12:07:36.336431   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:36.336921   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | unable to find current IP address of domain default-k8s-diff-port-036892 in network mk-default-k8s-diff-port-036892
	I1104 12:07:36.337007   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | I1104 12:07:36.336911   87142 retry.go:31] will retry after 289.750795ms: waiting for machine to come up
	I1104 12:07:36.628712   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:36.629301   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | unable to find current IP address of domain default-k8s-diff-port-036892 in network mk-default-k8s-diff-port-036892
	I1104 12:07:36.629419   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | I1104 12:07:36.629345   87142 retry.go:31] will retry after 356.596321ms: waiting for machine to come up
	I1104 12:07:36.988173   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:36.988663   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | unable to find current IP address of domain default-k8s-diff-port-036892 in network mk-default-k8s-diff-port-036892
	I1104 12:07:36.988713   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | I1104 12:07:36.988626   87142 retry.go:31] will retry after 446.62367ms: waiting for machine to come up
	I1104 12:07:37.437529   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:37.438120   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | unable to find current IP address of domain default-k8s-diff-port-036892 in network mk-default-k8s-diff-port-036892
	I1104 12:07:37.438174   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | I1104 12:07:37.438023   87142 retry.go:31] will retry after 482.072639ms: waiting for machine to come up
	I1104 12:07:37.921514   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:37.922025   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | unable to find current IP address of domain default-k8s-diff-port-036892 in network mk-default-k8s-diff-port-036892
	I1104 12:07:37.922056   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | I1104 12:07:37.921983   87142 retry.go:31] will retry after 645.10615ms: waiting for machine to come up
	I1104 12:07:38.569009   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:38.569524   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | unable to find current IP address of domain default-k8s-diff-port-036892 in network mk-default-k8s-diff-port-036892
	I1104 12:07:38.569566   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | I1104 12:07:38.569432   87142 retry.go:31] will retry after 841.352802ms: waiting for machine to come up
	I1104 12:07:39.412662   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:39.413091   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | unable to find current IP address of domain default-k8s-diff-port-036892 in network mk-default-k8s-diff-port-036892
	I1104 12:07:39.413112   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | I1104 12:07:39.413047   87142 retry.go:31] will retry after 878.218722ms: waiting for machine to come up
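While embed-certs-325116 is being provisioned, the default-k8s-diff-port-036892 machine above is still waiting for a DHCP lease, retrying with a growing, jittered delay. A sketch of that retry shape; the lookup is a stand-in and the delays only roughly match the log:

```go
// Sketch: retry a lookup with a jittered, growing delay until it succeeds,
// mirroring the "will retry after ..." lines above.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoIP = errors.New("unable to find current IP address of domain")

func lookupIP(attempt int) (string, error) {
	// Stand-in for reading the DHCP leases; pretend the lease shows up on try 4.
	if attempt < 4 {
		return "", errNoIP
	}
	return "192.0.2.10", nil // placeholder address for the sketch
}

func main() {
	delay := 250 * time.Millisecond
	for attempt := 1; ; attempt++ {
		ip, err := lookupIP(attempt)
		if err == nil {
			fmt.Println("machine is up at", ip)
			return
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("attempt %d: %v, will retry after %v\n", attempt, err, wait)
		time.Sleep(wait)
		delay += delay / 2 // grow the base delay, roughly like the log's backoff
	}
}
```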
	I1104 12:07:36.407939   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetIP
	I1104 12:07:36.411021   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:36.411378   85759 main.go:141] libmachine: (embed-certs-325116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:ab:49", ip: ""} in network mk-embed-certs-325116: {Iface:virbr1 ExpiryTime:2024-11-04 13:07:28 +0000 UTC Type:0 Mac:52:54:00:bd:ab:49 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:embed-certs-325116 Clientid:01:52:54:00:bd:ab:49}
	I1104 12:07:36.411408   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined IP address 192.168.39.47 and MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:36.411599   85759 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1104 12:07:36.415528   85759 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1104 12:07:36.427484   85759 kubeadm.go:883] updating cluster {Name:embed-certs-325116 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.2 ClusterName:embed-certs-325116 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.47 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1104 12:07:36.427616   85759 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1104 12:07:36.427684   85759 ssh_runner.go:195] Run: sudo crictl images --output json
	I1104 12:07:36.460332   85759 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1104 12:07:36.460406   85759 ssh_runner.go:195] Run: which lz4
	I1104 12:07:36.464187   85759 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1104 12:07:36.468140   85759 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1104 12:07:36.468177   85759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1104 12:07:37.703067   85759 crio.go:462] duration metric: took 1.238901186s to copy over tarball
	I1104 12:07:37.703136   85759 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1104 12:07:39.803761   85759 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.100578378s)
	I1104 12:07:39.803795   85759 crio.go:469] duration metric: took 2.100697698s to extract the tarball
	I1104 12:07:39.803805   85759 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1104 12:07:39.840536   85759 ssh_runner.go:195] Run: sudo crictl images --output json
	I1104 12:07:39.883410   85759 crio.go:514] all images are preloaded for cri-o runtime.
	I1104 12:07:39.883431   85759 cache_images.go:84] Images are preloaded, skipping loading
	I1104 12:07:39.883438   85759 kubeadm.go:934] updating node { 192.168.39.47 8443 v1.31.2 crio true true} ...
	I1104 12:07:39.883531   85759 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-325116 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.47
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:embed-certs-325116 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
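The kubelet drop-in printed above is generated from the node's name, IP and Kubernetes version. A text/template sketch that renders an equivalent unit; the template text mirrors the log, while the rendering code itself is illustrative rather than minikube's own:

```go
// Sketch: render a kubelet systemd drop-in like the one logged above.
package main

import (
	"os"
	"text/template"
)

const unit = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(unit))
	if err := t.Execute(os.Stdout, map[string]string{
		"KubernetesVersion": "v1.31.2",
		"NodeName":          "embed-certs-325116",
		"NodeIP":            "192.168.39.47",
	}); err != nil {
		panic(err)
	}
}
```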
	I1104 12:07:39.883608   85759 ssh_runner.go:195] Run: crio config
	I1104 12:07:39.928280   85759 cni.go:84] Creating CNI manager for ""
	I1104 12:07:39.928303   85759 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1104 12:07:39.928313   85759 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1104 12:07:39.928333   85759 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.47 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-325116 NodeName:embed-certs-325116 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.47"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.47 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1104 12:07:39.928440   85759 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.47
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-325116"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.47"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.47"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1104 12:07:39.928495   85759 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1104 12:07:39.938496   85759 binaries.go:44] Found k8s binaries, skipping transfer
	I1104 12:07:39.938568   85759 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1104 12:07:39.947809   85759 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1104 12:07:39.963319   85759 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1104 12:07:39.978789   85759 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2295 bytes)
	I1104 12:07:39.994910   85759 ssh_runner.go:195] Run: grep 192.168.39.47	control-plane.minikube.internal$ /etc/hosts
	I1104 12:07:39.998355   85759 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.47	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1104 12:07:40.010097   85759 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1104 12:07:40.118679   85759 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1104 12:07:40.134369   85759 certs.go:68] Setting up /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/embed-certs-325116 for IP: 192.168.39.47
	I1104 12:07:40.134391   85759 certs.go:194] generating shared ca certs ...
	I1104 12:07:40.134429   85759 certs.go:226] acquiring lock for ca certs: {Name:mk4fb0469da6697654d0290fd25edddd463f47c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 12:07:40.134612   85759 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.key
	I1104 12:07:40.134666   85759 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.key
	I1104 12:07:40.134680   85759 certs.go:256] generating profile certs ...
	I1104 12:07:40.134782   85759 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/embed-certs-325116/client.key
	I1104 12:07:40.134880   85759 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/embed-certs-325116/apiserver.key.36f6fb66
	I1104 12:07:40.134929   85759 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/embed-certs-325116/proxy-client.key
	I1104 12:07:40.135083   85759 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218.pem (1338 bytes)
	W1104 12:07:40.135124   85759 certs.go:480] ignoring /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218_empty.pem, impossibly tiny 0 bytes
	I1104 12:07:40.135140   85759 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem (1675 bytes)
	I1104 12:07:40.135225   85759 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem (1078 bytes)
	I1104 12:07:40.135281   85759 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem (1123 bytes)
	I1104 12:07:40.135315   85759 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem (1679 bytes)
	I1104 12:07:40.135380   85759 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem (1708 bytes)
	I1104 12:07:40.136240   85759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1104 12:07:40.179608   85759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1104 12:07:40.227851   85759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1104 12:07:40.255791   85759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1104 12:07:40.281672   85759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/embed-certs-325116/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1104 12:07:40.305960   85759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/embed-certs-325116/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1104 12:07:40.332465   85759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/embed-certs-325116/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1104 12:07:40.354950   85759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/embed-certs-325116/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1104 12:07:40.377476   85759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218.pem --> /usr/share/ca-certificates/27218.pem (1338 bytes)
	I1104 12:07:40.399291   85759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem --> /usr/share/ca-certificates/272182.pem (1708 bytes)
	I1104 12:07:40.420689   85759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1104 12:07:40.443610   85759 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1104 12:07:40.459706   85759 ssh_runner.go:195] Run: openssl version
	I1104 12:07:40.465244   85759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/27218.pem && ln -fs /usr/share/ca-certificates/27218.pem /etc/ssl/certs/27218.pem"
	I1104 12:07:40.475375   85759 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/27218.pem
	I1104 12:07:40.479676   85759 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  4 10:49 /usr/share/ca-certificates/27218.pem
	I1104 12:07:40.479748   85759 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/27218.pem
	I1104 12:07:40.485523   85759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/27218.pem /etc/ssl/certs/51391683.0"
	I1104 12:07:40.497163   85759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/272182.pem && ln -fs /usr/share/ca-certificates/272182.pem /etc/ssl/certs/272182.pem"
	I1104 12:07:40.509090   85759 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/272182.pem
	I1104 12:07:40.513617   85759 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  4 10:49 /usr/share/ca-certificates/272182.pem
	I1104 12:07:40.513685   85759 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/272182.pem
	I1104 12:07:40.519372   85759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/272182.pem /etc/ssl/certs/3ec20f2e.0"
	I1104 12:07:40.530944   85759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1104 12:07:40.542569   85759 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1104 12:07:40.546965   85759 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  4 10:38 /usr/share/ca-certificates/minikubeCA.pem
	I1104 12:07:40.547019   85759 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1104 12:07:40.552470   85759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1104 12:07:40.562456   85759 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1104 12:07:40.566967   85759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1104 12:07:40.572778   85759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1104 12:07:40.578409   85759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1104 12:07:40.584134   85759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1104 12:07:40.589880   85759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1104 12:07:40.595604   85759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
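Each `openssl x509 -noout -checkend 86400` call above asks whether a certificate expires within the next 24 hours. An equivalent check with Go's crypto/x509; the path is one of the certs named in the log and the helper is illustrative:

```go
// Sketch: report whether a PEM certificate expires within a given window,
// the same question the -checkend 86400 calls above are answering.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(pemPath string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(pemPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", pemPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/front-proxy-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}
```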
	I1104 12:07:40.601191   85759 kubeadm.go:392] StartCluster: {Name:embed-certs-325116 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.2 ClusterName:embed-certs-325116 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.47 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1104 12:07:40.601329   85759 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1104 12:07:40.601385   85759 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1104 12:07:40.642970   85759 cri.go:89] found id: ""
	I1104 12:07:40.643034   85759 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1104 12:07:40.653420   85759 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1104 12:07:40.653446   85759 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1104 12:07:40.653496   85759 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1104 12:07:40.663023   85759 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1104 12:07:40.664008   85759 kubeconfig.go:125] found "embed-certs-325116" server: "https://192.168.39.47:8443"
	I1104 12:07:40.665967   85759 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1104 12:07:40.675296   85759 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.47
	I1104 12:07:40.675324   85759 kubeadm.go:1160] stopping kube-system containers ...
	I1104 12:07:40.675336   85759 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1104 12:07:40.675384   85759 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1104 12:07:40.718457   85759 cri.go:89] found id: ""
	I1104 12:07:40.718543   85759 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1104 12:07:40.733875   85759 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1104 12:07:40.743811   85759 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1104 12:07:40.743835   85759 kubeadm.go:157] found existing configuration files:
	
	I1104 12:07:40.743889   85759 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1104 12:07:40.752987   85759 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1104 12:07:40.753048   85759 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1104 12:07:40.762296   85759 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1104 12:07:40.771048   85759 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1104 12:07:40.771112   85759 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1104 12:07:40.780163   85759 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1104 12:07:40.789500   85759 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1104 12:07:40.789566   85759 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1104 12:07:40.799200   85759 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1104 12:07:40.808061   85759 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1104 12:07:40.808121   85759 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1104 12:07:40.817445   85759 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1104 12:07:40.826803   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1104 12:07:40.934345   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1104 12:07:40.292591   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:40.293050   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | unable to find current IP address of domain default-k8s-diff-port-036892 in network mk-default-k8s-diff-port-036892
	I1104 12:07:40.293084   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | I1104 12:07:40.292988   87142 retry.go:31] will retry after 1.110341741s: waiting for machine to come up
	I1104 12:07:41.405407   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:41.405858   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | unable to find current IP address of domain default-k8s-diff-port-036892 in network mk-default-k8s-diff-port-036892
	I1104 12:07:41.405885   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | I1104 12:07:41.405800   87142 retry.go:31] will retry after 1.311587036s: waiting for machine to come up
	I1104 12:07:42.719157   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:42.719540   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | unable to find current IP address of domain default-k8s-diff-port-036892 in network mk-default-k8s-diff-port-036892
	I1104 12:07:42.719591   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | I1104 12:07:42.719530   87142 retry.go:31] will retry after 1.999866716s: waiting for machine to come up
	I1104 12:07:44.721872   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:44.722324   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | unable to find current IP address of domain default-k8s-diff-port-036892 in network mk-default-k8s-diff-port-036892
	I1104 12:07:44.722351   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | I1104 12:07:44.722278   87142 retry.go:31] will retry after 2.895241769s: waiting for machine to come up
	I1104 12:07:41.512710   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1104 12:07:41.729355   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1104 12:07:41.807064   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1104 12:07:41.888493   85759 api_server.go:52] waiting for apiserver process to appear ...
	I1104 12:07:41.888593   85759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:07:42.389674   85759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:07:42.889373   85759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:07:43.389705   85759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:07:43.889548   85759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:07:43.924248   85759 api_server.go:72] duration metric: took 2.035753888s to wait for apiserver process to appear ...
	I1104 12:07:43.924277   85759 api_server.go:88] waiting for apiserver healthz status ...
	I1104 12:07:43.924320   85759 api_server.go:253] Checking apiserver healthz at https://192.168.39.47:8443/healthz ...
	I1104 12:07:43.924831   85759 api_server.go:269] stopped: https://192.168.39.47:8443/healthz: Get "https://192.168.39.47:8443/healthz": dial tcp 192.168.39.47:8443: connect: connection refused
	I1104 12:07:44.424651   85759 api_server.go:253] Checking apiserver healthz at https://192.168.39.47:8443/healthz ...
	I1104 12:07:47.043002   85759 api_server.go:279] https://192.168.39.47:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1104 12:07:47.043037   85759 api_server.go:103] status: https://192.168.39.47:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1104 12:07:47.043054   85759 api_server.go:253] Checking apiserver healthz at https://192.168.39.47:8443/healthz ...
	I1104 12:07:47.104246   85759 api_server.go:279] https://192.168.39.47:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1104 12:07:47.104276   85759 api_server.go:103] status: https://192.168.39.47:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1104 12:07:47.424506   85759 api_server.go:253] Checking apiserver healthz at https://192.168.39.47:8443/healthz ...
	I1104 12:07:47.430506   85759 api_server.go:279] https://192.168.39.47:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1104 12:07:47.430544   85759 api_server.go:103] status: https://192.168.39.47:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1104 12:07:47.924409   85759 api_server.go:253] Checking apiserver healthz at https://192.168.39.47:8443/healthz ...
	I1104 12:07:47.937055   85759 api_server.go:279] https://192.168.39.47:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1104 12:07:47.937083   85759 api_server.go:103] status: https://192.168.39.47:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1104 12:07:48.424568   85759 api_server.go:253] Checking apiserver healthz at https://192.168.39.47:8443/healthz ...
	I1104 12:07:48.428850   85759 api_server.go:279] https://192.168.39.47:8443/healthz returned 200:
	ok
	I1104 12:07:48.436388   85759 api_server.go:141] control plane version: v1.31.2
	I1104 12:07:48.436411   85759 api_server.go:131] duration metric: took 4.512127349s to wait for apiserver health ...
	I1104 12:07:48.436420   85759 cni.go:84] Creating CNI manager for ""
	I1104 12:07:48.436427   85759 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1104 12:07:48.438220   85759 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1104 12:07:48.439495   85759 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1104 12:07:48.449650   85759 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1104 12:07:48.467313   85759 system_pods.go:43] waiting for kube-system pods to appear ...
	I1104 12:07:48.480777   85759 system_pods.go:59] 8 kube-system pods found
	I1104 12:07:48.480823   85759 system_pods.go:61] "coredns-7c65d6cfc9-mf8xg" [c0162005-7971-4161-9575-9f36c13d54f2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1104 12:07:48.480834   85759 system_pods.go:61] "etcd-embed-certs-325116" [4cfeeefb-d7e4-48b6-bea0-e9d967750770] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1104 12:07:48.480845   85759 system_pods.go:61] "kube-apiserver-embed-certs-325116" [69ad8209-af11-4479-b4f7-9991f98d74b9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1104 12:07:48.480859   85759 system_pods.go:61] "kube-controller-manager-embed-certs-325116" [1ba1fbaf-e1e2-4ca7-aef5-84c4410143c4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1104 12:07:48.480876   85759 system_pods.go:61] "kube-proxy-phzgx" [4ea64f2c-7568-486d-9941-f89ed4221f35] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1104 12:07:48.480893   85759 system_pods.go:61] "kube-scheduler-embed-certs-325116" [168359e4-eda1-4fb6-ab45-03e888466702] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1104 12:07:48.480907   85759 system_pods.go:61] "metrics-server-6867b74b74-knfd4" [5b3ef856-5b69-44b1-ae29-4a58bf235e41] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1104 12:07:48.480916   85759 system_pods.go:61] "storage-provisioner" [0dabcf5a-028b-4ab6-8af4-be25abaeb9b5] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1104 12:07:48.480928   85759 system_pods.go:74] duration metric: took 13.592864ms to wait for pod list to return data ...
	I1104 12:07:48.480947   85759 node_conditions.go:102] verifying NodePressure condition ...
	I1104 12:07:48.487234   85759 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1104 12:07:48.487271   85759 node_conditions.go:123] node cpu capacity is 2
	I1104 12:07:48.487284   85759 node_conditions.go:105] duration metric: took 6.331259ms to run NodePressure ...
	I1104 12:07:48.487313   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1104 12:07:48.756654   85759 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1104 12:07:48.764840   85759 kubeadm.go:739] kubelet initialised
	I1104 12:07:48.764863   85759 kubeadm.go:740] duration metric: took 8.175857ms waiting for restarted kubelet to initialise ...
	I1104 12:07:48.764871   85759 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1104 12:07:48.772653   85759 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-mf8xg" in "kube-system" namespace to be "Ready" ...
	I1104 12:07:48.784158   85759 pod_ready.go:98] node "embed-certs-325116" hosting pod "coredns-7c65d6cfc9-mf8xg" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-325116" has status "Ready":"False"
	I1104 12:07:48.784198   85759 pod_ready.go:82] duration metric: took 11.515605ms for pod "coredns-7c65d6cfc9-mf8xg" in "kube-system" namespace to be "Ready" ...
	E1104 12:07:48.784211   85759 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-325116" hosting pod "coredns-7c65d6cfc9-mf8xg" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-325116" has status "Ready":"False"
	I1104 12:07:48.784220   85759 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-325116" in "kube-system" namespace to be "Ready" ...
	I1104 12:07:48.791264   85759 pod_ready.go:98] node "embed-certs-325116" hosting pod "etcd-embed-certs-325116" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-325116" has status "Ready":"False"
	I1104 12:07:48.791297   85759 pod_ready.go:82] duration metric: took 7.066247ms for pod "etcd-embed-certs-325116" in "kube-system" namespace to be "Ready" ...
	E1104 12:07:48.791310   85759 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-325116" hosting pod "etcd-embed-certs-325116" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-325116" has status "Ready":"False"
	I1104 12:07:48.791326   85759 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-325116" in "kube-system" namespace to be "Ready" ...
	I1104 12:07:48.798259   85759 pod_ready.go:98] node "embed-certs-325116" hosting pod "kube-apiserver-embed-certs-325116" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-325116" has status "Ready":"False"
	I1104 12:07:48.798294   85759 pod_ready.go:82] duration metric: took 6.954559ms for pod "kube-apiserver-embed-certs-325116" in "kube-system" namespace to be "Ready" ...
	E1104 12:07:48.798304   85759 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-325116" hosting pod "kube-apiserver-embed-certs-325116" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-325116" has status "Ready":"False"
	I1104 12:07:48.798312   85759 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-325116" in "kube-system" namespace to be "Ready" ...
	I1104 12:07:48.872019   85759 pod_ready.go:98] node "embed-certs-325116" hosting pod "kube-controller-manager-embed-certs-325116" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-325116" has status "Ready":"False"
	I1104 12:07:48.872058   85759 pod_ready.go:82] duration metric: took 73.723761ms for pod "kube-controller-manager-embed-certs-325116" in "kube-system" namespace to be "Ready" ...
	E1104 12:07:48.872069   85759 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-325116" hosting pod "kube-controller-manager-embed-certs-325116" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-325116" has status "Ready":"False"
	I1104 12:07:48.872075   85759 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-phzgx" in "kube-system" namespace to be "Ready" ...
	I1104 12:07:49.271210   85759 pod_ready.go:98] node "embed-certs-325116" hosting pod "kube-proxy-phzgx" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-325116" has status "Ready":"False"
	I1104 12:07:49.271252   85759 pod_ready.go:82] duration metric: took 399.167509ms for pod "kube-proxy-phzgx" in "kube-system" namespace to be "Ready" ...
	E1104 12:07:49.271264   85759 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-325116" hosting pod "kube-proxy-phzgx" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-325116" has status "Ready":"False"
	I1104 12:07:49.271272   85759 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-325116" in "kube-system" namespace to be "Ready" ...
	I1104 12:07:49.671430   85759 pod_ready.go:98] node "embed-certs-325116" hosting pod "kube-scheduler-embed-certs-325116" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-325116" has status "Ready":"False"
	I1104 12:07:49.671453   85759 pod_ready.go:82] duration metric: took 400.174495ms for pod "kube-scheduler-embed-certs-325116" in "kube-system" namespace to be "Ready" ...
	E1104 12:07:49.671469   85759 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-325116" hosting pod "kube-scheduler-embed-certs-325116" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-325116" has status "Ready":"False"
	I1104 12:07:49.671475   85759 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace to be "Ready" ...
	I1104 12:07:50.070546   85759 pod_ready.go:98] node "embed-certs-325116" hosting pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-325116" has status "Ready":"False"
	I1104 12:07:50.070576   85759 pod_ready.go:82] duration metric: took 399.092108ms for pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace to be "Ready" ...
	E1104 12:07:50.070587   85759 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-325116" hosting pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-325116" has status "Ready":"False"
	I1104 12:07:50.070596   85759 pod_ready.go:39] duration metric: took 1.305717298s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1104 12:07:50.070615   85759 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1104 12:07:50.082815   85759 ops.go:34] apiserver oom_adj: -16
	I1104 12:07:50.082839   85759 kubeadm.go:597] duration metric: took 9.429385589s to restartPrimaryControlPlane
	I1104 12:07:50.082850   85759 kubeadm.go:394] duration metric: took 9.481667011s to StartCluster
	I1104 12:07:50.082871   85759 settings.go:142] acquiring lock: {Name:mk5550833c1f1a4ab4fbb2bf42ff8bd7f6341220 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 12:07:50.082952   85759 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19906-19898/kubeconfig
	I1104 12:07:50.086014   85759 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19906-19898/kubeconfig: {Name:mk164657c178e6abe086acb5c1cadb969cb2b0cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 12:07:50.086562   85759 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.47 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1104 12:07:50.086628   85759 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1104 12:07:50.086740   85759 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-325116"
	I1104 12:07:50.086763   85759 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-325116"
	I1104 12:07:50.086765   85759 config.go:182] Loaded profile config "embed-certs-325116": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	W1104 12:07:50.086776   85759 addons.go:243] addon storage-provisioner should already be in state true
	I1104 12:07:50.086774   85759 addons.go:69] Setting default-storageclass=true in profile "embed-certs-325116"
	I1104 12:07:50.086803   85759 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-325116"
	I1104 12:07:50.086787   85759 addons.go:69] Setting metrics-server=true in profile "embed-certs-325116"
	I1104 12:07:50.086812   85759 host.go:66] Checking if "embed-certs-325116" exists ...
	I1104 12:07:50.086825   85759 addons.go:234] Setting addon metrics-server=true in "embed-certs-325116"
	W1104 12:07:50.086837   85759 addons.go:243] addon metrics-server should already be in state true
	I1104 12:07:50.086866   85759 host.go:66] Checking if "embed-certs-325116" exists ...
	I1104 12:07:50.087120   85759 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:07:50.087148   85759 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:07:50.087160   85759 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:07:50.087178   85759 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:07:50.087247   85759 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:07:50.087286   85759 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:07:50.088320   85759 out.go:177] * Verifying Kubernetes components...
	I1104 12:07:50.089814   85759 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1104 12:07:50.102796   85759 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44903
	I1104 12:07:50.102976   85759 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36761
	I1104 12:07:50.103076   85759 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42697
	I1104 12:07:50.103462   85759 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:07:50.103491   85759 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:07:50.103566   85759 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:07:50.103990   85759 main.go:141] libmachine: Using API Version  1
	I1104 12:07:50.104014   85759 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:07:50.104085   85759 main.go:141] libmachine: Using API Version  1
	I1104 12:07:50.104101   85759 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:07:50.104199   85759 main.go:141] libmachine: Using API Version  1
	I1104 12:07:50.104223   85759 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:07:50.104368   85759 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:07:50.104402   85759 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:07:50.104545   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetState
	I1104 12:07:50.104559   85759 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:07:50.104949   85759 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:07:50.104987   85759 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:07:50.105081   85759 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:07:50.105116   85759 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:07:50.108134   85759 addons.go:234] Setting addon default-storageclass=true in "embed-certs-325116"
	W1104 12:07:50.108161   85759 addons.go:243] addon default-storageclass should already be in state true
	I1104 12:07:50.108193   85759 host.go:66] Checking if "embed-certs-325116" exists ...
	I1104 12:07:50.108597   85759 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:07:50.108648   85759 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:07:50.121556   85759 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39445
	I1104 12:07:50.122038   85759 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:07:50.122504   85759 main.go:141] libmachine: Using API Version  1
	I1104 12:07:50.122527   85759 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:07:50.122869   85759 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:07:50.123107   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetState
	I1104 12:07:50.125142   85759 main.go:141] libmachine: (embed-certs-325116) Calling .DriverName
	I1104 12:07:50.125294   85759 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44771
	I1104 12:07:50.125613   85759 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:07:50.125972   85759 main.go:141] libmachine: Using API Version  1
	I1104 12:07:50.125988   85759 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:07:50.126279   85759 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:07:50.126399   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetState
	I1104 12:07:50.127256   85759 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1104 12:07:50.127993   85759 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40487
	I1104 12:07:50.128235   85759 main.go:141] libmachine: (embed-certs-325116) Calling .DriverName
	I1104 12:07:50.128597   85759 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:07:50.128843   85759 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1104 12:07:50.128864   85759 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1104 12:07:50.128883   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHHostname
	I1104 12:07:50.129066   85759 main.go:141] libmachine: Using API Version  1
	I1104 12:07:50.129088   85759 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:07:50.129389   85759 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:07:50.129882   85759 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1104 12:07:47.619516   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:47.620045   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | unable to find current IP address of domain default-k8s-diff-port-036892 in network mk-default-k8s-diff-port-036892
	I1104 12:07:47.620072   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | I1104 12:07:47.620000   87142 retry.go:31] will retry after 3.554669963s: waiting for machine to come up
	I1104 12:07:50.130127   85759 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:07:50.130187   85759 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:07:50.131115   85759 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1104 12:07:50.131134   85759 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1104 12:07:50.131154   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHHostname
	I1104 12:07:50.131899   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:50.132352   85759 main.go:141] libmachine: (embed-certs-325116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:ab:49", ip: ""} in network mk-embed-certs-325116: {Iface:virbr1 ExpiryTime:2024-11-04 13:07:28 +0000 UTC Type:0 Mac:52:54:00:bd:ab:49 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:embed-certs-325116 Clientid:01:52:54:00:bd:ab:49}
	I1104 12:07:50.132375   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined IP address 192.168.39.47 and MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:50.132664   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHPort
	I1104 12:07:50.132830   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHKeyPath
	I1104 12:07:50.132986   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHUsername
	I1104 12:07:50.133099   85759 sshutil.go:53] new ssh client: &{IP:192.168.39.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/embed-certs-325116/id_rsa Username:docker}
	I1104 12:07:50.134698   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:50.135217   85759 main.go:141] libmachine: (embed-certs-325116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:ab:49", ip: ""} in network mk-embed-certs-325116: {Iface:virbr1 ExpiryTime:2024-11-04 13:07:28 +0000 UTC Type:0 Mac:52:54:00:bd:ab:49 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:embed-certs-325116 Clientid:01:52:54:00:bd:ab:49}
	I1104 12:07:50.135246   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined IP address 192.168.39.47 and MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:50.135454   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHPort
	I1104 12:07:50.135629   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHKeyPath
	I1104 12:07:50.135765   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHUsername
	I1104 12:07:50.135908   85759 sshutil.go:53] new ssh client: &{IP:192.168.39.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/embed-certs-325116/id_rsa Username:docker}
	I1104 12:07:50.146618   85759 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37765
	I1104 12:07:50.147639   85759 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:07:50.148281   85759 main.go:141] libmachine: Using API Version  1
	I1104 12:07:50.148307   85759 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:07:50.148617   85759 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:07:50.148860   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetState
	I1104 12:07:50.150751   85759 main.go:141] libmachine: (embed-certs-325116) Calling .DriverName
	I1104 12:07:50.151010   85759 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1104 12:07:50.151028   85759 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1104 12:07:50.151050   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHHostname
	I1104 12:07:50.153947   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:50.154385   85759 main.go:141] libmachine: (embed-certs-325116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:ab:49", ip: ""} in network mk-embed-certs-325116: {Iface:virbr1 ExpiryTime:2024-11-04 13:07:28 +0000 UTC Type:0 Mac:52:54:00:bd:ab:49 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:embed-certs-325116 Clientid:01:52:54:00:bd:ab:49}
	I1104 12:07:50.154418   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined IP address 192.168.39.47 and MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:50.154560   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHPort
	I1104 12:07:50.154749   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHKeyPath
	I1104 12:07:50.154886   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHUsername
	I1104 12:07:50.155028   85759 sshutil.go:53] new ssh client: &{IP:192.168.39.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/embed-certs-325116/id_rsa Username:docker}
	I1104 12:07:50.278380   85759 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1104 12:07:50.294682   85759 node_ready.go:35] waiting up to 6m0s for node "embed-certs-325116" to be "Ready" ...
	I1104 12:07:50.355769   85759 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1104 12:07:50.355790   85759 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1104 12:07:50.375818   85759 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1104 12:07:50.404741   85759 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1104 12:07:50.404766   85759 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1104 12:07:50.466718   85759 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1104 12:07:50.466748   85759 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1104 12:07:50.493662   85759 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1104 12:07:50.503255   85759 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1104 12:07:50.799735   85759 main.go:141] libmachine: Making call to close driver server
	I1104 12:07:50.799772   85759 main.go:141] libmachine: (embed-certs-325116) Calling .Close
	I1104 12:07:50.800039   85759 main.go:141] libmachine: Successfully made call to close driver server
	I1104 12:07:50.800086   85759 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 12:07:50.800094   85759 main.go:141] libmachine: (embed-certs-325116) DBG | Closing plugin on server side
	I1104 12:07:50.800107   85759 main.go:141] libmachine: Making call to close driver server
	I1104 12:07:50.800159   85759 main.go:141] libmachine: (embed-certs-325116) Calling .Close
	I1104 12:07:50.800382   85759 main.go:141] libmachine: Successfully made call to close driver server
	I1104 12:07:50.800394   85759 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 12:07:50.810559   85759 main.go:141] libmachine: Making call to close driver server
	I1104 12:07:50.810586   85759 main.go:141] libmachine: (embed-certs-325116) Calling .Close
	I1104 12:07:50.810857   85759 main.go:141] libmachine: Successfully made call to close driver server
	I1104 12:07:50.810876   85759 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 12:07:50.810893   85759 main.go:141] libmachine: (embed-certs-325116) DBG | Closing plugin on server side
	I1104 12:07:51.484326   85759 main.go:141] libmachine: Making call to close driver server
	I1104 12:07:51.484356   85759 main.go:141] libmachine: (embed-certs-325116) Calling .Close
	I1104 12:07:51.484671   85759 main.go:141] libmachine: Successfully made call to close driver server
	I1104 12:07:51.484687   85759 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 12:07:51.484695   85759 main.go:141] libmachine: Making call to close driver server
	I1104 12:07:51.484702   85759 main.go:141] libmachine: (embed-certs-325116) Calling .Close
	I1104 12:07:51.484899   85759 main.go:141] libmachine: Successfully made call to close driver server
	I1104 12:07:51.484938   85759 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 12:07:51.484950   85759 addons.go:475] Verifying addon metrics-server=true in "embed-certs-325116"
	I1104 12:07:51.549507   85759 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.046214827s)
	I1104 12:07:51.549559   85759 main.go:141] libmachine: Making call to close driver server
	I1104 12:07:51.549569   85759 main.go:141] libmachine: (embed-certs-325116) Calling .Close
	I1104 12:07:51.549886   85759 main.go:141] libmachine: Successfully made call to close driver server
	I1104 12:07:51.549906   85759 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 12:07:51.549909   85759 main.go:141] libmachine: (embed-certs-325116) DBG | Closing plugin on server side
	I1104 12:07:51.549916   85759 main.go:141] libmachine: Making call to close driver server
	I1104 12:07:51.549923   85759 main.go:141] libmachine: (embed-certs-325116) Calling .Close
	I1104 12:07:51.550143   85759 main.go:141] libmachine: Successfully made call to close driver server
	I1104 12:07:51.550164   85759 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 12:07:51.552039   85759 out.go:177] * Enabled addons: default-storageclass, metrics-server, storage-provisioner
	I1104 12:07:52.573915   86402 start.go:364] duration metric: took 3m30.781955626s to acquireMachinesLock for "old-k8s-version-589257"
	I1104 12:07:52.573984   86402 start.go:96] Skipping create...Using existing machine configuration
	I1104 12:07:52.573996   86402 fix.go:54] fixHost starting: 
	I1104 12:07:52.574443   86402 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:07:52.574500   86402 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:07:52.594310   86402 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33975
	I1104 12:07:52.594822   86402 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:07:52.595317   86402 main.go:141] libmachine: Using API Version  1
	I1104 12:07:52.595347   86402 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:07:52.595727   86402 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:07:52.595924   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .DriverName
	I1104 12:07:52.596093   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetState
	I1104 12:07:52.597578   86402 fix.go:112] recreateIfNeeded on old-k8s-version-589257: state=Stopped err=<nil>
	I1104 12:07:52.597615   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .DriverName
	W1104 12:07:52.597752   86402 fix.go:138] unexpected machine state, will restart: <nil>
	I1104 12:07:52.599659   86402 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-589257" ...
	I1104 12:07:51.176791   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:51.177282   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Found IP for machine: 192.168.72.130
	I1104 12:07:51.177313   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has current primary IP address 192.168.72.130 and MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:51.177325   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Reserving static IP address...
	I1104 12:07:51.177817   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-036892", mac: "52:54:00:da:02:d6", ip: "192.168.72.130"} in network mk-default-k8s-diff-port-036892: {Iface:virbr4 ExpiryTime:2024-11-04 13:07:45 +0000 UTC Type:0 Mac:52:54:00:da:02:d6 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:default-k8s-diff-port-036892 Clientid:01:52:54:00:da:02:d6}
	I1104 12:07:51.177863   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | skip adding static IP to network mk-default-k8s-diff-port-036892 - found existing host DHCP lease matching {name: "default-k8s-diff-port-036892", mac: "52:54:00:da:02:d6", ip: "192.168.72.130"}
	I1104 12:07:51.177876   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Reserved static IP address: 192.168.72.130
	I1104 12:07:51.177890   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Waiting for SSH to be available...
	I1104 12:07:51.177897   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | Getting to WaitForSSH function...
	I1104 12:07:51.180038   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:51.180440   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:02:d6", ip: ""} in network mk-default-k8s-diff-port-036892: {Iface:virbr4 ExpiryTime:2024-11-04 13:07:45 +0000 UTC Type:0 Mac:52:54:00:da:02:d6 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:default-k8s-diff-port-036892 Clientid:01:52:54:00:da:02:d6}
	I1104 12:07:51.180466   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined IP address 192.168.72.130 and MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:51.180581   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | Using SSH client type: external
	I1104 12:07:51.180611   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | Using SSH private key: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/default-k8s-diff-port-036892/id_rsa (-rw-------)
	I1104 12:07:51.180747   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.130 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19906-19898/.minikube/machines/default-k8s-diff-port-036892/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1104 12:07:51.180777   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | About to run SSH command:
	I1104 12:07:51.180795   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | exit 0
	I1104 12:07:51.309075   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | SSH cmd err, output: <nil>: 
	I1104 12:07:51.309445   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetConfigRaw
	I1104 12:07:51.310162   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetIP
	I1104 12:07:51.312651   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:51.313061   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:02:d6", ip: ""} in network mk-default-k8s-diff-port-036892: {Iface:virbr4 ExpiryTime:2024-11-04 13:07:45 +0000 UTC Type:0 Mac:52:54:00:da:02:d6 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:default-k8s-diff-port-036892 Clientid:01:52:54:00:da:02:d6}
	I1104 12:07:51.313090   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined IP address 192.168.72.130 and MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:51.313460   86301 profile.go:143] Saving config to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/default-k8s-diff-port-036892/config.json ...
	I1104 12:07:51.313720   86301 machine.go:93] provisionDockerMachine start ...
	I1104 12:07:51.313747   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .DriverName
	I1104 12:07:51.313926   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHHostname
	I1104 12:07:51.316269   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:51.316782   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:02:d6", ip: ""} in network mk-default-k8s-diff-port-036892: {Iface:virbr4 ExpiryTime:2024-11-04 13:07:45 +0000 UTC Type:0 Mac:52:54:00:da:02:d6 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:default-k8s-diff-port-036892 Clientid:01:52:54:00:da:02:d6}
	I1104 12:07:51.316829   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined IP address 192.168.72.130 and MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:51.316937   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHPort
	I1104 12:07:51.317162   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHKeyPath
	I1104 12:07:51.317335   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHKeyPath
	I1104 12:07:51.317598   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHUsername
	I1104 12:07:51.317777   86301 main.go:141] libmachine: Using SSH client type: native
	I1104 12:07:51.317981   86301 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.130 22 <nil> <nil>}
	I1104 12:07:51.317994   86301 main.go:141] libmachine: About to run SSH command:
	hostname
	I1104 12:07:51.441588   86301 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1104 12:07:51.441626   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetMachineName
	I1104 12:07:51.441876   86301 buildroot.go:166] provisioning hostname "default-k8s-diff-port-036892"
	I1104 12:07:51.441902   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetMachineName
	I1104 12:07:51.442097   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHHostname
	I1104 12:07:51.445155   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:51.445637   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:02:d6", ip: ""} in network mk-default-k8s-diff-port-036892: {Iface:virbr4 ExpiryTime:2024-11-04 13:07:45 +0000 UTC Type:0 Mac:52:54:00:da:02:d6 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:default-k8s-diff-port-036892 Clientid:01:52:54:00:da:02:d6}
	I1104 12:07:51.445670   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined IP address 192.168.72.130 and MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:51.445820   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHPort
	I1104 12:07:51.446013   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHKeyPath
	I1104 12:07:51.446186   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHKeyPath
	I1104 12:07:51.446352   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHUsername
	I1104 12:07:51.446539   86301 main.go:141] libmachine: Using SSH client type: native
	I1104 12:07:51.446753   86301 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.130 22 <nil> <nil>}
	I1104 12:07:51.446773   86301 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-036892 && echo "default-k8s-diff-port-036892" | sudo tee /etc/hostname
	I1104 12:07:51.578973   86301 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-036892
	
	I1104 12:07:51.579004   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHHostname
	I1104 12:07:51.581759   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:51.582105   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:02:d6", ip: ""} in network mk-default-k8s-diff-port-036892: {Iface:virbr4 ExpiryTime:2024-11-04 13:07:45 +0000 UTC Type:0 Mac:52:54:00:da:02:d6 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:default-k8s-diff-port-036892 Clientid:01:52:54:00:da:02:d6}
	I1104 12:07:51.582135   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined IP address 192.168.72.130 and MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:51.582299   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHPort
	I1104 12:07:51.582455   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHKeyPath
	I1104 12:07:51.582582   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHKeyPath
	I1104 12:07:51.582712   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHUsername
	I1104 12:07:51.582834   86301 main.go:141] libmachine: Using SSH client type: native
	I1104 12:07:51.583006   86301 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.130 22 <nil> <nil>}
	I1104 12:07:51.583022   86301 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-036892' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-036892/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-036892' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1104 12:07:51.702410   86301 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1104 12:07:51.702441   86301 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19906-19898/.minikube CaCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19906-19898/.minikube}
	I1104 12:07:51.702471   86301 buildroot.go:174] setting up certificates
	I1104 12:07:51.702483   86301 provision.go:84] configureAuth start
	I1104 12:07:51.702492   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetMachineName
	I1104 12:07:51.702789   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetIP
	I1104 12:07:51.705067   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:51.705419   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:02:d6", ip: ""} in network mk-default-k8s-diff-port-036892: {Iface:virbr4 ExpiryTime:2024-11-04 13:07:45 +0000 UTC Type:0 Mac:52:54:00:da:02:d6 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:default-k8s-diff-port-036892 Clientid:01:52:54:00:da:02:d6}
	I1104 12:07:51.705449   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined IP address 192.168.72.130 and MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:51.705567   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHHostname
	I1104 12:07:51.707341   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:51.707627   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:02:d6", ip: ""} in network mk-default-k8s-diff-port-036892: {Iface:virbr4 ExpiryTime:2024-11-04 13:07:45 +0000 UTC Type:0 Mac:52:54:00:da:02:d6 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:default-k8s-diff-port-036892 Clientid:01:52:54:00:da:02:d6}
	I1104 12:07:51.707658   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined IP address 192.168.72.130 and MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:51.707748   86301 provision.go:143] copyHostCerts
	I1104 12:07:51.707805   86301 exec_runner.go:144] found /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem, removing ...
	I1104 12:07:51.707818   86301 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem
	I1104 12:07:51.707870   86301 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem (1078 bytes)
	I1104 12:07:51.707969   86301 exec_runner.go:144] found /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem, removing ...
	I1104 12:07:51.707978   86301 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem
	I1104 12:07:51.707999   86301 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem (1123 bytes)
	I1104 12:07:51.708061   86301 exec_runner.go:144] found /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem, removing ...
	I1104 12:07:51.708067   86301 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem
	I1104 12:07:51.708085   86301 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem (1679 bytes)
	I1104 12:07:51.708132   86301 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-036892 san=[127.0.0.1 192.168.72.130 default-k8s-diff-port-036892 localhost minikube]
	I1104 12:07:51.935898   86301 provision.go:177] copyRemoteCerts
	I1104 12:07:51.935973   86301 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1104 12:07:51.936008   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHHostname
	I1104 12:07:51.938722   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:51.939100   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:02:d6", ip: ""} in network mk-default-k8s-diff-port-036892: {Iface:virbr4 ExpiryTime:2024-11-04 13:07:45 +0000 UTC Type:0 Mac:52:54:00:da:02:d6 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:default-k8s-diff-port-036892 Clientid:01:52:54:00:da:02:d6}
	I1104 12:07:51.939134   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined IP address 192.168.72.130 and MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:51.939266   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHPort
	I1104 12:07:51.939462   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHKeyPath
	I1104 12:07:51.939609   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHUsername
	I1104 12:07:51.939786   86301 sshutil.go:53] new ssh client: &{IP:192.168.72.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/default-k8s-diff-port-036892/id_rsa Username:docker}
	I1104 12:07:52.027147   86301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1104 12:07:52.054828   86301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1104 12:07:52.078755   86301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1104 12:07:52.101312   86301 provision.go:87] duration metric: took 398.817409ms to configureAuth
	I1104 12:07:52.101338   86301 buildroot.go:189] setting minikube options for container-runtime
	I1104 12:07:52.101523   86301 config.go:182] Loaded profile config "default-k8s-diff-port-036892": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1104 12:07:52.101608   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHHostname
	I1104 12:07:52.104187   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:52.104549   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:02:d6", ip: ""} in network mk-default-k8s-diff-port-036892: {Iface:virbr4 ExpiryTime:2024-11-04 13:07:45 +0000 UTC Type:0 Mac:52:54:00:da:02:d6 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:default-k8s-diff-port-036892 Clientid:01:52:54:00:da:02:d6}
	I1104 12:07:52.104581   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined IP address 192.168.72.130 and MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:52.104700   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHPort
	I1104 12:07:52.104890   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHKeyPath
	I1104 12:07:52.105028   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHKeyPath
	I1104 12:07:52.105157   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHUsername
	I1104 12:07:52.105319   86301 main.go:141] libmachine: Using SSH client type: native
	I1104 12:07:52.105490   86301 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.130 22 <nil> <nil>}
	I1104 12:07:52.105514   86301 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1104 12:07:52.331840   86301 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1104 12:07:52.331865   86301 machine.go:96] duration metric: took 1.018128337s to provisionDockerMachine
	I1104 12:07:52.331875   86301 start.go:293] postStartSetup for "default-k8s-diff-port-036892" (driver="kvm2")
	I1104 12:07:52.331884   86301 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1104 12:07:52.331898   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .DriverName
	I1104 12:07:52.332229   86301 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1104 12:07:52.332261   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHHostname
	I1104 12:07:52.334710   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:52.335005   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:02:d6", ip: ""} in network mk-default-k8s-diff-port-036892: {Iface:virbr4 ExpiryTime:2024-11-04 13:07:45 +0000 UTC Type:0 Mac:52:54:00:da:02:d6 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:default-k8s-diff-port-036892 Clientid:01:52:54:00:da:02:d6}
	I1104 12:07:52.335036   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined IP address 192.168.72.130 and MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:52.335176   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHPort
	I1104 12:07:52.335342   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHKeyPath
	I1104 12:07:52.335447   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHUsername
	I1104 12:07:52.335547   86301 sshutil.go:53] new ssh client: &{IP:192.168.72.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/default-k8s-diff-port-036892/id_rsa Username:docker}
	I1104 12:07:52.419392   86301 ssh_runner.go:195] Run: cat /etc/os-release
	I1104 12:07:52.423306   86301 info.go:137] Remote host: Buildroot 2023.02.9
	I1104 12:07:52.423335   86301 filesync.go:126] Scanning /home/jenkins/minikube-integration/19906-19898/.minikube/addons for local assets ...
	I1104 12:07:52.423396   86301 filesync.go:126] Scanning /home/jenkins/minikube-integration/19906-19898/.minikube/files for local assets ...
	I1104 12:07:52.423483   86301 filesync.go:149] local asset: /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem -> 272182.pem in /etc/ssl/certs
	I1104 12:07:52.423575   86301 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1104 12:07:52.432625   86301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem --> /etc/ssl/certs/272182.pem (1708 bytes)
	I1104 12:07:52.456616   86301 start.go:296] duration metric: took 124.726284ms for postStartSetup
	I1104 12:07:52.456664   86301 fix.go:56] duration metric: took 17.406645021s for fixHost
	I1104 12:07:52.456689   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHHostname
	I1104 12:07:52.459189   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:52.459540   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:02:d6", ip: ""} in network mk-default-k8s-diff-port-036892: {Iface:virbr4 ExpiryTime:2024-11-04 13:07:45 +0000 UTC Type:0 Mac:52:54:00:da:02:d6 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:default-k8s-diff-port-036892 Clientid:01:52:54:00:da:02:d6}
	I1104 12:07:52.459573   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined IP address 192.168.72.130 and MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:52.459777   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHPort
	I1104 12:07:52.459967   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHKeyPath
	I1104 12:07:52.460093   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHKeyPath
	I1104 12:07:52.460218   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHUsername
	I1104 12:07:52.460349   86301 main.go:141] libmachine: Using SSH client type: native
	I1104 12:07:52.460521   86301 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.130 22 <nil> <nil>}
	I1104 12:07:52.460533   86301 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1104 12:07:52.573760   86301 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730722072.546172571
	
	I1104 12:07:52.573781   86301 fix.go:216] guest clock: 1730722072.546172571
	I1104 12:07:52.573787   86301 fix.go:229] Guest: 2024-11-04 12:07:52.546172571 +0000 UTC Remote: 2024-11-04 12:07:52.45666981 +0000 UTC m=+212.207079326 (delta=89.502761ms)
	I1104 12:07:52.573827   86301 fix.go:200] guest clock delta is within tolerance: 89.502761ms
	I1104 12:07:52.573832   86301 start.go:83] releasing machines lock for "default-k8s-diff-port-036892", held for 17.523849814s
	I1104 12:07:52.573856   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .DriverName
	I1104 12:07:52.574109   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetIP
	I1104 12:07:52.576773   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:52.577125   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:02:d6", ip: ""} in network mk-default-k8s-diff-port-036892: {Iface:virbr4 ExpiryTime:2024-11-04 13:07:45 +0000 UTC Type:0 Mac:52:54:00:da:02:d6 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:default-k8s-diff-port-036892 Clientid:01:52:54:00:da:02:d6}
	I1104 12:07:52.577151   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined IP address 192.168.72.130 and MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:52.577304   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .DriverName
	I1104 12:07:52.577776   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .DriverName
	I1104 12:07:52.577950   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .DriverName
	I1104 12:07:52.578043   86301 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1104 12:07:52.578079   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHHostname
	I1104 12:07:52.578133   86301 ssh_runner.go:195] Run: cat /version.json
	I1104 12:07:52.578159   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHHostname
	I1104 12:07:52.580773   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:52.580909   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:52.581128   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:02:d6", ip: ""} in network mk-default-k8s-diff-port-036892: {Iface:virbr4 ExpiryTime:2024-11-04 13:07:45 +0000 UTC Type:0 Mac:52:54:00:da:02:d6 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:default-k8s-diff-port-036892 Clientid:01:52:54:00:da:02:d6}
	I1104 12:07:52.581154   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined IP address 192.168.72.130 and MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:52.581179   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:02:d6", ip: ""} in network mk-default-k8s-diff-port-036892: {Iface:virbr4 ExpiryTime:2024-11-04 13:07:45 +0000 UTC Type:0 Mac:52:54:00:da:02:d6 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:default-k8s-diff-port-036892 Clientid:01:52:54:00:da:02:d6}
	I1104 12:07:52.581196   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined IP address 192.168.72.130 and MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:52.581286   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHPort
	I1104 12:07:52.581488   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHPort
	I1104 12:07:52.581529   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHKeyPath
	I1104 12:07:52.581660   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHUsername
	I1104 12:07:52.581677   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHKeyPath
	I1104 12:07:52.581770   86301 sshutil.go:53] new ssh client: &{IP:192.168.72.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/default-k8s-diff-port-036892/id_rsa Username:docker}
	I1104 12:07:52.581823   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHUsername
	I1104 12:07:52.581946   86301 sshutil.go:53] new ssh client: &{IP:192.168.72.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/default-k8s-diff-port-036892/id_rsa Username:docker}
	I1104 12:07:52.683801   86301 ssh_runner.go:195] Run: systemctl --version
	I1104 12:07:52.689498   86301 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1104 12:07:52.830236   86301 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1104 12:07:52.835868   86301 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1104 12:07:52.835951   86301 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1104 12:07:52.851557   86301 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1104 12:07:52.851586   86301 start.go:495] detecting cgroup driver to use...
	I1104 12:07:52.851656   86301 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1104 12:07:52.868648   86301 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1104 12:07:52.883434   86301 docker.go:217] disabling cri-docker service (if available) ...
	I1104 12:07:52.883507   86301 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1104 12:07:52.898233   86301 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1104 12:07:52.912615   86301 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1104 12:07:53.036342   86301 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1104 12:07:53.183326   86301 docker.go:233] disabling docker service ...
	I1104 12:07:53.183407   86301 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1104 12:07:53.197465   86301 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1104 12:07:53.210118   86301 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1104 12:07:53.354857   86301 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1104 12:07:53.490760   86301 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1104 12:07:53.506829   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1104 12:07:53.526401   86301 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1104 12:07:53.526464   86301 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:07:53.537264   86301 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1104 12:07:53.537339   86301 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:07:53.547882   86301 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:07:53.558039   86301 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:07:53.569347   86301 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1104 12:07:53.579931   86301 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:07:53.589594   86301 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:07:53.606753   86301 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:07:53.623316   86301 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1104 12:07:53.638183   86301 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1104 12:07:53.638245   86301 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1104 12:07:53.656452   86301 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1104 12:07:53.666343   86301 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1104 12:07:53.784882   86301 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1104 12:07:53.879727   86301 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1104 12:07:53.879790   86301 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1104 12:07:53.884438   86301 start.go:563] Will wait 60s for crictl version
	I1104 12:07:53.884494   86301 ssh_runner.go:195] Run: which crictl
	I1104 12:07:53.887785   86301 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1104 12:07:53.926395   86301 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1104 12:07:53.926496   86301 ssh_runner.go:195] Run: crio --version
	I1104 12:07:53.963049   86301 ssh_runner.go:195] Run: crio --version
	I1104 12:07:53.996513   86301 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1104 12:07:53.997774   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetIP
	I1104 12:07:54.000829   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:54.001214   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:02:d6", ip: ""} in network mk-default-k8s-diff-port-036892: {Iface:virbr4 ExpiryTime:2024-11-04 13:07:45 +0000 UTC Type:0 Mac:52:54:00:da:02:d6 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:default-k8s-diff-port-036892 Clientid:01:52:54:00:da:02:d6}
	I1104 12:07:54.001300   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined IP address 192.168.72.130 and MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:54.001469   86301 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1104 12:07:54.005521   86301 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1104 12:07:54.021723   86301 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-036892 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-036892 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.130 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1104 12:07:54.021915   86301 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1104 12:07:54.021979   86301 ssh_runner.go:195] Run: sudo crictl images --output json
	I1104 12:07:54.072114   86301 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1104 12:07:54.072178   86301 ssh_runner.go:195] Run: which lz4
	I1104 12:07:54.077106   86301 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1104 12:07:54.081979   86301 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1104 12:07:54.082018   86301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1104 12:07:51.553141   85759 addons.go:510] duration metric: took 1.466523338s for enable addons: enabled=[default-storageclass metrics-server storage-provisioner]
	I1104 12:07:52.298494   85759 node_ready.go:53] node "embed-certs-325116" has status "Ready":"False"
	I1104 12:07:54.299895   85759 node_ready.go:53] node "embed-certs-325116" has status "Ready":"False"
	I1104 12:07:52.600997   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .Start
	I1104 12:07:52.601180   86402 main.go:141] libmachine: (old-k8s-version-589257) Ensuring networks are active...
	I1104 12:07:52.602131   86402 main.go:141] libmachine: (old-k8s-version-589257) Ensuring network default is active
	I1104 12:07:52.602560   86402 main.go:141] libmachine: (old-k8s-version-589257) Ensuring network mk-old-k8s-version-589257 is active
	I1104 12:07:52.603030   86402 main.go:141] libmachine: (old-k8s-version-589257) Getting domain xml...
	I1104 12:07:52.603859   86402 main.go:141] libmachine: (old-k8s-version-589257) Creating domain...
	I1104 12:07:53.855214   86402 main.go:141] libmachine: (old-k8s-version-589257) Waiting to get IP...
	I1104 12:07:53.856063   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:07:53.856539   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | unable to find current IP address of domain old-k8s-version-589257 in network mk-old-k8s-version-589257
	I1104 12:07:53.856594   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | I1104 12:07:53.856513   87367 retry.go:31] will retry after 268.725451ms: waiting for machine to come up
	I1104 12:07:54.127094   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:07:54.127584   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | unable to find current IP address of domain old-k8s-version-589257 in network mk-old-k8s-version-589257
	I1104 12:07:54.127612   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | I1104 12:07:54.127560   87367 retry.go:31] will retry after 239.665225ms: waiting for machine to come up
	I1104 12:07:54.369139   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:07:54.369777   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | unable to find current IP address of domain old-k8s-version-589257 in network mk-old-k8s-version-589257
	I1104 12:07:54.369798   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | I1104 12:07:54.369710   87367 retry.go:31] will retry after 386.228261ms: waiting for machine to come up
	I1104 12:07:54.757191   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:07:54.757637   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | unable to find current IP address of domain old-k8s-version-589257 in network mk-old-k8s-version-589257
	I1104 12:07:54.757665   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | I1104 12:07:54.757591   87367 retry.go:31] will retry after 571.244573ms: waiting for machine to come up
	I1104 12:07:55.330439   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:07:55.331187   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | unable to find current IP address of domain old-k8s-version-589257 in network mk-old-k8s-version-589257
	I1104 12:07:55.331216   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | I1104 12:07:55.331144   87367 retry.go:31] will retry after 539.328185ms: waiting for machine to come up
	I1104 12:07:55.871869   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:07:55.872373   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | unable to find current IP address of domain old-k8s-version-589257 in network mk-old-k8s-version-589257
	I1104 12:07:55.872403   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | I1104 12:07:55.872335   87367 retry.go:31] will retry after 879.285089ms: waiting for machine to come up
	I1104 12:07:55.376802   86301 crio.go:462] duration metric: took 1.299729399s to copy over tarball
	I1104 12:07:55.376881   86301 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1104 12:07:57.716230   86301 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.339307666s)
	I1104 12:07:57.716268   86301 crio.go:469] duration metric: took 2.339436958s to extract the tarball
	I1104 12:07:57.716277   86301 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1104 12:07:57.753216   86301 ssh_runner.go:195] Run: sudo crictl images --output json
	I1104 12:07:57.799042   86301 crio.go:514] all images are preloaded for cri-o runtime.
	I1104 12:07:57.799145   86301 cache_images.go:84] Images are preloaded, skipping loading
	I1104 12:07:57.799161   86301 kubeadm.go:934] updating node { 192.168.72.130 8444 v1.31.2 crio true true} ...
	I1104 12:07:57.799273   86301 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-036892 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.130
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-036892 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1104 12:07:57.799347   86301 ssh_runner.go:195] Run: crio config
	I1104 12:07:57.851871   86301 cni.go:84] Creating CNI manager for ""
	I1104 12:07:57.851892   86301 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1104 12:07:57.851900   86301 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1104 12:07:57.851919   86301 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.130 APIServerPort:8444 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-036892 NodeName:default-k8s-diff-port-036892 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.130"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.130 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1104 12:07:57.852056   86301 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.130
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-036892"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.130"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.130"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1104 12:07:57.852116   86301 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1104 12:07:57.862269   86301 binaries.go:44] Found k8s binaries, skipping transfer
	I1104 12:07:57.862343   86301 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1104 12:07:57.872253   86301 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1104 12:07:57.889328   86301 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1104 12:07:57.908250   86301 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2308 bytes)
	I1104 12:07:57.926081   86301 ssh_runner.go:195] Run: grep 192.168.72.130	control-plane.minikube.internal$ /etc/hosts
	I1104 12:07:57.929870   86301 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.130	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1104 12:07:57.943872   86301 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1104 12:07:58.070141   86301 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1104 12:07:58.089370   86301 certs.go:68] Setting up /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/default-k8s-diff-port-036892 for IP: 192.168.72.130
	I1104 12:07:58.089397   86301 certs.go:194] generating shared ca certs ...
	I1104 12:07:58.089423   86301 certs.go:226] acquiring lock for ca certs: {Name:mk4fb0469da6697654d0290fd25edddd463f47c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 12:07:58.089596   86301 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.key
	I1104 12:07:58.089647   86301 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.key
	I1104 12:07:58.089659   86301 certs.go:256] generating profile certs ...
	I1104 12:07:58.089765   86301 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/default-k8s-diff-port-036892/client.key
	I1104 12:07:58.089831   86301 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/default-k8s-diff-port-036892/apiserver.key.713851b2
	I1104 12:07:58.089889   86301 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/default-k8s-diff-port-036892/proxy-client.key
	I1104 12:07:58.090054   86301 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218.pem (1338 bytes)
	W1104 12:07:58.090100   86301 certs.go:480] ignoring /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218_empty.pem, impossibly tiny 0 bytes
	I1104 12:07:58.090116   86301 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem (1675 bytes)
	I1104 12:07:58.090149   86301 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem (1078 bytes)
	I1104 12:07:58.090184   86301 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem (1123 bytes)
	I1104 12:07:58.090219   86301 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem (1679 bytes)
	I1104 12:07:58.090279   86301 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem (1708 bytes)
	I1104 12:07:58.090977   86301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1104 12:07:58.125282   86301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1104 12:07:58.168289   86301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1104 12:07:58.210967   86301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1104 12:07:58.253986   86301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/default-k8s-diff-port-036892/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1104 12:07:58.280769   86301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/default-k8s-diff-port-036892/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1104 12:07:58.308406   86301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/default-k8s-diff-port-036892/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1104 12:07:58.334250   86301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/default-k8s-diff-port-036892/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1104 12:07:58.363224   86301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem --> /usr/share/ca-certificates/272182.pem (1708 bytes)
	I1104 12:07:58.391795   86301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1104 12:07:58.420782   86301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218.pem --> /usr/share/ca-certificates/27218.pem (1338 bytes)
	I1104 12:07:58.446611   86301 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1104 12:07:58.465895   86301 ssh_runner.go:195] Run: openssl version
	I1104 12:07:58.471614   86301 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/27218.pem && ln -fs /usr/share/ca-certificates/27218.pem /etc/ssl/certs/27218.pem"
	I1104 12:07:58.482139   86301 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/27218.pem
	I1104 12:07:58.486533   86301 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  4 10:49 /usr/share/ca-certificates/27218.pem
	I1104 12:07:58.486591   86301 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/27218.pem
	I1104 12:07:58.492217   86301 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/27218.pem /etc/ssl/certs/51391683.0"
	I1104 12:07:58.502724   86301 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/272182.pem && ln -fs /usr/share/ca-certificates/272182.pem /etc/ssl/certs/272182.pem"
	I1104 12:07:58.514146   86301 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/272182.pem
	I1104 12:07:58.518243   86301 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  4 10:49 /usr/share/ca-certificates/272182.pem
	I1104 12:07:58.518303   86301 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/272182.pem
	I1104 12:07:58.523579   86301 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/272182.pem /etc/ssl/certs/3ec20f2e.0"
	I1104 12:07:58.533993   86301 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1104 12:07:58.544137   86301 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1104 12:07:58.548190   86301 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  4 10:38 /usr/share/ca-certificates/minikubeCA.pem
	I1104 12:07:58.548250   86301 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1104 12:07:58.553714   86301 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1104 12:07:58.564221   86301 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1104 12:07:58.568445   86301 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1104 12:07:58.574072   86301 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1104 12:07:58.579551   86301 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1104 12:07:58.584909   86301 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1104 12:07:58.590102   86301 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1104 12:07:58.595227   86301 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1104 12:07:58.600338   86301 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-036892 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-036892 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.130 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1104 12:07:58.600445   86301 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1104 12:07:58.600492   86301 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1104 12:07:58.634282   86301 cri.go:89] found id: ""
	I1104 12:07:58.634352   86301 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1104 12:07:58.644578   86301 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1104 12:07:58.644597   86301 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1104 12:07:58.644635   86301 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1104 12:07:58.654412   86301 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1104 12:07:58.655638   86301 kubeconfig.go:125] found "default-k8s-diff-port-036892" server: "https://192.168.72.130:8444"
	I1104 12:07:58.658639   86301 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1104 12:07:58.667867   86301 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.130
	I1104 12:07:58.667900   86301 kubeadm.go:1160] stopping kube-system containers ...
	I1104 12:07:58.667913   86301 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1104 12:07:58.667971   86301 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1104 12:07:58.702765   86301 cri.go:89] found id: ""
	I1104 12:07:58.702844   86301 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1104 12:07:58.718368   86301 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1104 12:07:58.727671   86301 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1104 12:07:58.727690   86301 kubeadm.go:157] found existing configuration files:
	
	I1104 12:07:58.727750   86301 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1104 12:07:58.736350   86301 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1104 12:07:58.736424   86301 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1104 12:07:58.745441   86301 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1104 12:07:58.753945   86301 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1104 12:07:58.754011   86301 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1104 12:07:58.763134   86301 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1104 12:07:58.771588   86301 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1104 12:07:58.771651   86301 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1104 12:07:58.780623   86301 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1104 12:07:58.788962   86301 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1104 12:07:58.789036   86301 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1104 12:07:58.798472   86301 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1104 12:07:58.808209   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1104 12:07:58.919153   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1104 12:07:59.679355   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1104 12:07:59.889628   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1104 12:07:59.958981   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1104 12:08:00.048061   86301 api_server.go:52] waiting for apiserver process to appear ...
	I1104 12:08:00.048158   86301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:07:56.798747   85759 node_ready.go:53] node "embed-certs-325116" has status "Ready":"False"
	I1104 12:07:57.799286   85759 node_ready.go:49] node "embed-certs-325116" has status "Ready":"True"
	I1104 12:07:57.799308   85759 node_ready.go:38] duration metric: took 7.504592975s for node "embed-certs-325116" to be "Ready" ...
	I1104 12:07:57.799319   85759 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1104 12:07:57.805595   85759 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-mf8xg" in "kube-system" namespace to be "Ready" ...
	I1104 12:07:57.812394   85759 pod_ready.go:93] pod "coredns-7c65d6cfc9-mf8xg" in "kube-system" namespace has status "Ready":"True"
	I1104 12:07:57.812421   85759 pod_ready.go:82] duration metric: took 6.791823ms for pod "coredns-7c65d6cfc9-mf8xg" in "kube-system" namespace to be "Ready" ...
	I1104 12:07:57.812434   85759 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-325116" in "kube-system" namespace to be "Ready" ...
	I1104 12:07:57.818338   85759 pod_ready.go:93] pod "etcd-embed-certs-325116" in "kube-system" namespace has status "Ready":"True"
	I1104 12:07:57.818359   85759 pod_ready.go:82] duration metric: took 5.916571ms for pod "etcd-embed-certs-325116" in "kube-system" namespace to be "Ready" ...
	I1104 12:07:57.818400   85759 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-325116" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:00.015222   85759 pod_ready.go:103] pod "kube-apiserver-embed-certs-325116" in "kube-system" namespace has status "Ready":"False"
	I1104 12:07:56.752983   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:07:56.753577   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | unable to find current IP address of domain old-k8s-version-589257 in network mk-old-k8s-version-589257
	I1104 12:07:56.753613   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | I1104 12:07:56.753542   87367 retry.go:31] will retry after 1.081359862s: waiting for machine to come up
	I1104 12:07:57.836518   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:07:57.836963   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | unable to find current IP address of domain old-k8s-version-589257 in network mk-old-k8s-version-589257
	I1104 12:07:57.836990   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | I1104 12:07:57.836914   87367 retry.go:31] will retry after 1.149571097s: waiting for machine to come up
	I1104 12:07:58.987694   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:07:58.988125   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | unable to find current IP address of domain old-k8s-version-589257 in network mk-old-k8s-version-589257
	I1104 12:07:58.988152   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | I1104 12:07:58.988077   87367 retry.go:31] will retry after 1.247311806s: waiting for machine to come up
	I1104 12:08:00.237634   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:00.238147   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | unable to find current IP address of domain old-k8s-version-589257 in network mk-old-k8s-version-589257
	I1104 12:08:00.238217   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | I1104 12:08:00.238109   87367 retry.go:31] will retry after 2.058125339s: waiting for machine to come up
	I1104 12:08:00.549003   86301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:01.048325   86301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:01.548502   86301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:01.563976   86301 api_server.go:72] duration metric: took 1.515915725s to wait for apiserver process to appear ...
	I1104 12:08:01.564003   86301 api_server.go:88] waiting for apiserver healthz status ...
	I1104 12:08:01.564021   86301 api_server.go:253] Checking apiserver healthz at https://192.168.72.130:8444/healthz ...
	I1104 12:08:04.008662   86301 api_server.go:279] https://192.168.72.130:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1104 12:08:04.008689   86301 api_server.go:103] status: https://192.168.72.130:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1104 12:08:04.008701   86301 api_server.go:253] Checking apiserver healthz at https://192.168.72.130:8444/healthz ...
	I1104 12:08:04.033053   86301 api_server.go:279] https://192.168.72.130:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1104 12:08:04.033085   86301 api_server.go:103] status: https://192.168.72.130:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1104 12:08:04.064261   86301 api_server.go:253] Checking apiserver healthz at https://192.168.72.130:8444/healthz ...
	I1104 12:08:04.084034   86301 api_server.go:279] https://192.168.72.130:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1104 12:08:04.084062   86301 api_server.go:103] status: https://192.168.72.130:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1104 12:08:04.564564   86301 api_server.go:253] Checking apiserver healthz at https://192.168.72.130:8444/healthz ...
	I1104 12:08:04.570062   86301 api_server.go:279] https://192.168.72.130:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1104 12:08:04.570090   86301 api_server.go:103] status: https://192.168.72.130:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1104 12:08:05.064688   86301 api_server.go:253] Checking apiserver healthz at https://192.168.72.130:8444/healthz ...
	I1104 12:08:05.069572   86301 api_server.go:279] https://192.168.72.130:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1104 12:08:05.069600   86301 api_server.go:103] status: https://192.168.72.130:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1104 12:08:05.564628   86301 api_server.go:253] Checking apiserver healthz at https://192.168.72.130:8444/healthz ...
	I1104 12:08:05.570537   86301 api_server.go:279] https://192.168.72.130:8444/healthz returned 200:
	ok
	I1104 12:08:05.577335   86301 api_server.go:141] control plane version: v1.31.2
	I1104 12:08:05.577360   86301 api_server.go:131] duration metric: took 4.01335048s to wait for apiserver health ...
	I1104 12:08:05.577371   86301 cni.go:84] Creating CNI manager for ""
	I1104 12:08:05.577379   86301 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1104 12:08:05.578990   86301 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1104 12:08:01.824677   85759 pod_ready.go:93] pod "kube-apiserver-embed-certs-325116" in "kube-system" namespace has status "Ready":"True"
	I1104 12:08:01.824703   85759 pod_ready.go:82] duration metric: took 4.006292816s for pod "kube-apiserver-embed-certs-325116" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:01.824717   85759 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-325116" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:01.833386   85759 pod_ready.go:93] pod "kube-controller-manager-embed-certs-325116" in "kube-system" namespace has status "Ready":"True"
	I1104 12:08:01.833415   85759 pod_ready.go:82] duration metric: took 8.688522ms for pod "kube-controller-manager-embed-certs-325116" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:01.833428   85759 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-phzgx" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:01.839346   85759 pod_ready.go:93] pod "kube-proxy-phzgx" in "kube-system" namespace has status "Ready":"True"
	I1104 12:08:01.839370   85759 pod_ready.go:82] duration metric: took 5.933971ms for pod "kube-proxy-phzgx" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:01.839379   85759 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-325116" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:01.844449   85759 pod_ready.go:93] pod "kube-scheduler-embed-certs-325116" in "kube-system" namespace has status "Ready":"True"
	I1104 12:08:01.844476   85759 pod_ready.go:82] duration metric: took 5.08969ms for pod "kube-scheduler-embed-certs-325116" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:01.844490   85759 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:03.852871   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:02.298631   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:02.299046   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | unable to find current IP address of domain old-k8s-version-589257 in network mk-old-k8s-version-589257
	I1104 12:08:02.299079   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | I1104 12:08:02.298978   87367 retry.go:31] will retry after 2.664667046s: waiting for machine to come up
	I1104 12:08:04.964700   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:04.965185   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | unable to find current IP address of domain old-k8s-version-589257 in network mk-old-k8s-version-589257
	I1104 12:08:04.965209   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | I1104 12:08:04.965135   87367 retry.go:31] will retry after 2.716802395s: waiting for machine to come up
	I1104 12:08:05.580188   86301 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1104 12:08:05.591930   86301 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1104 12:08:05.609969   86301 system_pods.go:43] waiting for kube-system pods to appear ...
	I1104 12:08:05.621524   86301 system_pods.go:59] 8 kube-system pods found
	I1104 12:08:05.621559   86301 system_pods.go:61] "coredns-7c65d6cfc9-zw2tv" [71ce75a4-f051-4014-9ed0-7b275ea940a9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1104 12:08:05.621579   86301 system_pods.go:61] "etcd-default-k8s-diff-port-036892" [7e46d97c-96b5-4301-b98a-f33b69937411] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1104 12:08:05.621590   86301 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-036892" [483cebd0-7ceb-4bf4-b1f9-e33be61b136e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1104 12:08:05.621599   86301 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-036892" [c2dc4343-177a-4a4c-8a25-47078ec664f1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1104 12:08:05.621609   86301 system_pods.go:61] "kube-proxy-j2srm" [9450cebd-aefb-4f1a-bb99-7d1dab054dd7] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1104 12:08:05.621623   86301 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-036892" [505d8202-5e02-4abd-9eff-163810a91eb2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1104 12:08:05.621637   86301 system_pods.go:61] "metrics-server-6867b74b74-2wl94" [7f7cc9c1-420c-480e-b6b7-1a2027bf2f9d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1104 12:08:05.621646   86301 system_pods.go:61] "storage-provisioner" [18745f89-fc15-4a4c-b68b-7e80cd4f393b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1104 12:08:05.621656   86301 system_pods.go:74] duration metric: took 11.668493ms to wait for pod list to return data ...
	I1104 12:08:05.621669   86301 node_conditions.go:102] verifying NodePressure condition ...
	I1104 12:08:05.626555   86301 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1104 12:08:05.626583   86301 node_conditions.go:123] node cpu capacity is 2
	I1104 12:08:05.626600   86301 node_conditions.go:105] duration metric: took 4.924748ms to run NodePressure ...
	I1104 12:08:05.626620   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1104 12:08:05.899159   86301 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1104 12:08:05.905004   86301 kubeadm.go:739] kubelet initialised
	I1104 12:08:05.905027   86301 kubeadm.go:740] duration metric: took 5.831926ms waiting for restarted kubelet to initialise ...
	I1104 12:08:05.905035   86301 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1104 12:08:05.910301   86301 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-zw2tv" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:05.917517   86301 pod_ready.go:98] node "default-k8s-diff-port-036892" hosting pod "coredns-7c65d6cfc9-zw2tv" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-036892" has status "Ready":"False"
	I1104 12:08:05.917552   86301 pod_ready.go:82] duration metric: took 7.223252ms for pod "coredns-7c65d6cfc9-zw2tv" in "kube-system" namespace to be "Ready" ...
	E1104 12:08:05.917564   86301 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-036892" hosting pod "coredns-7c65d6cfc9-zw2tv" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-036892" has status "Ready":"False"
	I1104 12:08:05.917577   86301 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-036892" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:05.924077   86301 pod_ready.go:98] node "default-k8s-diff-port-036892" hosting pod "etcd-default-k8s-diff-port-036892" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-036892" has status "Ready":"False"
	I1104 12:08:05.924108   86301 pod_ready.go:82] duration metric: took 6.519268ms for pod "etcd-default-k8s-diff-port-036892" in "kube-system" namespace to be "Ready" ...
	E1104 12:08:05.924123   86301 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-036892" hosting pod "etcd-default-k8s-diff-port-036892" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-036892" has status "Ready":"False"
	I1104 12:08:05.924133   86301 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-036892" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:05.929584   86301 pod_ready.go:98] node "default-k8s-diff-port-036892" hosting pod "kube-apiserver-default-k8s-diff-port-036892" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-036892" has status "Ready":"False"
	I1104 12:08:05.929611   86301 pod_ready.go:82] duration metric: took 5.464108ms for pod "kube-apiserver-default-k8s-diff-port-036892" in "kube-system" namespace to be "Ready" ...
	E1104 12:08:05.929625   86301 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-036892" hosting pod "kube-apiserver-default-k8s-diff-port-036892" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-036892" has status "Ready":"False"
	I1104 12:08:05.929640   86301 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-036892" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:06.013629   86301 pod_ready.go:98] node "default-k8s-diff-port-036892" hosting pod "kube-controller-manager-default-k8s-diff-port-036892" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-036892" has status "Ready":"False"
	I1104 12:08:06.013655   86301 pod_ready.go:82] duration metric: took 84.003349ms for pod "kube-controller-manager-default-k8s-diff-port-036892" in "kube-system" namespace to be "Ready" ...
	E1104 12:08:06.013666   86301 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-036892" hosting pod "kube-controller-manager-default-k8s-diff-port-036892" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-036892" has status "Ready":"False"
	I1104 12:08:06.013674   86301 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-j2srm" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:06.413337   86301 pod_ready.go:98] node "default-k8s-diff-port-036892" hosting pod "kube-proxy-j2srm" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-036892" has status "Ready":"False"
	I1104 12:08:06.413362   86301 pod_ready.go:82] duration metric: took 399.676932ms for pod "kube-proxy-j2srm" in "kube-system" namespace to be "Ready" ...
	E1104 12:08:06.413372   86301 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-036892" hosting pod "kube-proxy-j2srm" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-036892" has status "Ready":"False"
	I1104 12:08:06.413379   86301 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-036892" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:06.813910   86301 pod_ready.go:98] node "default-k8s-diff-port-036892" hosting pod "kube-scheduler-default-k8s-diff-port-036892" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-036892" has status "Ready":"False"
	I1104 12:08:06.813948   86301 pod_ready.go:82] duration metric: took 400.558541ms for pod "kube-scheduler-default-k8s-diff-port-036892" in "kube-system" namespace to be "Ready" ...
	E1104 12:08:06.813962   86301 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-036892" hosting pod "kube-scheduler-default-k8s-diff-port-036892" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-036892" has status "Ready":"False"
	I1104 12:08:06.813971   86301 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:07.213603   86301 pod_ready.go:98] node "default-k8s-diff-port-036892" hosting pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-036892" has status "Ready":"False"
	I1104 12:08:07.213632   86301 pod_ready.go:82] duration metric: took 399.645898ms for pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace to be "Ready" ...
	E1104 12:08:07.213642   86301 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-036892" hosting pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-036892" has status "Ready":"False"
	I1104 12:08:07.213650   86301 pod_ready.go:39] duration metric: took 1.308606058s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1104 12:08:07.213664   86301 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1104 12:08:07.224946   86301 ops.go:34] apiserver oom_adj: -16
	I1104 12:08:07.224972   86301 kubeadm.go:597] duration metric: took 8.580368331s to restartPrimaryControlPlane
	I1104 12:08:07.224984   86301 kubeadm.go:394] duration metric: took 8.624649305s to StartCluster
	I1104 12:08:07.225005   86301 settings.go:142] acquiring lock: {Name:mk5550833c1f1a4ab4fbb2bf42ff8bd7f6341220 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 12:08:07.225093   86301 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19906-19898/kubeconfig
	I1104 12:08:07.226601   86301 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19906-19898/kubeconfig: {Name:mk164657c178e6abe086acb5c1cadb969cb2b0cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 12:08:07.226848   86301 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.130 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1104 12:08:07.226980   86301 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1104 12:08:07.227075   86301 config.go:182] Loaded profile config "default-k8s-diff-port-036892": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1104 12:08:07.227096   86301 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-036892"
	I1104 12:08:07.227115   86301 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-036892"
	I1104 12:08:07.227110   86301 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-036892"
	W1104 12:08:07.227128   86301 addons.go:243] addon metrics-server should already be in state true
	I1104 12:08:07.227145   86301 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-036892"
	I1104 12:08:07.227161   86301 host.go:66] Checking if "default-k8s-diff-port-036892" exists ...
	I1104 12:08:07.227082   86301 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-036892"
	I1104 12:08:07.227275   86301 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-036892"
	W1104 12:08:07.227291   86301 addons.go:243] addon storage-provisioner should already be in state true
	I1104 12:08:07.227316   86301 host.go:66] Checking if "default-k8s-diff-port-036892" exists ...
	I1104 12:08:07.227494   86301 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:08:07.227529   86301 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:08:07.227592   86301 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:08:07.227620   86301 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:08:07.227634   86301 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:08:07.227655   86301 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:08:07.228583   86301 out.go:177] * Verifying Kubernetes components...
	I1104 12:08:07.229927   86301 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1104 12:08:07.242580   86301 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41275
	I1104 12:08:07.243096   86301 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:08:07.243659   86301 main.go:141] libmachine: Using API Version  1
	I1104 12:08:07.243678   86301 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:08:07.243954   86301 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45813
	I1104 12:08:07.244058   86301 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:08:07.244513   86301 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:08:07.244634   86301 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:08:07.244679   86301 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:08:07.245015   86301 main.go:141] libmachine: Using API Version  1
	I1104 12:08:07.245035   86301 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:08:07.245437   86301 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:08:07.245905   86301 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:08:07.245942   86301 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:08:07.245963   86301 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43217
	I1104 12:08:07.246281   86301 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:08:07.246725   86301 main.go:141] libmachine: Using API Version  1
	I1104 12:08:07.246748   86301 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:08:07.247084   86301 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:08:07.247294   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetState
	I1104 12:08:07.250833   86301 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-036892"
	W1104 12:08:07.250857   86301 addons.go:243] addon default-storageclass should already be in state true
	I1104 12:08:07.250884   86301 host.go:66] Checking if "default-k8s-diff-port-036892" exists ...
	I1104 12:08:07.251243   86301 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:08:07.251285   86301 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:08:07.261670   86301 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34265
	I1104 12:08:07.261736   86301 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36543
	I1104 12:08:07.262154   86301 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:08:07.262283   86301 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:08:07.262803   86301 main.go:141] libmachine: Using API Version  1
	I1104 12:08:07.262821   86301 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:08:07.262916   86301 main.go:141] libmachine: Using API Version  1
	I1104 12:08:07.262927   86301 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:08:07.263218   86301 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:08:07.263282   86301 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:08:07.263411   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetState
	I1104 12:08:07.263457   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetState
	I1104 12:08:07.265067   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .DriverName
	I1104 12:08:07.265574   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .DriverName
	I1104 12:08:07.267307   86301 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1104 12:08:07.267336   86301 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1104 12:08:07.268853   86301 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1104 12:08:07.268874   86301 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1104 12:08:07.268895   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHHostname
	I1104 12:08:07.268976   86301 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1104 12:08:07.268994   86301 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1104 12:08:07.269011   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHHostname
	I1104 12:08:07.271584   86301 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39607
	I1104 12:08:07.272047   86301 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:08:07.272347   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:08:07.272377   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:08:07.272688   86301 main.go:141] libmachine: Using API Version  1
	I1104 12:08:07.272707   86301 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:08:07.272933   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:02:d6", ip: ""} in network mk-default-k8s-diff-port-036892: {Iface:virbr4 ExpiryTime:2024-11-04 13:07:45 +0000 UTC Type:0 Mac:52:54:00:da:02:d6 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:default-k8s-diff-port-036892 Clientid:01:52:54:00:da:02:d6}
	I1104 12:08:07.272959   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined IP address 192.168.72.130 and MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:08:07.272990   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:02:d6", ip: ""} in network mk-default-k8s-diff-port-036892: {Iface:virbr4 ExpiryTime:2024-11-04 13:07:45 +0000 UTC Type:0 Mac:52:54:00:da:02:d6 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:default-k8s-diff-port-036892 Clientid:01:52:54:00:da:02:d6}
	I1104 12:08:07.273007   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined IP address 192.168.72.130 and MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:08:07.273065   86301 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:08:07.273149   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHPort
	I1104 12:08:07.273564   86301 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:08:07.273597   86301 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:08:07.273765   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHPort
	I1104 12:08:07.273767   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHKeyPath
	I1104 12:08:07.273925   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHKeyPath
	I1104 12:08:07.273966   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHUsername
	I1104 12:08:07.274049   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHUsername
	I1104 12:08:07.274098   86301 sshutil.go:53] new ssh client: &{IP:192.168.72.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/default-k8s-diff-port-036892/id_rsa Username:docker}
	I1104 12:08:07.274179   86301 sshutil.go:53] new ssh client: &{IP:192.168.72.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/default-k8s-diff-port-036892/id_rsa Username:docker}
	I1104 12:08:07.288474   86301 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36605
	I1104 12:08:07.288955   86301 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:08:07.289555   86301 main.go:141] libmachine: Using API Version  1
	I1104 12:08:07.289580   86301 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:08:07.289915   86301 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:08:07.290128   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetState
	I1104 12:08:07.291744   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .DriverName
	I1104 12:08:07.291944   86301 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1104 12:08:07.291958   86301 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1104 12:08:07.291972   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHHostname
	I1104 12:08:07.294477   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:08:07.294793   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:02:d6", ip: ""} in network mk-default-k8s-diff-port-036892: {Iface:virbr4 ExpiryTime:2024-11-04 13:07:45 +0000 UTC Type:0 Mac:52:54:00:da:02:d6 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:default-k8s-diff-port-036892 Clientid:01:52:54:00:da:02:d6}
	I1104 12:08:07.294824   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined IP address 192.168.72.130 and MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:08:07.295009   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHPort
	I1104 12:08:07.295178   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHKeyPath
	I1104 12:08:07.295326   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHUsername
	I1104 12:08:07.295444   86301 sshutil.go:53] new ssh client: &{IP:192.168.72.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/default-k8s-diff-port-036892/id_rsa Username:docker}
	I1104 12:08:07.430295   86301 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1104 12:08:07.461396   86301 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-036892" to be "Ready" ...
	I1104 12:08:07.523117   86301 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1104 12:08:07.542339   86301 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1104 12:08:07.542361   86301 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1104 12:08:07.566207   86301 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1104 12:08:07.566232   86301 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1104 12:08:07.580871   86301 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1104 12:08:07.596309   86301 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1104 12:08:07.596338   86301 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1104 12:08:07.626662   86301 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1104 12:08:08.553268   86301 main.go:141] libmachine: Making call to close driver server
	I1104 12:08:08.553295   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .Close
	I1104 12:08:08.553315   86301 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.030165078s)
	I1104 12:08:08.553352   86301 main.go:141] libmachine: Making call to close driver server
	I1104 12:08:08.553373   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .Close
	I1104 12:08:08.553656   86301 main.go:141] libmachine: Successfully made call to close driver server
	I1104 12:08:08.553673   86301 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 12:08:08.553683   86301 main.go:141] libmachine: Making call to close driver server
	I1104 12:08:08.553692   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .Close
	I1104 12:08:08.553739   86301 main.go:141] libmachine: Successfully made call to close driver server
	I1104 12:08:08.553759   86301 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 12:08:08.553767   86301 main.go:141] libmachine: Making call to close driver server
	I1104 12:08:08.553780   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .Close
	I1104 12:08:08.553925   86301 main.go:141] libmachine: Successfully made call to close driver server
	I1104 12:08:08.553942   86301 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 12:08:08.554106   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | Closing plugin on server side
	I1104 12:08:08.554138   86301 main.go:141] libmachine: Successfully made call to close driver server
	I1104 12:08:08.554155   86301 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 12:08:08.559615   86301 main.go:141] libmachine: Making call to close driver server
	I1104 12:08:08.559635   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .Close
	I1104 12:08:08.559944   86301 main.go:141] libmachine: Successfully made call to close driver server
	I1104 12:08:08.559961   86301 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 12:08:08.563833   86301 main.go:141] libmachine: Making call to close driver server
	I1104 12:08:08.563848   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .Close
	I1104 12:08:08.564072   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | Closing plugin on server side
	I1104 12:08:08.564636   86301 main.go:141] libmachine: Successfully made call to close driver server
	I1104 12:08:08.564653   86301 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 12:08:08.564666   86301 main.go:141] libmachine: Making call to close driver server
	I1104 12:08:08.564671   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .Close
	I1104 12:08:08.564894   86301 main.go:141] libmachine: Successfully made call to close driver server
	I1104 12:08:08.564906   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | Closing plugin on server side
	I1104 12:08:08.564912   86301 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 12:08:08.564940   86301 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-036892"
	I1104 12:08:08.566838   86301 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1104 12:08:08.568165   86301 addons.go:510] duration metric: took 1.341200959s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1104 12:08:09.465405   86301 node_ready.go:53] node "default-k8s-diff-port-036892" has status "Ready":"False"
	I1104 12:08:06.350759   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:08.850563   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:10.851315   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:07.683582   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:07.684143   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | unable to find current IP address of domain old-k8s-version-589257 in network mk-old-k8s-version-589257
	I1104 12:08:07.684172   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | I1104 12:08:07.684093   87367 retry.go:31] will retry after 2.880856513s: waiting for machine to come up
	I1104 12:08:10.566197   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:10.566657   86402 main.go:141] libmachine: (old-k8s-version-589257) Found IP for machine: 192.168.50.180
	I1104 12:08:10.566675   86402 main.go:141] libmachine: (old-k8s-version-589257) Reserving static IP address...
	I1104 12:08:10.566687   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has current primary IP address 192.168.50.180 and MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:10.567139   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | found host DHCP lease matching {name: "old-k8s-version-589257", mac: "52:54:00:6b:6c:11", ip: "192.168.50.180"} in network mk-old-k8s-version-589257: {Iface:virbr2 ExpiryTime:2024-11-04 13:08:03 +0000 UTC Type:0 Mac:52:54:00:6b:6c:11 Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:old-k8s-version-589257 Clientid:01:52:54:00:6b:6c:11}
	I1104 12:08:10.567166   86402 main.go:141] libmachine: (old-k8s-version-589257) Reserved static IP address: 192.168.50.180
	I1104 12:08:10.567186   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | skip adding static IP to network mk-old-k8s-version-589257 - found existing host DHCP lease matching {name: "old-k8s-version-589257", mac: "52:54:00:6b:6c:11", ip: "192.168.50.180"}
	I1104 12:08:10.567199   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | Getting to WaitForSSH function...
	I1104 12:08:10.567213   86402 main.go:141] libmachine: (old-k8s-version-589257) Waiting for SSH to be available...
	I1104 12:08:10.569500   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:10.569816   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:6c:11", ip: ""} in network mk-old-k8s-version-589257: {Iface:virbr2 ExpiryTime:2024-11-04 13:08:03 +0000 UTC Type:0 Mac:52:54:00:6b:6c:11 Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:old-k8s-version-589257 Clientid:01:52:54:00:6b:6c:11}
	I1104 12:08:10.569846   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined IP address 192.168.50.180 and MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:10.569982   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | Using SSH client type: external
	I1104 12:08:10.570004   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | Using SSH private key: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/old-k8s-version-589257/id_rsa (-rw-------)
	I1104 12:08:10.570025   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.180 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19906-19898/.minikube/machines/old-k8s-version-589257/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1104 12:08:10.570033   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | About to run SSH command:
	I1104 12:08:10.570041   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | exit 0
	I1104 12:08:10.697114   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | SSH cmd err, output: <nil>: 
	I1104 12:08:10.697552   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetConfigRaw
	I1104 12:08:10.698196   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetIP
	I1104 12:08:10.700982   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:10.701369   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:6c:11", ip: ""} in network mk-old-k8s-version-589257: {Iface:virbr2 ExpiryTime:2024-11-04 13:08:03 +0000 UTC Type:0 Mac:52:54:00:6b:6c:11 Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:old-k8s-version-589257 Clientid:01:52:54:00:6b:6c:11}
	I1104 12:08:10.701403   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined IP address 192.168.50.180 and MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:10.701649   86402 profile.go:143] Saving config to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/old-k8s-version-589257/config.json ...
	I1104 12:08:10.701875   86402 machine.go:93] provisionDockerMachine start ...
	I1104 12:08:10.701898   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .DriverName
	I1104 12:08:10.702099   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHHostname
	I1104 12:08:10.704605   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:10.704977   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:6c:11", ip: ""} in network mk-old-k8s-version-589257: {Iface:virbr2 ExpiryTime:2024-11-04 13:08:03 +0000 UTC Type:0 Mac:52:54:00:6b:6c:11 Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:old-k8s-version-589257 Clientid:01:52:54:00:6b:6c:11}
	I1104 12:08:10.705006   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined IP address 192.168.50.180 and MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:10.705151   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHPort
	I1104 12:08:10.705342   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHKeyPath
	I1104 12:08:10.705486   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHKeyPath
	I1104 12:08:10.705602   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHUsername
	I1104 12:08:10.705703   86402 main.go:141] libmachine: Using SSH client type: native
	I1104 12:08:10.705907   86402 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.180 22 <nil> <nil>}
	I1104 12:08:10.705918   86402 main.go:141] libmachine: About to run SSH command:
	hostname
	I1104 12:08:10.813494   86402 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1104 12:08:10.813544   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetMachineName
	I1104 12:08:10.813816   86402 buildroot.go:166] provisioning hostname "old-k8s-version-589257"
	I1104 12:08:10.813847   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetMachineName
	I1104 12:08:10.814034   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHHostname
	I1104 12:08:10.816782   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:10.817186   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:6c:11", ip: ""} in network mk-old-k8s-version-589257: {Iface:virbr2 ExpiryTime:2024-11-04 13:08:03 +0000 UTC Type:0 Mac:52:54:00:6b:6c:11 Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:old-k8s-version-589257 Clientid:01:52:54:00:6b:6c:11}
	I1104 12:08:10.817245   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined IP address 192.168.50.180 and MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:10.817394   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHPort
	I1104 12:08:10.817589   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHKeyPath
	I1104 12:08:10.817760   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHKeyPath
	I1104 12:08:10.817882   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHUsername
	I1104 12:08:10.818027   86402 main.go:141] libmachine: Using SSH client type: native
	I1104 12:08:10.818227   86402 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.180 22 <nil> <nil>}
	I1104 12:08:10.818245   86402 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-589257 && echo "old-k8s-version-589257" | sudo tee /etc/hostname
	I1104 12:08:10.940779   86402 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-589257
	
	I1104 12:08:10.940803   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHHostname
	I1104 12:08:10.943694   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:10.944062   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:6c:11", ip: ""} in network mk-old-k8s-version-589257: {Iface:virbr2 ExpiryTime:2024-11-04 13:08:03 +0000 UTC Type:0 Mac:52:54:00:6b:6c:11 Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:old-k8s-version-589257 Clientid:01:52:54:00:6b:6c:11}
	I1104 12:08:10.944090   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined IP address 192.168.50.180 and MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:10.944263   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHPort
	I1104 12:08:10.944452   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHKeyPath
	I1104 12:08:10.944627   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHKeyPath
	I1104 12:08:10.944767   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHUsername
	I1104 12:08:10.944910   86402 main.go:141] libmachine: Using SSH client type: native
	I1104 12:08:10.945093   86402 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.180 22 <nil> <nil>}
	I1104 12:08:10.945110   86402 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-589257' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-589257/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-589257' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1104 12:08:11.061924   86402 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1104 12:08:11.061966   86402 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19906-19898/.minikube CaCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19906-19898/.minikube}
	I1104 12:08:11.062007   86402 buildroot.go:174] setting up certificates
	I1104 12:08:11.062021   86402 provision.go:84] configureAuth start
	I1104 12:08:11.062033   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetMachineName
	I1104 12:08:11.062293   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetIP
	I1104 12:08:11.065165   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:11.065559   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:6c:11", ip: ""} in network mk-old-k8s-version-589257: {Iface:virbr2 ExpiryTime:2024-11-04 13:08:03 +0000 UTC Type:0 Mac:52:54:00:6b:6c:11 Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:old-k8s-version-589257 Clientid:01:52:54:00:6b:6c:11}
	I1104 12:08:11.065594   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined IP address 192.168.50.180 and MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:11.065834   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHHostname
	I1104 12:08:11.068257   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:11.068620   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:6c:11", ip: ""} in network mk-old-k8s-version-589257: {Iface:virbr2 ExpiryTime:2024-11-04 13:08:03 +0000 UTC Type:0 Mac:52:54:00:6b:6c:11 Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:old-k8s-version-589257 Clientid:01:52:54:00:6b:6c:11}
	I1104 12:08:11.068646   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined IP address 192.168.50.180 and MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:11.068787   86402 provision.go:143] copyHostCerts
	I1104 12:08:11.068842   86402 exec_runner.go:144] found /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem, removing ...
	I1104 12:08:11.068854   86402 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem
	I1104 12:08:11.068904   86402 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem (1078 bytes)
	I1104 12:08:11.068993   86402 exec_runner.go:144] found /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem, removing ...
	I1104 12:08:11.069000   86402 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem
	I1104 12:08:11.069019   86402 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem (1123 bytes)
	I1104 12:08:11.069072   86402 exec_runner.go:144] found /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem, removing ...
	I1104 12:08:11.069079   86402 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem
	I1104 12:08:11.069097   86402 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem (1679 bytes)
	I1104 12:08:11.069191   86402 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-589257 san=[127.0.0.1 192.168.50.180 localhost minikube old-k8s-version-589257]
	I1104 12:08:11.271880   86402 provision.go:177] copyRemoteCerts
	I1104 12:08:11.271946   86402 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1104 12:08:11.271988   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHHostname
	I1104 12:08:11.275023   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:11.275396   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:6c:11", ip: ""} in network mk-old-k8s-version-589257: {Iface:virbr2 ExpiryTime:2024-11-04 13:08:03 +0000 UTC Type:0 Mac:52:54:00:6b:6c:11 Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:old-k8s-version-589257 Clientid:01:52:54:00:6b:6c:11}
	I1104 12:08:11.275428   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined IP address 192.168.50.180 and MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:11.275701   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHPort
	I1104 12:08:11.275905   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHKeyPath
	I1104 12:08:11.276048   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHUsername
	I1104 12:08:11.276182   86402 sshutil.go:53] new ssh client: &{IP:192.168.50.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/old-k8s-version-589257/id_rsa Username:docker}
	I1104 12:08:11.362968   86402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1104 12:08:11.388401   86402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1104 12:08:11.417180   86402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1104 12:08:11.439810   86402 provision.go:87] duration metric: took 377.778325ms to configureAuth
	I1104 12:08:11.439841   86402 buildroot.go:189] setting minikube options for container-runtime
	I1104 12:08:11.440043   86402 config.go:182] Loaded profile config "old-k8s-version-589257": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1104 12:08:11.440110   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHHostname
	I1104 12:08:11.442476   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:11.442783   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:6c:11", ip: ""} in network mk-old-k8s-version-589257: {Iface:virbr2 ExpiryTime:2024-11-04 13:08:03 +0000 UTC Type:0 Mac:52:54:00:6b:6c:11 Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:old-k8s-version-589257 Clientid:01:52:54:00:6b:6c:11}
	I1104 12:08:11.442818   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined IP address 192.168.50.180 and MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:11.443005   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHPort
	I1104 12:08:11.443204   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHKeyPath
	I1104 12:08:11.443329   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHKeyPath
	I1104 12:08:11.443492   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHUsername
	I1104 12:08:11.443665   86402 main.go:141] libmachine: Using SSH client type: native
	I1104 12:08:11.443822   86402 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.180 22 <nil> <nil>}
	I1104 12:08:11.443837   86402 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1104 12:08:11.662212   86402 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1104 12:08:11.662241   86402 machine.go:96] duration metric: took 960.351823ms to provisionDockerMachine
	I1104 12:08:11.662256   86402 start.go:293] postStartSetup for "old-k8s-version-589257" (driver="kvm2")
	I1104 12:08:11.662269   86402 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1104 12:08:11.662289   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .DriverName
	I1104 12:08:11.662613   86402 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1104 12:08:11.662642   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHHostname
	I1104 12:08:11.665028   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:11.665391   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:6c:11", ip: ""} in network mk-old-k8s-version-589257: {Iface:virbr2 ExpiryTime:2024-11-04 13:08:03 +0000 UTC Type:0 Mac:52:54:00:6b:6c:11 Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:old-k8s-version-589257 Clientid:01:52:54:00:6b:6c:11}
	I1104 12:08:11.665420   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined IP address 192.168.50.180 and MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:11.665598   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHPort
	I1104 12:08:11.665776   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHKeyPath
	I1104 12:08:11.665942   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHUsername
	I1104 12:08:11.666064   86402 sshutil.go:53] new ssh client: &{IP:192.168.50.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/old-k8s-version-589257/id_rsa Username:docker}
	I1104 12:08:11.889727   85500 start.go:364] duration metric: took 49.147423989s to acquireMachinesLock for "no-preload-908370"
	I1104 12:08:11.889796   85500 start.go:96] Skipping create...Using existing machine configuration
	I1104 12:08:11.889806   85500 fix.go:54] fixHost starting: 
	I1104 12:08:11.890201   85500 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:08:11.890229   85500 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:08:11.906978   85500 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40931
	I1104 12:08:11.907524   85500 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:08:11.907916   85500 main.go:141] libmachine: Using API Version  1
	I1104 12:08:11.907939   85500 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:08:11.908319   85500 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:08:11.908518   85500 main.go:141] libmachine: (no-preload-908370) Calling .DriverName
	I1104 12:08:11.908672   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetState
	I1104 12:08:11.910182   85500 fix.go:112] recreateIfNeeded on no-preload-908370: state=Stopped err=<nil>
	I1104 12:08:11.910224   85500 main.go:141] libmachine: (no-preload-908370) Calling .DriverName
	W1104 12:08:11.910353   85500 fix.go:138] unexpected machine state, will restart: <nil>
	I1104 12:08:11.912457   85500 out.go:177] * Restarting existing kvm2 VM for "no-preload-908370" ...
	I1104 12:08:11.747199   86402 ssh_runner.go:195] Run: cat /etc/os-release
	I1104 12:08:11.751253   86402 info.go:137] Remote host: Buildroot 2023.02.9
	I1104 12:08:11.751279   86402 filesync.go:126] Scanning /home/jenkins/minikube-integration/19906-19898/.minikube/addons for local assets ...
	I1104 12:08:11.751356   86402 filesync.go:126] Scanning /home/jenkins/minikube-integration/19906-19898/.minikube/files for local assets ...
	I1104 12:08:11.751465   86402 filesync.go:149] local asset: /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem -> 272182.pem in /etc/ssl/certs
	I1104 12:08:11.751591   86402 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1104 12:08:11.760409   86402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem --> /etc/ssl/certs/272182.pem (1708 bytes)
	I1104 12:08:11.781890   86402 start.go:296] duration metric: took 119.620604ms for postStartSetup
	I1104 12:08:11.781934   86402 fix.go:56] duration metric: took 19.207938878s for fixHost
	I1104 12:08:11.781960   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHHostname
	I1104 12:08:11.784767   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:11.785058   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:6c:11", ip: ""} in network mk-old-k8s-version-589257: {Iface:virbr2 ExpiryTime:2024-11-04 13:08:03 +0000 UTC Type:0 Mac:52:54:00:6b:6c:11 Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:old-k8s-version-589257 Clientid:01:52:54:00:6b:6c:11}
	I1104 12:08:11.785084   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined IP address 192.168.50.180 and MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:11.785300   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHPort
	I1104 12:08:11.785500   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHKeyPath
	I1104 12:08:11.785644   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHKeyPath
	I1104 12:08:11.785750   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHUsername
	I1104 12:08:11.785877   86402 main.go:141] libmachine: Using SSH client type: native
	I1104 12:08:11.786047   86402 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.180 22 <nil> <nil>}
	I1104 12:08:11.786059   86402 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1104 12:08:11.889540   86402 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730722091.863405264
	
	I1104 12:08:11.889568   86402 fix.go:216] guest clock: 1730722091.863405264
	I1104 12:08:11.889578   86402 fix.go:229] Guest: 2024-11-04 12:08:11.863405264 +0000 UTC Remote: 2024-11-04 12:08:11.781939603 +0000 UTC m=+230.132769870 (delta=81.465661ms)
	I1104 12:08:11.889631   86402 fix.go:200] guest clock delta is within tolerance: 81.465661ms
	I1104 12:08:11.889641   86402 start.go:83] releasing machines lock for "old-k8s-version-589257", held for 19.315682928s
	I1104 12:08:11.889677   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .DriverName
	I1104 12:08:11.889975   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetIP
	I1104 12:08:11.892654   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:11.892982   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:6c:11", ip: ""} in network mk-old-k8s-version-589257: {Iface:virbr2 ExpiryTime:2024-11-04 13:08:03 +0000 UTC Type:0 Mac:52:54:00:6b:6c:11 Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:old-k8s-version-589257 Clientid:01:52:54:00:6b:6c:11}
	I1104 12:08:11.893012   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined IP address 192.168.50.180 and MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:11.893212   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .DriverName
	I1104 12:08:11.893706   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .DriverName
	I1104 12:08:11.893888   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .DriverName
	I1104 12:08:11.893989   86402 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1104 12:08:11.894031   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHHostname
	I1104 12:08:11.894074   86402 ssh_runner.go:195] Run: cat /version.json
	I1104 12:08:11.894094   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHHostname
	I1104 12:08:11.896812   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:11.897020   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:11.897192   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:6c:11", ip: ""} in network mk-old-k8s-version-589257: {Iface:virbr2 ExpiryTime:2024-11-04 13:08:03 +0000 UTC Type:0 Mac:52:54:00:6b:6c:11 Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:old-k8s-version-589257 Clientid:01:52:54:00:6b:6c:11}
	I1104 12:08:11.897217   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined IP address 192.168.50.180 and MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:11.897454   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:6c:11", ip: ""} in network mk-old-k8s-version-589257: {Iface:virbr2 ExpiryTime:2024-11-04 13:08:03 +0000 UTC Type:0 Mac:52:54:00:6b:6c:11 Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:old-k8s-version-589257 Clientid:01:52:54:00:6b:6c:11}
	I1104 12:08:11.897478   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined IP address 192.168.50.180 and MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:11.897492   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHPort
	I1104 12:08:11.897631   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHPort
	I1104 12:08:11.897646   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHKeyPath
	I1104 12:08:11.897778   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHKeyPath
	I1104 12:08:11.897911   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHUsername
	I1104 12:08:11.897989   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHUsername
	I1104 12:08:11.898083   86402 sshutil.go:53] new ssh client: &{IP:192.168.50.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/old-k8s-version-589257/id_rsa Username:docker}
	I1104 12:08:11.898120   86402 sshutil.go:53] new ssh client: &{IP:192.168.50.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/old-k8s-version-589257/id_rsa Username:docker}
	I1104 12:08:11.998704   86402 ssh_runner.go:195] Run: systemctl --version
	I1104 12:08:12.004820   86402 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1104 12:08:12.148742   86402 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1104 12:08:12.155015   86402 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1104 12:08:12.155089   86402 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1104 12:08:12.171054   86402 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1104 12:08:12.171085   86402 start.go:495] detecting cgroup driver to use...
	I1104 12:08:12.171154   86402 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1104 12:08:12.189977   86402 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1104 12:08:12.204622   86402 docker.go:217] disabling cri-docker service (if available) ...
	I1104 12:08:12.204679   86402 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1104 12:08:12.218808   86402 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1104 12:08:12.232276   86402 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1104 12:08:12.341220   86402 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1104 12:08:12.512813   86402 docker.go:233] disabling docker service ...
	I1104 12:08:12.512893   86402 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1104 12:08:12.526784   86402 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1104 12:08:12.539774   86402 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1104 12:08:12.666162   86402 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1104 12:08:12.788317   86402 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1104 12:08:12.802703   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1104 12:08:12.820915   86402 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1104 12:08:12.820985   86402 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:08:12.831311   86402 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1104 12:08:12.831400   86402 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:08:12.841625   86402 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:08:12.852548   86402 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:08:12.864683   86402 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1104 12:08:12.876794   86402 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1104 12:08:12.886878   86402 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1104 12:08:12.886943   86402 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1104 12:08:12.902476   86402 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1104 12:08:12.914565   86402 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1104 12:08:13.044125   86402 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1104 12:08:13.149816   86402 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1104 12:08:13.149893   86402 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1104 12:08:13.154639   86402 start.go:563] Will wait 60s for crictl version
	I1104 12:08:13.154706   86402 ssh_runner.go:195] Run: which crictl
	I1104 12:08:13.158788   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1104 12:08:13.200038   86402 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1104 12:08:13.200117   86402 ssh_runner.go:195] Run: crio --version
	I1104 12:08:13.233501   86402 ssh_runner.go:195] Run: crio --version
	I1104 12:08:13.264558   86402 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1104 12:08:11.913730   85500 main.go:141] libmachine: (no-preload-908370) Calling .Start
	I1104 12:08:11.913915   85500 main.go:141] libmachine: (no-preload-908370) Ensuring networks are active...
	I1104 12:08:11.914653   85500 main.go:141] libmachine: (no-preload-908370) Ensuring network default is active
	I1104 12:08:11.915111   85500 main.go:141] libmachine: (no-preload-908370) Ensuring network mk-no-preload-908370 is active
	I1104 12:08:11.915575   85500 main.go:141] libmachine: (no-preload-908370) Getting domain xml...
	I1104 12:08:11.916375   85500 main.go:141] libmachine: (no-preload-908370) Creating domain...
	I1104 12:08:13.289793   85500 main.go:141] libmachine: (no-preload-908370) Waiting to get IP...
	I1104 12:08:13.290880   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:13.291498   85500 main.go:141] libmachine: (no-preload-908370) DBG | unable to find current IP address of domain no-preload-908370 in network mk-no-preload-908370
	I1104 12:08:13.291631   85500 main.go:141] libmachine: (no-preload-908370) DBG | I1104 12:08:13.291463   87562 retry.go:31] will retry after 277.090671ms: waiting for machine to come up
	I1104 12:08:13.570141   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:13.570726   85500 main.go:141] libmachine: (no-preload-908370) DBG | unable to find current IP address of domain no-preload-908370 in network mk-no-preload-908370
	I1104 12:08:13.570749   85500 main.go:141] libmachine: (no-preload-908370) DBG | I1104 12:08:13.570623   87562 retry.go:31] will retry after 259.985785ms: waiting for machine to come up
	I1104 12:08:13.832172   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:13.832855   85500 main.go:141] libmachine: (no-preload-908370) DBG | unable to find current IP address of domain no-preload-908370 in network mk-no-preload-908370
	I1104 12:08:13.832898   85500 main.go:141] libmachine: (no-preload-908370) DBG | I1104 12:08:13.832809   87562 retry.go:31] will retry after 473.426945ms: waiting for machine to come up
	I1104 12:08:14.308725   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:14.309273   85500 main.go:141] libmachine: (no-preload-908370) DBG | unable to find current IP address of domain no-preload-908370 in network mk-no-preload-908370
	I1104 12:08:14.309302   85500 main.go:141] libmachine: (no-preload-908370) DBG | I1104 12:08:14.309249   87562 retry.go:31] will retry after 417.466134ms: waiting for machine to come up
	I1104 12:08:14.727927   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:14.728388   85500 main.go:141] libmachine: (no-preload-908370) DBG | unable to find current IP address of domain no-preload-908370 in network mk-no-preload-908370
	I1104 12:08:14.728413   85500 main.go:141] libmachine: (no-preload-908370) DBG | I1104 12:08:14.728366   87562 retry.go:31] will retry after 734.894622ms: waiting for machine to come up
	I1104 12:08:11.465894   86301 node_ready.go:53] node "default-k8s-diff-port-036892" has status "Ready":"False"
	I1104 12:08:13.966921   86301 node_ready.go:53] node "default-k8s-diff-port-036892" has status "Ready":"False"
	I1104 12:08:14.465523   86301 node_ready.go:49] node "default-k8s-diff-port-036892" has status "Ready":"True"
	I1104 12:08:14.465545   86301 node_ready.go:38] duration metric: took 7.004111382s for node "default-k8s-diff-port-036892" to be "Ready" ...
	I1104 12:08:14.465554   86301 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1104 12:08:14.473334   86301 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-zw2tv" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:14.482486   86301 pod_ready.go:93] pod "coredns-7c65d6cfc9-zw2tv" in "kube-system" namespace has status "Ready":"True"
	I1104 12:08:14.482508   86301 pod_ready.go:82] duration metric: took 9.145998ms for pod "coredns-7c65d6cfc9-zw2tv" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:14.482518   86301 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-036892" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:13.351753   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:15.851818   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:13.266087   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetIP
	I1104 12:08:13.269660   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:13.270200   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:6c:11", ip: ""} in network mk-old-k8s-version-589257: {Iface:virbr2 ExpiryTime:2024-11-04 13:08:03 +0000 UTC Type:0 Mac:52:54:00:6b:6c:11 Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:old-k8s-version-589257 Clientid:01:52:54:00:6b:6c:11}
	I1104 12:08:13.270233   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined IP address 192.168.50.180 and MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:13.270520   86402 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1104 12:08:13.274751   86402 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1104 12:08:13.290348   86402 kubeadm.go:883] updating cluster {Name:old-k8s-version-589257 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-589257 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.180 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1104 12:08:13.290483   86402 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1104 12:08:13.290547   86402 ssh_runner.go:195] Run: sudo crictl images --output json
	I1104 12:08:13.340338   86402 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1104 12:08:13.340426   86402 ssh_runner.go:195] Run: which lz4
	I1104 12:08:13.345147   86402 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1104 12:08:13.349792   86402 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1104 12:08:13.349872   86402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1104 12:08:14.842720   86402 crio.go:462] duration metric: took 1.497615031s to copy over tarball
	I1104 12:08:14.842791   86402 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1104 12:08:15.464914   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:15.465510   85500 main.go:141] libmachine: (no-preload-908370) DBG | unable to find current IP address of domain no-preload-908370 in network mk-no-preload-908370
	I1104 12:08:15.465541   85500 main.go:141] libmachine: (no-preload-908370) DBG | I1104 12:08:15.465478   87562 retry.go:31] will retry after 578.01955ms: waiting for machine to come up
	I1104 12:08:16.044861   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:16.045354   85500 main.go:141] libmachine: (no-preload-908370) DBG | unable to find current IP address of domain no-preload-908370 in network mk-no-preload-908370
	I1104 12:08:16.045380   85500 main.go:141] libmachine: (no-preload-908370) DBG | I1104 12:08:16.045313   87562 retry.go:31] will retry after 1.136035438s: waiting for machine to come up
	I1104 12:08:17.182829   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:17.183255   85500 main.go:141] libmachine: (no-preload-908370) DBG | unable to find current IP address of domain no-preload-908370 in network mk-no-preload-908370
	I1104 12:08:17.183282   85500 main.go:141] libmachine: (no-preload-908370) DBG | I1104 12:08:17.183233   87562 retry.go:31] will retry after 1.070971462s: waiting for machine to come up
	I1104 12:08:18.255532   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:18.256051   85500 main.go:141] libmachine: (no-preload-908370) DBG | unable to find current IP address of domain no-preload-908370 in network mk-no-preload-908370
	I1104 12:08:18.256078   85500 main.go:141] libmachine: (no-preload-908370) DBG | I1104 12:08:18.256007   87562 retry.go:31] will retry after 1.542250267s: waiting for machine to come up
	I1104 12:08:19.800851   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:19.801298   85500 main.go:141] libmachine: (no-preload-908370) DBG | unable to find current IP address of domain no-preload-908370 in network mk-no-preload-908370
	I1104 12:08:19.801324   85500 main.go:141] libmachine: (no-preload-908370) DBG | I1104 12:08:19.801276   87562 retry.go:31] will retry after 2.127250885s: waiting for machine to come up
	I1104 12:08:16.489394   86301 pod_ready.go:103] pod "etcd-default-k8s-diff-port-036892" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:16.994480   86301 pod_ready.go:93] pod "etcd-default-k8s-diff-port-036892" in "kube-system" namespace has status "Ready":"True"
	I1104 12:08:16.994502   86301 pod_ready.go:82] duration metric: took 2.511977586s for pod "etcd-default-k8s-diff-port-036892" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:16.994512   86301 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-036892" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:17.502472   86301 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-036892" in "kube-system" namespace has status "Ready":"True"
	I1104 12:08:17.502499   86301 pod_ready.go:82] duration metric: took 507.979218ms for pod "kube-apiserver-default-k8s-diff-port-036892" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:17.502513   86301 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-036892" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:17.507763   86301 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-036892" in "kube-system" namespace has status "Ready":"True"
	I1104 12:08:17.507785   86301 pod_ready.go:82] duration metric: took 5.264185ms for pod "kube-controller-manager-default-k8s-diff-port-036892" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:17.507795   86301 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-j2srm" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:17.514017   86301 pod_ready.go:93] pod "kube-proxy-j2srm" in "kube-system" namespace has status "Ready":"True"
	I1104 12:08:17.514045   86301 pod_ready.go:82] duration metric: took 6.241799ms for pod "kube-proxy-j2srm" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:17.514058   86301 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-036892" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:19.683083   86301 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-036892" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:20.049735   86301 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-036892" in "kube-system" namespace has status "Ready":"True"
	I1104 12:08:20.049759   86301 pod_ready.go:82] duration metric: took 2.535691306s for pod "kube-scheduler-default-k8s-diff-port-036892" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:20.049772   86301 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:18.749494   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:20.853448   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:17.837381   86402 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.994557811s)
	I1104 12:08:17.837410   86402 crio.go:469] duration metric: took 2.994665886s to extract the tarball
	I1104 12:08:17.837420   86402 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1104 12:08:17.882418   86402 ssh_runner.go:195] Run: sudo crictl images --output json
	I1104 12:08:17.917035   86402 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1104 12:08:17.917064   86402 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1104 12:08:17.917195   86402 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1104 12:08:17.917277   86402 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1104 12:08:17.917169   86402 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1104 12:08:17.917164   86402 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1104 12:08:17.917150   86402 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1104 12:08:17.917277   86402 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1104 12:08:17.917283   86402 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1104 12:08:17.917254   86402 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1104 12:08:17.918929   86402 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1104 12:08:17.918943   86402 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1104 12:08:17.918929   86402 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1104 12:08:17.918929   86402 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1104 12:08:17.918930   86402 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1104 12:08:17.918930   86402 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1104 12:08:17.919014   86402 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1104 12:08:17.919025   86402 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1104 12:08:18.070119   86402 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1104 12:08:18.076604   86402 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1104 12:08:18.078712   86402 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1104 12:08:18.083777   86402 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1104 12:08:18.087827   86402 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1104 12:08:18.092838   86402 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1104 12:08:18.110359   86402 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1104 12:08:18.165523   86402 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1104 12:08:18.165569   86402 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1104 12:08:18.165617   86402 ssh_runner.go:195] Run: which crictl
	I1104 12:08:18.213723   86402 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1104 12:08:18.213784   86402 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1104 12:08:18.213833   86402 ssh_runner.go:195] Run: which crictl
	I1104 12:08:18.252171   86402 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1104 12:08:18.252221   86402 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1104 12:08:18.252270   86402 ssh_runner.go:195] Run: which crictl
	I1104 12:08:18.256482   86402 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1104 12:08:18.256522   86402 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1104 12:08:18.256567   86402 ssh_runner.go:195] Run: which crictl
	I1104 12:08:18.256606   86402 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1104 12:08:18.256564   86402 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1104 12:08:18.256631   86402 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1104 12:08:18.256632   86402 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1104 12:08:18.256632   86402 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1104 12:08:18.256690   86402 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1104 12:08:18.256657   86402 ssh_runner.go:195] Run: which crictl
	I1104 12:08:18.256703   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1104 12:08:18.256691   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1104 12:08:18.256738   86402 ssh_runner.go:195] Run: which crictl
	I1104 12:08:18.256658   86402 ssh_runner.go:195] Run: which crictl
	I1104 12:08:18.264837   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1104 12:08:18.265836   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1104 12:08:18.349896   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1104 12:08:18.349935   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1104 12:08:18.350014   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1104 12:08:18.350077   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1104 12:08:18.368533   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1104 12:08:18.371302   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1104 12:08:18.371393   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1104 12:08:18.496042   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1104 12:08:18.496121   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1104 12:08:18.509196   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1104 12:08:18.509339   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1104 12:08:18.509247   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1104 12:08:18.509348   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1104 12:08:18.513943   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1104 12:08:18.645867   86402 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1104 12:08:18.649173   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1104 12:08:18.649276   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1104 12:08:18.656159   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1104 12:08:18.656193   86402 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1104 12:08:18.660309   86402 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1104 12:08:18.660384   86402 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1104 12:08:18.719995   86402 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1104 12:08:18.720033   86402 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1104 12:08:18.728304   86402 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1104 12:08:18.867880   86402 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1104 12:08:19.009342   86402 cache_images.go:92] duration metric: took 1.092257593s to LoadCachedImages
	W1104 12:08:19.009448   86402 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
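	The image-cache pass above follows the usual shape of this step: probe the runtime for each required image with `podman image inspect`, mark the missing ones for transfer, then fall back to the on-disk cache, which in this run does not contain the v1.20.0 tarballs. A minimal Go sketch of that decision, using hypothetical helper names rather than minikube's real API:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// inRuntime mirrors `sudo podman image inspect --format {{.Id}} <image>`:
	// a zero exit status means the runtime already has the image.
	func inRuntime(image string) bool {
		return exec.Command("sudo", "podman", "image", "inspect",
			"--format", "{{.Id}}", image).Run() == nil
	}

	func main() {
		cacheDir := os.ExpandEnv("$HOME/.minikube/cache/images/amd64")
		images := []string{
			"registry.k8s.io/etcd:3.4.13-0",
			"registry.k8s.io/pause:3.2",
		}
		for _, img := range images {
			if inRuntime(img) {
				continue // already present, nothing to transfer
			}
			// Cached tarballs are stored with ':' replaced by '_' (e.g. etcd_3.4.13-0).
			tar := filepath.Join(cacheDir, strings.ReplaceAll(img, ":", "_"))
			if _, err := os.Stat(tar); err != nil {
				// This is the branch the log hits: the cache file is missing,
				// so loading is skipped with a warning and startup continues.
				fmt.Fprintln(os.Stderr, "X Unable to load cached images:", err)
				continue
			}
			// Real code would now copy the tarball to the node and load it there.
			fmt.Println("would load", tar)
		}
	}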
	I1104 12:08:19.009469   86402 kubeadm.go:934] updating node { 192.168.50.180 8443 v1.20.0 crio true true} ...
	I1104 12:08:19.009590   86402 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-589257 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.180
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-589257 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1104 12:08:19.009671   86402 ssh_runner.go:195] Run: crio config
	I1104 12:08:19.054831   86402 cni.go:84] Creating CNI manager for ""
	I1104 12:08:19.054850   86402 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1104 12:08:19.054863   86402 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1104 12:08:19.054880   86402 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.180 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-589257 NodeName:old-k8s-version-589257 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.180"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.180 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1104 12:08:19.055049   86402 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.180
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-589257"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.180
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.180"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
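	The kubeadm.yaml above is rendered from the kubeadm options struct logged at kubeadm.go:189. A cut-down text/template sketch of that kind of rendering; the template and field names here are illustrative, not minikube's actual template:

	package main

	import (
		"os"
		"text/template"
	)

	// A hypothetical, trimmed-down template; the real config covers the full
	// InitConfiguration/ClusterConfiguration/KubeletConfiguration shown above.
	const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.AdvertiseAddress}}
	  bindPort: {{.APIServerPort}}
	nodeRegistration:
	  criSocket: {{.CRISocket}}
	  name: "{{.NodeName}}"
	  kubeletExtraArgs:
	    node-ip: {{.AdvertiseAddress}}
	`

	type opts struct {
		AdvertiseAddress string
		APIServerPort    int
		CRISocket        string
		NodeName         string
	}

	func main() {
		t := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
		// Values taken from the kubeadm options line in the log above.
		_ = t.Execute(os.Stdout, opts{
			AdvertiseAddress: "192.168.50.180",
			APIServerPort:    8443,
			CRISocket:        "/var/run/crio/crio.sock",
			NodeName:         "old-k8s-version-589257",
		})
	}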
	I1104 12:08:19.055125   86402 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1104 12:08:19.065804   86402 binaries.go:44] Found k8s binaries, skipping transfer
	I1104 12:08:19.065888   86402 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1104 12:08:19.075491   86402 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1104 12:08:19.092371   86402 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1104 12:08:19.108896   86402 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1104 12:08:19.127622   86402 ssh_runner.go:195] Run: grep 192.168.50.180	control-plane.minikube.internal$ /etc/hosts
	I1104 12:08:19.131597   86402 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.180	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1104 12:08:19.145142   86402 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1104 12:08:19.284780   86402 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1104 12:08:19.303843   86402 certs.go:68] Setting up /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/old-k8s-version-589257 for IP: 192.168.50.180
	I1104 12:08:19.303872   86402 certs.go:194] generating shared ca certs ...
	I1104 12:08:19.303894   86402 certs.go:226] acquiring lock for ca certs: {Name:mk4fb0469da6697654d0290fd25edddd463f47c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 12:08:19.304084   86402 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.key
	I1104 12:08:19.304148   86402 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.key
	I1104 12:08:19.304161   86402 certs.go:256] generating profile certs ...
	I1104 12:08:19.304280   86402 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/old-k8s-version-589257/client.key
	I1104 12:08:19.304347   86402 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/old-k8s-version-589257/apiserver.key.b78bafdb
	I1104 12:08:19.304401   86402 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/old-k8s-version-589257/proxy-client.key
	I1104 12:08:19.304549   86402 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218.pem (1338 bytes)
	W1104 12:08:19.304590   86402 certs.go:480] ignoring /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218_empty.pem, impossibly tiny 0 bytes
	I1104 12:08:19.304608   86402 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem (1675 bytes)
	I1104 12:08:19.304659   86402 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem (1078 bytes)
	I1104 12:08:19.304702   86402 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem (1123 bytes)
	I1104 12:08:19.304729   86402 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem (1679 bytes)
	I1104 12:08:19.304794   86402 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem (1708 bytes)
	I1104 12:08:19.305479   86402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1104 12:08:19.341333   86402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1104 12:08:19.375179   86402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1104 12:08:19.410128   86402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1104 12:08:19.452565   86402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/old-k8s-version-589257/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1104 12:08:19.493404   86402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/old-k8s-version-589257/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1104 12:08:19.521178   86402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/old-k8s-version-589257/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1104 12:08:19.550524   86402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/old-k8s-version-589257/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1104 12:08:19.574903   86402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1104 12:08:19.599308   86402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218.pem --> /usr/share/ca-certificates/27218.pem (1338 bytes)
	I1104 12:08:19.627107   86402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem --> /usr/share/ca-certificates/272182.pem (1708 bytes)
	I1104 12:08:19.657121   86402 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1104 12:08:19.679087   86402 ssh_runner.go:195] Run: openssl version
	I1104 12:08:19.687115   86402 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1104 12:08:19.702537   86402 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1104 12:08:19.707340   86402 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  4 10:38 /usr/share/ca-certificates/minikubeCA.pem
	I1104 12:08:19.707408   86402 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1104 12:08:19.714955   86402 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1104 12:08:19.727883   86402 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/27218.pem && ln -fs /usr/share/ca-certificates/27218.pem /etc/ssl/certs/27218.pem"
	I1104 12:08:19.739690   86402 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/27218.pem
	I1104 12:08:19.744600   86402 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  4 10:49 /usr/share/ca-certificates/27218.pem
	I1104 12:08:19.744656   86402 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/27218.pem
	I1104 12:08:19.750324   86402 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/27218.pem /etc/ssl/certs/51391683.0"
	I1104 12:08:19.760988   86402 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/272182.pem && ln -fs /usr/share/ca-certificates/272182.pem /etc/ssl/certs/272182.pem"
	I1104 12:08:19.772634   86402 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/272182.pem
	I1104 12:08:19.777504   86402 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  4 10:49 /usr/share/ca-certificates/272182.pem
	I1104 12:08:19.777580   86402 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/272182.pem
	I1104 12:08:19.783660   86402 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/272182.pem /etc/ssl/certs/3ec20f2e.0"
	I1104 12:08:19.795483   86402 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1104 12:08:19.800327   86402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1104 12:08:19.806346   86402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1104 12:08:19.813920   86402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1104 12:08:19.820358   86402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1104 12:08:19.826359   86402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1104 12:08:19.832467   86402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
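	The run of `openssl x509 -checkend 86400` commands above asks whether each control-plane certificate stays valid for at least another 24 hours. An equivalent check in Go using crypto/x509, assuming the certificate paths from the log (a sketch only, not the code minikube runs):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM certificate at path expires inside d,
	// i.e. the same question `openssl x509 -checkend <seconds>` answers.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		raw, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(raw)
		if block == nil {
			return false, fmt.Errorf("%s: no PEM data", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		// Paths from the log; 86400s = 24h, matching -checkend 86400.
		for _, p := range []string{
			"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
			"/var/lib/minikube/certs/etcd/server.crt",
			"/var/lib/minikube/certs/front-proxy-client.crt",
		} {
			soon, err := expiresWithin(p, 24*time.Hour)
			if err != nil {
				fmt.Fprintln(os.Stderr, err)
				continue
			}
			if soon {
				fmt.Println(p, "expires within 24h; would be regenerated")
			}
		}
	}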
	I1104 12:08:19.838902   86402 kubeadm.go:392] StartCluster: {Name:old-k8s-version-589257 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-589257 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.180 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1104 12:08:19.839018   86402 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1104 12:08:19.839075   86402 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1104 12:08:19.880407   86402 cri.go:89] found id: ""
	I1104 12:08:19.880486   86402 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1104 12:08:19.891135   86402 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1104 12:08:19.891156   86402 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1104 12:08:19.891219   86402 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1104 12:08:19.901437   86402 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1104 12:08:19.902325   86402 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-589257" does not appear in /home/jenkins/minikube-integration/19906-19898/kubeconfig
	I1104 12:08:19.902941   86402 kubeconfig.go:62] /home/jenkins/minikube-integration/19906-19898/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-589257" cluster setting kubeconfig missing "old-k8s-version-589257" context setting]
	I1104 12:08:19.903879   86402 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19906-19898/kubeconfig: {Name:mk164657c178e6abe086acb5c1cadb969cb2b0cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 12:08:19.937877   86402 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1104 12:08:19.948669   86402 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.180
	I1104 12:08:19.948701   86402 kubeadm.go:1160] stopping kube-system containers ...
	I1104 12:08:19.948711   86402 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1104 12:08:19.948752   86402 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1104 12:08:19.988249   86402 cri.go:89] found id: ""
	I1104 12:08:19.988344   86402 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1104 12:08:20.006949   86402 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1104 12:08:20.020677   86402 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1104 12:08:20.020700   86402 kubeadm.go:157] found existing configuration files:
	
	I1104 12:08:20.020747   86402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1104 12:08:20.031509   86402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1104 12:08:20.031566   86402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1104 12:08:20.042229   86402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1104 12:08:20.054695   86402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1104 12:08:20.054810   86402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1104 12:08:20.067410   86402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1104 12:08:20.078639   86402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1104 12:08:20.078711   86402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1104 12:08:20.091357   86402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1104 12:08:20.100986   86402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1104 12:08:20.101071   86402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1104 12:08:20.110345   86402 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1104 12:08:20.119778   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1104 12:08:20.281637   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1104 12:08:21.006838   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1104 12:08:21.234671   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1104 12:08:21.335720   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1104 12:08:21.437522   86402 api_server.go:52] waiting for apiserver process to appear ...
	I1104 12:08:21.437615   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:21.929963   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:21.930522   85500 main.go:141] libmachine: (no-preload-908370) DBG | unable to find current IP address of domain no-preload-908370 in network mk-no-preload-908370
	I1104 12:08:21.930552   85500 main.go:141] libmachine: (no-preload-908370) DBG | I1104 12:08:21.930461   87562 retry.go:31] will retry after 2.171964123s: waiting for machine to come up
	I1104 12:08:24.103844   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:24.104303   85500 main.go:141] libmachine: (no-preload-908370) DBG | unable to find current IP address of domain no-preload-908370 in network mk-no-preload-908370
	I1104 12:08:24.104326   85500 main.go:141] libmachine: (no-preload-908370) DBG | I1104 12:08:24.104257   87562 retry.go:31] will retry after 2.838813818s: waiting for machine to come up
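	The retry.go lines above ("will retry after 2.171964123s", then 2.838813818s) show libmachine waiting for the VM to obtain an IP address, with a growing delay between attempts. A small, hypothetical backoff loop in the same spirit; the growth factor and jitter here are assumptions, not minikube's retry package:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retryBackoff keeps calling fn until it succeeds or attempts run out,
	// sleeping a growing, lightly jittered delay between tries — the pattern
	// behind the "will retry after ..." lines. Purely illustrative.
	func retryBackoff(attempts int, base time.Duration, fn func() error) error {
		delay := base
		for i := 0; i < attempts; i++ {
			if err := fn(); err == nil {
				return nil
			}
			jitter := time.Duration(rand.Int63n(int64(delay) / 4))
			fmt.Printf("will retry after %v\n", delay+jitter)
			time.Sleep(delay + jitter)
			delay = delay * 3 / 2 // grow the wait between attempts
		}
		return errors.New("machine never reported an IP address")
	}

	func main() {
		_ = retryBackoff(10, time.Second, func() error {
			return errors.New("unable to find current IP address") // stand-in for the DHCP lookup
		})
	}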
	I1104 12:08:22.056858   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:24.057127   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:23.351405   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:25.850834   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:21.938086   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:22.438198   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:22.938624   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:23.438021   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:23.938119   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:24.438470   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:24.937687   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:25.438045   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:25.937696   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:26.438585   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
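	api_server.go:52 announces "waiting for apiserver process to appear", and the pgrep calls above repeat roughly every 500ms until it does. A minimal poll-with-timeout sketch of that loop; the interval, timeout, and wrapper function are assumptions, not minikube's actual helper:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
		"time"
	)

	// waitForAPIServer polls `sudo pgrep -xnf kube-apiserver.*minikube.*` about
	// every 500ms, as the log shows, until the process appears or the timeout elapses.
	func waitForAPIServer(timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
				return nil // apiserver process found
			}
			time.Sleep(500 * time.Millisecond)
		}
		return errors.New("timed out waiting for apiserver process")
	}

	func main() {
		if err := waitForAPIServer(4 * time.Minute); err != nil {
			fmt.Println(err)
		}
	}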
	I1104 12:08:26.944977   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:26.945367   85500 main.go:141] libmachine: (no-preload-908370) DBG | unable to find current IP address of domain no-preload-908370 in network mk-no-preload-908370
	I1104 12:08:26.945395   85500 main.go:141] libmachine: (no-preload-908370) DBG | I1104 12:08:26.945349   87562 retry.go:31] will retry after 2.799785534s: waiting for machine to come up
	I1104 12:08:29.746349   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:29.746747   85500 main.go:141] libmachine: (no-preload-908370) Found IP for machine: 192.168.61.91
	I1104 12:08:29.746774   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has current primary IP address 192.168.61.91 and MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:29.746779   85500 main.go:141] libmachine: (no-preload-908370) Reserving static IP address...
	I1104 12:08:29.747195   85500 main.go:141] libmachine: (no-preload-908370) DBG | found host DHCP lease matching {name: "no-preload-908370", mac: "52:54:00:f8:66:d5", ip: "192.168.61.91"} in network mk-no-preload-908370: {Iface:virbr3 ExpiryTime:2024-11-04 13:08:23 +0000 UTC Type:0 Mac:52:54:00:f8:66:d5 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:no-preload-908370 Clientid:01:52:54:00:f8:66:d5}
	I1104 12:08:29.747218   85500 main.go:141] libmachine: (no-preload-908370) Reserved static IP address: 192.168.61.91
	I1104 12:08:29.747234   85500 main.go:141] libmachine: (no-preload-908370) DBG | skip adding static IP to network mk-no-preload-908370 - found existing host DHCP lease matching {name: "no-preload-908370", mac: "52:54:00:f8:66:d5", ip: "192.168.61.91"}
	I1104 12:08:29.747248   85500 main.go:141] libmachine: (no-preload-908370) DBG | Getting to WaitForSSH function...
	I1104 12:08:29.747258   85500 main.go:141] libmachine: (no-preload-908370) Waiting for SSH to be available...
	I1104 12:08:29.749405   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:29.749694   85500 main.go:141] libmachine: (no-preload-908370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:66:d5", ip: ""} in network mk-no-preload-908370: {Iface:virbr3 ExpiryTime:2024-11-04 13:08:23 +0000 UTC Type:0 Mac:52:54:00:f8:66:d5 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:no-preload-908370 Clientid:01:52:54:00:f8:66:d5}
	I1104 12:08:29.749728   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined IP address 192.168.61.91 and MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:29.749887   85500 main.go:141] libmachine: (no-preload-908370) DBG | Using SSH client type: external
	I1104 12:08:29.749908   85500 main.go:141] libmachine: (no-preload-908370) DBG | Using SSH private key: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/no-preload-908370/id_rsa (-rw-------)
	I1104 12:08:29.749933   85500 main.go:141] libmachine: (no-preload-908370) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.91 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19906-19898/.minikube/machines/no-preload-908370/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1104 12:08:29.749951   85500 main.go:141] libmachine: (no-preload-908370) DBG | About to run SSH command:
	I1104 12:08:29.749966   85500 main.go:141] libmachine: (no-preload-908370) DBG | exit 0
	I1104 12:08:29.873121   85500 main.go:141] libmachine: (no-preload-908370) DBG | SSH cmd err, output: <nil>: 
	I1104 12:08:29.873472   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetConfigRaw
	I1104 12:08:29.874081   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetIP
	I1104 12:08:29.876737   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:29.877127   85500 main.go:141] libmachine: (no-preload-908370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:66:d5", ip: ""} in network mk-no-preload-908370: {Iface:virbr3 ExpiryTime:2024-11-04 13:08:23 +0000 UTC Type:0 Mac:52:54:00:f8:66:d5 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:no-preload-908370 Clientid:01:52:54:00:f8:66:d5}
	I1104 12:08:29.877155   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined IP address 192.168.61.91 and MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:29.877473   85500 profile.go:143] Saving config to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/no-preload-908370/config.json ...
	I1104 12:08:29.877717   85500 machine.go:93] provisionDockerMachine start ...
	I1104 12:08:29.877740   85500 main.go:141] libmachine: (no-preload-908370) Calling .DriverName
	I1104 12:08:29.877936   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHHostname
	I1104 12:08:29.880272   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:29.880522   85500 main.go:141] libmachine: (no-preload-908370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:66:d5", ip: ""} in network mk-no-preload-908370: {Iface:virbr3 ExpiryTime:2024-11-04 13:08:23 +0000 UTC Type:0 Mac:52:54:00:f8:66:d5 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:no-preload-908370 Clientid:01:52:54:00:f8:66:d5}
	I1104 12:08:29.880543   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined IP address 192.168.61.91 and MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:29.880718   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHPort
	I1104 12:08:29.880883   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHKeyPath
	I1104 12:08:29.881048   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHKeyPath
	I1104 12:08:29.881186   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHUsername
	I1104 12:08:29.881338   85500 main.go:141] libmachine: Using SSH client type: native
	I1104 12:08:29.881511   85500 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.91 22 <nil> <nil>}
	I1104 12:08:29.881524   85500 main.go:141] libmachine: About to run SSH command:
	hostname
	I1104 12:08:29.989431   85500 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1104 12:08:29.989460   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetMachineName
	I1104 12:08:29.989725   85500 buildroot.go:166] provisioning hostname "no-preload-908370"
	I1104 12:08:29.989757   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetMachineName
	I1104 12:08:29.989974   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHHostname
	I1104 12:08:29.992679   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:29.993028   85500 main.go:141] libmachine: (no-preload-908370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:66:d5", ip: ""} in network mk-no-preload-908370: {Iface:virbr3 ExpiryTime:2024-11-04 13:08:23 +0000 UTC Type:0 Mac:52:54:00:f8:66:d5 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:no-preload-908370 Clientid:01:52:54:00:f8:66:d5}
	I1104 12:08:29.993057   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined IP address 192.168.61.91 and MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:29.993222   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHPort
	I1104 12:08:29.993425   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHKeyPath
	I1104 12:08:29.993553   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHKeyPath
	I1104 12:08:29.993683   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHUsername
	I1104 12:08:29.993817   85500 main.go:141] libmachine: Using SSH client type: native
	I1104 12:08:29.994000   85500 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.91 22 <nil> <nil>}
	I1104 12:08:29.994016   85500 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-908370 && echo "no-preload-908370" | sudo tee /etc/hostname
	I1104 12:08:30.118321   85500 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-908370
	
	I1104 12:08:30.118361   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHHostname
	I1104 12:08:30.121095   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:30.121475   85500 main.go:141] libmachine: (no-preload-908370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:66:d5", ip: ""} in network mk-no-preload-908370: {Iface:virbr3 ExpiryTime:2024-11-04 13:08:23 +0000 UTC Type:0 Mac:52:54:00:f8:66:d5 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:no-preload-908370 Clientid:01:52:54:00:f8:66:d5}
	I1104 12:08:30.121509   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined IP address 192.168.61.91 and MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:30.121697   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHPort
	I1104 12:08:30.121866   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHKeyPath
	I1104 12:08:30.122040   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHKeyPath
	I1104 12:08:30.122176   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHUsername
	I1104 12:08:30.122343   85500 main.go:141] libmachine: Using SSH client type: native
	I1104 12:08:30.122525   85500 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.91 22 <nil> <nil>}
	I1104 12:08:30.122547   85500 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-908370' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-908370/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-908370' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1104 12:08:26.557368   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:29.056377   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:28.349510   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:30.350431   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:26.937831   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:27.438442   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:27.938240   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:28.438463   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:28.937958   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:29.437676   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:29.938298   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:30.438423   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:30.937953   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:31.438075   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:30.237340   85500 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1104 12:08:30.237370   85500 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19906-19898/.minikube CaCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19906-19898/.minikube}
	I1104 12:08:30.237413   85500 buildroot.go:174] setting up certificates
	I1104 12:08:30.237429   85500 provision.go:84] configureAuth start
	I1104 12:08:30.237446   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetMachineName
	I1104 12:08:30.237725   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetIP
	I1104 12:08:30.240026   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:30.240350   85500 main.go:141] libmachine: (no-preload-908370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:66:d5", ip: ""} in network mk-no-preload-908370: {Iface:virbr3 ExpiryTime:2024-11-04 13:08:23 +0000 UTC Type:0 Mac:52:54:00:f8:66:d5 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:no-preload-908370 Clientid:01:52:54:00:f8:66:d5}
	I1104 12:08:30.240380   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined IP address 192.168.61.91 and MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:30.240472   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHHostname
	I1104 12:08:30.242777   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:30.243101   85500 main.go:141] libmachine: (no-preload-908370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:66:d5", ip: ""} in network mk-no-preload-908370: {Iface:virbr3 ExpiryTime:2024-11-04 13:08:23 +0000 UTC Type:0 Mac:52:54:00:f8:66:d5 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:no-preload-908370 Clientid:01:52:54:00:f8:66:d5}
	I1104 12:08:30.243119   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined IP address 192.168.61.91 and MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:30.243302   85500 provision.go:143] copyHostCerts
	I1104 12:08:30.243358   85500 exec_runner.go:144] found /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem, removing ...
	I1104 12:08:30.243368   85500 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem
	I1104 12:08:30.243427   85500 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem (1078 bytes)
	I1104 12:08:30.243532   85500 exec_runner.go:144] found /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem, removing ...
	I1104 12:08:30.243542   85500 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem
	I1104 12:08:30.243565   85500 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem (1123 bytes)
	I1104 12:08:30.243635   85500 exec_runner.go:144] found /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem, removing ...
	I1104 12:08:30.243643   85500 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem
	I1104 12:08:30.243661   85500 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem (1679 bytes)
	I1104 12:08:30.243719   85500 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem org=jenkins.no-preload-908370 san=[127.0.0.1 192.168.61.91 localhost minikube no-preload-908370]
	I1104 12:08:30.515270   85500 provision.go:177] copyRemoteCerts
	I1104 12:08:30.515350   85500 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1104 12:08:30.515381   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHHostname
	I1104 12:08:30.518651   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:30.519188   85500 main.go:141] libmachine: (no-preload-908370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:66:d5", ip: ""} in network mk-no-preload-908370: {Iface:virbr3 ExpiryTime:2024-11-04 13:08:23 +0000 UTC Type:0 Mac:52:54:00:f8:66:d5 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:no-preload-908370 Clientid:01:52:54:00:f8:66:d5}
	I1104 12:08:30.519218   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined IP address 192.168.61.91 and MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:30.519420   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHPort
	I1104 12:08:30.519600   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHKeyPath
	I1104 12:08:30.519777   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHUsername
	I1104 12:08:30.519896   85500 sshutil.go:53] new ssh client: &{IP:192.168.61.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/no-preload-908370/id_rsa Username:docker}
	I1104 12:08:30.603170   85500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1104 12:08:30.626226   85500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1104 12:08:30.649353   85500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1104 12:08:30.684759   85500 provision.go:87] duration metric: took 447.313588ms to configureAuth
	I1104 12:08:30.684789   85500 buildroot.go:189] setting minikube options for container-runtime
	I1104 12:08:30.684962   85500 config.go:182] Loaded profile config "no-preload-908370": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1104 12:08:30.685029   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHHostname
	I1104 12:08:30.687429   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:30.687815   85500 main.go:141] libmachine: (no-preload-908370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:66:d5", ip: ""} in network mk-no-preload-908370: {Iface:virbr3 ExpiryTime:2024-11-04 13:08:23 +0000 UTC Type:0 Mac:52:54:00:f8:66:d5 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:no-preload-908370 Clientid:01:52:54:00:f8:66:d5}
	I1104 12:08:30.687840   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined IP address 192.168.61.91 and MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:30.688015   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHPort
	I1104 12:08:30.688192   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHKeyPath
	I1104 12:08:30.688325   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHKeyPath
	I1104 12:08:30.688471   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHUsername
	I1104 12:08:30.688640   85500 main.go:141] libmachine: Using SSH client type: native
	I1104 12:08:30.688830   85500 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.91 22 <nil> <nil>}
	I1104 12:08:30.688848   85500 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1104 12:08:30.919118   85500 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1104 12:08:30.919142   85500 machine.go:96] duration metric: took 1.041410402s to provisionDockerMachine
	I1104 12:08:30.919156   85500 start.go:293] postStartSetup for "no-preload-908370" (driver="kvm2")
	I1104 12:08:30.919169   85500 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1104 12:08:30.919200   85500 main.go:141] libmachine: (no-preload-908370) Calling .DriverName
	I1104 12:08:30.919513   85500 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1104 12:08:30.919538   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHHostname
	I1104 12:08:30.922075   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:30.922485   85500 main.go:141] libmachine: (no-preload-908370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:66:d5", ip: ""} in network mk-no-preload-908370: {Iface:virbr3 ExpiryTime:2024-11-04 13:08:23 +0000 UTC Type:0 Mac:52:54:00:f8:66:d5 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:no-preload-908370 Clientid:01:52:54:00:f8:66:d5}
	I1104 12:08:30.922510   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined IP address 192.168.61.91 and MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:30.922615   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHPort
	I1104 12:08:30.922823   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHKeyPath
	I1104 12:08:30.922991   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHUsername
	I1104 12:08:30.923107   85500 sshutil.go:53] new ssh client: &{IP:192.168.61.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/no-preload-908370/id_rsa Username:docker}
	I1104 12:08:31.007598   85500 ssh_runner.go:195] Run: cat /etc/os-release
	I1104 12:08:31.011558   85500 info.go:137] Remote host: Buildroot 2023.02.9
	I1104 12:08:31.011588   85500 filesync.go:126] Scanning /home/jenkins/minikube-integration/19906-19898/.minikube/addons for local assets ...
	I1104 12:08:31.011665   85500 filesync.go:126] Scanning /home/jenkins/minikube-integration/19906-19898/.minikube/files for local assets ...
	I1104 12:08:31.011766   85500 filesync.go:149] local asset: /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem -> 272182.pem in /etc/ssl/certs
	I1104 12:08:31.011859   85500 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1104 12:08:31.020788   85500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem --> /etc/ssl/certs/272182.pem (1708 bytes)
	I1104 12:08:31.044379   85500 start.go:296] duration metric: took 125.209775ms for postStartSetup
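
For context, a minimal Go sketch of the filesync scan logged above: every file under the profile's local "files" tree maps to the identical absolute path on the guest (e.g. .../.minikube/files/etc/ssl/certs/272182.pem -> /etc/ssl/certs/272182.pem). This is an illustration only, not minikube's filesync implementation; the root path is copied from the log and any directory tree works.

package main

import (
	"fmt"
	"io/fs"
	"path/filepath"
	"strings"
)

// mapLocalAssets walks the local "files" tree and maps each file to the
// guest path it would be synced to (the path minus the local root prefix).
func mapLocalAssets(root string) (map[string]string, error) {
	assets := map[string]string{}
	err := filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
		if err != nil || d.IsDir() {
			return err
		}
		dest := strings.TrimPrefix(path, root) // e.g. "/etc/ssl/certs/272182.pem"
		assets[path] = dest
		return nil
	})
	return assets, err
}

func main() {
	// Root taken from the log; substitute any local directory to try the sketch.
	assets, err := mapLocalAssets("/home/jenkins/minikube-integration/19906-19898/.minikube/files")
	if err != nil {
		fmt.Println(err)
		return
	}
	for src, dst := range assets {
		fmt.Printf("%s -> %s\n", src, dst)
	}
}
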
	I1104 12:08:31.044414   85500 fix.go:56] duration metric: took 19.154609071s for fixHost
	I1104 12:08:31.044442   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHHostname
	I1104 12:08:31.047152   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:31.047426   85500 main.go:141] libmachine: (no-preload-908370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:66:d5", ip: ""} in network mk-no-preload-908370: {Iface:virbr3 ExpiryTime:2024-11-04 13:08:23 +0000 UTC Type:0 Mac:52:54:00:f8:66:d5 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:no-preload-908370 Clientid:01:52:54:00:f8:66:d5}
	I1104 12:08:31.047461   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined IP address 192.168.61.91 and MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:31.047639   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHPort
	I1104 12:08:31.047829   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHKeyPath
	I1104 12:08:31.047976   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHKeyPath
	I1104 12:08:31.048138   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHUsername
	I1104 12:08:31.048296   85500 main.go:141] libmachine: Using SSH client type: native
	I1104 12:08:31.048464   85500 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.91 22 <nil> <nil>}
	I1104 12:08:31.048474   85500 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1104 12:08:31.157723   85500 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730722111.115015995
	
	I1104 12:08:31.157747   85500 fix.go:216] guest clock: 1730722111.115015995
	I1104 12:08:31.157758   85500 fix.go:229] Guest: 2024-11-04 12:08:31.115015995 +0000 UTC Remote: 2024-11-04 12:08:31.044427312 +0000 UTC m=+350.890212897 (delta=70.588683ms)
	I1104 12:08:31.157829   85500 fix.go:200] guest clock delta is within tolerance: 70.588683ms
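
The fix.go lines above compare the guest clock (read over SSH via "date +%s.%N") against the host-side timestamp and accept the ~70ms drift as within tolerance. A minimal Go sketch of that comparison follows; the 1s tolerance and the synthetic "remote" value are assumptions for illustration, not minikube's actual constants.

package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

// parseGuestClock converts "date +%s.%N" output into a time.Time.
// Float parsing is precise enough for a drift check at millisecond scale.
func parseGuestClock(out string) (time.Time, error) {
	secs, err := strconv.ParseFloat(out, 64)
	if err != nil {
		return time.Time{}, err
	}
	sec := int64(secs)
	return time.Unix(sec, int64((secs-float64(sec))*1e9)), nil
}

func main() {
	// Guest output copied from the log line above.
	guest, err := parseGuestClock("1730722111.115015995")
	if err != nil {
		panic(err)
	}
	// In the real flow this is the local wall clock when the SSH command
	// returns; a synthetic value is used here so the example is deterministic.
	remote := guest.Add(-70588683 * time.Nanosecond)

	delta := guest.Sub(remote)
	const tolerance = time.Second // assumed threshold for the sketch
	if math.Abs(float64(delta)) <= float64(tolerance) {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance, resync needed\n", delta)
	}
}
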
	I1104 12:08:31.157841   85500 start.go:83] releasing machines lock for "no-preload-908370", held for 19.268070408s
	I1104 12:08:31.157875   85500 main.go:141] libmachine: (no-preload-908370) Calling .DriverName
	I1104 12:08:31.158131   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetIP
	I1104 12:08:31.160806   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:31.161159   85500 main.go:141] libmachine: (no-preload-908370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:66:d5", ip: ""} in network mk-no-preload-908370: {Iface:virbr3 ExpiryTime:2024-11-04 13:08:23 +0000 UTC Type:0 Mac:52:54:00:f8:66:d5 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:no-preload-908370 Clientid:01:52:54:00:f8:66:d5}
	I1104 12:08:31.161191   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined IP address 192.168.61.91 and MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:31.161371   85500 main.go:141] libmachine: (no-preload-908370) Calling .DriverName
	I1104 12:08:31.161907   85500 main.go:141] libmachine: (no-preload-908370) Calling .DriverName
	I1104 12:08:31.162092   85500 main.go:141] libmachine: (no-preload-908370) Calling .DriverName
	I1104 12:08:31.162174   85500 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1104 12:08:31.162217   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHHostname
	I1104 12:08:31.162444   85500 ssh_runner.go:195] Run: cat /version.json
	I1104 12:08:31.162470   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHHostname
	I1104 12:08:31.165069   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:31.165316   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:31.165505   85500 main.go:141] libmachine: (no-preload-908370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:66:d5", ip: ""} in network mk-no-preload-908370: {Iface:virbr3 ExpiryTime:2024-11-04 13:08:23 +0000 UTC Type:0 Mac:52:54:00:f8:66:d5 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:no-preload-908370 Clientid:01:52:54:00:f8:66:d5}
	I1104 12:08:31.165532   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined IP address 192.168.61.91 and MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:31.165656   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHPort
	I1104 12:08:31.165771   85500 main.go:141] libmachine: (no-preload-908370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:66:d5", ip: ""} in network mk-no-preload-908370: {Iface:virbr3 ExpiryTime:2024-11-04 13:08:23 +0000 UTC Type:0 Mac:52:54:00:f8:66:d5 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:no-preload-908370 Clientid:01:52:54:00:f8:66:d5}
	I1104 12:08:31.165795   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined IP address 192.168.61.91 and MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:31.165842   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHKeyPath
	I1104 12:08:31.166006   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHPort
	I1104 12:08:31.166024   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHUsername
	I1104 12:08:31.166186   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHKeyPath
	I1104 12:08:31.166183   85500 sshutil.go:53] new ssh client: &{IP:192.168.61.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/no-preload-908370/id_rsa Username:docker}
	I1104 12:08:31.166327   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHUsername
	I1104 12:08:31.166449   85500 sshutil.go:53] new ssh client: &{IP:192.168.61.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/no-preload-908370/id_rsa Username:docker}
	I1104 12:08:31.267746   85500 ssh_runner.go:195] Run: systemctl --version
	I1104 12:08:31.273307   85500 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1104 12:08:31.410198   85500 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1104 12:08:31.416652   85500 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1104 12:08:31.416726   85500 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1104 12:08:31.432260   85500 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1104 12:08:31.432288   85500 start.go:495] detecting cgroup driver to use...
	I1104 12:08:31.432345   85500 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1104 12:08:31.453134   85500 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1104 12:08:31.467457   85500 docker.go:217] disabling cri-docker service (if available) ...
	I1104 12:08:31.467516   85500 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1104 12:08:31.481392   85500 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1104 12:08:31.495740   85500 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1104 12:08:31.617549   85500 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1104 12:08:31.802455   85500 docker.go:233] disabling docker service ...
	I1104 12:08:31.802511   85500 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1104 12:08:31.815534   85500 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1104 12:08:31.827495   85500 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1104 12:08:31.938344   85500 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1104 12:08:32.042827   85500 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
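
The docker.go lines above stop, disable, and mask the cri-docker and docker units so that only CRI-O ends up serving the CRI socket. A rough Go sketch of that sequence (run as root) is below; the exact verb/unit pairing in minikube differs slightly from this simplification, and failures on already-stopped units are tolerated.

package main

import (
	"fmt"
	"os/exec"
)

// disableUnit stops, disables, and masks a systemd unit, mirroring the
// stop/disable/mask steps in the log. Errors are reported but not fatal,
// since a unit that is already stopped or absent makes these commands fail.
func disableUnit(unit string) {
	for _, args := range [][]string{
		{"systemctl", "stop", "-f", unit},
		{"systemctl", "disable", unit},
		{"systemctl", "mask", unit},
	} {
		if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
			fmt.Printf("%v: %v: %s\n", args, err, out)
		}
	}
}

func main() {
	for _, unit := range []string{"cri-docker.socket", "cri-docker.service", "docker.socket", "docker.service"} {
		disableUnit(unit)
	}
}
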
	I1104 12:08:32.056126   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1104 12:08:32.074274   85500 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1104 12:08:32.074337   85500 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:08:32.084061   85500 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1104 12:08:32.084138   85500 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:08:32.093533   85500 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:08:32.104351   85500 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:08:32.113753   85500 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1104 12:08:32.123391   85500 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:08:32.133089   85500 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:08:32.149073   85500 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
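
The crio.go steps above rewrite /etc/crio/crio.conf.d/02-crio.conf in place with sed: pin pause_image to registry.k8s.io/pause:3.10 and set cgroup_manager to cgroupfs. A small Go sketch of the same substitutions applied to an in-memory config follows; the sample snippet is made up for illustration and is not the real file.

package main

import (
	"fmt"
	"regexp"
)

// rewriteCrioConf mirrors the sed edits from the log: replace whatever
// pause_image and cgroup_manager lines exist with the desired values.
func rewriteCrioConf(conf, pauseImage, cgroupManager string) string {
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, fmt.Sprintf(`pause_image = "%s"`, pauseImage))
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, fmt.Sprintf(`cgroup_manager = "%s"`, cgroupManager))
	return conf
}

func main() {
	sample := `pause_image = "registry.k8s.io/pause:3.9"
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
`
	fmt.Print(rewriteCrioConf(sample, "registry.k8s.io/pause:3.10", "cgroupfs"))
}
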
	I1104 12:08:32.159888   85500 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1104 12:08:32.169208   85500 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1104 12:08:32.169279   85500 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1104 12:08:32.181319   85500 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
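
When the net.bridge.bridge-nf-call-iptables sysctl cannot be read (the status-255 error above, noted as "might be okay"), the tool falls back to loading the br_netfilter module and then enables IPv4 forwarding. A minimal Go sketch of that fallback, assuming root privileges:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// ensureBridgeNetfilter mirrors the fallback in the log: if the bridge
// netfilter sysctl is not present, load br_netfilter, then turn on
// IPv4 forwarding via /proc.
func ensureBridgeNetfilter() error {
	if err := exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		// The sysctl only exists once the module is loaded.
		if err := exec.Command("modprobe", "br_netfilter"); err.Run() != nil {
			return fmt.Errorf("modprobe br_netfilter failed")
		}
	}
	return os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0644)
}

func main() {
	if err := ensureBridgeNetfilter(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("bridge netfilter and ip_forward configured")
}
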
	I1104 12:08:32.192472   85500 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1104 12:08:32.300710   85500 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1104 12:08:32.386906   85500 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1104 12:08:32.386980   85500 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1104 12:08:32.391498   85500 start.go:563] Will wait 60s for crictl version
	I1104 12:08:32.391554   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:08:32.395471   85500 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1104 12:08:32.439094   85500 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
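
The "Will wait 60s for socket path /var/run/crio/crio.sock" step before the version probe is a simple poll-until-deadline loop. A Go sketch is below; the 500ms poll interval is an assumption, only the 60s budget appears in the log.

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until the given path exists or the timeout elapses,
// mirroring the wait for the CRI-O socket after restarting the service.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond) // assumed interval
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("crio socket is ready")
}
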
	I1104 12:08:32.439168   85500 ssh_runner.go:195] Run: crio --version
	I1104 12:08:32.466609   85500 ssh_runner.go:195] Run: crio --version
	I1104 12:08:32.499305   85500 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1104 12:08:32.500825   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetIP
	I1104 12:08:32.503461   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:32.503827   85500 main.go:141] libmachine: (no-preload-908370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:66:d5", ip: ""} in network mk-no-preload-908370: {Iface:virbr3 ExpiryTime:2024-11-04 13:08:23 +0000 UTC Type:0 Mac:52:54:00:f8:66:d5 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:no-preload-908370 Clientid:01:52:54:00:f8:66:d5}
	I1104 12:08:32.503857   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined IP address 192.168.61.91 and MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:32.504039   85500 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1104 12:08:32.508082   85500 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
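
The /etc/hosts command above drops any existing host.minikube.internal line and appends a fresh tab-separated entry for 192.168.61.1. The same upsert expressed as a self-contained Go sketch, operating on a string rather than the real file:

package main

import (
	"fmt"
	"strings"
)

// upsertHostsEntry removes any line already ending in "\t<hostname>" and
// appends "<ip>\t<hostname>", mirroring the grep -v / echo pipeline in the log.
func upsertHostsEntry(hosts, ip, hostname string) string {
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+hostname) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+hostname)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	sample := "127.0.0.1\tlocalhost\n192.168.61.2\thost.minikube.internal\n"
	fmt.Print(upsertHostsEntry(sample, "192.168.61.1", "host.minikube.internal"))
}
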
	I1104 12:08:32.520202   85500 kubeadm.go:883] updating cluster {Name:no-preload-908370 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-908370 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.91 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1104 12:08:32.520359   85500 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1104 12:08:32.520402   85500 ssh_runner.go:195] Run: sudo crictl images --output json
	I1104 12:08:32.553752   85500 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1104 12:08:32.553781   85500 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.2 registry.k8s.io/kube-controller-manager:v1.31.2 registry.k8s.io/kube-scheduler:v1.31.2 registry.k8s.io/kube-proxy:v1.31.2 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1104 12:08:32.553844   85500 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1104 12:08:32.553844   85500 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1104 12:08:32.553868   85500 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.2
	I1104 12:08:32.553853   85500 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.2
	I1104 12:08:32.553886   85500 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1104 12:08:32.553925   85500 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1104 12:08:32.553969   85500 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1104 12:08:32.553978   85500 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.2
	I1104 12:08:32.555506   85500 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1104 12:08:32.555518   85500 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.2
	I1104 12:08:32.555510   85500 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1104 12:08:32.555513   85500 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.2
	I1104 12:08:32.555591   85500 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1104 12:08:32.555601   85500 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.2
	I1104 12:08:32.555514   85500 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I1104 12:08:32.555658   85500 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I1104 12:08:32.706982   85500 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I1104 12:08:32.707334   85500 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.2
	I1104 12:08:32.712904   85500 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.2
	I1104 12:08:32.721917   85500 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I1104 12:08:32.727829   85500 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.2
	I1104 12:08:32.741130   85500 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I1104 12:08:32.743716   85500 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.2
	I1104 12:08:32.796406   85500 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I1104 12:08:32.796448   85500 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I1104 12:08:32.796502   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:08:32.814658   85500 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.2" does not exist at hash "9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173" in container runtime
	I1104 12:08:32.814697   85500 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.2
	I1104 12:08:32.814735   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:08:32.828308   85500 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.2" does not exist at hash "847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856" in container runtime
	I1104 12:08:32.828362   85500 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.2
	I1104 12:08:32.828416   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:08:32.882090   85500 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I1104 12:08:32.882140   85500 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I1104 12:08:32.882205   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:08:32.886473   85500 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.2" does not exist at hash "0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503" in container runtime
	I1104 12:08:32.886518   85500 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1104 12:08:32.886567   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:08:32.956331   85500 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.2" needs transfer: "registry.k8s.io/kube-proxy:v1.31.2" does not exist at hash "505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38" in container runtime
	I1104 12:08:32.956394   85500 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.2
	I1104 12:08:32.956414   85500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1104 12:08:32.956462   85500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1104 12:08:32.956427   85500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1104 12:08:32.956521   85500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1104 12:08:32.956425   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:08:32.956506   85500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1104 12:08:33.061683   85500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1104 12:08:33.061723   85500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1104 12:08:33.061752   85500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1104 12:08:33.061790   85500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1104 12:08:33.061836   85500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1104 12:08:33.061893   85500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1104 12:08:33.168519   85500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1104 12:08:33.168596   85500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1104 12:08:33.187540   85500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1104 12:08:33.188933   85500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1104 12:08:33.189015   85500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1104 12:08:33.199281   85500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1104 12:08:33.285086   85500 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2
	I1104 12:08:33.285145   85500 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2
	I1104 12:08:33.285245   85500 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1104 12:08:33.285247   85500 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1104 12:08:33.307647   85500 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I1104 12:08:33.307769   85500 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I1104 12:08:33.307784   85500 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2
	I1104 12:08:33.307818   85500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1104 12:08:33.307869   85500 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1104 12:08:33.312697   85500 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I1104 12:08:33.312808   85500 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I1104 12:08:33.314341   85500 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.2 (exists)
	I1104 12:08:33.314358   85500 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1104 12:08:33.314396   85500 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1104 12:08:33.314535   85500 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.2 (exists)
	I1104 12:08:33.319449   85500 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.2 (exists)
	I1104 12:08:33.319604   85500 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I1104 12:08:33.356390   85500 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I1104 12:08:33.356478   85500 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2
	I1104 12:08:33.356569   85500 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.2
	I1104 12:08:33.512915   85500 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1104 12:08:31.057314   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:33.059599   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:32.350656   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:34.352338   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
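
The pod_ready.go lines interleaved here (from the other test profiles running in parallel) poll the metrics-server pod until its Ready condition turns True. For reference, a hedged client-go sketch of such a readiness poll; the kubeconfig path is a placeholder and this is not the helper minikube's tests actually use.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True, which is the
// status check behind the has status "Ready":"False" log lines above.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "metrics-server-6867b74b74-2wl94", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		fmt.Println("pod not Ready yet")
		time.Sleep(2 * time.Second)
	}
}
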
	I1104 12:08:31.938577   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:32.438561   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:32.938188   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:33.437856   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:33.938433   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:34.438381   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:34.938164   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:35.438120   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:35.937802   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:36.438365   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:35.736963   85500 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2: (2.42254522s)
	I1104 12:08:35.736994   85500 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2 from cache
	I1104 12:08:35.737014   85500 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1104 12:08:35.737027   85500 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.2: (2.380435224s)
	I1104 12:08:35.737058   85500 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.2 (exists)
	I1104 12:08:35.737063   85500 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1104 12:08:35.737104   85500 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.224165247s)
	I1104 12:08:35.737156   85500 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1104 12:08:35.737191   85500 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1104 12:08:35.737267   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:08:37.693026   85500 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2: (1.955928101s)
	I1104 12:08:37.693065   85500 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2 from cache
	I1104 12:08:37.693086   85500 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1104 12:08:37.693047   85500 ssh_runner.go:235] Completed: which crictl: (1.955763498s)
	I1104 12:08:37.693168   85500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1104 12:08:37.693131   85500 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1104 12:08:39.156860   85500 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2: (1.463570619s)
	I1104 12:08:39.156894   85500 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2 from cache
	I1104 12:08:39.156922   85500 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I1104 12:08:39.156930   85500 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.463741565s)
	I1104 12:08:39.156980   85500 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I1104 12:08:39.156998   85500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1104 12:08:35.625930   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:38.057567   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:36.850619   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:38.851157   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:40.852272   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:36.938295   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:37.437646   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:37.937807   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:38.438623   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:38.938662   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:39.438288   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:39.938048   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:40.438404   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:40.938494   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:41.437875   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:42.701724   85500 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.544718982s)
	I1104 12:08:42.701751   85500 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I1104 12:08:42.701771   85500 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I1104 12:08:42.701810   85500 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I1104 12:08:42.701826   85500 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.544784275s)
	I1104 12:08:42.701912   85500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1104 12:08:44.666599   85500 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.964646885s)
	I1104 12:08:44.666653   85500 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1104 12:08:44.666723   85500 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (1.964896366s)
	I1104 12:08:44.666744   85500 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I1104 12:08:44.666748   85500 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1104 12:08:44.666765   85500 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.2
	I1104 12:08:44.666807   85500 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2
	I1104 12:08:44.671475   85500 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1104 12:08:40.556827   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:42.557662   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:45.058481   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:43.351505   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:45.851360   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:41.938001   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:42.438702   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:42.938239   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:43.438469   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:43.938465   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:44.437744   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:44.938478   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:45.437757   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:45.938035   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:46.438173   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:46.627407   85500 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2: (1.960571593s)
	I1104 12:08:46.627437   85500 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2 from cache
	I1104 12:08:46.627473   85500 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1104 12:08:46.627537   85500 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1104 12:08:47.273537   85500 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1104 12:08:47.273578   85500 cache_images.go:123] Successfully loaded all cached images
	I1104 12:08:47.273583   85500 cache_images.go:92] duration metric: took 14.719789832s to LoadCachedImages
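
The cache_images flow above checks each required image with "podman image inspect"; any image missing from the runtime is transferred from the local cache and loaded with "podman load -i". A simplified Go sketch of that per-image decision follows (the /var/lib/minikube/images layout is taken from the log; the real code also removes stale tags with crictl and copies the tarball over SSH first).

package main

import (
	"fmt"
	"os/exec"
	"path/filepath"
	"strings"
)

// loadCachedImage loads an image tarball from the local cache only when the
// container runtime does not already have the image.
func loadCachedImage(image, cacheDir string) error {
	// "podman image inspect" exits non-zero when the image is absent.
	if err := exec.Command("sudo", "podman", "image", "inspect", image).Run(); err == nil {
		return nil // already present, nothing to transfer
	}
	// e.g. registry.k8s.io/kube-apiserver:v1.31.2 -> kube-apiserver_v1.31.2
	base := strings.ReplaceAll(filepath.Base(image), ":", "_")
	tarball := filepath.Join(cacheDir, base)
	out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput()
	if err != nil {
		return fmt.Errorf("podman load %s: %v: %s", tarball, err, out)
	}
	return nil
}

func main() {
	images := []string{
		"registry.k8s.io/kube-apiserver:v1.31.2",
		"registry.k8s.io/etcd:3.5.15-0",
	}
	for _, img := range images {
		if err := loadCachedImage(img, "/var/lib/minikube/images"); err != nil {
			fmt.Println(err)
		}
	}
}
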
	I1104 12:08:47.273594   85500 kubeadm.go:934] updating node { 192.168.61.91 8443 v1.31.2 crio true true} ...
	I1104 12:08:47.273686   85500 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-908370 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.91
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:no-preload-908370 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1104 12:08:47.273747   85500 ssh_runner.go:195] Run: crio config
	I1104 12:08:47.319888   85500 cni.go:84] Creating CNI manager for ""
	I1104 12:08:47.319916   85500 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1104 12:08:47.319929   85500 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1104 12:08:47.319952   85500 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.91 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-908370 NodeName:no-preload-908370 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.91"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.91 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1104 12:08:47.320098   85500 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.91
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-908370"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.91"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.91"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1104 12:08:47.320185   85500 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1104 12:08:47.330284   85500 binaries.go:44] Found k8s binaries, skipping transfer
	I1104 12:08:47.330352   85500 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1104 12:08:47.340015   85500 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I1104 12:08:47.356601   85500 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1104 12:08:47.371327   85500 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2294 bytes)
	I1104 12:08:47.387251   85500 ssh_runner.go:195] Run: grep 192.168.61.91	control-plane.minikube.internal$ /etc/hosts
	I1104 12:08:47.391041   85500 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.91	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1104 12:08:47.402283   85500 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1104 12:08:47.527723   85500 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1104 12:08:47.544017   85500 certs.go:68] Setting up /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/no-preload-908370 for IP: 192.168.61.91
	I1104 12:08:47.544041   85500 certs.go:194] generating shared ca certs ...
	I1104 12:08:47.544060   85500 certs.go:226] acquiring lock for ca certs: {Name:mk4fb0469da6697654d0290fd25edddd463f47c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 12:08:47.544244   85500 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.key
	I1104 12:08:47.544309   85500 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.key
	I1104 12:08:47.544322   85500 certs.go:256] generating profile certs ...
	I1104 12:08:47.544412   85500 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/no-preload-908370/client.key
	I1104 12:08:47.544485   85500 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/no-preload-908370/apiserver.key.890cb7f7
	I1104 12:08:47.544522   85500 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/no-preload-908370/proxy-client.key
	I1104 12:08:47.544626   85500 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218.pem (1338 bytes)
	W1104 12:08:47.544654   85500 certs.go:480] ignoring /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218_empty.pem, impossibly tiny 0 bytes
	I1104 12:08:47.544663   85500 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem (1675 bytes)
	I1104 12:08:47.544685   85500 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem (1078 bytes)
	I1104 12:08:47.544706   85500 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem (1123 bytes)
	I1104 12:08:47.544726   85500 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem (1679 bytes)
	I1104 12:08:47.544774   85500 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem (1708 bytes)
	I1104 12:08:47.545439   85500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1104 12:08:47.588488   85500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1104 12:08:47.631341   85500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1104 12:08:47.666571   85500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1104 12:08:47.698703   85500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/no-preload-908370/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1104 12:08:47.725285   85500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/no-preload-908370/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1104 12:08:47.748890   85500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/no-preload-908370/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1104 12:08:47.775589   85500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/no-preload-908370/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1104 12:08:47.799507   85500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1104 12:08:47.823383   85500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218.pem --> /usr/share/ca-certificates/27218.pem (1338 bytes)
	I1104 12:08:47.847515   85500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem --> /usr/share/ca-certificates/272182.pem (1708 bytes)
	I1104 12:08:47.869937   85500 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1104 12:08:47.886413   85500 ssh_runner.go:195] Run: openssl version
	I1104 12:08:47.892041   85500 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/272182.pem && ln -fs /usr/share/ca-certificates/272182.pem /etc/ssl/certs/272182.pem"
	I1104 12:08:47.901942   85500 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/272182.pem
	I1104 12:08:47.906128   85500 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  4 10:49 /usr/share/ca-certificates/272182.pem
	I1104 12:08:47.906182   85500 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/272182.pem
	I1104 12:08:47.911506   85500 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/272182.pem /etc/ssl/certs/3ec20f2e.0"
	I1104 12:08:47.921614   85500 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1104 12:08:47.932358   85500 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1104 12:08:47.936742   85500 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  4 10:38 /usr/share/ca-certificates/minikubeCA.pem
	I1104 12:08:47.936801   85500 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1104 12:08:47.942544   85500 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1104 12:08:47.953063   85500 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/27218.pem && ln -fs /usr/share/ca-certificates/27218.pem /etc/ssl/certs/27218.pem"
	I1104 12:08:47.963293   85500 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/27218.pem
	I1104 12:08:47.967487   85500 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  4 10:49 /usr/share/ca-certificates/27218.pem
	I1104 12:08:47.967547   85500 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/27218.pem
	I1104 12:08:47.972898   85500 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/27218.pem /etc/ssl/certs/51391683.0"
	I1104 12:08:47.983089   85500 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1104 12:08:47.987532   85500 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1104 12:08:47.993296   85500 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1104 12:08:47.999021   85500 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1104 12:08:48.004741   85500 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1104 12:08:48.010227   85500 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1104 12:08:48.015795   85500 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
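
Each "openssl x509 -noout -in <cert> -checkend 86400" call above asks whether the certificate expires within the next 24 hours (86400 seconds). An equivalent check with Go's crypto/x509, as a sketch; the path is one of those in the log, so run it where that file exists or point it at any PEM certificate.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM-encoded certificate at path expires
// within d, mirroring openssl's -checkend behaviour.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}
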
	I1104 12:08:48.021356   85500 kubeadm.go:392] StartCluster: {Name:no-preload-908370 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-908370 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.91 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1104 12:08:48.021431   85500 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1104 12:08:48.021471   85500 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1104 12:08:48.057729   85500 cri.go:89] found id: ""
	I1104 12:08:48.057805   85500 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1104 12:08:48.067591   85500 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1104 12:08:48.067610   85500 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1104 12:08:48.067663   85500 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1104 12:08:48.076604   85500 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1104 12:08:48.077987   85500 kubeconfig.go:125] found "no-preload-908370" server: "https://192.168.61.91:8443"
	I1104 12:08:48.080042   85500 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1104 12:08:48.089796   85500 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.91
	I1104 12:08:48.089826   85500 kubeadm.go:1160] stopping kube-system containers ...
	I1104 12:08:48.089838   85500 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1104 12:08:48.089886   85500 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1104 12:08:48.126920   85500 cri.go:89] found id: ""
	I1104 12:08:48.126998   85500 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1104 12:08:48.143409   85500 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1104 12:08:48.152783   85500 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1104 12:08:48.152809   85500 kubeadm.go:157] found existing configuration files:
	
	I1104 12:08:48.152858   85500 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1104 12:08:48.161458   85500 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1104 12:08:48.161542   85500 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1104 12:08:48.170361   85500 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1104 12:08:48.179217   85500 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1104 12:08:48.179272   85500 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1104 12:08:48.187834   85500 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1104 12:08:48.196025   85500 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1104 12:08:48.196079   85500 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1104 12:08:48.204809   85500 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1104 12:08:48.213280   85500 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1104 12:08:48.213338   85500 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1104 12:08:48.222672   85500 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1104 12:08:48.232374   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1104 12:08:48.328999   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1104 12:08:49.920988   85500 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.591954434s)
	I1104 12:08:49.921028   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1104 12:08:50.121679   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1104 12:08:50.181412   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1104 12:08:47.558137   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:49.559576   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:48.349974   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:50.350855   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:46.938016   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:47.438229   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:47.938447   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:48.437950   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:48.938450   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:49.437785   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:49.938444   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:50.438413   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:50.938514   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:51.438658   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:50.253614   85500 api_server.go:52] waiting for apiserver process to appear ...
	I1104 12:08:50.253693   85500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:50.754467   85500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:51.254553   85500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:51.271229   85500 api_server.go:72] duration metric: took 1.017613016s to wait for apiserver process to appear ...
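The repeated "sudo pgrep -xnf kube-apiserver.*minikube.*" calls above are the wait for the kube-apiserver process to exist before the health probe starts. A minimal Go sketch of such a wait loop, assuming a one-minute timeout (only the pgrep pattern comes from the log; everything else is illustrative, not minikube's implementation):

        // wait-for-apiserver.go: hypothetical sketch of the process wait seen above.
        package main

        import (
            "fmt"
            "os/exec"
            "time"
        )

        func main() {
            deadline := time.Now().Add(time.Minute) // assumed timeout
            for time.Now().Before(deadline) {
                // pgrep flags: -x exact match, -n newest match, -f match the full command line
                out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
                if err == nil {
                    fmt.Printf("kube-apiserver pid: %s", out)
                    return
                }
                time.Sleep(500 * time.Millisecond) // process not up yet, retry
            }
            fmt.Println("timed out waiting for the kube-apiserver process")
        }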
	I1104 12:08:51.271255   85500 api_server.go:88] waiting for apiserver healthz status ...
	I1104 12:08:51.271278   85500 api_server.go:253] Checking apiserver healthz at https://192.168.61.91:8443/healthz ...
	I1104 12:08:51.271794   85500 api_server.go:269] stopped: https://192.168.61.91:8443/healthz: Get "https://192.168.61.91:8443/healthz": dial tcp 192.168.61.91:8443: connect: connection refused
	I1104 12:08:51.771551   85500 api_server.go:253] Checking apiserver healthz at https://192.168.61.91:8443/healthz ...
	I1104 12:08:54.499268   85500 api_server.go:279] https://192.168.61.91:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1104 12:08:54.499296   85500 api_server.go:103] status: https://192.168.61.91:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1104 12:08:54.499310   85500 api_server.go:253] Checking apiserver healthz at https://192.168.61.91:8443/healthz ...
	I1104 12:08:54.617672   85500 api_server.go:279] https://192.168.61.91:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1104 12:08:54.617699   85500 api_server.go:103] status: https://192.168.61.91:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1104 12:08:54.771942   85500 api_server.go:253] Checking apiserver healthz at https://192.168.61.91:8443/healthz ...
	I1104 12:08:54.776588   85500 api_server.go:279] https://192.168.61.91:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1104 12:08:54.776615   85500 api_server.go:103] status: https://192.168.61.91:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1104 12:08:52.056678   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:54.057081   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:55.272332   85500 api_server.go:253] Checking apiserver healthz at https://192.168.61.91:8443/healthz ...
	I1104 12:08:55.276594   85500 api_server.go:279] https://192.168.61.91:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1104 12:08:55.276621   85500 api_server.go:103] status: https://192.168.61.91:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1104 12:08:55.771423   85500 api_server.go:253] Checking apiserver healthz at https://192.168.61.91:8443/healthz ...
	I1104 12:08:55.776881   85500 api_server.go:279] https://192.168.61.91:8443/healthz returned 200:
	ok
	I1104 12:08:55.783842   85500 api_server.go:141] control plane version: v1.31.2
	I1104 12:08:55.783869   85500 api_server.go:131] duration metric: took 4.512606898s to wait for apiserver health ...
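The 403 and 500 responses above are intermediate states while the restarted apiserver finishes its post-start hooks; the poll simply retries until /healthz returns 200. A hypothetical Go sketch of such a probe against the endpoint shown in this log (TLS verification is skipped here only for brevity; a real client should verify the cluster CA):

        // probe-healthz.go: hypothetical sketch, not minikube's implementation.
        package main

        import (
            "crypto/tls"
            "fmt"
            "io"
            "net/http"
            "time"
        )

        func main() {
            client := &http.Client{
                Timeout:   5 * time.Second,
                Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
            }
            for {
                resp, err := client.Get("https://192.168.61.91:8443/healthz?verbose")
                if err != nil {
                    // connection refused while the apiserver container is still starting
                    time.Sleep(500 * time.Millisecond)
                    continue
                }
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                fmt.Printf("status=%d\n%s\n", resp.StatusCode, body)
                if resp.StatusCode == http.StatusOK {
                    return // healthy
                }
                // 403 (anonymous user) and 500 (post-start hooks still pending) mean "not ready yet"
                time.Sleep(500 * time.Millisecond)
            }
        }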
	I1104 12:08:55.783877   85500 cni.go:84] Creating CNI manager for ""
	I1104 12:08:55.783883   85500 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1104 12:08:55.785665   85500 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1104 12:08:52.351019   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:54.850354   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:51.938323   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:52.438464   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:52.937754   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:53.438442   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:53.938586   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:54.438288   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:54.938444   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:55.438391   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:55.938546   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:56.438433   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:55.787083   85500 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1104 12:08:55.801764   85500 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
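The 496-byte conflist copied above is not reproduced in the log. For orientation only, a bridge-type CNI config of the same general shape, written the same way (every value below is an illustrative assumption, not the file from this run):

        // write-conflist.go: hypothetical sketch; the JSON is illustrative, not minikube's file.
        package main

        import "os"

        const bridgeConflist = `{
          "cniVersion": "0.3.1",
          "name": "bridge",
          "plugins": [
            {
              "type": "bridge",
              "bridge": "bridge",
              "isGateway": true,
              "ipMasq": true,
              "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}
            },
            {"type": "portmap", "capabilities": {"portMappings": true}}
          ]
        }`

        func main() {
            // 0644 is assumed here; the log only records the destination path and size
            if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
                panic(err)
            }
        }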
	I1104 12:08:55.828371   85500 system_pods.go:43] waiting for kube-system pods to appear ...
	I1104 12:08:55.847602   85500 system_pods.go:59] 8 kube-system pods found
	I1104 12:08:55.847653   85500 system_pods.go:61] "coredns-7c65d6cfc9-vv4kq" [f2518f86-9653-4e98-9193-9d2a76838117] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1104 12:08:55.847666   85500 system_pods.go:61] "etcd-no-preload-908370" [cc23ebc2-c49f-403c-8128-98bb08459592] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1104 12:08:55.847679   85500 system_pods.go:61] "kube-apiserver-no-preload-908370" [37532b3e-f683-4420-a5e4-280744f2bdf9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1104 12:08:55.847695   85500 system_pods.go:61] "kube-controller-manager-no-preload-908370" [81d30255-758e-4661-bec2-c6aa6773923a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1104 12:08:55.847707   85500 system_pods.go:61] "kube-proxy-w9hbz" [9d494697-ff2b-4600-9c11-b704de9be2a3] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1104 12:08:55.847724   85500 system_pods.go:61] "kube-scheduler-no-preload-908370" [9b0ff34e-1795-4f7c-b511-822a02c4af7b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1104 12:08:55.847733   85500 system_pods.go:61] "metrics-server-6867b74b74-2lxlg" [bf328856-ad19-47b3-a40d-282cd4fdec4b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1104 12:08:55.847743   85500 system_pods.go:61] "storage-provisioner" [d11c9416-6236-4c81-9626-d5e040acea8a] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1104 12:08:55.847753   85500 system_pods.go:74] duration metric: took 19.357387ms to wait for pod list to return data ...
	I1104 12:08:55.847762   85500 node_conditions.go:102] verifying NodePressure condition ...
	I1104 12:08:55.856783   85500 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1104 12:08:55.856820   85500 node_conditions.go:123] node cpu capacity is 2
	I1104 12:08:55.856834   85500 node_conditions.go:105] duration metric: took 9.065755ms to run NodePressure ...
	I1104 12:08:55.856856   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1104 12:08:56.143012   85500 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1104 12:08:56.148006   85500 kubeadm.go:739] kubelet initialised
	I1104 12:08:56.148026   85500 kubeadm.go:740] duration metric: took 4.987292ms waiting for restarted kubelet to initialise ...
	I1104 12:08:56.148034   85500 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1104 12:08:56.152359   85500 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-vv4kq" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:56.156700   85500 pod_ready.go:98] node "no-preload-908370" hosting pod "coredns-7c65d6cfc9-vv4kq" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-908370" has status "Ready":"False"
	I1104 12:08:56.156725   85500 pod_ready.go:82] duration metric: took 4.341093ms for pod "coredns-7c65d6cfc9-vv4kq" in "kube-system" namespace to be "Ready" ...
	E1104 12:08:56.156734   85500 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-908370" hosting pod "coredns-7c65d6cfc9-vv4kq" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-908370" has status "Ready":"False"
	I1104 12:08:56.156741   85500 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-908370" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:56.161402   85500 pod_ready.go:98] node "no-preload-908370" hosting pod "etcd-no-preload-908370" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-908370" has status "Ready":"False"
	I1104 12:08:56.161431   85500 pod_ready.go:82] duration metric: took 4.681838ms for pod "etcd-no-preload-908370" in "kube-system" namespace to be "Ready" ...
	E1104 12:08:56.161440   85500 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-908370" hosting pod "etcd-no-preload-908370" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-908370" has status "Ready":"False"
	I1104 12:08:56.161447   85500 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-908370" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:56.165738   85500 pod_ready.go:98] node "no-preload-908370" hosting pod "kube-apiserver-no-preload-908370" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-908370" has status "Ready":"False"
	I1104 12:08:56.165756   85500 pod_ready.go:82] duration metric: took 4.301197ms for pod "kube-apiserver-no-preload-908370" in "kube-system" namespace to be "Ready" ...
	E1104 12:08:56.165764   85500 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-908370" hosting pod "kube-apiserver-no-preload-908370" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-908370" has status "Ready":"False"
	I1104 12:08:56.165770   85500 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-908370" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:56.232568   85500 pod_ready.go:98] node "no-preload-908370" hosting pod "kube-controller-manager-no-preload-908370" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-908370" has status "Ready":"False"
	I1104 12:08:56.232598   85500 pod_ready.go:82] duration metric: took 66.818411ms for pod "kube-controller-manager-no-preload-908370" in "kube-system" namespace to be "Ready" ...
	E1104 12:08:56.232610   85500 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-908370" hosting pod "kube-controller-manager-no-preload-908370" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-908370" has status "Ready":"False"
	I1104 12:08:56.232620   85500 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-w9hbz" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:56.633774   85500 pod_ready.go:98] node "no-preload-908370" hosting pod "kube-proxy-w9hbz" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-908370" has status "Ready":"False"
	I1104 12:08:56.633804   85500 pod_ready.go:82] duration metric: took 401.173552ms for pod "kube-proxy-w9hbz" in "kube-system" namespace to be "Ready" ...
	E1104 12:08:56.633815   85500 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-908370" hosting pod "kube-proxy-w9hbz" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-908370" has status "Ready":"False"
	I1104 12:08:56.633824   85500 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-908370" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:57.032392   85500 pod_ready.go:98] node "no-preload-908370" hosting pod "kube-scheduler-no-preload-908370" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-908370" has status "Ready":"False"
	I1104 12:08:57.032419   85500 pod_ready.go:82] duration metric: took 398.58729ms for pod "kube-scheduler-no-preload-908370" in "kube-system" namespace to be "Ready" ...
	E1104 12:08:57.032431   85500 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-908370" hosting pod "kube-scheduler-no-preload-908370" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-908370" has status "Ready":"False"
	I1104 12:08:57.032439   85500 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:57.431940   85500 pod_ready.go:98] node "no-preload-908370" hosting pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-908370" has status "Ready":"False"
	I1104 12:08:57.431976   85500 pod_ready.go:82] duration metric: took 399.525162ms for pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace to be "Ready" ...
	E1104 12:08:57.431987   85500 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-908370" hosting pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-908370" has status "Ready":"False"
	I1104 12:08:57.431997   85500 pod_ready.go:39] duration metric: took 1.283953089s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
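Each pod_ready.go wait above is a poll on the pod's Ready condition, skipped in this pass because the node itself is not yet Ready. A hypothetical client-go sketch of one such check; the kubeconfig path, pod name, namespace, and 4m timeout are the ones appearing in this log, the rest is illustrative:

        // pod-ready.go: hypothetical sketch of a "pod Ready" poll, not minikube's implementation.
        package main

        import (
            "context"
            "fmt"
            "time"

            corev1 "k8s.io/api/core/v1"
            metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
            "k8s.io/client-go/kubernetes"
            "k8s.io/client-go/tools/clientcmd"
        )

        // podReady reports whether the pod's Ready condition is True.
        func podReady(p *corev1.Pod) bool {
            for _, c := range p.Status.Conditions {
                if c.Type == corev1.PodReady {
                    return c.Status == corev1.ConditionTrue
                }
            }
            return false
        }

        func main() {
            cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19906-19898/kubeconfig")
            if err != nil {
                panic(err)
            }
            cs, err := kubernetes.NewForConfig(cfg)
            if err != nil {
                panic(err)
            }
            deadline := time.Now().Add(4 * time.Minute)
            for time.Now().Before(deadline) {
                pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-7c65d6cfc9-vv4kq", metav1.GetOptions{})
                if err == nil && podReady(pod) {
                    fmt.Println("pod is Ready")
                    return
                }
                time.Sleep(2 * time.Second) // not Ready yet (or transient API error), retry
            }
            fmt.Println("timed out waiting for pod to become Ready")
        }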
	I1104 12:08:57.432014   85500 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1104 12:08:57.444821   85500 ops.go:34] apiserver oom_adj: -16
	I1104 12:08:57.444845   85500 kubeadm.go:597] duration metric: took 9.377227288s to restartPrimaryControlPlane
	I1104 12:08:57.444857   85500 kubeadm.go:394] duration metric: took 9.423506415s to StartCluster
	I1104 12:08:57.444879   85500 settings.go:142] acquiring lock: {Name:mk5550833c1f1a4ab4fbb2bf42ff8bd7f6341220 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 12:08:57.444965   85500 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19906-19898/kubeconfig
	I1104 12:08:57.446715   85500 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19906-19898/kubeconfig: {Name:mk164657c178e6abe086acb5c1cadb969cb2b0cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 12:08:57.446981   85500 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.91 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1104 12:08:57.447059   85500 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1104 12:08:57.447172   85500 addons.go:69] Setting storage-provisioner=true in profile "no-preload-908370"
	I1104 12:08:57.447193   85500 addons.go:234] Setting addon storage-provisioner=true in "no-preload-908370"
	W1104 12:08:57.447202   85500 addons.go:243] addon storage-provisioner should already be in state true
	I1104 12:08:57.447207   85500 config.go:182] Loaded profile config "no-preload-908370": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1104 12:08:57.447237   85500 host.go:66] Checking if "no-preload-908370" exists ...
	I1104 12:08:57.447234   85500 addons.go:69] Setting default-storageclass=true in profile "no-preload-908370"
	I1104 12:08:57.447321   85500 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-908370"
	I1104 12:08:57.447222   85500 addons.go:69] Setting metrics-server=true in profile "no-preload-908370"
	I1104 12:08:57.447418   85500 addons.go:234] Setting addon metrics-server=true in "no-preload-908370"
	W1104 12:08:57.447431   85500 addons.go:243] addon metrics-server should already be in state true
	I1104 12:08:57.447461   85500 host.go:66] Checking if "no-preload-908370" exists ...
	I1104 12:08:57.447708   85500 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:08:57.447792   85500 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:08:57.447813   85500 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:08:57.447748   85500 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:08:57.447896   85500 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:08:57.447853   85500 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:08:57.449013   85500 out.go:177] * Verifying Kubernetes components...
	I1104 12:08:57.450774   85500 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1104 12:08:57.469657   85500 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42565
	I1104 12:08:57.470180   85500 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:08:57.470801   85500 main.go:141] libmachine: Using API Version  1
	I1104 12:08:57.470830   85500 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:08:57.471277   85500 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:08:57.471873   85500 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:08:57.471924   85500 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:08:57.485026   85500 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33323
	I1104 12:08:57.485330   85500 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43999
	I1104 12:08:57.485604   85500 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:08:57.485772   85500 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:08:57.486328   85500 main.go:141] libmachine: Using API Version  1
	I1104 12:08:57.486363   85500 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:08:57.486442   85500 main.go:141] libmachine: Using API Version  1
	I1104 12:08:57.486471   85500 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:08:57.486735   85500 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:08:57.486847   85500 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:08:57.487059   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetState
	I1104 12:08:57.487337   85500 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:08:57.487401   85500 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:08:57.490138   85500 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45761
	I1104 12:08:57.490611   85500 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:08:57.490705   85500 addons.go:234] Setting addon default-storageclass=true in "no-preload-908370"
	W1104 12:08:57.490724   85500 addons.go:243] addon default-storageclass should already be in state true
	I1104 12:08:57.490748   85500 host.go:66] Checking if "no-preload-908370" exists ...
	I1104 12:08:57.491098   85500 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:08:57.491140   85500 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:08:57.491153   85500 main.go:141] libmachine: Using API Version  1
	I1104 12:08:57.491177   85500 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:08:57.491549   85500 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:08:57.491718   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetState
	I1104 12:08:57.493600   85500 main.go:141] libmachine: (no-preload-908370) Calling .DriverName
	I1104 12:08:57.495883   85500 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1104 12:08:57.497200   85500 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1104 12:08:57.497217   85500 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1104 12:08:57.497245   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHHostname
	I1104 12:08:57.500402   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:57.500934   85500 main.go:141] libmachine: (no-preload-908370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:66:d5", ip: ""} in network mk-no-preload-908370: {Iface:virbr3 ExpiryTime:2024-11-04 13:08:23 +0000 UTC Type:0 Mac:52:54:00:f8:66:d5 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:no-preload-908370 Clientid:01:52:54:00:f8:66:d5}
	I1104 12:08:57.500960   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined IP address 192.168.61.91 and MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:57.501276   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHPort
	I1104 12:08:57.501483   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHKeyPath
	I1104 12:08:57.501626   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHUsername
	I1104 12:08:57.501775   85500 sshutil.go:53] new ssh client: &{IP:192.168.61.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/no-preload-908370/id_rsa Username:docker}
	I1104 12:08:57.508615   85500 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37243
	I1104 12:08:57.509102   85500 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:08:57.509582   85500 main.go:141] libmachine: Using API Version  1
	I1104 12:08:57.509606   85500 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:08:57.509948   85500 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:08:57.510115   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetState
	I1104 12:08:57.510809   85500 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40519
	I1104 12:08:57.511134   85500 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:08:57.511818   85500 main.go:141] libmachine: Using API Version  1
	I1104 12:08:57.511836   85500 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:08:57.511868   85500 main.go:141] libmachine: (no-preload-908370) Calling .DriverName
	I1104 12:08:57.512486   85500 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:08:57.513456   85500 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:08:57.513500   85500 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:08:57.513921   85500 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1104 12:08:57.515417   85500 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1104 12:08:57.515434   85500 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1104 12:08:57.515461   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHHostname
	I1104 12:08:57.518867   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:57.519216   85500 main.go:141] libmachine: (no-preload-908370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:66:d5", ip: ""} in network mk-no-preload-908370: {Iface:virbr3 ExpiryTime:2024-11-04 13:08:23 +0000 UTC Type:0 Mac:52:54:00:f8:66:d5 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:no-preload-908370 Clientid:01:52:54:00:f8:66:d5}
	I1104 12:08:57.519241   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined IP address 192.168.61.91 and MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:57.519334   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHPort
	I1104 12:08:57.519523   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHKeyPath
	I1104 12:08:57.519662   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHUsername
	I1104 12:08:57.520124   85500 sshutil.go:53] new ssh client: &{IP:192.168.61.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/no-preload-908370/id_rsa Username:docker}
	I1104 12:08:57.529448   85500 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42427
	I1104 12:08:57.529979   85500 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:08:57.530374   85500 main.go:141] libmachine: Using API Version  1
	I1104 12:08:57.530389   85500 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:08:57.530756   85500 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:08:57.530889   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetState
	I1104 12:08:57.532430   85500 main.go:141] libmachine: (no-preload-908370) Calling .DriverName
	I1104 12:08:57.532832   85500 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1104 12:08:57.532843   85500 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1104 12:08:57.532857   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHHostname
	I1104 12:08:57.535429   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:57.535783   85500 main.go:141] libmachine: (no-preload-908370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:66:d5", ip: ""} in network mk-no-preload-908370: {Iface:virbr3 ExpiryTime:2024-11-04 13:08:23 +0000 UTC Type:0 Mac:52:54:00:f8:66:d5 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:no-preload-908370 Clientid:01:52:54:00:f8:66:d5}
	I1104 12:08:57.535809   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined IP address 192.168.61.91 and MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:57.535953   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHPort
	I1104 12:08:57.536148   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHKeyPath
	I1104 12:08:57.536245   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHUsername
	I1104 12:08:57.536388   85500 sshutil.go:53] new ssh client: &{IP:192.168.61.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/no-preload-908370/id_rsa Username:docker}
	I1104 12:08:57.635571   85500 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1104 12:08:57.654984   85500 node_ready.go:35] waiting up to 6m0s for node "no-preload-908370" to be "Ready" ...
	I1104 12:08:57.722564   85500 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1104 12:08:57.768850   85500 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1104 12:08:57.791069   85500 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1104 12:08:57.791090   85500 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1104 12:08:57.875966   85500 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1104 12:08:57.875997   85500 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1104 12:08:57.929834   85500 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1104 12:08:57.929867   85500 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1104 12:08:58.017927   85500 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1104 12:08:58.732204   85500 main.go:141] libmachine: Making call to close driver server
	I1104 12:08:58.732235   85500 main.go:141] libmachine: (no-preload-908370) Calling .Close
	I1104 12:08:58.732586   85500 main.go:141] libmachine: Successfully made call to close driver server
	I1104 12:08:58.732614   85500 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 12:08:58.732624   85500 main.go:141] libmachine: Making call to close driver server
	I1104 12:08:58.732635   85500 main.go:141] libmachine: (no-preload-908370) Calling .Close
	I1104 12:08:58.732640   85500 main.go:141] libmachine: (no-preload-908370) DBG | Closing plugin on server side
	I1104 12:08:58.733045   85500 main.go:141] libmachine: Successfully made call to close driver server
	I1104 12:08:58.733108   85500 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 12:08:58.733084   85500 main.go:141] libmachine: (no-preload-908370) DBG | Closing plugin on server side
	I1104 12:08:58.736737   85500 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.014142064s)
	I1104 12:08:58.736783   85500 main.go:141] libmachine: Making call to close driver server
	I1104 12:08:58.736793   85500 main.go:141] libmachine: (no-preload-908370) Calling .Close
	I1104 12:08:58.737035   85500 main.go:141] libmachine: Successfully made call to close driver server
	I1104 12:08:58.737077   85500 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 12:08:58.737090   85500 main.go:141] libmachine: Making call to close driver server
	I1104 12:08:58.737100   85500 main.go:141] libmachine: (no-preload-908370) Calling .Close
	I1104 12:08:58.737737   85500 main.go:141] libmachine: (no-preload-908370) DBG | Closing plugin on server side
	I1104 12:08:58.737756   85500 main.go:141] libmachine: Successfully made call to close driver server
	I1104 12:08:58.737770   85500 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 12:08:58.740716   85500 main.go:141] libmachine: Making call to close driver server
	I1104 12:08:58.740735   85500 main.go:141] libmachine: (no-preload-908370) Calling .Close
	I1104 12:08:58.740963   85500 main.go:141] libmachine: (no-preload-908370) DBG | Closing plugin on server side
	I1104 12:08:58.740974   85500 main.go:141] libmachine: Successfully made call to close driver server
	I1104 12:08:58.740985   85500 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 12:08:58.987200   85500 main.go:141] libmachine: Making call to close driver server
	I1104 12:08:58.987227   85500 main.go:141] libmachine: (no-preload-908370) Calling .Close
	I1104 12:08:58.987657   85500 main.go:141] libmachine: Successfully made call to close driver server
	I1104 12:08:58.987667   85500 main.go:141] libmachine: (no-preload-908370) DBG | Closing plugin on server side
	I1104 12:08:58.987676   85500 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 12:08:58.987685   85500 main.go:141] libmachine: Making call to close driver server
	I1104 12:08:58.987708   85500 main.go:141] libmachine: (no-preload-908370) Calling .Close
	I1104 12:08:58.987991   85500 main.go:141] libmachine: Successfully made call to close driver server
	I1104 12:08:58.988006   85500 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 12:08:58.988018   85500 addons.go:475] Verifying addon metrics-server=true in "no-preload-908370"
	I1104 12:08:58.989756   85500 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1104 12:08:58.991022   85500 addons.go:510] duration metric: took 1.54397104s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1104 12:08:59.659284   85500 node_ready.go:53] node "no-preload-908370" has status "Ready":"False"
	I1104 12:08:56.057497   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:58.057767   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:56.850793   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:58.852058   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:56.938312   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:57.437920   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:57.937779   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:58.438511   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:58.938464   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:59.438423   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:59.938450   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:00.438108   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:00.938053   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:01.438356   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:02.158318   85500 node_ready.go:53] node "no-preload-908370" has status "Ready":"False"
	I1104 12:09:04.658719   85500 node_ready.go:53] node "no-preload-908370" has status "Ready":"False"
	I1104 12:09:05.159526   85500 node_ready.go:49] node "no-preload-908370" has status "Ready":"True"
	I1104 12:09:05.159553   85500 node_ready.go:38] duration metric: took 7.504528904s for node "no-preload-908370" to be "Ready" ...
	I1104 12:09:05.159564   85500 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1104 12:09:05.164838   85500 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-vv4kq" in "kube-system" namespace to be "Ready" ...
	I1104 12:09:05.173888   85500 pod_ready.go:93] pod "coredns-7c65d6cfc9-vv4kq" in "kube-system" namespace has status "Ready":"True"
	I1104 12:09:05.173909   85500 pod_ready.go:82] duration metric: took 9.046581ms for pod "coredns-7c65d6cfc9-vv4kq" in "kube-system" namespace to be "Ready" ...
	I1104 12:09:05.173919   85500 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-908370" in "kube-system" namespace to be "Ready" ...
	I1104 12:09:00.556225   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:02.556893   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:05.055827   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:01.351472   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:03.851990   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:01.938447   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:02.438441   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:02.938694   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:03.438467   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:03.938445   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:04.438137   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:04.937941   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:05.438441   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:05.937760   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:06.438704   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:05.680754   85500 pod_ready.go:93] pod "etcd-no-preload-908370" in "kube-system" namespace has status "Ready":"True"
	I1104 12:09:05.680778   85500 pod_ready.go:82] duration metric: took 506.849735ms for pod "etcd-no-preload-908370" in "kube-system" namespace to be "Ready" ...
	I1104 12:09:05.680804   85500 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-908370" in "kube-system" namespace to be "Ready" ...
	I1104 12:09:07.687108   85500 pod_ready.go:103] pod "kube-apiserver-no-preload-908370" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:09.687377   85500 pod_ready.go:103] pod "kube-apiserver-no-preload-908370" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:07.556024   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:10.055613   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:06.351230   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:08.351640   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:10.850364   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:06.937956   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:07.438323   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:07.938465   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:08.438437   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:08.937675   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:09.437868   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:09.938053   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:10.438467   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:10.938703   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:11.438436   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:10.687315   85500 pod_ready.go:93] pod "kube-apiserver-no-preload-908370" in "kube-system" namespace has status "Ready":"True"
	I1104 12:09:10.687338   85500 pod_ready.go:82] duration metric: took 5.006527478s for pod "kube-apiserver-no-preload-908370" in "kube-system" namespace to be "Ready" ...
	I1104 12:09:10.687348   85500 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-908370" in "kube-system" namespace to be "Ready" ...
	I1104 12:09:10.692554   85500 pod_ready.go:93] pod "kube-controller-manager-no-preload-908370" in "kube-system" namespace has status "Ready":"True"
	I1104 12:09:10.692583   85500 pod_ready.go:82] duration metric: took 5.227048ms for pod "kube-controller-manager-no-preload-908370" in "kube-system" namespace to be "Ready" ...
	I1104 12:09:10.692597   85500 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-w9hbz" in "kube-system" namespace to be "Ready" ...
	I1104 12:09:10.697109   85500 pod_ready.go:93] pod "kube-proxy-w9hbz" in "kube-system" namespace has status "Ready":"True"
	I1104 12:09:10.697132   85500 pod_ready.go:82] duration metric: took 4.525205ms for pod "kube-proxy-w9hbz" in "kube-system" namespace to be "Ready" ...
	I1104 12:09:10.697153   85500 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-908370" in "kube-system" namespace to be "Ready" ...
	I1104 12:09:10.701450   85500 pod_ready.go:93] pod "kube-scheduler-no-preload-908370" in "kube-system" namespace has status "Ready":"True"
	I1104 12:09:10.701472   85500 pod_ready.go:82] duration metric: took 4.310973ms for pod "kube-scheduler-no-preload-908370" in "kube-system" namespace to be "Ready" ...
	I1104 12:09:10.701483   85500 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace to be "Ready" ...
	I1104 12:09:12.708631   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:14.708772   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:12.056161   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:14.556380   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:12.850721   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:14.851608   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:11.938465   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:12.437963   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:12.938515   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:13.437754   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:13.937856   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:14.438729   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:14.938439   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:15.438421   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:15.938044   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:16.438456   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:17.209025   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:19.707595   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:17.056226   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:19.555918   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:17.350266   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:19.350329   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:16.937807   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:17.438266   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:17.938153   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:18.437829   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:18.938469   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:19.438336   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:19.938284   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:20.438073   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:20.937894   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:21.438135   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:09:21.438238   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:09:21.471463   86402 cri.go:89] found id: ""
	I1104 12:09:21.471495   86402 logs.go:282] 0 containers: []
	W1104 12:09:21.471507   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:09:21.471515   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:09:21.471568   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:09:21.509336   86402 cri.go:89] found id: ""
	I1104 12:09:21.509363   86402 logs.go:282] 0 containers: []
	W1104 12:09:21.509373   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:09:21.509381   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:09:21.509441   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:09:21.545963   86402 cri.go:89] found id: ""
	I1104 12:09:21.545987   86402 logs.go:282] 0 containers: []
	W1104 12:09:21.545995   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:09:21.546000   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:09:21.546059   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:09:21.580707   86402 cri.go:89] found id: ""
	I1104 12:09:21.580737   86402 logs.go:282] 0 containers: []
	W1104 12:09:21.580748   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:09:21.580755   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:09:21.580820   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:09:21.613763   86402 cri.go:89] found id: ""
	I1104 12:09:21.613791   86402 logs.go:282] 0 containers: []
	W1104 12:09:21.613801   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:09:21.613809   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:09:21.613872   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:09:21.646559   86402 cri.go:89] found id: ""
	I1104 12:09:21.646583   86402 logs.go:282] 0 containers: []
	W1104 12:09:21.646591   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:09:21.646597   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:09:21.646643   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:09:21.681439   86402 cri.go:89] found id: ""
	I1104 12:09:21.681467   86402 logs.go:282] 0 containers: []
	W1104 12:09:21.681479   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:09:21.681486   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:09:21.681554   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:09:21.708045   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:24.207686   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:22.055637   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:24.056458   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:21.350636   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:23.850852   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:21.713875   86402 cri.go:89] found id: ""
	I1104 12:09:21.713899   86402 logs.go:282] 0 containers: []
	W1104 12:09:21.713907   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:09:21.713915   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:09:21.713925   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:09:21.763882   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:09:21.763918   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:09:21.778590   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:09:21.778615   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:09:21.892208   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:09:21.892235   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:09:21.892250   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:09:21.965946   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:09:21.965984   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:09:24.502992   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:24.514899   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:09:24.514960   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:09:24.554466   86402 cri.go:89] found id: ""
	I1104 12:09:24.554491   86402 logs.go:282] 0 containers: []
	W1104 12:09:24.554501   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:09:24.554510   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:09:24.554567   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:09:24.591532   86402 cri.go:89] found id: ""
	I1104 12:09:24.591560   86402 logs.go:282] 0 containers: []
	W1104 12:09:24.591572   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:09:24.591580   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:09:24.591638   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:09:24.625436   86402 cri.go:89] found id: ""
	I1104 12:09:24.625467   86402 logs.go:282] 0 containers: []
	W1104 12:09:24.625478   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:09:24.625485   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:09:24.625544   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:09:24.658317   86402 cri.go:89] found id: ""
	I1104 12:09:24.658346   86402 logs.go:282] 0 containers: []
	W1104 12:09:24.658357   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:09:24.658364   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:09:24.658426   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:09:24.692811   86402 cri.go:89] found id: ""
	I1104 12:09:24.692839   86402 logs.go:282] 0 containers: []
	W1104 12:09:24.692850   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:09:24.692857   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:09:24.692917   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:09:24.729677   86402 cri.go:89] found id: ""
	I1104 12:09:24.729708   86402 logs.go:282] 0 containers: []
	W1104 12:09:24.729719   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:09:24.729726   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:09:24.729773   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:09:24.768575   86402 cri.go:89] found id: ""
	I1104 12:09:24.768598   86402 logs.go:282] 0 containers: []
	W1104 12:09:24.768608   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:09:24.768615   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:09:24.768681   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:09:24.802344   86402 cri.go:89] found id: ""
	I1104 12:09:24.802368   86402 logs.go:282] 0 containers: []
	W1104 12:09:24.802375   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:09:24.802383   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:09:24.802394   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:09:24.855882   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:09:24.855915   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:09:24.869199   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:09:24.869243   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:09:24.940720   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:09:24.940744   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:09:24.940758   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:09:25.016139   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:09:25.016177   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:09:26.208422   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:28.208568   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:26.557513   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:29.055769   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:26.350171   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:28.353001   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:30.851153   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:27.553297   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:27.566857   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:09:27.566913   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:09:27.599606   86402 cri.go:89] found id: ""
	I1104 12:09:27.599641   86402 logs.go:282] 0 containers: []
	W1104 12:09:27.599653   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:09:27.599661   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:09:27.599721   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:09:27.633818   86402 cri.go:89] found id: ""
	I1104 12:09:27.633841   86402 logs.go:282] 0 containers: []
	W1104 12:09:27.633849   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:09:27.633854   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:09:27.633907   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:09:27.668088   86402 cri.go:89] found id: ""
	I1104 12:09:27.668120   86402 logs.go:282] 0 containers: []
	W1104 12:09:27.668129   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:09:27.668135   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:09:27.668185   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:09:27.699401   86402 cri.go:89] found id: ""
	I1104 12:09:27.699433   86402 logs.go:282] 0 containers: []
	W1104 12:09:27.699445   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:09:27.699453   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:09:27.699511   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:09:27.731422   86402 cri.go:89] found id: ""
	I1104 12:09:27.731448   86402 logs.go:282] 0 containers: []
	W1104 12:09:27.731459   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:09:27.731466   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:09:27.731528   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:09:27.762808   86402 cri.go:89] found id: ""
	I1104 12:09:27.762839   86402 logs.go:282] 0 containers: []
	W1104 12:09:27.762850   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:09:27.762857   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:09:27.762917   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:09:27.794729   86402 cri.go:89] found id: ""
	I1104 12:09:27.794757   86402 logs.go:282] 0 containers: []
	W1104 12:09:27.794765   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:09:27.794771   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:09:27.794826   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:09:27.825694   86402 cri.go:89] found id: ""
	I1104 12:09:27.825716   86402 logs.go:282] 0 containers: []
	W1104 12:09:27.825724   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:09:27.825731   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:09:27.825742   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:09:27.862111   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:09:27.862140   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:09:27.911169   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:09:27.911204   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:09:27.924207   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:09:27.924232   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:09:27.995123   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:09:27.995153   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:09:27.995167   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:09:30.580831   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:30.594901   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:09:30.594959   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:09:30.630936   86402 cri.go:89] found id: ""
	I1104 12:09:30.630961   86402 logs.go:282] 0 containers: []
	W1104 12:09:30.630971   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:09:30.630979   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:09:30.631034   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:09:30.669288   86402 cri.go:89] found id: ""
	I1104 12:09:30.669311   86402 logs.go:282] 0 containers: []
	W1104 12:09:30.669320   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:09:30.669328   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:09:30.669388   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:09:30.706288   86402 cri.go:89] found id: ""
	I1104 12:09:30.706312   86402 logs.go:282] 0 containers: []
	W1104 12:09:30.706319   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:09:30.706325   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:09:30.706384   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:09:30.739027   86402 cri.go:89] found id: ""
	I1104 12:09:30.739057   86402 logs.go:282] 0 containers: []
	W1104 12:09:30.739069   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:09:30.739078   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:09:30.739137   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:09:30.772247   86402 cri.go:89] found id: ""
	I1104 12:09:30.772272   86402 logs.go:282] 0 containers: []
	W1104 12:09:30.772280   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:09:30.772286   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:09:30.772338   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:09:30.810327   86402 cri.go:89] found id: ""
	I1104 12:09:30.810360   86402 logs.go:282] 0 containers: []
	W1104 12:09:30.810370   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:09:30.810375   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:09:30.810426   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:09:30.842241   86402 cri.go:89] found id: ""
	I1104 12:09:30.842271   86402 logs.go:282] 0 containers: []
	W1104 12:09:30.842279   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:09:30.842285   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:09:30.842332   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:09:30.877003   86402 cri.go:89] found id: ""
	I1104 12:09:30.877032   86402 logs.go:282] 0 containers: []
	W1104 12:09:30.877043   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:09:30.877052   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:09:30.877077   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:09:30.925783   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:09:30.925816   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:09:30.939651   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:09:30.939680   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:09:31.029176   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:09:31.029210   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:09:31.029244   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:09:31.116311   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:09:31.116348   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:09:30.708451   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:32.708661   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:31.056627   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:33.056743   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:35.057986   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:33.350420   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:35.351206   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:33.653267   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:33.665813   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:09:33.665878   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:09:33.701812   86402 cri.go:89] found id: ""
	I1104 12:09:33.701839   86402 logs.go:282] 0 containers: []
	W1104 12:09:33.701852   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:09:33.701860   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:09:33.701922   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:09:33.738816   86402 cri.go:89] found id: ""
	I1104 12:09:33.738850   86402 logs.go:282] 0 containers: []
	W1104 12:09:33.738861   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:09:33.738868   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:09:33.738928   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:09:33.773936   86402 cri.go:89] found id: ""
	I1104 12:09:33.773960   86402 logs.go:282] 0 containers: []
	W1104 12:09:33.773968   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:09:33.773976   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:09:33.774031   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:09:33.808049   86402 cri.go:89] found id: ""
	I1104 12:09:33.808079   86402 logs.go:282] 0 containers: []
	W1104 12:09:33.808091   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:09:33.808098   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:09:33.808154   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:09:33.844276   86402 cri.go:89] found id: ""
	I1104 12:09:33.844303   86402 logs.go:282] 0 containers: []
	W1104 12:09:33.844314   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:09:33.844322   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:09:33.844443   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:09:33.879736   86402 cri.go:89] found id: ""
	I1104 12:09:33.879772   86402 logs.go:282] 0 containers: []
	W1104 12:09:33.879782   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:09:33.879788   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:09:33.879843   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:09:33.913717   86402 cri.go:89] found id: ""
	I1104 12:09:33.913750   86402 logs.go:282] 0 containers: []
	W1104 12:09:33.913761   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:09:33.913769   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:09:33.913832   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:09:33.949632   86402 cri.go:89] found id: ""
	I1104 12:09:33.949658   86402 logs.go:282] 0 containers: []
	W1104 12:09:33.949667   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:09:33.949677   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:09:33.949691   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:09:34.019770   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:09:34.019790   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:09:34.019806   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:09:34.101493   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:09:34.101524   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:09:34.146723   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:09:34.146751   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:09:34.196295   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:09:34.196338   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:09:35.207223   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:37.207576   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:39.208091   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:37.556228   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:39.556548   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:37.850907   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:39.852870   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:36.709951   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:36.724723   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:09:36.724782   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:09:36.777406   86402 cri.go:89] found id: ""
	I1104 12:09:36.777440   86402 logs.go:282] 0 containers: []
	W1104 12:09:36.777451   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:09:36.777459   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:09:36.777520   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:09:36.834486   86402 cri.go:89] found id: ""
	I1104 12:09:36.834516   86402 logs.go:282] 0 containers: []
	W1104 12:09:36.834527   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:09:36.834535   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:09:36.834641   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:09:36.868828   86402 cri.go:89] found id: ""
	I1104 12:09:36.868853   86402 logs.go:282] 0 containers: []
	W1104 12:09:36.868861   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:09:36.868867   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:09:36.868912   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:09:36.900942   86402 cri.go:89] found id: ""
	I1104 12:09:36.900972   86402 logs.go:282] 0 containers: []
	W1104 12:09:36.900980   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:09:36.900986   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:09:36.901043   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:09:36.933215   86402 cri.go:89] found id: ""
	I1104 12:09:36.933265   86402 logs.go:282] 0 containers: []
	W1104 12:09:36.933276   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:09:36.933282   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:09:36.933330   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:09:36.966753   86402 cri.go:89] found id: ""
	I1104 12:09:36.966776   86402 logs.go:282] 0 containers: []
	W1104 12:09:36.966784   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:09:36.966789   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:09:36.966850   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:09:37.000050   86402 cri.go:89] found id: ""
	I1104 12:09:37.000074   86402 logs.go:282] 0 containers: []
	W1104 12:09:37.000082   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:09:37.000087   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:09:37.000144   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:09:37.033252   86402 cri.go:89] found id: ""
	I1104 12:09:37.033283   86402 logs.go:282] 0 containers: []
	W1104 12:09:37.033295   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:09:37.033305   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:09:37.033328   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:09:37.085351   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:09:37.085383   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:09:37.098556   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:09:37.098582   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:09:37.167489   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:09:37.167512   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:09:37.167525   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:09:37.243292   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:09:37.243325   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:09:39.781468   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:39.795630   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:09:39.795756   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:09:39.833745   86402 cri.go:89] found id: ""
	I1104 12:09:39.833779   86402 logs.go:282] 0 containers: []
	W1104 12:09:39.833791   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:09:39.833798   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:09:39.833862   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:09:39.870075   86402 cri.go:89] found id: ""
	I1104 12:09:39.870096   86402 logs.go:282] 0 containers: []
	W1104 12:09:39.870106   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:09:39.870119   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:09:39.870173   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:09:39.905807   86402 cri.go:89] found id: ""
	I1104 12:09:39.905836   86402 logs.go:282] 0 containers: []
	W1104 12:09:39.905846   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:09:39.905854   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:09:39.905916   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:09:39.941890   86402 cri.go:89] found id: ""
	I1104 12:09:39.941914   86402 logs.go:282] 0 containers: []
	W1104 12:09:39.941922   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:09:39.941932   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:09:39.941978   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:09:39.979123   86402 cri.go:89] found id: ""
	I1104 12:09:39.979150   86402 logs.go:282] 0 containers: []
	W1104 12:09:39.979159   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:09:39.979165   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:09:39.979220   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:09:40.014748   86402 cri.go:89] found id: ""
	I1104 12:09:40.014777   86402 logs.go:282] 0 containers: []
	W1104 12:09:40.014785   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:09:40.014791   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:09:40.014882   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:09:40.049977   86402 cri.go:89] found id: ""
	I1104 12:09:40.050004   86402 logs.go:282] 0 containers: []
	W1104 12:09:40.050014   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:09:40.050021   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:09:40.050100   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:09:40.085630   86402 cri.go:89] found id: ""
	I1104 12:09:40.085663   86402 logs.go:282] 0 containers: []
	W1104 12:09:40.085674   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:09:40.085685   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:09:40.085701   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:09:40.166611   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:09:40.166650   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:09:40.203117   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:09:40.203155   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:09:40.256233   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:09:40.256267   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:09:40.270009   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:09:40.270042   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:09:40.338672   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:09:41.707618   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:43.708915   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:42.055555   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:44.060949   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:42.351562   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:44.851599   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:42.839402   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:42.852881   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:09:42.852947   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:09:42.884587   86402 cri.go:89] found id: ""
	I1104 12:09:42.884614   86402 logs.go:282] 0 containers: []
	W1104 12:09:42.884624   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:09:42.884631   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:09:42.884690   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:09:42.915286   86402 cri.go:89] found id: ""
	I1104 12:09:42.915316   86402 logs.go:282] 0 containers: []
	W1104 12:09:42.915327   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:09:42.915337   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:09:42.915399   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:09:42.945827   86402 cri.go:89] found id: ""
	I1104 12:09:42.945857   86402 logs.go:282] 0 containers: []
	W1104 12:09:42.945868   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:09:42.945875   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:09:42.945934   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:09:42.982662   86402 cri.go:89] found id: ""
	I1104 12:09:42.982693   86402 logs.go:282] 0 containers: []
	W1104 12:09:42.982703   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:09:42.982712   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:09:42.982788   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:09:43.015337   86402 cri.go:89] found id: ""
	I1104 12:09:43.015371   86402 logs.go:282] 0 containers: []
	W1104 12:09:43.015382   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:09:43.015390   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:09:43.015453   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:09:43.048235   86402 cri.go:89] found id: ""
	I1104 12:09:43.048262   86402 logs.go:282] 0 containers: []
	W1104 12:09:43.048270   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:09:43.048276   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:09:43.048351   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:09:43.080636   86402 cri.go:89] found id: ""
	I1104 12:09:43.080668   86402 logs.go:282] 0 containers: []
	W1104 12:09:43.080679   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:09:43.080687   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:09:43.080746   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:09:43.113986   86402 cri.go:89] found id: ""
	I1104 12:09:43.114011   86402 logs.go:282] 0 containers: []
	W1104 12:09:43.114019   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:09:43.114027   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:09:43.114038   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:09:43.165356   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:09:43.165390   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:09:43.179167   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:09:43.179200   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:09:43.250054   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:09:43.250083   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:09:43.250098   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:09:43.328970   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:09:43.329002   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:09:45.869879   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:45.883262   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:09:45.883359   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:09:45.921978   86402 cri.go:89] found id: ""
	I1104 12:09:45.922003   86402 logs.go:282] 0 containers: []
	W1104 12:09:45.922011   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:09:45.922016   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:09:45.922076   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:09:45.954668   86402 cri.go:89] found id: ""
	I1104 12:09:45.954697   86402 logs.go:282] 0 containers: []
	W1104 12:09:45.954710   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:09:45.954717   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:09:45.954787   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:09:45.987793   86402 cri.go:89] found id: ""
	I1104 12:09:45.987826   86402 logs.go:282] 0 containers: []
	W1104 12:09:45.987837   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:09:45.987845   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:09:45.987906   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:09:46.028517   86402 cri.go:89] found id: ""
	I1104 12:09:46.028550   86402 logs.go:282] 0 containers: []
	W1104 12:09:46.028558   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:09:46.028563   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:09:46.028621   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:09:46.063832   86402 cri.go:89] found id: ""
	I1104 12:09:46.063859   86402 logs.go:282] 0 containers: []
	W1104 12:09:46.063870   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:09:46.063878   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:09:46.063942   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:09:46.099981   86402 cri.go:89] found id: ""
	I1104 12:09:46.100011   86402 logs.go:282] 0 containers: []
	W1104 12:09:46.100027   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:09:46.100036   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:09:46.100169   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:09:46.133060   86402 cri.go:89] found id: ""
	I1104 12:09:46.133083   86402 logs.go:282] 0 containers: []
	W1104 12:09:46.133092   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:09:46.133099   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:09:46.133165   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:09:46.170559   86402 cri.go:89] found id: ""
	I1104 12:09:46.170583   86402 logs.go:282] 0 containers: []
	W1104 12:09:46.170591   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:09:46.170599   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:09:46.170610   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:09:46.253202   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:09:46.253253   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:09:46.288468   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:09:46.288498   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:09:46.339322   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:09:46.339354   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:09:46.353020   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:09:46.353049   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:09:46.420328   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:09:46.208695   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:48.708268   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:46.556598   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:49.057461   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:47.351225   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:49.352737   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:48.920709   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:48.933443   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:09:48.933507   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:09:48.964736   86402 cri.go:89] found id: ""
	I1104 12:09:48.964759   86402 logs.go:282] 0 containers: []
	W1104 12:09:48.964770   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:09:48.964777   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:09:48.964837   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:09:48.996646   86402 cri.go:89] found id: ""
	I1104 12:09:48.996670   86402 logs.go:282] 0 containers: []
	W1104 12:09:48.996679   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:09:48.996684   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:09:48.996734   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:09:49.028899   86402 cri.go:89] found id: ""
	I1104 12:09:49.028942   86402 logs.go:282] 0 containers: []
	W1104 12:09:49.028951   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:09:49.028957   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:09:49.029015   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:09:49.065032   86402 cri.go:89] found id: ""
	I1104 12:09:49.065056   86402 logs.go:282] 0 containers: []
	W1104 12:09:49.065064   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:09:49.065075   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:09:49.065120   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:09:49.097159   86402 cri.go:89] found id: ""
	I1104 12:09:49.097183   86402 logs.go:282] 0 containers: []
	W1104 12:09:49.097191   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:09:49.097196   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:09:49.097269   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:09:49.131578   86402 cri.go:89] found id: ""
	I1104 12:09:49.131608   86402 logs.go:282] 0 containers: []
	W1104 12:09:49.131619   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:09:49.131626   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:09:49.131684   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:09:49.164307   86402 cri.go:89] found id: ""
	I1104 12:09:49.164339   86402 logs.go:282] 0 containers: []
	W1104 12:09:49.164358   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:09:49.164367   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:09:49.164430   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:09:49.197171   86402 cri.go:89] found id: ""
	I1104 12:09:49.197199   86402 logs.go:282] 0 containers: []
	W1104 12:09:49.197210   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:09:49.197220   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:09:49.197251   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:09:49.210327   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:09:49.210355   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:09:49.280226   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:09:49.280251   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:09:49.280262   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:09:49.367655   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:09:49.367691   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:09:49.408424   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:09:49.408452   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:09:50.708963   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:53.207337   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:51.555800   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:54.055622   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:51.850949   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:54.350551   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:51.958148   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:51.970451   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:09:51.970521   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:09:52.000896   86402 cri.go:89] found id: ""
	I1104 12:09:52.000929   86402 logs.go:282] 0 containers: []
	W1104 12:09:52.000940   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:09:52.000948   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:09:52.001023   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:09:52.034122   86402 cri.go:89] found id: ""
	I1104 12:09:52.034150   86402 logs.go:282] 0 containers: []
	W1104 12:09:52.034161   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:09:52.034168   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:09:52.034227   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:09:52.070834   86402 cri.go:89] found id: ""
	I1104 12:09:52.070872   86402 logs.go:282] 0 containers: []
	W1104 12:09:52.070884   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:09:52.070891   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:09:52.070950   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:09:52.103730   86402 cri.go:89] found id: ""
	I1104 12:09:52.103758   86402 logs.go:282] 0 containers: []
	W1104 12:09:52.103766   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:09:52.103772   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:09:52.103832   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:09:52.135980   86402 cri.go:89] found id: ""
	I1104 12:09:52.136006   86402 logs.go:282] 0 containers: []
	W1104 12:09:52.136014   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:09:52.136020   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:09:52.136081   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:09:52.168903   86402 cri.go:89] found id: ""
	I1104 12:09:52.168928   86402 logs.go:282] 0 containers: []
	W1104 12:09:52.168936   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:09:52.168942   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:09:52.169001   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:09:52.199499   86402 cri.go:89] found id: ""
	I1104 12:09:52.199529   86402 logs.go:282] 0 containers: []
	W1104 12:09:52.199539   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:09:52.199546   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:09:52.199610   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:09:52.232566   86402 cri.go:89] found id: ""
	I1104 12:09:52.232603   86402 logs.go:282] 0 containers: []
	W1104 12:09:52.232615   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:09:52.232626   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:09:52.232640   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:09:52.282140   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:09:52.282180   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:09:52.295079   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:09:52.295110   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:09:52.364061   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:09:52.364087   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:09:52.364102   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:09:52.437868   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:09:52.437901   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:09:54.978182   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:54.991002   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:09:54.991068   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:09:55.023628   86402 cri.go:89] found id: ""
	I1104 12:09:55.023656   86402 logs.go:282] 0 containers: []
	W1104 12:09:55.023663   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:09:55.023669   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:09:55.023715   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:09:55.058524   86402 cri.go:89] found id: ""
	I1104 12:09:55.058548   86402 logs.go:282] 0 containers: []
	W1104 12:09:55.058557   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:09:55.058564   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:09:55.058634   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:09:55.095730   86402 cri.go:89] found id: ""
	I1104 12:09:55.095760   86402 logs.go:282] 0 containers: []
	W1104 12:09:55.095772   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:09:55.095779   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:09:55.095837   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:09:55.128341   86402 cri.go:89] found id: ""
	I1104 12:09:55.128365   86402 logs.go:282] 0 containers: []
	W1104 12:09:55.128373   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:09:55.128379   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:09:55.128438   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:09:55.160655   86402 cri.go:89] found id: ""
	I1104 12:09:55.160681   86402 logs.go:282] 0 containers: []
	W1104 12:09:55.160693   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:09:55.160700   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:09:55.160754   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:09:55.194050   86402 cri.go:89] found id: ""
	I1104 12:09:55.194077   86402 logs.go:282] 0 containers: []
	W1104 12:09:55.194086   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:09:55.194091   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:09:55.194138   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:09:55.227655   86402 cri.go:89] found id: ""
	I1104 12:09:55.227694   86402 logs.go:282] 0 containers: []
	W1104 12:09:55.227705   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:09:55.227712   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:09:55.227810   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:09:55.261106   86402 cri.go:89] found id: ""
	I1104 12:09:55.261137   86402 logs.go:282] 0 containers: []
	W1104 12:09:55.261147   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:09:55.261157   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:09:55.261171   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:09:55.335577   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:09:55.335598   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:09:55.335610   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:09:55.421339   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:09:55.421375   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:09:55.459936   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:09:55.459967   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:09:55.509346   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:09:55.509382   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:09:55.208869   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:57.707576   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:59.708019   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:56.555996   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:58.556335   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:56.851071   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:58.851254   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:58.023608   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:58.036540   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:09:58.036599   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:09:58.075104   86402 cri.go:89] found id: ""
	I1104 12:09:58.075182   86402 logs.go:282] 0 containers: []
	W1104 12:09:58.075198   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:09:58.075207   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:09:58.075271   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:09:58.109910   86402 cri.go:89] found id: ""
	I1104 12:09:58.109949   86402 logs.go:282] 0 containers: []
	W1104 12:09:58.109961   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:09:58.109968   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:09:58.110038   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:09:58.142829   86402 cri.go:89] found id: ""
	I1104 12:09:58.142854   86402 logs.go:282] 0 containers: []
	W1104 12:09:58.142865   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:09:58.142873   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:09:58.142924   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:09:58.178125   86402 cri.go:89] found id: ""
	I1104 12:09:58.178153   86402 logs.go:282] 0 containers: []
	W1104 12:09:58.178161   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:09:58.178168   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:09:58.178239   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:09:58.214117   86402 cri.go:89] found id: ""
	I1104 12:09:58.214146   86402 logs.go:282] 0 containers: []
	W1104 12:09:58.214156   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:09:58.214162   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:09:58.214213   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:09:58.244728   86402 cri.go:89] found id: ""
	I1104 12:09:58.244751   86402 logs.go:282] 0 containers: []
	W1104 12:09:58.244759   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:09:58.244765   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:09:58.244809   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:09:58.275542   86402 cri.go:89] found id: ""
	I1104 12:09:58.275568   86402 logs.go:282] 0 containers: []
	W1104 12:09:58.275576   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:09:58.275582   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:09:58.275630   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:09:58.314909   86402 cri.go:89] found id: ""
	I1104 12:09:58.314935   86402 logs.go:282] 0 containers: []
	W1104 12:09:58.314943   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:09:58.314952   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:09:58.314962   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:09:58.364361   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:09:58.364390   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:09:58.378483   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:09:58.378517   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:09:58.442012   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:09:58.442033   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:09:58.442045   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:09:58.517260   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:09:58.517298   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:10:01.057203   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:10:01.069937   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:10:01.070008   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:10:01.101672   86402 cri.go:89] found id: ""
	I1104 12:10:01.101698   86402 logs.go:282] 0 containers: []
	W1104 12:10:01.101709   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:10:01.101716   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:10:01.101779   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:10:01.134672   86402 cri.go:89] found id: ""
	I1104 12:10:01.134701   86402 logs.go:282] 0 containers: []
	W1104 12:10:01.134712   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:10:01.134719   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:10:01.134789   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:10:01.167784   86402 cri.go:89] found id: ""
	I1104 12:10:01.167833   86402 logs.go:282] 0 containers: []
	W1104 12:10:01.167845   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:10:01.167853   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:10:01.167945   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:10:01.201218   86402 cri.go:89] found id: ""
	I1104 12:10:01.201260   86402 logs.go:282] 0 containers: []
	W1104 12:10:01.201271   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:10:01.201281   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:10:01.201338   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:10:01.234964   86402 cri.go:89] found id: ""
	I1104 12:10:01.234991   86402 logs.go:282] 0 containers: []
	W1104 12:10:01.235000   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:10:01.235007   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:10:01.235069   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:10:01.267809   86402 cri.go:89] found id: ""
	I1104 12:10:01.267848   86402 logs.go:282] 0 containers: []
	W1104 12:10:01.267881   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:10:01.267890   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:10:01.267942   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:10:01.303567   86402 cri.go:89] found id: ""
	I1104 12:10:01.303590   86402 logs.go:282] 0 containers: []
	W1104 12:10:01.303598   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:10:01.303604   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:10:01.303648   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:10:01.342059   86402 cri.go:89] found id: ""
	I1104 12:10:01.342088   86402 logs.go:282] 0 containers: []
	W1104 12:10:01.342099   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:10:01.342109   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:10:01.342142   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:10:01.354845   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:10:01.354867   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:10:01.423426   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:10:01.423447   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:10:01.423459   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:10:01.498979   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:10:01.499018   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:10:01.537658   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:10:01.537691   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:10:02.208192   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:04.209058   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:01.055266   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:03.056457   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:01.350820   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:03.850435   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:04.088653   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:10:04.103506   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:10:04.103576   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:10:04.137574   86402 cri.go:89] found id: ""
	I1104 12:10:04.137602   86402 logs.go:282] 0 containers: []
	W1104 12:10:04.137612   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:10:04.137620   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:10:04.137684   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:10:04.177624   86402 cri.go:89] found id: ""
	I1104 12:10:04.177662   86402 logs.go:282] 0 containers: []
	W1104 12:10:04.177673   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:10:04.177681   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:10:04.177750   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:10:04.213829   86402 cri.go:89] found id: ""
	I1104 12:10:04.213850   86402 logs.go:282] 0 containers: []
	W1104 12:10:04.213862   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:10:04.213870   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:10:04.213929   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:10:04.251112   86402 cri.go:89] found id: ""
	I1104 12:10:04.251143   86402 logs.go:282] 0 containers: []
	W1104 12:10:04.251154   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:10:04.251162   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:10:04.251227   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:10:04.286005   86402 cri.go:89] found id: ""
	I1104 12:10:04.286036   86402 logs.go:282] 0 containers: []
	W1104 12:10:04.286046   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:10:04.286053   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:10:04.286118   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:10:04.317628   86402 cri.go:89] found id: ""
	I1104 12:10:04.317656   86402 logs.go:282] 0 containers: []
	W1104 12:10:04.317667   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:10:04.317674   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:10:04.317742   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:10:04.351663   86402 cri.go:89] found id: ""
	I1104 12:10:04.351687   86402 logs.go:282] 0 containers: []
	W1104 12:10:04.351695   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:10:04.351700   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:10:04.351755   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:10:04.385818   86402 cri.go:89] found id: ""
	I1104 12:10:04.385842   86402 logs.go:282] 0 containers: []
	W1104 12:10:04.385850   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:10:04.385858   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:10:04.385880   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:10:04.467141   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:10:04.467179   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:10:04.503669   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:10:04.503700   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:10:04.557237   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:10:04.557303   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:10:04.570484   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:10:04.570520   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:10:04.635099   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:10:06.708483   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:09.207171   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:05.556612   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:08.056976   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:06.350422   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:08.351537   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:10.351962   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:07.135741   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:10:07.148039   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:10:07.148132   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:10:07.185171   86402 cri.go:89] found id: ""
	I1104 12:10:07.185196   86402 logs.go:282] 0 containers: []
	W1104 12:10:07.185205   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:10:07.185211   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:10:07.185280   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:10:07.217097   86402 cri.go:89] found id: ""
	I1104 12:10:07.217126   86402 logs.go:282] 0 containers: []
	W1104 12:10:07.217137   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:10:07.217144   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:10:07.217204   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:10:07.250079   86402 cri.go:89] found id: ""
	I1104 12:10:07.250108   86402 logs.go:282] 0 containers: []
	W1104 12:10:07.250116   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:10:07.250121   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:10:07.250169   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:10:07.283423   86402 cri.go:89] found id: ""
	I1104 12:10:07.283463   86402 logs.go:282] 0 containers: []
	W1104 12:10:07.283475   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:10:07.283482   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:10:07.283554   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:10:07.316461   86402 cri.go:89] found id: ""
	I1104 12:10:07.316490   86402 logs.go:282] 0 containers: []
	W1104 12:10:07.316507   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:10:07.316513   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:10:07.316569   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:10:07.361981   86402 cri.go:89] found id: ""
	I1104 12:10:07.362010   86402 logs.go:282] 0 containers: []
	W1104 12:10:07.362018   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:10:07.362024   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:10:07.362087   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:10:07.397834   86402 cri.go:89] found id: ""
	I1104 12:10:07.397867   86402 logs.go:282] 0 containers: []
	W1104 12:10:07.397878   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:10:07.397886   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:10:07.397948   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:10:07.429379   86402 cri.go:89] found id: ""
	I1104 12:10:07.429407   86402 logs.go:282] 0 containers: []
	W1104 12:10:07.429416   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:10:07.429425   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:10:07.429438   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:10:07.495294   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:10:07.495322   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:10:07.495334   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:10:07.578504   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:10:07.578546   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:10:07.617172   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:10:07.617201   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:10:07.667168   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:10:07.667204   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:10:10.181802   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:10:10.196017   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:10:10.196084   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:10:10.228243   86402 cri.go:89] found id: ""
	I1104 12:10:10.228272   86402 logs.go:282] 0 containers: []
	W1104 12:10:10.228282   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:10:10.228289   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:10:10.228347   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:10:10.262110   86402 cri.go:89] found id: ""
	I1104 12:10:10.262143   86402 logs.go:282] 0 containers: []
	W1104 12:10:10.262152   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:10:10.262161   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:10:10.262218   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:10:10.297776   86402 cri.go:89] found id: ""
	I1104 12:10:10.297812   86402 logs.go:282] 0 containers: []
	W1104 12:10:10.297823   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:10:10.297830   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:10:10.297877   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:10:10.332645   86402 cri.go:89] found id: ""
	I1104 12:10:10.332672   86402 logs.go:282] 0 containers: []
	W1104 12:10:10.332680   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:10:10.332685   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:10:10.332730   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:10:10.366703   86402 cri.go:89] found id: ""
	I1104 12:10:10.366735   86402 logs.go:282] 0 containers: []
	W1104 12:10:10.366746   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:10:10.366754   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:10:10.366809   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:10:10.399500   86402 cri.go:89] found id: ""
	I1104 12:10:10.399526   86402 logs.go:282] 0 containers: []
	W1104 12:10:10.399534   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:10:10.399539   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:10:10.399634   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:10:10.434898   86402 cri.go:89] found id: ""
	I1104 12:10:10.434932   86402 logs.go:282] 0 containers: []
	W1104 12:10:10.434943   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:10:10.434951   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:10:10.435022   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:10:10.472159   86402 cri.go:89] found id: ""
	I1104 12:10:10.472189   86402 logs.go:282] 0 containers: []
	W1104 12:10:10.472201   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:10:10.472225   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:10:10.472246   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:10:10.528710   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:10:10.528769   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:10:10.541943   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:10:10.541973   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:10:10.621819   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:10:10.621843   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:10:10.621855   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:10:10.698301   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:10:10.698335   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:10:11.208069   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:13.707594   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:10.556520   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:13.056160   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:15.056984   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:12.851001   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:14.851591   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:13.235151   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:10:13.247511   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:10:13.247585   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:10:13.278546   86402 cri.go:89] found id: ""
	I1104 12:10:13.278576   86402 logs.go:282] 0 containers: []
	W1104 12:10:13.278586   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:10:13.278592   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:10:13.278655   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:10:13.310297   86402 cri.go:89] found id: ""
	I1104 12:10:13.310325   86402 logs.go:282] 0 containers: []
	W1104 12:10:13.310334   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:10:13.310340   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:10:13.310394   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:10:13.344110   86402 cri.go:89] found id: ""
	I1104 12:10:13.344139   86402 logs.go:282] 0 containers: []
	W1104 12:10:13.344150   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:10:13.344158   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:10:13.344210   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:10:13.379778   86402 cri.go:89] found id: ""
	I1104 12:10:13.379806   86402 logs.go:282] 0 containers: []
	W1104 12:10:13.379817   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:10:13.379824   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:10:13.379872   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:10:13.411763   86402 cri.go:89] found id: ""
	I1104 12:10:13.411795   86402 logs.go:282] 0 containers: []
	W1104 12:10:13.411806   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:10:13.411813   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:10:13.411872   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:10:13.445192   86402 cri.go:89] found id: ""
	I1104 12:10:13.445217   86402 logs.go:282] 0 containers: []
	W1104 12:10:13.445235   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:10:13.445243   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:10:13.445297   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:10:13.478518   86402 cri.go:89] found id: ""
	I1104 12:10:13.478549   86402 logs.go:282] 0 containers: []
	W1104 12:10:13.478561   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:10:13.478569   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:10:13.478710   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:10:13.513852   86402 cri.go:89] found id: ""
	I1104 12:10:13.513878   86402 logs.go:282] 0 containers: []
	W1104 12:10:13.513886   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:10:13.513895   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:10:13.513909   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:10:13.590413   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:10:13.590439   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:10:13.590454   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:10:13.664575   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:10:13.664608   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:10:13.700616   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:10:13.700644   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:10:13.751113   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:10:13.751147   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:10:16.264311   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:10:16.277443   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:10:16.277508   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:10:16.309983   86402 cri.go:89] found id: ""
	I1104 12:10:16.310010   86402 logs.go:282] 0 containers: []
	W1104 12:10:16.310020   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:10:16.310025   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:10:16.310073   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:10:16.358281   86402 cri.go:89] found id: ""
	I1104 12:10:16.358305   86402 logs.go:282] 0 containers: []
	W1104 12:10:16.358312   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:10:16.358317   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:10:16.358376   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:10:16.394455   86402 cri.go:89] found id: ""
	I1104 12:10:16.394485   86402 logs.go:282] 0 containers: []
	W1104 12:10:16.394497   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:10:16.394503   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:10:16.394571   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:10:16.430606   86402 cri.go:89] found id: ""
	I1104 12:10:16.430638   86402 logs.go:282] 0 containers: []
	W1104 12:10:16.430648   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:10:16.430655   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:10:16.430716   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:10:16.464402   86402 cri.go:89] found id: ""
	I1104 12:10:16.464439   86402 logs.go:282] 0 containers: []
	W1104 12:10:16.464450   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:10:16.464458   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:10:16.464517   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:10:16.497985   86402 cri.go:89] found id: ""
	I1104 12:10:16.498009   86402 logs.go:282] 0 containers: []
	W1104 12:10:16.498017   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:10:16.498022   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:10:16.498076   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:10:16.531255   86402 cri.go:89] found id: ""
	I1104 12:10:16.531289   86402 logs.go:282] 0 containers: []
	W1104 12:10:16.531301   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:10:16.531309   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:10:16.531372   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:10:16.566176   86402 cri.go:89] found id: ""
	I1104 12:10:16.566204   86402 logs.go:282] 0 containers: []
	W1104 12:10:16.566213   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:10:16.566228   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:10:16.566243   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:10:16.634157   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:10:16.634196   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:10:16.634218   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:10:16.206939   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:18.208360   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:17.555513   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:19.556105   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:17.351026   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:19.351294   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:16.710518   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:10:16.710550   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:10:16.746572   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:10:16.746608   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:10:16.797146   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:10:16.797179   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:10:19.310286   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:10:19.323409   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:10:19.323473   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:10:19.360864   86402 cri.go:89] found id: ""
	I1104 12:10:19.360893   86402 logs.go:282] 0 containers: []
	W1104 12:10:19.360902   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:10:19.360907   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:10:19.360962   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:10:19.400127   86402 cri.go:89] found id: ""
	I1104 12:10:19.400155   86402 logs.go:282] 0 containers: []
	W1104 12:10:19.400167   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:10:19.400174   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:10:19.400230   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:10:19.433023   86402 cri.go:89] found id: ""
	I1104 12:10:19.433049   86402 logs.go:282] 0 containers: []
	W1104 12:10:19.433057   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:10:19.433062   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:10:19.433123   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:10:19.467786   86402 cri.go:89] found id: ""
	I1104 12:10:19.467810   86402 logs.go:282] 0 containers: []
	W1104 12:10:19.467819   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:10:19.467825   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:10:19.467875   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:10:19.498411   86402 cri.go:89] found id: ""
	I1104 12:10:19.498436   86402 logs.go:282] 0 containers: []
	W1104 12:10:19.498444   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:10:19.498455   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:10:19.498502   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:10:19.532146   86402 cri.go:89] found id: ""
	I1104 12:10:19.532171   86402 logs.go:282] 0 containers: []
	W1104 12:10:19.532179   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:10:19.532184   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:10:19.532234   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:10:19.567271   86402 cri.go:89] found id: ""
	I1104 12:10:19.567294   86402 logs.go:282] 0 containers: []
	W1104 12:10:19.567302   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:10:19.567308   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:10:19.567369   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:10:19.608233   86402 cri.go:89] found id: ""
	I1104 12:10:19.608265   86402 logs.go:282] 0 containers: []
	W1104 12:10:19.608279   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:10:19.608289   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:10:19.608304   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:10:19.649039   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:10:19.649071   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:10:19.702129   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:10:19.702168   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:10:19.716749   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:10:19.716776   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:10:19.787538   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:10:19.787560   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:10:19.787572   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:10:20.208694   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:22.708289   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:21.556715   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:23.557173   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:21.851010   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:23.852944   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:22.368982   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:10:22.382889   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:10:22.382962   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:10:22.418672   86402 cri.go:89] found id: ""
	I1104 12:10:22.418698   86402 logs.go:282] 0 containers: []
	W1104 12:10:22.418709   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:10:22.418716   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:10:22.418782   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:10:22.451675   86402 cri.go:89] found id: ""
	I1104 12:10:22.451704   86402 logs.go:282] 0 containers: []
	W1104 12:10:22.451715   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:10:22.451723   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:10:22.451785   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:10:22.488520   86402 cri.go:89] found id: ""
	I1104 12:10:22.488549   86402 logs.go:282] 0 containers: []
	W1104 12:10:22.488561   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:10:22.488567   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:10:22.488631   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:10:22.530288   86402 cri.go:89] found id: ""
	I1104 12:10:22.530312   86402 logs.go:282] 0 containers: []
	W1104 12:10:22.530321   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:10:22.530326   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:10:22.530382   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:10:22.564929   86402 cri.go:89] found id: ""
	I1104 12:10:22.564958   86402 logs.go:282] 0 containers: []
	W1104 12:10:22.564970   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:10:22.564977   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:10:22.565036   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:10:22.598015   86402 cri.go:89] found id: ""
	I1104 12:10:22.598042   86402 logs.go:282] 0 containers: []
	W1104 12:10:22.598051   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:10:22.598056   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:10:22.598160   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:10:22.632894   86402 cri.go:89] found id: ""
	I1104 12:10:22.632921   86402 logs.go:282] 0 containers: []
	W1104 12:10:22.632930   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:10:22.632935   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:10:22.633001   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:10:22.665194   86402 cri.go:89] found id: ""
	I1104 12:10:22.665218   86402 logs.go:282] 0 containers: []
	W1104 12:10:22.665245   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:10:22.665257   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:10:22.665272   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:10:22.717731   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:10:22.717763   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:10:22.732671   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:10:22.732698   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:10:22.823908   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:10:22.823946   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:10:22.823963   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:10:22.907812   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:10:22.907848   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:10:25.449308   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:10:25.461694   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:10:25.461751   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:10:25.493036   86402 cri.go:89] found id: ""
	I1104 12:10:25.493061   86402 logs.go:282] 0 containers: []
	W1104 12:10:25.493068   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:10:25.493075   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:10:25.493122   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:10:25.525084   86402 cri.go:89] found id: ""
	I1104 12:10:25.525116   86402 logs.go:282] 0 containers: []
	W1104 12:10:25.525128   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:10:25.525135   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:10:25.525196   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:10:25.561380   86402 cri.go:89] found id: ""
	I1104 12:10:25.561424   86402 logs.go:282] 0 containers: []
	W1104 12:10:25.561436   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:10:25.561444   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:10:25.561499   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:10:25.595429   86402 cri.go:89] found id: ""
	I1104 12:10:25.595453   86402 logs.go:282] 0 containers: []
	W1104 12:10:25.595468   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:10:25.595474   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:10:25.595521   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:10:25.627409   86402 cri.go:89] found id: ""
	I1104 12:10:25.627436   86402 logs.go:282] 0 containers: []
	W1104 12:10:25.627445   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:10:25.627450   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:10:25.627497   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:10:25.661048   86402 cri.go:89] found id: ""
	I1104 12:10:25.661073   86402 logs.go:282] 0 containers: []
	W1104 12:10:25.661082   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:10:25.661088   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:10:25.661135   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:10:25.698882   86402 cri.go:89] found id: ""
	I1104 12:10:25.698912   86402 logs.go:282] 0 containers: []
	W1104 12:10:25.698920   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:10:25.698926   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:10:25.698978   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:10:25.733355   86402 cri.go:89] found id: ""
	I1104 12:10:25.733397   86402 logs.go:282] 0 containers: []
	W1104 12:10:25.733409   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:10:25.733420   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:10:25.733435   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:10:25.784871   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:10:25.784908   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:10:25.798715   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:10:25.798740   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:10:25.870362   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:10:25.870383   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:10:25.870397   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:10:25.950565   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:10:25.950598   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:10:25.209496   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:27.706991   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:29.708209   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:26.055597   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:28.055845   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:30.056584   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:26.351027   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:28.851204   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:28.488258   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:10:28.506058   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:10:28.506114   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:10:28.566325   86402 cri.go:89] found id: ""
	I1104 12:10:28.566351   86402 logs.go:282] 0 containers: []
	W1104 12:10:28.566358   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:10:28.566364   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:10:28.566413   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:10:28.612753   86402 cri.go:89] found id: ""
	I1104 12:10:28.612781   86402 logs.go:282] 0 containers: []
	W1104 12:10:28.612790   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:10:28.612796   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:10:28.612854   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:10:28.647082   86402 cri.go:89] found id: ""
	I1104 12:10:28.647109   86402 logs.go:282] 0 containers: []
	W1104 12:10:28.647120   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:10:28.647128   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:10:28.647205   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:10:28.683197   86402 cri.go:89] found id: ""
	I1104 12:10:28.683227   86402 logs.go:282] 0 containers: []
	W1104 12:10:28.683239   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:10:28.683247   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:10:28.683299   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:10:28.718139   86402 cri.go:89] found id: ""
	I1104 12:10:28.718175   86402 logs.go:282] 0 containers: []
	W1104 12:10:28.718186   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:10:28.718194   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:10:28.718253   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:10:28.749689   86402 cri.go:89] found id: ""
	I1104 12:10:28.749721   86402 logs.go:282] 0 containers: []
	W1104 12:10:28.749732   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:10:28.749739   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:10:28.749803   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:10:28.786824   86402 cri.go:89] found id: ""
	I1104 12:10:28.786851   86402 logs.go:282] 0 containers: []
	W1104 12:10:28.786859   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:10:28.786864   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:10:28.786925   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:10:28.822833   86402 cri.go:89] found id: ""
	I1104 12:10:28.822856   86402 logs.go:282] 0 containers: []
	W1104 12:10:28.822865   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:10:28.822872   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:10:28.822884   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:10:28.835267   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:10:28.835298   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:10:28.900051   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:10:28.900076   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:10:28.900089   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:10:28.979867   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:10:28.979912   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:10:29.017294   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:10:29.017327   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:10:31.569559   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:10:31.582065   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:10:31.582136   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:10:31.614924   86402 cri.go:89] found id: ""
	I1104 12:10:31.614952   86402 logs.go:282] 0 containers: []
	W1104 12:10:31.614960   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:10:31.614966   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:10:31.615029   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:10:31.647178   86402 cri.go:89] found id: ""
	I1104 12:10:31.647204   86402 logs.go:282] 0 containers: []
	W1104 12:10:31.647212   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:10:31.647218   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:10:31.647277   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:10:31.678723   86402 cri.go:89] found id: ""
	I1104 12:10:31.678749   86402 logs.go:282] 0 containers: []
	W1104 12:10:31.678761   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:10:31.678769   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:10:31.678819   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:10:31.709787   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:34.208234   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:32.555978   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:34.557026   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:31.351700   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:33.850976   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:35.851636   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:31.713013   86402 cri.go:89] found id: ""
	I1104 12:10:31.713036   86402 logs.go:282] 0 containers: []
	W1104 12:10:31.713043   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:10:31.713048   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:10:31.713092   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:10:31.746564   86402 cri.go:89] found id: ""
	I1104 12:10:31.746591   86402 logs.go:282] 0 containers: []
	W1104 12:10:31.746600   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:10:31.746605   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:10:31.746658   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:10:31.779559   86402 cri.go:89] found id: ""
	I1104 12:10:31.779586   86402 logs.go:282] 0 containers: []
	W1104 12:10:31.779594   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:10:31.779601   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:10:31.779652   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:10:31.812047   86402 cri.go:89] found id: ""
	I1104 12:10:31.812076   86402 logs.go:282] 0 containers: []
	W1104 12:10:31.812087   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:10:31.812094   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:10:31.812163   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:10:31.845479   86402 cri.go:89] found id: ""
	I1104 12:10:31.845510   86402 logs.go:282] 0 containers: []
	W1104 12:10:31.845522   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:10:31.845532   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:10:31.845551   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:10:31.909399   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:10:31.909423   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:10:31.909434   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:10:31.985994   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:10:31.986031   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:10:32.023222   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:10:32.023255   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:10:32.074429   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:10:32.074467   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:10:34.588202   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:10:34.600925   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:10:34.600994   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:10:34.632718   86402 cri.go:89] found id: ""
	I1104 12:10:34.632743   86402 logs.go:282] 0 containers: []
	W1104 12:10:34.632754   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:10:34.632763   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:10:34.632813   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:10:34.665553   86402 cri.go:89] found id: ""
	I1104 12:10:34.665576   86402 logs.go:282] 0 containers: []
	W1104 12:10:34.665585   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:10:34.665590   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:10:34.665641   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:10:34.700059   86402 cri.go:89] found id: ""
	I1104 12:10:34.700081   86402 logs.go:282] 0 containers: []
	W1104 12:10:34.700089   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:10:34.700094   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:10:34.700141   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:10:34.732940   86402 cri.go:89] found id: ""
	I1104 12:10:34.732962   86402 logs.go:282] 0 containers: []
	W1104 12:10:34.732970   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:10:34.732978   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:10:34.733023   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:10:34.764580   86402 cri.go:89] found id: ""
	I1104 12:10:34.764610   86402 logs.go:282] 0 containers: []
	W1104 12:10:34.764618   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:10:34.764624   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:10:34.764680   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:10:34.798030   86402 cri.go:89] found id: ""
	I1104 12:10:34.798053   86402 logs.go:282] 0 containers: []
	W1104 12:10:34.798061   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:10:34.798067   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:10:34.798115   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:10:34.829847   86402 cri.go:89] found id: ""
	I1104 12:10:34.829876   86402 logs.go:282] 0 containers: []
	W1104 12:10:34.829884   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:10:34.829889   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:10:34.829946   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:10:34.862764   86402 cri.go:89] found id: ""
	I1104 12:10:34.862792   86402 logs.go:282] 0 containers: []
	W1104 12:10:34.862804   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:10:34.862815   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:10:34.862828   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:10:34.912367   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:10:34.912397   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:10:34.925347   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:10:34.925383   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:10:34.990459   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:10:34.990486   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:10:34.990502   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:10:35.066765   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:10:35.066796   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:10:36.706912   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:38.707144   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:37.056279   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:39.555433   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:38.349986   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:40.354694   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:37.602696   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:10:37.615041   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:10:37.615115   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:10:37.646872   86402 cri.go:89] found id: ""
	I1104 12:10:37.646900   86402 logs.go:282] 0 containers: []
	W1104 12:10:37.646911   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:10:37.646918   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:10:37.646977   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:10:37.679770   86402 cri.go:89] found id: ""
	I1104 12:10:37.679797   86402 logs.go:282] 0 containers: []
	W1104 12:10:37.679805   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:10:37.679810   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:10:37.679867   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:10:37.711693   86402 cri.go:89] found id: ""
	I1104 12:10:37.711720   86402 logs.go:282] 0 containers: []
	W1104 12:10:37.711733   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:10:37.711743   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:10:37.711803   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:10:37.746605   86402 cri.go:89] found id: ""
	I1104 12:10:37.746636   86402 logs.go:282] 0 containers: []
	W1104 12:10:37.746648   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:10:37.746656   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:10:37.746716   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:10:37.778983   86402 cri.go:89] found id: ""
	I1104 12:10:37.779010   86402 logs.go:282] 0 containers: []
	W1104 12:10:37.779020   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:10:37.779026   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:10:37.779086   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:10:37.813293   86402 cri.go:89] found id: ""
	I1104 12:10:37.813321   86402 logs.go:282] 0 containers: []
	W1104 12:10:37.813330   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:10:37.813335   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:10:37.813387   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:10:37.846181   86402 cri.go:89] found id: ""
	I1104 12:10:37.846209   86402 logs.go:282] 0 containers: []
	W1104 12:10:37.846219   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:10:37.846226   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:10:37.846287   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:10:37.877485   86402 cri.go:89] found id: ""
	I1104 12:10:37.877520   86402 logs.go:282] 0 containers: []
	W1104 12:10:37.877531   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:10:37.877541   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:10:37.877558   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:10:37.926704   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:10:37.926733   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:10:37.939771   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:10:37.939796   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:10:38.003762   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:10:38.003783   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:10:38.003800   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:10:38.085419   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:10:38.085456   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:10:40.625351   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:10:40.637380   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:10:40.637459   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:10:40.670274   86402 cri.go:89] found id: ""
	I1104 12:10:40.670303   86402 logs.go:282] 0 containers: []
	W1104 12:10:40.670315   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:10:40.670322   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:10:40.670382   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:10:40.703383   86402 cri.go:89] found id: ""
	I1104 12:10:40.703414   86402 logs.go:282] 0 containers: []
	W1104 12:10:40.703427   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:10:40.703434   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:10:40.703481   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:10:40.739549   86402 cri.go:89] found id: ""
	I1104 12:10:40.739576   86402 logs.go:282] 0 containers: []
	W1104 12:10:40.739586   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:10:40.739594   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:10:40.739651   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:10:40.775466   86402 cri.go:89] found id: ""
	I1104 12:10:40.775492   86402 logs.go:282] 0 containers: []
	W1104 12:10:40.775502   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:10:40.775513   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:10:40.775567   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:10:40.810486   86402 cri.go:89] found id: ""
	I1104 12:10:40.810515   86402 logs.go:282] 0 containers: []
	W1104 12:10:40.810525   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:10:40.810533   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:10:40.810593   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:10:40.844277   86402 cri.go:89] found id: ""
	I1104 12:10:40.844309   86402 logs.go:282] 0 containers: []
	W1104 12:10:40.844321   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:10:40.844329   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:10:40.844391   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:10:40.878699   86402 cri.go:89] found id: ""
	I1104 12:10:40.878728   86402 logs.go:282] 0 containers: []
	W1104 12:10:40.878739   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:10:40.878746   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:10:40.878804   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:10:40.913888   86402 cri.go:89] found id: ""
	I1104 12:10:40.913913   86402 logs.go:282] 0 containers: []
	W1104 12:10:40.913921   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:10:40.913929   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:10:40.913939   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:10:40.966854   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:10:40.966892   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:10:40.980483   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:10:40.980510   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:10:41.046059   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:10:41.046085   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:10:41.046100   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:10:41.129746   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:10:41.129779   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:10:40.707964   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:43.207804   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:42.057019   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:44.555947   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:42.850057   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:44.851467   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:43.667029   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:10:43.680024   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:10:43.680092   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:10:43.714185   86402 cri.go:89] found id: ""
	I1104 12:10:43.714218   86402 logs.go:282] 0 containers: []
	W1104 12:10:43.714227   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:10:43.714235   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:10:43.714294   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:10:43.749493   86402 cri.go:89] found id: ""
	I1104 12:10:43.749515   86402 logs.go:282] 0 containers: []
	W1104 12:10:43.749523   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:10:43.749529   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:10:43.749588   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:10:43.785400   86402 cri.go:89] found id: ""
	I1104 12:10:43.785426   86402 logs.go:282] 0 containers: []
	W1104 12:10:43.785437   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:10:43.785444   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:10:43.785507   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:10:43.818465   86402 cri.go:89] found id: ""
	I1104 12:10:43.818505   86402 logs.go:282] 0 containers: []
	W1104 12:10:43.818517   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:10:43.818524   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:10:43.818573   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:10:43.850232   86402 cri.go:89] found id: ""
	I1104 12:10:43.850262   86402 logs.go:282] 0 containers: []
	W1104 12:10:43.850272   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:10:43.850279   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:10:43.850337   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:10:43.882806   86402 cri.go:89] found id: ""
	I1104 12:10:43.882840   86402 logs.go:282] 0 containers: []
	W1104 12:10:43.882851   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:10:43.882859   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:10:43.882920   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:10:43.919449   86402 cri.go:89] found id: ""
	I1104 12:10:43.919476   86402 logs.go:282] 0 containers: []
	W1104 12:10:43.919486   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:10:43.919493   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:10:43.919556   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:10:43.953761   86402 cri.go:89] found id: ""
	I1104 12:10:43.953791   86402 logs.go:282] 0 containers: []
	W1104 12:10:43.953801   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:10:43.953812   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:10:43.953825   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:10:44.005559   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:10:44.005594   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:10:44.019431   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:10:44.019456   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:10:44.094436   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:10:44.094457   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:10:44.094470   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:10:44.174026   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:10:44.174061   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:10:45.707449   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:47.709901   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:46.557050   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:48.557552   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:46.851720   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:49.350269   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:46.712021   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:10:46.724258   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:10:46.724318   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:10:46.754472   86402 cri.go:89] found id: ""
	I1104 12:10:46.754501   86402 logs.go:282] 0 containers: []
	W1104 12:10:46.754510   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:10:46.754515   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:10:46.754563   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:10:46.790184   86402 cri.go:89] found id: ""
	I1104 12:10:46.790209   86402 logs.go:282] 0 containers: []
	W1104 12:10:46.790219   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:10:46.790226   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:10:46.790284   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:10:46.824840   86402 cri.go:89] found id: ""
	I1104 12:10:46.824865   86402 logs.go:282] 0 containers: []
	W1104 12:10:46.824875   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:10:46.824882   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:10:46.824952   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:10:46.857295   86402 cri.go:89] found id: ""
	I1104 12:10:46.857329   86402 logs.go:282] 0 containers: []
	W1104 12:10:46.857360   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:10:46.857369   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:10:46.857430   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:10:46.889540   86402 cri.go:89] found id: ""
	I1104 12:10:46.889571   86402 logs.go:282] 0 containers: []
	W1104 12:10:46.889582   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:10:46.889588   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:10:46.889652   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:10:46.930165   86402 cri.go:89] found id: ""
	I1104 12:10:46.930195   86402 logs.go:282] 0 containers: []
	W1104 12:10:46.930204   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:10:46.930210   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:10:46.930266   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:10:46.965964   86402 cri.go:89] found id: ""
	I1104 12:10:46.965994   86402 logs.go:282] 0 containers: []
	W1104 12:10:46.966006   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:10:46.966013   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:10:46.966060   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:10:47.002700   86402 cri.go:89] found id: ""
	I1104 12:10:47.002732   86402 logs.go:282] 0 containers: []
	W1104 12:10:47.002741   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:10:47.002749   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:10:47.002760   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:10:47.056362   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:10:47.056392   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:10:47.070447   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:10:47.070472   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:10:47.143207   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:10:47.143240   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:10:47.143256   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:10:47.223985   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:10:47.224015   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:10:49.765870   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:10:49.778288   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:10:49.778352   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:10:49.812012   86402 cri.go:89] found id: ""
	I1104 12:10:49.812044   86402 logs.go:282] 0 containers: []
	W1104 12:10:49.812054   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:10:49.812064   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:10:49.812115   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:10:49.847260   86402 cri.go:89] found id: ""
	I1104 12:10:49.847290   86402 logs.go:282] 0 containers: []
	W1104 12:10:49.847301   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:10:49.847308   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:10:49.847361   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:10:49.877397   86402 cri.go:89] found id: ""
	I1104 12:10:49.877419   86402 logs.go:282] 0 containers: []
	W1104 12:10:49.877427   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:10:49.877432   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:10:49.877486   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:10:49.912453   86402 cri.go:89] found id: ""
	I1104 12:10:49.912484   86402 logs.go:282] 0 containers: []
	W1104 12:10:49.912499   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:10:49.912506   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:10:49.912572   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:10:49.948374   86402 cri.go:89] found id: ""
	I1104 12:10:49.948404   86402 logs.go:282] 0 containers: []
	W1104 12:10:49.948416   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:10:49.948422   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:10:49.948488   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:10:49.982190   86402 cri.go:89] found id: ""
	I1104 12:10:49.982216   86402 logs.go:282] 0 containers: []
	W1104 12:10:49.982228   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:10:49.982236   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:10:49.982294   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:10:50.014396   86402 cri.go:89] found id: ""
	I1104 12:10:50.014426   86402 logs.go:282] 0 containers: []
	W1104 12:10:50.014437   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:10:50.014445   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:10:50.014507   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:10:50.051770   86402 cri.go:89] found id: ""
	I1104 12:10:50.051793   86402 logs.go:282] 0 containers: []
	W1104 12:10:50.051801   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:10:50.051809   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:10:50.051820   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:10:50.116158   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:10:50.116185   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:10:50.116202   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:10:50.194382   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:10:50.194431   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:10:50.235957   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:10:50.235983   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:10:50.290720   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:10:50.290750   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:10:50.207837   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:52.207972   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:54.208026   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:51.055965   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:53.056014   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:55.056318   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:51.850513   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:54.351193   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:52.805144   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:10:52.817686   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:10:52.817753   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:10:52.852470   86402 cri.go:89] found id: ""
	I1104 12:10:52.852492   86402 logs.go:282] 0 containers: []
	W1104 12:10:52.852546   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:10:52.852559   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:10:52.852603   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:10:52.889682   86402 cri.go:89] found id: ""
	I1104 12:10:52.889705   86402 logs.go:282] 0 containers: []
	W1104 12:10:52.889714   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:10:52.889720   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:10:52.889773   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:10:52.924490   86402 cri.go:89] found id: ""
	I1104 12:10:52.924525   86402 logs.go:282] 0 containers: []
	W1104 12:10:52.924537   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:10:52.924544   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:10:52.924604   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:10:52.957055   86402 cri.go:89] found id: ""
	I1104 12:10:52.957085   86402 logs.go:282] 0 containers: []
	W1104 12:10:52.957094   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:10:52.957099   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:10:52.957143   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:10:52.993379   86402 cri.go:89] found id: ""
	I1104 12:10:52.993411   86402 logs.go:282] 0 containers: []
	W1104 12:10:52.993423   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:10:52.993430   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:10:52.993493   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:10:53.027365   86402 cri.go:89] found id: ""
	I1104 12:10:53.027398   86402 logs.go:282] 0 containers: []
	W1104 12:10:53.027407   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:10:53.027412   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:10:53.027488   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:10:53.061048   86402 cri.go:89] found id: ""
	I1104 12:10:53.061074   86402 logs.go:282] 0 containers: []
	W1104 12:10:53.061082   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:10:53.061089   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:10:53.061163   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:10:53.101867   86402 cri.go:89] found id: ""
	I1104 12:10:53.101894   86402 logs.go:282] 0 containers: []
	W1104 12:10:53.101904   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:10:53.101915   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:10:53.101927   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:10:53.152314   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:10:53.152351   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:10:53.165630   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:10:53.165657   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:10:53.239717   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:10:53.239739   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:10:53.239753   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:10:53.318140   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:10:53.318186   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:10:55.857443   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:10:55.869524   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:10:55.869608   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:10:55.900719   86402 cri.go:89] found id: ""
	I1104 12:10:55.900743   86402 logs.go:282] 0 containers: []
	W1104 12:10:55.900753   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:10:55.900761   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:10:55.900821   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:10:55.932699   86402 cri.go:89] found id: ""
	I1104 12:10:55.932724   86402 logs.go:282] 0 containers: []
	W1104 12:10:55.932734   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:10:55.932741   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:10:55.932798   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:10:55.964729   86402 cri.go:89] found id: ""
	I1104 12:10:55.964758   86402 logs.go:282] 0 containers: []
	W1104 12:10:55.964767   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:10:55.964775   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:10:55.964823   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:10:55.997870   86402 cri.go:89] found id: ""
	I1104 12:10:55.997897   86402 logs.go:282] 0 containers: []
	W1104 12:10:55.997907   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:10:55.997915   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:10:55.997977   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:10:56.031707   86402 cri.go:89] found id: ""
	I1104 12:10:56.031736   86402 logs.go:282] 0 containers: []
	W1104 12:10:56.031744   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:10:56.031749   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:10:56.031805   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:10:56.070839   86402 cri.go:89] found id: ""
	I1104 12:10:56.070863   86402 logs.go:282] 0 containers: []
	W1104 12:10:56.070871   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:10:56.070877   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:10:56.070922   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:10:56.109364   86402 cri.go:89] found id: ""
	I1104 12:10:56.109393   86402 logs.go:282] 0 containers: []
	W1104 12:10:56.109404   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:10:56.109412   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:10:56.109474   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:10:56.143369   86402 cri.go:89] found id: ""
	I1104 12:10:56.143402   86402 logs.go:282] 0 containers: []
	W1104 12:10:56.143414   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:10:56.143424   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:10:56.143437   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:10:56.156924   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:10:56.156952   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:10:56.223624   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:10:56.223647   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:10:56.223659   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:10:56.302040   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:10:56.302082   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:10:56.343102   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:10:56.343150   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:10:56.209085   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:58.712250   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:57.056463   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:59.555744   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:56.850242   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:58.850955   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:58.896551   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:10:58.909034   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:10:58.909110   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:10:58.944520   86402 cri.go:89] found id: ""
	I1104 12:10:58.944550   86402 logs.go:282] 0 containers: []
	W1104 12:10:58.944559   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:10:58.944565   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:10:58.944612   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:10:58.980137   86402 cri.go:89] found id: ""
	I1104 12:10:58.980167   86402 logs.go:282] 0 containers: []
	W1104 12:10:58.980176   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:10:58.980181   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:10:58.980231   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:10:59.014505   86402 cri.go:89] found id: ""
	I1104 12:10:59.014536   86402 logs.go:282] 0 containers: []
	W1104 12:10:59.014545   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:10:59.014551   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:10:59.014602   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:10:59.050616   86402 cri.go:89] found id: ""
	I1104 12:10:59.050642   86402 logs.go:282] 0 containers: []
	W1104 12:10:59.050652   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:10:59.050659   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:10:59.050718   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:10:59.084328   86402 cri.go:89] found id: ""
	I1104 12:10:59.084358   86402 logs.go:282] 0 containers: []
	W1104 12:10:59.084369   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:10:59.084376   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:10:59.084449   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:10:59.116607   86402 cri.go:89] found id: ""
	I1104 12:10:59.116633   86402 logs.go:282] 0 containers: []
	W1104 12:10:59.116642   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:10:59.116649   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:10:59.116711   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:10:59.149727   86402 cri.go:89] found id: ""
	I1104 12:10:59.149754   86402 logs.go:282] 0 containers: []
	W1104 12:10:59.149765   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:10:59.149773   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:10:59.149832   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:10:59.182992   86402 cri.go:89] found id: ""
	I1104 12:10:59.183023   86402 logs.go:282] 0 containers: []
	W1104 12:10:59.183035   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:10:59.183045   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:10:59.183059   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:10:59.234826   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:10:59.234862   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:10:59.248401   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:10:59.248427   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:10:59.317143   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:10:59.317171   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:10:59.317186   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:10:59.397294   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:10:59.397336   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:11:01.208022   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:03.707297   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:01.556680   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:04.055902   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:01.350865   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:03.850510   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:01.933617   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:11:01.946458   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:11:01.946537   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:11:01.981652   86402 cri.go:89] found id: ""
	I1104 12:11:01.981682   86402 logs.go:282] 0 containers: []
	W1104 12:11:01.981693   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:11:01.981701   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:11:01.981757   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:11:02.014245   86402 cri.go:89] found id: ""
	I1104 12:11:02.014273   86402 logs.go:282] 0 containers: []
	W1104 12:11:02.014282   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:11:02.014287   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:11:02.014350   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:11:02.047386   86402 cri.go:89] found id: ""
	I1104 12:11:02.047409   86402 logs.go:282] 0 containers: []
	W1104 12:11:02.047420   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:11:02.047427   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:11:02.047488   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:11:02.086427   86402 cri.go:89] found id: ""
	I1104 12:11:02.086464   86402 logs.go:282] 0 containers: []
	W1104 12:11:02.086475   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:11:02.086483   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:11:02.086544   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:11:02.120219   86402 cri.go:89] found id: ""
	I1104 12:11:02.120246   86402 logs.go:282] 0 containers: []
	W1104 12:11:02.120255   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:11:02.120260   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:11:02.120318   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:11:02.153832   86402 cri.go:89] found id: ""
	I1104 12:11:02.153864   86402 logs.go:282] 0 containers: []
	W1104 12:11:02.153876   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:11:02.153884   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:11:02.153950   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:11:02.186237   86402 cri.go:89] found id: ""
	I1104 12:11:02.186266   86402 logs.go:282] 0 containers: []
	W1104 12:11:02.186278   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:11:02.186285   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:11:02.186351   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:11:02.219238   86402 cri.go:89] found id: ""
	I1104 12:11:02.219269   86402 logs.go:282] 0 containers: []
	W1104 12:11:02.219280   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:11:02.219290   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:11:02.219301   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:11:02.301062   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:11:02.301099   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:11:02.358585   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:11:02.358617   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:11:02.414153   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:11:02.414200   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:11:02.428429   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:11:02.428456   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:11:02.497040   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:11:04.998089   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:11:05.010890   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:11:05.010947   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:11:05.046483   86402 cri.go:89] found id: ""
	I1104 12:11:05.046513   86402 logs.go:282] 0 containers: []
	W1104 12:11:05.046523   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:11:05.046534   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:11:05.046594   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:11:05.079487   86402 cri.go:89] found id: ""
	I1104 12:11:05.079516   86402 logs.go:282] 0 containers: []
	W1104 12:11:05.079527   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:11:05.079535   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:11:05.079595   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:11:05.110968   86402 cri.go:89] found id: ""
	I1104 12:11:05.110997   86402 logs.go:282] 0 containers: []
	W1104 12:11:05.111004   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:11:05.111010   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:11:05.111057   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:11:05.143372   86402 cri.go:89] found id: ""
	I1104 12:11:05.143398   86402 logs.go:282] 0 containers: []
	W1104 12:11:05.143408   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:11:05.143415   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:11:05.143484   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:11:05.174691   86402 cri.go:89] found id: ""
	I1104 12:11:05.174717   86402 logs.go:282] 0 containers: []
	W1104 12:11:05.174730   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:11:05.174737   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:11:05.174802   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:11:05.210005   86402 cri.go:89] found id: ""
	I1104 12:11:05.210025   86402 logs.go:282] 0 containers: []
	W1104 12:11:05.210033   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:11:05.210041   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:11:05.210085   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:11:05.244874   86402 cri.go:89] found id: ""
	I1104 12:11:05.244899   86402 logs.go:282] 0 containers: []
	W1104 12:11:05.244908   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:11:05.244913   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:11:05.244956   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:11:05.276517   86402 cri.go:89] found id: ""
	I1104 12:11:05.276547   86402 logs.go:282] 0 containers: []
	W1104 12:11:05.276557   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:11:05.276568   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:11:05.276581   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:11:05.354057   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:11:05.354087   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:11:05.390848   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:11:05.390887   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:11:05.442659   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:11:05.442692   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:11:05.456290   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:11:05.456315   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:11:05.530310   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:11:06.207301   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:08.208333   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:06.056314   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:08.556910   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:06.350241   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:08.350774   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:10.351274   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:08.030545   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:11:08.043598   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:11:08.043654   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:11:08.081604   86402 cri.go:89] found id: ""
	I1104 12:11:08.081634   86402 logs.go:282] 0 containers: []
	W1104 12:11:08.081644   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:11:08.081652   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:11:08.081712   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:11:08.135357   86402 cri.go:89] found id: ""
	I1104 12:11:08.135388   86402 logs.go:282] 0 containers: []
	W1104 12:11:08.135398   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:11:08.135405   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:11:08.135470   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:11:08.173275   86402 cri.go:89] found id: ""
	I1104 12:11:08.173298   86402 logs.go:282] 0 containers: []
	W1104 12:11:08.173306   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:11:08.173311   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:11:08.173371   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:11:08.213415   86402 cri.go:89] found id: ""
	I1104 12:11:08.213439   86402 logs.go:282] 0 containers: []
	W1104 12:11:08.213448   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:11:08.213454   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:11:08.213507   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:11:08.244759   86402 cri.go:89] found id: ""
	I1104 12:11:08.244791   86402 logs.go:282] 0 containers: []
	W1104 12:11:08.244802   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:11:08.244809   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:11:08.244870   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:11:08.276643   86402 cri.go:89] found id: ""
	I1104 12:11:08.276666   86402 logs.go:282] 0 containers: []
	W1104 12:11:08.276675   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:11:08.276682   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:11:08.276751   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:11:08.308425   86402 cri.go:89] found id: ""
	I1104 12:11:08.308451   86402 logs.go:282] 0 containers: []
	W1104 12:11:08.308462   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:11:08.308469   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:11:08.308527   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:11:08.340645   86402 cri.go:89] found id: ""
	I1104 12:11:08.340675   86402 logs.go:282] 0 containers: []
	W1104 12:11:08.340687   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:11:08.340698   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:11:08.340712   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:11:08.413171   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:11:08.413196   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:11:08.413214   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:11:08.496208   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:11:08.496246   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:11:08.534527   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:11:08.534560   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:11:08.583515   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:11:08.583550   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:11:11.099000   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:11:11.112158   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:11:11.112236   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:11:11.145718   86402 cri.go:89] found id: ""
	I1104 12:11:11.145748   86402 logs.go:282] 0 containers: []
	W1104 12:11:11.145758   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:11:11.145765   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:11:11.145958   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:11:11.177270   86402 cri.go:89] found id: ""
	I1104 12:11:11.177301   86402 logs.go:282] 0 containers: []
	W1104 12:11:11.177317   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:11:11.177325   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:11:11.177396   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:11:11.209696   86402 cri.go:89] found id: ""
	I1104 12:11:11.209722   86402 logs.go:282] 0 containers: []
	W1104 12:11:11.209737   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:11:11.209742   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:11:11.209789   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:11:11.244034   86402 cri.go:89] found id: ""
	I1104 12:11:11.244061   86402 logs.go:282] 0 containers: []
	W1104 12:11:11.244069   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:11:11.244078   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:11:11.244135   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:11:11.276437   86402 cri.go:89] found id: ""
	I1104 12:11:11.276462   86402 logs.go:282] 0 containers: []
	W1104 12:11:11.276470   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:11:11.276476   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:11:11.276530   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:11:11.308954   86402 cri.go:89] found id: ""
	I1104 12:11:11.308980   86402 logs.go:282] 0 containers: []
	W1104 12:11:11.308988   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:11:11.308994   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:11:11.309057   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:11:11.342175   86402 cri.go:89] found id: ""
	I1104 12:11:11.342199   86402 logs.go:282] 0 containers: []
	W1104 12:11:11.342207   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:11:11.342211   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:11:11.342266   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:11:11.374810   86402 cri.go:89] found id: ""
	I1104 12:11:11.374839   86402 logs.go:282] 0 containers: []
	W1104 12:11:11.374851   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:11:11.374860   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:11:11.374875   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:11:11.443638   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:11:11.443667   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:11:11.443681   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:11:11.526996   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:11:11.527031   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:11:11.568297   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:11:11.568325   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:11:11.616229   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:11:11.616264   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:11:10.707934   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:12.708053   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:11.055469   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:13.055645   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:15.057348   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:12.851253   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:15.350857   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:14.130707   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:11:14.143045   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:11:14.143116   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:11:14.185422   86402 cri.go:89] found id: ""
	I1104 12:11:14.185461   86402 logs.go:282] 0 containers: []
	W1104 12:11:14.185471   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:11:14.185477   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:11:14.185525   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:11:14.219890   86402 cri.go:89] found id: ""
	I1104 12:11:14.219918   86402 logs.go:282] 0 containers: []
	W1104 12:11:14.219928   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:11:14.219938   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:11:14.219985   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:11:14.253256   86402 cri.go:89] found id: ""
	I1104 12:11:14.253286   86402 logs.go:282] 0 containers: []
	W1104 12:11:14.253296   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:11:14.253304   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:11:14.253364   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:11:14.286228   86402 cri.go:89] found id: ""
	I1104 12:11:14.286259   86402 logs.go:282] 0 containers: []
	W1104 12:11:14.286271   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:11:14.286279   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:11:14.286342   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:11:14.317065   86402 cri.go:89] found id: ""
	I1104 12:11:14.317091   86402 logs.go:282] 0 containers: []
	W1104 12:11:14.317101   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:11:14.317106   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:11:14.317168   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:11:14.348540   86402 cri.go:89] found id: ""
	I1104 12:11:14.348575   86402 logs.go:282] 0 containers: []
	W1104 12:11:14.348583   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:11:14.348589   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:11:14.348647   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:11:14.380824   86402 cri.go:89] found id: ""
	I1104 12:11:14.380849   86402 logs.go:282] 0 containers: []
	W1104 12:11:14.380858   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:11:14.380863   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:11:14.380924   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:11:14.413757   86402 cri.go:89] found id: ""
	I1104 12:11:14.413785   86402 logs.go:282] 0 containers: []
	W1104 12:11:14.413796   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:11:14.413806   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:11:14.413822   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:11:14.479311   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:11:14.479336   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:11:14.479349   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:11:14.572923   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:11:14.572959   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:11:14.620277   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:11:14.620359   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:11:14.674276   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:11:14.674310   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:11:15.208704   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:17.708523   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:17.555941   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:19.556233   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:17.351751   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:19.851087   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:17.187062   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:11:17.200179   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:11:17.200260   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:11:17.232208   86402 cri.go:89] found id: ""
	I1104 12:11:17.232231   86402 logs.go:282] 0 containers: []
	W1104 12:11:17.232238   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:11:17.232244   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:11:17.232298   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:11:17.266224   86402 cri.go:89] found id: ""
	I1104 12:11:17.266248   86402 logs.go:282] 0 containers: []
	W1104 12:11:17.266257   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:11:17.266262   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:11:17.266320   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:11:17.301909   86402 cri.go:89] found id: ""
	I1104 12:11:17.301940   86402 logs.go:282] 0 containers: []
	W1104 12:11:17.301948   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:11:17.301953   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:11:17.302005   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:11:17.339493   86402 cri.go:89] found id: ""
	I1104 12:11:17.339517   86402 logs.go:282] 0 containers: []
	W1104 12:11:17.339530   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:11:17.339537   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:11:17.339600   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:11:17.373879   86402 cri.go:89] found id: ""
	I1104 12:11:17.373927   86402 logs.go:282] 0 containers: []
	W1104 12:11:17.373938   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:11:17.373945   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:11:17.373996   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:11:17.405533   86402 cri.go:89] found id: ""
	I1104 12:11:17.405562   86402 logs.go:282] 0 containers: []
	W1104 12:11:17.405573   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:11:17.405583   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:11:17.405645   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:11:17.439421   86402 cri.go:89] found id: ""
	I1104 12:11:17.439451   86402 logs.go:282] 0 containers: []
	W1104 12:11:17.439460   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:11:17.439468   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:11:17.439532   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:11:17.474573   86402 cri.go:89] found id: ""
	I1104 12:11:17.474602   86402 logs.go:282] 0 containers: []
	W1104 12:11:17.474613   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:11:17.474623   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:11:17.474636   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:11:17.524497   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:11:17.524536   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:11:17.538421   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:11:17.538460   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:11:17.607299   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:11:17.607323   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:11:17.607337   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:11:17.684181   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:11:17.684224   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:11:20.223600   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:11:20.237793   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:11:20.237865   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:11:20.279656   86402 cri.go:89] found id: ""
	I1104 12:11:20.279682   86402 logs.go:282] 0 containers: []
	W1104 12:11:20.279693   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:11:20.279700   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:11:20.279767   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:11:20.337980   86402 cri.go:89] found id: ""
	I1104 12:11:20.338009   86402 logs.go:282] 0 containers: []
	W1104 12:11:20.338020   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:11:20.338027   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:11:20.338087   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:11:20.383183   86402 cri.go:89] found id: ""
	I1104 12:11:20.383217   86402 logs.go:282] 0 containers: []
	W1104 12:11:20.383226   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:11:20.383231   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:11:20.383282   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:11:20.416470   86402 cri.go:89] found id: ""
	I1104 12:11:20.416495   86402 logs.go:282] 0 containers: []
	W1104 12:11:20.416505   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:11:20.416512   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:11:20.416570   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:11:20.451968   86402 cri.go:89] found id: ""
	I1104 12:11:20.452000   86402 logs.go:282] 0 containers: []
	W1104 12:11:20.452011   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:11:20.452017   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:11:20.452074   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:11:20.484800   86402 cri.go:89] found id: ""
	I1104 12:11:20.484823   86402 logs.go:282] 0 containers: []
	W1104 12:11:20.484831   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:11:20.484837   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:11:20.484893   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:11:20.516263   86402 cri.go:89] found id: ""
	I1104 12:11:20.516292   86402 logs.go:282] 0 containers: []
	W1104 12:11:20.516300   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:11:20.516306   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:11:20.516364   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:11:20.548616   86402 cri.go:89] found id: ""
	I1104 12:11:20.548640   86402 logs.go:282] 0 containers: []
	W1104 12:11:20.548651   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:11:20.548661   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:11:20.548674   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:11:20.599338   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:11:20.599368   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:11:20.613116   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:11:20.613148   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:11:20.678898   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:11:20.678924   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:11:20.678936   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:11:20.757570   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:11:20.757606   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:11:20.206649   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:22.207379   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:24.207579   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:22.056670   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:24.555101   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:22.350891   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:24.351318   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:23.293912   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:11:23.307037   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:11:23.307110   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:11:23.341161   86402 cri.go:89] found id: ""
	I1104 12:11:23.341186   86402 logs.go:282] 0 containers: []
	W1104 12:11:23.341195   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:11:23.341200   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:11:23.341277   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:11:23.373462   86402 cri.go:89] found id: ""
	I1104 12:11:23.373491   86402 logs.go:282] 0 containers: []
	W1104 12:11:23.373503   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:11:23.373510   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:11:23.373568   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:11:23.404439   86402 cri.go:89] found id: ""
	I1104 12:11:23.404471   86402 logs.go:282] 0 containers: []
	W1104 12:11:23.404482   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:11:23.404489   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:11:23.404548   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:11:23.435224   86402 cri.go:89] found id: ""
	I1104 12:11:23.435256   86402 logs.go:282] 0 containers: []
	W1104 12:11:23.435267   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:11:23.435274   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:11:23.435336   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:11:23.472593   86402 cri.go:89] found id: ""
	I1104 12:11:23.472622   86402 logs.go:282] 0 containers: []
	W1104 12:11:23.472633   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:11:23.472641   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:11:23.472693   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:11:23.503413   86402 cri.go:89] found id: ""
	I1104 12:11:23.503438   86402 logs.go:282] 0 containers: []
	W1104 12:11:23.503447   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:11:23.503454   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:11:23.503516   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:11:23.537582   86402 cri.go:89] found id: ""
	I1104 12:11:23.537610   86402 logs.go:282] 0 containers: []
	W1104 12:11:23.537621   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:11:23.537628   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:11:23.537689   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:11:23.573799   86402 cri.go:89] found id: ""
	I1104 12:11:23.573824   86402 logs.go:282] 0 containers: []
	W1104 12:11:23.573831   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:11:23.573838   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:11:23.573851   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:11:23.649239   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:11:23.649273   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:11:23.686518   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:11:23.686548   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:11:23.738955   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:11:23.738987   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:11:23.751909   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:11:23.751935   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:11:23.827244   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:11:26.327902   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:11:26.339708   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:11:26.339784   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:11:26.369615   86402 cri.go:89] found id: ""
	I1104 12:11:26.369644   86402 logs.go:282] 0 containers: []
	W1104 12:11:26.369653   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:11:26.369659   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:11:26.369715   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:11:26.402027   86402 cri.go:89] found id: ""
	I1104 12:11:26.402056   86402 logs.go:282] 0 containers: []
	W1104 12:11:26.402065   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:11:26.402070   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:11:26.402123   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:11:26.433483   86402 cri.go:89] found id: ""
	I1104 12:11:26.433512   86402 logs.go:282] 0 containers: []
	W1104 12:11:26.433523   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:11:26.433529   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:11:26.433637   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:11:26.466403   86402 cri.go:89] found id: ""
	I1104 12:11:26.466442   86402 logs.go:282] 0 containers: []
	W1104 12:11:26.466453   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:11:26.466468   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:11:26.466524   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:11:26.499818   86402 cri.go:89] found id: ""
	I1104 12:11:26.499853   86402 logs.go:282] 0 containers: []
	W1104 12:11:26.499864   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:11:26.499871   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:11:26.499930   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:11:26.537782   86402 cri.go:89] found id: ""
	I1104 12:11:26.537809   86402 logs.go:282] 0 containers: []
	W1104 12:11:26.537822   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:11:26.537830   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:11:26.537890   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:11:26.574091   86402 cri.go:89] found id: ""
	I1104 12:11:26.574120   86402 logs.go:282] 0 containers: []
	W1104 12:11:26.574131   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:11:26.574138   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:11:26.574199   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:11:26.607554   86402 cri.go:89] found id: ""
	I1104 12:11:26.607584   86402 logs.go:282] 0 containers: []
	W1104 12:11:26.607596   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:11:26.607606   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:11:26.607620   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:11:26.657405   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:11:26.657443   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:11:26.670022   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:11:26.670046   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1104 12:11:26.707958   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:29.207380   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:26.556568   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:28.557276   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:26.852761   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:29.351303   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	W1104 12:11:26.736238   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:11:26.736266   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:11:26.736278   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:11:26.816277   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:11:26.816309   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:11:29.357639   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:11:29.371116   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:11:29.371204   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:11:29.405569   86402 cri.go:89] found id: ""
	I1104 12:11:29.405595   86402 logs.go:282] 0 containers: []
	W1104 12:11:29.405604   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:11:29.405611   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:11:29.405668   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:11:29.435669   86402 cri.go:89] found id: ""
	I1104 12:11:29.435697   86402 logs.go:282] 0 containers: []
	W1104 12:11:29.435709   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:11:29.435716   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:11:29.435781   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:11:29.476208   86402 cri.go:89] found id: ""
	I1104 12:11:29.476236   86402 logs.go:282] 0 containers: []
	W1104 12:11:29.476245   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:11:29.476251   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:11:29.476305   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:11:29.511446   86402 cri.go:89] found id: ""
	I1104 12:11:29.511474   86402 logs.go:282] 0 containers: []
	W1104 12:11:29.511483   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:11:29.511489   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:11:29.511541   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:11:29.543714   86402 cri.go:89] found id: ""
	I1104 12:11:29.543742   86402 logs.go:282] 0 containers: []
	W1104 12:11:29.543754   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:11:29.543761   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:11:29.543840   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:11:29.577429   86402 cri.go:89] found id: ""
	I1104 12:11:29.577456   86402 logs.go:282] 0 containers: []
	W1104 12:11:29.577466   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:11:29.577473   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:11:29.577534   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:11:29.608430   86402 cri.go:89] found id: ""
	I1104 12:11:29.608457   86402 logs.go:282] 0 containers: []
	W1104 12:11:29.608475   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:11:29.608483   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:11:29.608539   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:11:29.640029   86402 cri.go:89] found id: ""
	I1104 12:11:29.640057   86402 logs.go:282] 0 containers: []
	W1104 12:11:29.640068   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:11:29.640078   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:11:29.640092   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:11:29.691170   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:11:29.691202   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:11:29.704949   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:11:29.704987   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:11:29.766856   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:11:29.766884   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:11:29.766898   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:11:29.847487   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:11:29.847525   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:11:31.208725   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:33.709593   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:30.557500   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:33.056569   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:31.851101   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:34.350356   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:32.382925   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:11:32.395889   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:11:32.395943   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:11:32.428711   86402 cri.go:89] found id: ""
	I1104 12:11:32.428736   86402 logs.go:282] 0 containers: []
	W1104 12:11:32.428749   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:11:32.428755   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:11:32.428810   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:11:32.463269   86402 cri.go:89] found id: ""
	I1104 12:11:32.463295   86402 logs.go:282] 0 containers: []
	W1104 12:11:32.463307   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:11:32.463313   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:11:32.463372   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:11:32.496098   86402 cri.go:89] found id: ""
	I1104 12:11:32.496125   86402 logs.go:282] 0 containers: []
	W1104 12:11:32.496135   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:11:32.496142   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:11:32.496213   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:11:32.528729   86402 cri.go:89] found id: ""
	I1104 12:11:32.528760   86402 logs.go:282] 0 containers: []
	W1104 12:11:32.528771   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:11:32.528778   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:11:32.528860   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:11:32.567290   86402 cri.go:89] found id: ""
	I1104 12:11:32.567321   86402 logs.go:282] 0 containers: []
	W1104 12:11:32.567332   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:11:32.567338   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:11:32.567397   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:11:32.608932   86402 cri.go:89] found id: ""
	I1104 12:11:32.608962   86402 logs.go:282] 0 containers: []
	W1104 12:11:32.608973   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:11:32.608980   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:11:32.609037   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:11:32.641128   86402 cri.go:89] found id: ""
	I1104 12:11:32.641155   86402 logs.go:282] 0 containers: []
	W1104 12:11:32.641164   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:11:32.641171   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:11:32.641239   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:11:32.675651   86402 cri.go:89] found id: ""
	I1104 12:11:32.675682   86402 logs.go:282] 0 containers: []
	W1104 12:11:32.675694   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:11:32.675704   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:11:32.675719   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:11:32.742369   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:11:32.742406   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:11:32.742419   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:11:32.823371   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:11:32.823412   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:11:32.862243   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:11:32.862270   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:11:32.910961   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:11:32.910987   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:11:35.425742   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:11:35.438553   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:11:35.438615   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:11:35.475160   86402 cri.go:89] found id: ""
	I1104 12:11:35.475189   86402 logs.go:282] 0 containers: []
	W1104 12:11:35.475201   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:11:35.475209   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:11:35.475267   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:11:35.517193   86402 cri.go:89] found id: ""
	I1104 12:11:35.517239   86402 logs.go:282] 0 containers: []
	W1104 12:11:35.517252   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:11:35.517260   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:11:35.517329   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:11:35.552941   86402 cri.go:89] found id: ""
	I1104 12:11:35.552967   86402 logs.go:282] 0 containers: []
	W1104 12:11:35.552978   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:11:35.552985   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:11:35.553056   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:11:35.589960   86402 cri.go:89] found id: ""
	I1104 12:11:35.589983   86402 logs.go:282] 0 containers: []
	W1104 12:11:35.589994   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:11:35.590001   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:11:35.590063   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:11:35.624546   86402 cri.go:89] found id: ""
	I1104 12:11:35.624575   86402 logs.go:282] 0 containers: []
	W1104 12:11:35.624587   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:11:35.624595   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:11:35.624655   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:11:35.657855   86402 cri.go:89] found id: ""
	I1104 12:11:35.657885   86402 logs.go:282] 0 containers: []
	W1104 12:11:35.657896   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:11:35.657903   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:11:35.657957   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:11:35.691465   86402 cri.go:89] found id: ""
	I1104 12:11:35.691498   86402 logs.go:282] 0 containers: []
	W1104 12:11:35.691509   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:11:35.691516   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:11:35.691587   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:11:35.727520   86402 cri.go:89] found id: ""
	I1104 12:11:35.727548   86402 logs.go:282] 0 containers: []
	W1104 12:11:35.727558   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:11:35.727569   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:11:35.727584   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:11:35.777876   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:11:35.777912   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:11:35.790790   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:11:35.790817   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:11:35.856780   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:11:35.856805   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:11:35.856819   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:11:35.936769   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:11:35.936812   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:11:36.207096   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:38.707776   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:35.556694   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:38.055778   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:36.850946   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:39.350058   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:38.474827   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:11:38.488151   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:11:38.488221   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:11:38.523010   86402 cri.go:89] found id: ""
	I1104 12:11:38.523042   86402 logs.go:282] 0 containers: []
	W1104 12:11:38.523053   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:11:38.523061   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:11:38.523117   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:11:38.558065   86402 cri.go:89] found id: ""
	I1104 12:11:38.558093   86402 logs.go:282] 0 containers: []
	W1104 12:11:38.558102   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:11:38.558107   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:11:38.558153   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:11:38.590676   86402 cri.go:89] found id: ""
	I1104 12:11:38.590704   86402 logs.go:282] 0 containers: []
	W1104 12:11:38.590715   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:11:38.590723   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:11:38.590780   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:11:38.623762   86402 cri.go:89] found id: ""
	I1104 12:11:38.623793   86402 logs.go:282] 0 containers: []
	W1104 12:11:38.623804   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:11:38.623811   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:11:38.623870   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:11:38.655918   86402 cri.go:89] found id: ""
	I1104 12:11:38.655947   86402 logs.go:282] 0 containers: []
	W1104 12:11:38.655958   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:11:38.655966   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:11:38.656028   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:11:38.691200   86402 cri.go:89] found id: ""
	I1104 12:11:38.691228   86402 logs.go:282] 0 containers: []
	W1104 12:11:38.691238   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:11:38.691245   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:11:38.691302   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:11:38.724725   86402 cri.go:89] found id: ""
	I1104 12:11:38.724748   86402 logs.go:282] 0 containers: []
	W1104 12:11:38.724756   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:11:38.724761   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:11:38.724819   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:11:38.756333   86402 cri.go:89] found id: ""
	I1104 12:11:38.756360   86402 logs.go:282] 0 containers: []
	W1104 12:11:38.756370   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:11:38.756381   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:11:38.756395   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:11:38.807722   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:11:38.807756   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:11:38.821055   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:11:38.821079   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:11:38.886629   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:11:38.886656   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:11:38.886671   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:11:38.960958   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:11:38.960999   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:11:41.503471   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:11:41.515994   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:11:41.516065   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:11:41.549936   86402 cri.go:89] found id: ""
	I1104 12:11:41.549960   86402 logs.go:282] 0 containers: []
	W1104 12:11:41.549968   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:11:41.549975   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:11:41.550033   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:11:41.584565   86402 cri.go:89] found id: ""
	I1104 12:11:41.584590   86402 logs.go:282] 0 containers: []
	W1104 12:11:41.584602   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:11:41.584610   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:11:41.584660   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:11:41.616427   86402 cri.go:89] found id: ""
	I1104 12:11:41.616450   86402 logs.go:282] 0 containers: []
	W1104 12:11:41.616458   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:11:41.616463   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:11:41.616510   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:11:41.650835   86402 cri.go:89] found id: ""
	I1104 12:11:41.650864   86402 logs.go:282] 0 containers: []
	W1104 12:11:41.650875   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:11:41.650882   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:11:41.650946   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:11:40.707926   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:43.207969   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:40.555616   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:42.555839   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:44.556749   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:41.351131   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:43.851925   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:41.685899   86402 cri.go:89] found id: ""
	I1104 12:11:41.685921   86402 logs.go:282] 0 containers: []
	W1104 12:11:41.685928   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:11:41.685934   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:11:41.685979   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:11:41.718730   86402 cri.go:89] found id: ""
	I1104 12:11:41.718757   86402 logs.go:282] 0 containers: []
	W1104 12:11:41.718773   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:11:41.718782   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:11:41.718837   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:11:41.748843   86402 cri.go:89] found id: ""
	I1104 12:11:41.748875   86402 logs.go:282] 0 containers: []
	W1104 12:11:41.748887   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:11:41.748895   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:11:41.748963   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:11:41.780225   86402 cri.go:89] found id: ""
	I1104 12:11:41.780251   86402 logs.go:282] 0 containers: []
	W1104 12:11:41.780260   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:11:41.780268   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:11:41.780285   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:11:41.830864   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:11:41.830893   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:11:41.844252   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:11:41.844279   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:11:41.908514   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:11:41.908542   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:11:41.908554   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:11:41.988545   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:11:41.988582   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:11:44.527641   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:11:44.540026   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:11:44.540108   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:11:44.574530   86402 cri.go:89] found id: ""
	I1104 12:11:44.574559   86402 logs.go:282] 0 containers: []
	W1104 12:11:44.574570   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:11:44.574577   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:11:44.574638   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:11:44.606073   86402 cri.go:89] found id: ""
	I1104 12:11:44.606103   86402 logs.go:282] 0 containers: []
	W1104 12:11:44.606114   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:11:44.606121   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:11:44.606185   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:11:44.639750   86402 cri.go:89] found id: ""
	I1104 12:11:44.639775   86402 logs.go:282] 0 containers: []
	W1104 12:11:44.639784   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:11:44.639792   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:11:44.639850   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:11:44.673528   86402 cri.go:89] found id: ""
	I1104 12:11:44.673557   86402 logs.go:282] 0 containers: []
	W1104 12:11:44.673565   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:11:44.673573   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:11:44.673625   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:11:44.705928   86402 cri.go:89] found id: ""
	I1104 12:11:44.705956   86402 logs.go:282] 0 containers: []
	W1104 12:11:44.705966   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:11:44.705973   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:11:44.706032   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:11:44.736779   86402 cri.go:89] found id: ""
	I1104 12:11:44.736811   86402 logs.go:282] 0 containers: []
	W1104 12:11:44.736822   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:11:44.736830   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:11:44.736886   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:11:44.769929   86402 cri.go:89] found id: ""
	I1104 12:11:44.769956   86402 logs.go:282] 0 containers: []
	W1104 12:11:44.769964   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:11:44.769970   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:11:44.770015   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:11:44.800818   86402 cri.go:89] found id: ""
	I1104 12:11:44.800846   86402 logs.go:282] 0 containers: []
	W1104 12:11:44.800855   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:11:44.800863   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:11:44.800873   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:11:44.853610   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:11:44.853641   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:11:44.866656   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:11:44.866683   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:11:44.936386   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:11:44.936412   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:11:44.936425   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:11:45.011789   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:11:45.011823   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:11:45.707030   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:47.707464   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:49.711329   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:46.557112   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:49.055647   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:46.351055   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:48.850134   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:50.851867   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:47.548672   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:11:47.563082   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:11:47.563157   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:11:47.598722   86402 cri.go:89] found id: ""
	I1104 12:11:47.598748   86402 logs.go:282] 0 containers: []
	W1104 12:11:47.598756   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:11:47.598762   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:11:47.598809   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:11:47.633376   86402 cri.go:89] found id: ""
	I1104 12:11:47.633412   86402 logs.go:282] 0 containers: []
	W1104 12:11:47.633421   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:11:47.633428   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:11:47.633486   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:11:47.666059   86402 cri.go:89] found id: ""
	I1104 12:11:47.666087   86402 logs.go:282] 0 containers: []
	W1104 12:11:47.666095   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:11:47.666101   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:11:47.666147   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:11:47.700659   86402 cri.go:89] found id: ""
	I1104 12:11:47.700690   86402 logs.go:282] 0 containers: []
	W1104 12:11:47.700704   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:11:47.700711   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:11:47.700771   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:11:47.732901   86402 cri.go:89] found id: ""
	I1104 12:11:47.732927   86402 logs.go:282] 0 containers: []
	W1104 12:11:47.732934   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:11:47.732940   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:11:47.732984   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:11:47.765371   86402 cri.go:89] found id: ""
	I1104 12:11:47.765398   86402 logs.go:282] 0 containers: []
	W1104 12:11:47.765418   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:11:47.765425   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:11:47.765487   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:11:47.797043   86402 cri.go:89] found id: ""
	I1104 12:11:47.797077   86402 logs.go:282] 0 containers: []
	W1104 12:11:47.797089   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:11:47.797096   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:11:47.797159   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:11:47.828140   86402 cri.go:89] found id: ""
	I1104 12:11:47.828172   86402 logs.go:282] 0 containers: []
	W1104 12:11:47.828184   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:11:47.828194   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:11:47.828208   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:11:47.911398   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:11:47.911434   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:11:47.948042   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:11:47.948071   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:11:47.999603   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:11:47.999638   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:11:48.013818   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:11:48.013856   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:11:48.082679   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:11:50.583325   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:11:50.595272   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:11:50.595346   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:11:50.630857   86402 cri.go:89] found id: ""
	I1104 12:11:50.630883   86402 logs.go:282] 0 containers: []
	W1104 12:11:50.630892   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:11:50.630899   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:11:50.630965   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:11:50.663025   86402 cri.go:89] found id: ""
	I1104 12:11:50.663049   86402 logs.go:282] 0 containers: []
	W1104 12:11:50.663058   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:11:50.663063   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:11:50.663109   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:11:50.695371   86402 cri.go:89] found id: ""
	I1104 12:11:50.695402   86402 logs.go:282] 0 containers: []
	W1104 12:11:50.695413   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:11:50.695421   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:11:50.695480   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:11:50.728805   86402 cri.go:89] found id: ""
	I1104 12:11:50.728827   86402 logs.go:282] 0 containers: []
	W1104 12:11:50.728836   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:11:50.728841   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:11:50.728902   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:11:50.762837   86402 cri.go:89] found id: ""
	I1104 12:11:50.762868   86402 logs.go:282] 0 containers: []
	W1104 12:11:50.762878   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:11:50.762885   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:11:50.762941   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:11:50.802531   86402 cri.go:89] found id: ""
	I1104 12:11:50.802556   86402 logs.go:282] 0 containers: []
	W1104 12:11:50.802564   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:11:50.802569   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:11:50.802613   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:11:50.835124   86402 cri.go:89] found id: ""
	I1104 12:11:50.835161   86402 logs.go:282] 0 containers: []
	W1104 12:11:50.835173   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:11:50.835180   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:11:50.835234   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:11:50.869265   86402 cri.go:89] found id: ""
	I1104 12:11:50.869295   86402 logs.go:282] 0 containers: []
	W1104 12:11:50.869308   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:11:50.869318   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:11:50.869330   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:11:50.919371   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:11:50.919405   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:11:50.932165   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:11:50.932195   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:11:50.993935   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:11:50.993959   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:11:50.993972   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:11:51.071816   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:11:51.071848   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:11:52.208101   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:54.707467   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:51.056129   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:53.057025   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:53.349902   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:55.350302   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:53.608347   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:11:53.620842   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:11:53.620902   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:11:53.652870   86402 cri.go:89] found id: ""
	I1104 12:11:53.652896   86402 logs.go:282] 0 containers: []
	W1104 12:11:53.652909   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:11:53.652917   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:11:53.652980   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:11:53.684842   86402 cri.go:89] found id: ""
	I1104 12:11:53.684878   86402 logs.go:282] 0 containers: []
	W1104 12:11:53.684889   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:11:53.684897   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:11:53.684956   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:11:53.722505   86402 cri.go:89] found id: ""
	I1104 12:11:53.722531   86402 logs.go:282] 0 containers: []
	W1104 12:11:53.722539   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:11:53.722544   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:11:53.722603   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:11:53.753831   86402 cri.go:89] found id: ""
	I1104 12:11:53.753858   86402 logs.go:282] 0 containers: []
	W1104 12:11:53.753866   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:11:53.753872   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:11:53.753918   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:11:53.786112   86402 cri.go:89] found id: ""
	I1104 12:11:53.786139   86402 logs.go:282] 0 containers: []
	W1104 12:11:53.786150   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:11:53.786157   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:11:53.786218   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:11:53.820446   86402 cri.go:89] found id: ""
	I1104 12:11:53.820472   86402 logs.go:282] 0 containers: []
	W1104 12:11:53.820487   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:11:53.820493   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:11:53.820552   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:11:53.855631   86402 cri.go:89] found id: ""
	I1104 12:11:53.855655   86402 logs.go:282] 0 containers: []
	W1104 12:11:53.855665   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:11:53.855673   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:11:53.855727   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:11:53.887953   86402 cri.go:89] found id: ""
	I1104 12:11:53.887983   86402 logs.go:282] 0 containers: []
	W1104 12:11:53.887994   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:11:53.888004   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:11:53.888023   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:11:53.954408   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:11:53.954430   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:11:53.954442   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:11:54.028549   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:11:54.028584   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:11:54.070869   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:11:54.070895   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:11:54.123676   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:11:54.123715   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:11:56.639480   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:11:56.652651   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:11:56.652709   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:11:56.708211   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:59.207617   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:55.555992   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:58.056271   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:57.350474   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:59.850830   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:56.689397   86402 cri.go:89] found id: ""
	I1104 12:11:56.689425   86402 logs.go:282] 0 containers: []
	W1104 12:11:56.689443   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:11:56.689452   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:11:56.689517   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:11:56.725197   86402 cri.go:89] found id: ""
	I1104 12:11:56.725234   86402 logs.go:282] 0 containers: []
	W1104 12:11:56.725246   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:11:56.725254   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:11:56.725308   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:11:56.759043   86402 cri.go:89] found id: ""
	I1104 12:11:56.759073   86402 logs.go:282] 0 containers: []
	W1104 12:11:56.759084   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:11:56.759090   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:11:56.759141   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:11:56.792268   86402 cri.go:89] found id: ""
	I1104 12:11:56.792296   86402 logs.go:282] 0 containers: []
	W1104 12:11:56.792307   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:11:56.792314   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:11:56.792375   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:11:56.823668   86402 cri.go:89] found id: ""
	I1104 12:11:56.823692   86402 logs.go:282] 0 containers: []
	W1104 12:11:56.823702   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:11:56.823709   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:11:56.823769   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:11:56.861812   86402 cri.go:89] found id: ""
	I1104 12:11:56.861837   86402 logs.go:282] 0 containers: []
	W1104 12:11:56.861845   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:11:56.861851   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:11:56.861902   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:11:56.894037   86402 cri.go:89] found id: ""
	I1104 12:11:56.894067   86402 logs.go:282] 0 containers: []
	W1104 12:11:56.894075   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:11:56.894080   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:11:56.894133   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:11:56.925603   86402 cri.go:89] found id: ""
	I1104 12:11:56.925634   86402 logs.go:282] 0 containers: []
	W1104 12:11:56.925646   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:11:56.925656   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:11:56.925669   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:11:56.961504   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:11:56.961530   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:11:57.012666   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:11:57.012700   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:11:57.025887   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:11:57.025921   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:11:57.097219   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:11:57.097257   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:11:57.097272   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:11:59.671179   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:11:59.684642   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:11:59.684718   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:11:59.721599   86402 cri.go:89] found id: ""
	I1104 12:11:59.721622   86402 logs.go:282] 0 containers: []
	W1104 12:11:59.721631   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:11:59.721640   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:11:59.721693   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:11:59.757423   86402 cri.go:89] found id: ""
	I1104 12:11:59.757453   86402 logs.go:282] 0 containers: []
	W1104 12:11:59.757461   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:11:59.757466   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:11:59.757525   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:11:59.794036   86402 cri.go:89] found id: ""
	I1104 12:11:59.794071   86402 logs.go:282] 0 containers: []
	W1104 12:11:59.794081   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:11:59.794089   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:11:59.794148   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:11:59.830098   86402 cri.go:89] found id: ""
	I1104 12:11:59.830123   86402 logs.go:282] 0 containers: []
	W1104 12:11:59.830134   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:11:59.830142   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:11:59.830207   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:11:59.867791   86402 cri.go:89] found id: ""
	I1104 12:11:59.867815   86402 logs.go:282] 0 containers: []
	W1104 12:11:59.867823   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:11:59.867828   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:11:59.867879   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:11:59.903579   86402 cri.go:89] found id: ""
	I1104 12:11:59.903607   86402 logs.go:282] 0 containers: []
	W1104 12:11:59.903614   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:11:59.903620   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:11:59.903667   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:11:59.940955   86402 cri.go:89] found id: ""
	I1104 12:11:59.940977   86402 logs.go:282] 0 containers: []
	W1104 12:11:59.940984   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:11:59.940989   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:11:59.941034   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:11:59.977626   86402 cri.go:89] found id: ""
	I1104 12:11:59.977653   86402 logs.go:282] 0 containers: []
	W1104 12:11:59.977663   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:11:59.977674   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:11:59.977687   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:12:00.032280   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:12:00.032312   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:12:00.045965   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:12:00.045991   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:12:00.123578   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:12:00.123608   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:12:00.123625   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:12:00.208309   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:12:00.208340   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:12:01.707661   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:04.207140   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:00.555683   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:02.555797   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:04.556558   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:01.851646   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:01.851680   85759 pod_ready.go:82] duration metric: took 4m0.007179751s for pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace to be "Ready" ...
	E1104 12:12:01.851691   85759 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1104 12:12:01.851701   85759 pod_ready.go:39] duration metric: took 4m4.052369029s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1104 12:12:01.851721   85759 api_server.go:52] waiting for apiserver process to appear ...
	I1104 12:12:01.851752   85759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:12:01.851805   85759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:12:01.891029   85759 cri.go:89] found id: "6e7999c6e5a24f7d8706b6722e1fe66996ecec4550d4a901729cac1f3f108f28"
	I1104 12:12:01.891056   85759 cri.go:89] found id: ""
	I1104 12:12:01.891066   85759 logs.go:282] 1 containers: [6e7999c6e5a24f7d8706b6722e1fe66996ecec4550d4a901729cac1f3f108f28]
	I1104 12:12:01.891128   85759 ssh_runner.go:195] Run: which crictl
	I1104 12:12:01.895134   85759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:12:01.895243   85759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:12:01.928058   85759 cri.go:89] found id: "5b575c045ea6e47b4e5a1384b6b473570efefa6f645c9c75d1eedf037230bc06"
	I1104 12:12:01.928081   85759 cri.go:89] found id: ""
	I1104 12:12:01.928089   85759 logs.go:282] 1 containers: [5b575c045ea6e47b4e5a1384b6b473570efefa6f645c9c75d1eedf037230bc06]
	I1104 12:12:01.928134   85759 ssh_runner.go:195] Run: which crictl
	I1104 12:12:01.931906   85759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:12:01.931974   85759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:12:01.972023   85759 cri.go:89] found id: "d1f0c1ed5e8911dccb619576be93e1623936a2b05ccc1e80d52ffe8ba1292d27"
	I1104 12:12:01.972052   85759 cri.go:89] found id: ""
	I1104 12:12:01.972062   85759 logs.go:282] 1 containers: [d1f0c1ed5e8911dccb619576be93e1623936a2b05ccc1e80d52ffe8ba1292d27]
	I1104 12:12:01.972116   85759 ssh_runner.go:195] Run: which crictl
	I1104 12:12:01.980811   85759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:12:01.980894   85759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:12:02.024013   85759 cri.go:89] found id: "a5a0cb5f09f9920c57a05bcc9c16cdcab6806c42458b625d4d075a9d1bc3f80f"
	I1104 12:12:02.024038   85759 cri.go:89] found id: ""
	I1104 12:12:02.024046   85759 logs.go:282] 1 containers: [a5a0cb5f09f9920c57a05bcc9c16cdcab6806c42458b625d4d075a9d1bc3f80f]
	I1104 12:12:02.024108   85759 ssh_runner.go:195] Run: which crictl
	I1104 12:12:02.028571   85759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:12:02.028641   85759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:12:02.063545   85759 cri.go:89] found id: "512d8563ff2ef5d1f8021fa8815d00d2705c8df3b079a23ee6fc909af1c980f0"
	I1104 12:12:02.063570   85759 cri.go:89] found id: ""
	I1104 12:12:02.063580   85759 logs.go:282] 1 containers: [512d8563ff2ef5d1f8021fa8815d00d2705c8df3b079a23ee6fc909af1c980f0]
	I1104 12:12:02.063635   85759 ssh_runner.go:195] Run: which crictl
	I1104 12:12:02.067582   85759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:12:02.067652   85759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:12:02.100954   85759 cri.go:89] found id: "5751adaa2cf784f7619b281e11898b312ecc8186b36029e9e8a3b8e484cd703b"
	I1104 12:12:02.100979   85759 cri.go:89] found id: ""
	I1104 12:12:02.100989   85759 logs.go:282] 1 containers: [5751adaa2cf784f7619b281e11898b312ecc8186b36029e9e8a3b8e484cd703b]
	I1104 12:12:02.101038   85759 ssh_runner.go:195] Run: which crictl
	I1104 12:12:02.105121   85759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:12:02.105182   85759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:12:02.137206   85759 cri.go:89] found id: ""
	I1104 12:12:02.137249   85759 logs.go:282] 0 containers: []
	W1104 12:12:02.137260   85759 logs.go:284] No container was found matching "kindnet"
	I1104 12:12:02.137268   85759 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1104 12:12:02.137317   85759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1104 12:12:02.171499   85759 cri.go:89] found id: "95a9eb50a127a39e6276205bf440bcbc662ba25b24e029dc1a667f48f8481dde"
	I1104 12:12:02.171520   85759 cri.go:89] found id: "c7558f4e108716f1710271ea75be748ffd928b609788942a3658f0d2237ebcc7"
	I1104 12:12:02.171526   85759 cri.go:89] found id: ""
	I1104 12:12:02.171535   85759 logs.go:282] 2 containers: [95a9eb50a127a39e6276205bf440bcbc662ba25b24e029dc1a667f48f8481dde c7558f4e108716f1710271ea75be748ffd928b609788942a3658f0d2237ebcc7]
	I1104 12:12:02.171587   85759 ssh_runner.go:195] Run: which crictl
	I1104 12:12:02.175646   85759 ssh_runner.go:195] Run: which crictl
	I1104 12:12:02.179066   85759 logs.go:123] Gathering logs for kubelet ...
	I1104 12:12:02.179084   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:12:02.249087   85759 logs.go:123] Gathering logs for dmesg ...
	I1104 12:12:02.249126   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:12:02.262666   85759 logs.go:123] Gathering logs for kube-apiserver [6e7999c6e5a24f7d8706b6722e1fe66996ecec4550d4a901729cac1f3f108f28] ...
	I1104 12:12:02.262692   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6e7999c6e5a24f7d8706b6722e1fe66996ecec4550d4a901729cac1f3f108f28"
	I1104 12:12:02.316826   85759 logs.go:123] Gathering logs for kube-scheduler [a5a0cb5f09f9920c57a05bcc9c16cdcab6806c42458b625d4d075a9d1bc3f80f] ...
	I1104 12:12:02.316856   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a5a0cb5f09f9920c57a05bcc9c16cdcab6806c42458b625d4d075a9d1bc3f80f"
	I1104 12:12:02.351741   85759 logs.go:123] Gathering logs for kube-controller-manager [5751adaa2cf784f7619b281e11898b312ecc8186b36029e9e8a3b8e484cd703b] ...
	I1104 12:12:02.351766   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5751adaa2cf784f7619b281e11898b312ecc8186b36029e9e8a3b8e484cd703b"
	I1104 12:12:02.400377   85759 logs.go:123] Gathering logs for storage-provisioner [c7558f4e108716f1710271ea75be748ffd928b609788942a3658f0d2237ebcc7] ...
	I1104 12:12:02.400409   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c7558f4e108716f1710271ea75be748ffd928b609788942a3658f0d2237ebcc7"
	I1104 12:12:02.448029   85759 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:12:02.448059   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:12:02.975331   85759 logs.go:123] Gathering logs for container status ...
	I1104 12:12:02.975367   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:12:03.026697   85759 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:12:03.026739   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1104 12:12:03.140704   85759 logs.go:123] Gathering logs for etcd [5b575c045ea6e47b4e5a1384b6b473570efefa6f645c9c75d1eedf037230bc06] ...
	I1104 12:12:03.140753   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b575c045ea6e47b4e5a1384b6b473570efefa6f645c9c75d1eedf037230bc06"
	I1104 12:12:03.192394   85759 logs.go:123] Gathering logs for coredns [d1f0c1ed5e8911dccb619576be93e1623936a2b05ccc1e80d52ffe8ba1292d27] ...
	I1104 12:12:03.192427   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d1f0c1ed5e8911dccb619576be93e1623936a2b05ccc1e80d52ffe8ba1292d27"
	I1104 12:12:03.236040   85759 logs.go:123] Gathering logs for kube-proxy [512d8563ff2ef5d1f8021fa8815d00d2705c8df3b079a23ee6fc909af1c980f0] ...
	I1104 12:12:03.236071   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 512d8563ff2ef5d1f8021fa8815d00d2705c8df3b079a23ee6fc909af1c980f0"
	I1104 12:12:03.275166   85759 logs.go:123] Gathering logs for storage-provisioner [95a9eb50a127a39e6276205bf440bcbc662ba25b24e029dc1a667f48f8481dde] ...
	I1104 12:12:03.275194   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 95a9eb50a127a39e6276205bf440bcbc662ba25b24e029dc1a667f48f8481dde"
	I1104 12:12:05.813333   85759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:12:05.827697   85759 api_server.go:72] duration metric: took 4m15.741105379s to wait for apiserver process to appear ...
	I1104 12:12:05.827725   85759 api_server.go:88] waiting for apiserver healthz status ...
	I1104 12:12:05.827763   85759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:12:05.827826   85759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:12:05.869552   85759 cri.go:89] found id: "6e7999c6e5a24f7d8706b6722e1fe66996ecec4550d4a901729cac1f3f108f28"
	I1104 12:12:05.869580   85759 cri.go:89] found id: ""
	I1104 12:12:05.869590   85759 logs.go:282] 1 containers: [6e7999c6e5a24f7d8706b6722e1fe66996ecec4550d4a901729cac1f3f108f28]
	I1104 12:12:05.869642   85759 ssh_runner.go:195] Run: which crictl
	I1104 12:12:05.873890   85759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:12:05.873954   85759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:12:05.914131   85759 cri.go:89] found id: "5b575c045ea6e47b4e5a1384b6b473570efefa6f645c9c75d1eedf037230bc06"
	I1104 12:12:05.914153   85759 cri.go:89] found id: ""
	I1104 12:12:05.914161   85759 logs.go:282] 1 containers: [5b575c045ea6e47b4e5a1384b6b473570efefa6f645c9c75d1eedf037230bc06]
	I1104 12:12:05.914216   85759 ssh_runner.go:195] Run: which crictl
	I1104 12:12:05.920980   85759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:12:05.921042   85759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:12:05.960930   85759 cri.go:89] found id: "d1f0c1ed5e8911dccb619576be93e1623936a2b05ccc1e80d52ffe8ba1292d27"
	I1104 12:12:05.960953   85759 cri.go:89] found id: ""
	I1104 12:12:05.960962   85759 logs.go:282] 1 containers: [d1f0c1ed5e8911dccb619576be93e1623936a2b05ccc1e80d52ffe8ba1292d27]
	I1104 12:12:05.961018   85759 ssh_runner.go:195] Run: which crictl
	I1104 12:12:05.965591   85759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:12:05.965653   85759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:12:06.000500   85759 cri.go:89] found id: "a5a0cb5f09f9920c57a05bcc9c16cdcab6806c42458b625d4d075a9d1bc3f80f"
	I1104 12:12:06.000520   85759 cri.go:89] found id: ""
	I1104 12:12:06.000526   85759 logs.go:282] 1 containers: [a5a0cb5f09f9920c57a05bcc9c16cdcab6806c42458b625d4d075a9d1bc3f80f]
	I1104 12:12:06.000576   85759 ssh_runner.go:195] Run: which crictl
	I1104 12:12:06.004775   85759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:12:06.004835   85759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:12:06.042011   85759 cri.go:89] found id: "512d8563ff2ef5d1f8021fa8815d00d2705c8df3b079a23ee6fc909af1c980f0"
	I1104 12:12:06.042032   85759 cri.go:89] found id: ""
	I1104 12:12:06.042041   85759 logs.go:282] 1 containers: [512d8563ff2ef5d1f8021fa8815d00d2705c8df3b079a23ee6fc909af1c980f0]
	I1104 12:12:06.042102   85759 ssh_runner.go:195] Run: which crictl
	I1104 12:12:06.047885   85759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:12:06.047952   85759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:12:06.084318   85759 cri.go:89] found id: "5751adaa2cf784f7619b281e11898b312ecc8186b36029e9e8a3b8e484cd703b"
	I1104 12:12:06.084341   85759 cri.go:89] found id: ""
	I1104 12:12:06.084349   85759 logs.go:282] 1 containers: [5751adaa2cf784f7619b281e11898b312ecc8186b36029e9e8a3b8e484cd703b]
	I1104 12:12:06.084410   85759 ssh_runner.go:195] Run: which crictl
	I1104 12:12:06.088487   85759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:12:06.088564   85759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:12:06.127693   85759 cri.go:89] found id: ""
	I1104 12:12:06.127721   85759 logs.go:282] 0 containers: []
	W1104 12:12:06.127730   85759 logs.go:284] No container was found matching "kindnet"
	I1104 12:12:06.127736   85759 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1104 12:12:06.127780   85759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1104 12:12:06.165173   85759 cri.go:89] found id: "95a9eb50a127a39e6276205bf440bcbc662ba25b24e029dc1a667f48f8481dde"
	I1104 12:12:06.165199   85759 cri.go:89] found id: "c7558f4e108716f1710271ea75be748ffd928b609788942a3658f0d2237ebcc7"
	I1104 12:12:06.165206   85759 cri.go:89] found id: ""
	I1104 12:12:06.165215   85759 logs.go:282] 2 containers: [95a9eb50a127a39e6276205bf440bcbc662ba25b24e029dc1a667f48f8481dde c7558f4e108716f1710271ea75be748ffd928b609788942a3658f0d2237ebcc7]
	I1104 12:12:06.165302   85759 ssh_runner.go:195] Run: which crictl
	I1104 12:12:06.169479   85759 ssh_runner.go:195] Run: which crictl
	I1104 12:12:06.173154   85759 logs.go:123] Gathering logs for kubelet ...
	I1104 12:12:06.173177   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:12:02.746303   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:12:02.758892   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:12:02.758967   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:12:02.792775   86402 cri.go:89] found id: ""
	I1104 12:12:02.792803   86402 logs.go:282] 0 containers: []
	W1104 12:12:02.792815   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:12:02.792822   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:12:02.792878   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:12:02.831073   86402 cri.go:89] found id: ""
	I1104 12:12:02.831097   86402 logs.go:282] 0 containers: []
	W1104 12:12:02.831108   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:12:02.831115   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:12:02.831174   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:12:02.863530   86402 cri.go:89] found id: ""
	I1104 12:12:02.863557   86402 logs.go:282] 0 containers: []
	W1104 12:12:02.863568   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:12:02.863574   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:12:02.863641   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:12:02.894894   86402 cri.go:89] found id: ""
	I1104 12:12:02.894924   86402 logs.go:282] 0 containers: []
	W1104 12:12:02.894934   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:12:02.894942   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:12:02.894996   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:12:02.930052   86402 cri.go:89] found id: ""
	I1104 12:12:02.930081   86402 logs.go:282] 0 containers: []
	W1104 12:12:02.930092   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:12:02.930100   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:12:02.930160   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:12:02.964503   86402 cri.go:89] found id: ""
	I1104 12:12:02.964532   86402 logs.go:282] 0 containers: []
	W1104 12:12:02.964544   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:12:02.964551   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:12:02.964610   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:12:02.998065   86402 cri.go:89] found id: ""
	I1104 12:12:02.998088   86402 logs.go:282] 0 containers: []
	W1104 12:12:02.998096   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:12:02.998102   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:12:02.998148   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:12:03.033579   86402 cri.go:89] found id: ""
	I1104 12:12:03.033604   86402 logs.go:282] 0 containers: []
	W1104 12:12:03.033613   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:12:03.033621   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:12:03.033630   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:12:03.086215   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:12:03.086249   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:12:03.100100   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:12:03.100136   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:12:03.168116   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:12:03.168150   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:12:03.168165   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:12:03.253608   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:12:03.253642   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:12:05.792913   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:12:05.806494   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:12:05.806568   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:12:05.854379   86402 cri.go:89] found id: ""
	I1104 12:12:05.854406   86402 logs.go:282] 0 containers: []
	W1104 12:12:05.854417   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:12:05.854425   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:12:05.854503   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:12:05.886144   86402 cri.go:89] found id: ""
	I1104 12:12:05.886169   86402 logs.go:282] 0 containers: []
	W1104 12:12:05.886179   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:12:05.886186   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:12:05.886248   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:12:05.917462   86402 cri.go:89] found id: ""
	I1104 12:12:05.917482   86402 logs.go:282] 0 containers: []
	W1104 12:12:05.917492   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:12:05.917499   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:12:05.917550   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:12:05.954065   86402 cri.go:89] found id: ""
	I1104 12:12:05.954099   86402 logs.go:282] 0 containers: []
	W1104 12:12:05.954110   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:12:05.954120   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:12:05.954194   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:12:05.990935   86402 cri.go:89] found id: ""
	I1104 12:12:05.990966   86402 logs.go:282] 0 containers: []
	W1104 12:12:05.990977   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:12:05.990984   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:12:05.991050   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:12:06.032175   86402 cri.go:89] found id: ""
	I1104 12:12:06.032198   86402 logs.go:282] 0 containers: []
	W1104 12:12:06.032206   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:12:06.032211   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:12:06.032269   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:12:06.069215   86402 cri.go:89] found id: ""
	I1104 12:12:06.069262   86402 logs.go:282] 0 containers: []
	W1104 12:12:06.069275   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:12:06.069282   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:12:06.069340   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:12:06.103065   86402 cri.go:89] found id: ""
	I1104 12:12:06.103106   86402 logs.go:282] 0 containers: []
	W1104 12:12:06.103117   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:12:06.103127   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:12:06.103145   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:12:06.184111   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:12:06.184135   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:12:06.184149   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:12:06.272720   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:12:06.272760   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:12:06.315596   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:12:06.315636   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:12:06.376054   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:12:06.376110   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:12:06.214237   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:08.707098   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:07.056531   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:09.056763   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:06.252295   85759 logs.go:123] Gathering logs for kube-apiserver [6e7999c6e5a24f7d8706b6722e1fe66996ecec4550d4a901729cac1f3f108f28] ...
	I1104 12:12:06.252326   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6e7999c6e5a24f7d8706b6722e1fe66996ecec4550d4a901729cac1f3f108f28"
	I1104 12:12:06.302739   85759 logs.go:123] Gathering logs for etcd [5b575c045ea6e47b4e5a1384b6b473570efefa6f645c9c75d1eedf037230bc06] ...
	I1104 12:12:06.302769   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b575c045ea6e47b4e5a1384b6b473570efefa6f645c9c75d1eedf037230bc06"
	I1104 12:12:06.361279   85759 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:12:06.361307   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:12:06.811335   85759 logs.go:123] Gathering logs for container status ...
	I1104 12:12:06.811380   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:12:06.851356   85759 logs.go:123] Gathering logs for storage-provisioner [95a9eb50a127a39e6276205bf440bcbc662ba25b24e029dc1a667f48f8481dde] ...
	I1104 12:12:06.851387   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 95a9eb50a127a39e6276205bf440bcbc662ba25b24e029dc1a667f48f8481dde"
	I1104 12:12:06.888753   85759 logs.go:123] Gathering logs for storage-provisioner [c7558f4e108716f1710271ea75be748ffd928b609788942a3658f0d2237ebcc7] ...
	I1104 12:12:06.888789   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c7558f4e108716f1710271ea75be748ffd928b609788942a3658f0d2237ebcc7"
	I1104 12:12:06.922406   85759 logs.go:123] Gathering logs for dmesg ...
	I1104 12:12:06.922438   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:12:06.935028   85759 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:12:06.935057   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1104 12:12:07.033975   85759 logs.go:123] Gathering logs for coredns [d1f0c1ed5e8911dccb619576be93e1623936a2b05ccc1e80d52ffe8ba1292d27] ...
	I1104 12:12:07.034019   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d1f0c1ed5e8911dccb619576be93e1623936a2b05ccc1e80d52ffe8ba1292d27"
	I1104 12:12:07.068680   85759 logs.go:123] Gathering logs for kube-scheduler [a5a0cb5f09f9920c57a05bcc9c16cdcab6806c42458b625d4d075a9d1bc3f80f] ...
	I1104 12:12:07.068706   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a5a0cb5f09f9920c57a05bcc9c16cdcab6806c42458b625d4d075a9d1bc3f80f"
	I1104 12:12:07.105150   85759 logs.go:123] Gathering logs for kube-proxy [512d8563ff2ef5d1f8021fa8815d00d2705c8df3b079a23ee6fc909af1c980f0] ...
	I1104 12:12:07.105182   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 512d8563ff2ef5d1f8021fa8815d00d2705c8df3b079a23ee6fc909af1c980f0"
	I1104 12:12:07.139258   85759 logs.go:123] Gathering logs for kube-controller-manager [5751adaa2cf784f7619b281e11898b312ecc8186b36029e9e8a3b8e484cd703b] ...
	I1104 12:12:07.139290   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5751adaa2cf784f7619b281e11898b312ecc8186b36029e9e8a3b8e484cd703b"
	I1104 12:12:09.695630   85759 api_server.go:253] Checking apiserver healthz at https://192.168.39.47:8443/healthz ...
	I1104 12:12:09.701156   85759 api_server.go:279] https://192.168.39.47:8443/healthz returned 200:
	ok
	I1104 12:12:09.702414   85759 api_server.go:141] control plane version: v1.31.2
	I1104 12:12:09.702441   85759 api_server.go:131] duration metric: took 3.874707524s to wait for apiserver health ...
	I1104 12:12:09.702451   85759 system_pods.go:43] waiting for kube-system pods to appear ...
	I1104 12:12:09.702475   85759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:12:09.702530   85759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:12:09.736867   85759 cri.go:89] found id: "6e7999c6e5a24f7d8706b6722e1fe66996ecec4550d4a901729cac1f3f108f28"
	I1104 12:12:09.736891   85759 cri.go:89] found id: ""
	I1104 12:12:09.736901   85759 logs.go:282] 1 containers: [6e7999c6e5a24f7d8706b6722e1fe66996ecec4550d4a901729cac1f3f108f28]
	I1104 12:12:09.736956   85759 ssh_runner.go:195] Run: which crictl
	I1104 12:12:09.741108   85759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:12:09.741176   85759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:12:09.780460   85759 cri.go:89] found id: "5b575c045ea6e47b4e5a1384b6b473570efefa6f645c9c75d1eedf037230bc06"
	I1104 12:12:09.780483   85759 cri.go:89] found id: ""
	I1104 12:12:09.780490   85759 logs.go:282] 1 containers: [5b575c045ea6e47b4e5a1384b6b473570efefa6f645c9c75d1eedf037230bc06]
	I1104 12:12:09.780535   85759 ssh_runner.go:195] Run: which crictl
	I1104 12:12:09.784698   85759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:12:09.784756   85759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:12:09.823042   85759 cri.go:89] found id: "d1f0c1ed5e8911dccb619576be93e1623936a2b05ccc1e80d52ffe8ba1292d27"
	I1104 12:12:09.823059   85759 cri.go:89] found id: ""
	I1104 12:12:09.823068   85759 logs.go:282] 1 containers: [d1f0c1ed5e8911dccb619576be93e1623936a2b05ccc1e80d52ffe8ba1292d27]
	I1104 12:12:09.823123   85759 ssh_runner.go:195] Run: which crictl
	I1104 12:12:09.826750   85759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:12:09.826803   85759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:12:09.859148   85759 cri.go:89] found id: "a5a0cb5f09f9920c57a05bcc9c16cdcab6806c42458b625d4d075a9d1bc3f80f"
	I1104 12:12:09.859175   85759 cri.go:89] found id: ""
	I1104 12:12:09.859185   85759 logs.go:282] 1 containers: [a5a0cb5f09f9920c57a05bcc9c16cdcab6806c42458b625d4d075a9d1bc3f80f]
	I1104 12:12:09.859245   85759 ssh_runner.go:195] Run: which crictl
	I1104 12:12:09.863676   85759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:12:09.863739   85759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:12:09.901737   85759 cri.go:89] found id: "512d8563ff2ef5d1f8021fa8815d00d2705c8df3b079a23ee6fc909af1c980f0"
	I1104 12:12:09.901766   85759 cri.go:89] found id: ""
	I1104 12:12:09.901783   85759 logs.go:282] 1 containers: [512d8563ff2ef5d1f8021fa8815d00d2705c8df3b079a23ee6fc909af1c980f0]
	I1104 12:12:09.901843   85759 ssh_runner.go:195] Run: which crictl
	I1104 12:12:09.905931   85759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:12:09.905993   85759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:12:09.942617   85759 cri.go:89] found id: "5751adaa2cf784f7619b281e11898b312ecc8186b36029e9e8a3b8e484cd703b"
	I1104 12:12:09.942637   85759 cri.go:89] found id: ""
	I1104 12:12:09.942644   85759 logs.go:282] 1 containers: [5751adaa2cf784f7619b281e11898b312ecc8186b36029e9e8a3b8e484cd703b]
	I1104 12:12:09.942687   85759 ssh_runner.go:195] Run: which crictl
	I1104 12:12:09.946420   85759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:12:09.946481   85759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:12:09.984891   85759 cri.go:89] found id: ""
	I1104 12:12:09.984921   85759 logs.go:282] 0 containers: []
	W1104 12:12:09.984932   85759 logs.go:284] No container was found matching "kindnet"
	I1104 12:12:09.984939   85759 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1104 12:12:09.985000   85759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1104 12:12:10.018332   85759 cri.go:89] found id: "95a9eb50a127a39e6276205bf440bcbc662ba25b24e029dc1a667f48f8481dde"
	I1104 12:12:10.018357   85759 cri.go:89] found id: "c7558f4e108716f1710271ea75be748ffd928b609788942a3658f0d2237ebcc7"
	I1104 12:12:10.018363   85759 cri.go:89] found id: ""
	I1104 12:12:10.018374   85759 logs.go:282] 2 containers: [95a9eb50a127a39e6276205bf440bcbc662ba25b24e029dc1a667f48f8481dde c7558f4e108716f1710271ea75be748ffd928b609788942a3658f0d2237ebcc7]
	I1104 12:12:10.018434   85759 ssh_runner.go:195] Run: which crictl
	I1104 12:12:10.022995   85759 ssh_runner.go:195] Run: which crictl
	I1104 12:12:10.026853   85759 logs.go:123] Gathering logs for etcd [5b575c045ea6e47b4e5a1384b6b473570efefa6f645c9c75d1eedf037230bc06] ...
	I1104 12:12:10.026878   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b575c045ea6e47b4e5a1384b6b473570efefa6f645c9c75d1eedf037230bc06"
	I1104 12:12:10.083384   85759 logs.go:123] Gathering logs for kube-controller-manager [5751adaa2cf784f7619b281e11898b312ecc8186b36029e9e8a3b8e484cd703b] ...
	I1104 12:12:10.083421   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5751adaa2cf784f7619b281e11898b312ecc8186b36029e9e8a3b8e484cd703b"
	I1104 12:12:10.136576   85759 logs.go:123] Gathering logs for storage-provisioner [95a9eb50a127a39e6276205bf440bcbc662ba25b24e029dc1a667f48f8481dde] ...
	I1104 12:12:10.136608   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 95a9eb50a127a39e6276205bf440bcbc662ba25b24e029dc1a667f48f8481dde"
	I1104 12:12:10.182808   85759 logs.go:123] Gathering logs for storage-provisioner [c7558f4e108716f1710271ea75be748ffd928b609788942a3658f0d2237ebcc7] ...
	I1104 12:12:10.182837   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c7558f4e108716f1710271ea75be748ffd928b609788942a3658f0d2237ebcc7"
	I1104 12:12:10.217017   85759 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:12:10.217047   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:12:10.598972   85759 logs.go:123] Gathering logs for container status ...
	I1104 12:12:10.599010   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:12:10.638827   85759 logs.go:123] Gathering logs for dmesg ...
	I1104 12:12:10.638868   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:12:10.652880   85759 logs.go:123] Gathering logs for kube-apiserver [6e7999c6e5a24f7d8706b6722e1fe66996ecec4550d4a901729cac1f3f108f28] ...
	I1104 12:12:10.652923   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6e7999c6e5a24f7d8706b6722e1fe66996ecec4550d4a901729cac1f3f108f28"
	I1104 12:12:10.700645   85759 logs.go:123] Gathering logs for coredns [d1f0c1ed5e8911dccb619576be93e1623936a2b05ccc1e80d52ffe8ba1292d27] ...
	I1104 12:12:10.700675   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d1f0c1ed5e8911dccb619576be93e1623936a2b05ccc1e80d52ffe8ba1292d27"
	I1104 12:12:10.734860   85759 logs.go:123] Gathering logs for kube-scheduler [a5a0cb5f09f9920c57a05bcc9c16cdcab6806c42458b625d4d075a9d1bc3f80f] ...
	I1104 12:12:10.734890   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a5a0cb5f09f9920c57a05bcc9c16cdcab6806c42458b625d4d075a9d1bc3f80f"
	I1104 12:12:10.774613   85759 logs.go:123] Gathering logs for kube-proxy [512d8563ff2ef5d1f8021fa8815d00d2705c8df3b079a23ee6fc909af1c980f0] ...
	I1104 12:12:10.774647   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 512d8563ff2ef5d1f8021fa8815d00d2705c8df3b079a23ee6fc909af1c980f0"
	I1104 12:12:10.808375   85759 logs.go:123] Gathering logs for kubelet ...
	I1104 12:12:10.808403   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:12:10.876130   85759 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:12:10.876165   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1104 12:12:08.890463   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:12:08.904272   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:12:08.904354   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:12:08.935677   86402 cri.go:89] found id: ""
	I1104 12:12:08.935701   86402 logs.go:282] 0 containers: []
	W1104 12:12:08.935710   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:12:08.935715   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:12:08.935761   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:12:08.966969   86402 cri.go:89] found id: ""
	I1104 12:12:08.966993   86402 logs.go:282] 0 containers: []
	W1104 12:12:08.967004   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:12:08.967011   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:12:08.967072   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:12:08.998753   86402 cri.go:89] found id: ""
	I1104 12:12:08.998778   86402 logs.go:282] 0 containers: []
	W1104 12:12:08.998786   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:12:08.998790   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:12:08.998852   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:12:09.031901   86402 cri.go:89] found id: ""
	I1104 12:12:09.031925   86402 logs.go:282] 0 containers: []
	W1104 12:12:09.031934   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:12:09.031940   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:12:09.032000   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:12:09.071478   86402 cri.go:89] found id: ""
	I1104 12:12:09.071500   86402 logs.go:282] 0 containers: []
	W1104 12:12:09.071508   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:12:09.071513   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:12:09.071564   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:12:09.107593   86402 cri.go:89] found id: ""
	I1104 12:12:09.107621   86402 logs.go:282] 0 containers: []
	W1104 12:12:09.107629   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:12:09.107635   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:12:09.107693   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:12:09.140899   86402 cri.go:89] found id: ""
	I1104 12:12:09.140923   86402 logs.go:282] 0 containers: []
	W1104 12:12:09.140934   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:12:09.140942   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:12:09.141000   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:12:09.174279   86402 cri.go:89] found id: ""
	I1104 12:12:09.174307   86402 logs.go:282] 0 containers: []
	W1104 12:12:09.174318   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:12:09.174330   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:12:09.174405   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:12:09.226340   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:12:09.226371   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:12:09.239573   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:12:09.239600   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:12:09.306180   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:12:09.306201   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:12:09.306212   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:12:09.385039   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:12:09.385072   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:12:13.475909   85759 system_pods.go:59] 8 kube-system pods found
	I1104 12:12:13.475946   85759 system_pods.go:61] "coredns-7c65d6cfc9-mf8xg" [c0162005-7971-4161-9575-9f36c13d54f2] Running
	I1104 12:12:13.475954   85759 system_pods.go:61] "etcd-embed-certs-325116" [4cfeeefb-d7e4-48b6-bea0-e9d967750770] Running
	I1104 12:12:13.475960   85759 system_pods.go:61] "kube-apiserver-embed-certs-325116" [69ad8209-af11-4479-b4f7-9991f98d74b9] Running
	I1104 12:12:13.475965   85759 system_pods.go:61] "kube-controller-manager-embed-certs-325116" [1ba1fbaf-e1e2-4ca7-aef5-84c4410143c4] Running
	I1104 12:12:13.475970   85759 system_pods.go:61] "kube-proxy-phzgx" [4ea64f2c-7568-486d-9941-f89ed4221f35] Running
	I1104 12:12:13.475975   85759 system_pods.go:61] "kube-scheduler-embed-certs-325116" [168359e4-eda1-4fb6-ab45-03e888466702] Running
	I1104 12:12:13.475985   85759 system_pods.go:61] "metrics-server-6867b74b74-knfd4" [5b3ef856-5b69-44b1-ae29-4a58bf235e41] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1104 12:12:13.475994   85759 system_pods.go:61] "storage-provisioner" [0dabcf5a-028b-4ab6-8af4-be25abaeb9b5] Running
	I1104 12:12:13.476008   85759 system_pods.go:74] duration metric: took 3.773548162s to wait for pod list to return data ...
	I1104 12:12:13.476020   85759 default_sa.go:34] waiting for default service account to be created ...
	I1104 12:12:13.478598   85759 default_sa.go:45] found service account: "default"
	I1104 12:12:13.478618   85759 default_sa.go:55] duration metric: took 2.591186ms for default service account to be created ...
	I1104 12:12:13.478628   85759 system_pods.go:116] waiting for k8s-apps to be running ...
	I1104 12:12:13.483285   85759 system_pods.go:86] 8 kube-system pods found
	I1104 12:12:13.483308   85759 system_pods.go:89] "coredns-7c65d6cfc9-mf8xg" [c0162005-7971-4161-9575-9f36c13d54f2] Running
	I1104 12:12:13.483314   85759 system_pods.go:89] "etcd-embed-certs-325116" [4cfeeefb-d7e4-48b6-bea0-e9d967750770] Running
	I1104 12:12:13.483318   85759 system_pods.go:89] "kube-apiserver-embed-certs-325116" [69ad8209-af11-4479-b4f7-9991f98d74b9] Running
	I1104 12:12:13.483322   85759 system_pods.go:89] "kube-controller-manager-embed-certs-325116" [1ba1fbaf-e1e2-4ca7-aef5-84c4410143c4] Running
	I1104 12:12:13.483325   85759 system_pods.go:89] "kube-proxy-phzgx" [4ea64f2c-7568-486d-9941-f89ed4221f35] Running
	I1104 12:12:13.483329   85759 system_pods.go:89] "kube-scheduler-embed-certs-325116" [168359e4-eda1-4fb6-ab45-03e888466702] Running
	I1104 12:12:13.483336   85759 system_pods.go:89] "metrics-server-6867b74b74-knfd4" [5b3ef856-5b69-44b1-ae29-4a58bf235e41] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1104 12:12:13.483340   85759 system_pods.go:89] "storage-provisioner" [0dabcf5a-028b-4ab6-8af4-be25abaeb9b5] Running
	I1104 12:12:13.483347   85759 system_pods.go:126] duration metric: took 4.713256ms to wait for k8s-apps to be running ...
	I1104 12:12:13.483355   85759 system_svc.go:44] waiting for kubelet service to be running ....
	I1104 12:12:13.483398   85759 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1104 12:12:13.497748   85759 system_svc.go:56] duration metric: took 14.381722ms WaitForService to wait for kubelet
	I1104 12:12:13.497812   85759 kubeadm.go:582] duration metric: took 4m23.411218278s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1104 12:12:13.497843   85759 node_conditions.go:102] verifying NodePressure condition ...
	I1104 12:12:13.500813   85759 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1104 12:12:13.500833   85759 node_conditions.go:123] node cpu capacity is 2
	I1104 12:12:13.500843   85759 node_conditions.go:105] duration metric: took 2.993771ms to run NodePressure ...
	I1104 12:12:13.500854   85759 start.go:241] waiting for startup goroutines ...
	I1104 12:12:13.500860   85759 start.go:246] waiting for cluster config update ...
	I1104 12:12:13.500870   85759 start.go:255] writing updated cluster config ...
	I1104 12:12:13.501122   85759 ssh_runner.go:195] Run: rm -f paused
	I1104 12:12:13.548293   85759 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1104 12:12:13.550203   85759 out.go:177] * Done! kubectl is now configured to use "embed-certs-325116" cluster and "default" namespace by default
	I1104 12:12:10.707746   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:12.708477   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:11.555266   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:13.555498   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:11.924105   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:12:11.936623   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:12:11.936685   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:12:11.968026   86402 cri.go:89] found id: ""
	I1104 12:12:11.968056   86402 logs.go:282] 0 containers: []
	W1104 12:12:11.968067   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:12:11.968074   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:12:11.968139   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:12:12.001193   86402 cri.go:89] found id: ""
	I1104 12:12:12.001218   86402 logs.go:282] 0 containers: []
	W1104 12:12:12.001245   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:12:12.001252   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:12:12.001311   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:12:12.035167   86402 cri.go:89] found id: ""
	I1104 12:12:12.035190   86402 logs.go:282] 0 containers: []
	W1104 12:12:12.035199   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:12:12.035204   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:12:12.035250   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:12:12.068412   86402 cri.go:89] found id: ""
	I1104 12:12:12.068440   86402 logs.go:282] 0 containers: []
	W1104 12:12:12.068450   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:12:12.068458   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:12:12.068515   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:12:12.099965   86402 cri.go:89] found id: ""
	I1104 12:12:12.099991   86402 logs.go:282] 0 containers: []
	W1104 12:12:12.100002   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:12:12.100009   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:12:12.100066   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:12:12.133413   86402 cri.go:89] found id: ""
	I1104 12:12:12.133442   86402 logs.go:282] 0 containers: []
	W1104 12:12:12.133453   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:12:12.133460   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:12:12.133520   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:12:12.169007   86402 cri.go:89] found id: ""
	I1104 12:12:12.169036   86402 logs.go:282] 0 containers: []
	W1104 12:12:12.169046   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:12:12.169053   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:12:12.169112   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:12:12.200592   86402 cri.go:89] found id: ""
	I1104 12:12:12.200621   86402 logs.go:282] 0 containers: []
	W1104 12:12:12.200635   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:12:12.200643   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:12:12.200657   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:12:12.244609   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:12:12.244644   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:12:12.299770   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:12:12.299804   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:12:12.324354   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:12:12.324395   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:12:12.385605   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:12:12.385632   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:12:12.385661   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:12:14.964867   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:12:14.977918   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:12:14.977991   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:12:15.012865   86402 cri.go:89] found id: ""
	I1104 12:12:15.012894   86402 logs.go:282] 0 containers: []
	W1104 12:12:15.012906   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:12:15.012913   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:12:15.012977   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:12:15.046548   86402 cri.go:89] found id: ""
	I1104 12:12:15.046574   86402 logs.go:282] 0 containers: []
	W1104 12:12:15.046583   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:12:15.046589   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:12:15.046636   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:12:15.079310   86402 cri.go:89] found id: ""
	I1104 12:12:15.079336   86402 logs.go:282] 0 containers: []
	W1104 12:12:15.079347   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:12:15.079353   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:12:15.079412   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:12:15.110595   86402 cri.go:89] found id: ""
	I1104 12:12:15.110625   86402 logs.go:282] 0 containers: []
	W1104 12:12:15.110636   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:12:15.110648   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:12:15.110716   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:12:15.143362   86402 cri.go:89] found id: ""
	I1104 12:12:15.143391   86402 logs.go:282] 0 containers: []
	W1104 12:12:15.143403   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:12:15.143410   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:12:15.143533   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:12:15.173973   86402 cri.go:89] found id: ""
	I1104 12:12:15.174000   86402 logs.go:282] 0 containers: []
	W1104 12:12:15.174009   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:12:15.174017   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:12:15.174081   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:12:15.205021   86402 cri.go:89] found id: ""
	I1104 12:12:15.205049   86402 logs.go:282] 0 containers: []
	W1104 12:12:15.205060   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:12:15.205067   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:12:15.205113   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:12:15.240190   86402 cri.go:89] found id: ""
	I1104 12:12:15.240220   86402 logs.go:282] 0 containers: []
	W1104 12:12:15.240231   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:12:15.240249   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:12:15.240263   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:12:15.290208   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:12:15.290241   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:12:15.305216   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:12:15.305258   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:12:15.375713   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:12:15.375735   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:12:15.375746   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:12:15.456517   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:12:15.456552   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:12:15.209380   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:17.708299   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:16.056359   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:18.556166   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:20.050834   86301 pod_ready.go:82] duration metric: took 4m0.001048639s for pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace to be "Ready" ...
	E1104 12:12:20.050863   86301 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1104 12:12:20.050874   86301 pod_ready.go:39] duration metric: took 4m5.585310983s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1104 12:12:20.050889   86301 api_server.go:52] waiting for apiserver process to appear ...
	I1104 12:12:20.050919   86301 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:12:20.050968   86301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:12:20.088440   86301 cri.go:89] found id: "2e1787441f88b7be6fbfb08abfa621dbc26e984bbd40544c92bf02bae7c7709a"
	I1104 12:12:20.088466   86301 cri.go:89] found id: ""
	I1104 12:12:20.088476   86301 logs.go:282] 1 containers: [2e1787441f88b7be6fbfb08abfa621dbc26e984bbd40544c92bf02bae7c7709a]
	I1104 12:12:20.088523   86301 ssh_runner.go:195] Run: which crictl
	I1104 12:12:20.092502   86301 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:12:20.092575   86301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:12:20.126599   86301 cri.go:89] found id: "1bc906f9e4e940157a2bb10397ebc3ba90f93123792d5de597633f6c5b3c64d7"
	I1104 12:12:20.126621   86301 cri.go:89] found id: ""
	I1104 12:12:20.126629   86301 logs.go:282] 1 containers: [1bc906f9e4e940157a2bb10397ebc3ba90f93123792d5de597633f6c5b3c64d7]
	I1104 12:12:20.126687   86301 ssh_runner.go:195] Run: which crictl
	I1104 12:12:20.130617   86301 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:12:20.130686   86301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:12:20.169664   86301 cri.go:89] found id: "51442200af1bb501b83cd7280eeee93590e5cb241e91cf5af1608a6eccfdf5a1"
	I1104 12:12:20.169687   86301 cri.go:89] found id: ""
	I1104 12:12:20.169696   86301 logs.go:282] 1 containers: [51442200af1bb501b83cd7280eeee93590e5cb241e91cf5af1608a6eccfdf5a1]
	I1104 12:12:20.169750   86301 ssh_runner.go:195] Run: which crictl
	I1104 12:12:20.173881   86301 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:12:20.173920   86301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:12:20.209271   86301 cri.go:89] found id: "c33ea99d25624abd85dcac43e17e1dd6dcea3a7b6333e91e8dc99ad02c037e07"
	I1104 12:12:20.209292   86301 cri.go:89] found id: ""
	I1104 12:12:20.209299   86301 logs.go:282] 1 containers: [c33ea99d25624abd85dcac43e17e1dd6dcea3a7b6333e91e8dc99ad02c037e07]
	I1104 12:12:20.209354   86301 ssh_runner.go:195] Run: which crictl
	I1104 12:12:20.214187   86301 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:12:20.214254   86301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:12:20.248683   86301 cri.go:89] found id: "9e60ae78d5610ae63ce7016b58d60da05791a055324eea67efd0feb374bdd4b4"
	I1104 12:12:20.248702   86301 cri.go:89] found id: ""
	I1104 12:12:20.248709   86301 logs.go:282] 1 containers: [9e60ae78d5610ae63ce7016b58d60da05791a055324eea67efd0feb374bdd4b4]
	I1104 12:12:20.248757   86301 ssh_runner.go:195] Run: which crictl
	I1104 12:12:20.252501   86301 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:12:20.252574   86301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:12:20.286367   86301 cri.go:89] found id: "1346cefb5059440d0c41bba9a5526748e6c783b8f935f34dcf419f728abcd35e"
	I1104 12:12:20.286406   86301 cri.go:89] found id: ""
	I1104 12:12:20.286415   86301 logs.go:282] 1 containers: [1346cefb5059440d0c41bba9a5526748e6c783b8f935f34dcf419f728abcd35e]
	I1104 12:12:20.286491   86301 ssh_runner.go:195] Run: which crictl
	I1104 12:12:17.992855   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:12:18.011370   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:12:18.011446   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:12:18.054937   86402 cri.go:89] found id: ""
	I1104 12:12:18.054961   86402 logs.go:282] 0 containers: []
	W1104 12:12:18.054968   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:12:18.054974   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:12:18.055026   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:12:18.107769   86402 cri.go:89] found id: ""
	I1104 12:12:18.107802   86402 logs.go:282] 0 containers: []
	W1104 12:12:18.107814   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:12:18.107821   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:12:18.107887   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:12:18.141932   86402 cri.go:89] found id: ""
	I1104 12:12:18.141959   86402 logs.go:282] 0 containers: []
	W1104 12:12:18.141968   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:12:18.141974   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:12:18.142021   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:12:18.174322   86402 cri.go:89] found id: ""
	I1104 12:12:18.174345   86402 logs.go:282] 0 containers: []
	W1104 12:12:18.174353   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:12:18.174361   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:12:18.174514   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:12:18.206742   86402 cri.go:89] found id: ""
	I1104 12:12:18.206766   86402 logs.go:282] 0 containers: []
	W1104 12:12:18.206776   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:12:18.206782   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:12:18.206840   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:12:18.240322   86402 cri.go:89] found id: ""
	I1104 12:12:18.240345   86402 logs.go:282] 0 containers: []
	W1104 12:12:18.240358   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:12:18.240363   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:12:18.240420   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:12:18.272081   86402 cri.go:89] found id: ""
	I1104 12:12:18.272110   86402 logs.go:282] 0 containers: []
	W1104 12:12:18.272121   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:12:18.272128   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:12:18.272211   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:12:18.308604   86402 cri.go:89] found id: ""
	I1104 12:12:18.308629   86402 logs.go:282] 0 containers: []
	W1104 12:12:18.308637   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:12:18.308646   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:12:18.308655   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:12:18.392854   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:12:18.392892   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:12:18.429632   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:12:18.429665   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:12:18.481082   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:12:18.481120   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:12:18.494730   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:12:18.494758   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:12:18.562098   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:12:21.063223   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:12:21.075655   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:12:21.075714   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:12:21.117762   86402 cri.go:89] found id: ""
	I1104 12:12:21.117794   86402 logs.go:282] 0 containers: []
	W1104 12:12:21.117807   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:12:21.117817   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:12:21.117881   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:12:21.153256   86402 cri.go:89] found id: ""
	I1104 12:12:21.153281   86402 logs.go:282] 0 containers: []
	W1104 12:12:21.153289   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:12:21.153295   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:12:21.153355   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:12:21.191477   86402 cri.go:89] found id: ""
	I1104 12:12:21.191519   86402 logs.go:282] 0 containers: []
	W1104 12:12:21.191539   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:12:21.191547   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:12:21.191618   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:12:21.228378   86402 cri.go:89] found id: ""
	I1104 12:12:21.228411   86402 logs.go:282] 0 containers: []
	W1104 12:12:21.228424   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:12:21.228431   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:12:21.228495   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:12:21.265452   86402 cri.go:89] found id: ""
	I1104 12:12:21.265483   86402 logs.go:282] 0 containers: []
	W1104 12:12:21.265493   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:12:21.265501   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:12:21.265561   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:12:21.301073   86402 cri.go:89] found id: ""
	I1104 12:12:21.301099   86402 logs.go:282] 0 containers: []
	W1104 12:12:21.301108   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:12:21.301114   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:12:21.301182   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:12:21.337952   86402 cri.go:89] found id: ""
	I1104 12:12:21.337977   86402 logs.go:282] 0 containers: []
	W1104 12:12:21.337986   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:12:21.337996   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:12:21.338053   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:12:21.371895   86402 cri.go:89] found id: ""
	I1104 12:12:21.371920   86402 logs.go:282] 0 containers: []
	W1104 12:12:21.371929   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:12:21.371937   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:12:21.371950   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:12:21.429757   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:12:21.429789   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:12:21.444365   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:12:21.444418   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:12:21.510971   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:12:21.510990   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:12:21.511002   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:12:21.593605   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:12:21.593639   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:12:20.208004   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:22.706901   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:24.708795   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:20.290832   86301 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:12:20.290885   86301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:12:20.324359   86301 cri.go:89] found id: ""
	I1104 12:12:20.324383   86301 logs.go:282] 0 containers: []
	W1104 12:12:20.324391   86301 logs.go:284] No container was found matching "kindnet"
	I1104 12:12:20.324397   86301 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1104 12:12:20.324442   86301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1104 12:12:20.364466   86301 cri.go:89] found id: "9e9ecf7280a07a47b274097b461b9f3467e388751471e9da1adfae3166380516"
	I1104 12:12:20.364488   86301 cri.go:89] found id: "f8d8096ede6a877ac87491c90844f6cccb8fad4309f4a01e86bf1f7e3a4c9823"
	I1104 12:12:20.364492   86301 cri.go:89] found id: ""
	I1104 12:12:20.364500   86301 logs.go:282] 2 containers: [9e9ecf7280a07a47b274097b461b9f3467e388751471e9da1adfae3166380516 f8d8096ede6a877ac87491c90844f6cccb8fad4309f4a01e86bf1f7e3a4c9823]
	I1104 12:12:20.364557   86301 ssh_runner.go:195] Run: which crictl
	I1104 12:12:20.368440   86301 ssh_runner.go:195] Run: which crictl
	I1104 12:12:20.371967   86301 logs.go:123] Gathering logs for kube-scheduler [c33ea99d25624abd85dcac43e17e1dd6dcea3a7b6333e91e8dc99ad02c037e07] ...
	I1104 12:12:20.371991   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c33ea99d25624abd85dcac43e17e1dd6dcea3a7b6333e91e8dc99ad02c037e07"
	I1104 12:12:20.405547   86301 logs.go:123] Gathering logs for kube-proxy [9e60ae78d5610ae63ce7016b58d60da05791a055324eea67efd0feb374bdd4b4] ...
	I1104 12:12:20.405572   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e60ae78d5610ae63ce7016b58d60da05791a055324eea67efd0feb374bdd4b4"
	I1104 12:12:20.446936   86301 logs.go:123] Gathering logs for storage-provisioner [9e9ecf7280a07a47b274097b461b9f3467e388751471e9da1adfae3166380516] ...
	I1104 12:12:20.446962   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e9ecf7280a07a47b274097b461b9f3467e388751471e9da1adfae3166380516"
	I1104 12:12:20.485811   86301 logs.go:123] Gathering logs for container status ...
	I1104 12:12:20.485838   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:12:20.530775   86301 logs.go:123] Gathering logs for kubelet ...
	I1104 12:12:20.530803   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:12:20.599495   86301 logs.go:123] Gathering logs for dmesg ...
	I1104 12:12:20.599542   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:12:20.614511   86301 logs.go:123] Gathering logs for kube-apiserver [2e1787441f88b7be6fbfb08abfa621dbc26e984bbd40544c92bf02bae7c7709a] ...
	I1104 12:12:20.614543   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2e1787441f88b7be6fbfb08abfa621dbc26e984bbd40544c92bf02bae7c7709a"
	I1104 12:12:20.659277   86301 logs.go:123] Gathering logs for coredns [51442200af1bb501b83cd7280eeee93590e5cb241e91cf5af1608a6eccfdf5a1] ...
	I1104 12:12:20.659316   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 51442200af1bb501b83cd7280eeee93590e5cb241e91cf5af1608a6eccfdf5a1"
	I1104 12:12:20.694675   86301 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:12:20.694707   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:12:21.187670   86301 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:12:21.187705   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1104 12:12:21.308477   86301 logs.go:123] Gathering logs for etcd [1bc906f9e4e940157a2bb10397ebc3ba90f93123792d5de597633f6c5b3c64d7] ...
	I1104 12:12:21.308501   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1bc906f9e4e940157a2bb10397ebc3ba90f93123792d5de597633f6c5b3c64d7"
	I1104 12:12:21.365526   86301 logs.go:123] Gathering logs for kube-controller-manager [1346cefb5059440d0c41bba9a5526748e6c783b8f935f34dcf419f728abcd35e] ...
	I1104 12:12:21.365562   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1346cefb5059440d0c41bba9a5526748e6c783b8f935f34dcf419f728abcd35e"
	I1104 12:12:21.431350   86301 logs.go:123] Gathering logs for storage-provisioner [f8d8096ede6a877ac87491c90844f6cccb8fad4309f4a01e86bf1f7e3a4c9823] ...
	I1104 12:12:21.431381   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f8d8096ede6a877ac87491c90844f6cccb8fad4309f4a01e86bf1f7e3a4c9823"
	I1104 12:12:23.969966   86301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:12:23.984866   86301 api_server.go:72] duration metric: took 4m16.75797908s to wait for apiserver process to appear ...
	I1104 12:12:23.984895   86301 api_server.go:88] waiting for apiserver healthz status ...
	I1104 12:12:23.984937   86301 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:12:23.984989   86301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:12:24.022326   86301 cri.go:89] found id: "2e1787441f88b7be6fbfb08abfa621dbc26e984bbd40544c92bf02bae7c7709a"
	I1104 12:12:24.022348   86301 cri.go:89] found id: ""
	I1104 12:12:24.022357   86301 logs.go:282] 1 containers: [2e1787441f88b7be6fbfb08abfa621dbc26e984bbd40544c92bf02bae7c7709a]
	I1104 12:12:24.022428   86301 ssh_runner.go:195] Run: which crictl
	I1104 12:12:24.027288   86301 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:12:24.027377   86301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:12:24.064963   86301 cri.go:89] found id: "1bc906f9e4e940157a2bb10397ebc3ba90f93123792d5de597633f6c5b3c64d7"
	I1104 12:12:24.064986   86301 cri.go:89] found id: ""
	I1104 12:12:24.064993   86301 logs.go:282] 1 containers: [1bc906f9e4e940157a2bb10397ebc3ba90f93123792d5de597633f6c5b3c64d7]
	I1104 12:12:24.065045   86301 ssh_runner.go:195] Run: which crictl
	I1104 12:12:24.072027   86301 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:12:24.072089   86301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:12:24.106618   86301 cri.go:89] found id: "51442200af1bb501b83cd7280eeee93590e5cb241e91cf5af1608a6eccfdf5a1"
	I1104 12:12:24.106648   86301 cri.go:89] found id: ""
	I1104 12:12:24.106659   86301 logs.go:282] 1 containers: [51442200af1bb501b83cd7280eeee93590e5cb241e91cf5af1608a6eccfdf5a1]
	I1104 12:12:24.106719   86301 ssh_runner.go:195] Run: which crictl
	I1104 12:12:24.110696   86301 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:12:24.110762   86301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:12:24.148575   86301 cri.go:89] found id: "c33ea99d25624abd85dcac43e17e1dd6dcea3a7b6333e91e8dc99ad02c037e07"
	I1104 12:12:24.148600   86301 cri.go:89] found id: ""
	I1104 12:12:24.148621   86301 logs.go:282] 1 containers: [c33ea99d25624abd85dcac43e17e1dd6dcea3a7b6333e91e8dc99ad02c037e07]
	I1104 12:12:24.148687   86301 ssh_runner.go:195] Run: which crictl
	I1104 12:12:24.152673   86301 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:12:24.152741   86301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:12:24.187739   86301 cri.go:89] found id: "9e60ae78d5610ae63ce7016b58d60da05791a055324eea67efd0feb374bdd4b4"
	I1104 12:12:24.187763   86301 cri.go:89] found id: ""
	I1104 12:12:24.187771   86301 logs.go:282] 1 containers: [9e60ae78d5610ae63ce7016b58d60da05791a055324eea67efd0feb374bdd4b4]
	I1104 12:12:24.187817   86301 ssh_runner.go:195] Run: which crictl
	I1104 12:12:24.191551   86301 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:12:24.191610   86301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:12:24.229634   86301 cri.go:89] found id: "1346cefb5059440d0c41bba9a5526748e6c783b8f935f34dcf419f728abcd35e"
	I1104 12:12:24.229656   86301 cri.go:89] found id: ""
	I1104 12:12:24.229667   86301 logs.go:282] 1 containers: [1346cefb5059440d0c41bba9a5526748e6c783b8f935f34dcf419f728abcd35e]
	I1104 12:12:24.229720   86301 ssh_runner.go:195] Run: which crictl
	I1104 12:12:24.234342   86301 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:12:24.234426   86301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:12:24.268339   86301 cri.go:89] found id: ""
	I1104 12:12:24.268363   86301 logs.go:282] 0 containers: []
	W1104 12:12:24.268370   86301 logs.go:284] No container was found matching "kindnet"
	I1104 12:12:24.268375   86301 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1104 12:12:24.268426   86301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1104 12:12:24.302347   86301 cri.go:89] found id: "9e9ecf7280a07a47b274097b461b9f3467e388751471e9da1adfae3166380516"
	I1104 12:12:24.302369   86301 cri.go:89] found id: "f8d8096ede6a877ac87491c90844f6cccb8fad4309f4a01e86bf1f7e3a4c9823"
	I1104 12:12:24.302374   86301 cri.go:89] found id: ""
	I1104 12:12:24.302382   86301 logs.go:282] 2 containers: [9e9ecf7280a07a47b274097b461b9f3467e388751471e9da1adfae3166380516 f8d8096ede6a877ac87491c90844f6cccb8fad4309f4a01e86bf1f7e3a4c9823]
	I1104 12:12:24.302446   86301 ssh_runner.go:195] Run: which crictl
	I1104 12:12:24.306761   86301 ssh_runner.go:195] Run: which crictl
	I1104 12:12:24.310867   86301 logs.go:123] Gathering logs for coredns [51442200af1bb501b83cd7280eeee93590e5cb241e91cf5af1608a6eccfdf5a1] ...
	I1104 12:12:24.310888   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 51442200af1bb501b83cd7280eeee93590e5cb241e91cf5af1608a6eccfdf5a1"
	I1104 12:12:24.353396   86301 logs.go:123] Gathering logs for kube-controller-manager [1346cefb5059440d0c41bba9a5526748e6c783b8f935f34dcf419f728abcd35e] ...
	I1104 12:12:24.353421   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1346cefb5059440d0c41bba9a5526748e6c783b8f935f34dcf419f728abcd35e"
	I1104 12:12:24.408025   86301 logs.go:123] Gathering logs for storage-provisioner [9e9ecf7280a07a47b274097b461b9f3467e388751471e9da1adfae3166380516] ...
	I1104 12:12:24.408054   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e9ecf7280a07a47b274097b461b9f3467e388751471e9da1adfae3166380516"
	I1104 12:12:24.446150   86301 logs.go:123] Gathering logs for container status ...
	I1104 12:12:24.446177   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:12:24.495479   86301 logs.go:123] Gathering logs for kubelet ...
	I1104 12:12:24.495505   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:12:24.568973   86301 logs.go:123] Gathering logs for dmesg ...
	I1104 12:12:24.569008   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:12:24.585522   86301 logs.go:123] Gathering logs for kube-apiserver [2e1787441f88b7be6fbfb08abfa621dbc26e984bbd40544c92bf02bae7c7709a] ...
	I1104 12:12:24.585552   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2e1787441f88b7be6fbfb08abfa621dbc26e984bbd40544c92bf02bae7c7709a"
	I1104 12:12:24.630483   86301 logs.go:123] Gathering logs for etcd [1bc906f9e4e940157a2bb10397ebc3ba90f93123792d5de597633f6c5b3c64d7] ...
	I1104 12:12:24.630516   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1bc906f9e4e940157a2bb10397ebc3ba90f93123792d5de597633f6c5b3c64d7"
	I1104 12:12:24.675828   86301 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:12:24.675865   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:12:25.094412   86301 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:12:25.094457   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1104 12:12:25.191547   86301 logs.go:123] Gathering logs for kube-scheduler [c33ea99d25624abd85dcac43e17e1dd6dcea3a7b6333e91e8dc99ad02c037e07] ...
	I1104 12:12:25.191576   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c33ea99d25624abd85dcac43e17e1dd6dcea3a7b6333e91e8dc99ad02c037e07"
	I1104 12:12:25.227482   86301 logs.go:123] Gathering logs for kube-proxy [9e60ae78d5610ae63ce7016b58d60da05791a055324eea67efd0feb374bdd4b4] ...
	I1104 12:12:25.227509   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e60ae78d5610ae63ce7016b58d60da05791a055324eea67efd0feb374bdd4b4"
	I1104 12:12:25.261150   86301 logs.go:123] Gathering logs for storage-provisioner [f8d8096ede6a877ac87491c90844f6cccb8fad4309f4a01e86bf1f7e3a4c9823] ...
	I1104 12:12:25.261184   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f8d8096ede6a877ac87491c90844f6cccb8fad4309f4a01e86bf1f7e3a4c9823"
	I1104 12:12:24.130961   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:12:24.143387   86402 kubeadm.go:597] duration metric: took 4m4.25221988s to restartPrimaryControlPlane
	W1104 12:12:24.143472   86402 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1104 12:12:24.143499   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1104 12:12:27.207964   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:29.208705   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:27.799329   86301 api_server.go:253] Checking apiserver healthz at https://192.168.72.130:8444/healthz ...
	I1104 12:12:27.803543   86301 api_server.go:279] https://192.168.72.130:8444/healthz returned 200:
	ok
	I1104 12:12:27.804545   86301 api_server.go:141] control plane version: v1.31.2
	I1104 12:12:27.804568   86301 api_server.go:131] duration metric: took 3.819666619s to wait for apiserver health ...
	I1104 12:12:27.804576   86301 system_pods.go:43] waiting for kube-system pods to appear ...
	I1104 12:12:27.804596   86301 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:12:27.804639   86301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:12:27.842317   86301 cri.go:89] found id: "2e1787441f88b7be6fbfb08abfa621dbc26e984bbd40544c92bf02bae7c7709a"
	I1104 12:12:27.842339   86301 cri.go:89] found id: ""
	I1104 12:12:27.842348   86301 logs.go:282] 1 containers: [2e1787441f88b7be6fbfb08abfa621dbc26e984bbd40544c92bf02bae7c7709a]
	I1104 12:12:27.842403   86301 ssh_runner.go:195] Run: which crictl
	I1104 12:12:27.846107   86301 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:12:27.846167   86301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:12:27.878833   86301 cri.go:89] found id: "1bc906f9e4e940157a2bb10397ebc3ba90f93123792d5de597633f6c5b3c64d7"
	I1104 12:12:27.878854   86301 cri.go:89] found id: ""
	I1104 12:12:27.878864   86301 logs.go:282] 1 containers: [1bc906f9e4e940157a2bb10397ebc3ba90f93123792d5de597633f6c5b3c64d7]
	I1104 12:12:27.878923   86301 ssh_runner.go:195] Run: which crictl
	I1104 12:12:27.882562   86301 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:12:27.882614   86301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:12:27.914077   86301 cri.go:89] found id: "51442200af1bb501b83cd7280eeee93590e5cb241e91cf5af1608a6eccfdf5a1"
	I1104 12:12:27.914098   86301 cri.go:89] found id: ""
	I1104 12:12:27.914106   86301 logs.go:282] 1 containers: [51442200af1bb501b83cd7280eeee93590e5cb241e91cf5af1608a6eccfdf5a1]
	I1104 12:12:27.914150   86301 ssh_runner.go:195] Run: which crictl
	I1104 12:12:27.917756   86301 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:12:27.917807   86301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:12:27.949534   86301 cri.go:89] found id: "c33ea99d25624abd85dcac43e17e1dd6dcea3a7b6333e91e8dc99ad02c037e07"
	I1104 12:12:27.949555   86301 cri.go:89] found id: ""
	I1104 12:12:27.949562   86301 logs.go:282] 1 containers: [c33ea99d25624abd85dcac43e17e1dd6dcea3a7b6333e91e8dc99ad02c037e07]
	I1104 12:12:27.949606   86301 ssh_runner.go:195] Run: which crictl
	I1104 12:12:27.953176   86301 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:12:27.953235   86301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:12:27.984491   86301 cri.go:89] found id: "9e60ae78d5610ae63ce7016b58d60da05791a055324eea67efd0feb374bdd4b4"
	I1104 12:12:27.984509   86301 cri.go:89] found id: ""
	I1104 12:12:27.984516   86301 logs.go:282] 1 containers: [9e60ae78d5610ae63ce7016b58d60da05791a055324eea67efd0feb374bdd4b4]
	I1104 12:12:27.984566   86301 ssh_runner.go:195] Run: which crictl
	I1104 12:12:27.988283   86301 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:12:27.988342   86301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:12:28.022752   86301 cri.go:89] found id: "1346cefb5059440d0c41bba9a5526748e6c783b8f935f34dcf419f728abcd35e"
	I1104 12:12:28.022775   86301 cri.go:89] found id: ""
	I1104 12:12:28.022783   86301 logs.go:282] 1 containers: [1346cefb5059440d0c41bba9a5526748e6c783b8f935f34dcf419f728abcd35e]
	I1104 12:12:28.022829   86301 ssh_runner.go:195] Run: which crictl
	I1104 12:12:28.026702   86301 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:12:28.026767   86301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:12:28.062501   86301 cri.go:89] found id: ""
	I1104 12:12:28.062534   86301 logs.go:282] 0 containers: []
	W1104 12:12:28.062545   86301 logs.go:284] No container was found matching "kindnet"
	I1104 12:12:28.062556   86301 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1104 12:12:28.062608   86301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1104 12:12:28.097167   86301 cri.go:89] found id: "9e9ecf7280a07a47b274097b461b9f3467e388751471e9da1adfae3166380516"
	I1104 12:12:28.097195   86301 cri.go:89] found id: "f8d8096ede6a877ac87491c90844f6cccb8fad4309f4a01e86bf1f7e3a4c9823"
	I1104 12:12:28.097201   86301 cri.go:89] found id: ""
	I1104 12:12:28.097211   86301 logs.go:282] 2 containers: [9e9ecf7280a07a47b274097b461b9f3467e388751471e9da1adfae3166380516 f8d8096ede6a877ac87491c90844f6cccb8fad4309f4a01e86bf1f7e3a4c9823]
	I1104 12:12:28.097276   86301 ssh_runner.go:195] Run: which crictl
	I1104 12:12:28.101192   86301 ssh_runner.go:195] Run: which crictl
	I1104 12:12:28.104712   86301 logs.go:123] Gathering logs for dmesg ...
	I1104 12:12:28.104731   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:12:28.118886   86301 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:12:28.118911   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1104 12:12:28.220480   86301 logs.go:123] Gathering logs for etcd [1bc906f9e4e940157a2bb10397ebc3ba90f93123792d5de597633f6c5b3c64d7] ...
	I1104 12:12:28.220512   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1bc906f9e4e940157a2bb10397ebc3ba90f93123792d5de597633f6c5b3c64d7"
	I1104 12:12:28.264205   86301 logs.go:123] Gathering logs for coredns [51442200af1bb501b83cd7280eeee93590e5cb241e91cf5af1608a6eccfdf5a1] ...
	I1104 12:12:28.264239   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 51442200af1bb501b83cd7280eeee93590e5cb241e91cf5af1608a6eccfdf5a1"
	I1104 12:12:28.299241   86301 logs.go:123] Gathering logs for kube-scheduler [c33ea99d25624abd85dcac43e17e1dd6dcea3a7b6333e91e8dc99ad02c037e07] ...
	I1104 12:12:28.299274   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c33ea99d25624abd85dcac43e17e1dd6dcea3a7b6333e91e8dc99ad02c037e07"
	I1104 12:12:28.339817   86301 logs.go:123] Gathering logs for kube-proxy [9e60ae78d5610ae63ce7016b58d60da05791a055324eea67efd0feb374bdd4b4] ...
	I1104 12:12:28.339847   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e60ae78d5610ae63ce7016b58d60da05791a055324eea67efd0feb374bdd4b4"
	I1104 12:12:28.377987   86301 logs.go:123] Gathering logs for container status ...
	I1104 12:12:28.378014   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:12:28.416746   86301 logs.go:123] Gathering logs for kubelet ...
	I1104 12:12:28.416772   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:12:28.484743   86301 logs.go:123] Gathering logs for kube-apiserver [2e1787441f88b7be6fbfb08abfa621dbc26e984bbd40544c92bf02bae7c7709a] ...
	I1104 12:12:28.484777   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2e1787441f88b7be6fbfb08abfa621dbc26e984bbd40544c92bf02bae7c7709a"
	I1104 12:12:28.532089   86301 logs.go:123] Gathering logs for kube-controller-manager [1346cefb5059440d0c41bba9a5526748e6c783b8f935f34dcf419f728abcd35e] ...
	I1104 12:12:28.532128   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1346cefb5059440d0c41bba9a5526748e6c783b8f935f34dcf419f728abcd35e"
	I1104 12:12:28.589039   86301 logs.go:123] Gathering logs for storage-provisioner [9e9ecf7280a07a47b274097b461b9f3467e388751471e9da1adfae3166380516] ...
	I1104 12:12:28.589072   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e9ecf7280a07a47b274097b461b9f3467e388751471e9da1adfae3166380516"
	I1104 12:12:28.623955   86301 logs.go:123] Gathering logs for storage-provisioner [f8d8096ede6a877ac87491c90844f6cccb8fad4309f4a01e86bf1f7e3a4c9823] ...
	I1104 12:12:28.623987   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f8d8096ede6a877ac87491c90844f6cccb8fad4309f4a01e86bf1f7e3a4c9823"
	I1104 12:12:28.657953   86301 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:12:28.657986   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:12:31.547595   86301 system_pods.go:59] 8 kube-system pods found
	I1104 12:12:31.547624   86301 system_pods.go:61] "coredns-7c65d6cfc9-zw2tv" [71ce75a4-f051-4014-9ed0-7b275ea940a9] Running
	I1104 12:12:31.547629   86301 system_pods.go:61] "etcd-default-k8s-diff-port-036892" [7e46d97c-96b5-4301-b98a-f33b69937411] Running
	I1104 12:12:31.547633   86301 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-036892" [483cebd0-7ceb-4bf4-b1f9-e33be61b136e] Running
	I1104 12:12:31.547637   86301 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-036892" [c2dc4343-177a-4a4c-8a25-47078ec664f1] Running
	I1104 12:12:31.547640   86301 system_pods.go:61] "kube-proxy-j2srm" [9450cebd-aefb-4f1a-bb99-7d1dab054dd7] Running
	I1104 12:12:31.547643   86301 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-036892" [505d8202-5e02-4abd-9eff-163810a91eb2] Running
	I1104 12:12:31.547649   86301 system_pods.go:61] "metrics-server-6867b74b74-2wl94" [7f7cc9c1-420c-480e-b6b7-1a2027bf2f9d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1104 12:12:31.547653   86301 system_pods.go:61] "storage-provisioner" [18745f89-fc15-4a4c-b68b-7e80cd4f393b] Running
	I1104 12:12:31.547661   86301 system_pods.go:74] duration metric: took 3.743079115s to wait for pod list to return data ...
	I1104 12:12:31.547667   86301 default_sa.go:34] waiting for default service account to be created ...
	I1104 12:12:31.550088   86301 default_sa.go:45] found service account: "default"
	I1104 12:12:31.550108   86301 default_sa.go:55] duration metric: took 2.435317ms for default service account to be created ...
	I1104 12:12:31.550114   86301 system_pods.go:116] waiting for k8s-apps to be running ...
	I1104 12:12:31.554898   86301 system_pods.go:86] 8 kube-system pods found
	I1104 12:12:31.554924   86301 system_pods.go:89] "coredns-7c65d6cfc9-zw2tv" [71ce75a4-f051-4014-9ed0-7b275ea940a9] Running
	I1104 12:12:31.554929   86301 system_pods.go:89] "etcd-default-k8s-diff-port-036892" [7e46d97c-96b5-4301-b98a-f33b69937411] Running
	I1104 12:12:31.554933   86301 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-036892" [483cebd0-7ceb-4bf4-b1f9-e33be61b136e] Running
	I1104 12:12:31.554937   86301 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-036892" [c2dc4343-177a-4a4c-8a25-47078ec664f1] Running
	I1104 12:12:31.554941   86301 system_pods.go:89] "kube-proxy-j2srm" [9450cebd-aefb-4f1a-bb99-7d1dab054dd7] Running
	I1104 12:12:31.554945   86301 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-036892" [505d8202-5e02-4abd-9eff-163810a91eb2] Running
	I1104 12:12:31.554952   86301 system_pods.go:89] "metrics-server-6867b74b74-2wl94" [7f7cc9c1-420c-480e-b6b7-1a2027bf2f9d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1104 12:12:31.554955   86301 system_pods.go:89] "storage-provisioner" [18745f89-fc15-4a4c-b68b-7e80cd4f393b] Running
	I1104 12:12:31.554962   86301 system_pods.go:126] duration metric: took 4.842911ms to wait for k8s-apps to be running ...
	I1104 12:12:31.554968   86301 system_svc.go:44] waiting for kubelet service to be running ....
	I1104 12:12:31.555008   86301 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1104 12:12:31.568927   86301 system_svc.go:56] duration metric: took 13.948557ms WaitForService to wait for kubelet
	I1104 12:12:31.568958   86301 kubeadm.go:582] duration metric: took 4m24.342075873s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1104 12:12:31.568987   86301 node_conditions.go:102] verifying NodePressure condition ...
	I1104 12:12:31.571962   86301 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1104 12:12:31.571983   86301 node_conditions.go:123] node cpu capacity is 2
	I1104 12:12:31.571993   86301 node_conditions.go:105] duration metric: took 3.000591ms to run NodePressure ...
	I1104 12:12:31.572004   86301 start.go:241] waiting for startup goroutines ...
	I1104 12:12:31.572010   86301 start.go:246] waiting for cluster config update ...
	I1104 12:12:31.572019   86301 start.go:255] writing updated cluster config ...
	I1104 12:12:31.572277   86301 ssh_runner.go:195] Run: rm -f paused
	I1104 12:12:31.620935   86301 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1104 12:12:31.623672   86301 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-036892" cluster and "default" namespace by default
	I1104 12:12:28.876306   86402 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.732783523s)
	I1104 12:12:28.876377   86402 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1104 12:12:28.890455   86402 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1104 12:12:28.899660   86402 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1104 12:12:28.908658   86402 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1104 12:12:28.908675   86402 kubeadm.go:157] found existing configuration files:
	
	I1104 12:12:28.908715   86402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1104 12:12:28.916955   86402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1104 12:12:28.917013   86402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1104 12:12:28.927198   86402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1104 12:12:28.936868   86402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1104 12:12:28.936924   86402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1104 12:12:28.947246   86402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1104 12:12:28.956962   86402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1104 12:12:28.957015   86402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1104 12:12:28.967293   86402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1104 12:12:28.976975   86402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1104 12:12:28.977030   86402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1104 12:12:28.988547   86402 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1104 12:12:29.198333   86402 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1104 12:12:31.709511   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:34.207341   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:36.707962   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:39.208138   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:41.208806   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:43.707896   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:46.207316   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:48.707107   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:50.707644   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:52.708268   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:54.708517   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:57.206564   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:59.207122   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:13:01.207195   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:13:03.207617   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:13:05.707763   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:13:07.708314   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:13:09.708374   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:13:10.702085   85500 pod_ready.go:82] duration metric: took 4m0.000587313s for pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace to be "Ready" ...
	E1104 12:13:10.702115   85500 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1104 12:13:10.702126   85500 pod_ready.go:39] duration metric: took 4m5.542549912s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1104 12:13:10.702144   85500 api_server.go:52] waiting for apiserver process to appear ...
	I1104 12:13:10.702191   85500 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:13:10.702246   85500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:13:10.743079   85500 cri.go:89] found id: "e74398c77b3ca0adb85872c2f97209b51f4c13968147f500a41590f72d758dea"
	I1104 12:13:10.743102   85500 cri.go:89] found id: ""
	I1104 12:13:10.743110   85500 logs.go:282] 1 containers: [e74398c77b3ca0adb85872c2f97209b51f4c13968147f500a41590f72d758dea]
	I1104 12:13:10.743176   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:13:10.747213   85500 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:13:10.747275   85500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:13:10.781435   85500 cri.go:89] found id: "1390676564c7e4039f73139668fc6a3c321c186b612f1f1837e21788a5c0aa82"
	I1104 12:13:10.781465   85500 cri.go:89] found id: ""
	I1104 12:13:10.781474   85500 logs.go:282] 1 containers: [1390676564c7e4039f73139668fc6a3c321c186b612f1f1837e21788a5c0aa82]
	I1104 12:13:10.781597   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:13:10.785383   85500 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:13:10.785453   85500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:13:10.825927   85500 cri.go:89] found id: "6dcd13443296397949a6314fd6007ac667c05b70bd43f14e2e9b54f3313440de"
	I1104 12:13:10.825956   85500 cri.go:89] found id: ""
	I1104 12:13:10.825965   85500 logs.go:282] 1 containers: [6dcd13443296397949a6314fd6007ac667c05b70bd43f14e2e9b54f3313440de]
	I1104 12:13:10.826023   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:13:10.829834   85500 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:13:10.829899   85500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:13:10.872447   85500 cri.go:89] found id: "5546d06c4d51e3fa5699a7351286904174305e9c759397f8d2f640c6ce17d456"
	I1104 12:13:10.872468   85500 cri.go:89] found id: ""
	I1104 12:13:10.872475   85500 logs.go:282] 1 containers: [5546d06c4d51e3fa5699a7351286904174305e9c759397f8d2f640c6ce17d456]
	I1104 12:13:10.872524   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:13:10.876428   85500 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:13:10.876483   85500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:13:10.911092   85500 cri.go:89] found id: "33418a9cb2f8aea69efe66db3f38e3a66b699fea5455b72e6b94484067b704a3"
	I1104 12:13:10.911125   85500 cri.go:89] found id: ""
	I1104 12:13:10.911134   85500 logs.go:282] 1 containers: [33418a9cb2f8aea69efe66db3f38e3a66b699fea5455b72e6b94484067b704a3]
	I1104 12:13:10.911190   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:13:10.915021   85500 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:13:10.915076   85500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:13:10.950838   85500 cri.go:89] found id: "9c3fa7870c72468d3a6283be42fb2724275d8c5937f6728c8bc97103c58b2ebd"
	I1104 12:13:10.950863   85500 cri.go:89] found id: ""
	I1104 12:13:10.950873   85500 logs.go:282] 1 containers: [9c3fa7870c72468d3a6283be42fb2724275d8c5937f6728c8bc97103c58b2ebd]
	I1104 12:13:10.950935   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:13:10.954889   85500 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:13:10.954938   85500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:13:10.991580   85500 cri.go:89] found id: ""
	I1104 12:13:10.991609   85500 logs.go:282] 0 containers: []
	W1104 12:13:10.991618   85500 logs.go:284] No container was found matching "kindnet"
	I1104 12:13:10.991625   85500 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1104 12:13:10.991689   85500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1104 12:13:11.031428   85500 cri.go:89] found id: "d4f6c824f92eecdb4ebc13e783585c25292f412e0f0a0d7b9fd0eab092fa8e41"
	I1104 12:13:11.031469   85500 cri.go:89] found id: "162e3330ff77f795d15e11b3abf1d3019490bff78148c8d6b470ef1507dcb67d"
	I1104 12:13:11.031474   85500 cri.go:89] found id: ""
	I1104 12:13:11.031484   85500 logs.go:282] 2 containers: [d4f6c824f92eecdb4ebc13e783585c25292f412e0f0a0d7b9fd0eab092fa8e41 162e3330ff77f795d15e11b3abf1d3019490bff78148c8d6b470ef1507dcb67d]
	I1104 12:13:11.031557   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:13:11.035810   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:13:11.039555   85500 logs.go:123] Gathering logs for coredns [6dcd13443296397949a6314fd6007ac667c05b70bd43f14e2e9b54f3313440de] ...
	I1104 12:13:11.039582   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6dcd13443296397949a6314fd6007ac667c05b70bd43f14e2e9b54f3313440de"
	I1104 12:13:11.076837   85500 logs.go:123] Gathering logs for kube-scheduler [5546d06c4d51e3fa5699a7351286904174305e9c759397f8d2f640c6ce17d456] ...
	I1104 12:13:11.076865   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5546d06c4d51e3fa5699a7351286904174305e9c759397f8d2f640c6ce17d456"
	I1104 12:13:11.114534   85500 logs.go:123] Gathering logs for kube-proxy [33418a9cb2f8aea69efe66db3f38e3a66b699fea5455b72e6b94484067b704a3] ...
	I1104 12:13:11.114561   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33418a9cb2f8aea69efe66db3f38e3a66b699fea5455b72e6b94484067b704a3"
	I1104 12:13:11.148897   85500 logs.go:123] Gathering logs for storage-provisioner [d4f6c824f92eecdb4ebc13e783585c25292f412e0f0a0d7b9fd0eab092fa8e41] ...
	I1104 12:13:11.148935   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d4f6c824f92eecdb4ebc13e783585c25292f412e0f0a0d7b9fd0eab092fa8e41"
	I1104 12:13:11.184480   85500 logs.go:123] Gathering logs for kubelet ...
	I1104 12:13:11.184511   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:13:11.256197   85500 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:13:11.256237   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1104 12:13:11.368984   85500 logs.go:123] Gathering logs for kube-apiserver [e74398c77b3ca0adb85872c2f97209b51f4c13968147f500a41590f72d758dea] ...
	I1104 12:13:11.369014   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e74398c77b3ca0adb85872c2f97209b51f4c13968147f500a41590f72d758dea"
	I1104 12:13:11.414219   85500 logs.go:123] Gathering logs for etcd [1390676564c7e4039f73139668fc6a3c321c186b612f1f1837e21788a5c0aa82] ...
	I1104 12:13:11.414253   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1390676564c7e4039f73139668fc6a3c321c186b612f1f1837e21788a5c0aa82"
	I1104 12:13:11.455746   85500 logs.go:123] Gathering logs for storage-provisioner [162e3330ff77f795d15e11b3abf1d3019490bff78148c8d6b470ef1507dcb67d] ...
	I1104 12:13:11.455776   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 162e3330ff77f795d15e11b3abf1d3019490bff78148c8d6b470ef1507dcb67d"
	I1104 12:13:11.491699   85500 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:13:11.491726   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:13:11.962368   85500 logs.go:123] Gathering logs for dmesg ...
	I1104 12:13:11.962400   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:13:11.975564   85500 logs.go:123] Gathering logs for kube-controller-manager [9c3fa7870c72468d3a6283be42fb2724275d8c5937f6728c8bc97103c58b2ebd] ...
	I1104 12:13:11.975590   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9c3fa7870c72468d3a6283be42fb2724275d8c5937f6728c8bc97103c58b2ebd"
	I1104 12:13:12.031427   85500 logs.go:123] Gathering logs for container status ...
	I1104 12:13:12.031461   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:13:14.572933   85500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:13:14.588140   85500 api_server.go:72] duration metric: took 4m17.141131339s to wait for apiserver process to appear ...
	I1104 12:13:14.588168   85500 api_server.go:88] waiting for apiserver healthz status ...
	I1104 12:13:14.588196   85500 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:13:14.588243   85500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:13:14.621509   85500 cri.go:89] found id: "e74398c77b3ca0adb85872c2f97209b51f4c13968147f500a41590f72d758dea"
	I1104 12:13:14.621534   85500 cri.go:89] found id: ""
	I1104 12:13:14.621543   85500 logs.go:282] 1 containers: [e74398c77b3ca0adb85872c2f97209b51f4c13968147f500a41590f72d758dea]
	I1104 12:13:14.621601   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:13:14.626328   85500 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:13:14.626384   85500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:13:14.662052   85500 cri.go:89] found id: "1390676564c7e4039f73139668fc6a3c321c186b612f1f1837e21788a5c0aa82"
	I1104 12:13:14.662079   85500 cri.go:89] found id: ""
	I1104 12:13:14.662115   85500 logs.go:282] 1 containers: [1390676564c7e4039f73139668fc6a3c321c186b612f1f1837e21788a5c0aa82]
	I1104 12:13:14.662174   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:13:14.666018   85500 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:13:14.666089   85500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:13:14.702872   85500 cri.go:89] found id: "6dcd13443296397949a6314fd6007ac667c05b70bd43f14e2e9b54f3313440de"
	I1104 12:13:14.702897   85500 cri.go:89] found id: ""
	I1104 12:13:14.702910   85500 logs.go:282] 1 containers: [6dcd13443296397949a6314fd6007ac667c05b70bd43f14e2e9b54f3313440de]
	I1104 12:13:14.702968   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:13:14.706809   85500 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:13:14.706883   85500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:13:14.744985   85500 cri.go:89] found id: "5546d06c4d51e3fa5699a7351286904174305e9c759397f8d2f640c6ce17d456"
	I1104 12:13:14.745005   85500 cri.go:89] found id: ""
	I1104 12:13:14.745012   85500 logs.go:282] 1 containers: [5546d06c4d51e3fa5699a7351286904174305e9c759397f8d2f640c6ce17d456]
	I1104 12:13:14.745058   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:13:14.749441   85500 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:13:14.749497   85500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:13:14.781617   85500 cri.go:89] found id: "33418a9cb2f8aea69efe66db3f38e3a66b699fea5455b72e6b94484067b704a3"
	I1104 12:13:14.781644   85500 cri.go:89] found id: ""
	I1104 12:13:14.781653   85500 logs.go:282] 1 containers: [33418a9cb2f8aea69efe66db3f38e3a66b699fea5455b72e6b94484067b704a3]
	I1104 12:13:14.781709   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:13:14.785971   85500 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:13:14.786046   85500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:13:14.819002   85500 cri.go:89] found id: "9c3fa7870c72468d3a6283be42fb2724275d8c5937f6728c8bc97103c58b2ebd"
	I1104 12:13:14.819029   85500 cri.go:89] found id: ""
	I1104 12:13:14.819038   85500 logs.go:282] 1 containers: [9c3fa7870c72468d3a6283be42fb2724275d8c5937f6728c8bc97103c58b2ebd]
	I1104 12:13:14.819101   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:13:14.823075   85500 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:13:14.823143   85500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:13:14.858936   85500 cri.go:89] found id: ""
	I1104 12:13:14.858965   85500 logs.go:282] 0 containers: []
	W1104 12:13:14.858977   85500 logs.go:284] No container was found matching "kindnet"
	I1104 12:13:14.858984   85500 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1104 12:13:14.859048   85500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1104 12:13:14.898303   85500 cri.go:89] found id: "d4f6c824f92eecdb4ebc13e783585c25292f412e0f0a0d7b9fd0eab092fa8e41"
	I1104 12:13:14.898327   85500 cri.go:89] found id: "162e3330ff77f795d15e11b3abf1d3019490bff78148c8d6b470ef1507dcb67d"
	I1104 12:13:14.898333   85500 cri.go:89] found id: ""
	I1104 12:13:14.898341   85500 logs.go:282] 2 containers: [d4f6c824f92eecdb4ebc13e783585c25292f412e0f0a0d7b9fd0eab092fa8e41 162e3330ff77f795d15e11b3abf1d3019490bff78148c8d6b470ef1507dcb67d]
	I1104 12:13:14.898402   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:13:14.902325   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:13:14.905855   85500 logs.go:123] Gathering logs for kubelet ...
	I1104 12:13:14.905880   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:13:14.973356   85500 logs.go:123] Gathering logs for dmesg ...
	I1104 12:13:14.973389   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:13:14.988655   85500 logs.go:123] Gathering logs for kube-scheduler [5546d06c4d51e3fa5699a7351286904174305e9c759397f8d2f640c6ce17d456] ...
	I1104 12:13:14.988696   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5546d06c4d51e3fa5699a7351286904174305e9c759397f8d2f640c6ce17d456"
	I1104 12:13:15.023407   85500 logs.go:123] Gathering logs for kube-controller-manager [9c3fa7870c72468d3a6283be42fb2724275d8c5937f6728c8bc97103c58b2ebd] ...
	I1104 12:13:15.023443   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9c3fa7870c72468d3a6283be42fb2724275d8c5937f6728c8bc97103c58b2ebd"
	I1104 12:13:15.078974   85500 logs.go:123] Gathering logs for storage-provisioner [162e3330ff77f795d15e11b3abf1d3019490bff78148c8d6b470ef1507dcb67d] ...
	I1104 12:13:15.079007   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 162e3330ff77f795d15e11b3abf1d3019490bff78148c8d6b470ef1507dcb67d"
	I1104 12:13:15.114147   85500 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:13:15.114180   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:13:15.559434   85500 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:13:15.559477   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1104 12:13:15.666481   85500 logs.go:123] Gathering logs for kube-apiserver [e74398c77b3ca0adb85872c2f97209b51f4c13968147f500a41590f72d758dea] ...
	I1104 12:13:15.666509   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e74398c77b3ca0adb85872c2f97209b51f4c13968147f500a41590f72d758dea"
	I1104 12:13:15.728066   85500 logs.go:123] Gathering logs for etcd [1390676564c7e4039f73139668fc6a3c321c186b612f1f1837e21788a5c0aa82] ...
	I1104 12:13:15.728101   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1390676564c7e4039f73139668fc6a3c321c186b612f1f1837e21788a5c0aa82"
	I1104 12:13:15.769721   85500 logs.go:123] Gathering logs for coredns [6dcd13443296397949a6314fd6007ac667c05b70bd43f14e2e9b54f3313440de] ...
	I1104 12:13:15.769759   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6dcd13443296397949a6314fd6007ac667c05b70bd43f14e2e9b54f3313440de"
	I1104 12:13:15.802131   85500 logs.go:123] Gathering logs for kube-proxy [33418a9cb2f8aea69efe66db3f38e3a66b699fea5455b72e6b94484067b704a3] ...
	I1104 12:13:15.802170   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33418a9cb2f8aea69efe66db3f38e3a66b699fea5455b72e6b94484067b704a3"
	I1104 12:13:15.837613   85500 logs.go:123] Gathering logs for storage-provisioner [d4f6c824f92eecdb4ebc13e783585c25292f412e0f0a0d7b9fd0eab092fa8e41] ...
	I1104 12:13:15.837639   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d4f6c824f92eecdb4ebc13e783585c25292f412e0f0a0d7b9fd0eab092fa8e41"
	I1104 12:13:15.874374   85500 logs.go:123] Gathering logs for container status ...
	I1104 12:13:15.874407   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:13:18.413199   85500 api_server.go:253] Checking apiserver healthz at https://192.168.61.91:8443/healthz ...
	I1104 12:13:18.418522   85500 api_server.go:279] https://192.168.61.91:8443/healthz returned 200:
	ok
	I1104 12:13:18.419487   85500 api_server.go:141] control plane version: v1.31.2
	I1104 12:13:18.419512   85500 api_server.go:131] duration metric: took 3.831337085s to wait for apiserver health ...
	I1104 12:13:18.419521   85500 system_pods.go:43] waiting for kube-system pods to appear ...
	I1104 12:13:18.419549   85500 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:13:18.419605   85500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:13:18.453835   85500 cri.go:89] found id: "e74398c77b3ca0adb85872c2f97209b51f4c13968147f500a41590f72d758dea"
	I1104 12:13:18.453856   85500 cri.go:89] found id: ""
	I1104 12:13:18.453865   85500 logs.go:282] 1 containers: [e74398c77b3ca0adb85872c2f97209b51f4c13968147f500a41590f72d758dea]
	I1104 12:13:18.453927   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:13:18.458136   85500 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:13:18.458198   85500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:13:18.496587   85500 cri.go:89] found id: "1390676564c7e4039f73139668fc6a3c321c186b612f1f1837e21788a5c0aa82"
	I1104 12:13:18.496623   85500 cri.go:89] found id: ""
	I1104 12:13:18.496634   85500 logs.go:282] 1 containers: [1390676564c7e4039f73139668fc6a3c321c186b612f1f1837e21788a5c0aa82]
	I1104 12:13:18.496691   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:13:18.500451   85500 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:13:18.500523   85500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:13:18.532756   85500 cri.go:89] found id: "6dcd13443296397949a6314fd6007ac667c05b70bd43f14e2e9b54f3313440de"
	I1104 12:13:18.532785   85500 cri.go:89] found id: ""
	I1104 12:13:18.532795   85500 logs.go:282] 1 containers: [6dcd13443296397949a6314fd6007ac667c05b70bd43f14e2e9b54f3313440de]
	I1104 12:13:18.532857   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:13:18.537239   85500 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:13:18.537293   85500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:13:18.569348   85500 cri.go:89] found id: "5546d06c4d51e3fa5699a7351286904174305e9c759397f8d2f640c6ce17d456"
	I1104 12:13:18.569374   85500 cri.go:89] found id: ""
	I1104 12:13:18.569382   85500 logs.go:282] 1 containers: [5546d06c4d51e3fa5699a7351286904174305e9c759397f8d2f640c6ce17d456]
	I1104 12:13:18.569440   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:13:18.573491   85500 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:13:18.573563   85500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:13:18.606857   85500 cri.go:89] found id: "33418a9cb2f8aea69efe66db3f38e3a66b699fea5455b72e6b94484067b704a3"
	I1104 12:13:18.606886   85500 cri.go:89] found id: ""
	I1104 12:13:18.606896   85500 logs.go:282] 1 containers: [33418a9cb2f8aea69efe66db3f38e3a66b699fea5455b72e6b94484067b704a3]
	I1104 12:13:18.606951   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:13:18.611158   85500 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:13:18.611229   85500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:13:18.645448   85500 cri.go:89] found id: "9c3fa7870c72468d3a6283be42fb2724275d8c5937f6728c8bc97103c58b2ebd"
	I1104 12:13:18.645467   85500 cri.go:89] found id: ""
	I1104 12:13:18.645474   85500 logs.go:282] 1 containers: [9c3fa7870c72468d3a6283be42fb2724275d8c5937f6728c8bc97103c58b2ebd]
	I1104 12:13:18.645527   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:13:18.649014   85500 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:13:18.649062   85500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:13:18.693641   85500 cri.go:89] found id: ""
	I1104 12:13:18.693668   85500 logs.go:282] 0 containers: []
	W1104 12:13:18.693676   85500 logs.go:284] No container was found matching "kindnet"
	I1104 12:13:18.693681   85500 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1104 12:13:18.693728   85500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1104 12:13:18.733668   85500 cri.go:89] found id: "d4f6c824f92eecdb4ebc13e783585c25292f412e0f0a0d7b9fd0eab092fa8e41"
	I1104 12:13:18.733690   85500 cri.go:89] found id: "162e3330ff77f795d15e11b3abf1d3019490bff78148c8d6b470ef1507dcb67d"
	I1104 12:13:18.733695   85500 cri.go:89] found id: ""
	I1104 12:13:18.733702   85500 logs.go:282] 2 containers: [d4f6c824f92eecdb4ebc13e783585c25292f412e0f0a0d7b9fd0eab092fa8e41 162e3330ff77f795d15e11b3abf1d3019490bff78148c8d6b470ef1507dcb67d]
	I1104 12:13:18.733745   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:13:18.737419   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:13:18.740993   85500 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:13:18.741014   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:13:19.135942   85500 logs.go:123] Gathering logs for kubelet ...
	I1104 12:13:19.135980   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:13:19.206586   85500 logs.go:123] Gathering logs for dmesg ...
	I1104 12:13:19.206623   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:13:19.222135   85500 logs.go:123] Gathering logs for etcd [1390676564c7e4039f73139668fc6a3c321c186b612f1f1837e21788a5c0aa82] ...
	I1104 12:13:19.222164   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1390676564c7e4039f73139668fc6a3c321c186b612f1f1837e21788a5c0aa82"
	I1104 12:13:19.262746   85500 logs.go:123] Gathering logs for kube-scheduler [5546d06c4d51e3fa5699a7351286904174305e9c759397f8d2f640c6ce17d456] ...
	I1104 12:13:19.262774   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5546d06c4d51e3fa5699a7351286904174305e9c759397f8d2f640c6ce17d456"
	I1104 12:13:19.298259   85500 logs.go:123] Gathering logs for kube-proxy [33418a9cb2f8aea69efe66db3f38e3a66b699fea5455b72e6b94484067b704a3] ...
	I1104 12:13:19.298287   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33418a9cb2f8aea69efe66db3f38e3a66b699fea5455b72e6b94484067b704a3"
	I1104 12:13:19.338304   85500 logs.go:123] Gathering logs for storage-provisioner [d4f6c824f92eecdb4ebc13e783585c25292f412e0f0a0d7b9fd0eab092fa8e41] ...
	I1104 12:13:19.338332   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d4f6c824f92eecdb4ebc13e783585c25292f412e0f0a0d7b9fd0eab092fa8e41"
	I1104 12:13:19.375163   85500 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:13:19.375195   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1104 12:13:19.478206   85500 logs.go:123] Gathering logs for kube-apiserver [e74398c77b3ca0adb85872c2f97209b51f4c13968147f500a41590f72d758dea] ...
	I1104 12:13:19.478234   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e74398c77b3ca0adb85872c2f97209b51f4c13968147f500a41590f72d758dea"
	I1104 12:13:19.526261   85500 logs.go:123] Gathering logs for coredns [6dcd13443296397949a6314fd6007ac667c05b70bd43f14e2e9b54f3313440de] ...
	I1104 12:13:19.526291   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6dcd13443296397949a6314fd6007ac667c05b70bd43f14e2e9b54f3313440de"
	I1104 12:13:19.559922   85500 logs.go:123] Gathering logs for kube-controller-manager [9c3fa7870c72468d3a6283be42fb2724275d8c5937f6728c8bc97103c58b2ebd] ...
	I1104 12:13:19.559954   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9c3fa7870c72468d3a6283be42fb2724275d8c5937f6728c8bc97103c58b2ebd"
	I1104 12:13:19.609848   85500 logs.go:123] Gathering logs for storage-provisioner [162e3330ff77f795d15e11b3abf1d3019490bff78148c8d6b470ef1507dcb67d] ...
	I1104 12:13:19.609879   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 162e3330ff77f795d15e11b3abf1d3019490bff78148c8d6b470ef1507dcb67d"
	I1104 12:13:19.648804   85500 logs.go:123] Gathering logs for container status ...
	I1104 12:13:19.648829   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:13:22.210690   85500 system_pods.go:59] 8 kube-system pods found
	I1104 12:13:22.210718   85500 system_pods.go:61] "coredns-7c65d6cfc9-vv4kq" [f2518f86-9653-4e98-9193-9d2a76838117] Running
	I1104 12:13:22.210723   85500 system_pods.go:61] "etcd-no-preload-908370" [cc23ebc2-c49f-403c-8128-98bb08459592] Running
	I1104 12:13:22.210727   85500 system_pods.go:61] "kube-apiserver-no-preload-908370" [37532b3e-f683-4420-a5e4-280744f2bdf9] Running
	I1104 12:13:22.210730   85500 system_pods.go:61] "kube-controller-manager-no-preload-908370" [81d30255-758e-4661-bec2-c6aa6773923a] Running
	I1104 12:13:22.210733   85500 system_pods.go:61] "kube-proxy-w9hbz" [9d494697-ff2b-4600-9c11-b704de9be2a3] Running
	I1104 12:13:22.210737   85500 system_pods.go:61] "kube-scheduler-no-preload-908370" [9b0ff34e-1795-4f7c-b511-822a02c4af7b] Running
	I1104 12:13:22.210752   85500 system_pods.go:61] "metrics-server-6867b74b74-2lxlg" [bf328856-ad19-47b3-a40d-282cd4fdec4b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1104 12:13:22.210758   85500 system_pods.go:61] "storage-provisioner" [d11c9416-6236-4c81-9626-d5e040acea8a] Running
	I1104 12:13:22.210768   85500 system_pods.go:74] duration metric: took 3.791240483s to wait for pod list to return data ...
	I1104 12:13:22.210780   85500 default_sa.go:34] waiting for default service account to be created ...
	I1104 12:13:22.213688   85500 default_sa.go:45] found service account: "default"
	I1104 12:13:22.213709   85500 default_sa.go:55] duration metric: took 2.921691ms for default service account to be created ...
	I1104 12:13:22.213717   85500 system_pods.go:116] waiting for k8s-apps to be running ...
	I1104 12:13:22.219436   85500 system_pods.go:86] 8 kube-system pods found
	I1104 12:13:22.219466   85500 system_pods.go:89] "coredns-7c65d6cfc9-vv4kq" [f2518f86-9653-4e98-9193-9d2a76838117] Running
	I1104 12:13:22.219475   85500 system_pods.go:89] "etcd-no-preload-908370" [cc23ebc2-c49f-403c-8128-98bb08459592] Running
	I1104 12:13:22.219480   85500 system_pods.go:89] "kube-apiserver-no-preload-908370" [37532b3e-f683-4420-a5e4-280744f2bdf9] Running
	I1104 12:13:22.219489   85500 system_pods.go:89] "kube-controller-manager-no-preload-908370" [81d30255-758e-4661-bec2-c6aa6773923a] Running
	I1104 12:13:22.219495   85500 system_pods.go:89] "kube-proxy-w9hbz" [9d494697-ff2b-4600-9c11-b704de9be2a3] Running
	I1104 12:13:22.219501   85500 system_pods.go:89] "kube-scheduler-no-preload-908370" [9b0ff34e-1795-4f7c-b511-822a02c4af7b] Running
	I1104 12:13:22.219512   85500 system_pods.go:89] "metrics-server-6867b74b74-2lxlg" [bf328856-ad19-47b3-a40d-282cd4fdec4b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1104 12:13:22.219523   85500 system_pods.go:89] "storage-provisioner" [d11c9416-6236-4c81-9626-d5e040acea8a] Running
	I1104 12:13:22.219537   85500 system_pods.go:126] duration metric: took 5.813462ms to wait for k8s-apps to be running ...
	I1104 12:13:22.219551   85500 system_svc.go:44] waiting for kubelet service to be running ....
	I1104 12:13:22.219612   85500 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1104 12:13:22.232887   85500 system_svc.go:56] duration metric: took 13.328078ms WaitForService to wait for kubelet
	I1104 12:13:22.232918   85500 kubeadm.go:582] duration metric: took 4m24.785911082s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1104 12:13:22.232941   85500 node_conditions.go:102] verifying NodePressure condition ...
	I1104 12:13:22.235641   85500 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1104 12:13:22.235662   85500 node_conditions.go:123] node cpu capacity is 2
	I1104 12:13:22.235675   85500 node_conditions.go:105] duration metric: took 2.728232ms to run NodePressure ...
	I1104 12:13:22.235687   85500 start.go:241] waiting for startup goroutines ...
	I1104 12:13:22.235695   85500 start.go:246] waiting for cluster config update ...
	I1104 12:13:22.235707   85500 start.go:255] writing updated cluster config ...
	I1104 12:13:22.235962   85500 ssh_runner.go:195] Run: rm -f paused
	I1104 12:13:22.284583   85500 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1104 12:13:22.287448   85500 out.go:177] * Done! kubectl is now configured to use "no-preload-908370" cluster and "default" namespace by default
	I1104 12:14:25.090113   86402 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1104 12:14:25.090254   86402 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1104 12:14:25.091997   86402 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1104 12:14:25.092065   86402 kubeadm.go:310] [preflight] Running pre-flight checks
	I1104 12:14:25.092204   86402 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1104 12:14:25.092341   86402 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1104 12:14:25.092480   86402 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1104 12:14:25.092569   86402 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1104 12:14:25.094485   86402 out.go:235]   - Generating certificates and keys ...
	I1104 12:14:25.094582   86402 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1104 12:14:25.094664   86402 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1104 12:14:25.094799   86402 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1104 12:14:25.094891   86402 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1104 12:14:25.095003   86402 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1104 12:14:25.095086   86402 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1104 12:14:25.095186   86402 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1104 12:14:25.095240   86402 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1104 12:14:25.095319   86402 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1104 12:14:25.095403   86402 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1104 12:14:25.095481   86402 kubeadm.go:310] [certs] Using the existing "sa" key
	I1104 12:14:25.095554   86402 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1104 12:14:25.095614   86402 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1104 12:14:25.095676   86402 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1104 12:14:25.095752   86402 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1104 12:14:25.095828   86402 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1104 12:14:25.095970   86402 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1104 12:14:25.096102   86402 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1104 12:14:25.096169   86402 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1104 12:14:25.096262   86402 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1104 12:14:25.097799   86402 out.go:235]   - Booting up control plane ...
	I1104 12:14:25.097920   86402 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1104 12:14:25.098018   86402 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1104 12:14:25.098126   86402 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1104 12:14:25.098211   86402 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1104 12:14:25.098333   86402 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1104 12:14:25.098393   86402 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1104 12:14:25.098487   86402 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1104 12:14:25.098633   86402 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1104 12:14:25.098690   86402 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1104 12:14:25.098940   86402 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1104 12:14:25.099074   86402 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1104 12:14:25.099307   86402 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1104 12:14:25.099370   86402 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1104 12:14:25.099528   86402 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1104 12:14:25.099582   86402 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1104 12:14:25.099740   86402 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1104 12:14:25.099758   86402 kubeadm.go:310] 
	I1104 12:14:25.099815   86402 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1104 12:14:25.099880   86402 kubeadm.go:310] 		timed out waiting for the condition
	I1104 12:14:25.099889   86402 kubeadm.go:310] 
	I1104 12:14:25.099923   86402 kubeadm.go:310] 	This error is likely caused by:
	I1104 12:14:25.099952   86402 kubeadm.go:310] 		- The kubelet is not running
	I1104 12:14:25.100036   86402 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1104 12:14:25.100044   86402 kubeadm.go:310] 
	I1104 12:14:25.100197   86402 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1104 12:14:25.100237   86402 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1104 12:14:25.100267   86402 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1104 12:14:25.100273   86402 kubeadm.go:310] 
	I1104 12:14:25.100367   86402 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1104 12:14:25.100454   86402 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1104 12:14:25.100468   86402 kubeadm.go:310] 
	I1104 12:14:25.100600   86402 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1104 12:14:25.100718   86402 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1104 12:14:25.100821   86402 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1104 12:14:25.100903   86402 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1104 12:14:25.100970   86402 kubeadm.go:310] 
	W1104 12:14:25.101033   86402 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1104 12:14:25.101071   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1104 12:14:25.536184   86402 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1104 12:14:25.550453   86402 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1104 12:14:25.560308   86402 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1104 12:14:25.560327   86402 kubeadm.go:157] found existing configuration files:
	
	I1104 12:14:25.560368   86402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1104 12:14:25.569106   86402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1104 12:14:25.569189   86402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1104 12:14:25.578395   86402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1104 12:14:25.587402   86402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1104 12:14:25.587473   86402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1104 12:14:25.596827   86402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1104 12:14:25.605359   86402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1104 12:14:25.605420   86402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1104 12:14:25.614266   86402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1104 12:14:25.622522   86402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1104 12:14:25.622582   86402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1104 12:14:25.631876   86402 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1104 12:14:25.701080   86402 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1104 12:14:25.701168   86402 kubeadm.go:310] [preflight] Running pre-flight checks
	I1104 12:14:25.833997   86402 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1104 12:14:25.834138   86402 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1104 12:14:25.834258   86402 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1104 12:14:26.009165   86402 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1104 12:14:26.011976   86402 out.go:235]   - Generating certificates and keys ...
	I1104 12:14:26.012090   86402 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1104 12:14:26.012183   86402 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1104 12:14:26.012333   86402 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1104 12:14:26.012422   86402 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1104 12:14:26.012532   86402 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1104 12:14:26.012619   86402 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1104 12:14:26.012689   86402 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1104 12:14:26.012748   86402 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1104 12:14:26.012851   86402 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1104 12:14:26.012978   86402 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1104 12:14:26.013025   86402 kubeadm.go:310] [certs] Using the existing "sa" key
	I1104 12:14:26.013102   86402 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1104 12:14:26.399153   86402 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1104 12:14:26.470449   86402 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1104 12:14:27.078991   86402 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1104 12:14:27.181622   86402 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1104 12:14:27.205149   86402 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1104 12:14:27.205300   86402 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1104 12:14:27.205383   86402 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1104 12:14:27.355614   86402 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1104 12:14:27.357678   86402 out.go:235]   - Booting up control plane ...
	I1104 12:14:27.357840   86402 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1104 12:14:27.363942   86402 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1104 12:14:27.365004   86402 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1104 12:14:27.367237   86402 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1104 12:14:27.368087   86402 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1104 12:15:07.369845   86402 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1104 12:15:07.370222   86402 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1104 12:15:07.370464   86402 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1104 12:15:12.370802   86402 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1104 12:15:12.371041   86402 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1104 12:15:22.371417   86402 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1104 12:15:22.371584   86402 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1104 12:15:42.371725   86402 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1104 12:15:42.371932   86402 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1104 12:16:22.370871   86402 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1104 12:16:22.371150   86402 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1104 12:16:22.371181   86402 kubeadm.go:310] 
	I1104 12:16:22.371222   86402 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1104 12:16:22.371297   86402 kubeadm.go:310] 		timed out waiting for the condition
	I1104 12:16:22.371309   86402 kubeadm.go:310] 
	I1104 12:16:22.371371   86402 kubeadm.go:310] 	This error is likely caused by:
	I1104 12:16:22.371435   86402 kubeadm.go:310] 		- The kubelet is not running
	I1104 12:16:22.371576   86402 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1104 12:16:22.371588   86402 kubeadm.go:310] 
	I1104 12:16:22.371726   86402 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1104 12:16:22.371784   86402 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1104 12:16:22.371863   86402 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1104 12:16:22.371879   86402 kubeadm.go:310] 
	I1104 12:16:22.372004   86402 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1104 12:16:22.372155   86402 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1104 12:16:22.372172   86402 kubeadm.go:310] 
	I1104 12:16:22.372338   86402 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1104 12:16:22.372435   86402 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1104 12:16:22.372566   86402 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1104 12:16:22.372680   86402 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1104 12:16:22.372718   86402 kubeadm.go:310] 
	I1104 12:16:22.372948   86402 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1104 12:16:22.373110   86402 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1104 12:16:22.373289   86402 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1104 12:16:22.373328   86402 kubeadm.go:394] duration metric: took 8m2.53443537s to StartCluster
	I1104 12:16:22.373379   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:16:22.373431   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:16:22.410373   86402 cri.go:89] found id: ""
	I1104 12:16:22.410409   86402 logs.go:282] 0 containers: []
	W1104 12:16:22.410418   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:16:22.410424   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:16:22.410485   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:16:22.447939   86402 cri.go:89] found id: ""
	I1104 12:16:22.447963   86402 logs.go:282] 0 containers: []
	W1104 12:16:22.447971   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:16:22.447977   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:16:22.448021   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:16:22.479234   86402 cri.go:89] found id: ""
	I1104 12:16:22.479263   86402 logs.go:282] 0 containers: []
	W1104 12:16:22.479274   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:16:22.479280   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:16:22.479341   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:16:22.512783   86402 cri.go:89] found id: ""
	I1104 12:16:22.512814   86402 logs.go:282] 0 containers: []
	W1104 12:16:22.512825   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:16:22.512832   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:16:22.512895   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:16:22.549483   86402 cri.go:89] found id: ""
	I1104 12:16:22.549510   86402 logs.go:282] 0 containers: []
	W1104 12:16:22.549520   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:16:22.549527   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:16:22.549593   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:16:22.582339   86402 cri.go:89] found id: ""
	I1104 12:16:22.582382   86402 logs.go:282] 0 containers: []
	W1104 12:16:22.582393   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:16:22.582402   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:16:22.582471   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:16:22.613545   86402 cri.go:89] found id: ""
	I1104 12:16:22.613574   86402 logs.go:282] 0 containers: []
	W1104 12:16:22.613585   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:16:22.613593   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:16:22.613656   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:16:22.644488   86402 cri.go:89] found id: ""
	I1104 12:16:22.644517   86402 logs.go:282] 0 containers: []
	W1104 12:16:22.644528   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:16:22.644539   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:16:22.644551   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:16:22.681138   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:16:22.681169   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:16:22.734551   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:16:22.734586   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:16:22.750140   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:16:22.750178   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:16:22.837631   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:16:22.837657   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:16:22.837673   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W1104 12:16:22.961154   86402 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1104 12:16:22.961221   86402 out.go:270] * 
	W1104 12:16:22.961295   86402 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1104 12:16:22.961310   86402 out.go:270] * 
	W1104 12:16:22.962053   86402 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1104 12:16:22.965021   86402 out.go:201] 
	W1104 12:16:22.966262   86402 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1104 12:16:22.966326   86402 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1104 12:16:22.966377   86402 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1104 12:16:22.967953   86402 out.go:201] 
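The run above exits with K8S_KUBELET_NOT_RUNNING, and the trace's own advice is to inspect the kubelet journal and retry with the kubelet cgroup driver pinned to systemd. A minimal sketch of that follow-up, with <profile> as a stand-in for the profile name used in this run (not shown in the excerpt) and the commands and flags otherwise taken from the trace itself:

  # Inspect the kubelet on the node, per the commands quoted in the trace
  minikube ssh -p <profile> -- sudo systemctl status kubelet
  minikube ssh -p <profile> -- sudo journalctl -xeu kubelet
  # List control-plane containers through CRI-O, per the trace
  minikube ssh -p <profile> -- sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
  # Retry with the cgroup-driver override suggested above
  minikube start -p <profile> --container-runtime=crio --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd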
	
	
	==> CRI-O <==
	Nov 04 12:22:24 no-preload-908370 crio[703]: time="2024-11-04 12:22:24.302755961Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730722944302735266,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=62768838-2557-4b52-bcd6-17f014f71f7b name=/runtime.v1.ImageService/ImageFsInfo
	Nov 04 12:22:24 no-preload-908370 crio[703]: time="2024-11-04 12:22:24.303925600Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=64334bca-467d-41f4-8d0e-0c9841aec874 name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 12:22:24 no-preload-908370 crio[703]: time="2024-11-04 12:22:24.303991265Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=64334bca-467d-41f4-8d0e-0c9841aec874 name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 12:22:24 no-preload-908370 crio[703]: time="2024-11-04 12:22:24.304195088Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d4f6c824f92eecdb4ebc13e783585c25292f412e0f0a0d7b9fd0eab092fa8e41,PodSandboxId:71b9c2ed6c6e155981398f1b0e2ea01fe6fa1e090814ec2859b6f705b8703c7c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730722166447073472,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d11c9416-6236-4c81-9626-d5e040acea8a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d88fa3ae4d36b499a8d6f18f4cca6442025a510017fc7729008bfb5b56c39cb5,PodSandboxId:0d05f2ac4365063d3cd2710a12624b520de2ef9d8bd085bfb67cba38c30a3906,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1730722145461257501,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 211134d2-72ed-4243-818e-81755db54f57,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6dcd13443296397949a6314fd6007ac667c05b70bd43f14e2e9b54f3313440de,PodSandboxId:7933cfebeb6afe3bb96349152367107d7427b22832bafb4f648d56a3df845af5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730722143333511955,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-vv4kq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2518f86-9653-4e98-9193-9d2a76838117,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33418a9cb2f8aea69efe66db3f38e3a66b699fea5455b72e6b94484067b704a3,PodSandboxId:9941f6065c0062fac156e7d39c07019811475186bb9a9ca02516002a86c0156f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730722135746244903,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w9hbz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d494697-ff2b-4600-9c
11-b704de9be2a3,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:162e3330ff77f795d15e11b3abf1d3019490bff78148c8d6b470ef1507dcb67d,PodSandboxId:71b9c2ed6c6e155981398f1b0e2ea01fe6fa1e090814ec2859b6f705b8703c7c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1730722135692369603,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d11c9416-6236-4c81-9626-d5e040acea
8a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1390676564c7e4039f73139668fc6a3c321c186b612f1f1837e21788a5c0aa82,PodSandboxId:033e135e95f2c7e1d82f90fb383c167b1a8dfd9f6624e30379e16e9f5075de0d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730722130930823542,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-908370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7eff80bc42a9693bbf2b1daa559d69a2,},Annotations:map[string]string{io.kuber
netes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e74398c77b3ca0adb85872c2f97209b51f4c13968147f500a41590f72d758dea,PodSandboxId:1ea43d435da914e034af9d2d37c4d064ab7aa027ee415bed08eecf36ccb3f1f8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730722130932750428,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-908370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19ac9ab9ae348d75e1aa7bf64e83b0e1,},Annotations:map[string]string{io.kubernetes.contain
er.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c3fa7870c72468d3a6283be42fb2724275d8c5937f6728c8bc97103c58b2ebd,PodSandboxId:52f547f09dd1b9e4463cc131cde74a2fc68c6f42c8bdf3623a262a6a879f2c71,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730722130884363361,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-908370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e9dfac04069601a52c15f5a2321bfff,},Annotations:map[string]string{io.kube
rnetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5546d06c4d51e3fa5699a7351286904174305e9c759397f8d2f640c6ce17d456,PodSandboxId:c02100f7b4561243c0f92a52bd9ef84896df70a17b0f0f7b3c0b0f155571d8fa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730722130878690593,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-908370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8f32f53f7238f9b51ee01846536440c,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=64334bca-467d-41f4-8d0e-0c9841aec874 name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 12:22:24 no-preload-908370 crio[703]: time="2024-11-04 12:22:24.341745202Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1e17883f-e798-49c6-bb94-d1c42d857cf2 name=/runtime.v1.RuntimeService/Version
	Nov 04 12:22:24 no-preload-908370 crio[703]: time="2024-11-04 12:22:24.341828418Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1e17883f-e798-49c6-bb94-d1c42d857cf2 name=/runtime.v1.RuntimeService/Version
	Nov 04 12:22:24 no-preload-908370 crio[703]: time="2024-11-04 12:22:24.344343319Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fc0f907f-b350-4ed2-ade4-930f3323b58c name=/runtime.v1.ImageService/ImageFsInfo
	Nov 04 12:22:24 no-preload-908370 crio[703]: time="2024-11-04 12:22:24.344728704Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730722944344700398,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fc0f907f-b350-4ed2-ade4-930f3323b58c name=/runtime.v1.ImageService/ImageFsInfo
	Nov 04 12:22:24 no-preload-908370 crio[703]: time="2024-11-04 12:22:24.345625697Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9ac67775-f2f7-4324-9dd4-578991e44924 name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 12:22:24 no-preload-908370 crio[703]: time="2024-11-04 12:22:24.345695245Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9ac67775-f2f7-4324-9dd4-578991e44924 name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 12:22:24 no-preload-908370 crio[703]: time="2024-11-04 12:22:24.345925819Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d4f6c824f92eecdb4ebc13e783585c25292f412e0f0a0d7b9fd0eab092fa8e41,PodSandboxId:71b9c2ed6c6e155981398f1b0e2ea01fe6fa1e090814ec2859b6f705b8703c7c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730722166447073472,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d11c9416-6236-4c81-9626-d5e040acea8a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d88fa3ae4d36b499a8d6f18f4cca6442025a510017fc7729008bfb5b56c39cb5,PodSandboxId:0d05f2ac4365063d3cd2710a12624b520de2ef9d8bd085bfb67cba38c30a3906,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1730722145461257501,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 211134d2-72ed-4243-818e-81755db54f57,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6dcd13443296397949a6314fd6007ac667c05b70bd43f14e2e9b54f3313440de,PodSandboxId:7933cfebeb6afe3bb96349152367107d7427b22832bafb4f648d56a3df845af5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730722143333511955,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-vv4kq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2518f86-9653-4e98-9193-9d2a76838117,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33418a9cb2f8aea69efe66db3f38e3a66b699fea5455b72e6b94484067b704a3,PodSandboxId:9941f6065c0062fac156e7d39c07019811475186bb9a9ca02516002a86c0156f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730722135746244903,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w9hbz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d494697-ff2b-4600-9c
11-b704de9be2a3,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:162e3330ff77f795d15e11b3abf1d3019490bff78148c8d6b470ef1507dcb67d,PodSandboxId:71b9c2ed6c6e155981398f1b0e2ea01fe6fa1e090814ec2859b6f705b8703c7c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1730722135692369603,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d11c9416-6236-4c81-9626-d5e040acea
8a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1390676564c7e4039f73139668fc6a3c321c186b612f1f1837e21788a5c0aa82,PodSandboxId:033e135e95f2c7e1d82f90fb383c167b1a8dfd9f6624e30379e16e9f5075de0d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730722130930823542,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-908370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7eff80bc42a9693bbf2b1daa559d69a2,},Annotations:map[string]string{io.kuber
netes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e74398c77b3ca0adb85872c2f97209b51f4c13968147f500a41590f72d758dea,PodSandboxId:1ea43d435da914e034af9d2d37c4d064ab7aa027ee415bed08eecf36ccb3f1f8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730722130932750428,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-908370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19ac9ab9ae348d75e1aa7bf64e83b0e1,},Annotations:map[string]string{io.kubernetes.contain
er.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c3fa7870c72468d3a6283be42fb2724275d8c5937f6728c8bc97103c58b2ebd,PodSandboxId:52f547f09dd1b9e4463cc131cde74a2fc68c6f42c8bdf3623a262a6a879f2c71,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730722130884363361,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-908370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e9dfac04069601a52c15f5a2321bfff,},Annotations:map[string]string{io.kube
rnetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5546d06c4d51e3fa5699a7351286904174305e9c759397f8d2f640c6ce17d456,PodSandboxId:c02100f7b4561243c0f92a52bd9ef84896df70a17b0f0f7b3c0b0f155571d8fa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730722130878690593,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-908370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8f32f53f7238f9b51ee01846536440c,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9ac67775-f2f7-4324-9dd4-578991e44924 name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 12:22:24 no-preload-908370 crio[703]: time="2024-11-04 12:22:24.385863798Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=02f0296d-f83c-4fd1-bb91-b71c5d022463 name=/runtime.v1.RuntimeService/Version
	Nov 04 12:22:24 no-preload-908370 crio[703]: time="2024-11-04 12:22:24.385979662Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=02f0296d-f83c-4fd1-bb91-b71c5d022463 name=/runtime.v1.RuntimeService/Version
	Nov 04 12:22:24 no-preload-908370 crio[703]: time="2024-11-04 12:22:24.387576184Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=cf57361d-9f3c-4f9b-99dc-3ab6487db94e name=/runtime.v1.ImageService/ImageFsInfo
	Nov 04 12:22:24 no-preload-908370 crio[703]: time="2024-11-04 12:22:24.388068198Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730722944388034187,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cf57361d-9f3c-4f9b-99dc-3ab6487db94e name=/runtime.v1.ImageService/ImageFsInfo
	Nov 04 12:22:24 no-preload-908370 crio[703]: time="2024-11-04 12:22:24.388868364Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ddd52748-a032-4828-abbb-9d1c22f80701 name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 12:22:24 no-preload-908370 crio[703]: time="2024-11-04 12:22:24.388958090Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ddd52748-a032-4828-abbb-9d1c22f80701 name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 12:22:24 no-preload-908370 crio[703]: time="2024-11-04 12:22:24.389220735Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d4f6c824f92eecdb4ebc13e783585c25292f412e0f0a0d7b9fd0eab092fa8e41,PodSandboxId:71b9c2ed6c6e155981398f1b0e2ea01fe6fa1e090814ec2859b6f705b8703c7c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730722166447073472,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d11c9416-6236-4c81-9626-d5e040acea8a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d88fa3ae4d36b499a8d6f18f4cca6442025a510017fc7729008bfb5b56c39cb5,PodSandboxId:0d05f2ac4365063d3cd2710a12624b520de2ef9d8bd085bfb67cba38c30a3906,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1730722145461257501,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 211134d2-72ed-4243-818e-81755db54f57,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6dcd13443296397949a6314fd6007ac667c05b70bd43f14e2e9b54f3313440de,PodSandboxId:7933cfebeb6afe3bb96349152367107d7427b22832bafb4f648d56a3df845af5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730722143333511955,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-vv4kq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2518f86-9653-4e98-9193-9d2a76838117,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33418a9cb2f8aea69efe66db3f38e3a66b699fea5455b72e6b94484067b704a3,PodSandboxId:9941f6065c0062fac156e7d39c07019811475186bb9a9ca02516002a86c0156f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730722135746244903,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w9hbz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d494697-ff2b-4600-9c
11-b704de9be2a3,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:162e3330ff77f795d15e11b3abf1d3019490bff78148c8d6b470ef1507dcb67d,PodSandboxId:71b9c2ed6c6e155981398f1b0e2ea01fe6fa1e090814ec2859b6f705b8703c7c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1730722135692369603,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d11c9416-6236-4c81-9626-d5e040acea
8a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1390676564c7e4039f73139668fc6a3c321c186b612f1f1837e21788a5c0aa82,PodSandboxId:033e135e95f2c7e1d82f90fb383c167b1a8dfd9f6624e30379e16e9f5075de0d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730722130930823542,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-908370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7eff80bc42a9693bbf2b1daa559d69a2,},Annotations:map[string]string{io.kuber
netes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e74398c77b3ca0adb85872c2f97209b51f4c13968147f500a41590f72d758dea,PodSandboxId:1ea43d435da914e034af9d2d37c4d064ab7aa027ee415bed08eecf36ccb3f1f8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730722130932750428,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-908370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19ac9ab9ae348d75e1aa7bf64e83b0e1,},Annotations:map[string]string{io.kubernetes.contain
er.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c3fa7870c72468d3a6283be42fb2724275d8c5937f6728c8bc97103c58b2ebd,PodSandboxId:52f547f09dd1b9e4463cc131cde74a2fc68c6f42c8bdf3623a262a6a879f2c71,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730722130884363361,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-908370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e9dfac04069601a52c15f5a2321bfff,},Annotations:map[string]string{io.kube
rnetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5546d06c4d51e3fa5699a7351286904174305e9c759397f8d2f640c6ce17d456,PodSandboxId:c02100f7b4561243c0f92a52bd9ef84896df70a17b0f0f7b3c0b0f155571d8fa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730722130878690593,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-908370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8f32f53f7238f9b51ee01846536440c,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ddd52748-a032-4828-abbb-9d1c22f80701 name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 12:22:24 no-preload-908370 crio[703]: time="2024-11-04 12:22:24.426243335Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0439f82f-2f60-4a02-bf2d-9610081e28ce name=/runtime.v1.RuntimeService/Version
	Nov 04 12:22:24 no-preload-908370 crio[703]: time="2024-11-04 12:22:24.426343307Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0439f82f-2f60-4a02-bf2d-9610081e28ce name=/runtime.v1.RuntimeService/Version
	Nov 04 12:22:24 no-preload-908370 crio[703]: time="2024-11-04 12:22:24.428196591Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ec6531c9-1783-482f-8b6d-29e3cc468287 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 04 12:22:24 no-preload-908370 crio[703]: time="2024-11-04 12:22:24.428919349Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730722944428894830,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ec6531c9-1783-482f-8b6d-29e3cc468287 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 04 12:22:24 no-preload-908370 crio[703]: time="2024-11-04 12:22:24.429449446Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9d43af68-7ec1-4263-98e3-00490bb5c419 name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 12:22:24 no-preload-908370 crio[703]: time="2024-11-04 12:22:24.429528194Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9d43af68-7ec1-4263-98e3-00490bb5c419 name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 12:22:24 no-preload-908370 crio[703]: time="2024-11-04 12:22:24.429751461Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d4f6c824f92eecdb4ebc13e783585c25292f412e0f0a0d7b9fd0eab092fa8e41,PodSandboxId:71b9c2ed6c6e155981398f1b0e2ea01fe6fa1e090814ec2859b6f705b8703c7c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730722166447073472,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d11c9416-6236-4c81-9626-d5e040acea8a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d88fa3ae4d36b499a8d6f18f4cca6442025a510017fc7729008bfb5b56c39cb5,PodSandboxId:0d05f2ac4365063d3cd2710a12624b520de2ef9d8bd085bfb67cba38c30a3906,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1730722145461257501,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 211134d2-72ed-4243-818e-81755db54f57,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6dcd13443296397949a6314fd6007ac667c05b70bd43f14e2e9b54f3313440de,PodSandboxId:7933cfebeb6afe3bb96349152367107d7427b22832bafb4f648d56a3df845af5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730722143333511955,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-vv4kq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2518f86-9653-4e98-9193-9d2a76838117,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33418a9cb2f8aea69efe66db3f38e3a66b699fea5455b72e6b94484067b704a3,PodSandboxId:9941f6065c0062fac156e7d39c07019811475186bb9a9ca02516002a86c0156f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730722135746244903,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w9hbz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d494697-ff2b-4600-9c
11-b704de9be2a3,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:162e3330ff77f795d15e11b3abf1d3019490bff78148c8d6b470ef1507dcb67d,PodSandboxId:71b9c2ed6c6e155981398f1b0e2ea01fe6fa1e090814ec2859b6f705b8703c7c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1730722135692369603,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d11c9416-6236-4c81-9626-d5e040acea
8a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1390676564c7e4039f73139668fc6a3c321c186b612f1f1837e21788a5c0aa82,PodSandboxId:033e135e95f2c7e1d82f90fb383c167b1a8dfd9f6624e30379e16e9f5075de0d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730722130930823542,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-908370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7eff80bc42a9693bbf2b1daa559d69a2,},Annotations:map[string]string{io.kuber
netes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e74398c77b3ca0adb85872c2f97209b51f4c13968147f500a41590f72d758dea,PodSandboxId:1ea43d435da914e034af9d2d37c4d064ab7aa027ee415bed08eecf36ccb3f1f8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730722130932750428,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-908370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19ac9ab9ae348d75e1aa7bf64e83b0e1,},Annotations:map[string]string{io.kubernetes.contain
er.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c3fa7870c72468d3a6283be42fb2724275d8c5937f6728c8bc97103c58b2ebd,PodSandboxId:52f547f09dd1b9e4463cc131cde74a2fc68c6f42c8bdf3623a262a6a879f2c71,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730722130884363361,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-908370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e9dfac04069601a52c15f5a2321bfff,},Annotations:map[string]string{io.kube
rnetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5546d06c4d51e3fa5699a7351286904174305e9c759397f8d2f640c6ce17d456,PodSandboxId:c02100f7b4561243c0f92a52bd9ef84896df70a17b0f0f7b3c0b0f155571d8fa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730722130878690593,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-908370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8f32f53f7238f9b51ee01846536440c,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9d43af68-7ec1-4263-98e3-00490bb5c419 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	d4f6c824f92ee       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 minutes ago      Running             storage-provisioner       2                   71b9c2ed6c6e1       storage-provisioner
	d88fa3ae4d36b       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   0d05f2ac43650       busybox
	6dcd134432963       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      13 minutes ago      Running             coredns                   1                   7933cfebeb6af       coredns-7c65d6cfc9-vv4kq
	33418a9cb2f8a       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                      13 minutes ago      Running             kube-proxy                1                   9941f6065c006       kube-proxy-w9hbz
	162e3330ff77f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       1                   71b9c2ed6c6e1       storage-provisioner
	e74398c77b3ca       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                      13 minutes ago      Running             kube-apiserver            1                   1ea43d435da91       kube-apiserver-no-preload-908370
	1390676564c7e       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      13 minutes ago      Running             etcd                      1                   033e135e95f2c       etcd-no-preload-908370
	9c3fa7870c724       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                      13 minutes ago      Running             kube-controller-manager   1                   52f547f09dd1b       kube-controller-manager-no-preload-908370
	5546d06c4d51e       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                      13 minutes ago      Running             kube-scheduler            1                   c02100f7b4561       kube-scheduler-no-preload-908370
	
	
	==> coredns [6dcd13443296397949a6314fd6007ac667c05b70bd43f14e2e9b54f3313440de] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:57837 - 23655 "HINFO IN 6065787258555663794.2382023106679684931. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.045711979s
	
	
	==> describe nodes <==
	Name:               no-preload-908370
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-908370
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c03dd974d73f8853a1a57928c124797a5ae24dc4
	                    minikube.k8s.io/name=no-preload-908370
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_11_04T11_59_46_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 04 Nov 2024 11:59:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-908370
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 04 Nov 2024 12:22:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 04 Nov 2024 12:19:36 +0000   Mon, 04 Nov 2024 11:59:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 04 Nov 2024 12:19:36 +0000   Mon, 04 Nov 2024 11:59:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 04 Nov 2024 12:19:36 +0000   Mon, 04 Nov 2024 11:59:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 04 Nov 2024 12:19:36 +0000   Mon, 04 Nov 2024 12:09:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.91
	  Hostname:    no-preload-908370
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d3c247408ace4ec48e7dca6349f98e18
	  System UUID:                d3c24740-8ace-4ec4-8e7d-ca6349f98e18
	  Boot ID:                    8b562791-7b0f-4c3e-8b7e-0c9c5aabd773
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 coredns-7c65d6cfc9-vv4kq                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     22m
	  kube-system                 etcd-no-preload-908370                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         22m
	  kube-system                 kube-apiserver-no-preload-908370             250m (12%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-controller-manager-no-preload-908370    200m (10%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-proxy-w9hbz                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-scheduler-no-preload-908370             100m (5%)     0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 metrics-server-6867b74b74-2lxlg              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         22m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 13m                kube-proxy       
	  Normal  Starting                 22m                kube-proxy       
	  Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     22m (x7 over 22m)  kubelet          Node no-preload-908370 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  22m (x8 over 22m)  kubelet          Node no-preload-908370 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22m (x8 over 22m)  kubelet          Node no-preload-908370 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22m                kubelet          Node no-preload-908370 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  22m                kubelet          Node no-preload-908370 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22m                kubelet          Node no-preload-908370 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 22m                kubelet          Starting kubelet.
	  Normal  NodeReady                22m                kubelet          Node no-preload-908370 status is now: NodeReady
	  Normal  RegisteredNode           22m                node-controller  Node no-preload-908370 event: Registered Node no-preload-908370 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node no-preload-908370 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node no-preload-908370 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node no-preload-908370 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node no-preload-908370 event: Registered Node no-preload-908370 in Controller
	
	
	==> dmesg <==
	[Nov 4 12:08] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.049425] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.038929] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.132883] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.838836] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.538809] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.085643] systemd-fstab-generator[627]: Ignoring "noauto" option for root device
	[  +0.058621] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.070153] systemd-fstab-generator[639]: Ignoring "noauto" option for root device
	[  +0.200310] systemd-fstab-generator[653]: Ignoring "noauto" option for root device
	[  +0.096820] systemd-fstab-generator[665]: Ignoring "noauto" option for root device
	[  +0.259802] systemd-fstab-generator[694]: Ignoring "noauto" option for root device
	[ +15.224691] systemd-fstab-generator[1298]: Ignoring "noauto" option for root device
	[  +0.059593] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.527010] systemd-fstab-generator[1418]: Ignoring "noauto" option for root device
	[  +3.769282] kauditd_printk_skb: 97 callbacks suppressed
	[  +3.695492] systemd-fstab-generator[2054]: Ignoring "noauto" option for root device
	[Nov 4 12:09] kauditd_printk_skb: 61 callbacks suppressed
	[ +25.176073] kauditd_printk_skb: 46 callbacks suppressed
	
	
	==> etcd [1390676564c7e4039f73139668fc6a3c321c186b612f1f1837e21788a5c0aa82] <==
	{"level":"info","ts":"2024-11-04T12:08:51.613685Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.61.91:2380"}
	{"level":"info","ts":"2024-11-04T12:08:51.615427Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.61.91:2380"}
	{"level":"info","ts":"2024-11-04T12:08:51.617982Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"43b28b444dd15097 switched to configuration voters=(4878114471875268759)"}
	{"level":"info","ts":"2024-11-04T12:08:51.621518Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"da56312a125ec6d7","local-member-id":"43b28b444dd15097","added-peer-id":"43b28b444dd15097","added-peer-peer-urls":["https://192.168.61.91:2380"]}
	{"level":"info","ts":"2024-11-04T12:08:51.621663Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"da56312a125ec6d7","local-member-id":"43b28b444dd15097","cluster-version":"3.5"}
	{"level":"info","ts":"2024-11-04T12:08:51.621712Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-11-04T12:08:53.263462Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"43b28b444dd15097 is starting a new election at term 2"}
	{"level":"info","ts":"2024-11-04T12:08:53.263521Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"43b28b444dd15097 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-11-04T12:08:53.263559Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"43b28b444dd15097 received MsgPreVoteResp from 43b28b444dd15097 at term 2"}
	{"level":"info","ts":"2024-11-04T12:08:53.263573Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"43b28b444dd15097 became candidate at term 3"}
	{"level":"info","ts":"2024-11-04T12:08:53.263578Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"43b28b444dd15097 received MsgVoteResp from 43b28b444dd15097 at term 3"}
	{"level":"info","ts":"2024-11-04T12:08:53.263587Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"43b28b444dd15097 became leader at term 3"}
	{"level":"info","ts":"2024-11-04T12:08:53.263594Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 43b28b444dd15097 elected leader 43b28b444dd15097 at term 3"}
	{"level":"info","ts":"2024-11-04T12:08:53.280988Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"43b28b444dd15097","local-member-attributes":"{Name:no-preload-908370 ClientURLs:[https://192.168.61.91:2379]}","request-path":"/0/members/43b28b444dd15097/attributes","cluster-id":"da56312a125ec6d7","publish-timeout":"7s"}
	{"level":"info","ts":"2024-11-04T12:08:53.280999Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-11-04T12:08:53.281178Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-11-04T12:08:53.281568Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-11-04T12:08:53.281623Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-11-04T12:08:53.282194Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-11-04T12:08:53.282204Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-11-04T12:08:53.283377Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.91:2379"}
	{"level":"info","ts":"2024-11-04T12:08:53.283977Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-11-04T12:18:53.313861Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":851}
	{"level":"info","ts":"2024-11-04T12:18:53.322016Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":851,"took":"7.699676ms","hash":2669187108,"current-db-size-bytes":2678784,"current-db-size":"2.7 MB","current-db-size-in-use-bytes":2678784,"current-db-size-in-use":"2.7 MB"}
	{"level":"info","ts":"2024-11-04T12:18:53.322106Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2669187108,"revision":851,"compact-revision":-1}
	
	
	==> kernel <==
	 12:22:24 up 14 min,  0 users,  load average: 0.32, 0.22, 0.16
	Linux no-preload-908370 5.10.207 #1 SMP Wed Oct 30 13:38:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [e74398c77b3ca0adb85872c2f97209b51f4c13968147f500a41590f72d758dea] <==
	W1104 12:18:55.562368       1 handler_proxy.go:99] no RequestInfo found in the context
	E1104 12:18:55.562566       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1104 12:18:55.563674       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1104 12:18:55.563761       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1104 12:19:55.564488       1 handler_proxy.go:99] no RequestInfo found in the context
	E1104 12:19:55.564537       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W1104 12:19:55.564596       1 handler_proxy.go:99] no RequestInfo found in the context
	E1104 12:19:55.564622       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1104 12:19:55.565674       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1104 12:19:55.565738       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1104 12:21:55.566348       1 handler_proxy.go:99] no RequestInfo found in the context
	W1104 12:21:55.566559       1 handler_proxy.go:99] no RequestInfo found in the context
	E1104 12:21:55.566565       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E1104 12:21:55.566658       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1104 12:21:55.567846       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1104 12:21:55.567904       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [9c3fa7870c72468d3a6283be42fb2724275d8c5937f6728c8bc97103c58b2ebd] <==
	E1104 12:16:58.162124       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1104 12:16:58.626222       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1104 12:17:28.167540       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1104 12:17:28.633113       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1104 12:17:58.172913       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1104 12:17:58.640796       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1104 12:18:28.177776       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1104 12:18:28.648479       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1104 12:18:58.183135       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1104 12:18:58.656170       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1104 12:19:28.188667       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1104 12:19:28.664130       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1104 12:19:36.927292       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-908370"
	E1104 12:19:58.195081       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1104 12:19:58.672792       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1104 12:20:11.286609       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="112.233µs"
	I1104 12:20:23.286553       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="83.661µs"
	E1104 12:20:28.200693       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1104 12:20:28.680101       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1104 12:20:58.206727       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1104 12:20:58.688715       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1104 12:21:28.212062       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1104 12:21:28.697836       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1104 12:21:58.219318       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1104 12:21:58.705290       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [33418a9cb2f8aea69efe66db3f38e3a66b699fea5455b72e6b94484067b704a3] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1104 12:08:55.968629       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1104 12:08:55.978964       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.61.91"]
	E1104 12:08:55.979023       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1104 12:08:56.040533       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1104 12:08:56.040616       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1104 12:08:56.040653       1 server_linux.go:169] "Using iptables Proxier"
	I1104 12:08:56.044549       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1104 12:08:56.045493       1 server.go:483] "Version info" version="v1.31.2"
	I1104 12:08:56.045581       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1104 12:08:56.053060       1 config.go:199] "Starting service config controller"
	I1104 12:08:56.053128       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1104 12:08:56.053166       1 config.go:105] "Starting endpoint slice config controller"
	I1104 12:08:56.053182       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1104 12:08:56.054946       1 config.go:328] "Starting node config controller"
	I1104 12:08:56.054997       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1104 12:08:56.153617       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1104 12:08:56.153738       1 shared_informer.go:320] Caches are synced for service config
	I1104 12:08:56.155156       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [5546d06c4d51e3fa5699a7351286904174305e9c759397f8d2f640c6ce17d456] <==
	I1104 12:08:52.195934       1 serving.go:386] Generated self-signed cert in-memory
	W1104 12:08:54.474953       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1104 12:08:54.475050       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1104 12:08:54.475063       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1104 12:08:54.475070       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1104 12:08:54.607183       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.2"
	I1104 12:08:54.607212       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1104 12:08:54.609870       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1104 12:08:54.609986       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1104 12:08:54.610207       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1104 12:08:54.610223       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1104 12:08:54.711129       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 04 12:21:11 no-preload-908370 kubelet[1425]: E1104 12:21:11.273128    1425 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-2lxlg" podUID="bf328856-ad19-47b3-a40d-282cd4fdec4b"
	Nov 04 12:21:20 no-preload-908370 kubelet[1425]: E1104 12:21:20.443758    1425 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730722880443041839,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 04 12:21:20 no-preload-908370 kubelet[1425]: E1104 12:21:20.443795    1425 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730722880443041839,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 04 12:21:22 no-preload-908370 kubelet[1425]: E1104 12:21:22.273566    1425 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-2lxlg" podUID="bf328856-ad19-47b3-a40d-282cd4fdec4b"
	Nov 04 12:21:30 no-preload-908370 kubelet[1425]: E1104 12:21:30.445343    1425 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730722890445016906,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 04 12:21:30 no-preload-908370 kubelet[1425]: E1104 12:21:30.445687    1425 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730722890445016906,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 04 12:21:34 no-preload-908370 kubelet[1425]: E1104 12:21:34.273481    1425 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-2lxlg" podUID="bf328856-ad19-47b3-a40d-282cd4fdec4b"
	Nov 04 12:21:40 no-preload-908370 kubelet[1425]: E1104 12:21:40.447314    1425 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730722900446879707,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 04 12:21:40 no-preload-908370 kubelet[1425]: E1104 12:21:40.447816    1425 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730722900446879707,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 04 12:21:46 no-preload-908370 kubelet[1425]: E1104 12:21:46.273486    1425 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-2lxlg" podUID="bf328856-ad19-47b3-a40d-282cd4fdec4b"
	Nov 04 12:21:50 no-preload-908370 kubelet[1425]: E1104 12:21:50.294806    1425 iptables.go:577] "Could not set up iptables canary" err=<
	Nov 04 12:21:50 no-preload-908370 kubelet[1425]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Nov 04 12:21:50 no-preload-908370 kubelet[1425]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 04 12:21:50 no-preload-908370 kubelet[1425]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 04 12:21:50 no-preload-908370 kubelet[1425]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Nov 04 12:21:50 no-preload-908370 kubelet[1425]: E1104 12:21:50.450152    1425 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730722910449607589,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 04 12:21:50 no-preload-908370 kubelet[1425]: E1104 12:21:50.450201    1425 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730722910449607589,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 04 12:22:00 no-preload-908370 kubelet[1425]: E1104 12:22:00.452060    1425 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730722920451629159,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 04 12:22:00 no-preload-908370 kubelet[1425]: E1104 12:22:00.452104    1425 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730722920451629159,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 04 12:22:01 no-preload-908370 kubelet[1425]: E1104 12:22:01.273480    1425 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-2lxlg" podUID="bf328856-ad19-47b3-a40d-282cd4fdec4b"
	Nov 04 12:22:10 no-preload-908370 kubelet[1425]: E1104 12:22:10.453531    1425 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730722930453016450,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 04 12:22:10 no-preload-908370 kubelet[1425]: E1104 12:22:10.455146    1425 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730722930453016450,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 04 12:22:16 no-preload-908370 kubelet[1425]: E1104 12:22:16.272947    1425 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-2lxlg" podUID="bf328856-ad19-47b3-a40d-282cd4fdec4b"
	Nov 04 12:22:20 no-preload-908370 kubelet[1425]: E1104 12:22:20.457056    1425 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730722940456651592,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 04 12:22:20 no-preload-908370 kubelet[1425]: E1104 12:22:20.457322    1425 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730722940456651592,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [162e3330ff77f795d15e11b3abf1d3019490bff78148c8d6b470ef1507dcb67d] <==
	I1104 12:08:55.798706       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1104 12:09:25.811449       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [d4f6c824f92eecdb4ebc13e783585c25292f412e0f0a0d7b9fd0eab092fa8e41] <==
	I1104 12:09:26.520284       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1104 12:09:26.529035       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1104 12:09:26.529216       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1104 12:09:26.536744       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1104 12:09:26.537237       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c4c3f43b-8157-4af6-9328-9b01a4a9eade", APIVersion:"v1", ResourceVersion:"615", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-908370_3c50268b-57e2-4975-98d9-556c4271abb3 became leader
	I1104 12:09:26.537311       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-908370_3c50268b-57e2-4975-98d9-556c4271abb3!
	I1104 12:09:26.637903       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-908370_3c50268b-57e2-4975-98d9-556c4271abb3!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-908370 -n no-preload-908370
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-908370 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-2lxlg
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-908370 describe pod metrics-server-6867b74b74-2lxlg
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-908370 describe pod metrics-server-6867b74b74-2lxlg: exit status 1 (65.438354ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-2lxlg" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-908370 describe pod metrics-server-6867b74b74-2lxlg: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.16s)
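The kubelet and helper output above point at a single non-running pod, metrics-server-6867b74b74-2lxlg, stuck in ImagePullBackOff pulling fake.domain/registry.k8s.io/echoserver:1.4; the post-mortem describe returned NotFound, most likely because it was issued without -n kube-system (the pod's namespace in the kubelet log). A minimal sketch of how the same post-mortem could be re-run by hand against this profile; the context and pod names are taken from the log, while the -n flag and the events query are additions not run by the harness:

	# list pods that are not Running, across all namespaces (same selector the helper uses)
	kubectl --context no-preload-908370 get pods -A --field-selector=status.phase!=Running
	# describe the failing pod in its actual namespace (kube-system), unlike the harness call above
	kubectl --context no-preload-908370 -n kube-system describe pod metrics-server-6867b74b74-2lxlg
	# recent events, newest last; extra step, not part of the test harness
	kubectl --context no-preload-908370 -n kube-system get events --sort-by=.lastTimestamp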

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.37s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.180:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.180:8443: connect: connection refused
E1104 12:16:33.165395   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/functional-762465/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.180:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.180:8443: connect: connection refused
E1104 12:16:43.769546   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/enable-default-cni-528108/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.180:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.180:8443: connect: connection refused
E1104 12:17:13.892256   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/kindnet-528108/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.180:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.180:8443: connect: connection refused
E1104 12:17:46.501270   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/flannel-528108/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.180:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.180:8443: connect: connection refused
E1104 12:18:01.267864   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/custom-flannel-528108/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.180:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.180:8443: connect: connection refused
E1104 12:18:06.836617   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/enable-default-cni-528108/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.180:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.180:8443: connect: connection refused
E1104 12:18:20.019543   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/bridge-528108/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.180:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.180:8443: connect: connection refused
E1104 12:19:09.565280   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/flannel-528108/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.180:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.180:8443: connect: connection refused
E1104 12:19:24.332410   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/custom-flannel-528108/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.180:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.180:8443: connect: connection refused
E1104 12:19:36.241750   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/functional-762465/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.180:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.180:8443: connect: connection refused
E1104 12:19:43.082172   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/bridge-528108/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.180:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.180:8443: connect: connection refused
E1104 12:19:47.409152   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/addons-746456/client.crt: no such file or directory" logger="UnhandledError"
E1104 12:19:47.536827   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/auto-528108/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.180:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.180:8443: connect: connection refused
E1104 12:20:14.953420   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/calico-528108/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.180:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.180:8443: connect: connection refused
E1104 12:20:50.828097   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/kindnet-528108/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.180:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.180:8443: connect: connection refused
[previous warning repeated 41 more times]
E1104 12:21:33.165054   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/functional-762465/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.180:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.180:8443: connect: connection refused
[previous warning repeated 4 more times]
E1104 12:21:38.016827   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/calico-528108/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.180:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.180:8443: connect: connection refused
[previous warning repeated 5 more times]
E1104 12:21:43.769195   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/enable-default-cni-528108/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.180:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.180:8443: connect: connection refused
[previous warning repeated 61 more times]
E1104 12:22:46.501049   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/flannel-528108/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.180:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.180:8443: connect: connection refused
[previous warning repeated 14 more times]
E1104 12:23:01.267578   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/custom-flannel-528108/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.180:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.180:8443: connect: connection refused
[previous warning repeated 18 more times]
E1104 12:23:20.019615   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/bridge-528108/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.180:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.180:8443: connect: connection refused
E1104 12:24:47.409045   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/addons-746456/client.crt: no such file or directory" logger="UnhandledError"
E1104 12:24:47.536574   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/auto-528108/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.180:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.180:8443: connect: connection refused
E1104 12:25:14.953817   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/calico-528108/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.180:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.180:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-589257 -n old-k8s-version-589257
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-589257 -n old-k8s-version-589257: exit status 2 (227.216862ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-589257" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
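The connection-refused warnings above all point at the same thing: while the helper was polling for the dashboard pod, nothing was answering on the apiserver endpoint 192.168.50.180:8443 of the old-k8s-version-589257 profile. As a minimal sketch only (it assumes the kubeconfig context that minikube creates for this profile still exists; the endpoint and label selector are copied from the log), the same query the helper issues can be reproduced by hand:

	# Ask for the dashboard pods the test is waiting on, via the profile's kubeconfig context
	kubectl --context old-k8s-version-589257 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard

	# Or hit the same REST endpoint directly; with the apiserver down this fails with
	# "connection refused" before any authentication is even attempted
	curl -k "https://192.168.50.180:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard"

Either form failing with connection refused narrows the problem to the apiserver process rather than to the dashboard addon itself.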
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-589257 -n old-k8s-version-589257
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-589257 -n old-k8s-version-589257: exit status 2 (227.148812ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
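The two probes disagree in an informative way: the Host check reports Running while the APIServer check a few lines above reported Stopped, so the VM for old-k8s-version-589257 is up but kube-apiserver inside it is not serving. A quick follow-up, sketched here purely as an illustration (it assumes crictl is available on the node, which is the norm for the crio runtime used in this job), is to look for the apiserver container directly on the node:

	# List the kube-apiserver container (including exited instances) under CRI-O on the node
	minikube -p old-k8s-version-589257 ssh -- sudo crictl ps -a --name kube-apiserver

An exited or missing container here, together with the connection-refused errors above, is consistent with the apiserver never coming back after the stop/start cycle exercised by this test.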
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-589257 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-589257 logs -n 25: (1.493864126s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p calico-528108 sudo                                  | calico-528108                | jenkins | v1.34.0 | 04 Nov 24 12:00 UTC | 04 Nov 24 12:00 UTC |
	|         | cri-dockerd --version                                  |                              |         |         |                     |                     |
	| ssh     | -p calico-528108 sudo                                  | calico-528108                | jenkins | v1.34.0 | 04 Nov 24 12:00 UTC |                     |
	|         | systemctl status containerd                            |                              |         |         |                     |                     |
	|         | --all --full --no-pager                                |                              |         |         |                     |                     |
	| ssh     | -p calico-528108 sudo                                  | calico-528108                | jenkins | v1.34.0 | 04 Nov 24 12:00 UTC | 04 Nov 24 12:00 UTC |
	|         | systemctl cat containerd                               |                              |         |         |                     |                     |
	|         | --no-pager                                             |                              |         |         |                     |                     |
	| ssh     | -p calico-528108 sudo cat                              | calico-528108                | jenkins | v1.34.0 | 04 Nov 24 12:00 UTC | 04 Nov 24 12:00 UTC |
	|         | /lib/systemd/system/containerd.service                 |                              |         |         |                     |                     |
	| ssh     | -p calico-528108 sudo cat                              | calico-528108                | jenkins | v1.34.0 | 04 Nov 24 12:00 UTC | 04 Nov 24 12:00 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p calico-528108 sudo                                  | calico-528108                | jenkins | v1.34.0 | 04 Nov 24 12:00 UTC | 04 Nov 24 12:00 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p calico-528108 sudo                                  | calico-528108                | jenkins | v1.34.0 | 04 Nov 24 12:00 UTC | 04 Nov 24 12:00 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p calico-528108 sudo                                  | calico-528108                | jenkins | v1.34.0 | 04 Nov 24 12:00 UTC | 04 Nov 24 12:00 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p calico-528108 sudo find                             | calico-528108                | jenkins | v1.34.0 | 04 Nov 24 12:00 UTC | 04 Nov 24 12:00 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p calico-528108 sudo crio                             | calico-528108                | jenkins | v1.34.0 | 04 Nov 24 12:00 UTC | 04 Nov 24 12:00 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p calico-528108                                       | calico-528108                | jenkins | v1.34.0 | 04 Nov 24 12:00 UTC | 04 Nov 24 12:00 UTC |
	| delete  | -p                                                     | disable-driver-mounts-457408 | jenkins | v1.34.0 | 04 Nov 24 12:00 UTC | 04 Nov 24 12:00 UTC |
	|         | disable-driver-mounts-457408                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-036892 | jenkins | v1.34.0 | 04 Nov 24 12:00 UTC | 04 Nov 24 12:01 UTC |
	|         | default-k8s-diff-port-036892                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-036892  | default-k8s-diff-port-036892 | jenkins | v1.34.0 | 04 Nov 24 12:01 UTC | 04 Nov 24 12:01 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-036892 | jenkins | v1.34.0 | 04 Nov 24 12:01 UTC |                     |
	|         | default-k8s-diff-port-036892                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-908370                  | no-preload-908370            | jenkins | v1.34.0 | 04 Nov 24 12:02 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-908370                                   | no-preload-908370            | jenkins | v1.34.0 | 04 Nov 24 12:02 UTC | 04 Nov 24 12:13 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-325116                 | embed-certs-325116           | jenkins | v1.34.0 | 04 Nov 24 12:02 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-589257        | old-k8s-version-589257       | jenkins | v1.34.0 | 04 Nov 24 12:02 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p embed-certs-325116                                  | embed-certs-325116           | jenkins | v1.34.0 | 04 Nov 24 12:02 UTC | 04 Nov 24 12:12 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-036892       | default-k8s-diff-port-036892 | jenkins | v1.34.0 | 04 Nov 24 12:04 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-589257                              | old-k8s-version-589257       | jenkins | v1.34.0 | 04 Nov 24 12:04 UTC | 04 Nov 24 12:04 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-036892 | jenkins | v1.34.0 | 04 Nov 24 12:04 UTC | 04 Nov 24 12:12 UTC |
	|         | default-k8s-diff-port-036892                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-589257             | old-k8s-version-589257       | jenkins | v1.34.0 | 04 Nov 24 12:04 UTC | 04 Nov 24 12:04 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-589257                              | old-k8s-version-589257       | jenkins | v1.34.0 | 04 Nov 24 12:04 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
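	Reconstructed from the wrapped rows above, the last start recorded in the table (still in progress when this log was captured) is equivalent to the following single invocation of the minikube binary under test; this is a reconstruction from the table, not additional harness output:
	
	    out/minikube-linux-amd64 start -p old-k8s-version-589257 \
	        --memory=2200 --alsologtostderr --wait=true \
	        --kvm-network=default --kvm-qemu-uri=qemu:///system \
	        --disable-driver-mounts --keep-context=false \
	        --driver=kvm2 --container-runtime=crio \
	        --kubernetes-version=v1.20.0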
	
	
	==> Last Start <==
	Log file created at: 2024/11/04 12:04:21
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1104 12:04:21.684777   86402 out.go:345] Setting OutFile to fd 1 ...
	I1104 12:04:21.684885   86402 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1104 12:04:21.684893   86402 out.go:358] Setting ErrFile to fd 2...
	I1104 12:04:21.684897   86402 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1104 12:04:21.685085   86402 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19906-19898/.minikube/bin
	I1104 12:04:21.685618   86402 out.go:352] Setting JSON to false
	I1104 12:04:21.686501   86402 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":10013,"bootTime":1730711849,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1104 12:04:21.686603   86402 start.go:139] virtualization: kvm guest
	I1104 12:04:21.688652   86402 out.go:177] * [old-k8s-version-589257] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1104 12:04:21.690121   86402 notify.go:220] Checking for updates...
	I1104 12:04:21.690173   86402 out.go:177]   - MINIKUBE_LOCATION=19906
	I1104 12:04:21.691712   86402 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1104 12:04:21.693100   86402 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19906-19898/kubeconfig
	I1104 12:04:21.694334   86402 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19906-19898/.minikube
	I1104 12:04:21.695431   86402 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1104 12:04:21.696680   86402 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1104 12:04:21.698271   86402 config.go:182] Loaded profile config "old-k8s-version-589257": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1104 12:04:21.698697   86402 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:04:21.698738   86402 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:04:21.713382   86402 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46731
	I1104 12:04:21.713861   86402 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:04:21.714357   86402 main.go:141] libmachine: Using API Version  1
	I1104 12:04:21.714378   86402 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:04:21.714696   86402 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:04:21.714872   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .DriverName
	I1104 12:04:21.716711   86402 out.go:177] * Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	I1104 12:04:21.718136   86402 driver.go:394] Setting default libvirt URI to qemu:///system
	I1104 12:04:21.718573   86402 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:04:21.718617   86402 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:04:21.733074   86402 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45605
	I1104 12:04:21.733525   86402 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:04:21.733939   86402 main.go:141] libmachine: Using API Version  1
	I1104 12:04:21.733955   86402 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:04:21.734252   86402 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:04:21.734410   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .DriverName
	I1104 12:04:21.770049   86402 out.go:177] * Using the kvm2 driver based on existing profile
	I1104 12:04:21.771735   86402 start.go:297] selected driver: kvm2
	I1104 12:04:21.771748   86402 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-589257 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-589257 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.180 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1104 12:04:21.771851   86402 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1104 12:04:21.772615   86402 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1104 12:04:21.772709   86402 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19906-19898/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1104 12:04:21.787662   86402 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1104 12:04:21.788158   86402 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1104 12:04:21.788201   86402 cni.go:84] Creating CNI manager for ""
	I1104 12:04:21.788238   86402 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1104 12:04:21.788282   86402 start.go:340] cluster config:
	{Name:old-k8s-version-589257 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-589257 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.180 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1104 12:04:21.788422   86402 iso.go:125] acquiring lock: {Name:mk00c7dd6e02d348844f079f7574057e15cae010 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1104 12:04:21.790364   86402 out.go:177] * Starting "old-k8s-version-589257" primary control-plane node in "old-k8s-version-589257" cluster
	I1104 12:04:20.849476   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:04:20.393451   86301 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1104 12:04:20.393484   86301 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1104 12:04:20.393492   86301 cache.go:56] Caching tarball of preloaded images
	I1104 12:04:20.393580   86301 preload.go:172] Found /home/jenkins/minikube-integration/19906-19898/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1104 12:04:20.393594   86301 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1104 12:04:20.393670   86301 profile.go:143] Saving config to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/default-k8s-diff-port-036892/config.json ...
	I1104 12:04:20.393863   86301 start.go:360] acquireMachinesLock for default-k8s-diff-port-036892: {Name:mka4ed965095cfb58c9b0f5916edff39b4236a2f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1104 12:04:21.791568   86402 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1104 12:04:21.791599   86402 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1104 12:04:21.791608   86402 cache.go:56] Caching tarball of preloaded images
	I1104 12:04:21.791668   86402 preload.go:172] Found /home/jenkins/minikube-integration/19906-19898/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1104 12:04:21.791678   86402 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1104 12:04:21.791755   86402 profile.go:143] Saving config to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/old-k8s-version-589257/config.json ...
	I1104 12:04:21.791918   86402 start.go:360] acquireMachinesLock for old-k8s-version-589257: {Name:mka4ed965095cfb58c9b0f5916edff39b4236a2f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1104 12:04:26.929512   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:04:30.001546   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:04:36.081486   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:04:39.153496   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:04:45.233535   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:04:48.305510   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:04:54.385555   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:04:57.457513   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:05:03.537513   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:05:06.609487   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:05:12.689475   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:05:15.761508   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:05:21.841502   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:05:24.913609   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:05:30.993499   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:05:34.065502   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:05:40.145511   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:05:43.217478   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:05:49.297518   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:05:52.369526   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:05:58.449509   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:06:01.521498   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:06:07.601506   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:06:10.673509   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:06:16.753487   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:06:19.825549   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:06:25.905526   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:06:28.977526   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:06:35.057466   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:06:38.129670   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:06:44.209517   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:06:47.281541   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:06:53.361542   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:06:56.433564   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:07:02.513462   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:07:05.585513   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:07:11.665480   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:07:14.737542   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:07:17.742001   85759 start.go:364] duration metric: took 4m26.438155925s to acquireMachinesLock for "embed-certs-325116"
	I1104 12:07:17.742060   85759 start.go:96] Skipping create...Using existing machine configuration
	I1104 12:07:17.742068   85759 fix.go:54] fixHost starting: 
	I1104 12:07:17.742418   85759 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:07:17.742470   85759 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:07:17.758611   85759 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43597
	I1104 12:07:17.759173   85759 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:07:17.759750   85759 main.go:141] libmachine: Using API Version  1
	I1104 12:07:17.759774   85759 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:07:17.760116   85759 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:07:17.760326   85759 main.go:141] libmachine: (embed-certs-325116) Calling .DriverName
	I1104 12:07:17.760498   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetState
	I1104 12:07:17.762313   85759 fix.go:112] recreateIfNeeded on embed-certs-325116: state=Stopped err=<nil>
	I1104 12:07:17.762335   85759 main.go:141] libmachine: (embed-certs-325116) Calling .DriverName
	W1104 12:07:17.762469   85759 fix.go:138] unexpected machine state, will restart: <nil>
	I1104 12:07:17.764411   85759 out.go:177] * Restarting existing kvm2 VM for "embed-certs-325116" ...
	I1104 12:07:17.739255   85500 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1104 12:07:17.739306   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetMachineName
	I1104 12:07:17.739691   85500 buildroot.go:166] provisioning hostname "no-preload-908370"
	I1104 12:07:17.739718   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetMachineName
	I1104 12:07:17.739888   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHHostname
	I1104 12:07:17.741864   85500 machine.go:96] duration metric: took 4m37.421766695s to provisionDockerMachine
	I1104 12:07:17.741908   85500 fix.go:56] duration metric: took 4m37.442993443s for fixHost
	I1104 12:07:17.741918   85500 start.go:83] releasing machines lock for "no-preload-908370", held for 4m37.443015642s
	W1104 12:07:17.741938   85500 start.go:714] error starting host: provision: host is not running
	W1104 12:07:17.742034   85500 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I1104 12:07:17.742044   85500 start.go:729] Will try again in 5 seconds ...
	I1104 12:07:17.765958   85759 main.go:141] libmachine: (embed-certs-325116) Calling .Start
	I1104 12:07:17.766220   85759 main.go:141] libmachine: (embed-certs-325116) Ensuring networks are active...
	I1104 12:07:17.767191   85759 main.go:141] libmachine: (embed-certs-325116) Ensuring network default is active
	I1104 12:07:17.767589   85759 main.go:141] libmachine: (embed-certs-325116) Ensuring network mk-embed-certs-325116 is active
	I1104 12:07:17.767984   85759 main.go:141] libmachine: (embed-certs-325116) Getting domain xml...
	I1104 12:07:17.768804   85759 main.go:141] libmachine: (embed-certs-325116) Creating domain...
	I1104 12:07:18.996135   85759 main.go:141] libmachine: (embed-certs-325116) Waiting to get IP...
	I1104 12:07:18.997002   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:18.997542   85759 main.go:141] libmachine: (embed-certs-325116) DBG | unable to find current IP address of domain embed-certs-325116 in network mk-embed-certs-325116
	I1104 12:07:18.997615   85759 main.go:141] libmachine: (embed-certs-325116) DBG | I1104 12:07:18.997513   87021 retry.go:31] will retry after 239.606839ms: waiting for machine to come up
	I1104 12:07:19.239054   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:19.239579   85759 main.go:141] libmachine: (embed-certs-325116) DBG | unable to find current IP address of domain embed-certs-325116 in network mk-embed-certs-325116
	I1104 12:07:19.239602   85759 main.go:141] libmachine: (embed-certs-325116) DBG | I1104 12:07:19.239528   87021 retry.go:31] will retry after 303.459257ms: waiting for machine to come up
	I1104 12:07:19.545134   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:19.545597   85759 main.go:141] libmachine: (embed-certs-325116) DBG | unable to find current IP address of domain embed-certs-325116 in network mk-embed-certs-325116
	I1104 12:07:19.545633   85759 main.go:141] libmachine: (embed-certs-325116) DBG | I1104 12:07:19.545544   87021 retry.go:31] will retry after 394.511523ms: waiting for machine to come up
	I1104 12:07:19.942226   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:19.942607   85759 main.go:141] libmachine: (embed-certs-325116) DBG | unable to find current IP address of domain embed-certs-325116 in network mk-embed-certs-325116
	I1104 12:07:19.942630   85759 main.go:141] libmachine: (embed-certs-325116) DBG | I1104 12:07:19.942576   87021 retry.go:31] will retry after 381.618515ms: waiting for machine to come up
	I1104 12:07:20.326265   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:20.326707   85759 main.go:141] libmachine: (embed-certs-325116) DBG | unable to find current IP address of domain embed-certs-325116 in network mk-embed-certs-325116
	I1104 12:07:20.326738   85759 main.go:141] libmachine: (embed-certs-325116) DBG | I1104 12:07:20.326651   87021 retry.go:31] will retry after 584.226748ms: waiting for machine to come up
	I1104 12:07:20.912117   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:20.912575   85759 main.go:141] libmachine: (embed-certs-325116) DBG | unable to find current IP address of domain embed-certs-325116 in network mk-embed-certs-325116
	I1104 12:07:20.912607   85759 main.go:141] libmachine: (embed-certs-325116) DBG | I1104 12:07:20.912524   87021 retry.go:31] will retry after 770.080519ms: waiting for machine to come up
	I1104 12:07:22.742250   85500 start.go:360] acquireMachinesLock for no-preload-908370: {Name:mka4ed965095cfb58c9b0f5916edff39b4236a2f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1104 12:07:21.684620   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:21.685074   85759 main.go:141] libmachine: (embed-certs-325116) DBG | unable to find current IP address of domain embed-certs-325116 in network mk-embed-certs-325116
	I1104 12:07:21.685103   85759 main.go:141] libmachine: (embed-certs-325116) DBG | I1104 12:07:21.685026   87021 retry.go:31] will retry after 1.170018806s: waiting for machine to come up
	I1104 12:07:22.856736   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:22.857104   85759 main.go:141] libmachine: (embed-certs-325116) DBG | unable to find current IP address of domain embed-certs-325116 in network mk-embed-certs-325116
	I1104 12:07:22.857132   85759 main.go:141] libmachine: (embed-certs-325116) DBG | I1104 12:07:22.857048   87021 retry.go:31] will retry after 1.467304538s: waiting for machine to come up
	I1104 12:07:24.326735   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:24.327197   85759 main.go:141] libmachine: (embed-certs-325116) DBG | unable to find current IP address of domain embed-certs-325116 in network mk-embed-certs-325116
	I1104 12:07:24.327220   85759 main.go:141] libmachine: (embed-certs-325116) DBG | I1104 12:07:24.327148   87021 retry.go:31] will retry after 1.676202737s: waiting for machine to come up
	I1104 12:07:26.006035   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:26.006515   85759 main.go:141] libmachine: (embed-certs-325116) DBG | unable to find current IP address of domain embed-certs-325116 in network mk-embed-certs-325116
	I1104 12:07:26.006538   85759 main.go:141] libmachine: (embed-certs-325116) DBG | I1104 12:07:26.006460   87021 retry.go:31] will retry after 1.8778328s: waiting for machine to come up
	I1104 12:07:27.886226   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:27.886634   85759 main.go:141] libmachine: (embed-certs-325116) DBG | unable to find current IP address of domain embed-certs-325116 in network mk-embed-certs-325116
	I1104 12:07:27.886656   85759 main.go:141] libmachine: (embed-certs-325116) DBG | I1104 12:07:27.886579   87021 retry.go:31] will retry after 2.886548821s: waiting for machine to come up
	I1104 12:07:30.776677   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:30.777080   85759 main.go:141] libmachine: (embed-certs-325116) DBG | unable to find current IP address of domain embed-certs-325116 in network mk-embed-certs-325116
	I1104 12:07:30.777102   85759 main.go:141] libmachine: (embed-certs-325116) DBG | I1104 12:07:30.777039   87021 retry.go:31] will retry after 3.108966144s: waiting for machine to come up
	I1104 12:07:35.049920   86301 start.go:364] duration metric: took 3m14.656022924s to acquireMachinesLock for "default-k8s-diff-port-036892"
	I1104 12:07:35.050007   86301 start.go:96] Skipping create...Using existing machine configuration
	I1104 12:07:35.050019   86301 fix.go:54] fixHost starting: 
	I1104 12:07:35.050381   86301 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:07:35.050436   86301 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:07:35.067928   86301 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38865
	I1104 12:07:35.068445   86301 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:07:35.068953   86301 main.go:141] libmachine: Using API Version  1
	I1104 12:07:35.068976   86301 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:07:35.069353   86301 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:07:35.069560   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .DriverName
	I1104 12:07:35.069692   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetState
	I1104 12:07:35.071231   86301 fix.go:112] recreateIfNeeded on default-k8s-diff-port-036892: state=Stopped err=<nil>
	I1104 12:07:35.071252   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .DriverName
	W1104 12:07:35.071401   86301 fix.go:138] unexpected machine state, will restart: <nil>
	I1104 12:07:35.073762   86301 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-036892" ...
	I1104 12:07:35.075114   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .Start
	I1104 12:07:35.075311   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Ensuring networks are active...
	I1104 12:07:35.076105   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Ensuring network default is active
	I1104 12:07:35.076534   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Ensuring network mk-default-k8s-diff-port-036892 is active
	I1104 12:07:35.076946   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Getting domain xml...
	I1104 12:07:35.077641   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Creating domain...
	I1104 12:07:33.887738   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:33.888147   85759 main.go:141] libmachine: (embed-certs-325116) Found IP for machine: 192.168.39.47
	I1104 12:07:33.888176   85759 main.go:141] libmachine: (embed-certs-325116) Reserving static IP address...
	I1104 12:07:33.888206   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has current primary IP address 192.168.39.47 and MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:33.888737   85759 main.go:141] libmachine: (embed-certs-325116) DBG | found host DHCP lease matching {name: "embed-certs-325116", mac: "52:54:00:bd:ab:49", ip: "192.168.39.47"} in network mk-embed-certs-325116: {Iface:virbr1 ExpiryTime:2024-11-04 13:07:28 +0000 UTC Type:0 Mac:52:54:00:bd:ab:49 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:embed-certs-325116 Clientid:01:52:54:00:bd:ab:49}
	I1104 12:07:33.888769   85759 main.go:141] libmachine: (embed-certs-325116) DBG | skip adding static IP to network mk-embed-certs-325116 - found existing host DHCP lease matching {name: "embed-certs-325116", mac: "52:54:00:bd:ab:49", ip: "192.168.39.47"}
	I1104 12:07:33.888783   85759 main.go:141] libmachine: (embed-certs-325116) Reserved static IP address: 192.168.39.47
	I1104 12:07:33.888795   85759 main.go:141] libmachine: (embed-certs-325116) Waiting for SSH to be available...
	I1104 12:07:33.888812   85759 main.go:141] libmachine: (embed-certs-325116) DBG | Getting to WaitForSSH function...
	I1104 12:07:33.891130   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:33.891493   85759 main.go:141] libmachine: (embed-certs-325116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:ab:49", ip: ""} in network mk-embed-certs-325116: {Iface:virbr1 ExpiryTime:2024-11-04 13:07:28 +0000 UTC Type:0 Mac:52:54:00:bd:ab:49 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:embed-certs-325116 Clientid:01:52:54:00:bd:ab:49}
	I1104 12:07:33.891520   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined IP address 192.168.39.47 and MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:33.891670   85759 main.go:141] libmachine: (embed-certs-325116) DBG | Using SSH client type: external
	I1104 12:07:33.891693   85759 main.go:141] libmachine: (embed-certs-325116) DBG | Using SSH private key: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/embed-certs-325116/id_rsa (-rw-------)
	I1104 12:07:33.891732   85759 main.go:141] libmachine: (embed-certs-325116) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.47 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19906-19898/.minikube/machines/embed-certs-325116/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1104 12:07:33.891748   85759 main.go:141] libmachine: (embed-certs-325116) DBG | About to run SSH command:
	I1104 12:07:33.891773   85759 main.go:141] libmachine: (embed-certs-325116) DBG | exit 0
	I1104 12:07:34.012989   85759 main.go:141] libmachine: (embed-certs-325116) DBG | SSH cmd err, output: <nil>: 
	I1104 12:07:34.013457   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetConfigRaw
	I1104 12:07:34.014162   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetIP
	I1104 12:07:34.016645   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:34.017028   85759 main.go:141] libmachine: (embed-certs-325116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:ab:49", ip: ""} in network mk-embed-certs-325116: {Iface:virbr1 ExpiryTime:2024-11-04 13:07:28 +0000 UTC Type:0 Mac:52:54:00:bd:ab:49 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:embed-certs-325116 Clientid:01:52:54:00:bd:ab:49}
	I1104 12:07:34.017062   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined IP address 192.168.39.47 and MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:34.017347   85759 profile.go:143] Saving config to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/embed-certs-325116/config.json ...
	I1104 12:07:34.017577   85759 machine.go:93] provisionDockerMachine start ...
	I1104 12:07:34.017596   85759 main.go:141] libmachine: (embed-certs-325116) Calling .DriverName
	I1104 12:07:34.017824   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHHostname
	I1104 12:07:34.020134   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:34.020416   85759 main.go:141] libmachine: (embed-certs-325116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:ab:49", ip: ""} in network mk-embed-certs-325116: {Iface:virbr1 ExpiryTime:2024-11-04 13:07:28 +0000 UTC Type:0 Mac:52:54:00:bd:ab:49 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:embed-certs-325116 Clientid:01:52:54:00:bd:ab:49}
	I1104 12:07:34.020449   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined IP address 192.168.39.47 and MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:34.020570   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHPort
	I1104 12:07:34.020745   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHKeyPath
	I1104 12:07:34.020905   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHKeyPath
	I1104 12:07:34.021059   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHUsername
	I1104 12:07:34.021313   85759 main.go:141] libmachine: Using SSH client type: native
	I1104 12:07:34.021505   85759 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.47 22 <nil> <nil>}
	I1104 12:07:34.021515   85759 main.go:141] libmachine: About to run SSH command:
	hostname
	I1104 12:07:34.125266   85759 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1104 12:07:34.125305   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetMachineName
	I1104 12:07:34.125556   85759 buildroot.go:166] provisioning hostname "embed-certs-325116"
	I1104 12:07:34.125583   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetMachineName
	I1104 12:07:34.125781   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHHostname
	I1104 12:07:34.128180   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:34.128486   85759 main.go:141] libmachine: (embed-certs-325116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:ab:49", ip: ""} in network mk-embed-certs-325116: {Iface:virbr1 ExpiryTime:2024-11-04 13:07:28 +0000 UTC Type:0 Mac:52:54:00:bd:ab:49 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:embed-certs-325116 Clientid:01:52:54:00:bd:ab:49}
	I1104 12:07:34.128514   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined IP address 192.168.39.47 and MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:34.128603   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHPort
	I1104 12:07:34.128758   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHKeyPath
	I1104 12:07:34.128890   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHKeyPath
	I1104 12:07:34.128996   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHUsername
	I1104 12:07:34.129166   85759 main.go:141] libmachine: Using SSH client type: native
	I1104 12:07:34.129371   85759 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.47 22 <nil> <nil>}
	I1104 12:07:34.129394   85759 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-325116 && echo "embed-certs-325116" | sudo tee /etc/hostname
	I1104 12:07:34.242027   85759 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-325116
	
	I1104 12:07:34.242054   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHHostname
	I1104 12:07:34.244671   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:34.244984   85759 main.go:141] libmachine: (embed-certs-325116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:ab:49", ip: ""} in network mk-embed-certs-325116: {Iface:virbr1 ExpiryTime:2024-11-04 13:07:28 +0000 UTC Type:0 Mac:52:54:00:bd:ab:49 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:embed-certs-325116 Clientid:01:52:54:00:bd:ab:49}
	I1104 12:07:34.245019   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined IP address 192.168.39.47 and MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:34.245159   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHPort
	I1104 12:07:34.245337   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHKeyPath
	I1104 12:07:34.245514   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHKeyPath
	I1104 12:07:34.245661   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHUsername
	I1104 12:07:34.245810   85759 main.go:141] libmachine: Using SSH client type: native
	I1104 12:07:34.245971   85759 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.47 22 <nil> <nil>}
	I1104 12:07:34.245986   85759 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-325116' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-325116/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-325116' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1104 12:07:34.357178   85759 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1104 12:07:34.357204   85759 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19906-19898/.minikube CaCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19906-19898/.minikube}
	I1104 12:07:34.357220   85759 buildroot.go:174] setting up certificates
	I1104 12:07:34.357241   85759 provision.go:84] configureAuth start
	I1104 12:07:34.357250   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetMachineName
	I1104 12:07:34.357533   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetIP
	I1104 12:07:34.359993   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:34.360308   85759 main.go:141] libmachine: (embed-certs-325116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:ab:49", ip: ""} in network mk-embed-certs-325116: {Iface:virbr1 ExpiryTime:2024-11-04 13:07:28 +0000 UTC Type:0 Mac:52:54:00:bd:ab:49 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:embed-certs-325116 Clientid:01:52:54:00:bd:ab:49}
	I1104 12:07:34.360327   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined IP address 192.168.39.47 and MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:34.360533   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHHostname
	I1104 12:07:34.362459   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:34.362750   85759 main.go:141] libmachine: (embed-certs-325116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:ab:49", ip: ""} in network mk-embed-certs-325116: {Iface:virbr1 ExpiryTime:2024-11-04 13:07:28 +0000 UTC Type:0 Mac:52:54:00:bd:ab:49 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:embed-certs-325116 Clientid:01:52:54:00:bd:ab:49}
	I1104 12:07:34.362786   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined IP address 192.168.39.47 and MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:34.362932   85759 provision.go:143] copyHostCerts
	I1104 12:07:34.362986   85759 exec_runner.go:144] found /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem, removing ...
	I1104 12:07:34.363022   85759 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem
	I1104 12:07:34.363109   85759 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem (1123 bytes)
	I1104 12:07:34.363231   85759 exec_runner.go:144] found /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem, removing ...
	I1104 12:07:34.363242   85759 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem
	I1104 12:07:34.363282   85759 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem (1679 bytes)
	I1104 12:07:34.363357   85759 exec_runner.go:144] found /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem, removing ...
	I1104 12:07:34.363368   85759 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem
	I1104 12:07:34.363399   85759 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem (1078 bytes)
	I1104 12:07:34.363503   85759 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem org=jenkins.embed-certs-325116 san=[127.0.0.1 192.168.39.47 embed-certs-325116 localhost minikube]
	I1104 12:07:34.453223   85759 provision.go:177] copyRemoteCerts
	I1104 12:07:34.453295   85759 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1104 12:07:34.453317   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHHostname
	I1104 12:07:34.455736   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:34.456022   85759 main.go:141] libmachine: (embed-certs-325116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:ab:49", ip: ""} in network mk-embed-certs-325116: {Iface:virbr1 ExpiryTime:2024-11-04 13:07:28 +0000 UTC Type:0 Mac:52:54:00:bd:ab:49 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:embed-certs-325116 Clientid:01:52:54:00:bd:ab:49}
	I1104 12:07:34.456054   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined IP address 192.168.39.47 and MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:34.456230   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHPort
	I1104 12:07:34.456406   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHKeyPath
	I1104 12:07:34.456539   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHUsername
	I1104 12:07:34.456631   85759 sshutil.go:53] new ssh client: &{IP:192.168.39.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/embed-certs-325116/id_rsa Username:docker}
	I1104 12:07:34.539172   85759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1104 12:07:34.561889   85759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1104 12:07:34.585111   85759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1104 12:07:34.607449   85759 provision.go:87] duration metric: took 250.195255ms to configureAuth
	I1104 12:07:34.607495   85759 buildroot.go:189] setting minikube options for container-runtime
	I1104 12:07:34.607809   85759 config.go:182] Loaded profile config "embed-certs-325116": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1104 12:07:34.607952   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHHostname
	I1104 12:07:34.610672   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:34.611009   85759 main.go:141] libmachine: (embed-certs-325116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:ab:49", ip: ""} in network mk-embed-certs-325116: {Iface:virbr1 ExpiryTime:2024-11-04 13:07:28 +0000 UTC Type:0 Mac:52:54:00:bd:ab:49 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:embed-certs-325116 Clientid:01:52:54:00:bd:ab:49}
	I1104 12:07:34.611032   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined IP address 192.168.39.47 and MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:34.611253   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHPort
	I1104 12:07:34.611444   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHKeyPath
	I1104 12:07:34.611600   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHKeyPath
	I1104 12:07:34.611739   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHUsername
	I1104 12:07:34.611917   85759 main.go:141] libmachine: Using SSH client type: native
	I1104 12:07:34.612086   85759 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.47 22 <nil> <nil>}
	I1104 12:07:34.612101   85759 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1104 12:07:34.823086   85759 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1104 12:07:34.823114   85759 machine.go:96] duration metric: took 805.522353ms to provisionDockerMachine
	I1104 12:07:34.823128   85759 start.go:293] postStartSetup for "embed-certs-325116" (driver="kvm2")
	I1104 12:07:34.823138   85759 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1104 12:07:34.823174   85759 main.go:141] libmachine: (embed-certs-325116) Calling .DriverName
	I1104 12:07:34.823451   85759 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1104 12:07:34.823489   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHHostname
	I1104 12:07:34.826112   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:34.826453   85759 main.go:141] libmachine: (embed-certs-325116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:ab:49", ip: ""} in network mk-embed-certs-325116: {Iface:virbr1 ExpiryTime:2024-11-04 13:07:28 +0000 UTC Type:0 Mac:52:54:00:bd:ab:49 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:embed-certs-325116 Clientid:01:52:54:00:bd:ab:49}
	I1104 12:07:34.826482   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined IP address 192.168.39.47 and MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:34.826581   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHPort
	I1104 12:07:34.826756   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHKeyPath
	I1104 12:07:34.826886   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHUsername
	I1104 12:07:34.826998   85759 sshutil.go:53] new ssh client: &{IP:192.168.39.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/embed-certs-325116/id_rsa Username:docker}
	I1104 12:07:34.907354   85759 ssh_runner.go:195] Run: cat /etc/os-release
	I1104 12:07:34.911229   85759 info.go:137] Remote host: Buildroot 2023.02.9
	I1104 12:07:34.911246   85759 filesync.go:126] Scanning /home/jenkins/minikube-integration/19906-19898/.minikube/addons for local assets ...
	I1104 12:07:34.911316   85759 filesync.go:126] Scanning /home/jenkins/minikube-integration/19906-19898/.minikube/files for local assets ...
	I1104 12:07:34.911402   85759 filesync.go:149] local asset: /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem -> 272182.pem in /etc/ssl/certs
	I1104 12:07:34.911516   85759 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1104 12:07:34.920149   85759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem --> /etc/ssl/certs/272182.pem (1708 bytes)
	I1104 12:07:34.942468   85759 start.go:296] duration metric: took 119.32654ms for postStartSetup
	I1104 12:07:34.942517   85759 fix.go:56] duration metric: took 17.200448721s for fixHost
	I1104 12:07:34.942540   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHHostname
	I1104 12:07:34.945295   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:34.945659   85759 main.go:141] libmachine: (embed-certs-325116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:ab:49", ip: ""} in network mk-embed-certs-325116: {Iface:virbr1 ExpiryTime:2024-11-04 13:07:28 +0000 UTC Type:0 Mac:52:54:00:bd:ab:49 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:embed-certs-325116 Clientid:01:52:54:00:bd:ab:49}
	I1104 12:07:34.945685   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined IP address 192.168.39.47 and MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:34.945847   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHPort
	I1104 12:07:34.946006   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHKeyPath
	I1104 12:07:34.946173   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHKeyPath
	I1104 12:07:34.946311   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHUsername
	I1104 12:07:34.946442   85759 main.go:141] libmachine: Using SSH client type: native
	I1104 12:07:34.946583   85759 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.47 22 <nil> <nil>}
	I1104 12:07:34.946592   85759 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1104 12:07:35.049767   85759 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730722055.017047529
	
	I1104 12:07:35.049790   85759 fix.go:216] guest clock: 1730722055.017047529
	I1104 12:07:35.049797   85759 fix.go:229] Guest: 2024-11-04 12:07:35.017047529 +0000 UTC Remote: 2024-11-04 12:07:34.942522008 +0000 UTC m=+283.781167350 (delta=74.525521ms)
	I1104 12:07:35.049829   85759 fix.go:200] guest clock delta is within tolerance: 74.525521ms
	I1104 12:07:35.049834   85759 start.go:83] releasing machines lock for "embed-certs-325116", held for 17.307794416s
	I1104 12:07:35.049859   85759 main.go:141] libmachine: (embed-certs-325116) Calling .DriverName
	I1104 12:07:35.050137   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetIP
	I1104 12:07:35.052845   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:35.053238   85759 main.go:141] libmachine: (embed-certs-325116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:ab:49", ip: ""} in network mk-embed-certs-325116: {Iface:virbr1 ExpiryTime:2024-11-04 13:07:28 +0000 UTC Type:0 Mac:52:54:00:bd:ab:49 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:embed-certs-325116 Clientid:01:52:54:00:bd:ab:49}
	I1104 12:07:35.053269   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined IP address 192.168.39.47 and MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:35.053454   85759 main.go:141] libmachine: (embed-certs-325116) Calling .DriverName
	I1104 12:07:35.054050   85759 main.go:141] libmachine: (embed-certs-325116) Calling .DriverName
	I1104 12:07:35.054239   85759 main.go:141] libmachine: (embed-certs-325116) Calling .DriverName
	I1104 12:07:35.054337   85759 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1104 12:07:35.054383   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHHostname
	I1104 12:07:35.054502   85759 ssh_runner.go:195] Run: cat /version.json
	I1104 12:07:35.054539   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHHostname
	I1104 12:07:35.057289   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:35.057391   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:35.057733   85759 main.go:141] libmachine: (embed-certs-325116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:ab:49", ip: ""} in network mk-embed-certs-325116: {Iface:virbr1 ExpiryTime:2024-11-04 13:07:28 +0000 UTC Type:0 Mac:52:54:00:bd:ab:49 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:embed-certs-325116 Clientid:01:52:54:00:bd:ab:49}
	I1104 12:07:35.057778   85759 main.go:141] libmachine: (embed-certs-325116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:ab:49", ip: ""} in network mk-embed-certs-325116: {Iface:virbr1 ExpiryTime:2024-11-04 13:07:28 +0000 UTC Type:0 Mac:52:54:00:bd:ab:49 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:embed-certs-325116 Clientid:01:52:54:00:bd:ab:49}
	I1104 12:07:35.057802   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined IP address 192.168.39.47 and MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:35.057820   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined IP address 192.168.39.47 and MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:35.057959   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHPort
	I1104 12:07:35.057996   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHPort
	I1104 12:07:35.058110   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHKeyPath
	I1104 12:07:35.058296   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHKeyPath
	I1104 12:07:35.058316   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHUsername
	I1104 12:07:35.058485   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHUsername
	I1104 12:07:35.058485   85759 sshutil.go:53] new ssh client: &{IP:192.168.39.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/embed-certs-325116/id_rsa Username:docker}
	I1104 12:07:35.058658   85759 sshutil.go:53] new ssh client: &{IP:192.168.39.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/embed-certs-325116/id_rsa Username:docker}
	I1104 12:07:35.134602   85759 ssh_runner.go:195] Run: systemctl --version
	I1104 12:07:35.158961   85759 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1104 12:07:35.303038   85759 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1104 12:07:35.309611   85759 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1104 12:07:35.309674   85759 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1104 12:07:35.325082   85759 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1104 12:07:35.325142   85759 start.go:495] detecting cgroup driver to use...
	I1104 12:07:35.325211   85759 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1104 12:07:35.341673   85759 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1104 12:07:35.355506   85759 docker.go:217] disabling cri-docker service (if available) ...
	I1104 12:07:35.355569   85759 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1104 12:07:35.369017   85759 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1104 12:07:35.382745   85759 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1104 12:07:35.498985   85759 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1104 12:07:35.648628   85759 docker.go:233] disabling docker service ...
	I1104 12:07:35.648702   85759 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1104 12:07:35.666912   85759 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1104 12:07:35.679786   85759 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1104 12:07:35.799284   85759 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1104 12:07:35.931842   85759 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1104 12:07:35.945707   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1104 12:07:35.965183   85759 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1104 12:07:35.965269   85759 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:07:35.975446   85759 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1104 12:07:35.975514   85759 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:07:35.985968   85759 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:07:35.996462   85759 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:07:36.006840   85759 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1104 12:07:36.017174   85759 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:07:36.027013   85759 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:07:36.044572   85759 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:07:36.054046   85759 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1104 12:07:36.063355   85759 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1104 12:07:36.063399   85759 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1104 12:07:36.075157   85759 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1104 12:07:36.084713   85759 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1104 12:07:36.205088   85759 ssh_runner.go:195] Run: sudo systemctl restart crio
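Annotation: the lines above capture how the runner reconfigures CRI-O on the guest before restarting it: crictl is pointed at the CRI-O socket, the pause image and cgroupfs driver are pinned in the 02-crio.conf drop-in, and because the bridge-nf-call-iptables sysctl is missing, br_netfilter is loaded and IPv4 forwarding enabled as a fallback. A minimal Go sketch of the same sequence, assuming it is run directly on a Linux host as root rather than through minikube's SSH runner, could look like this:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // run executes one shell command and surfaces its combined output on failure.
    func run(cmd string) error {
        out, err := exec.Command("sh", "-c", cmd).CombinedOutput()
        if err != nil {
            return fmt.Errorf("%q: %v\n%s", cmd, err, out)
        }
        return nil
    }

    func main() {
        steps := []string{
            // Point crictl at the CRI-O socket.
            `printf '%s\n' 'runtime-endpoint: unix:///var/run/crio/crio.sock' | sudo tee /etc/crictl.yaml`,
            // Pin the pause image and the cgroupfs driver in CRI-O's drop-in config.
            `sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf`,
            `sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf`,
            // The log shows the sysctl probe failing first, then br_netfilter
            // being loaded and IPv4 forwarding enabled as the fallback.
            `sudo modprobe br_netfilter`,
            `sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'`,
            // Pick up the new configuration.
            `sudo systemctl daemon-reload && sudo systemctl restart crio`,
        }
        for _, s := range steps {
            if err := run(s); err != nil {
                fmt.Println(err)
                return
            }
        }
    }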
	I1104 12:07:36.299330   85759 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1104 12:07:36.299423   85759 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1104 12:07:36.304194   85759 start.go:563] Will wait 60s for crictl version
	I1104 12:07:36.304248   85759 ssh_runner.go:195] Run: which crictl
	I1104 12:07:36.308041   85759 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1104 12:07:36.349114   85759 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1104 12:07:36.349264   85759 ssh_runner.go:195] Run: crio --version
	I1104 12:07:36.378677   85759 ssh_runner.go:195] Run: crio --version
	I1104 12:07:36.406751   85759 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1104 12:07:36.335603   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Waiting to get IP...
	I1104 12:07:36.336431   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:36.336921   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | unable to find current IP address of domain default-k8s-diff-port-036892 in network mk-default-k8s-diff-port-036892
	I1104 12:07:36.337007   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | I1104 12:07:36.336911   87142 retry.go:31] will retry after 289.750795ms: waiting for machine to come up
	I1104 12:07:36.628712   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:36.629301   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | unable to find current IP address of domain default-k8s-diff-port-036892 in network mk-default-k8s-diff-port-036892
	I1104 12:07:36.629419   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | I1104 12:07:36.629345   87142 retry.go:31] will retry after 356.596321ms: waiting for machine to come up
	I1104 12:07:36.988173   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:36.988663   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | unable to find current IP address of domain default-k8s-diff-port-036892 in network mk-default-k8s-diff-port-036892
	I1104 12:07:36.988713   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | I1104 12:07:36.988626   87142 retry.go:31] will retry after 446.62367ms: waiting for machine to come up
	I1104 12:07:37.437529   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:37.438120   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | unable to find current IP address of domain default-k8s-diff-port-036892 in network mk-default-k8s-diff-port-036892
	I1104 12:07:37.438174   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | I1104 12:07:37.438023   87142 retry.go:31] will retry after 482.072639ms: waiting for machine to come up
	I1104 12:07:37.921514   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:37.922025   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | unable to find current IP address of domain default-k8s-diff-port-036892 in network mk-default-k8s-diff-port-036892
	I1104 12:07:37.922056   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | I1104 12:07:37.921983   87142 retry.go:31] will retry after 645.10615ms: waiting for machine to come up
	I1104 12:07:38.569009   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:38.569524   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | unable to find current IP address of domain default-k8s-diff-port-036892 in network mk-default-k8s-diff-port-036892
	I1104 12:07:38.569566   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | I1104 12:07:38.569432   87142 retry.go:31] will retry after 841.352802ms: waiting for machine to come up
	I1104 12:07:39.412662   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:39.413091   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | unable to find current IP address of domain default-k8s-diff-port-036892 in network mk-default-k8s-diff-port-036892
	I1104 12:07:39.413112   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | I1104 12:07:39.413047   87142 retry.go:31] will retry after 878.218722ms: waiting for machine to come up
	I1104 12:07:36.407939   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetIP
	I1104 12:07:36.411021   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:36.411378   85759 main.go:141] libmachine: (embed-certs-325116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:ab:49", ip: ""} in network mk-embed-certs-325116: {Iface:virbr1 ExpiryTime:2024-11-04 13:07:28 +0000 UTC Type:0 Mac:52:54:00:bd:ab:49 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:embed-certs-325116 Clientid:01:52:54:00:bd:ab:49}
	I1104 12:07:36.411408   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined IP address 192.168.39.47 and MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:36.411599   85759 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1104 12:07:36.415528   85759 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
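Annotation: the one-liner above is minikube's idempotent hosts-file update: it strips any existing line ending in a tab plus host.minikube.internal, appends a fresh "192.168.39.1	host.minikube.internal" entry to a temp file, and copies the result back over /etc/hosts. A rough Go sketch of the same idea, operating on a hosts file path passed on the command line (a hypothetical stand-in, not minikube's code), might be:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // upsertHost rewrites a hosts file so that exactly one line maps name to ip,
    // mirroring the grep -v / echo / cp idiom in the log above.
    func upsertHost(path, ip, name string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
        var kept []string
        for _, line := range lines {
            if strings.HasSuffix(line, "\t"+name) {
                continue // drop any stale entry for this name
            }
            kept = append(kept, line)
        }
        kept = append(kept, ip+"\t"+name)
        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
        if len(os.Args) < 2 {
            fmt.Fprintln(os.Stderr, "usage: upserthost <hosts-file>")
            os.Exit(1)
        }
        if err := upsertHost(os.Args[1], "192.168.39.1", "host.minikube.internal"); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }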
	I1104 12:07:36.427484   85759 kubeadm.go:883] updating cluster {Name:embed-certs-325116 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-325116 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.47 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1104 12:07:36.427616   85759 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1104 12:07:36.427684   85759 ssh_runner.go:195] Run: sudo crictl images --output json
	I1104 12:07:36.460332   85759 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1104 12:07:36.460406   85759 ssh_runner.go:195] Run: which lz4
	I1104 12:07:36.464187   85759 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1104 12:07:36.468140   85759 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1104 12:07:36.468177   85759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1104 12:07:37.703067   85759 crio.go:462] duration metric: took 1.238901186s to copy over tarball
	I1104 12:07:37.703136   85759 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1104 12:07:39.803761   85759 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.100578378s)
	I1104 12:07:39.803795   85759 crio.go:469] duration metric: took 2.100697698s to extract the tarball
	I1104 12:07:39.803805   85759 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1104 12:07:39.840536   85759 ssh_runner.go:195] Run: sudo crictl images --output json
	I1104 12:07:39.883410   85759 crio.go:514] all images are preloaded for cri-o runtime.
	I1104 12:07:39.883431   85759 cache_images.go:84] Images are preloaded, skipping loading
	I1104 12:07:39.883438   85759 kubeadm.go:934] updating node { 192.168.39.47 8443 v1.31.2 crio true true} ...
	I1104 12:07:39.883531   85759 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-325116 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.47
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:embed-certs-325116 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1104 12:07:39.883608   85759 ssh_runner.go:195] Run: crio config
	I1104 12:07:39.928280   85759 cni.go:84] Creating CNI manager for ""
	I1104 12:07:39.928303   85759 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1104 12:07:39.928313   85759 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1104 12:07:39.928333   85759 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.47 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-325116 NodeName:embed-certs-325116 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.47"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.47 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1104 12:07:39.928440   85759 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.47
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-325116"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.47"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.47"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1104 12:07:39.928495   85759 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1104 12:07:39.938496   85759 binaries.go:44] Found k8s binaries, skipping transfer
	I1104 12:07:39.938568   85759 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1104 12:07:39.947809   85759 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1104 12:07:39.963319   85759 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1104 12:07:39.978789   85759 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2295 bytes)
	I1104 12:07:39.994910   85759 ssh_runner.go:195] Run: grep 192.168.39.47	control-plane.minikube.internal$ /etc/hosts
	I1104 12:07:39.998355   85759 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.47	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1104 12:07:40.010097   85759 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1104 12:07:40.118679   85759 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1104 12:07:40.134369   85759 certs.go:68] Setting up /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/embed-certs-325116 for IP: 192.168.39.47
	I1104 12:07:40.134391   85759 certs.go:194] generating shared ca certs ...
	I1104 12:07:40.134429   85759 certs.go:226] acquiring lock for ca certs: {Name:mk4fb0469da6697654d0290fd25edddd463f47c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 12:07:40.134612   85759 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.key
	I1104 12:07:40.134666   85759 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.key
	I1104 12:07:40.134680   85759 certs.go:256] generating profile certs ...
	I1104 12:07:40.134782   85759 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/embed-certs-325116/client.key
	I1104 12:07:40.134880   85759 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/embed-certs-325116/apiserver.key.36f6fb66
	I1104 12:07:40.134929   85759 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/embed-certs-325116/proxy-client.key
	I1104 12:07:40.135083   85759 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218.pem (1338 bytes)
	W1104 12:07:40.135124   85759 certs.go:480] ignoring /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218_empty.pem, impossibly tiny 0 bytes
	I1104 12:07:40.135140   85759 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem (1675 bytes)
	I1104 12:07:40.135225   85759 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem (1078 bytes)
	I1104 12:07:40.135281   85759 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem (1123 bytes)
	I1104 12:07:40.135315   85759 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem (1679 bytes)
	I1104 12:07:40.135380   85759 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem (1708 bytes)
	I1104 12:07:40.136240   85759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1104 12:07:40.179608   85759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1104 12:07:40.227851   85759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1104 12:07:40.255791   85759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1104 12:07:40.281672   85759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/embed-certs-325116/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1104 12:07:40.305960   85759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/embed-certs-325116/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1104 12:07:40.332465   85759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/embed-certs-325116/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1104 12:07:40.354950   85759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/embed-certs-325116/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1104 12:07:40.377476   85759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218.pem --> /usr/share/ca-certificates/27218.pem (1338 bytes)
	I1104 12:07:40.399291   85759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem --> /usr/share/ca-certificates/272182.pem (1708 bytes)
	I1104 12:07:40.420689   85759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1104 12:07:40.443610   85759 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1104 12:07:40.459706   85759 ssh_runner.go:195] Run: openssl version
	I1104 12:07:40.465244   85759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/27218.pem && ln -fs /usr/share/ca-certificates/27218.pem /etc/ssl/certs/27218.pem"
	I1104 12:07:40.475375   85759 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/27218.pem
	I1104 12:07:40.479676   85759 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  4 10:49 /usr/share/ca-certificates/27218.pem
	I1104 12:07:40.479748   85759 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/27218.pem
	I1104 12:07:40.485523   85759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/27218.pem /etc/ssl/certs/51391683.0"
	I1104 12:07:40.497163   85759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/272182.pem && ln -fs /usr/share/ca-certificates/272182.pem /etc/ssl/certs/272182.pem"
	I1104 12:07:40.509090   85759 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/272182.pem
	I1104 12:07:40.513617   85759 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  4 10:49 /usr/share/ca-certificates/272182.pem
	I1104 12:07:40.513685   85759 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/272182.pem
	I1104 12:07:40.519372   85759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/272182.pem /etc/ssl/certs/3ec20f2e.0"
	I1104 12:07:40.530944   85759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1104 12:07:40.542569   85759 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1104 12:07:40.546965   85759 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  4 10:38 /usr/share/ca-certificates/minikubeCA.pem
	I1104 12:07:40.547019   85759 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1104 12:07:40.552470   85759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
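Annotation: each CA copied to /usr/share/ca-certificates above is also linked into /etc/ssl/certs under its OpenSSL subject-hash name (for example minikubeCA.pem becomes b5213941.0), which is how OpenSSL-based clients locate trust anchors by hash. A hedged Go sketch of that step, shelling out to openssl the same way the log does and assuming write access to /etc/ssl/certs, might be:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // linkCACert symlinks certPath into /etc/ssl/certs under its OpenSSL
    // subject-hash name (<hash>.0), the naming scheme visible in the log
    // (e.g. minikubeCA.pem -> b5213941.0).
    func linkCACert(certPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return fmt.Errorf("hashing %s: %w", certPath, err)
        }
        link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
        _ = os.Remove(link) // replace any stale link
        return os.Symlink(certPath, link)
    }

    func main() {
        if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }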
	I1104 12:07:40.562456   85759 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1104 12:07:40.566967   85759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1104 12:07:40.572778   85759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1104 12:07:40.578409   85759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1104 12:07:40.584134   85759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1104 12:07:40.589880   85759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1104 12:07:40.595604   85759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
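Annotation: the openssl x509 -checkend 86400 calls above exit non-zero when a certificate expires within the next 24 hours, which is how the runner decides whether control-plane certs need regeneration before reuse. The same check written natively in Go (a sketch, not minikube's implementation, which shells out to openssl as shown) would be:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM certificate at path expires within d,
    // the same question `openssl x509 -checkend 86400` answers in the log above.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("%s: no PEM block found", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        fmt.Println("expires within 24h:", soon)
    }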
	I1104 12:07:40.601191   85759 kubeadm.go:392] StartCluster: {Name:embed-certs-325116 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-325116 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.47 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1104 12:07:40.601329   85759 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1104 12:07:40.601385   85759 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1104 12:07:40.642970   85759 cri.go:89] found id: ""
	I1104 12:07:40.643034   85759 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1104 12:07:40.653420   85759 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1104 12:07:40.653446   85759 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1104 12:07:40.653496   85759 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1104 12:07:40.663023   85759 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1104 12:07:40.664008   85759 kubeconfig.go:125] found "embed-certs-325116" server: "https://192.168.39.47:8443"
	I1104 12:07:40.665967   85759 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1104 12:07:40.675296   85759 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.47
	I1104 12:07:40.675324   85759 kubeadm.go:1160] stopping kube-system containers ...
	I1104 12:07:40.675336   85759 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1104 12:07:40.675384   85759 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1104 12:07:40.718457   85759 cri.go:89] found id: ""
	I1104 12:07:40.718543   85759 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1104 12:07:40.733875   85759 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1104 12:07:40.743811   85759 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1104 12:07:40.743835   85759 kubeadm.go:157] found existing configuration files:
	
	I1104 12:07:40.743889   85759 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1104 12:07:40.752987   85759 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1104 12:07:40.753048   85759 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1104 12:07:40.762296   85759 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1104 12:07:40.771048   85759 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1104 12:07:40.771112   85759 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1104 12:07:40.780163   85759 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1104 12:07:40.789500   85759 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1104 12:07:40.789566   85759 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1104 12:07:40.799200   85759 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1104 12:07:40.808061   85759 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1104 12:07:40.808121   85759 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1104 12:07:40.817445   85759 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1104 12:07:40.826803   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1104 12:07:40.934345   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1104 12:07:40.292591   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:40.293050   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | unable to find current IP address of domain default-k8s-diff-port-036892 in network mk-default-k8s-diff-port-036892
	I1104 12:07:40.293084   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | I1104 12:07:40.292988   87142 retry.go:31] will retry after 1.110341741s: waiting for machine to come up
	I1104 12:07:41.405407   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:41.405858   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | unable to find current IP address of domain default-k8s-diff-port-036892 in network mk-default-k8s-diff-port-036892
	I1104 12:07:41.405885   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | I1104 12:07:41.405800   87142 retry.go:31] will retry after 1.311587036s: waiting for machine to come up
	I1104 12:07:42.719157   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:42.719540   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | unable to find current IP address of domain default-k8s-diff-port-036892 in network mk-default-k8s-diff-port-036892
	I1104 12:07:42.719591   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | I1104 12:07:42.719530   87142 retry.go:31] will retry after 1.999866716s: waiting for machine to come up
	I1104 12:07:44.721872   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:44.722324   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | unable to find current IP address of domain default-k8s-diff-port-036892 in network mk-default-k8s-diff-port-036892
	I1104 12:07:44.722351   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | I1104 12:07:44.722278   87142 retry.go:31] will retry after 2.895241769s: waiting for machine to come up
	I1104 12:07:41.512710   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1104 12:07:41.729355   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1104 12:07:41.807064   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1104 12:07:41.888493   85759 api_server.go:52] waiting for apiserver process to appear ...
	I1104 12:07:41.888593   85759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:07:42.389674   85759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:07:42.889373   85759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:07:43.389705   85759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:07:43.889548   85759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:07:43.924248   85759 api_server.go:72] duration metric: took 2.035753888s to wait for apiserver process to appear ...
	I1104 12:07:43.924277   85759 api_server.go:88] waiting for apiserver healthz status ...
	I1104 12:07:43.924320   85759 api_server.go:253] Checking apiserver healthz at https://192.168.39.47:8443/healthz ...
	I1104 12:07:43.924831   85759 api_server.go:269] stopped: https://192.168.39.47:8443/healthz: Get "https://192.168.39.47:8443/healthz": dial tcp 192.168.39.47:8443: connect: connection refused
	I1104 12:07:44.424651   85759 api_server.go:253] Checking apiserver healthz at https://192.168.39.47:8443/healthz ...
	I1104 12:07:47.043002   85759 api_server.go:279] https://192.168.39.47:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1104 12:07:47.043037   85759 api_server.go:103] status: https://192.168.39.47:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1104 12:07:47.043054   85759 api_server.go:253] Checking apiserver healthz at https://192.168.39.47:8443/healthz ...
	I1104 12:07:47.104246   85759 api_server.go:279] https://192.168.39.47:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1104 12:07:47.104276   85759 api_server.go:103] status: https://192.168.39.47:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1104 12:07:47.424506   85759 api_server.go:253] Checking apiserver healthz at https://192.168.39.47:8443/healthz ...
	I1104 12:07:47.430506   85759 api_server.go:279] https://192.168.39.47:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1104 12:07:47.430544   85759 api_server.go:103] status: https://192.168.39.47:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1104 12:07:47.924409   85759 api_server.go:253] Checking apiserver healthz at https://192.168.39.47:8443/healthz ...
	I1104 12:07:47.937055   85759 api_server.go:279] https://192.168.39.47:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1104 12:07:47.937083   85759 api_server.go:103] status: https://192.168.39.47:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1104 12:07:48.424568   85759 api_server.go:253] Checking apiserver healthz at https://192.168.39.47:8443/healthz ...
	I1104 12:07:48.428850   85759 api_server.go:279] https://192.168.39.47:8443/healthz returned 200:
	ok
	I1104 12:07:48.436388   85759 api_server.go:141] control plane version: v1.31.2
	I1104 12:07:48.436411   85759 api_server.go:131] duration metric: took 4.512127349s to wait for apiserver health ...
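	The retry pattern above is the usual health wait: GET /healthz keeps returning 500 while the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks are still failing, and the wait ends on the first 200 "ok". A minimal Go sketch of such a polling loop, for illustration only (the endpoint URL, the skipped TLS verification, the 500 ms retry cadence and the helper name waitForHealthz are assumptions, not minikube's actual api_server.go):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // waitForHealthz polls the apiserver's /healthz endpoint until it answers
    // 200 "ok" or the deadline expires. Non-200 answers (e.g. 500 while the
    // bootstrap post-start hooks are still failing) are printed and retried.
    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            // Assumption for the sketch: a test harness talking straight to the
            // node IP typically skips certificate verification.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
            Timeout:   5 * time.Second,
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil // apiserver reports "ok"
                }
                fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
            }
            time.Sleep(500 * time.Millisecond) // assumed retry interval
        }
        return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
    }

    func main() {
        if err := waitForHealthz("https://192.168.39.47:8443/healthz", 4*time.Minute); err != nil {
            fmt.Println(err)
        }
    }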
	I1104 12:07:48.436420   85759 cni.go:84] Creating CNI manager for ""
	I1104 12:07:48.436427   85759 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1104 12:07:48.438220   85759 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1104 12:07:48.439495   85759 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1104 12:07:48.449650   85759 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1104 12:07:48.467313   85759 system_pods.go:43] waiting for kube-system pods to appear ...
	I1104 12:07:48.480777   85759 system_pods.go:59] 8 kube-system pods found
	I1104 12:07:48.480823   85759 system_pods.go:61] "coredns-7c65d6cfc9-mf8xg" [c0162005-7971-4161-9575-9f36c13d54f2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1104 12:07:48.480834   85759 system_pods.go:61] "etcd-embed-certs-325116" [4cfeeefb-d7e4-48b6-bea0-e9d967750770] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1104 12:07:48.480845   85759 system_pods.go:61] "kube-apiserver-embed-certs-325116" [69ad8209-af11-4479-b4f7-9991f98d74b9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1104 12:07:48.480859   85759 system_pods.go:61] "kube-controller-manager-embed-certs-325116" [1ba1fbaf-e1e2-4ca7-aef5-84c4410143c4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1104 12:07:48.480876   85759 system_pods.go:61] "kube-proxy-phzgx" [4ea64f2c-7568-486d-9941-f89ed4221f35] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1104 12:07:48.480893   85759 system_pods.go:61] "kube-scheduler-embed-certs-325116" [168359e4-eda1-4fb6-ab45-03e888466702] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1104 12:07:48.480907   85759 system_pods.go:61] "metrics-server-6867b74b74-knfd4" [5b3ef856-5b69-44b1-ae29-4a58bf235e41] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1104 12:07:48.480916   85759 system_pods.go:61] "storage-provisioner" [0dabcf5a-028b-4ab6-8af4-be25abaeb9b5] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1104 12:07:48.480928   85759 system_pods.go:74] duration metric: took 13.592864ms to wait for pod list to return data ...
	I1104 12:07:48.480947   85759 node_conditions.go:102] verifying NodePressure condition ...
	I1104 12:07:48.487234   85759 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1104 12:07:48.487271   85759 node_conditions.go:123] node cpu capacity is 2
	I1104 12:07:48.487284   85759 node_conditions.go:105] duration metric: took 6.331259ms to run NodePressure ...
	I1104 12:07:48.487313   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1104 12:07:48.756654   85759 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1104 12:07:48.764840   85759 kubeadm.go:739] kubelet initialised
	I1104 12:07:48.764863   85759 kubeadm.go:740] duration metric: took 8.175857ms waiting for restarted kubelet to initialise ...
	I1104 12:07:48.764871   85759 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1104 12:07:48.772653   85759 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-mf8xg" in "kube-system" namespace to be "Ready" ...
	I1104 12:07:48.784158   85759 pod_ready.go:98] node "embed-certs-325116" hosting pod "coredns-7c65d6cfc9-mf8xg" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-325116" has status "Ready":"False"
	I1104 12:07:48.784198   85759 pod_ready.go:82] duration metric: took 11.515605ms for pod "coredns-7c65d6cfc9-mf8xg" in "kube-system" namespace to be "Ready" ...
	E1104 12:07:48.784211   85759 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-325116" hosting pod "coredns-7c65d6cfc9-mf8xg" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-325116" has status "Ready":"False"
	I1104 12:07:48.784220   85759 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-325116" in "kube-system" namespace to be "Ready" ...
	I1104 12:07:48.791264   85759 pod_ready.go:98] node "embed-certs-325116" hosting pod "etcd-embed-certs-325116" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-325116" has status "Ready":"False"
	I1104 12:07:48.791297   85759 pod_ready.go:82] duration metric: took 7.066247ms for pod "etcd-embed-certs-325116" in "kube-system" namespace to be "Ready" ...
	E1104 12:07:48.791310   85759 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-325116" hosting pod "etcd-embed-certs-325116" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-325116" has status "Ready":"False"
	I1104 12:07:48.791326   85759 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-325116" in "kube-system" namespace to be "Ready" ...
	I1104 12:07:48.798259   85759 pod_ready.go:98] node "embed-certs-325116" hosting pod "kube-apiserver-embed-certs-325116" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-325116" has status "Ready":"False"
	I1104 12:07:48.798294   85759 pod_ready.go:82] duration metric: took 6.954559ms for pod "kube-apiserver-embed-certs-325116" in "kube-system" namespace to be "Ready" ...
	E1104 12:07:48.798304   85759 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-325116" hosting pod "kube-apiserver-embed-certs-325116" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-325116" has status "Ready":"False"
	I1104 12:07:48.798312   85759 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-325116" in "kube-system" namespace to be "Ready" ...
	I1104 12:07:48.872019   85759 pod_ready.go:98] node "embed-certs-325116" hosting pod "kube-controller-manager-embed-certs-325116" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-325116" has status "Ready":"False"
	I1104 12:07:48.872058   85759 pod_ready.go:82] duration metric: took 73.723761ms for pod "kube-controller-manager-embed-certs-325116" in "kube-system" namespace to be "Ready" ...
	E1104 12:07:48.872069   85759 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-325116" hosting pod "kube-controller-manager-embed-certs-325116" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-325116" has status "Ready":"False"
	I1104 12:07:48.872075   85759 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-phzgx" in "kube-system" namespace to be "Ready" ...
	I1104 12:07:49.271210   85759 pod_ready.go:98] node "embed-certs-325116" hosting pod "kube-proxy-phzgx" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-325116" has status "Ready":"False"
	I1104 12:07:49.271252   85759 pod_ready.go:82] duration metric: took 399.167509ms for pod "kube-proxy-phzgx" in "kube-system" namespace to be "Ready" ...
	E1104 12:07:49.271264   85759 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-325116" hosting pod "kube-proxy-phzgx" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-325116" has status "Ready":"False"
	I1104 12:07:49.271272   85759 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-325116" in "kube-system" namespace to be "Ready" ...
	I1104 12:07:49.671430   85759 pod_ready.go:98] node "embed-certs-325116" hosting pod "kube-scheduler-embed-certs-325116" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-325116" has status "Ready":"False"
	I1104 12:07:49.671453   85759 pod_ready.go:82] duration metric: took 400.174495ms for pod "kube-scheduler-embed-certs-325116" in "kube-system" namespace to be "Ready" ...
	E1104 12:07:49.671469   85759 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-325116" hosting pod "kube-scheduler-embed-certs-325116" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-325116" has status "Ready":"False"
	I1104 12:07:49.671475   85759 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace to be "Ready" ...
	I1104 12:07:50.070546   85759 pod_ready.go:98] node "embed-certs-325116" hosting pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-325116" has status "Ready":"False"
	I1104 12:07:50.070576   85759 pod_ready.go:82] duration metric: took 399.092108ms for pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace to be "Ready" ...
	E1104 12:07:50.070587   85759 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-325116" hosting pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-325116" has status "Ready":"False"
	I1104 12:07:50.070596   85759 pod_ready.go:39] duration metric: took 1.305717298s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
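	Each of the pod_ready checks above short-circuits the same way: because node "embed-certs-325116" still reports Ready=False, the per-pod wait is skipped and the condition error is recorded rather than blocking for the full 4m0s. A minimal client-go sketch of that node-gating check, assuming a kubeconfig path and the hypothetical helper name nodeIsReady; it only illustrates the idea and is not minikube's pod_ready.go:

    package main

    import (
        "context"
        "fmt"
        "log"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // nodeIsReady reports whether the named node currently has its Ready
    // condition set to True.
    func nodeIsReady(ctx context.Context, cs kubernetes.Interface, name string) (bool, error) {
        node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, cond := range node.Status.Conditions {
            if cond.Type == corev1.NodeReady {
                return cond.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }

    func main() {
        // Kubeconfig path taken from the log; adjust as needed.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19906-19898/kubeconfig")
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        ready, err := nodeIsReady(context.Background(), cs, "embed-certs-325116")
        if err != nil {
            log.Fatal(err)
        }
        if !ready {
            // Mirror the behaviour in the log: skip waiting on pods hosted by a NotReady node.
            fmt.Println("node not Ready yet; skipping per-pod waits")
        }
    }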
	I1104 12:07:50.070615   85759 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1104 12:07:50.082815   85759 ops.go:34] apiserver oom_adj: -16
	I1104 12:07:50.082839   85759 kubeadm.go:597] duration metric: took 9.429385589s to restartPrimaryControlPlane
	I1104 12:07:50.082850   85759 kubeadm.go:394] duration metric: took 9.481667011s to StartCluster
	I1104 12:07:50.082871   85759 settings.go:142] acquiring lock: {Name:mk5550833c1f1a4ab4fbb2bf42ff8bd7f6341220 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 12:07:50.082952   85759 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19906-19898/kubeconfig
	I1104 12:07:50.086014   85759 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19906-19898/kubeconfig: {Name:mk164657c178e6abe086acb5c1cadb969cb2b0cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 12:07:50.086562   85759 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.47 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1104 12:07:50.086628   85759 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1104 12:07:50.086740   85759 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-325116"
	I1104 12:07:50.086763   85759 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-325116"
	I1104 12:07:50.086765   85759 config.go:182] Loaded profile config "embed-certs-325116": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	W1104 12:07:50.086776   85759 addons.go:243] addon storage-provisioner should already be in state true
	I1104 12:07:50.086774   85759 addons.go:69] Setting default-storageclass=true in profile "embed-certs-325116"
	I1104 12:07:50.086803   85759 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-325116"
	I1104 12:07:50.086787   85759 addons.go:69] Setting metrics-server=true in profile "embed-certs-325116"
	I1104 12:07:50.086812   85759 host.go:66] Checking if "embed-certs-325116" exists ...
	I1104 12:07:50.086825   85759 addons.go:234] Setting addon metrics-server=true in "embed-certs-325116"
	W1104 12:07:50.086837   85759 addons.go:243] addon metrics-server should already be in state true
	I1104 12:07:50.086866   85759 host.go:66] Checking if "embed-certs-325116" exists ...
	I1104 12:07:50.087120   85759 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:07:50.087148   85759 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:07:50.087160   85759 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:07:50.087178   85759 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:07:50.087247   85759 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:07:50.087286   85759 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:07:50.088320   85759 out.go:177] * Verifying Kubernetes components...
	I1104 12:07:50.089814   85759 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1104 12:07:50.102796   85759 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44903
	I1104 12:07:50.102976   85759 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36761
	I1104 12:07:50.103076   85759 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42697
	I1104 12:07:50.103462   85759 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:07:50.103491   85759 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:07:50.103566   85759 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:07:50.103990   85759 main.go:141] libmachine: Using API Version  1
	I1104 12:07:50.104014   85759 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:07:50.104085   85759 main.go:141] libmachine: Using API Version  1
	I1104 12:07:50.104101   85759 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:07:50.104199   85759 main.go:141] libmachine: Using API Version  1
	I1104 12:07:50.104223   85759 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:07:50.104368   85759 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:07:50.104402   85759 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:07:50.104545   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetState
	I1104 12:07:50.104559   85759 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:07:50.104949   85759 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:07:50.104987   85759 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:07:50.105081   85759 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:07:50.105116   85759 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:07:50.108134   85759 addons.go:234] Setting addon default-storageclass=true in "embed-certs-325116"
	W1104 12:07:50.108161   85759 addons.go:243] addon default-storageclass should already be in state true
	I1104 12:07:50.108193   85759 host.go:66] Checking if "embed-certs-325116" exists ...
	I1104 12:07:50.108597   85759 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:07:50.108648   85759 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:07:50.121556   85759 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39445
	I1104 12:07:50.122038   85759 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:07:50.122504   85759 main.go:141] libmachine: Using API Version  1
	I1104 12:07:50.122527   85759 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:07:50.122869   85759 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:07:50.123107   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetState
	I1104 12:07:50.125142   85759 main.go:141] libmachine: (embed-certs-325116) Calling .DriverName
	I1104 12:07:50.125294   85759 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44771
	I1104 12:07:50.125613   85759 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:07:50.125972   85759 main.go:141] libmachine: Using API Version  1
	I1104 12:07:50.125988   85759 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:07:50.126279   85759 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:07:50.126399   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetState
	I1104 12:07:50.127256   85759 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1104 12:07:50.127993   85759 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40487
	I1104 12:07:50.128235   85759 main.go:141] libmachine: (embed-certs-325116) Calling .DriverName
	I1104 12:07:50.128597   85759 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:07:50.128843   85759 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1104 12:07:50.128864   85759 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1104 12:07:50.128883   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHHostname
	I1104 12:07:50.129066   85759 main.go:141] libmachine: Using API Version  1
	I1104 12:07:50.129088   85759 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:07:50.129389   85759 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:07:50.129882   85759 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1104 12:07:47.619516   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:47.620045   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | unable to find current IP address of domain default-k8s-diff-port-036892 in network mk-default-k8s-diff-port-036892
	I1104 12:07:47.620072   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | I1104 12:07:47.620000   87142 retry.go:31] will retry after 3.554669963s: waiting for machine to come up
	I1104 12:07:50.130127   85759 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:07:50.130187   85759 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:07:50.131115   85759 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1104 12:07:50.131134   85759 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1104 12:07:50.131154   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHHostname
	I1104 12:07:50.131899   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:50.132352   85759 main.go:141] libmachine: (embed-certs-325116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:ab:49", ip: ""} in network mk-embed-certs-325116: {Iface:virbr1 ExpiryTime:2024-11-04 13:07:28 +0000 UTC Type:0 Mac:52:54:00:bd:ab:49 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:embed-certs-325116 Clientid:01:52:54:00:bd:ab:49}
	I1104 12:07:50.132375   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined IP address 192.168.39.47 and MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:50.132664   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHPort
	I1104 12:07:50.132830   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHKeyPath
	I1104 12:07:50.132986   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHUsername
	I1104 12:07:50.133099   85759 sshutil.go:53] new ssh client: &{IP:192.168.39.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/embed-certs-325116/id_rsa Username:docker}
	I1104 12:07:50.134698   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:50.135217   85759 main.go:141] libmachine: (embed-certs-325116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:ab:49", ip: ""} in network mk-embed-certs-325116: {Iface:virbr1 ExpiryTime:2024-11-04 13:07:28 +0000 UTC Type:0 Mac:52:54:00:bd:ab:49 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:embed-certs-325116 Clientid:01:52:54:00:bd:ab:49}
	I1104 12:07:50.135246   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined IP address 192.168.39.47 and MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:50.135454   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHPort
	I1104 12:07:50.135629   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHKeyPath
	I1104 12:07:50.135765   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHUsername
	I1104 12:07:50.135908   85759 sshutil.go:53] new ssh client: &{IP:192.168.39.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/embed-certs-325116/id_rsa Username:docker}
	I1104 12:07:50.146618   85759 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37765
	I1104 12:07:50.147639   85759 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:07:50.148281   85759 main.go:141] libmachine: Using API Version  1
	I1104 12:07:50.148307   85759 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:07:50.148617   85759 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:07:50.148860   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetState
	I1104 12:07:50.150751   85759 main.go:141] libmachine: (embed-certs-325116) Calling .DriverName
	I1104 12:07:50.151010   85759 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1104 12:07:50.151028   85759 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1104 12:07:50.151050   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHHostname
	I1104 12:07:50.153947   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:50.154385   85759 main.go:141] libmachine: (embed-certs-325116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:ab:49", ip: ""} in network mk-embed-certs-325116: {Iface:virbr1 ExpiryTime:2024-11-04 13:07:28 +0000 UTC Type:0 Mac:52:54:00:bd:ab:49 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:embed-certs-325116 Clientid:01:52:54:00:bd:ab:49}
	I1104 12:07:50.154418   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined IP address 192.168.39.47 and MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:50.154560   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHPort
	I1104 12:07:50.154749   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHKeyPath
	I1104 12:07:50.154886   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHUsername
	I1104 12:07:50.155028   85759 sshutil.go:53] new ssh client: &{IP:192.168.39.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/embed-certs-325116/id_rsa Username:docker}
	I1104 12:07:50.278380   85759 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1104 12:07:50.294682   85759 node_ready.go:35] waiting up to 6m0s for node "embed-certs-325116" to be "Ready" ...
	I1104 12:07:50.355769   85759 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1104 12:07:50.355790   85759 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1104 12:07:50.375818   85759 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1104 12:07:50.404741   85759 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1104 12:07:50.404766   85759 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1104 12:07:50.466718   85759 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1104 12:07:50.466748   85759 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1104 12:07:50.493662   85759 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1104 12:07:50.503255   85759 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1104 12:07:50.799735   85759 main.go:141] libmachine: Making call to close driver server
	I1104 12:07:50.799772   85759 main.go:141] libmachine: (embed-certs-325116) Calling .Close
	I1104 12:07:50.800039   85759 main.go:141] libmachine: Successfully made call to close driver server
	I1104 12:07:50.800086   85759 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 12:07:50.800094   85759 main.go:141] libmachine: (embed-certs-325116) DBG | Closing plugin on server side
	I1104 12:07:50.800107   85759 main.go:141] libmachine: Making call to close driver server
	I1104 12:07:50.800159   85759 main.go:141] libmachine: (embed-certs-325116) Calling .Close
	I1104 12:07:50.800382   85759 main.go:141] libmachine: Successfully made call to close driver server
	I1104 12:07:50.800394   85759 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 12:07:50.810559   85759 main.go:141] libmachine: Making call to close driver server
	I1104 12:07:50.810586   85759 main.go:141] libmachine: (embed-certs-325116) Calling .Close
	I1104 12:07:50.810857   85759 main.go:141] libmachine: Successfully made call to close driver server
	I1104 12:07:50.810876   85759 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 12:07:50.810893   85759 main.go:141] libmachine: (embed-certs-325116) DBG | Closing plugin on server side
	I1104 12:07:51.484326   85759 main.go:141] libmachine: Making call to close driver server
	I1104 12:07:51.484356   85759 main.go:141] libmachine: (embed-certs-325116) Calling .Close
	I1104 12:07:51.484671   85759 main.go:141] libmachine: Successfully made call to close driver server
	I1104 12:07:51.484687   85759 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 12:07:51.484695   85759 main.go:141] libmachine: Making call to close driver server
	I1104 12:07:51.484702   85759 main.go:141] libmachine: (embed-certs-325116) Calling .Close
	I1104 12:07:51.484899   85759 main.go:141] libmachine: Successfully made call to close driver server
	I1104 12:07:51.484938   85759 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 12:07:51.484950   85759 addons.go:475] Verifying addon metrics-server=true in "embed-certs-325116"
	I1104 12:07:51.549507   85759 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.046214827s)
	I1104 12:07:51.549559   85759 main.go:141] libmachine: Making call to close driver server
	I1104 12:07:51.549569   85759 main.go:141] libmachine: (embed-certs-325116) Calling .Close
	I1104 12:07:51.549886   85759 main.go:141] libmachine: Successfully made call to close driver server
	I1104 12:07:51.549906   85759 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 12:07:51.549909   85759 main.go:141] libmachine: (embed-certs-325116) DBG | Closing plugin on server side
	I1104 12:07:51.549916   85759 main.go:141] libmachine: Making call to close driver server
	I1104 12:07:51.549923   85759 main.go:141] libmachine: (embed-certs-325116) Calling .Close
	I1104 12:07:51.550143   85759 main.go:141] libmachine: Successfully made call to close driver server
	I1104 12:07:51.550164   85759 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 12:07:51.552039   85759 out.go:177] * Enabled addons: default-storageclass, metrics-server, storage-provisioner
	I1104 12:07:52.573915   86402 start.go:364] duration metric: took 3m30.781955626s to acquireMachinesLock for "old-k8s-version-589257"
	I1104 12:07:52.573984   86402 start.go:96] Skipping create...Using existing machine configuration
	I1104 12:07:52.573996   86402 fix.go:54] fixHost starting: 
	I1104 12:07:52.574443   86402 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:07:52.574500   86402 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:07:52.594310   86402 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33975
	I1104 12:07:52.594822   86402 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:07:52.595317   86402 main.go:141] libmachine: Using API Version  1
	I1104 12:07:52.595347   86402 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:07:52.595727   86402 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:07:52.595924   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .DriverName
	I1104 12:07:52.596093   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetState
	I1104 12:07:52.597578   86402 fix.go:112] recreateIfNeeded on old-k8s-version-589257: state=Stopped err=<nil>
	I1104 12:07:52.597615   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .DriverName
	W1104 12:07:52.597752   86402 fix.go:138] unexpected machine state, will restart: <nil>
	I1104 12:07:52.599659   86402 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-589257" ...
	I1104 12:07:51.176791   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:51.177282   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Found IP for machine: 192.168.72.130
	I1104 12:07:51.177313   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has current primary IP address 192.168.72.130 and MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:51.177325   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Reserving static IP address...
	I1104 12:07:51.177817   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-036892", mac: "52:54:00:da:02:d6", ip: "192.168.72.130"} in network mk-default-k8s-diff-port-036892: {Iface:virbr4 ExpiryTime:2024-11-04 13:07:45 +0000 UTC Type:0 Mac:52:54:00:da:02:d6 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:default-k8s-diff-port-036892 Clientid:01:52:54:00:da:02:d6}
	I1104 12:07:51.177863   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | skip adding static IP to network mk-default-k8s-diff-port-036892 - found existing host DHCP lease matching {name: "default-k8s-diff-port-036892", mac: "52:54:00:da:02:d6", ip: "192.168.72.130"}
	I1104 12:07:51.177876   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Reserved static IP address: 192.168.72.130
	I1104 12:07:51.177890   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Waiting for SSH to be available...
	I1104 12:07:51.177897   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | Getting to WaitForSSH function...
	I1104 12:07:51.180038   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:51.180440   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:02:d6", ip: ""} in network mk-default-k8s-diff-port-036892: {Iface:virbr4 ExpiryTime:2024-11-04 13:07:45 +0000 UTC Type:0 Mac:52:54:00:da:02:d6 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:default-k8s-diff-port-036892 Clientid:01:52:54:00:da:02:d6}
	I1104 12:07:51.180466   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined IP address 192.168.72.130 and MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:51.180581   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | Using SSH client type: external
	I1104 12:07:51.180611   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | Using SSH private key: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/default-k8s-diff-port-036892/id_rsa (-rw-------)
	I1104 12:07:51.180747   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.130 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19906-19898/.minikube/machines/default-k8s-diff-port-036892/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1104 12:07:51.180777   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | About to run SSH command:
	I1104 12:07:51.180795   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | exit 0
	I1104 12:07:51.309075   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | SSH cmd err, output: <nil>: 
	I1104 12:07:51.309445   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetConfigRaw
	I1104 12:07:51.310162   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetIP
	I1104 12:07:51.312651   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:51.313061   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:02:d6", ip: ""} in network mk-default-k8s-diff-port-036892: {Iface:virbr4 ExpiryTime:2024-11-04 13:07:45 +0000 UTC Type:0 Mac:52:54:00:da:02:d6 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:default-k8s-diff-port-036892 Clientid:01:52:54:00:da:02:d6}
	I1104 12:07:51.313090   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined IP address 192.168.72.130 and MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:51.313460   86301 profile.go:143] Saving config to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/default-k8s-diff-port-036892/config.json ...
	I1104 12:07:51.313720   86301 machine.go:93] provisionDockerMachine start ...
	I1104 12:07:51.313747   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .DriverName
	I1104 12:07:51.313926   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHHostname
	I1104 12:07:51.316269   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:51.316782   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:02:d6", ip: ""} in network mk-default-k8s-diff-port-036892: {Iface:virbr4 ExpiryTime:2024-11-04 13:07:45 +0000 UTC Type:0 Mac:52:54:00:da:02:d6 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:default-k8s-diff-port-036892 Clientid:01:52:54:00:da:02:d6}
	I1104 12:07:51.316829   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined IP address 192.168.72.130 and MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:51.316937   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHPort
	I1104 12:07:51.317162   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHKeyPath
	I1104 12:07:51.317335   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHKeyPath
	I1104 12:07:51.317598   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHUsername
	I1104 12:07:51.317777   86301 main.go:141] libmachine: Using SSH client type: native
	I1104 12:07:51.317981   86301 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.130 22 <nil> <nil>}
	I1104 12:07:51.317994   86301 main.go:141] libmachine: About to run SSH command:
	hostname
	I1104 12:07:51.441588   86301 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1104 12:07:51.441626   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetMachineName
	I1104 12:07:51.441876   86301 buildroot.go:166] provisioning hostname "default-k8s-diff-port-036892"
	I1104 12:07:51.441902   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetMachineName
	I1104 12:07:51.442097   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHHostname
	I1104 12:07:51.445155   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:51.445637   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:02:d6", ip: ""} in network mk-default-k8s-diff-port-036892: {Iface:virbr4 ExpiryTime:2024-11-04 13:07:45 +0000 UTC Type:0 Mac:52:54:00:da:02:d6 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:default-k8s-diff-port-036892 Clientid:01:52:54:00:da:02:d6}
	I1104 12:07:51.445670   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined IP address 192.168.72.130 and MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:51.445820   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHPort
	I1104 12:07:51.446013   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHKeyPath
	I1104 12:07:51.446186   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHKeyPath
	I1104 12:07:51.446352   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHUsername
	I1104 12:07:51.446539   86301 main.go:141] libmachine: Using SSH client type: native
	I1104 12:07:51.446753   86301 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.130 22 <nil> <nil>}
	I1104 12:07:51.446773   86301 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-036892 && echo "default-k8s-diff-port-036892" | sudo tee /etc/hostname
	I1104 12:07:51.578973   86301 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-036892
	
	I1104 12:07:51.579004   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHHostname
	I1104 12:07:51.581759   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:51.582105   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:02:d6", ip: ""} in network mk-default-k8s-diff-port-036892: {Iface:virbr4 ExpiryTime:2024-11-04 13:07:45 +0000 UTC Type:0 Mac:52:54:00:da:02:d6 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:default-k8s-diff-port-036892 Clientid:01:52:54:00:da:02:d6}
	I1104 12:07:51.582135   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined IP address 192.168.72.130 and MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:51.582299   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHPort
	I1104 12:07:51.582455   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHKeyPath
	I1104 12:07:51.582582   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHKeyPath
	I1104 12:07:51.582712   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHUsername
	I1104 12:07:51.582834   86301 main.go:141] libmachine: Using SSH client type: native
	I1104 12:07:51.583006   86301 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.130 22 <nil> <nil>}
	I1104 12:07:51.583022   86301 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-036892' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-036892/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-036892' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1104 12:07:51.702410   86301 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1104 12:07:51.702441   86301 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19906-19898/.minikube CaCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19906-19898/.minikube}
	I1104 12:07:51.702471   86301 buildroot.go:174] setting up certificates
	I1104 12:07:51.702483   86301 provision.go:84] configureAuth start
	I1104 12:07:51.702492   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetMachineName
	I1104 12:07:51.702789   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetIP
	I1104 12:07:51.705067   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:51.705419   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:02:d6", ip: ""} in network mk-default-k8s-diff-port-036892: {Iface:virbr4 ExpiryTime:2024-11-04 13:07:45 +0000 UTC Type:0 Mac:52:54:00:da:02:d6 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:default-k8s-diff-port-036892 Clientid:01:52:54:00:da:02:d6}
	I1104 12:07:51.705449   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined IP address 192.168.72.130 and MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:51.705567   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHHostname
	I1104 12:07:51.707341   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:51.707627   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:02:d6", ip: ""} in network mk-default-k8s-diff-port-036892: {Iface:virbr4 ExpiryTime:2024-11-04 13:07:45 +0000 UTC Type:0 Mac:52:54:00:da:02:d6 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:default-k8s-diff-port-036892 Clientid:01:52:54:00:da:02:d6}
	I1104 12:07:51.707658   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined IP address 192.168.72.130 and MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:51.707748   86301 provision.go:143] copyHostCerts
	I1104 12:07:51.707805   86301 exec_runner.go:144] found /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem, removing ...
	I1104 12:07:51.707818   86301 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem
	I1104 12:07:51.707870   86301 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem (1078 bytes)
	I1104 12:07:51.707969   86301 exec_runner.go:144] found /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem, removing ...
	I1104 12:07:51.707978   86301 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem
	I1104 12:07:51.707999   86301 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem (1123 bytes)
	I1104 12:07:51.708061   86301 exec_runner.go:144] found /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem, removing ...
	I1104 12:07:51.708067   86301 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem
	I1104 12:07:51.708085   86301 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem (1679 bytes)
	I1104 12:07:51.708132   86301 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-036892 san=[127.0.0.1 192.168.72.130 default-k8s-diff-port-036892 localhost minikube]
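	The provision step above generates a per-machine server certificate whose SANs cover 127.0.0.1, the machine IP, the hostname, localhost and minikube, signed by the CA under .minikube/certs. A condensed Go sketch of building a certificate with those SANs (self-signed here purely for brevity; the real flow signs with the existing ca.pem/ca-key.pem, and the key size and validity period are assumptions):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "log"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048) // assumed key size
        if err != nil {
            log.Fatal(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-036892"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour), // assumed validity
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SANs matching the san=[...] list in the log line above.
            DNSNames:    []string{"default-k8s-diff-port-036892", "localhost", "minikube"},
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.130")},
        }
        // Self-signed for the sketch; minikube signs with its CA key instead.
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            log.Fatal(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }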
	I1104 12:07:51.935898   86301 provision.go:177] copyRemoteCerts
	I1104 12:07:51.935973   86301 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1104 12:07:51.936008   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHHostname
	I1104 12:07:51.938722   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:51.939100   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:02:d6", ip: ""} in network mk-default-k8s-diff-port-036892: {Iface:virbr4 ExpiryTime:2024-11-04 13:07:45 +0000 UTC Type:0 Mac:52:54:00:da:02:d6 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:default-k8s-diff-port-036892 Clientid:01:52:54:00:da:02:d6}
	I1104 12:07:51.939134   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined IP address 192.168.72.130 and MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:51.939266   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHPort
	I1104 12:07:51.939462   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHKeyPath
	I1104 12:07:51.939609   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHUsername
	I1104 12:07:51.939786   86301 sshutil.go:53] new ssh client: &{IP:192.168.72.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/default-k8s-diff-port-036892/id_rsa Username:docker}
	I1104 12:07:52.027147   86301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1104 12:07:52.054828   86301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1104 12:07:52.078755   86301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1104 12:07:52.101312   86301 provision.go:87] duration metric: took 398.817409ms to configureAuth
	I1104 12:07:52.101338   86301 buildroot.go:189] setting minikube options for container-runtime
	I1104 12:07:52.101523   86301 config.go:182] Loaded profile config "default-k8s-diff-port-036892": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1104 12:07:52.101608   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHHostname
	I1104 12:07:52.104187   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:52.104549   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:02:d6", ip: ""} in network mk-default-k8s-diff-port-036892: {Iface:virbr4 ExpiryTime:2024-11-04 13:07:45 +0000 UTC Type:0 Mac:52:54:00:da:02:d6 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:default-k8s-diff-port-036892 Clientid:01:52:54:00:da:02:d6}
	I1104 12:07:52.104581   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined IP address 192.168.72.130 and MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:52.104700   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHPort
	I1104 12:07:52.104890   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHKeyPath
	I1104 12:07:52.105028   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHKeyPath
	I1104 12:07:52.105157   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHUsername
	I1104 12:07:52.105319   86301 main.go:141] libmachine: Using SSH client type: native
	I1104 12:07:52.105490   86301 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.130 22 <nil> <nil>}
	I1104 12:07:52.105514   86301 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1104 12:07:52.331840   86301 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1104 12:07:52.331865   86301 machine.go:96] duration metric: took 1.018128337s to provisionDockerMachine
	I1104 12:07:52.331875   86301 start.go:293] postStartSetup for "default-k8s-diff-port-036892" (driver="kvm2")
	I1104 12:07:52.331884   86301 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1104 12:07:52.331898   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .DriverName
	I1104 12:07:52.332229   86301 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1104 12:07:52.332261   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHHostname
	I1104 12:07:52.334710   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:52.335005   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:02:d6", ip: ""} in network mk-default-k8s-diff-port-036892: {Iface:virbr4 ExpiryTime:2024-11-04 13:07:45 +0000 UTC Type:0 Mac:52:54:00:da:02:d6 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:default-k8s-diff-port-036892 Clientid:01:52:54:00:da:02:d6}
	I1104 12:07:52.335036   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined IP address 192.168.72.130 and MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:52.335176   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHPort
	I1104 12:07:52.335342   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHKeyPath
	I1104 12:07:52.335447   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHUsername
	I1104 12:07:52.335547   86301 sshutil.go:53] new ssh client: &{IP:192.168.72.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/default-k8s-diff-port-036892/id_rsa Username:docker}
	I1104 12:07:52.419392   86301 ssh_runner.go:195] Run: cat /etc/os-release
	I1104 12:07:52.423306   86301 info.go:137] Remote host: Buildroot 2023.02.9
	I1104 12:07:52.423335   86301 filesync.go:126] Scanning /home/jenkins/minikube-integration/19906-19898/.minikube/addons for local assets ...
	I1104 12:07:52.423396   86301 filesync.go:126] Scanning /home/jenkins/minikube-integration/19906-19898/.minikube/files for local assets ...
	I1104 12:07:52.423483   86301 filesync.go:149] local asset: /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem -> 272182.pem in /etc/ssl/certs
	I1104 12:07:52.423575   86301 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1104 12:07:52.432625   86301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem --> /etc/ssl/certs/272182.pem (1708 bytes)
	I1104 12:07:52.456616   86301 start.go:296] duration metric: took 124.726284ms for postStartSetup
	I1104 12:07:52.456664   86301 fix.go:56] duration metric: took 17.406645021s for fixHost
	I1104 12:07:52.456689   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHHostname
	I1104 12:07:52.459189   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:52.459540   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:02:d6", ip: ""} in network mk-default-k8s-diff-port-036892: {Iface:virbr4 ExpiryTime:2024-11-04 13:07:45 +0000 UTC Type:0 Mac:52:54:00:da:02:d6 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:default-k8s-diff-port-036892 Clientid:01:52:54:00:da:02:d6}
	I1104 12:07:52.459573   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined IP address 192.168.72.130 and MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:52.459777   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHPort
	I1104 12:07:52.459967   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHKeyPath
	I1104 12:07:52.460093   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHKeyPath
	I1104 12:07:52.460218   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHUsername
	I1104 12:07:52.460349   86301 main.go:141] libmachine: Using SSH client type: native
	I1104 12:07:52.460521   86301 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.130 22 <nil> <nil>}
	I1104 12:07:52.460533   86301 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1104 12:07:52.573760   86301 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730722072.546172571
	
	I1104 12:07:52.573781   86301 fix.go:216] guest clock: 1730722072.546172571
	I1104 12:07:52.573787   86301 fix.go:229] Guest: 2024-11-04 12:07:52.546172571 +0000 UTC Remote: 2024-11-04 12:07:52.45666981 +0000 UTC m=+212.207079326 (delta=89.502761ms)
	I1104 12:07:52.573827   86301 fix.go:200] guest clock delta is within tolerance: 89.502761ms
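	Note on the guest-clock check above: minikube runs date +%s.%N on the VM over SSH and compares it against the host clock, and the ~89ms delta seen here falls inside its tolerance. A rough manual re-check under the same assumptions (guest IP and SSH key path taken from this log) could look like:

	  KEY=/home/jenkins/minikube-integration/19906-19898/.minikube/machines/default-k8s-diff-port-036892/id_rsa
	  guest=$(ssh -i "$KEY" docker@192.168.72.130 'date +%s.%N')     # guest clock, seconds.nanoseconds
	  host=$(date +%s.%N)                                            # host clock sampled right after
	  awk -v h="$host" -v g="$guest" 'BEGIN{print "clock delta:", h-g, "s"}'   # expected to stay well under a second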
	I1104 12:07:52.573832   86301 start.go:83] releasing machines lock for "default-k8s-diff-port-036892", held for 17.523849814s
	I1104 12:07:52.573856   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .DriverName
	I1104 12:07:52.574109   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetIP
	I1104 12:07:52.576773   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:52.577125   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:02:d6", ip: ""} in network mk-default-k8s-diff-port-036892: {Iface:virbr4 ExpiryTime:2024-11-04 13:07:45 +0000 UTC Type:0 Mac:52:54:00:da:02:d6 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:default-k8s-diff-port-036892 Clientid:01:52:54:00:da:02:d6}
	I1104 12:07:52.577151   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined IP address 192.168.72.130 and MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:52.577304   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .DriverName
	I1104 12:07:52.577776   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .DriverName
	I1104 12:07:52.577950   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .DriverName
	I1104 12:07:52.578043   86301 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1104 12:07:52.578079   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHHostname
	I1104 12:07:52.578133   86301 ssh_runner.go:195] Run: cat /version.json
	I1104 12:07:52.578159   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHHostname
	I1104 12:07:52.580773   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:52.580909   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:52.581128   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:02:d6", ip: ""} in network mk-default-k8s-diff-port-036892: {Iface:virbr4 ExpiryTime:2024-11-04 13:07:45 +0000 UTC Type:0 Mac:52:54:00:da:02:d6 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:default-k8s-diff-port-036892 Clientid:01:52:54:00:da:02:d6}
	I1104 12:07:52.581154   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined IP address 192.168.72.130 and MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:52.581179   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:02:d6", ip: ""} in network mk-default-k8s-diff-port-036892: {Iface:virbr4 ExpiryTime:2024-11-04 13:07:45 +0000 UTC Type:0 Mac:52:54:00:da:02:d6 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:default-k8s-diff-port-036892 Clientid:01:52:54:00:da:02:d6}
	I1104 12:07:52.581196   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined IP address 192.168.72.130 and MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:52.581286   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHPort
	I1104 12:07:52.581488   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHPort
	I1104 12:07:52.581529   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHKeyPath
	I1104 12:07:52.581660   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHUsername
	I1104 12:07:52.581677   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHKeyPath
	I1104 12:07:52.581770   86301 sshutil.go:53] new ssh client: &{IP:192.168.72.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/default-k8s-diff-port-036892/id_rsa Username:docker}
	I1104 12:07:52.581823   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHUsername
	I1104 12:07:52.581946   86301 sshutil.go:53] new ssh client: &{IP:192.168.72.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/default-k8s-diff-port-036892/id_rsa Username:docker}
	I1104 12:07:52.683801   86301 ssh_runner.go:195] Run: systemctl --version
	I1104 12:07:52.689498   86301 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1104 12:07:52.830236   86301 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1104 12:07:52.835868   86301 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1104 12:07:52.835951   86301 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1104 12:07:52.851557   86301 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1104 12:07:52.851586   86301 start.go:495] detecting cgroup driver to use...
	I1104 12:07:52.851656   86301 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1104 12:07:52.868648   86301 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1104 12:07:52.883434   86301 docker.go:217] disabling cri-docker service (if available) ...
	I1104 12:07:52.883507   86301 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1104 12:07:52.898233   86301 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1104 12:07:52.912615   86301 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1104 12:07:53.036342   86301 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1104 12:07:53.183326   86301 docker.go:233] disabling docker service ...
	I1104 12:07:53.183407   86301 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1104 12:07:53.197465   86301 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1104 12:07:53.210118   86301 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1104 12:07:53.354857   86301 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1104 12:07:53.490760   86301 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1104 12:07:53.506829   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1104 12:07:53.526401   86301 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1104 12:07:53.526464   86301 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:07:53.537264   86301 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1104 12:07:53.537339   86301 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:07:53.547882   86301 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:07:53.558039   86301 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:07:53.569347   86301 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1104 12:07:53.579931   86301 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:07:53.589594   86301 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:07:53.606753   86301 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:07:53.623316   86301 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1104 12:07:53.638183   86301 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1104 12:07:53.638245   86301 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1104 12:07:53.656452   86301 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1104 12:07:53.666343   86301 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1104 12:07:53.784882   86301 ssh_runner.go:195] Run: sudo systemctl restart crio
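	The CRI-O reconfiguration above boils down to a few file edits plus kernel prep before the restart: /etc/crictl.yaml points crictl at the CRI-O socket, 02-crio.conf gets the pause image, the cgroupfs cgroup manager, conmon_cgroup = "pod" and the unprivileged-port sysctl, and br_netfilter plus IPv4 forwarding are enabled. A minimal sketch for spot-checking that end state on the node (plain grep/cat, only restating what the commands above configure):

	  grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	  cat /etc/crictl.yaml                  # expected: runtime-endpoint: unix:///var/run/crio/crio.sock
	  lsmod | grep br_netfilter             # bridge netfilter module loaded for the sysctl checks
	  cat /proc/sys/net/ipv4/ip_forward     # expected: 1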
	I1104 12:07:53.879727   86301 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1104 12:07:53.879790   86301 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1104 12:07:53.884438   86301 start.go:563] Will wait 60s for crictl version
	I1104 12:07:53.884494   86301 ssh_runner.go:195] Run: which crictl
	I1104 12:07:53.887785   86301 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1104 12:07:53.926395   86301 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1104 12:07:53.926496   86301 ssh_runner.go:195] Run: crio --version
	I1104 12:07:53.963049   86301 ssh_runner.go:195] Run: crio --version
	I1104 12:07:53.996513   86301 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1104 12:07:53.997774   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetIP
	I1104 12:07:54.000829   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:54.001214   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:02:d6", ip: ""} in network mk-default-k8s-diff-port-036892: {Iface:virbr4 ExpiryTime:2024-11-04 13:07:45 +0000 UTC Type:0 Mac:52:54:00:da:02:d6 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:default-k8s-diff-port-036892 Clientid:01:52:54:00:da:02:d6}
	I1104 12:07:54.001300   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined IP address 192.168.72.130 and MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:54.001469   86301 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1104 12:07:54.005521   86301 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1104 12:07:54.021723   86301 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-036892 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-036892 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.130 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1104 12:07:54.021915   86301 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1104 12:07:54.021979   86301 ssh_runner.go:195] Run: sudo crictl images --output json
	I1104 12:07:54.072114   86301 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1104 12:07:54.072178   86301 ssh_runner.go:195] Run: which lz4
	I1104 12:07:54.077106   86301 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1104 12:07:54.081979   86301 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1104 12:07:54.082018   86301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1104 12:07:51.553141   85759 addons.go:510] duration metric: took 1.466523338s for enable addons: enabled=[default-storageclass metrics-server storage-provisioner]
	I1104 12:07:52.298494   85759 node_ready.go:53] node "embed-certs-325116" has status "Ready":"False"
	I1104 12:07:54.299895   85759 node_ready.go:53] node "embed-certs-325116" has status "Ready":"False"
	I1104 12:07:52.600997   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .Start
	I1104 12:07:52.601180   86402 main.go:141] libmachine: (old-k8s-version-589257) Ensuring networks are active...
	I1104 12:07:52.602131   86402 main.go:141] libmachine: (old-k8s-version-589257) Ensuring network default is active
	I1104 12:07:52.602560   86402 main.go:141] libmachine: (old-k8s-version-589257) Ensuring network mk-old-k8s-version-589257 is active
	I1104 12:07:52.603030   86402 main.go:141] libmachine: (old-k8s-version-589257) Getting domain xml...
	I1104 12:07:52.603859   86402 main.go:141] libmachine: (old-k8s-version-589257) Creating domain...
	I1104 12:07:53.855214   86402 main.go:141] libmachine: (old-k8s-version-589257) Waiting to get IP...
	I1104 12:07:53.856063   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:07:53.856539   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | unable to find current IP address of domain old-k8s-version-589257 in network mk-old-k8s-version-589257
	I1104 12:07:53.856594   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | I1104 12:07:53.856513   87367 retry.go:31] will retry after 268.725451ms: waiting for machine to come up
	I1104 12:07:54.127094   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:07:54.127584   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | unable to find current IP address of domain old-k8s-version-589257 in network mk-old-k8s-version-589257
	I1104 12:07:54.127612   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | I1104 12:07:54.127560   87367 retry.go:31] will retry after 239.665225ms: waiting for machine to come up
	I1104 12:07:54.369139   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:07:54.369777   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | unable to find current IP address of domain old-k8s-version-589257 in network mk-old-k8s-version-589257
	I1104 12:07:54.369798   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | I1104 12:07:54.369710   87367 retry.go:31] will retry after 386.228261ms: waiting for machine to come up
	I1104 12:07:54.757191   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:07:54.757637   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | unable to find current IP address of domain old-k8s-version-589257 in network mk-old-k8s-version-589257
	I1104 12:07:54.757665   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | I1104 12:07:54.757591   87367 retry.go:31] will retry after 571.244573ms: waiting for machine to come up
	I1104 12:07:55.330439   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:07:55.331187   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | unable to find current IP address of domain old-k8s-version-589257 in network mk-old-k8s-version-589257
	I1104 12:07:55.331216   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | I1104 12:07:55.331144   87367 retry.go:31] will retry after 539.328185ms: waiting for machine to come up
	I1104 12:07:55.871869   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:07:55.872373   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | unable to find current IP address of domain old-k8s-version-589257 in network mk-old-k8s-version-589257
	I1104 12:07:55.872403   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | I1104 12:07:55.872335   87367 retry.go:31] will retry after 879.285089ms: waiting for machine to come up
	I1104 12:07:55.376802   86301 crio.go:462] duration metric: took 1.299729399s to copy over tarball
	I1104 12:07:55.376881   86301 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1104 12:07:57.716230   86301 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.339307666s)
	I1104 12:07:57.716268   86301 crio.go:469] duration metric: took 2.339436958s to extract the tarball
	I1104 12:07:57.716277   86301 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1104 12:07:57.753216   86301 ssh_runner.go:195] Run: sudo crictl images --output json
	I1104 12:07:57.799042   86301 crio.go:514] all images are preloaded for cri-o runtime.
	I1104 12:07:57.799145   86301 cache_images.go:84] Images are preloaded, skipping loading
	I1104 12:07:57.799161   86301 kubeadm.go:934] updating node { 192.168.72.130 8444 v1.31.2 crio true true} ...
	I1104 12:07:57.799273   86301 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-036892 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.130
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-036892 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1104 12:07:57.799347   86301 ssh_runner.go:195] Run: crio config
	I1104 12:07:57.851871   86301 cni.go:84] Creating CNI manager for ""
	I1104 12:07:57.851892   86301 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1104 12:07:57.851900   86301 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1104 12:07:57.851919   86301 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.130 APIServerPort:8444 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-036892 NodeName:default-k8s-diff-port-036892 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.130"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.130 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1104 12:07:57.852056   86301 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.130
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-036892"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.130"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.130"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1104 12:07:57.852116   86301 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1104 12:07:57.862269   86301 binaries.go:44] Found k8s binaries, skipping transfer
	I1104 12:07:57.862343   86301 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1104 12:07:57.872253   86301 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1104 12:07:57.889328   86301 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1104 12:07:57.908250   86301 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2308 bytes)
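	The kubeadm config rendered a few lines above is what lands here as /var/tmp/minikube/kubeadm.yaml.new (it is copied over kubeadm.yaml before the init phases further down). A hedged way to sanity-check it by hand, assuming the kubeadm config validate subcommand is available in the staged v1.31 binaries:

	  sudo /var/lib/minikube/binaries/v1.31.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new   # reports any schema/field problems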
	I1104 12:07:57.926081   86301 ssh_runner.go:195] Run: grep 192.168.72.130	control-plane.minikube.internal$ /etc/hosts
	I1104 12:07:57.929870   86301 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.130	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1104 12:07:57.943872   86301 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1104 12:07:58.070141   86301 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1104 12:07:58.089370   86301 certs.go:68] Setting up /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/default-k8s-diff-port-036892 for IP: 192.168.72.130
	I1104 12:07:58.089397   86301 certs.go:194] generating shared ca certs ...
	I1104 12:07:58.089423   86301 certs.go:226] acquiring lock for ca certs: {Name:mk4fb0469da6697654d0290fd25edddd463f47c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 12:07:58.089596   86301 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.key
	I1104 12:07:58.089647   86301 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.key
	I1104 12:07:58.089659   86301 certs.go:256] generating profile certs ...
	I1104 12:07:58.089765   86301 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/default-k8s-diff-port-036892/client.key
	I1104 12:07:58.089831   86301 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/default-k8s-diff-port-036892/apiserver.key.713851b2
	I1104 12:07:58.089889   86301 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/default-k8s-diff-port-036892/proxy-client.key
	I1104 12:07:58.090054   86301 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218.pem (1338 bytes)
	W1104 12:07:58.090100   86301 certs.go:480] ignoring /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218_empty.pem, impossibly tiny 0 bytes
	I1104 12:07:58.090116   86301 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem (1675 bytes)
	I1104 12:07:58.090149   86301 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem (1078 bytes)
	I1104 12:07:58.090184   86301 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem (1123 bytes)
	I1104 12:07:58.090219   86301 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem (1679 bytes)
	I1104 12:07:58.090279   86301 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem (1708 bytes)
	I1104 12:07:58.090977   86301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1104 12:07:58.125282   86301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1104 12:07:58.168289   86301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1104 12:07:58.210967   86301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1104 12:07:58.253986   86301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/default-k8s-diff-port-036892/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1104 12:07:58.280769   86301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/default-k8s-diff-port-036892/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1104 12:07:58.308406   86301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/default-k8s-diff-port-036892/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1104 12:07:58.334250   86301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/default-k8s-diff-port-036892/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1104 12:07:58.363224   86301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem --> /usr/share/ca-certificates/272182.pem (1708 bytes)
	I1104 12:07:58.391795   86301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1104 12:07:58.420782   86301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218.pem --> /usr/share/ca-certificates/27218.pem (1338 bytes)
	I1104 12:07:58.446611   86301 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1104 12:07:58.465895   86301 ssh_runner.go:195] Run: openssl version
	I1104 12:07:58.471614   86301 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/27218.pem && ln -fs /usr/share/ca-certificates/27218.pem /etc/ssl/certs/27218.pem"
	I1104 12:07:58.482139   86301 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/27218.pem
	I1104 12:07:58.486533   86301 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  4 10:49 /usr/share/ca-certificates/27218.pem
	I1104 12:07:58.486591   86301 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/27218.pem
	I1104 12:07:58.492217   86301 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/27218.pem /etc/ssl/certs/51391683.0"
	I1104 12:07:58.502724   86301 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/272182.pem && ln -fs /usr/share/ca-certificates/272182.pem /etc/ssl/certs/272182.pem"
	I1104 12:07:58.514146   86301 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/272182.pem
	I1104 12:07:58.518243   86301 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  4 10:49 /usr/share/ca-certificates/272182.pem
	I1104 12:07:58.518303   86301 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/272182.pem
	I1104 12:07:58.523579   86301 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/272182.pem /etc/ssl/certs/3ec20f2e.0"
	I1104 12:07:58.533993   86301 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1104 12:07:58.544137   86301 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1104 12:07:58.548190   86301 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  4 10:38 /usr/share/ca-certificates/minikubeCA.pem
	I1104 12:07:58.548250   86301 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1104 12:07:58.553714   86301 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
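	The openssl/ln sequence above installs each CA into /etc/ssl/certs under its subject-hash name (b5213941.0 for minikubeCA.pem, for example), which is how OpenSSL locates trust anchors. A short sketch of the same scheme for an arbitrary PEM file, restating the commands already shown:

	  CERT=/usr/share/ca-certificates/minikubeCA.pem
	  HASH=$(openssl x509 -hash -noout -in "$CERT")      # subject hash used as the symlink name
	  sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"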
	I1104 12:07:58.564221   86301 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1104 12:07:58.568445   86301 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1104 12:07:58.574072   86301 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1104 12:07:58.579551   86301 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1104 12:07:58.584909   86301 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1104 12:07:58.590102   86301 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1104 12:07:58.595227   86301 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1104 12:07:58.600338   86301 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-036892 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-036892 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.130 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1104 12:07:58.600445   86301 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1104 12:07:58.600492   86301 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1104 12:07:58.634282   86301 cri.go:89] found id: ""
	I1104 12:07:58.634352   86301 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1104 12:07:58.644578   86301 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1104 12:07:58.644597   86301 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1104 12:07:58.644635   86301 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1104 12:07:58.654412   86301 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1104 12:07:58.655638   86301 kubeconfig.go:125] found "default-k8s-diff-port-036892" server: "https://192.168.72.130:8444"
	I1104 12:07:58.658639   86301 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1104 12:07:58.667867   86301 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.130
	I1104 12:07:58.667900   86301 kubeadm.go:1160] stopping kube-system containers ...
	I1104 12:07:58.667913   86301 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1104 12:07:58.667971   86301 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1104 12:07:58.702765   86301 cri.go:89] found id: ""
	I1104 12:07:58.702844   86301 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1104 12:07:58.718368   86301 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1104 12:07:58.727671   86301 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1104 12:07:58.727690   86301 kubeadm.go:157] found existing configuration files:
	
	I1104 12:07:58.727750   86301 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1104 12:07:58.736350   86301 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1104 12:07:58.736424   86301 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1104 12:07:58.745441   86301 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1104 12:07:58.753945   86301 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1104 12:07:58.754011   86301 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1104 12:07:58.763134   86301 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1104 12:07:58.771588   86301 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1104 12:07:58.771651   86301 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1104 12:07:58.780623   86301 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1104 12:07:58.788962   86301 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1104 12:07:58.789036   86301 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1104 12:07:58.798472   86301 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1104 12:07:58.808209   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1104 12:07:58.919153   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1104 12:07:59.679355   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1104 12:07:59.889628   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1104 12:07:59.958981   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1104 12:08:00.048061   86301 api_server.go:52] waiting for apiserver process to appear ...
	I1104 12:08:00.048158   86301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:07:56.798747   85759 node_ready.go:53] node "embed-certs-325116" has status "Ready":"False"
	I1104 12:07:57.799286   85759 node_ready.go:49] node "embed-certs-325116" has status "Ready":"True"
	I1104 12:07:57.799308   85759 node_ready.go:38] duration metric: took 7.504592975s for node "embed-certs-325116" to be "Ready" ...
	I1104 12:07:57.799319   85759 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1104 12:07:57.805595   85759 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-mf8xg" in "kube-system" namespace to be "Ready" ...
	I1104 12:07:57.812394   85759 pod_ready.go:93] pod "coredns-7c65d6cfc9-mf8xg" in "kube-system" namespace has status "Ready":"True"
	I1104 12:07:57.812421   85759 pod_ready.go:82] duration metric: took 6.791823ms for pod "coredns-7c65d6cfc9-mf8xg" in "kube-system" namespace to be "Ready" ...
	I1104 12:07:57.812434   85759 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-325116" in "kube-system" namespace to be "Ready" ...
	I1104 12:07:57.818338   85759 pod_ready.go:93] pod "etcd-embed-certs-325116" in "kube-system" namespace has status "Ready":"True"
	I1104 12:07:57.818359   85759 pod_ready.go:82] duration metric: took 5.916571ms for pod "etcd-embed-certs-325116" in "kube-system" namespace to be "Ready" ...
	I1104 12:07:57.818400   85759 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-325116" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:00.015222   85759 pod_ready.go:103] pod "kube-apiserver-embed-certs-325116" in "kube-system" namespace has status "Ready":"False"
	I1104 12:07:56.752983   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:07:56.753577   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | unable to find current IP address of domain old-k8s-version-589257 in network mk-old-k8s-version-589257
	I1104 12:07:56.753613   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | I1104 12:07:56.753542   87367 retry.go:31] will retry after 1.081359862s: waiting for machine to come up
	I1104 12:07:57.836518   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:07:57.836963   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | unable to find current IP address of domain old-k8s-version-589257 in network mk-old-k8s-version-589257
	I1104 12:07:57.836990   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | I1104 12:07:57.836914   87367 retry.go:31] will retry after 1.149571097s: waiting for machine to come up
	I1104 12:07:58.987694   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:07:58.988125   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | unable to find current IP address of domain old-k8s-version-589257 in network mk-old-k8s-version-589257
	I1104 12:07:58.988152   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | I1104 12:07:58.988077   87367 retry.go:31] will retry after 1.247311806s: waiting for machine to come up
	I1104 12:08:00.237634   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:00.238147   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | unable to find current IP address of domain old-k8s-version-589257 in network mk-old-k8s-version-589257
	I1104 12:08:00.238217   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | I1104 12:08:00.238109   87367 retry.go:31] will retry after 2.058125339s: waiting for machine to come up
	I1104 12:08:00.549003   86301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:01.048325   86301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:01.548502   86301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:01.563976   86301 api_server.go:72] duration metric: took 1.515915725s to wait for apiserver process to appear ...
	I1104 12:08:01.564003   86301 api_server.go:88] waiting for apiserver healthz status ...
	I1104 12:08:01.564021   86301 api_server.go:253] Checking apiserver healthz at https://192.168.72.130:8444/healthz ...
	I1104 12:08:04.008662   86301 api_server.go:279] https://192.168.72.130:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1104 12:08:04.008689   86301 api_server.go:103] status: https://192.168.72.130:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1104 12:08:04.008701   86301 api_server.go:253] Checking apiserver healthz at https://192.168.72.130:8444/healthz ...
	I1104 12:08:04.033053   86301 api_server.go:279] https://192.168.72.130:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1104 12:08:04.033085   86301 api_server.go:103] status: https://192.168.72.130:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1104 12:08:04.064261   86301 api_server.go:253] Checking apiserver healthz at https://192.168.72.130:8444/healthz ...
	I1104 12:08:04.084034   86301 api_server.go:279] https://192.168.72.130:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1104 12:08:04.084062   86301 api_server.go:103] status: https://192.168.72.130:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
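	The 403 and 500 responses above come from polling the apiserver's /healthz endpoint while its post-start hooks are still completing: anonymous requests are rejected until the rbac/bootstrap-roles hook has created the default role bindings, and the endpoint keeps returning 500 until every listed check flips to [+]. The same per-check breakdown can be requested by hand (curl -k because the apiserver certificate is signed by minikube's own CA; /readyz accepts the same verbose query):

	  curl -k "https://192.168.72.130:8444/healthz?verbose"
	  curl -k "https://192.168.72.130:8444/readyz?verbose"     # readiness variant with the same [+]/[-] detail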
	I1104 12:08:04.564564   86301 api_server.go:253] Checking apiserver healthz at https://192.168.72.130:8444/healthz ...
	I1104 12:08:04.570062   86301 api_server.go:279] https://192.168.72.130:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1104 12:08:04.570090   86301 api_server.go:103] status: https://192.168.72.130:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1104 12:08:05.064688   86301 api_server.go:253] Checking apiserver healthz at https://192.168.72.130:8444/healthz ...
	I1104 12:08:05.069572   86301 api_server.go:279] https://192.168.72.130:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1104 12:08:05.069600   86301 api_server.go:103] status: https://192.168.72.130:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1104 12:08:05.564628   86301 api_server.go:253] Checking apiserver healthz at https://192.168.72.130:8444/healthz ...
	I1104 12:08:05.570537   86301 api_server.go:279] https://192.168.72.130:8444/healthz returned 200:
	ok
	I1104 12:08:05.577335   86301 api_server.go:141] control plane version: v1.31.2
	I1104 12:08:05.577360   86301 api_server.go:131] duration metric: took 4.01335048s to wait for apiserver health ...
	I1104 12:08:05.577371   86301 cni.go:84] Creating CNI manager for ""
	I1104 12:08:05.577379   86301 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1104 12:08:05.578990   86301 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
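Context on the healthz polling above (annotation, not part of the captured log): while poststarthook checks such as rbac/bootstrap-roles and bootstrap-controller are still failing, the kube-apiserver's /healthz endpoint returns HTTP 500 with the verbose [+]/[-] breakdown shown, and minikube keeps retrying until the final probe returns 200. The same verbose report can be fetched by hand once a kubeconfig is available, for example:

    kubectl get --raw='/healthz?verbose'
    # or against the apiserver directly (adjust host, port, and TLS handling to your cluster):
    curl -k https://192.168.72.130:8444/healthz?verbose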
	I1104 12:08:01.824677   85759 pod_ready.go:93] pod "kube-apiserver-embed-certs-325116" in "kube-system" namespace has status "Ready":"True"
	I1104 12:08:01.824703   85759 pod_ready.go:82] duration metric: took 4.006292816s for pod "kube-apiserver-embed-certs-325116" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:01.824717   85759 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-325116" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:01.833386   85759 pod_ready.go:93] pod "kube-controller-manager-embed-certs-325116" in "kube-system" namespace has status "Ready":"True"
	I1104 12:08:01.833415   85759 pod_ready.go:82] duration metric: took 8.688522ms for pod "kube-controller-manager-embed-certs-325116" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:01.833428   85759 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-phzgx" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:01.839346   85759 pod_ready.go:93] pod "kube-proxy-phzgx" in "kube-system" namespace has status "Ready":"True"
	I1104 12:08:01.839370   85759 pod_ready.go:82] duration metric: took 5.933971ms for pod "kube-proxy-phzgx" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:01.839379   85759 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-325116" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:01.844449   85759 pod_ready.go:93] pod "kube-scheduler-embed-certs-325116" in "kube-system" namespace has status "Ready":"True"
	I1104 12:08:01.844476   85759 pod_ready.go:82] duration metric: took 5.08969ms for pod "kube-scheduler-embed-certs-325116" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:01.844490   85759 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:03.852871   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:02.298631   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:02.299046   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | unable to find current IP address of domain old-k8s-version-589257 in network mk-old-k8s-version-589257
	I1104 12:08:02.299079   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | I1104 12:08:02.298978   87367 retry.go:31] will retry after 2.664667046s: waiting for machine to come up
	I1104 12:08:04.964700   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:04.965185   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | unable to find current IP address of domain old-k8s-version-589257 in network mk-old-k8s-version-589257
	I1104 12:08:04.965209   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | I1104 12:08:04.965135   87367 retry.go:31] will retry after 2.716802395s: waiting for machine to come up
	I1104 12:08:05.580188   86301 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1104 12:08:05.591930   86301 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
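The 1-k8s.conflist written above is a standard CNI configuration for the bridge plugin that minikube selects for the kvm2 + crio combination. A minimal illustrative sketch of such a file (not the exact 496-byte payload from this run) looks like:

    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        { "type": "bridge", "bridge": "bridge", "isDefaultGateway": true,
          "ipMasq": true, "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }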
	I1104 12:08:05.609969   86301 system_pods.go:43] waiting for kube-system pods to appear ...
	I1104 12:08:05.621524   86301 system_pods.go:59] 8 kube-system pods found
	I1104 12:08:05.621559   86301 system_pods.go:61] "coredns-7c65d6cfc9-zw2tv" [71ce75a4-f051-4014-9ed0-7b275ea940a9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1104 12:08:05.621579   86301 system_pods.go:61] "etcd-default-k8s-diff-port-036892" [7e46d97c-96b5-4301-b98a-f33b69937411] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1104 12:08:05.621590   86301 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-036892" [483cebd0-7ceb-4bf4-b1f9-e33be61b136e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1104 12:08:05.621599   86301 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-036892" [c2dc4343-177a-4a4c-8a25-47078ec664f1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1104 12:08:05.621609   86301 system_pods.go:61] "kube-proxy-j2srm" [9450cebd-aefb-4f1a-bb99-7d1dab054dd7] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1104 12:08:05.621623   86301 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-036892" [505d8202-5e02-4abd-9eff-163810a91eb2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1104 12:08:05.621637   86301 system_pods.go:61] "metrics-server-6867b74b74-2wl94" [7f7cc9c1-420c-480e-b6b7-1a2027bf2f9d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1104 12:08:05.621646   86301 system_pods.go:61] "storage-provisioner" [18745f89-fc15-4a4c-b68b-7e80cd4f393b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1104 12:08:05.621656   86301 system_pods.go:74] duration metric: took 11.668493ms to wait for pod list to return data ...
	I1104 12:08:05.621669   86301 node_conditions.go:102] verifying NodePressure condition ...
	I1104 12:08:05.626555   86301 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1104 12:08:05.626583   86301 node_conditions.go:123] node cpu capacity is 2
	I1104 12:08:05.626600   86301 node_conditions.go:105] duration metric: took 4.924748ms to run NodePressure ...
	I1104 12:08:05.626620   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1104 12:08:05.899159   86301 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1104 12:08:05.905004   86301 kubeadm.go:739] kubelet initialised
	I1104 12:08:05.905027   86301 kubeadm.go:740] duration metric: took 5.831926ms waiting for restarted kubelet to initialise ...
	I1104 12:08:05.905035   86301 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1104 12:08:05.910301   86301 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-zw2tv" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:05.917517   86301 pod_ready.go:98] node "default-k8s-diff-port-036892" hosting pod "coredns-7c65d6cfc9-zw2tv" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-036892" has status "Ready":"False"
	I1104 12:08:05.917552   86301 pod_ready.go:82] duration metric: took 7.223252ms for pod "coredns-7c65d6cfc9-zw2tv" in "kube-system" namespace to be "Ready" ...
	E1104 12:08:05.917564   86301 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-036892" hosting pod "coredns-7c65d6cfc9-zw2tv" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-036892" has status "Ready":"False"
	I1104 12:08:05.917577   86301 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-036892" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:05.924077   86301 pod_ready.go:98] node "default-k8s-diff-port-036892" hosting pod "etcd-default-k8s-diff-port-036892" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-036892" has status "Ready":"False"
	I1104 12:08:05.924108   86301 pod_ready.go:82] duration metric: took 6.519268ms for pod "etcd-default-k8s-diff-port-036892" in "kube-system" namespace to be "Ready" ...
	E1104 12:08:05.924123   86301 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-036892" hosting pod "etcd-default-k8s-diff-port-036892" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-036892" has status "Ready":"False"
	I1104 12:08:05.924133   86301 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-036892" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:05.929584   86301 pod_ready.go:98] node "default-k8s-diff-port-036892" hosting pod "kube-apiserver-default-k8s-diff-port-036892" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-036892" has status "Ready":"False"
	I1104 12:08:05.929611   86301 pod_ready.go:82] duration metric: took 5.464108ms for pod "kube-apiserver-default-k8s-diff-port-036892" in "kube-system" namespace to be "Ready" ...
	E1104 12:08:05.929625   86301 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-036892" hosting pod "kube-apiserver-default-k8s-diff-port-036892" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-036892" has status "Ready":"False"
	I1104 12:08:05.929640   86301 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-036892" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:06.013629   86301 pod_ready.go:98] node "default-k8s-diff-port-036892" hosting pod "kube-controller-manager-default-k8s-diff-port-036892" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-036892" has status "Ready":"False"
	I1104 12:08:06.013655   86301 pod_ready.go:82] duration metric: took 84.003349ms for pod "kube-controller-manager-default-k8s-diff-port-036892" in "kube-system" namespace to be "Ready" ...
	E1104 12:08:06.013666   86301 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-036892" hosting pod "kube-controller-manager-default-k8s-diff-port-036892" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-036892" has status "Ready":"False"
	I1104 12:08:06.013674   86301 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-j2srm" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:06.413337   86301 pod_ready.go:98] node "default-k8s-diff-port-036892" hosting pod "kube-proxy-j2srm" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-036892" has status "Ready":"False"
	I1104 12:08:06.413362   86301 pod_ready.go:82] duration metric: took 399.676932ms for pod "kube-proxy-j2srm" in "kube-system" namespace to be "Ready" ...
	E1104 12:08:06.413372   86301 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-036892" hosting pod "kube-proxy-j2srm" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-036892" has status "Ready":"False"
	I1104 12:08:06.413379   86301 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-036892" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:06.813910   86301 pod_ready.go:98] node "default-k8s-diff-port-036892" hosting pod "kube-scheduler-default-k8s-diff-port-036892" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-036892" has status "Ready":"False"
	I1104 12:08:06.813948   86301 pod_ready.go:82] duration metric: took 400.558541ms for pod "kube-scheduler-default-k8s-diff-port-036892" in "kube-system" namespace to be "Ready" ...
	E1104 12:08:06.813962   86301 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-036892" hosting pod "kube-scheduler-default-k8s-diff-port-036892" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-036892" has status "Ready":"False"
	I1104 12:08:06.813971   86301 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:07.213603   86301 pod_ready.go:98] node "default-k8s-diff-port-036892" hosting pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-036892" has status "Ready":"False"
	I1104 12:08:07.213632   86301 pod_ready.go:82] duration metric: took 399.645898ms for pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace to be "Ready" ...
	E1104 12:08:07.213642   86301 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-036892" hosting pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-036892" has status "Ready":"False"
	I1104 12:08:07.213650   86301 pod_ready.go:39] duration metric: took 1.308606058s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1104 12:08:07.213664   86301 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1104 12:08:07.224946   86301 ops.go:34] apiserver oom_adj: -16
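The oom_adj read above confirms the restarted apiserver is running with a strongly negative OOM score adjustment, so the kernel's OOM killer will avoid it. oom_adj is the legacy /proc interface; on current kernels the same information is usually read from oom_score_adj, for example:

    cat /proc/$(pgrep kube-apiserver)/oom_score_adj   # typically -997 for static control-plane pods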
	I1104 12:08:07.224972   86301 kubeadm.go:597] duration metric: took 8.580368331s to restartPrimaryControlPlane
	I1104 12:08:07.224984   86301 kubeadm.go:394] duration metric: took 8.624649305s to StartCluster
	I1104 12:08:07.225005   86301 settings.go:142] acquiring lock: {Name:mk5550833c1f1a4ab4fbb2bf42ff8bd7f6341220 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 12:08:07.225093   86301 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19906-19898/kubeconfig
	I1104 12:08:07.226601   86301 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19906-19898/kubeconfig: {Name:mk164657c178e6abe086acb5c1cadb969cb2b0cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 12:08:07.226848   86301 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.130 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1104 12:08:07.226980   86301 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1104 12:08:07.227075   86301 config.go:182] Loaded profile config "default-k8s-diff-port-036892": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1104 12:08:07.227096   86301 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-036892"
	I1104 12:08:07.227115   86301 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-036892"
	I1104 12:08:07.227110   86301 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-036892"
	W1104 12:08:07.227128   86301 addons.go:243] addon metrics-server should already be in state true
	I1104 12:08:07.227145   86301 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-036892"
	I1104 12:08:07.227161   86301 host.go:66] Checking if "default-k8s-diff-port-036892" exists ...
	I1104 12:08:07.227082   86301 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-036892"
	I1104 12:08:07.227275   86301 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-036892"
	W1104 12:08:07.227291   86301 addons.go:243] addon storage-provisioner should already be in state true
	I1104 12:08:07.227316   86301 host.go:66] Checking if "default-k8s-diff-port-036892" exists ...
	I1104 12:08:07.227494   86301 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:08:07.227529   86301 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:08:07.227592   86301 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:08:07.227620   86301 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:08:07.227634   86301 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:08:07.227655   86301 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:08:07.228583   86301 out.go:177] * Verifying Kubernetes components...
	I1104 12:08:07.229927   86301 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1104 12:08:07.242580   86301 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41275
	I1104 12:08:07.243096   86301 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:08:07.243659   86301 main.go:141] libmachine: Using API Version  1
	I1104 12:08:07.243678   86301 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:08:07.243954   86301 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45813
	I1104 12:08:07.244058   86301 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:08:07.244513   86301 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:08:07.244634   86301 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:08:07.244679   86301 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:08:07.245015   86301 main.go:141] libmachine: Using API Version  1
	I1104 12:08:07.245035   86301 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:08:07.245437   86301 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:08:07.245905   86301 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:08:07.245942   86301 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:08:07.245963   86301 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43217
	I1104 12:08:07.246281   86301 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:08:07.246725   86301 main.go:141] libmachine: Using API Version  1
	I1104 12:08:07.246748   86301 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:08:07.247084   86301 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:08:07.247294   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetState
	I1104 12:08:07.250833   86301 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-036892"
	W1104 12:08:07.250857   86301 addons.go:243] addon default-storageclass should already be in state true
	I1104 12:08:07.250884   86301 host.go:66] Checking if "default-k8s-diff-port-036892" exists ...
	I1104 12:08:07.251243   86301 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:08:07.251285   86301 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:08:07.261670   86301 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34265
	I1104 12:08:07.261736   86301 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36543
	I1104 12:08:07.262154   86301 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:08:07.262283   86301 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:08:07.262803   86301 main.go:141] libmachine: Using API Version  1
	I1104 12:08:07.262821   86301 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:08:07.262916   86301 main.go:141] libmachine: Using API Version  1
	I1104 12:08:07.262927   86301 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:08:07.263218   86301 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:08:07.263282   86301 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:08:07.263411   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetState
	I1104 12:08:07.263457   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetState
	I1104 12:08:07.265067   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .DriverName
	I1104 12:08:07.265574   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .DriverName
	I1104 12:08:07.267307   86301 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1104 12:08:07.267336   86301 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1104 12:08:07.268853   86301 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1104 12:08:07.268874   86301 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1104 12:08:07.268895   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHHostname
	I1104 12:08:07.268976   86301 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1104 12:08:07.268994   86301 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1104 12:08:07.269011   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHHostname
	I1104 12:08:07.271584   86301 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39607
	I1104 12:08:07.272047   86301 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:08:07.272347   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:08:07.272377   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:08:07.272688   86301 main.go:141] libmachine: Using API Version  1
	I1104 12:08:07.272707   86301 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:08:07.272933   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:02:d6", ip: ""} in network mk-default-k8s-diff-port-036892: {Iface:virbr4 ExpiryTime:2024-11-04 13:07:45 +0000 UTC Type:0 Mac:52:54:00:da:02:d6 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:default-k8s-diff-port-036892 Clientid:01:52:54:00:da:02:d6}
	I1104 12:08:07.272959   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined IP address 192.168.72.130 and MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:08:07.272990   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:02:d6", ip: ""} in network mk-default-k8s-diff-port-036892: {Iface:virbr4 ExpiryTime:2024-11-04 13:07:45 +0000 UTC Type:0 Mac:52:54:00:da:02:d6 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:default-k8s-diff-port-036892 Clientid:01:52:54:00:da:02:d6}
	I1104 12:08:07.273007   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined IP address 192.168.72.130 and MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:08:07.273065   86301 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:08:07.273149   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHPort
	I1104 12:08:07.273564   86301 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:08:07.273597   86301 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:08:07.273765   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHPort
	I1104 12:08:07.273767   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHKeyPath
	I1104 12:08:07.273925   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHKeyPath
	I1104 12:08:07.273966   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHUsername
	I1104 12:08:07.274049   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHUsername
	I1104 12:08:07.274098   86301 sshutil.go:53] new ssh client: &{IP:192.168.72.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/default-k8s-diff-port-036892/id_rsa Username:docker}
	I1104 12:08:07.274179   86301 sshutil.go:53] new ssh client: &{IP:192.168.72.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/default-k8s-diff-port-036892/id_rsa Username:docker}
	I1104 12:08:07.288474   86301 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36605
	I1104 12:08:07.288955   86301 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:08:07.289555   86301 main.go:141] libmachine: Using API Version  1
	I1104 12:08:07.289580   86301 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:08:07.289915   86301 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:08:07.290128   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetState
	I1104 12:08:07.291744   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .DriverName
	I1104 12:08:07.291944   86301 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1104 12:08:07.291958   86301 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1104 12:08:07.291972   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHHostname
	I1104 12:08:07.294477   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:08:07.294793   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:02:d6", ip: ""} in network mk-default-k8s-diff-port-036892: {Iface:virbr4 ExpiryTime:2024-11-04 13:07:45 +0000 UTC Type:0 Mac:52:54:00:da:02:d6 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:default-k8s-diff-port-036892 Clientid:01:52:54:00:da:02:d6}
	I1104 12:08:07.294824   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined IP address 192.168.72.130 and MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:08:07.295009   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHPort
	I1104 12:08:07.295178   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHKeyPath
	I1104 12:08:07.295326   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHUsername
	I1104 12:08:07.295444   86301 sshutil.go:53] new ssh client: &{IP:192.168.72.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/default-k8s-diff-port-036892/id_rsa Username:docker}
	I1104 12:08:07.430295   86301 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1104 12:08:07.461396   86301 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-036892" to be "Ready" ...
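At this point the node is still reporting "Ready":"False" because the kubelet has only just been restarted and must first pick up the bridge CNI config written earlier; the wait loop keeps polling (the node_ready.go:53 entries further down still show "Ready":"False"). The same condition can be inspected manually, using the node name from this run:

    kubectl get nodes
    kubectl describe node default-k8s-diff-port-036892 | grep -A8 'Conditions:'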
	I1104 12:08:07.523117   86301 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1104 12:08:07.542339   86301 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1104 12:08:07.542361   86301 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1104 12:08:07.566207   86301 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1104 12:08:07.566232   86301 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1104 12:08:07.580871   86301 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1104 12:08:07.596309   86301 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1104 12:08:07.596338   86301 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1104 12:08:07.626662   86301 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1104 12:08:08.553268   86301 main.go:141] libmachine: Making call to close driver server
	I1104 12:08:08.553295   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .Close
	I1104 12:08:08.553315   86301 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.030165078s)
	I1104 12:08:08.553352   86301 main.go:141] libmachine: Making call to close driver server
	I1104 12:08:08.553373   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .Close
	I1104 12:08:08.553656   86301 main.go:141] libmachine: Successfully made call to close driver server
	I1104 12:08:08.553673   86301 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 12:08:08.553683   86301 main.go:141] libmachine: Making call to close driver server
	I1104 12:08:08.553692   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .Close
	I1104 12:08:08.553739   86301 main.go:141] libmachine: Successfully made call to close driver server
	I1104 12:08:08.553759   86301 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 12:08:08.553767   86301 main.go:141] libmachine: Making call to close driver server
	I1104 12:08:08.553780   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .Close
	I1104 12:08:08.553925   86301 main.go:141] libmachine: Successfully made call to close driver server
	I1104 12:08:08.553942   86301 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 12:08:08.554106   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | Closing plugin on server side
	I1104 12:08:08.554138   86301 main.go:141] libmachine: Successfully made call to close driver server
	I1104 12:08:08.554155   86301 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 12:08:08.559615   86301 main.go:141] libmachine: Making call to close driver server
	I1104 12:08:08.559635   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .Close
	I1104 12:08:08.559944   86301 main.go:141] libmachine: Successfully made call to close driver server
	I1104 12:08:08.559961   86301 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 12:08:08.563833   86301 main.go:141] libmachine: Making call to close driver server
	I1104 12:08:08.563848   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .Close
	I1104 12:08:08.564072   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | Closing plugin on server side
	I1104 12:08:08.564636   86301 main.go:141] libmachine: Successfully made call to close driver server
	I1104 12:08:08.564653   86301 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 12:08:08.564666   86301 main.go:141] libmachine: Making call to close driver server
	I1104 12:08:08.564671   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .Close
	I1104 12:08:08.564894   86301 main.go:141] libmachine: Successfully made call to close driver server
	I1104 12:08:08.564906   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | Closing plugin on server side
	I1104 12:08:08.564912   86301 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 12:08:08.564940   86301 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-036892"
	I1104 12:08:08.566838   86301 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1104 12:08:08.568165   86301 addons.go:510] duration metric: took 1.341200959s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1104 12:08:09.465405   86301 node_ready.go:53] node "default-k8s-diff-port-036892" has status "Ready":"False"
	I1104 12:08:06.350759   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:08.850563   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:10.851315   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:07.683582   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:07.684143   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | unable to find current IP address of domain old-k8s-version-589257 in network mk-old-k8s-version-589257
	I1104 12:08:07.684172   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | I1104 12:08:07.684093   87367 retry.go:31] will retry after 2.880856513s: waiting for machine to come up
	I1104 12:08:10.566197   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:10.566657   86402 main.go:141] libmachine: (old-k8s-version-589257) Found IP for machine: 192.168.50.180
	I1104 12:08:10.566675   86402 main.go:141] libmachine: (old-k8s-version-589257) Reserving static IP address...
	I1104 12:08:10.566687   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has current primary IP address 192.168.50.180 and MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:10.567139   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | found host DHCP lease matching {name: "old-k8s-version-589257", mac: "52:54:00:6b:6c:11", ip: "192.168.50.180"} in network mk-old-k8s-version-589257: {Iface:virbr2 ExpiryTime:2024-11-04 13:08:03 +0000 UTC Type:0 Mac:52:54:00:6b:6c:11 Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:old-k8s-version-589257 Clientid:01:52:54:00:6b:6c:11}
	I1104 12:08:10.567166   86402 main.go:141] libmachine: (old-k8s-version-589257) Reserved static IP address: 192.168.50.180
	I1104 12:08:10.567186   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | skip adding static IP to network mk-old-k8s-version-589257 - found existing host DHCP lease matching {name: "old-k8s-version-589257", mac: "52:54:00:6b:6c:11", ip: "192.168.50.180"}
	I1104 12:08:10.567199   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | Getting to WaitForSSH function...
	I1104 12:08:10.567213   86402 main.go:141] libmachine: (old-k8s-version-589257) Waiting for SSH to be available...
	I1104 12:08:10.569500   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:10.569816   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:6c:11", ip: ""} in network mk-old-k8s-version-589257: {Iface:virbr2 ExpiryTime:2024-11-04 13:08:03 +0000 UTC Type:0 Mac:52:54:00:6b:6c:11 Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:old-k8s-version-589257 Clientid:01:52:54:00:6b:6c:11}
	I1104 12:08:10.569846   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined IP address 192.168.50.180 and MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:10.569982   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | Using SSH client type: external
	I1104 12:08:10.570004   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | Using SSH private key: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/old-k8s-version-589257/id_rsa (-rw-------)
	I1104 12:08:10.570025   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.180 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19906-19898/.minikube/machines/old-k8s-version-589257/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1104 12:08:10.570033   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | About to run SSH command:
	I1104 12:08:10.570041   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | exit 0
	I1104 12:08:10.697114   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | SSH cmd err, output: <nil>: 
	I1104 12:08:10.697552   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetConfigRaw
	I1104 12:08:10.698196   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetIP
	I1104 12:08:10.700982   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:10.701369   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:6c:11", ip: ""} in network mk-old-k8s-version-589257: {Iface:virbr2 ExpiryTime:2024-11-04 13:08:03 +0000 UTC Type:0 Mac:52:54:00:6b:6c:11 Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:old-k8s-version-589257 Clientid:01:52:54:00:6b:6c:11}
	I1104 12:08:10.701403   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined IP address 192.168.50.180 and MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:10.701649   86402 profile.go:143] Saving config to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/old-k8s-version-589257/config.json ...
	I1104 12:08:10.701875   86402 machine.go:93] provisionDockerMachine start ...
	I1104 12:08:10.701898   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .DriverName
	I1104 12:08:10.702099   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHHostname
	I1104 12:08:10.704605   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:10.704977   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:6c:11", ip: ""} in network mk-old-k8s-version-589257: {Iface:virbr2 ExpiryTime:2024-11-04 13:08:03 +0000 UTC Type:0 Mac:52:54:00:6b:6c:11 Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:old-k8s-version-589257 Clientid:01:52:54:00:6b:6c:11}
	I1104 12:08:10.705006   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined IP address 192.168.50.180 and MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:10.705151   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHPort
	I1104 12:08:10.705342   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHKeyPath
	I1104 12:08:10.705486   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHKeyPath
	I1104 12:08:10.705602   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHUsername
	I1104 12:08:10.705703   86402 main.go:141] libmachine: Using SSH client type: native
	I1104 12:08:10.705907   86402 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.180 22 <nil> <nil>}
	I1104 12:08:10.705918   86402 main.go:141] libmachine: About to run SSH command:
	hostname
	I1104 12:08:10.813494   86402 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1104 12:08:10.813544   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetMachineName
	I1104 12:08:10.813816   86402 buildroot.go:166] provisioning hostname "old-k8s-version-589257"
	I1104 12:08:10.813847   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetMachineName
	I1104 12:08:10.814034   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHHostname
	I1104 12:08:10.816782   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:10.817186   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:6c:11", ip: ""} in network mk-old-k8s-version-589257: {Iface:virbr2 ExpiryTime:2024-11-04 13:08:03 +0000 UTC Type:0 Mac:52:54:00:6b:6c:11 Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:old-k8s-version-589257 Clientid:01:52:54:00:6b:6c:11}
	I1104 12:08:10.817245   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined IP address 192.168.50.180 and MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:10.817394   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHPort
	I1104 12:08:10.817589   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHKeyPath
	I1104 12:08:10.817760   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHKeyPath
	I1104 12:08:10.817882   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHUsername
	I1104 12:08:10.818027   86402 main.go:141] libmachine: Using SSH client type: native
	I1104 12:08:10.818227   86402 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.180 22 <nil> <nil>}
	I1104 12:08:10.818245   86402 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-589257 && echo "old-k8s-version-589257" | sudo tee /etc/hostname
	I1104 12:08:10.940779   86402 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-589257
	
	I1104 12:08:10.940803   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHHostname
	I1104 12:08:10.943694   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:10.944062   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:6c:11", ip: ""} in network mk-old-k8s-version-589257: {Iface:virbr2 ExpiryTime:2024-11-04 13:08:03 +0000 UTC Type:0 Mac:52:54:00:6b:6c:11 Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:old-k8s-version-589257 Clientid:01:52:54:00:6b:6c:11}
	I1104 12:08:10.944090   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined IP address 192.168.50.180 and MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:10.944263   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHPort
	I1104 12:08:10.944452   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHKeyPath
	I1104 12:08:10.944627   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHKeyPath
	I1104 12:08:10.944767   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHUsername
	I1104 12:08:10.944910   86402 main.go:141] libmachine: Using SSH client type: native
	I1104 12:08:10.945093   86402 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.180 22 <nil> <nil>}
	I1104 12:08:10.945110   86402 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-589257' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-589257/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-589257' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1104 12:08:11.061924   86402 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1104 12:08:11.061966   86402 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19906-19898/.minikube CaCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19906-19898/.minikube}
	I1104 12:08:11.062007   86402 buildroot.go:174] setting up certificates
	I1104 12:08:11.062021   86402 provision.go:84] configureAuth start
	I1104 12:08:11.062033   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetMachineName
	I1104 12:08:11.062293   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetIP
	I1104 12:08:11.065165   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:11.065559   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:6c:11", ip: ""} in network mk-old-k8s-version-589257: {Iface:virbr2 ExpiryTime:2024-11-04 13:08:03 +0000 UTC Type:0 Mac:52:54:00:6b:6c:11 Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:old-k8s-version-589257 Clientid:01:52:54:00:6b:6c:11}
	I1104 12:08:11.065594   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined IP address 192.168.50.180 and MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:11.065834   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHHostname
	I1104 12:08:11.068257   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:11.068620   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:6c:11", ip: ""} in network mk-old-k8s-version-589257: {Iface:virbr2 ExpiryTime:2024-11-04 13:08:03 +0000 UTC Type:0 Mac:52:54:00:6b:6c:11 Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:old-k8s-version-589257 Clientid:01:52:54:00:6b:6c:11}
	I1104 12:08:11.068646   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined IP address 192.168.50.180 and MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:11.068787   86402 provision.go:143] copyHostCerts
	I1104 12:08:11.068842   86402 exec_runner.go:144] found /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem, removing ...
	I1104 12:08:11.068854   86402 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem
	I1104 12:08:11.068904   86402 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem (1078 bytes)
	I1104 12:08:11.068993   86402 exec_runner.go:144] found /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem, removing ...
	I1104 12:08:11.069000   86402 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem
	I1104 12:08:11.069019   86402 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem (1123 bytes)
	I1104 12:08:11.069072   86402 exec_runner.go:144] found /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem, removing ...
	I1104 12:08:11.069079   86402 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem
	I1104 12:08:11.069097   86402 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem (1679 bytes)
	I1104 12:08:11.069191   86402 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-589257 san=[127.0.0.1 192.168.50.180 localhost minikube old-k8s-version-589257]
	I1104 12:08:11.271880   86402 provision.go:177] copyRemoteCerts
	I1104 12:08:11.271946   86402 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1104 12:08:11.271988   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHHostname
	I1104 12:08:11.275023   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:11.275396   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:6c:11", ip: ""} in network mk-old-k8s-version-589257: {Iface:virbr2 ExpiryTime:2024-11-04 13:08:03 +0000 UTC Type:0 Mac:52:54:00:6b:6c:11 Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:old-k8s-version-589257 Clientid:01:52:54:00:6b:6c:11}
	I1104 12:08:11.275428   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined IP address 192.168.50.180 and MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:11.275701   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHPort
	I1104 12:08:11.275905   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHKeyPath
	I1104 12:08:11.276048   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHUsername
	I1104 12:08:11.276182   86402 sshutil.go:53] new ssh client: &{IP:192.168.50.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/old-k8s-version-589257/id_rsa Username:docker}
	I1104 12:08:11.362968   86402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1104 12:08:11.388401   86402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1104 12:08:11.417180   86402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1104 12:08:11.439810   86402 provision.go:87] duration metric: took 377.778325ms to configureAuth
	I1104 12:08:11.439841   86402 buildroot.go:189] setting minikube options for container-runtime
	I1104 12:08:11.440043   86402 config.go:182] Loaded profile config "old-k8s-version-589257": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1104 12:08:11.440110   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHHostname
	I1104 12:08:11.442476   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:11.442783   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:6c:11", ip: ""} in network mk-old-k8s-version-589257: {Iface:virbr2 ExpiryTime:2024-11-04 13:08:03 +0000 UTC Type:0 Mac:52:54:00:6b:6c:11 Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:old-k8s-version-589257 Clientid:01:52:54:00:6b:6c:11}
	I1104 12:08:11.442818   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined IP address 192.168.50.180 and MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:11.443005   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHPort
	I1104 12:08:11.443204   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHKeyPath
	I1104 12:08:11.443329   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHKeyPath
	I1104 12:08:11.443492   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHUsername
	I1104 12:08:11.443665   86402 main.go:141] libmachine: Using SSH client type: native
	I1104 12:08:11.443822   86402 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.180 22 <nil> <nil>}
	I1104 12:08:11.443837   86402 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1104 12:08:11.662212   86402 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1104 12:08:11.662241   86402 machine.go:96] duration metric: took 960.351823ms to provisionDockerMachine
	I1104 12:08:11.662256   86402 start.go:293] postStartSetup for "old-k8s-version-589257" (driver="kvm2")
	I1104 12:08:11.662269   86402 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1104 12:08:11.662289   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .DriverName
	I1104 12:08:11.662613   86402 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1104 12:08:11.662642   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHHostname
	I1104 12:08:11.665028   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:11.665391   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:6c:11", ip: ""} in network mk-old-k8s-version-589257: {Iface:virbr2 ExpiryTime:2024-11-04 13:08:03 +0000 UTC Type:0 Mac:52:54:00:6b:6c:11 Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:old-k8s-version-589257 Clientid:01:52:54:00:6b:6c:11}
	I1104 12:08:11.665420   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined IP address 192.168.50.180 and MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:11.665598   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHPort
	I1104 12:08:11.665776   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHKeyPath
	I1104 12:08:11.665942   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHUsername
	I1104 12:08:11.666064   86402 sshutil.go:53] new ssh client: &{IP:192.168.50.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/old-k8s-version-589257/id_rsa Username:docker}
	I1104 12:08:11.889727   85500 start.go:364] duration metric: took 49.147423989s to acquireMachinesLock for "no-preload-908370"
	I1104 12:08:11.889796   85500 start.go:96] Skipping create...Using existing machine configuration
	I1104 12:08:11.889806   85500 fix.go:54] fixHost starting: 
	I1104 12:08:11.890201   85500 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:08:11.890229   85500 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:08:11.906978   85500 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40931
	I1104 12:08:11.907524   85500 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:08:11.907916   85500 main.go:141] libmachine: Using API Version  1
	I1104 12:08:11.907939   85500 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:08:11.908319   85500 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:08:11.908518   85500 main.go:141] libmachine: (no-preload-908370) Calling .DriverName
	I1104 12:08:11.908672   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetState
	I1104 12:08:11.910182   85500 fix.go:112] recreateIfNeeded on no-preload-908370: state=Stopped err=<nil>
	I1104 12:08:11.910224   85500 main.go:141] libmachine: (no-preload-908370) Calling .DriverName
	W1104 12:08:11.910353   85500 fix.go:138] unexpected machine state, will restart: <nil>
	I1104 12:08:11.912457   85500 out.go:177] * Restarting existing kvm2 VM for "no-preload-908370" ...
	I1104 12:08:11.747199   86402 ssh_runner.go:195] Run: cat /etc/os-release
	I1104 12:08:11.751253   86402 info.go:137] Remote host: Buildroot 2023.02.9
	I1104 12:08:11.751279   86402 filesync.go:126] Scanning /home/jenkins/minikube-integration/19906-19898/.minikube/addons for local assets ...
	I1104 12:08:11.751356   86402 filesync.go:126] Scanning /home/jenkins/minikube-integration/19906-19898/.minikube/files for local assets ...
	I1104 12:08:11.751465   86402 filesync.go:149] local asset: /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem -> 272182.pem in /etc/ssl/certs
	I1104 12:08:11.751591   86402 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1104 12:08:11.760409   86402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem --> /etc/ssl/certs/272182.pem (1708 bytes)
	I1104 12:08:11.781890   86402 start.go:296] duration metric: took 119.620604ms for postStartSetup
	I1104 12:08:11.781934   86402 fix.go:56] duration metric: took 19.207938878s for fixHost
	I1104 12:08:11.781960   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHHostname
	I1104 12:08:11.784767   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:11.785058   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:6c:11", ip: ""} in network mk-old-k8s-version-589257: {Iface:virbr2 ExpiryTime:2024-11-04 13:08:03 +0000 UTC Type:0 Mac:52:54:00:6b:6c:11 Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:old-k8s-version-589257 Clientid:01:52:54:00:6b:6c:11}
	I1104 12:08:11.785084   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined IP address 192.168.50.180 and MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:11.785300   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHPort
	I1104 12:08:11.785500   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHKeyPath
	I1104 12:08:11.785644   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHKeyPath
	I1104 12:08:11.785750   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHUsername
	I1104 12:08:11.785877   86402 main.go:141] libmachine: Using SSH client type: native
	I1104 12:08:11.786047   86402 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.180 22 <nil> <nil>}
	I1104 12:08:11.786059   86402 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1104 12:08:11.889540   86402 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730722091.863405264
	
	I1104 12:08:11.889568   86402 fix.go:216] guest clock: 1730722091.863405264
	I1104 12:08:11.889578   86402 fix.go:229] Guest: 2024-11-04 12:08:11.863405264 +0000 UTC Remote: 2024-11-04 12:08:11.781939603 +0000 UTC m=+230.132769870 (delta=81.465661ms)
	I1104 12:08:11.889631   86402 fix.go:200] guest clock delta is within tolerance: 81.465661ms
	I1104 12:08:11.889641   86402 start.go:83] releasing machines lock for "old-k8s-version-589257", held for 19.315682928s
	I1104 12:08:11.889677   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .DriverName
	I1104 12:08:11.889975   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetIP
	I1104 12:08:11.892654   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:11.892982   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:6c:11", ip: ""} in network mk-old-k8s-version-589257: {Iface:virbr2 ExpiryTime:2024-11-04 13:08:03 +0000 UTC Type:0 Mac:52:54:00:6b:6c:11 Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:old-k8s-version-589257 Clientid:01:52:54:00:6b:6c:11}
	I1104 12:08:11.893012   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined IP address 192.168.50.180 and MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:11.893212   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .DriverName
	I1104 12:08:11.893706   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .DriverName
	I1104 12:08:11.893888   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .DriverName
	I1104 12:08:11.893989   86402 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1104 12:08:11.894031   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHHostname
	I1104 12:08:11.894074   86402 ssh_runner.go:195] Run: cat /version.json
	I1104 12:08:11.894094   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHHostname
	I1104 12:08:11.896812   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:11.897020   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:11.897192   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:6c:11", ip: ""} in network mk-old-k8s-version-589257: {Iface:virbr2 ExpiryTime:2024-11-04 13:08:03 +0000 UTC Type:0 Mac:52:54:00:6b:6c:11 Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:old-k8s-version-589257 Clientid:01:52:54:00:6b:6c:11}
	I1104 12:08:11.897217   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined IP address 192.168.50.180 and MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:11.897454   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:6c:11", ip: ""} in network mk-old-k8s-version-589257: {Iface:virbr2 ExpiryTime:2024-11-04 13:08:03 +0000 UTC Type:0 Mac:52:54:00:6b:6c:11 Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:old-k8s-version-589257 Clientid:01:52:54:00:6b:6c:11}
	I1104 12:08:11.897478   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined IP address 192.168.50.180 and MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:11.897492   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHPort
	I1104 12:08:11.897631   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHPort
	I1104 12:08:11.897646   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHKeyPath
	I1104 12:08:11.897778   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHKeyPath
	I1104 12:08:11.897911   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHUsername
	I1104 12:08:11.897989   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHUsername
	I1104 12:08:11.898083   86402 sshutil.go:53] new ssh client: &{IP:192.168.50.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/old-k8s-version-589257/id_rsa Username:docker}
	I1104 12:08:11.898120   86402 sshutil.go:53] new ssh client: &{IP:192.168.50.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/old-k8s-version-589257/id_rsa Username:docker}
	I1104 12:08:11.998704   86402 ssh_runner.go:195] Run: systemctl --version
	I1104 12:08:12.004820   86402 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1104 12:08:12.148742   86402 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1104 12:08:12.155015   86402 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1104 12:08:12.155089   86402 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1104 12:08:12.171054   86402 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1104 12:08:12.171085   86402 start.go:495] detecting cgroup driver to use...
	I1104 12:08:12.171154   86402 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1104 12:08:12.189977   86402 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1104 12:08:12.204622   86402 docker.go:217] disabling cri-docker service (if available) ...
	I1104 12:08:12.204679   86402 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1104 12:08:12.218808   86402 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1104 12:08:12.232276   86402 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1104 12:08:12.341220   86402 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1104 12:08:12.512813   86402 docker.go:233] disabling docker service ...
	I1104 12:08:12.512893   86402 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1104 12:08:12.526784   86402 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1104 12:08:12.539774   86402 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1104 12:08:12.666162   86402 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1104 12:08:12.788317   86402 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1104 12:08:12.802703   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1104 12:08:12.820915   86402 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1104 12:08:12.820985   86402 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:08:12.831311   86402 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1104 12:08:12.831400   86402 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:08:12.841625   86402 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:08:12.852548   86402 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:08:12.864683   86402 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1104 12:08:12.876794   86402 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1104 12:08:12.886878   86402 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1104 12:08:12.886943   86402 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1104 12:08:12.902476   86402 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1104 12:08:12.914565   86402 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1104 12:08:13.044125   86402 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1104 12:08:13.149816   86402 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1104 12:08:13.149893   86402 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1104 12:08:13.154639   86402 start.go:563] Will wait 60s for crictl version
	I1104 12:08:13.154706   86402 ssh_runner.go:195] Run: which crictl
	I1104 12:08:13.158788   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1104 12:08:13.200038   86402 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1104 12:08:13.200117   86402 ssh_runner.go:195] Run: crio --version
	I1104 12:08:13.233501   86402 ssh_runner.go:195] Run: crio --version
	I1104 12:08:13.264558   86402 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1104 12:08:11.913730   85500 main.go:141] libmachine: (no-preload-908370) Calling .Start
	I1104 12:08:11.913915   85500 main.go:141] libmachine: (no-preload-908370) Ensuring networks are active...
	I1104 12:08:11.914653   85500 main.go:141] libmachine: (no-preload-908370) Ensuring network default is active
	I1104 12:08:11.915111   85500 main.go:141] libmachine: (no-preload-908370) Ensuring network mk-no-preload-908370 is active
	I1104 12:08:11.915575   85500 main.go:141] libmachine: (no-preload-908370) Getting domain xml...
	I1104 12:08:11.916375   85500 main.go:141] libmachine: (no-preload-908370) Creating domain...
	I1104 12:08:13.289793   85500 main.go:141] libmachine: (no-preload-908370) Waiting to get IP...
	I1104 12:08:13.290880   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:13.291498   85500 main.go:141] libmachine: (no-preload-908370) DBG | unable to find current IP address of domain no-preload-908370 in network mk-no-preload-908370
	I1104 12:08:13.291631   85500 main.go:141] libmachine: (no-preload-908370) DBG | I1104 12:08:13.291463   87562 retry.go:31] will retry after 277.090671ms: waiting for machine to come up
	I1104 12:08:13.570141   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:13.570726   85500 main.go:141] libmachine: (no-preload-908370) DBG | unable to find current IP address of domain no-preload-908370 in network mk-no-preload-908370
	I1104 12:08:13.570749   85500 main.go:141] libmachine: (no-preload-908370) DBG | I1104 12:08:13.570623   87562 retry.go:31] will retry after 259.985785ms: waiting for machine to come up
	I1104 12:08:13.832172   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:13.832855   85500 main.go:141] libmachine: (no-preload-908370) DBG | unable to find current IP address of domain no-preload-908370 in network mk-no-preload-908370
	I1104 12:08:13.832898   85500 main.go:141] libmachine: (no-preload-908370) DBG | I1104 12:08:13.832809   87562 retry.go:31] will retry after 473.426945ms: waiting for machine to come up
	I1104 12:08:14.308725   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:14.309273   85500 main.go:141] libmachine: (no-preload-908370) DBG | unable to find current IP address of domain no-preload-908370 in network mk-no-preload-908370
	I1104 12:08:14.309302   85500 main.go:141] libmachine: (no-preload-908370) DBG | I1104 12:08:14.309249   87562 retry.go:31] will retry after 417.466134ms: waiting for machine to come up
	I1104 12:08:14.727927   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:14.728388   85500 main.go:141] libmachine: (no-preload-908370) DBG | unable to find current IP address of domain no-preload-908370 in network mk-no-preload-908370
	I1104 12:08:14.728413   85500 main.go:141] libmachine: (no-preload-908370) DBG | I1104 12:08:14.728366   87562 retry.go:31] will retry after 734.894622ms: waiting for machine to come up
	I1104 12:08:11.465894   86301 node_ready.go:53] node "default-k8s-diff-port-036892" has status "Ready":"False"
	I1104 12:08:13.966921   86301 node_ready.go:53] node "default-k8s-diff-port-036892" has status "Ready":"False"
	I1104 12:08:14.465523   86301 node_ready.go:49] node "default-k8s-diff-port-036892" has status "Ready":"True"
	I1104 12:08:14.465545   86301 node_ready.go:38] duration metric: took 7.004111382s for node "default-k8s-diff-port-036892" to be "Ready" ...
	I1104 12:08:14.465554   86301 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1104 12:08:14.473334   86301 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-zw2tv" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:14.482486   86301 pod_ready.go:93] pod "coredns-7c65d6cfc9-zw2tv" in "kube-system" namespace has status "Ready":"True"
	I1104 12:08:14.482508   86301 pod_ready.go:82] duration metric: took 9.145998ms for pod "coredns-7c65d6cfc9-zw2tv" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:14.482518   86301 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-036892" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:13.351753   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:15.851818   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:13.266087   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetIP
	I1104 12:08:13.269660   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:13.270200   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:6c:11", ip: ""} in network mk-old-k8s-version-589257: {Iface:virbr2 ExpiryTime:2024-11-04 13:08:03 +0000 UTC Type:0 Mac:52:54:00:6b:6c:11 Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:old-k8s-version-589257 Clientid:01:52:54:00:6b:6c:11}
	I1104 12:08:13.270233   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined IP address 192.168.50.180 and MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:13.270520   86402 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1104 12:08:13.274751   86402 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1104 12:08:13.290348   86402 kubeadm.go:883] updating cluster {Name:old-k8s-version-589257 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-589257 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.180 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1104 12:08:13.290483   86402 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1104 12:08:13.290547   86402 ssh_runner.go:195] Run: sudo crictl images --output json
	I1104 12:08:13.340338   86402 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1104 12:08:13.340426   86402 ssh_runner.go:195] Run: which lz4
	I1104 12:08:13.345147   86402 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1104 12:08:13.349792   86402 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1104 12:08:13.349872   86402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1104 12:08:14.842720   86402 crio.go:462] duration metric: took 1.497615031s to copy over tarball
	I1104 12:08:14.842791   86402 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1104 12:08:15.464914   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:15.465510   85500 main.go:141] libmachine: (no-preload-908370) DBG | unable to find current IP address of domain no-preload-908370 in network mk-no-preload-908370
	I1104 12:08:15.465541   85500 main.go:141] libmachine: (no-preload-908370) DBG | I1104 12:08:15.465478   87562 retry.go:31] will retry after 578.01955ms: waiting for machine to come up
	I1104 12:08:16.044861   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:16.045354   85500 main.go:141] libmachine: (no-preload-908370) DBG | unable to find current IP address of domain no-preload-908370 in network mk-no-preload-908370
	I1104 12:08:16.045380   85500 main.go:141] libmachine: (no-preload-908370) DBG | I1104 12:08:16.045313   87562 retry.go:31] will retry after 1.136035438s: waiting for machine to come up
	I1104 12:08:17.182829   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:17.183255   85500 main.go:141] libmachine: (no-preload-908370) DBG | unable to find current IP address of domain no-preload-908370 in network mk-no-preload-908370
	I1104 12:08:17.183282   85500 main.go:141] libmachine: (no-preload-908370) DBG | I1104 12:08:17.183233   87562 retry.go:31] will retry after 1.070971462s: waiting for machine to come up
	I1104 12:08:18.255532   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:18.256051   85500 main.go:141] libmachine: (no-preload-908370) DBG | unable to find current IP address of domain no-preload-908370 in network mk-no-preload-908370
	I1104 12:08:18.256078   85500 main.go:141] libmachine: (no-preload-908370) DBG | I1104 12:08:18.256007   87562 retry.go:31] will retry after 1.542250267s: waiting for machine to come up
	I1104 12:08:19.800851   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:19.801298   85500 main.go:141] libmachine: (no-preload-908370) DBG | unable to find current IP address of domain no-preload-908370 in network mk-no-preload-908370
	I1104 12:08:19.801324   85500 main.go:141] libmachine: (no-preload-908370) DBG | I1104 12:08:19.801276   87562 retry.go:31] will retry after 2.127250885s: waiting for machine to come up
	I1104 12:08:16.489394   86301 pod_ready.go:103] pod "etcd-default-k8s-diff-port-036892" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:16.994480   86301 pod_ready.go:93] pod "etcd-default-k8s-diff-port-036892" in "kube-system" namespace has status "Ready":"True"
	I1104 12:08:16.994502   86301 pod_ready.go:82] duration metric: took 2.511977586s for pod "etcd-default-k8s-diff-port-036892" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:16.994512   86301 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-036892" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:17.502472   86301 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-036892" in "kube-system" namespace has status "Ready":"True"
	I1104 12:08:17.502499   86301 pod_ready.go:82] duration metric: took 507.979218ms for pod "kube-apiserver-default-k8s-diff-port-036892" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:17.502513   86301 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-036892" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:17.507763   86301 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-036892" in "kube-system" namespace has status "Ready":"True"
	I1104 12:08:17.507785   86301 pod_ready.go:82] duration metric: took 5.264185ms for pod "kube-controller-manager-default-k8s-diff-port-036892" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:17.507795   86301 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-j2srm" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:17.514017   86301 pod_ready.go:93] pod "kube-proxy-j2srm" in "kube-system" namespace has status "Ready":"True"
	I1104 12:08:17.514045   86301 pod_ready.go:82] duration metric: took 6.241799ms for pod "kube-proxy-j2srm" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:17.514058   86301 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-036892" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:19.683083   86301 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-036892" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:20.049735   86301 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-036892" in "kube-system" namespace has status "Ready":"True"
	I1104 12:08:20.049759   86301 pod_ready.go:82] duration metric: took 2.535691306s for pod "kube-scheduler-default-k8s-diff-port-036892" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:20.049772   86301 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:18.749494   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:20.853448   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:17.837381   86402 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.994557811s)
	I1104 12:08:17.837410   86402 crio.go:469] duration metric: took 2.994665886s to extract the tarball
	I1104 12:08:17.837420   86402 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1104 12:08:17.882418   86402 ssh_runner.go:195] Run: sudo crictl images --output json
	I1104 12:08:17.917035   86402 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1104 12:08:17.917064   86402 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1104 12:08:17.917195   86402 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1104 12:08:17.917277   86402 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1104 12:08:17.917169   86402 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1104 12:08:17.917164   86402 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1104 12:08:17.917150   86402 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1104 12:08:17.917277   86402 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1104 12:08:17.917283   86402 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1104 12:08:17.917254   86402 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1104 12:08:17.918929   86402 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1104 12:08:17.918943   86402 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1104 12:08:17.918929   86402 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1104 12:08:17.918929   86402 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1104 12:08:17.918930   86402 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1104 12:08:17.918930   86402 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1104 12:08:17.919014   86402 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1104 12:08:17.919025   86402 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1104 12:08:18.070119   86402 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1104 12:08:18.076604   86402 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1104 12:08:18.078712   86402 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1104 12:08:18.083777   86402 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1104 12:08:18.087827   86402 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1104 12:08:18.092838   86402 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1104 12:08:18.110359   86402 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1104 12:08:18.165523   86402 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1104 12:08:18.165569   86402 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1104 12:08:18.165617   86402 ssh_runner.go:195] Run: which crictl
	I1104 12:08:18.213723   86402 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1104 12:08:18.213784   86402 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1104 12:08:18.213833   86402 ssh_runner.go:195] Run: which crictl
	I1104 12:08:18.252171   86402 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1104 12:08:18.252221   86402 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1104 12:08:18.252270   86402 ssh_runner.go:195] Run: which crictl
	I1104 12:08:18.256482   86402 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1104 12:08:18.256522   86402 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1104 12:08:18.256567   86402 ssh_runner.go:195] Run: which crictl
	I1104 12:08:18.256606   86402 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1104 12:08:18.256564   86402 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1104 12:08:18.256631   86402 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1104 12:08:18.256632   86402 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1104 12:08:18.256632   86402 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1104 12:08:18.256690   86402 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1104 12:08:18.256657   86402 ssh_runner.go:195] Run: which crictl
	I1104 12:08:18.256703   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1104 12:08:18.256691   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1104 12:08:18.256738   86402 ssh_runner.go:195] Run: which crictl
	I1104 12:08:18.256658   86402 ssh_runner.go:195] Run: which crictl
	I1104 12:08:18.264837   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1104 12:08:18.265836   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1104 12:08:18.349896   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1104 12:08:18.349935   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1104 12:08:18.350014   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1104 12:08:18.350077   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1104 12:08:18.368533   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1104 12:08:18.371302   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1104 12:08:18.371393   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1104 12:08:18.496042   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1104 12:08:18.496121   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1104 12:08:18.509196   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1104 12:08:18.509339   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1104 12:08:18.509247   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1104 12:08:18.509348   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1104 12:08:18.513943   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1104 12:08:18.645867   86402 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1104 12:08:18.649173   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1104 12:08:18.649276   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1104 12:08:18.656159   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1104 12:08:18.656193   86402 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1104 12:08:18.660309   86402 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1104 12:08:18.660384   86402 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1104 12:08:18.719995   86402 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1104 12:08:18.720033   86402 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1104 12:08:18.728304   86402 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1104 12:08:18.867880   86402 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1104 12:08:19.009342   86402 cache_images.go:92] duration metric: took 1.092257593s to LoadCachedImages
	W1104 12:08:19.009448   86402 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	I1104 12:08:19.009469   86402 kubeadm.go:934] updating node { 192.168.50.180 8443 v1.20.0 crio true true} ...
	I1104 12:08:19.009590   86402 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-589257 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.180
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-589257 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1104 12:08:19.009671   86402 ssh_runner.go:195] Run: crio config
	I1104 12:08:19.054831   86402 cni.go:84] Creating CNI manager for ""
	I1104 12:08:19.054850   86402 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1104 12:08:19.054863   86402 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1104 12:08:19.054880   86402 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.180 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-589257 NodeName:old-k8s-version-589257 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.180"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.180 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1104 12:08:19.055049   86402 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.180
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-589257"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.180
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.180"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1104 12:08:19.055125   86402 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1104 12:08:19.065804   86402 binaries.go:44] Found k8s binaries, skipping transfer
	I1104 12:08:19.065888   86402 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1104 12:08:19.075491   86402 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1104 12:08:19.092371   86402 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1104 12:08:19.108896   86402 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1104 12:08:19.127622   86402 ssh_runner.go:195] Run: grep 192.168.50.180	control-plane.minikube.internal$ /etc/hosts
	I1104 12:08:19.131597   86402 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.180	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1104 12:08:19.145142   86402 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1104 12:08:19.284780   86402 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1104 12:08:19.303843   86402 certs.go:68] Setting up /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/old-k8s-version-589257 for IP: 192.168.50.180
	I1104 12:08:19.303872   86402 certs.go:194] generating shared ca certs ...
	I1104 12:08:19.303894   86402 certs.go:226] acquiring lock for ca certs: {Name:mk4fb0469da6697654d0290fd25edddd463f47c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 12:08:19.304084   86402 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.key
	I1104 12:08:19.304148   86402 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.key
	I1104 12:08:19.304161   86402 certs.go:256] generating profile certs ...
	I1104 12:08:19.304280   86402 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/old-k8s-version-589257/client.key
	I1104 12:08:19.304347   86402 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/old-k8s-version-589257/apiserver.key.b78bafdb
	I1104 12:08:19.304401   86402 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/old-k8s-version-589257/proxy-client.key
	I1104 12:08:19.304549   86402 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218.pem (1338 bytes)
	W1104 12:08:19.304590   86402 certs.go:480] ignoring /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218_empty.pem, impossibly tiny 0 bytes
	I1104 12:08:19.304608   86402 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem (1675 bytes)
	I1104 12:08:19.304659   86402 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem (1078 bytes)
	I1104 12:08:19.304702   86402 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem (1123 bytes)
	I1104 12:08:19.304729   86402 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem (1679 bytes)
	I1104 12:08:19.304794   86402 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem (1708 bytes)
	I1104 12:08:19.305479   86402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1104 12:08:19.341333   86402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1104 12:08:19.375179   86402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1104 12:08:19.410128   86402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1104 12:08:19.452565   86402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/old-k8s-version-589257/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1104 12:08:19.493404   86402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/old-k8s-version-589257/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1104 12:08:19.521178   86402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/old-k8s-version-589257/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1104 12:08:19.550524   86402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/old-k8s-version-589257/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1104 12:08:19.574903   86402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1104 12:08:19.599308   86402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218.pem --> /usr/share/ca-certificates/27218.pem (1338 bytes)
	I1104 12:08:19.627107   86402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem --> /usr/share/ca-certificates/272182.pem (1708 bytes)
	I1104 12:08:19.657121   86402 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1104 12:08:19.679087   86402 ssh_runner.go:195] Run: openssl version
	I1104 12:08:19.687115   86402 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1104 12:08:19.702537   86402 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1104 12:08:19.707340   86402 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  4 10:38 /usr/share/ca-certificates/minikubeCA.pem
	I1104 12:08:19.707408   86402 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1104 12:08:19.714955   86402 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1104 12:08:19.727883   86402 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/27218.pem && ln -fs /usr/share/ca-certificates/27218.pem /etc/ssl/certs/27218.pem"
	I1104 12:08:19.739690   86402 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/27218.pem
	I1104 12:08:19.744600   86402 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  4 10:49 /usr/share/ca-certificates/27218.pem
	I1104 12:08:19.744656   86402 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/27218.pem
	I1104 12:08:19.750324   86402 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/27218.pem /etc/ssl/certs/51391683.0"
	I1104 12:08:19.760988   86402 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/272182.pem && ln -fs /usr/share/ca-certificates/272182.pem /etc/ssl/certs/272182.pem"
	I1104 12:08:19.772634   86402 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/272182.pem
	I1104 12:08:19.777504   86402 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  4 10:49 /usr/share/ca-certificates/272182.pem
	I1104 12:08:19.777580   86402 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/272182.pem
	I1104 12:08:19.783660   86402 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/272182.pem /etc/ssl/certs/3ec20f2e.0"
	I1104 12:08:19.795483   86402 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1104 12:08:19.800327   86402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1104 12:08:19.806346   86402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1104 12:08:19.813920   86402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1104 12:08:19.820358   86402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1104 12:08:19.826359   86402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1104 12:08:19.832467   86402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
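	(The cert steps above follow the standard OpenSSL trust-install pattern: copy the PEM into /usr/share/ca-certificates, symlink it into /etc/ssl/certs under its subject-hash name, then verify it is not about to expire with -checkend. A minimal sketch of that pattern, reusing the minikubeCA.pem path from the log; this sketch is illustrative and not part of the test output:
	    PEM=/usr/share/ca-certificates/minikubeCA.pem
	    HASH=$(openssl x509 -hash -noout -in "$PEM")    # e.g. b5213941, as logged above
	    sudo ln -fs "$PEM" "/etc/ssl/certs/${HASH}.0"   # hash-named link that OpenSSL looks up
	    openssl x509 -noout -in "$PEM" -checkend 86400  # exits non-zero if the cert expires within 24h
	)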
	I1104 12:08:19.838902   86402 kubeadm.go:392] StartCluster: {Name:old-k8s-version-589257 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-589257 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.180 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1104 12:08:19.839018   86402 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1104 12:08:19.839075   86402 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1104 12:08:19.880407   86402 cri.go:89] found id: ""
	I1104 12:08:19.880486   86402 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1104 12:08:19.891135   86402 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1104 12:08:19.891156   86402 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1104 12:08:19.891219   86402 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1104 12:08:19.901437   86402 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1104 12:08:19.902325   86402 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-589257" does not appear in /home/jenkins/minikube-integration/19906-19898/kubeconfig
	I1104 12:08:19.902941   86402 kubeconfig.go:62] /home/jenkins/minikube-integration/19906-19898/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-589257" cluster setting kubeconfig missing "old-k8s-version-589257" context setting]
	I1104 12:08:19.903879   86402 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19906-19898/kubeconfig: {Name:mk164657c178e6abe086acb5c1cadb969cb2b0cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 12:08:19.937877   86402 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1104 12:08:19.948669   86402 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.180
	I1104 12:08:19.948701   86402 kubeadm.go:1160] stopping kube-system containers ...
	I1104 12:08:19.948711   86402 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1104 12:08:19.948752   86402 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1104 12:08:19.988249   86402 cri.go:89] found id: ""
	I1104 12:08:19.988344   86402 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1104 12:08:20.006949   86402 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1104 12:08:20.020677   86402 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1104 12:08:20.020700   86402 kubeadm.go:157] found existing configuration files:
	
	I1104 12:08:20.020747   86402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1104 12:08:20.031509   86402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1104 12:08:20.031566   86402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1104 12:08:20.042229   86402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1104 12:08:20.054695   86402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1104 12:08:20.054810   86402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1104 12:08:20.067410   86402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1104 12:08:20.078639   86402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1104 12:08:20.078711   86402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1104 12:08:20.091357   86402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1104 12:08:20.100986   86402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1104 12:08:20.101071   86402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1104 12:08:20.110345   86402 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
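	(The grep/rm sequence above is minikube's stale-config sweep: any kubeconfig under /etc/kubernetes that does not reference the expected control-plane endpoint is deleted so that the following "kubeadm init phase kubeconfig" run regenerates it. A condensed sketch of the same sweep, assuming the endpoint shown in the log; illustrative only:
	    ENDPOINT="https://control-plane.minikube.internal:8443"
	    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	      sudo grep -q "$ENDPOINT" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
	    done
	)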
	I1104 12:08:20.119778   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1104 12:08:20.281637   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1104 12:08:21.006838   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1104 12:08:21.234671   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1104 12:08:21.335720   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1104 12:08:21.437522   86402 api_server.go:52] waiting for apiserver process to appear ...
	I1104 12:08:21.437615   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:21.929963   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:21.930522   85500 main.go:141] libmachine: (no-preload-908370) DBG | unable to find current IP address of domain no-preload-908370 in network mk-no-preload-908370
	I1104 12:08:21.930552   85500 main.go:141] libmachine: (no-preload-908370) DBG | I1104 12:08:21.930461   87562 retry.go:31] will retry after 2.171964123s: waiting for machine to come up
	I1104 12:08:24.103844   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:24.104303   85500 main.go:141] libmachine: (no-preload-908370) DBG | unable to find current IP address of domain no-preload-908370 in network mk-no-preload-908370
	I1104 12:08:24.104326   85500 main.go:141] libmachine: (no-preload-908370) DBG | I1104 12:08:24.104257   87562 retry.go:31] will retry after 2.838813818s: waiting for machine to come up
	I1104 12:08:22.056858   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:24.057127   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:23.351405   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:25.850834   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:21.938086   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:22.438198   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:22.938624   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:23.438021   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:23.938119   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:24.438470   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:24.937687   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:25.438045   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:25.937696   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:26.438585   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:26.944977   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:26.945367   85500 main.go:141] libmachine: (no-preload-908370) DBG | unable to find current IP address of domain no-preload-908370 in network mk-no-preload-908370
	I1104 12:08:26.945395   85500 main.go:141] libmachine: (no-preload-908370) DBG | I1104 12:08:26.945349   87562 retry.go:31] will retry after 2.799785534s: waiting for machine to come up
	I1104 12:08:29.746349   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:29.746747   85500 main.go:141] libmachine: (no-preload-908370) Found IP for machine: 192.168.61.91
	I1104 12:08:29.746774   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has current primary IP address 192.168.61.91 and MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:29.746779   85500 main.go:141] libmachine: (no-preload-908370) Reserving static IP address...
	I1104 12:08:29.747195   85500 main.go:141] libmachine: (no-preload-908370) DBG | found host DHCP lease matching {name: "no-preload-908370", mac: "52:54:00:f8:66:d5", ip: "192.168.61.91"} in network mk-no-preload-908370: {Iface:virbr3 ExpiryTime:2024-11-04 13:08:23 +0000 UTC Type:0 Mac:52:54:00:f8:66:d5 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:no-preload-908370 Clientid:01:52:54:00:f8:66:d5}
	I1104 12:08:29.747218   85500 main.go:141] libmachine: (no-preload-908370) Reserved static IP address: 192.168.61.91
	I1104 12:08:29.747234   85500 main.go:141] libmachine: (no-preload-908370) DBG | skip adding static IP to network mk-no-preload-908370 - found existing host DHCP lease matching {name: "no-preload-908370", mac: "52:54:00:f8:66:d5", ip: "192.168.61.91"}
	I1104 12:08:29.747248   85500 main.go:141] libmachine: (no-preload-908370) DBG | Getting to WaitForSSH function...
	I1104 12:08:29.747258   85500 main.go:141] libmachine: (no-preload-908370) Waiting for SSH to be available...
	I1104 12:08:29.749405   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:29.749694   85500 main.go:141] libmachine: (no-preload-908370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:66:d5", ip: ""} in network mk-no-preload-908370: {Iface:virbr3 ExpiryTime:2024-11-04 13:08:23 +0000 UTC Type:0 Mac:52:54:00:f8:66:d5 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:no-preload-908370 Clientid:01:52:54:00:f8:66:d5}
	I1104 12:08:29.749728   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined IP address 192.168.61.91 and MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:29.749887   85500 main.go:141] libmachine: (no-preload-908370) DBG | Using SSH client type: external
	I1104 12:08:29.749908   85500 main.go:141] libmachine: (no-preload-908370) DBG | Using SSH private key: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/no-preload-908370/id_rsa (-rw-------)
	I1104 12:08:29.749933   85500 main.go:141] libmachine: (no-preload-908370) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.91 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19906-19898/.minikube/machines/no-preload-908370/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1104 12:08:29.749951   85500 main.go:141] libmachine: (no-preload-908370) DBG | About to run SSH command:
	I1104 12:08:29.749966   85500 main.go:141] libmachine: (no-preload-908370) DBG | exit 0
	I1104 12:08:29.873121   85500 main.go:141] libmachine: (no-preload-908370) DBG | SSH cmd err, output: <nil>: 
	I1104 12:08:29.873472   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetConfigRaw
	I1104 12:08:29.874081   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetIP
	I1104 12:08:29.876737   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:29.877127   85500 main.go:141] libmachine: (no-preload-908370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:66:d5", ip: ""} in network mk-no-preload-908370: {Iface:virbr3 ExpiryTime:2024-11-04 13:08:23 +0000 UTC Type:0 Mac:52:54:00:f8:66:d5 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:no-preload-908370 Clientid:01:52:54:00:f8:66:d5}
	I1104 12:08:29.877155   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined IP address 192.168.61.91 and MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:29.877473   85500 profile.go:143] Saving config to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/no-preload-908370/config.json ...
	I1104 12:08:29.877717   85500 machine.go:93] provisionDockerMachine start ...
	I1104 12:08:29.877740   85500 main.go:141] libmachine: (no-preload-908370) Calling .DriverName
	I1104 12:08:29.877936   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHHostname
	I1104 12:08:29.880272   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:29.880522   85500 main.go:141] libmachine: (no-preload-908370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:66:d5", ip: ""} in network mk-no-preload-908370: {Iface:virbr3 ExpiryTime:2024-11-04 13:08:23 +0000 UTC Type:0 Mac:52:54:00:f8:66:d5 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:no-preload-908370 Clientid:01:52:54:00:f8:66:d5}
	I1104 12:08:29.880543   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined IP address 192.168.61.91 and MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:29.880718   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHPort
	I1104 12:08:29.880883   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHKeyPath
	I1104 12:08:29.881048   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHKeyPath
	I1104 12:08:29.881186   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHUsername
	I1104 12:08:29.881338   85500 main.go:141] libmachine: Using SSH client type: native
	I1104 12:08:29.881511   85500 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.91 22 <nil> <nil>}
	I1104 12:08:29.881524   85500 main.go:141] libmachine: About to run SSH command:
	hostname
	I1104 12:08:29.989431   85500 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1104 12:08:29.989460   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetMachineName
	I1104 12:08:29.989725   85500 buildroot.go:166] provisioning hostname "no-preload-908370"
	I1104 12:08:29.989757   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetMachineName
	I1104 12:08:29.989974   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHHostname
	I1104 12:08:29.992679   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:29.993028   85500 main.go:141] libmachine: (no-preload-908370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:66:d5", ip: ""} in network mk-no-preload-908370: {Iface:virbr3 ExpiryTime:2024-11-04 13:08:23 +0000 UTC Type:0 Mac:52:54:00:f8:66:d5 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:no-preload-908370 Clientid:01:52:54:00:f8:66:d5}
	I1104 12:08:29.993057   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined IP address 192.168.61.91 and MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:29.993222   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHPort
	I1104 12:08:29.993425   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHKeyPath
	I1104 12:08:29.993553   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHKeyPath
	I1104 12:08:29.993683   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHUsername
	I1104 12:08:29.993817   85500 main.go:141] libmachine: Using SSH client type: native
	I1104 12:08:29.994000   85500 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.91 22 <nil> <nil>}
	I1104 12:08:29.994016   85500 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-908370 && echo "no-preload-908370" | sudo tee /etc/hostname
	I1104 12:08:30.118321   85500 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-908370
	
	I1104 12:08:30.118361   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHHostname
	I1104 12:08:30.121095   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:30.121475   85500 main.go:141] libmachine: (no-preload-908370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:66:d5", ip: ""} in network mk-no-preload-908370: {Iface:virbr3 ExpiryTime:2024-11-04 13:08:23 +0000 UTC Type:0 Mac:52:54:00:f8:66:d5 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:no-preload-908370 Clientid:01:52:54:00:f8:66:d5}
	I1104 12:08:30.121509   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined IP address 192.168.61.91 and MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:30.121697   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHPort
	I1104 12:08:30.121866   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHKeyPath
	I1104 12:08:30.122040   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHKeyPath
	I1104 12:08:30.122176   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHUsername
	I1104 12:08:30.122343   85500 main.go:141] libmachine: Using SSH client type: native
	I1104 12:08:30.122525   85500 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.91 22 <nil> <nil>}
	I1104 12:08:30.122547   85500 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-908370' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-908370/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-908370' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1104 12:08:26.557368   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:29.056377   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:28.349510   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:30.350431   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:26.937831   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:27.438442   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:27.938240   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:28.438463   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:28.937958   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:29.437676   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:29.938298   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:30.438423   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:30.937953   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:31.438075   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:30.237340   85500 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1104 12:08:30.237370   85500 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19906-19898/.minikube CaCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19906-19898/.minikube}
	I1104 12:08:30.237413   85500 buildroot.go:174] setting up certificates
	I1104 12:08:30.237429   85500 provision.go:84] configureAuth start
	I1104 12:08:30.237446   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetMachineName
	I1104 12:08:30.237725   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetIP
	I1104 12:08:30.240026   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:30.240350   85500 main.go:141] libmachine: (no-preload-908370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:66:d5", ip: ""} in network mk-no-preload-908370: {Iface:virbr3 ExpiryTime:2024-11-04 13:08:23 +0000 UTC Type:0 Mac:52:54:00:f8:66:d5 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:no-preload-908370 Clientid:01:52:54:00:f8:66:d5}
	I1104 12:08:30.240380   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined IP address 192.168.61.91 and MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:30.240472   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHHostname
	I1104 12:08:30.242777   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:30.243101   85500 main.go:141] libmachine: (no-preload-908370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:66:d5", ip: ""} in network mk-no-preload-908370: {Iface:virbr3 ExpiryTime:2024-11-04 13:08:23 +0000 UTC Type:0 Mac:52:54:00:f8:66:d5 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:no-preload-908370 Clientid:01:52:54:00:f8:66:d5}
	I1104 12:08:30.243119   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined IP address 192.168.61.91 and MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:30.243302   85500 provision.go:143] copyHostCerts
	I1104 12:08:30.243358   85500 exec_runner.go:144] found /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem, removing ...
	I1104 12:08:30.243368   85500 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem
	I1104 12:08:30.243427   85500 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem (1078 bytes)
	I1104 12:08:30.243532   85500 exec_runner.go:144] found /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem, removing ...
	I1104 12:08:30.243542   85500 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem
	I1104 12:08:30.243565   85500 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem (1123 bytes)
	I1104 12:08:30.243635   85500 exec_runner.go:144] found /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem, removing ...
	I1104 12:08:30.243643   85500 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem
	I1104 12:08:30.243661   85500 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem (1679 bytes)
	I1104 12:08:30.243719   85500 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem org=jenkins.no-preload-908370 san=[127.0.0.1 192.168.61.91 localhost minikube no-preload-908370]
	I1104 12:08:30.515270   85500 provision.go:177] copyRemoteCerts
	I1104 12:08:30.515350   85500 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1104 12:08:30.515381   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHHostname
	I1104 12:08:30.518651   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:30.519188   85500 main.go:141] libmachine: (no-preload-908370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:66:d5", ip: ""} in network mk-no-preload-908370: {Iface:virbr3 ExpiryTime:2024-11-04 13:08:23 +0000 UTC Type:0 Mac:52:54:00:f8:66:d5 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:no-preload-908370 Clientid:01:52:54:00:f8:66:d5}
	I1104 12:08:30.519218   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined IP address 192.168.61.91 and MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:30.519420   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHPort
	I1104 12:08:30.519600   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHKeyPath
	I1104 12:08:30.519777   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHUsername
	I1104 12:08:30.519896   85500 sshutil.go:53] new ssh client: &{IP:192.168.61.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/no-preload-908370/id_rsa Username:docker}
	I1104 12:08:30.603170   85500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1104 12:08:30.626226   85500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1104 12:08:30.649353   85500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1104 12:08:30.684759   85500 provision.go:87] duration metric: took 447.313588ms to configureAuth
	I1104 12:08:30.684789   85500 buildroot.go:189] setting minikube options for container-runtime
	I1104 12:08:30.684962   85500 config.go:182] Loaded profile config "no-preload-908370": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1104 12:08:30.685029   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHHostname
	I1104 12:08:30.687429   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:30.687815   85500 main.go:141] libmachine: (no-preload-908370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:66:d5", ip: ""} in network mk-no-preload-908370: {Iface:virbr3 ExpiryTime:2024-11-04 13:08:23 +0000 UTC Type:0 Mac:52:54:00:f8:66:d5 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:no-preload-908370 Clientid:01:52:54:00:f8:66:d5}
	I1104 12:08:30.687840   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined IP address 192.168.61.91 and MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:30.688015   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHPort
	I1104 12:08:30.688192   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHKeyPath
	I1104 12:08:30.688325   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHKeyPath
	I1104 12:08:30.688471   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHUsername
	I1104 12:08:30.688640   85500 main.go:141] libmachine: Using SSH client type: native
	I1104 12:08:30.688830   85500 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.91 22 <nil> <nil>}
	I1104 12:08:30.688848   85500 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1104 12:08:30.919118   85500 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1104 12:08:30.919142   85500 machine.go:96] duration metric: took 1.041410402s to provisionDockerMachine
	I1104 12:08:30.919156   85500 start.go:293] postStartSetup for "no-preload-908370" (driver="kvm2")
	I1104 12:08:30.919169   85500 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1104 12:08:30.919200   85500 main.go:141] libmachine: (no-preload-908370) Calling .DriverName
	I1104 12:08:30.919513   85500 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1104 12:08:30.919538   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHHostname
	I1104 12:08:30.922075   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:30.922485   85500 main.go:141] libmachine: (no-preload-908370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:66:d5", ip: ""} in network mk-no-preload-908370: {Iface:virbr3 ExpiryTime:2024-11-04 13:08:23 +0000 UTC Type:0 Mac:52:54:00:f8:66:d5 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:no-preload-908370 Clientid:01:52:54:00:f8:66:d5}
	I1104 12:08:30.922510   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined IP address 192.168.61.91 and MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:30.922615   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHPort
	I1104 12:08:30.922823   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHKeyPath
	I1104 12:08:30.922991   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHUsername
	I1104 12:08:30.923107   85500 sshutil.go:53] new ssh client: &{IP:192.168.61.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/no-preload-908370/id_rsa Username:docker}
	I1104 12:08:31.007598   85500 ssh_runner.go:195] Run: cat /etc/os-release
	I1104 12:08:31.011558   85500 info.go:137] Remote host: Buildroot 2023.02.9
	I1104 12:08:31.011588   85500 filesync.go:126] Scanning /home/jenkins/minikube-integration/19906-19898/.minikube/addons for local assets ...
	I1104 12:08:31.011665   85500 filesync.go:126] Scanning /home/jenkins/minikube-integration/19906-19898/.minikube/files for local assets ...
	I1104 12:08:31.011766   85500 filesync.go:149] local asset: /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem -> 272182.pem in /etc/ssl/certs
	I1104 12:08:31.011859   85500 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1104 12:08:31.020788   85500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem --> /etc/ssl/certs/272182.pem (1708 bytes)
	I1104 12:08:31.044379   85500 start.go:296] duration metric: took 125.209775ms for postStartSetup
	I1104 12:08:31.044414   85500 fix.go:56] duration metric: took 19.154609071s for fixHost
	I1104 12:08:31.044442   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHHostname
	I1104 12:08:31.047152   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:31.047426   85500 main.go:141] libmachine: (no-preload-908370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:66:d5", ip: ""} in network mk-no-preload-908370: {Iface:virbr3 ExpiryTime:2024-11-04 13:08:23 +0000 UTC Type:0 Mac:52:54:00:f8:66:d5 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:no-preload-908370 Clientid:01:52:54:00:f8:66:d5}
	I1104 12:08:31.047461   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined IP address 192.168.61.91 and MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:31.047639   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHPort
	I1104 12:08:31.047829   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHKeyPath
	I1104 12:08:31.047976   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHKeyPath
	I1104 12:08:31.048138   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHUsername
	I1104 12:08:31.048296   85500 main.go:141] libmachine: Using SSH client type: native
	I1104 12:08:31.048464   85500 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.91 22 <nil> <nil>}
	I1104 12:08:31.048474   85500 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1104 12:08:31.157723   85500 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730722111.115015995
	
	I1104 12:08:31.157747   85500 fix.go:216] guest clock: 1730722111.115015995
	I1104 12:08:31.157758   85500 fix.go:229] Guest: 2024-11-04 12:08:31.115015995 +0000 UTC Remote: 2024-11-04 12:08:31.044427312 +0000 UTC m=+350.890212897 (delta=70.588683ms)
	I1104 12:08:31.157829   85500 fix.go:200] guest clock delta is within tolerance: 70.588683ms
	I1104 12:08:31.157841   85500 start.go:83] releasing machines lock for "no-preload-908370", held for 19.268070408s
	I1104 12:08:31.157875   85500 main.go:141] libmachine: (no-preload-908370) Calling .DriverName
	I1104 12:08:31.158131   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetIP
	I1104 12:08:31.160806   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:31.161159   85500 main.go:141] libmachine: (no-preload-908370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:66:d5", ip: ""} in network mk-no-preload-908370: {Iface:virbr3 ExpiryTime:2024-11-04 13:08:23 +0000 UTC Type:0 Mac:52:54:00:f8:66:d5 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:no-preload-908370 Clientid:01:52:54:00:f8:66:d5}
	I1104 12:08:31.161191   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined IP address 192.168.61.91 and MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:31.161371   85500 main.go:141] libmachine: (no-preload-908370) Calling .DriverName
	I1104 12:08:31.161907   85500 main.go:141] libmachine: (no-preload-908370) Calling .DriverName
	I1104 12:08:31.162092   85500 main.go:141] libmachine: (no-preload-908370) Calling .DriverName
	I1104 12:08:31.162174   85500 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1104 12:08:31.162217   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHHostname
	I1104 12:08:31.162444   85500 ssh_runner.go:195] Run: cat /version.json
	I1104 12:08:31.162470   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHHostname
	I1104 12:08:31.165069   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:31.165316   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:31.165505   85500 main.go:141] libmachine: (no-preload-908370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:66:d5", ip: ""} in network mk-no-preload-908370: {Iface:virbr3 ExpiryTime:2024-11-04 13:08:23 +0000 UTC Type:0 Mac:52:54:00:f8:66:d5 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:no-preload-908370 Clientid:01:52:54:00:f8:66:d5}
	I1104 12:08:31.165532   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined IP address 192.168.61.91 and MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:31.165656   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHPort
	I1104 12:08:31.165771   85500 main.go:141] libmachine: (no-preload-908370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:66:d5", ip: ""} in network mk-no-preload-908370: {Iface:virbr3 ExpiryTime:2024-11-04 13:08:23 +0000 UTC Type:0 Mac:52:54:00:f8:66:d5 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:no-preload-908370 Clientid:01:52:54:00:f8:66:d5}
	I1104 12:08:31.165795   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined IP address 192.168.61.91 and MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:31.165842   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHKeyPath
	I1104 12:08:31.166006   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHPort
	I1104 12:08:31.166024   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHUsername
	I1104 12:08:31.166186   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHKeyPath
	I1104 12:08:31.166183   85500 sshutil.go:53] new ssh client: &{IP:192.168.61.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/no-preload-908370/id_rsa Username:docker}
	I1104 12:08:31.166327   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHUsername
	I1104 12:08:31.166449   85500 sshutil.go:53] new ssh client: &{IP:192.168.61.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/no-preload-908370/id_rsa Username:docker}
	I1104 12:08:31.267746   85500 ssh_runner.go:195] Run: systemctl --version
	I1104 12:08:31.273307   85500 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1104 12:08:31.410198   85500 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1104 12:08:31.416652   85500 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1104 12:08:31.416726   85500 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1104 12:08:31.432260   85500 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1104 12:08:31.432288   85500 start.go:495] detecting cgroup driver to use...
	I1104 12:08:31.432345   85500 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1104 12:08:31.453134   85500 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1104 12:08:31.467457   85500 docker.go:217] disabling cri-docker service (if available) ...
	I1104 12:08:31.467516   85500 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1104 12:08:31.481392   85500 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1104 12:08:31.495740   85500 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1104 12:08:31.617549   85500 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1104 12:08:31.802455   85500 docker.go:233] disabling docker service ...
	I1104 12:08:31.802511   85500 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1104 12:08:31.815534   85500 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1104 12:08:31.827495   85500 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1104 12:08:31.938344   85500 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1104 12:08:32.042827   85500 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1104 12:08:32.056126   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1104 12:08:32.074274   85500 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1104 12:08:32.074337   85500 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:08:32.084061   85500 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1104 12:08:32.084138   85500 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:08:32.093533   85500 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:08:32.104351   85500 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:08:32.113753   85500 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1104 12:08:32.123391   85500 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:08:32.133089   85500 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:08:32.149073   85500 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:08:32.159888   85500 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1104 12:08:32.169208   85500 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1104 12:08:32.169279   85500 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1104 12:08:32.181319   85500 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1104 12:08:32.192472   85500 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1104 12:08:32.300710   85500 ssh_runner.go:195] Run: sudo systemctl restart crio
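	(The sed commands above rewrite the cri-o drop-in before the runtime is restarted: the pause image is pinned to registry.k8s.io/pause:3.10 and the cgroup driver is forced to cgroupfs. A minimal sketch of that reconfiguration, using the drop-in path from the log; illustrative, not part of the test output:
	    CONF=/etc/crio/crio.conf.d/02-crio.conf
	    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' "$CONF"
	    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
	    sudo systemctl daemon-reload && sudo systemctl restart crio
	)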
	I1104 12:08:32.386906   85500 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1104 12:08:32.386980   85500 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1104 12:08:32.391498   85500 start.go:563] Will wait 60s for crictl version
	I1104 12:08:32.391554   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:08:32.395471   85500 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1104 12:08:32.439094   85500 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1104 12:08:32.439168   85500 ssh_runner.go:195] Run: crio --version
	I1104 12:08:32.466609   85500 ssh_runner.go:195] Run: crio --version
	I1104 12:08:32.499305   85500 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1104 12:08:32.500825   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetIP
	I1104 12:08:32.503461   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:32.503827   85500 main.go:141] libmachine: (no-preload-908370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:66:d5", ip: ""} in network mk-no-preload-908370: {Iface:virbr3 ExpiryTime:2024-11-04 13:08:23 +0000 UTC Type:0 Mac:52:54:00:f8:66:d5 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:no-preload-908370 Clientid:01:52:54:00:f8:66:d5}
	I1104 12:08:32.503857   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined IP address 192.168.61.91 and MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:32.504039   85500 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1104 12:08:32.508082   85500 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1104 12:08:32.520202   85500 kubeadm.go:883] updating cluster {Name:no-preload-908370 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-908370 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.91 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1104 12:08:32.520359   85500 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1104 12:08:32.520402   85500 ssh_runner.go:195] Run: sudo crictl images --output json
	I1104 12:08:32.553752   85500 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1104 12:08:32.553781   85500 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.2 registry.k8s.io/kube-controller-manager:v1.31.2 registry.k8s.io/kube-scheduler:v1.31.2 registry.k8s.io/kube-proxy:v1.31.2 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1104 12:08:32.553844   85500 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1104 12:08:32.553844   85500 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1104 12:08:32.553868   85500 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.2
	I1104 12:08:32.553853   85500 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.2
	I1104 12:08:32.553886   85500 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1104 12:08:32.553925   85500 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1104 12:08:32.553969   85500 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1104 12:08:32.553978   85500 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.2
	I1104 12:08:32.555506   85500 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1104 12:08:32.555518   85500 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.2
	I1104 12:08:32.555510   85500 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1104 12:08:32.555513   85500 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.2
	I1104 12:08:32.555591   85500 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1104 12:08:32.555601   85500 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.2
	I1104 12:08:32.555514   85500 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I1104 12:08:32.555658   85500 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
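	Each daemon lookup failure above is expected on a host that does not keep these images in its local Docker daemon; minikube then falls back to inspecting the remote runtime (the podman image inspect runs that follow) and flags any image that is missing, or present under a different digest, as needing transfer. A hedged sketch of that check, with a direct exec call standing in for the ssh_runner used in the log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// remoteImageID asks the remote container runtime (podman here) for the ID of
// an image; an empty result means the image is absent and must be transferred
// from the local cache. Illustrative only.
func remoteImageID(image string) string {
	out, err := exec.Command("sudo", "podman", "image", "inspect",
		"--format", "{{.Id}}", image).Output()
	if err != nil {
		return "" // inspect failed: treat as "needs transfer"
	}
	return strings.TrimSpace(string(out))
}

func main() {
	for _, img := range []string{
		"registry.k8s.io/kube-apiserver:v1.31.2",
		"registry.k8s.io/etcd:3.5.15-0",
	} {
		if remoteImageID(img) == "" {
			fmt.Printf("%q needs transfer\n", img)
		}
	}
}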
	I1104 12:08:32.706982   85500 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I1104 12:08:32.707334   85500 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.2
	I1104 12:08:32.712904   85500 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.2
	I1104 12:08:32.721917   85500 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I1104 12:08:32.727829   85500 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.2
	I1104 12:08:32.741130   85500 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I1104 12:08:32.743716   85500 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.2
	I1104 12:08:32.796406   85500 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I1104 12:08:32.796448   85500 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I1104 12:08:32.796502   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:08:32.814658   85500 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.2" does not exist at hash "9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173" in container runtime
	I1104 12:08:32.814697   85500 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.2
	I1104 12:08:32.814735   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:08:32.828308   85500 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.2" does not exist at hash "847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856" in container runtime
	I1104 12:08:32.828362   85500 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.2
	I1104 12:08:32.828416   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:08:32.882090   85500 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I1104 12:08:32.882140   85500 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I1104 12:08:32.882205   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:08:32.886473   85500 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.2" does not exist at hash "0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503" in container runtime
	I1104 12:08:32.886518   85500 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1104 12:08:32.886567   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:08:32.956331   85500 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.2" needs transfer: "registry.k8s.io/kube-proxy:v1.31.2" does not exist at hash "505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38" in container runtime
	I1104 12:08:32.956394   85500 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.2
	I1104 12:08:32.956414   85500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1104 12:08:32.956462   85500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1104 12:08:32.956427   85500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1104 12:08:32.956521   85500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1104 12:08:32.956425   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:08:32.956506   85500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1104 12:08:33.061683   85500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1104 12:08:33.061723   85500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1104 12:08:33.061752   85500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1104 12:08:33.061790   85500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1104 12:08:33.061836   85500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1104 12:08:33.061893   85500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1104 12:08:33.168519   85500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1104 12:08:33.168596   85500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1104 12:08:33.187540   85500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1104 12:08:33.188933   85500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1104 12:08:33.189015   85500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1104 12:08:33.199281   85500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1104 12:08:33.285086   85500 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2
	I1104 12:08:33.285145   85500 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2
	I1104 12:08:33.285245   85500 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1104 12:08:33.285247   85500 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1104 12:08:33.307647   85500 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I1104 12:08:33.307769   85500 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I1104 12:08:33.307784   85500 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2
	I1104 12:08:33.307818   85500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1104 12:08:33.307869   85500 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1104 12:08:33.312697   85500 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I1104 12:08:33.312808   85500 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I1104 12:08:33.314341   85500 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.2 (exists)
	I1104 12:08:33.314358   85500 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1104 12:08:33.314396   85500 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1104 12:08:33.314535   85500 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.2 (exists)
	I1104 12:08:33.319449   85500 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.2 (exists)
	I1104 12:08:33.319604   85500 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I1104 12:08:33.356390   85500 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I1104 12:08:33.356478   85500 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2
	I1104 12:08:33.356569   85500 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.2
	I1104 12:08:33.512915   85500 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1104 12:08:31.057314   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:33.059599   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:32.350656   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:34.352338   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:31.938577   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:32.438561   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:32.938188   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:33.437856   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:33.938433   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:34.438381   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:34.938164   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:35.438120   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:35.937802   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:36.438365   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
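	The 86402 lines above (and the similar runs later in this log) poll roughly every 500ms for a kube-apiserver process started by minikube, using the exact pgrep pattern shown. A minimal sketch of such a wait loop; the command and interval come from the log, everything else is illustrative:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServerProcess polls pgrep until a kube-apiserver process started
// by minikube appears, or the timeout elapses.
func waitForAPIServerProcess(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if err := exec.Command("sudo", "pgrep", "-xnf",
			"kube-apiserver.*minikube.*").Run(); err == nil {
			return nil // process found
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("kube-apiserver process did not appear within %s", timeout)
}

func main() {
	if err := waitForAPIServerProcess(2 * time.Minute); err != nil {
		fmt.Println(err)
	}
}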
	I1104 12:08:35.736963   85500 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2: (2.42254522s)
	I1104 12:08:35.736994   85500 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2 from cache
	I1104 12:08:35.737014   85500 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1104 12:08:35.737027   85500 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.2: (2.380435224s)
	I1104 12:08:35.737058   85500 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.2 (exists)
	I1104 12:08:35.737063   85500 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1104 12:08:35.737104   85500 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.224165247s)
	I1104 12:08:35.737156   85500 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1104 12:08:35.737191   85500 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1104 12:08:35.737267   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:08:37.693026   85500 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2: (1.955928101s)
	I1104 12:08:37.693065   85500 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2 from cache
	I1104 12:08:37.693086   85500 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1104 12:08:37.693047   85500 ssh_runner.go:235] Completed: which crictl: (1.955763498s)
	I1104 12:08:37.693168   85500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1104 12:08:37.693131   85500 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1104 12:08:39.156860   85500 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2: (1.463570619s)
	I1104 12:08:39.156894   85500 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2 from cache
	I1104 12:08:39.156922   85500 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I1104 12:08:39.156930   85500 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.463741565s)
	I1104 12:08:39.156980   85500 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I1104 12:08:39.156998   85500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1104 12:08:35.625930   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:38.057567   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:36.850619   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:38.851157   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:40.852272   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:36.938295   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:37.437646   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:37.937807   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:38.438623   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:38.938662   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:39.438288   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:39.938048   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:40.438404   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:40.938494   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:41.437875   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:42.701724   85500 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.544718982s)
	I1104 12:08:42.701751   85500 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I1104 12:08:42.701771   85500 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I1104 12:08:42.701810   85500 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I1104 12:08:42.701826   85500 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.544784275s)
	I1104 12:08:42.701912   85500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1104 12:08:44.666599   85500 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.964646885s)
	I1104 12:08:44.666653   85500 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1104 12:08:44.666723   85500 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (1.964896366s)
	I1104 12:08:44.666744   85500 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I1104 12:08:44.666748   85500 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1104 12:08:44.666765   85500 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.2
	I1104 12:08:44.666807   85500 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2
	I1104 12:08:44.671475   85500 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1104 12:08:40.556827   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:42.557662   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:45.058481   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:43.351505   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:45.851360   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:41.938001   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:42.438702   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:42.938239   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:43.438469   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:43.938465   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:44.437744   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:44.938478   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:45.437757   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:45.938035   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:46.438173   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:46.627407   85500 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2: (1.960571593s)
	I1104 12:08:46.627437   85500 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2 from cache
	I1104 12:08:46.627473   85500 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1104 12:08:46.627537   85500 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1104 12:08:47.273537   85500 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1104 12:08:47.273578   85500 cache_images.go:123] Successfully loaded all cached images
	I1104 12:08:47.273583   85500 cache_images.go:92] duration metric: took 14.719789832s to LoadCachedImages
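	The block above is the per-image cache load: stat the tarball under /var/lib/minikube/images, skip the copy when it already exists, then podman load it into the CRI-O image store, one image at a time, until all eight are transferred (14.7s here). A simplified sketch of the load step under those assumptions; the helper is not minikube's actual API:

package main

import (
	"fmt"
	"os/exec"
	"path/filepath"
)

// loadCachedImage mirrors the per-image sequence visible in the log: the
// tarball is assumed to already be on the node (the "stat -c %s %y" check and
// scp are elided), and podman loads it into the runtime's image store.
func loadCachedImage(name string) error {
	dst := filepath.Join("/var/lib/minikube/images", name)
	if err := exec.Command("sudo", "podman", "load", "-i", dst).Run(); err != nil {
		return fmt.Errorf("podman load %s: %w", dst, err)
	}
	return nil
}

func main() {
	if err := loadCachedImage("kube-proxy_v1.31.2"); err != nil {
		fmt.Println(err)
	}
}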
	I1104 12:08:47.273594   85500 kubeadm.go:934] updating node { 192.168.61.91 8443 v1.31.2 crio true true} ...
	I1104 12:08:47.273686   85500 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-908370 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.91
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:no-preload-908370 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1104 12:08:47.273747   85500 ssh_runner.go:195] Run: crio config
	I1104 12:08:47.319888   85500 cni.go:84] Creating CNI manager for ""
	I1104 12:08:47.319916   85500 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1104 12:08:47.319929   85500 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1104 12:08:47.319952   85500 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.91 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-908370 NodeName:no-preload-908370 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.91"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.91 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1104 12:08:47.320098   85500 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.91
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-908370"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.91"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.91"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1104 12:08:47.320185   85500 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1104 12:08:47.330284   85500 binaries.go:44] Found k8s binaries, skipping transfer
	I1104 12:08:47.330352   85500 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1104 12:08:47.340015   85500 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I1104 12:08:47.356601   85500 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1104 12:08:47.371327   85500 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2294 bytes)
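	The kubeadm options logged above are rendered into the YAML document shown and written to /var/tmp/minikube/kubeadm.yaml.new (2294 bytes) alongside the kubelet unit drop-ins. A small text/template sketch of how a fragment of that InitConfiguration could be produced; the template and struct are illustrative, not minikube's actual generator:

package main

import (
	"os"
	"text/template"
)

// initConfig carries just the fields needed for the fragment below.
type initConfig struct {
	AdvertiseAddress string
	BindPort         int
	NodeName         string
	NodeIP           string
}

const initTmpl = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    - name: "node-ip"
      value: "{{.NodeIP}}"
  taints: []
`

func main() {
	t := template.Must(template.New("init").Parse(initTmpl))
	_ = t.Execute(os.Stdout, initConfig{
		AdvertiseAddress: "192.168.61.91",
		BindPort:         8443,
		NodeName:         "no-preload-908370",
		NodeIP:           "192.168.61.91",
	})
}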
	I1104 12:08:47.387251   85500 ssh_runner.go:195] Run: grep 192.168.61.91	control-plane.minikube.internal$ /etc/hosts
	I1104 12:08:47.391041   85500 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.91	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1104 12:08:47.402283   85500 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1104 12:08:47.527723   85500 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1104 12:08:47.544017   85500 certs.go:68] Setting up /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/no-preload-908370 for IP: 192.168.61.91
	I1104 12:08:47.544041   85500 certs.go:194] generating shared ca certs ...
	I1104 12:08:47.544060   85500 certs.go:226] acquiring lock for ca certs: {Name:mk4fb0469da6697654d0290fd25edddd463f47c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 12:08:47.544244   85500 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.key
	I1104 12:08:47.544309   85500 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.key
	I1104 12:08:47.544322   85500 certs.go:256] generating profile certs ...
	I1104 12:08:47.544412   85500 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/no-preload-908370/client.key
	I1104 12:08:47.544485   85500 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/no-preload-908370/apiserver.key.890cb7f7
	I1104 12:08:47.544522   85500 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/no-preload-908370/proxy-client.key
	I1104 12:08:47.544626   85500 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218.pem (1338 bytes)
	W1104 12:08:47.544654   85500 certs.go:480] ignoring /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218_empty.pem, impossibly tiny 0 bytes
	I1104 12:08:47.544663   85500 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem (1675 bytes)
	I1104 12:08:47.544685   85500 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem (1078 bytes)
	I1104 12:08:47.544706   85500 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem (1123 bytes)
	I1104 12:08:47.544726   85500 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem (1679 bytes)
	I1104 12:08:47.544774   85500 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem (1708 bytes)
	I1104 12:08:47.545439   85500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1104 12:08:47.588488   85500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1104 12:08:47.631341   85500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1104 12:08:47.666571   85500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1104 12:08:47.698703   85500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/no-preload-908370/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1104 12:08:47.725285   85500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/no-preload-908370/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1104 12:08:47.748890   85500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/no-preload-908370/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1104 12:08:47.775589   85500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/no-preload-908370/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1104 12:08:47.799507   85500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1104 12:08:47.823383   85500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218.pem --> /usr/share/ca-certificates/27218.pem (1338 bytes)
	I1104 12:08:47.847515   85500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem --> /usr/share/ca-certificates/272182.pem (1708 bytes)
	I1104 12:08:47.869937   85500 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1104 12:08:47.886413   85500 ssh_runner.go:195] Run: openssl version
	I1104 12:08:47.892041   85500 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/272182.pem && ln -fs /usr/share/ca-certificates/272182.pem /etc/ssl/certs/272182.pem"
	I1104 12:08:47.901942   85500 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/272182.pem
	I1104 12:08:47.906128   85500 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  4 10:49 /usr/share/ca-certificates/272182.pem
	I1104 12:08:47.906182   85500 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/272182.pem
	I1104 12:08:47.911506   85500 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/272182.pem /etc/ssl/certs/3ec20f2e.0"
	I1104 12:08:47.921614   85500 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1104 12:08:47.932358   85500 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1104 12:08:47.936742   85500 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  4 10:38 /usr/share/ca-certificates/minikubeCA.pem
	I1104 12:08:47.936801   85500 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1104 12:08:47.942544   85500 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1104 12:08:47.953063   85500 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/27218.pem && ln -fs /usr/share/ca-certificates/27218.pem /etc/ssl/certs/27218.pem"
	I1104 12:08:47.963293   85500 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/27218.pem
	I1104 12:08:47.967487   85500 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  4 10:49 /usr/share/ca-certificates/27218.pem
	I1104 12:08:47.967547   85500 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/27218.pem
	I1104 12:08:47.972898   85500 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/27218.pem /etc/ssl/certs/51391683.0"
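	The openssl -hash / ln -fs pairs above install the minikube CA and the test certificates into the system trust store under their OpenSSL subject-hash names (b5213941.0, 3ec20f2e.0, 51391683.0). A hedged sketch of that pattern:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// ensureCertSymlink reproduces the pattern in the log: compute the OpenSSL
// subject hash of a CA certificate and link it into /etc/ssl/certs under
// "<hash>.0" so the system trust store picks it up. Sketch only.
func ensureCertSymlink(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	cmd := fmt.Sprintf("test -L %s || ln -fs %s %s", link, certPath, link)
	return exec.Command("sudo", "/bin/bash", "-c", cmd).Run()
}

func main() {
	if err := ensureCertSymlink("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println(err)
	}
}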
	I1104 12:08:47.983089   85500 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1104 12:08:47.987532   85500 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1104 12:08:47.993296   85500 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1104 12:08:47.999021   85500 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1104 12:08:48.004741   85500 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1104 12:08:48.010227   85500 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1104 12:08:48.015795   85500 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
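	The -checkend 86400 runs verify that every control-plane certificate is still valid for at least 24 hours before the restart path reuses it. A small sketch of the same check over a list of certificates:

package main

import (
	"fmt"
	"os/exec"
)

// certsValidForADay runs "openssl x509 -checkend 86400" over the certificates
// checked in the log; a non-zero exit means the certificate expires within
// 24h (or cannot be read) and would need regeneration.
func certsValidForADay(paths []string) error {
	for _, p := range paths {
		if err := exec.Command("openssl", "x509", "-noout", "-in", p,
			"-checkend", "86400").Run(); err != nil {
			return fmt.Errorf("%s expires within 24h or is unreadable: %w", p, err)
		}
	}
	return nil
}

func main() {
	err := certsValidForADay([]string{
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
	})
	fmt.Println(err)
}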
	I1104 12:08:48.021356   85500 kubeadm.go:392] StartCluster: {Name:no-preload-908370 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-908370 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.91 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1104 12:08:48.021431   85500 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1104 12:08:48.021471   85500 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1104 12:08:48.057729   85500 cri.go:89] found id: ""
	I1104 12:08:48.057805   85500 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1104 12:08:48.067591   85500 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1104 12:08:48.067610   85500 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1104 12:08:48.067663   85500 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1104 12:08:48.076604   85500 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1104 12:08:48.077987   85500 kubeconfig.go:125] found "no-preload-908370" server: "https://192.168.61.91:8443"
	I1104 12:08:48.080042   85500 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1104 12:08:48.089796   85500 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.91
	I1104 12:08:48.089826   85500 kubeadm.go:1160] stopping kube-system containers ...
	I1104 12:08:48.089838   85500 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1104 12:08:48.089886   85500 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1104 12:08:48.126920   85500 cri.go:89] found id: ""
	I1104 12:08:48.126998   85500 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1104 12:08:48.143409   85500 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1104 12:08:48.152783   85500 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1104 12:08:48.152809   85500 kubeadm.go:157] found existing configuration files:
	
	I1104 12:08:48.152858   85500 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1104 12:08:48.161458   85500 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1104 12:08:48.161542   85500 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1104 12:08:48.170361   85500 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1104 12:08:48.179217   85500 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1104 12:08:48.179272   85500 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1104 12:08:48.187834   85500 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1104 12:08:48.196025   85500 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1104 12:08:48.196079   85500 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1104 12:08:48.204809   85500 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1104 12:08:48.213280   85500 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1104 12:08:48.213338   85500 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
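	Because none of the kubeconfigs under /etc/kubernetes exist yet (the ls and grep checks above exit with status 2), each one is removed so the kubeadm phases below can regenerate it. A sketch of that keep-or-delete decision; the helper is illustrative:

package main

import (
	"fmt"
	"os/exec"
)

// removeStaleKubeconfig mirrors the per-file check in the log: keep a
// kubeconfig only if it already points at the expected control-plane
// endpoint, otherwise delete it so kubeadm regenerates it.
func removeStaleKubeconfig(endpoint, path string) error {
	if err := exec.Command("sudo", "grep", endpoint, path).Run(); err == nil {
		return nil // already points at the right endpoint
	}
	return exec.Command("sudo", "rm", "-f", path).Run()
}

func main() {
	for _, f := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		if err := removeStaleKubeconfig("https://control-plane.minikube.internal:8443", f); err != nil {
			fmt.Println(err)
		}
	}
}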
	I1104 12:08:48.222672   85500 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1104 12:08:48.232374   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1104 12:08:48.328999   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1104 12:08:49.920988   85500 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.591954434s)
	I1104 12:08:49.921028   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1104 12:08:50.121679   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1104 12:08:50.181412   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
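	The restart path then replays individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the generated config rather than running a full kubeadm init. A sketch of that sequence under the same PATH override seen in the log:

package main

import (
	"fmt"
	"os/exec"
)

// runInitPhases replays the kubeadm init phases seen in the log against the
// generated config, using the pinned binaries staged on the node.
// Illustrative only.
func runInitPhases(version, config string) error {
	phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
	for _, phase := range phases {
		cmd := fmt.Sprintf(
			`sudo env PATH="/var/lib/minikube/binaries/%s:$PATH" kubeadm init phase %s --config %s`,
			version, phase, config)
		if err := exec.Command("/bin/bash", "-c", cmd).Run(); err != nil {
			return fmt.Errorf("phase %q: %w", phase, err)
		}
	}
	return nil
}

func main() {
	fmt.Println(runInitPhases("v1.31.2", "/var/tmp/minikube/kubeadm.yaml"))
}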
	I1104 12:08:47.558137   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:49.559576   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:48.349974   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:50.350855   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:46.938016   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:47.438229   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:47.938447   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:48.437950   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:48.938450   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:49.437785   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:49.938444   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:50.438413   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:50.938514   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:51.438658   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:50.253614   85500 api_server.go:52] waiting for apiserver process to appear ...
	I1104 12:08:50.253693   85500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:50.754467   85500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:51.254553   85500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:51.271229   85500 api_server.go:72] duration metric: took 1.017613016s to wait for apiserver process to appear ...
	I1104 12:08:51.271255   85500 api_server.go:88] waiting for apiserver healthz status ...
	I1104 12:08:51.271278   85500 api_server.go:253] Checking apiserver healthz at https://192.168.61.91:8443/healthz ...
	I1104 12:08:51.271794   85500 api_server.go:269] stopped: https://192.168.61.91:8443/healthz: Get "https://192.168.61.91:8443/healthz": dial tcp 192.168.61.91:8443: connect: connection refused
	I1104 12:08:51.771551   85500 api_server.go:253] Checking apiserver healthz at https://192.168.61.91:8443/healthz ...
	I1104 12:08:54.499268   85500 api_server.go:279] https://192.168.61.91:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1104 12:08:54.499296   85500 api_server.go:103] status: https://192.168.61.91:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1104 12:08:54.499310   85500 api_server.go:253] Checking apiserver healthz at https://192.168.61.91:8443/healthz ...
	I1104 12:08:54.617672   85500 api_server.go:279] https://192.168.61.91:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1104 12:08:54.617699   85500 api_server.go:103] status: https://192.168.61.91:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1104 12:08:54.771942   85500 api_server.go:253] Checking apiserver healthz at https://192.168.61.91:8443/healthz ...
	I1104 12:08:54.776588   85500 api_server.go:279] https://192.168.61.91:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1104 12:08:54.776615   85500 api_server.go:103] status: https://192.168.61.91:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1104 12:08:52.056678   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:54.057081   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:55.272332   85500 api_server.go:253] Checking apiserver healthz at https://192.168.61.91:8443/healthz ...
	I1104 12:08:55.276594   85500 api_server.go:279] https://192.168.61.91:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1104 12:08:55.276621   85500 api_server.go:103] status: https://192.168.61.91:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1104 12:08:55.771423   85500 api_server.go:253] Checking apiserver healthz at https://192.168.61.91:8443/healthz ...
	I1104 12:08:55.776881   85500 api_server.go:279] https://192.168.61.91:8443/healthz returned 200:
	ok
	I1104 12:08:55.783842   85500 api_server.go:141] control plane version: v1.31.2
	I1104 12:08:55.783869   85500 api_server.go:131] duration metric: took 4.512606898s to wait for apiserver health ...
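The 500-then-200 sequence above is minikube polling the apiserver's /healthz endpoint until the remaining post-start hooks (rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes) report ok. As a rough, self-contained sketch of that kind of wait loop (not minikube's actual api_server.go; the endpoint is taken from the log, the timeout and poll interval are assumptions):

// waitForHealthz polls an apiserver /healthz URL until it returns 200 OK or
// the timeout elapses. Illustrative sketch only; minikube's real check also
// handles client certificates instead of skipping TLS verification.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver in this run serves a cluster-local cert, so the sketch
		// skips verification; a real client would trust the cluster CA instead.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // corresponds to "healthz returned 200: ok" in the log
			}
		}
		time.Sleep(500 * time.Millisecond) // roughly the cadence visible above
	}
	return fmt.Errorf("apiserver %s not healthy after %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.61.91:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}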
	I1104 12:08:55.783877   85500 cni.go:84] Creating CNI manager for ""
	I1104 12:08:55.783883   85500 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1104 12:08:55.785665   85500 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1104 12:08:52.351019   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:54.850354   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:51.938323   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:52.438464   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:52.937754   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:53.438442   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:53.938586   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:54.438288   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:54.938444   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:55.438391   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:55.938546   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:56.438433   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:55.787083   85500 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1104 12:08:55.801764   85500 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1104 12:08:55.828371   85500 system_pods.go:43] waiting for kube-system pods to appear ...
	I1104 12:08:55.847602   85500 system_pods.go:59] 8 kube-system pods found
	I1104 12:08:55.847653   85500 system_pods.go:61] "coredns-7c65d6cfc9-vv4kq" [f2518f86-9653-4e98-9193-9d2a76838117] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1104 12:08:55.847666   85500 system_pods.go:61] "etcd-no-preload-908370" [cc23ebc2-c49f-403c-8128-98bb08459592] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1104 12:08:55.847679   85500 system_pods.go:61] "kube-apiserver-no-preload-908370" [37532b3e-f683-4420-a5e4-280744f2bdf9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1104 12:08:55.847695   85500 system_pods.go:61] "kube-controller-manager-no-preload-908370" [81d30255-758e-4661-bec2-c6aa6773923a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1104 12:08:55.847707   85500 system_pods.go:61] "kube-proxy-w9hbz" [9d494697-ff2b-4600-9c11-b704de9be2a3] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1104 12:08:55.847724   85500 system_pods.go:61] "kube-scheduler-no-preload-908370" [9b0ff34e-1795-4f7c-b511-822a02c4af7b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1104 12:08:55.847733   85500 system_pods.go:61] "metrics-server-6867b74b74-2lxlg" [bf328856-ad19-47b3-a40d-282cd4fdec4b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1104 12:08:55.847743   85500 system_pods.go:61] "storage-provisioner" [d11c9416-6236-4c81-9626-d5e040acea8a] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1104 12:08:55.847753   85500 system_pods.go:74] duration metric: took 19.357387ms to wait for pod list to return data ...
	I1104 12:08:55.847762   85500 node_conditions.go:102] verifying NodePressure condition ...
	I1104 12:08:55.856783   85500 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1104 12:08:55.856820   85500 node_conditions.go:123] node cpu capacity is 2
	I1104 12:08:55.856834   85500 node_conditions.go:105] duration metric: took 9.065755ms to run NodePressure ...
	I1104 12:08:55.856856   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1104 12:08:56.143012   85500 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1104 12:08:56.148006   85500 kubeadm.go:739] kubelet initialised
	I1104 12:08:56.148026   85500 kubeadm.go:740] duration metric: took 4.987292ms waiting for restarted kubelet to initialise ...
	I1104 12:08:56.148034   85500 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1104 12:08:56.152359   85500 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-vv4kq" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:56.156700   85500 pod_ready.go:98] node "no-preload-908370" hosting pod "coredns-7c65d6cfc9-vv4kq" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-908370" has status "Ready":"False"
	I1104 12:08:56.156725   85500 pod_ready.go:82] duration metric: took 4.341093ms for pod "coredns-7c65d6cfc9-vv4kq" in "kube-system" namespace to be "Ready" ...
	E1104 12:08:56.156734   85500 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-908370" hosting pod "coredns-7c65d6cfc9-vv4kq" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-908370" has status "Ready":"False"
	I1104 12:08:56.156741   85500 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-908370" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:56.161402   85500 pod_ready.go:98] node "no-preload-908370" hosting pod "etcd-no-preload-908370" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-908370" has status "Ready":"False"
	I1104 12:08:56.161431   85500 pod_ready.go:82] duration metric: took 4.681838ms for pod "etcd-no-preload-908370" in "kube-system" namespace to be "Ready" ...
	E1104 12:08:56.161440   85500 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-908370" hosting pod "etcd-no-preload-908370" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-908370" has status "Ready":"False"
	I1104 12:08:56.161447   85500 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-908370" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:56.165738   85500 pod_ready.go:98] node "no-preload-908370" hosting pod "kube-apiserver-no-preload-908370" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-908370" has status "Ready":"False"
	I1104 12:08:56.165756   85500 pod_ready.go:82] duration metric: took 4.301197ms for pod "kube-apiserver-no-preload-908370" in "kube-system" namespace to be "Ready" ...
	E1104 12:08:56.165764   85500 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-908370" hosting pod "kube-apiserver-no-preload-908370" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-908370" has status "Ready":"False"
	I1104 12:08:56.165770   85500 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-908370" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:56.232568   85500 pod_ready.go:98] node "no-preload-908370" hosting pod "kube-controller-manager-no-preload-908370" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-908370" has status "Ready":"False"
	I1104 12:08:56.232598   85500 pod_ready.go:82] duration metric: took 66.818411ms for pod "kube-controller-manager-no-preload-908370" in "kube-system" namespace to be "Ready" ...
	E1104 12:08:56.232610   85500 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-908370" hosting pod "kube-controller-manager-no-preload-908370" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-908370" has status "Ready":"False"
	I1104 12:08:56.232620   85500 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-w9hbz" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:56.633774   85500 pod_ready.go:98] node "no-preload-908370" hosting pod "kube-proxy-w9hbz" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-908370" has status "Ready":"False"
	I1104 12:08:56.633804   85500 pod_ready.go:82] duration metric: took 401.173552ms for pod "kube-proxy-w9hbz" in "kube-system" namespace to be "Ready" ...
	E1104 12:08:56.633815   85500 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-908370" hosting pod "kube-proxy-w9hbz" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-908370" has status "Ready":"False"
	I1104 12:08:56.633824   85500 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-908370" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:57.032392   85500 pod_ready.go:98] node "no-preload-908370" hosting pod "kube-scheduler-no-preload-908370" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-908370" has status "Ready":"False"
	I1104 12:08:57.032419   85500 pod_ready.go:82] duration metric: took 398.58729ms for pod "kube-scheduler-no-preload-908370" in "kube-system" namespace to be "Ready" ...
	E1104 12:08:57.032431   85500 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-908370" hosting pod "kube-scheduler-no-preload-908370" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-908370" has status "Ready":"False"
	I1104 12:08:57.032439   85500 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:57.431940   85500 pod_ready.go:98] node "no-preload-908370" hosting pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-908370" has status "Ready":"False"
	I1104 12:08:57.431976   85500 pod_ready.go:82] duration metric: took 399.525162ms for pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace to be "Ready" ...
	E1104 12:08:57.431987   85500 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-908370" hosting pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-908370" has status "Ready":"False"
	I1104 12:08:57.431997   85500 pod_ready.go:39] duration metric: took 1.283953089s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
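The pod_ready lines above show minikube checking each system-critical pod's Ready condition and skipping pods while the hosting node itself is NotReady. A minimal client-go sketch of the underlying Ready-condition check (not minikube's pod_ready.go, which layers node checks, label selectors and retry timing on top; the kubeconfig path is the one updated earlier in this log):

// podIsReady reports whether a pod's PodReady condition is True.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podIsReady(cs *kubernetes.Clientset, ns, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19906-19898/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ready, err := podIsReady(cs, "kube-system", "coredns-7c65d6cfc9-vv4kq")
	fmt.Println(ready, err)
}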
	I1104 12:08:57.432014   85500 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1104 12:08:57.444821   85500 ops.go:34] apiserver oom_adj: -16
	I1104 12:08:57.444845   85500 kubeadm.go:597] duration metric: took 9.377227288s to restartPrimaryControlPlane
	I1104 12:08:57.444857   85500 kubeadm.go:394] duration metric: took 9.423506415s to StartCluster
	I1104 12:08:57.444879   85500 settings.go:142] acquiring lock: {Name:mk5550833c1f1a4ab4fbb2bf42ff8bd7f6341220 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 12:08:57.444965   85500 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19906-19898/kubeconfig
	I1104 12:08:57.446715   85500 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19906-19898/kubeconfig: {Name:mk164657c178e6abe086acb5c1cadb969cb2b0cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 12:08:57.446981   85500 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.91 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1104 12:08:57.447059   85500 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1104 12:08:57.447172   85500 addons.go:69] Setting storage-provisioner=true in profile "no-preload-908370"
	I1104 12:08:57.447193   85500 addons.go:234] Setting addon storage-provisioner=true in "no-preload-908370"
	W1104 12:08:57.447202   85500 addons.go:243] addon storage-provisioner should already be in state true
	I1104 12:08:57.447207   85500 config.go:182] Loaded profile config "no-preload-908370": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1104 12:08:57.447237   85500 host.go:66] Checking if "no-preload-908370" exists ...
	I1104 12:08:57.447234   85500 addons.go:69] Setting default-storageclass=true in profile "no-preload-908370"
	I1104 12:08:57.447321   85500 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-908370"
	I1104 12:08:57.447222   85500 addons.go:69] Setting metrics-server=true in profile "no-preload-908370"
	I1104 12:08:57.447418   85500 addons.go:234] Setting addon metrics-server=true in "no-preload-908370"
	W1104 12:08:57.447431   85500 addons.go:243] addon metrics-server should already be in state true
	I1104 12:08:57.447461   85500 host.go:66] Checking if "no-preload-908370" exists ...
	I1104 12:08:57.447708   85500 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:08:57.447792   85500 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:08:57.447813   85500 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:08:57.447748   85500 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:08:57.447896   85500 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:08:57.447853   85500 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:08:57.449013   85500 out.go:177] * Verifying Kubernetes components...
	I1104 12:08:57.450774   85500 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1104 12:08:57.469657   85500 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42565
	I1104 12:08:57.470180   85500 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:08:57.470801   85500 main.go:141] libmachine: Using API Version  1
	I1104 12:08:57.470830   85500 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:08:57.471277   85500 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:08:57.471873   85500 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:08:57.471924   85500 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:08:57.485026   85500 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33323
	I1104 12:08:57.485330   85500 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43999
	I1104 12:08:57.485604   85500 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:08:57.485772   85500 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:08:57.486328   85500 main.go:141] libmachine: Using API Version  1
	I1104 12:08:57.486363   85500 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:08:57.486442   85500 main.go:141] libmachine: Using API Version  1
	I1104 12:08:57.486471   85500 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:08:57.486735   85500 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:08:57.486847   85500 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:08:57.487059   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetState
	I1104 12:08:57.487337   85500 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:08:57.487401   85500 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:08:57.490138   85500 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45761
	I1104 12:08:57.490611   85500 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:08:57.490705   85500 addons.go:234] Setting addon default-storageclass=true in "no-preload-908370"
	W1104 12:08:57.490724   85500 addons.go:243] addon default-storageclass should already be in state true
	I1104 12:08:57.490748   85500 host.go:66] Checking if "no-preload-908370" exists ...
	I1104 12:08:57.491098   85500 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:08:57.491140   85500 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:08:57.491153   85500 main.go:141] libmachine: Using API Version  1
	I1104 12:08:57.491177   85500 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:08:57.491549   85500 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:08:57.491718   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetState
	I1104 12:08:57.493600   85500 main.go:141] libmachine: (no-preload-908370) Calling .DriverName
	I1104 12:08:57.495883   85500 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1104 12:08:57.497200   85500 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1104 12:08:57.497217   85500 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1104 12:08:57.497245   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHHostname
	I1104 12:08:57.500402   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:57.500934   85500 main.go:141] libmachine: (no-preload-908370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:66:d5", ip: ""} in network mk-no-preload-908370: {Iface:virbr3 ExpiryTime:2024-11-04 13:08:23 +0000 UTC Type:0 Mac:52:54:00:f8:66:d5 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:no-preload-908370 Clientid:01:52:54:00:f8:66:d5}
	I1104 12:08:57.500960   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined IP address 192.168.61.91 and MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:57.501276   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHPort
	I1104 12:08:57.501483   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHKeyPath
	I1104 12:08:57.501626   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHUsername
	I1104 12:08:57.501775   85500 sshutil.go:53] new ssh client: &{IP:192.168.61.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/no-preload-908370/id_rsa Username:docker}
	I1104 12:08:57.508615   85500 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37243
	I1104 12:08:57.509102   85500 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:08:57.509582   85500 main.go:141] libmachine: Using API Version  1
	I1104 12:08:57.509606   85500 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:08:57.509948   85500 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:08:57.510115   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetState
	I1104 12:08:57.510809   85500 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40519
	I1104 12:08:57.511134   85500 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:08:57.511818   85500 main.go:141] libmachine: Using API Version  1
	I1104 12:08:57.511836   85500 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:08:57.511868   85500 main.go:141] libmachine: (no-preload-908370) Calling .DriverName
	I1104 12:08:57.512486   85500 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:08:57.513456   85500 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:08:57.513500   85500 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:08:57.513921   85500 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1104 12:08:57.515417   85500 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1104 12:08:57.515434   85500 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1104 12:08:57.515461   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHHostname
	I1104 12:08:57.518867   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:57.519216   85500 main.go:141] libmachine: (no-preload-908370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:66:d5", ip: ""} in network mk-no-preload-908370: {Iface:virbr3 ExpiryTime:2024-11-04 13:08:23 +0000 UTC Type:0 Mac:52:54:00:f8:66:d5 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:no-preload-908370 Clientid:01:52:54:00:f8:66:d5}
	I1104 12:08:57.519241   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined IP address 192.168.61.91 and MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:57.519334   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHPort
	I1104 12:08:57.519523   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHKeyPath
	I1104 12:08:57.519662   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHUsername
	I1104 12:08:57.520124   85500 sshutil.go:53] new ssh client: &{IP:192.168.61.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/no-preload-908370/id_rsa Username:docker}
	I1104 12:08:57.529448   85500 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42427
	I1104 12:08:57.529979   85500 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:08:57.530374   85500 main.go:141] libmachine: Using API Version  1
	I1104 12:08:57.530389   85500 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:08:57.530756   85500 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:08:57.530889   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetState
	I1104 12:08:57.532430   85500 main.go:141] libmachine: (no-preload-908370) Calling .DriverName
	I1104 12:08:57.532832   85500 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1104 12:08:57.532843   85500 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1104 12:08:57.532857   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHHostname
	I1104 12:08:57.535429   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:57.535783   85500 main.go:141] libmachine: (no-preload-908370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:66:d5", ip: ""} in network mk-no-preload-908370: {Iface:virbr3 ExpiryTime:2024-11-04 13:08:23 +0000 UTC Type:0 Mac:52:54:00:f8:66:d5 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:no-preload-908370 Clientid:01:52:54:00:f8:66:d5}
	I1104 12:08:57.535809   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined IP address 192.168.61.91 and MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:57.535953   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHPort
	I1104 12:08:57.536148   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHKeyPath
	I1104 12:08:57.536245   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHUsername
	I1104 12:08:57.536388   85500 sshutil.go:53] new ssh client: &{IP:192.168.61.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/no-preload-908370/id_rsa Username:docker}
	I1104 12:08:57.635571   85500 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1104 12:08:57.654984   85500 node_ready.go:35] waiting up to 6m0s for node "no-preload-908370" to be "Ready" ...
	I1104 12:08:57.722564   85500 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1104 12:08:57.768850   85500 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1104 12:08:57.791069   85500 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1104 12:08:57.791090   85500 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1104 12:08:57.875966   85500 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1104 12:08:57.875997   85500 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1104 12:08:57.929834   85500 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1104 12:08:57.929867   85500 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1104 12:08:58.017927   85500 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1104 12:08:58.732204   85500 main.go:141] libmachine: Making call to close driver server
	I1104 12:08:58.732235   85500 main.go:141] libmachine: (no-preload-908370) Calling .Close
	I1104 12:08:58.732586   85500 main.go:141] libmachine: Successfully made call to close driver server
	I1104 12:08:58.732614   85500 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 12:08:58.732624   85500 main.go:141] libmachine: Making call to close driver server
	I1104 12:08:58.732635   85500 main.go:141] libmachine: (no-preload-908370) Calling .Close
	I1104 12:08:58.732640   85500 main.go:141] libmachine: (no-preload-908370) DBG | Closing plugin on server side
	I1104 12:08:58.733045   85500 main.go:141] libmachine: Successfully made call to close driver server
	I1104 12:08:58.733108   85500 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 12:08:58.733084   85500 main.go:141] libmachine: (no-preload-908370) DBG | Closing plugin on server side
	I1104 12:08:58.736737   85500 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.014142064s)
	I1104 12:08:58.736783   85500 main.go:141] libmachine: Making call to close driver server
	I1104 12:08:58.736793   85500 main.go:141] libmachine: (no-preload-908370) Calling .Close
	I1104 12:08:58.737035   85500 main.go:141] libmachine: Successfully made call to close driver server
	I1104 12:08:58.737077   85500 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 12:08:58.737090   85500 main.go:141] libmachine: Making call to close driver server
	I1104 12:08:58.737100   85500 main.go:141] libmachine: (no-preload-908370) Calling .Close
	I1104 12:08:58.737737   85500 main.go:141] libmachine: (no-preload-908370) DBG | Closing plugin on server side
	I1104 12:08:58.737756   85500 main.go:141] libmachine: Successfully made call to close driver server
	I1104 12:08:58.737770   85500 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 12:08:58.740716   85500 main.go:141] libmachine: Making call to close driver server
	I1104 12:08:58.740735   85500 main.go:141] libmachine: (no-preload-908370) Calling .Close
	I1104 12:08:58.740963   85500 main.go:141] libmachine: (no-preload-908370) DBG | Closing plugin on server side
	I1104 12:08:58.740974   85500 main.go:141] libmachine: Successfully made call to close driver server
	I1104 12:08:58.740985   85500 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 12:08:58.987200   85500 main.go:141] libmachine: Making call to close driver server
	I1104 12:08:58.987227   85500 main.go:141] libmachine: (no-preload-908370) Calling .Close
	I1104 12:08:58.987657   85500 main.go:141] libmachine: Successfully made call to close driver server
	I1104 12:08:58.987667   85500 main.go:141] libmachine: (no-preload-908370) DBG | Closing plugin on server side
	I1104 12:08:58.987676   85500 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 12:08:58.987685   85500 main.go:141] libmachine: Making call to close driver server
	I1104 12:08:58.987708   85500 main.go:141] libmachine: (no-preload-908370) Calling .Close
	I1104 12:08:58.987991   85500 main.go:141] libmachine: Successfully made call to close driver server
	I1104 12:08:58.988006   85500 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 12:08:58.988018   85500 addons.go:475] Verifying addon metrics-server=true in "no-preload-908370"
	I1104 12:08:58.989756   85500 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1104 12:08:58.991022   85500 addons.go:510] duration metric: took 1.54397104s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1104 12:08:59.659284   85500 node_ready.go:53] node "no-preload-908370" has status "Ready":"False"
	I1104 12:08:56.057497   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:58.057767   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:56.850793   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:58.852058   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:56.938312   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:57.437920   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:57.937779   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:58.438511   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:58.938464   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:59.438423   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:59.938450   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:00.438108   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:00.938053   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:01.438356   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:02.158318   85500 node_ready.go:53] node "no-preload-908370" has status "Ready":"False"
	I1104 12:09:04.658719   85500 node_ready.go:53] node "no-preload-908370" has status "Ready":"False"
	I1104 12:09:05.159526   85500 node_ready.go:49] node "no-preload-908370" has status "Ready":"True"
	I1104 12:09:05.159553   85500 node_ready.go:38] duration metric: took 7.504528904s for node "no-preload-908370" to be "Ready" ...
	I1104 12:09:05.159564   85500 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1104 12:09:05.164838   85500 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-vv4kq" in "kube-system" namespace to be "Ready" ...
	I1104 12:09:05.173888   85500 pod_ready.go:93] pod "coredns-7c65d6cfc9-vv4kq" in "kube-system" namespace has status "Ready":"True"
	I1104 12:09:05.173909   85500 pod_ready.go:82] duration metric: took 9.046581ms for pod "coredns-7c65d6cfc9-vv4kq" in "kube-system" namespace to be "Ready" ...
	I1104 12:09:05.173919   85500 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-908370" in "kube-system" namespace to be "Ready" ...
	I1104 12:09:00.556225   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:02.556893   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:05.055827   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:01.351472   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:03.851990   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:01.938447   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:02.438441   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:02.938694   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:03.438467   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:03.938445   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:04.438137   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:04.937941   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:05.438441   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:05.937760   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:06.438704   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:05.680754   85500 pod_ready.go:93] pod "etcd-no-preload-908370" in "kube-system" namespace has status "Ready":"True"
	I1104 12:09:05.680778   85500 pod_ready.go:82] duration metric: took 506.849735ms for pod "etcd-no-preload-908370" in "kube-system" namespace to be "Ready" ...
	I1104 12:09:05.680804   85500 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-908370" in "kube-system" namespace to be "Ready" ...
	I1104 12:09:07.687108   85500 pod_ready.go:103] pod "kube-apiserver-no-preload-908370" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:09.687377   85500 pod_ready.go:103] pod "kube-apiserver-no-preload-908370" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:07.556024   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:10.055613   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:06.351230   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:08.351640   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:10.850364   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:06.937956   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:07.438323   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:07.938465   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:08.438437   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:08.937675   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:09.437868   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:09.938053   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:10.438467   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:10.938703   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:11.438436   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:10.687315   85500 pod_ready.go:93] pod "kube-apiserver-no-preload-908370" in "kube-system" namespace has status "Ready":"True"
	I1104 12:09:10.687338   85500 pod_ready.go:82] duration metric: took 5.006527478s for pod "kube-apiserver-no-preload-908370" in "kube-system" namespace to be "Ready" ...
	I1104 12:09:10.687348   85500 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-908370" in "kube-system" namespace to be "Ready" ...
	I1104 12:09:10.692554   85500 pod_ready.go:93] pod "kube-controller-manager-no-preload-908370" in "kube-system" namespace has status "Ready":"True"
	I1104 12:09:10.692583   85500 pod_ready.go:82] duration metric: took 5.227048ms for pod "kube-controller-manager-no-preload-908370" in "kube-system" namespace to be "Ready" ...
	I1104 12:09:10.692597   85500 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-w9hbz" in "kube-system" namespace to be "Ready" ...
	I1104 12:09:10.697109   85500 pod_ready.go:93] pod "kube-proxy-w9hbz" in "kube-system" namespace has status "Ready":"True"
	I1104 12:09:10.697132   85500 pod_ready.go:82] duration metric: took 4.525205ms for pod "kube-proxy-w9hbz" in "kube-system" namespace to be "Ready" ...
	I1104 12:09:10.697153   85500 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-908370" in "kube-system" namespace to be "Ready" ...
	I1104 12:09:10.701450   85500 pod_ready.go:93] pod "kube-scheduler-no-preload-908370" in "kube-system" namespace has status "Ready":"True"
	I1104 12:09:10.701472   85500 pod_ready.go:82] duration metric: took 4.310973ms for pod "kube-scheduler-no-preload-908370" in "kube-system" namespace to be "Ready" ...
	I1104 12:09:10.701483   85500 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace to be "Ready" ...
	I1104 12:09:12.708631   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:14.708772   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:12.056161   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:14.556380   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:12.850721   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:14.851608   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:11.938465   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:12.437963   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:12.938515   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:13.437754   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:13.937856   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:14.438729   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:14.938439   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:15.438421   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:15.938044   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:16.438456   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:17.209025   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:19.707595   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:17.056226   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:19.555918   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:17.350266   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:19.350329   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:16.937807   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:17.438266   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:17.938153   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:18.437829   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:18.938469   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:19.438336   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:19.938284   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:20.438073   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:20.937894   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:21.438135   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:09:21.438238   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:09:21.471463   86402 cri.go:89] found id: ""
	I1104 12:09:21.471495   86402 logs.go:282] 0 containers: []
	W1104 12:09:21.471507   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:09:21.471515   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:09:21.471568   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:09:21.509336   86402 cri.go:89] found id: ""
	I1104 12:09:21.509363   86402 logs.go:282] 0 containers: []
	W1104 12:09:21.509373   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:09:21.509381   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:09:21.509441   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:09:21.545963   86402 cri.go:89] found id: ""
	I1104 12:09:21.545987   86402 logs.go:282] 0 containers: []
	W1104 12:09:21.545995   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:09:21.546000   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:09:21.546059   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:09:21.580707   86402 cri.go:89] found id: ""
	I1104 12:09:21.580737   86402 logs.go:282] 0 containers: []
	W1104 12:09:21.580748   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:09:21.580755   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:09:21.580820   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:09:21.613763   86402 cri.go:89] found id: ""
	I1104 12:09:21.613791   86402 logs.go:282] 0 containers: []
	W1104 12:09:21.613801   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:09:21.613809   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:09:21.613872   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:09:21.646559   86402 cri.go:89] found id: ""
	I1104 12:09:21.646583   86402 logs.go:282] 0 containers: []
	W1104 12:09:21.646591   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:09:21.646597   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:09:21.646643   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:09:21.681439   86402 cri.go:89] found id: ""
	I1104 12:09:21.681467   86402 logs.go:282] 0 containers: []
	W1104 12:09:21.681479   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:09:21.681486   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:09:21.681554   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:09:21.708045   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:24.207686   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:22.055637   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:24.056458   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:21.350636   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:23.850852   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:21.713875   86402 cri.go:89] found id: ""
	I1104 12:09:21.713899   86402 logs.go:282] 0 containers: []
	W1104 12:09:21.713907   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:09:21.713915   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:09:21.713925   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:09:21.763882   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:09:21.763918   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:09:21.778590   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:09:21.778615   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:09:21.892208   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:09:21.892235   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:09:21.892250   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:09:21.965946   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:09:21.965984   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
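In the 86402 process (the old-k8s-version cluster that never comes up), each cycle above probes the runtime with "crictl ps -a --quiet --name=<component>", finds zero containers for every control-plane component, and then falls back to gathering kubelet, dmesg, describe-nodes, CRI-O and container-status logs. A hedged Go sketch of that per-component probe as it would run on the node (assumes crictl on PATH and root via sudo; not minikube's cri.go, which runs these commands over SSH):

// findComponentContainers runs `crictl ps -a --quiet --name=<component>` for
// each control-plane component and reports which ones have no container at
// all, mirroring the "0 containers" lines in the log above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, name := range components {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("%s: crictl failed: %v\n", name, err)
			continue
		}
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", name)
			continue
		}
		fmt.Printf("%s: %d container(s): %v\n", name, len(ids), ids)
	}
}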
	I1104 12:09:24.502992   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:24.514899   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:09:24.514960   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:09:24.554466   86402 cri.go:89] found id: ""
	I1104 12:09:24.554491   86402 logs.go:282] 0 containers: []
	W1104 12:09:24.554501   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:09:24.554510   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:09:24.554567   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:09:24.591532   86402 cri.go:89] found id: ""
	I1104 12:09:24.591560   86402 logs.go:282] 0 containers: []
	W1104 12:09:24.591572   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:09:24.591580   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:09:24.591638   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:09:24.625436   86402 cri.go:89] found id: ""
	I1104 12:09:24.625467   86402 logs.go:282] 0 containers: []
	W1104 12:09:24.625478   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:09:24.625485   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:09:24.625544   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:09:24.658317   86402 cri.go:89] found id: ""
	I1104 12:09:24.658346   86402 logs.go:282] 0 containers: []
	W1104 12:09:24.658357   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:09:24.658364   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:09:24.658426   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:09:24.692811   86402 cri.go:89] found id: ""
	I1104 12:09:24.692839   86402 logs.go:282] 0 containers: []
	W1104 12:09:24.692850   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:09:24.692857   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:09:24.692917   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:09:24.729677   86402 cri.go:89] found id: ""
	I1104 12:09:24.729708   86402 logs.go:282] 0 containers: []
	W1104 12:09:24.729719   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:09:24.729726   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:09:24.729773   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:09:24.768575   86402 cri.go:89] found id: ""
	I1104 12:09:24.768598   86402 logs.go:282] 0 containers: []
	W1104 12:09:24.768608   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:09:24.768615   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:09:24.768681   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:09:24.802344   86402 cri.go:89] found id: ""
	I1104 12:09:24.802368   86402 logs.go:282] 0 containers: []
	W1104 12:09:24.802375   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:09:24.802383   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:09:24.802394   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:09:24.855882   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:09:24.855915   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:09:24.869199   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:09:24.869243   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:09:24.940720   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:09:24.940744   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:09:24.940758   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:09:25.016139   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:09:25.016177   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
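
The block above is one complete diagnostic pass by the minikube process with PID 86402: it probes the container runtime for each expected control-plane container (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, kubernetes-dashboard), finds none, and then falls back to gathering kubelet, dmesg, describe-nodes, CRI-O and container-status output. The sketch below is a hand-run equivalent of that probe loop, assuming crictl is available on the node over SSH; it only illustrates what the log shows and is not minikube's own code.

    # probe every expected control-plane container, the same filter the log shows
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      # list containers in any state whose name matches
      ids=$(sudo crictl ps -a --quiet --name="$name")
      if [ -z "$ids" ]; then
        echo "no container found matching \"$name\""
      fi
    done
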
	I1104 12:09:26.208422   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:28.208568   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:26.557513   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:29.055769   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:26.350171   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:28.353001   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:30.851153   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
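
Interleaved with the 86402 output, three other minikube runs (PIDs 85500, 86301 and 85759) keep polling their metrics-server pods, which report Ready=False for this whole window. A rough manual equivalent of that readiness check, assuming the addon's usual k8s-app=metrics-server label in the kube-system namespace (an assumption, not taken from this log), would be:

    # prints "True" once the metrics-server pod reports Ready; "False" while it does not
    kubectl --namespace kube-system get pod -l k8s-app=metrics-server \
      -o jsonpath='{.items[0].status.conditions[?(@.type=="Ready")].status}'
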
	I1104 12:09:27.553297   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:27.566857   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:09:27.566913   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:09:27.599606   86402 cri.go:89] found id: ""
	I1104 12:09:27.599641   86402 logs.go:282] 0 containers: []
	W1104 12:09:27.599653   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:09:27.599661   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:09:27.599721   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:09:27.633818   86402 cri.go:89] found id: ""
	I1104 12:09:27.633841   86402 logs.go:282] 0 containers: []
	W1104 12:09:27.633849   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:09:27.633854   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:09:27.633907   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:09:27.668088   86402 cri.go:89] found id: ""
	I1104 12:09:27.668120   86402 logs.go:282] 0 containers: []
	W1104 12:09:27.668129   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:09:27.668135   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:09:27.668185   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:09:27.699401   86402 cri.go:89] found id: ""
	I1104 12:09:27.699433   86402 logs.go:282] 0 containers: []
	W1104 12:09:27.699445   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:09:27.699453   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:09:27.699511   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:09:27.731422   86402 cri.go:89] found id: ""
	I1104 12:09:27.731448   86402 logs.go:282] 0 containers: []
	W1104 12:09:27.731459   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:09:27.731466   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:09:27.731528   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:09:27.762808   86402 cri.go:89] found id: ""
	I1104 12:09:27.762839   86402 logs.go:282] 0 containers: []
	W1104 12:09:27.762850   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:09:27.762857   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:09:27.762917   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:09:27.794729   86402 cri.go:89] found id: ""
	I1104 12:09:27.794757   86402 logs.go:282] 0 containers: []
	W1104 12:09:27.794765   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:09:27.794771   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:09:27.794826   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:09:27.825694   86402 cri.go:89] found id: ""
	I1104 12:09:27.825716   86402 logs.go:282] 0 containers: []
	W1104 12:09:27.825724   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:09:27.825731   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:09:27.825742   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:09:27.862111   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:09:27.862140   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:09:27.911169   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:09:27.911204   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:09:27.924207   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:09:27.924232   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:09:27.995123   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:09:27.995153   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:09:27.995167   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:09:30.580831   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:30.594901   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:09:30.594959   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:09:30.630936   86402 cri.go:89] found id: ""
	I1104 12:09:30.630961   86402 logs.go:282] 0 containers: []
	W1104 12:09:30.630971   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:09:30.630979   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:09:30.631034   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:09:30.669288   86402 cri.go:89] found id: ""
	I1104 12:09:30.669311   86402 logs.go:282] 0 containers: []
	W1104 12:09:30.669320   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:09:30.669328   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:09:30.669388   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:09:30.706288   86402 cri.go:89] found id: ""
	I1104 12:09:30.706312   86402 logs.go:282] 0 containers: []
	W1104 12:09:30.706319   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:09:30.706325   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:09:30.706384   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:09:30.739027   86402 cri.go:89] found id: ""
	I1104 12:09:30.739057   86402 logs.go:282] 0 containers: []
	W1104 12:09:30.739069   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:09:30.739078   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:09:30.739137   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:09:30.772247   86402 cri.go:89] found id: ""
	I1104 12:09:30.772272   86402 logs.go:282] 0 containers: []
	W1104 12:09:30.772280   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:09:30.772286   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:09:30.772338   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:09:30.810327   86402 cri.go:89] found id: ""
	I1104 12:09:30.810360   86402 logs.go:282] 0 containers: []
	W1104 12:09:30.810370   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:09:30.810375   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:09:30.810426   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:09:30.842241   86402 cri.go:89] found id: ""
	I1104 12:09:30.842271   86402 logs.go:282] 0 containers: []
	W1104 12:09:30.842279   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:09:30.842285   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:09:30.842332   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:09:30.877003   86402 cri.go:89] found id: ""
	I1104 12:09:30.877032   86402 logs.go:282] 0 containers: []
	W1104 12:09:30.877043   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:09:30.877052   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:09:30.877077   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:09:30.925783   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:09:30.925816   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:09:30.939651   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:09:30.939680   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:09:31.029176   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:09:31.029210   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:09:31.029244   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:09:31.116311   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:09:31.116348   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:09:30.708451   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:32.708661   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:31.056627   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:33.056743   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:35.057986   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:33.350420   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:35.351206   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:33.653267   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:33.665813   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:09:33.665878   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:09:33.701812   86402 cri.go:89] found id: ""
	I1104 12:09:33.701839   86402 logs.go:282] 0 containers: []
	W1104 12:09:33.701852   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:09:33.701860   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:09:33.701922   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:09:33.738816   86402 cri.go:89] found id: ""
	I1104 12:09:33.738850   86402 logs.go:282] 0 containers: []
	W1104 12:09:33.738861   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:09:33.738868   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:09:33.738928   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:09:33.773936   86402 cri.go:89] found id: ""
	I1104 12:09:33.773960   86402 logs.go:282] 0 containers: []
	W1104 12:09:33.773968   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:09:33.773976   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:09:33.774031   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:09:33.808049   86402 cri.go:89] found id: ""
	I1104 12:09:33.808079   86402 logs.go:282] 0 containers: []
	W1104 12:09:33.808091   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:09:33.808098   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:09:33.808154   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:09:33.844276   86402 cri.go:89] found id: ""
	I1104 12:09:33.844303   86402 logs.go:282] 0 containers: []
	W1104 12:09:33.844314   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:09:33.844322   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:09:33.844443   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:09:33.879736   86402 cri.go:89] found id: ""
	I1104 12:09:33.879772   86402 logs.go:282] 0 containers: []
	W1104 12:09:33.879782   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:09:33.879788   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:09:33.879843   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:09:33.913717   86402 cri.go:89] found id: ""
	I1104 12:09:33.913750   86402 logs.go:282] 0 containers: []
	W1104 12:09:33.913761   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:09:33.913769   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:09:33.913832   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:09:33.949632   86402 cri.go:89] found id: ""
	I1104 12:09:33.949658   86402 logs.go:282] 0 containers: []
	W1104 12:09:33.949667   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:09:33.949677   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:09:33.949691   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:09:34.019770   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:09:34.019790   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:09:34.019806   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:09:34.101493   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:09:34.101524   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:09:34.146723   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:09:34.146751   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:09:34.196295   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:09:34.196338   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:09:35.207223   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:37.207576   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:39.208091   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:37.556228   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:39.556548   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:37.850907   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:39.852870   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:36.709951   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:36.724723   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:09:36.724782   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:09:36.777406   86402 cri.go:89] found id: ""
	I1104 12:09:36.777440   86402 logs.go:282] 0 containers: []
	W1104 12:09:36.777451   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:09:36.777459   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:09:36.777520   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:09:36.834486   86402 cri.go:89] found id: ""
	I1104 12:09:36.834516   86402 logs.go:282] 0 containers: []
	W1104 12:09:36.834527   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:09:36.834535   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:09:36.834641   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:09:36.868828   86402 cri.go:89] found id: ""
	I1104 12:09:36.868853   86402 logs.go:282] 0 containers: []
	W1104 12:09:36.868861   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:09:36.868867   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:09:36.868912   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:09:36.900942   86402 cri.go:89] found id: ""
	I1104 12:09:36.900972   86402 logs.go:282] 0 containers: []
	W1104 12:09:36.900980   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:09:36.900986   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:09:36.901043   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:09:36.933215   86402 cri.go:89] found id: ""
	I1104 12:09:36.933265   86402 logs.go:282] 0 containers: []
	W1104 12:09:36.933276   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:09:36.933282   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:09:36.933330   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:09:36.966753   86402 cri.go:89] found id: ""
	I1104 12:09:36.966776   86402 logs.go:282] 0 containers: []
	W1104 12:09:36.966784   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:09:36.966789   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:09:36.966850   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:09:37.000050   86402 cri.go:89] found id: ""
	I1104 12:09:37.000074   86402 logs.go:282] 0 containers: []
	W1104 12:09:37.000082   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:09:37.000087   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:09:37.000144   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:09:37.033252   86402 cri.go:89] found id: ""
	I1104 12:09:37.033283   86402 logs.go:282] 0 containers: []
	W1104 12:09:37.033295   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:09:37.033305   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:09:37.033328   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:09:37.085351   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:09:37.085383   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:09:37.098556   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:09:37.098582   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:09:37.167489   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:09:37.167512   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:09:37.167525   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:09:37.243292   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:09:37.243325   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:09:39.781468   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:39.795630   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:09:39.795756   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:09:39.833745   86402 cri.go:89] found id: ""
	I1104 12:09:39.833779   86402 logs.go:282] 0 containers: []
	W1104 12:09:39.833791   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:09:39.833798   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:09:39.833862   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:09:39.870075   86402 cri.go:89] found id: ""
	I1104 12:09:39.870096   86402 logs.go:282] 0 containers: []
	W1104 12:09:39.870106   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:09:39.870119   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:09:39.870173   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:09:39.905807   86402 cri.go:89] found id: ""
	I1104 12:09:39.905836   86402 logs.go:282] 0 containers: []
	W1104 12:09:39.905846   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:09:39.905854   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:09:39.905916   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:09:39.941890   86402 cri.go:89] found id: ""
	I1104 12:09:39.941914   86402 logs.go:282] 0 containers: []
	W1104 12:09:39.941922   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:09:39.941932   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:09:39.941978   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:09:39.979123   86402 cri.go:89] found id: ""
	I1104 12:09:39.979150   86402 logs.go:282] 0 containers: []
	W1104 12:09:39.979159   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:09:39.979165   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:09:39.979220   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:09:40.014748   86402 cri.go:89] found id: ""
	I1104 12:09:40.014777   86402 logs.go:282] 0 containers: []
	W1104 12:09:40.014785   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:09:40.014791   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:09:40.014882   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:09:40.049977   86402 cri.go:89] found id: ""
	I1104 12:09:40.050004   86402 logs.go:282] 0 containers: []
	W1104 12:09:40.050014   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:09:40.050021   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:09:40.050100   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:09:40.085630   86402 cri.go:89] found id: ""
	I1104 12:09:40.085663   86402 logs.go:282] 0 containers: []
	W1104 12:09:40.085674   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:09:40.085685   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:09:40.085701   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:09:40.166611   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:09:40.166650   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:09:40.203117   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:09:40.203155   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:09:40.256233   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:09:40.256267   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:09:40.270009   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:09:40.270042   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:09:40.338672   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
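
Every "describe nodes" attempt in this test fails the same way: kubectl cannot reach localhost:8443, which is consistent with the probes above finding no kube-apiserver container at all. As a hedged triage sketch (manual commands one could run on the node, not part of the test harness), the two facts can be confirmed directly:

    # is an apiserver container present in any state?
    sudo crictl ps -a --name kube-apiserver
    # is anything answering on the apiserver port?
    curl -sk https://localhost:8443/healthz || echo "apiserver not reachable on 8443"
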
	I1104 12:09:41.707618   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:43.708915   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:42.055555   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:44.060949   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:42.351562   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:44.851599   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:42.839402   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:42.852881   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:09:42.852947   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:09:42.884587   86402 cri.go:89] found id: ""
	I1104 12:09:42.884614   86402 logs.go:282] 0 containers: []
	W1104 12:09:42.884624   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:09:42.884631   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:09:42.884690   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:09:42.915286   86402 cri.go:89] found id: ""
	I1104 12:09:42.915316   86402 logs.go:282] 0 containers: []
	W1104 12:09:42.915327   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:09:42.915337   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:09:42.915399   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:09:42.945827   86402 cri.go:89] found id: ""
	I1104 12:09:42.945857   86402 logs.go:282] 0 containers: []
	W1104 12:09:42.945868   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:09:42.945875   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:09:42.945934   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:09:42.982662   86402 cri.go:89] found id: ""
	I1104 12:09:42.982693   86402 logs.go:282] 0 containers: []
	W1104 12:09:42.982703   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:09:42.982712   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:09:42.982788   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:09:43.015337   86402 cri.go:89] found id: ""
	I1104 12:09:43.015371   86402 logs.go:282] 0 containers: []
	W1104 12:09:43.015382   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:09:43.015390   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:09:43.015453   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:09:43.048235   86402 cri.go:89] found id: ""
	I1104 12:09:43.048262   86402 logs.go:282] 0 containers: []
	W1104 12:09:43.048270   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:09:43.048276   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:09:43.048351   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:09:43.080636   86402 cri.go:89] found id: ""
	I1104 12:09:43.080668   86402 logs.go:282] 0 containers: []
	W1104 12:09:43.080679   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:09:43.080687   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:09:43.080746   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:09:43.113986   86402 cri.go:89] found id: ""
	I1104 12:09:43.114011   86402 logs.go:282] 0 containers: []
	W1104 12:09:43.114019   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:09:43.114027   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:09:43.114038   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:09:43.165356   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:09:43.165390   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:09:43.179167   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:09:43.179200   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:09:43.250054   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:09:43.250083   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:09:43.250098   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:09:43.328970   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:09:43.329002   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:09:45.869879   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:45.883262   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:09:45.883359   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:09:45.921978   86402 cri.go:89] found id: ""
	I1104 12:09:45.922003   86402 logs.go:282] 0 containers: []
	W1104 12:09:45.922011   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:09:45.922016   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:09:45.922076   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:09:45.954668   86402 cri.go:89] found id: ""
	I1104 12:09:45.954697   86402 logs.go:282] 0 containers: []
	W1104 12:09:45.954710   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:09:45.954717   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:09:45.954787   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:09:45.987793   86402 cri.go:89] found id: ""
	I1104 12:09:45.987826   86402 logs.go:282] 0 containers: []
	W1104 12:09:45.987837   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:09:45.987845   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:09:45.987906   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:09:46.028517   86402 cri.go:89] found id: ""
	I1104 12:09:46.028550   86402 logs.go:282] 0 containers: []
	W1104 12:09:46.028558   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:09:46.028563   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:09:46.028621   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:09:46.063832   86402 cri.go:89] found id: ""
	I1104 12:09:46.063859   86402 logs.go:282] 0 containers: []
	W1104 12:09:46.063870   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:09:46.063878   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:09:46.063942   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:09:46.099981   86402 cri.go:89] found id: ""
	I1104 12:09:46.100011   86402 logs.go:282] 0 containers: []
	W1104 12:09:46.100027   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:09:46.100036   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:09:46.100169   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:09:46.133060   86402 cri.go:89] found id: ""
	I1104 12:09:46.133083   86402 logs.go:282] 0 containers: []
	W1104 12:09:46.133092   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:09:46.133099   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:09:46.133165   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:09:46.170559   86402 cri.go:89] found id: ""
	I1104 12:09:46.170583   86402 logs.go:282] 0 containers: []
	W1104 12:09:46.170591   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:09:46.170599   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:09:46.170610   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:09:46.253202   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:09:46.253253   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:09:46.288468   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:09:46.288498   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:09:46.339322   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:09:46.339354   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:09:46.353020   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:09:46.353049   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:09:46.420328   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:09:46.208695   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:48.708268   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:46.556598   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:49.057461   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:47.351225   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:49.352737   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:48.920709   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:48.933443   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:09:48.933507   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:09:48.964736   86402 cri.go:89] found id: ""
	I1104 12:09:48.964759   86402 logs.go:282] 0 containers: []
	W1104 12:09:48.964770   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:09:48.964777   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:09:48.964837   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:09:48.996646   86402 cri.go:89] found id: ""
	I1104 12:09:48.996670   86402 logs.go:282] 0 containers: []
	W1104 12:09:48.996679   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:09:48.996684   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:09:48.996734   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:09:49.028899   86402 cri.go:89] found id: ""
	I1104 12:09:49.028942   86402 logs.go:282] 0 containers: []
	W1104 12:09:49.028951   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:09:49.028957   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:09:49.029015   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:09:49.065032   86402 cri.go:89] found id: ""
	I1104 12:09:49.065056   86402 logs.go:282] 0 containers: []
	W1104 12:09:49.065064   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:09:49.065075   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:09:49.065120   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:09:49.097159   86402 cri.go:89] found id: ""
	I1104 12:09:49.097183   86402 logs.go:282] 0 containers: []
	W1104 12:09:49.097191   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:09:49.097196   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:09:49.097269   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:09:49.131578   86402 cri.go:89] found id: ""
	I1104 12:09:49.131608   86402 logs.go:282] 0 containers: []
	W1104 12:09:49.131619   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:09:49.131626   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:09:49.131684   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:09:49.164307   86402 cri.go:89] found id: ""
	I1104 12:09:49.164339   86402 logs.go:282] 0 containers: []
	W1104 12:09:49.164358   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:09:49.164367   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:09:49.164430   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:09:49.197171   86402 cri.go:89] found id: ""
	I1104 12:09:49.197199   86402 logs.go:282] 0 containers: []
	W1104 12:09:49.197210   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:09:49.197220   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:09:49.197251   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:09:49.210327   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:09:49.210355   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:09:49.280226   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:09:49.280251   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:09:49.280262   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:09:49.367655   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:09:49.367691   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:09:49.408424   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:09:49.408452   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:09:50.708963   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:53.207337   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:51.555800   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:54.055622   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:51.850949   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:54.350551   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:51.958148   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:51.970451   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:09:51.970521   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:09:52.000896   86402 cri.go:89] found id: ""
	I1104 12:09:52.000929   86402 logs.go:282] 0 containers: []
	W1104 12:09:52.000940   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:09:52.000948   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:09:52.001023   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:09:52.034122   86402 cri.go:89] found id: ""
	I1104 12:09:52.034150   86402 logs.go:282] 0 containers: []
	W1104 12:09:52.034161   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:09:52.034168   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:09:52.034227   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:09:52.070834   86402 cri.go:89] found id: ""
	I1104 12:09:52.070872   86402 logs.go:282] 0 containers: []
	W1104 12:09:52.070884   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:09:52.070891   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:09:52.070950   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:09:52.103730   86402 cri.go:89] found id: ""
	I1104 12:09:52.103758   86402 logs.go:282] 0 containers: []
	W1104 12:09:52.103766   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:09:52.103772   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:09:52.103832   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:09:52.135980   86402 cri.go:89] found id: ""
	I1104 12:09:52.136006   86402 logs.go:282] 0 containers: []
	W1104 12:09:52.136014   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:09:52.136020   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:09:52.136081   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:09:52.168903   86402 cri.go:89] found id: ""
	I1104 12:09:52.168928   86402 logs.go:282] 0 containers: []
	W1104 12:09:52.168936   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:09:52.168942   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:09:52.169001   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:09:52.199499   86402 cri.go:89] found id: ""
	I1104 12:09:52.199529   86402 logs.go:282] 0 containers: []
	W1104 12:09:52.199539   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:09:52.199546   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:09:52.199610   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:09:52.232566   86402 cri.go:89] found id: ""
	I1104 12:09:52.232603   86402 logs.go:282] 0 containers: []
	W1104 12:09:52.232615   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:09:52.232626   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:09:52.232640   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:09:52.282140   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:09:52.282180   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:09:52.295079   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:09:52.295110   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:09:52.364061   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:09:52.364087   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:09:52.364102   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:09:52.437868   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:09:52.437901   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:09:54.978182   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:54.991002   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:09:54.991068   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:09:55.023628   86402 cri.go:89] found id: ""
	I1104 12:09:55.023656   86402 logs.go:282] 0 containers: []
	W1104 12:09:55.023663   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:09:55.023669   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:09:55.023715   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:09:55.058524   86402 cri.go:89] found id: ""
	I1104 12:09:55.058548   86402 logs.go:282] 0 containers: []
	W1104 12:09:55.058557   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:09:55.058564   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:09:55.058634   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:09:55.095730   86402 cri.go:89] found id: ""
	I1104 12:09:55.095760   86402 logs.go:282] 0 containers: []
	W1104 12:09:55.095772   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:09:55.095779   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:09:55.095837   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:09:55.128341   86402 cri.go:89] found id: ""
	I1104 12:09:55.128365   86402 logs.go:282] 0 containers: []
	W1104 12:09:55.128373   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:09:55.128379   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:09:55.128438   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:09:55.160655   86402 cri.go:89] found id: ""
	I1104 12:09:55.160681   86402 logs.go:282] 0 containers: []
	W1104 12:09:55.160693   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:09:55.160700   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:09:55.160754   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:09:55.194050   86402 cri.go:89] found id: ""
	I1104 12:09:55.194077   86402 logs.go:282] 0 containers: []
	W1104 12:09:55.194086   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:09:55.194091   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:09:55.194138   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:09:55.227655   86402 cri.go:89] found id: ""
	I1104 12:09:55.227694   86402 logs.go:282] 0 containers: []
	W1104 12:09:55.227705   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:09:55.227712   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:09:55.227810   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:09:55.261106   86402 cri.go:89] found id: ""
	I1104 12:09:55.261137   86402 logs.go:282] 0 containers: []
	W1104 12:09:55.261147   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:09:55.261157   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:09:55.261171   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:09:55.335577   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:09:55.335598   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:09:55.335610   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:09:55.421339   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:09:55.421375   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:09:55.459936   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:09:55.459967   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:09:55.509346   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:09:55.509382   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:09:55.208869   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:57.707576   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:59.708019   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:56.555996   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:58.556335   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:56.851071   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:58.851254   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:58.023608   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:58.036540   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:09:58.036599   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:09:58.075104   86402 cri.go:89] found id: ""
	I1104 12:09:58.075182   86402 logs.go:282] 0 containers: []
	W1104 12:09:58.075198   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:09:58.075207   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:09:58.075271   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:09:58.109910   86402 cri.go:89] found id: ""
	I1104 12:09:58.109949   86402 logs.go:282] 0 containers: []
	W1104 12:09:58.109961   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:09:58.109968   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:09:58.110038   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:09:58.142829   86402 cri.go:89] found id: ""
	I1104 12:09:58.142854   86402 logs.go:282] 0 containers: []
	W1104 12:09:58.142865   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:09:58.142873   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:09:58.142924   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:09:58.178125   86402 cri.go:89] found id: ""
	I1104 12:09:58.178153   86402 logs.go:282] 0 containers: []
	W1104 12:09:58.178161   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:09:58.178168   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:09:58.178239   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:09:58.214117   86402 cri.go:89] found id: ""
	I1104 12:09:58.214146   86402 logs.go:282] 0 containers: []
	W1104 12:09:58.214156   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:09:58.214162   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:09:58.214213   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:09:58.244728   86402 cri.go:89] found id: ""
	I1104 12:09:58.244751   86402 logs.go:282] 0 containers: []
	W1104 12:09:58.244759   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:09:58.244765   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:09:58.244809   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:09:58.275542   86402 cri.go:89] found id: ""
	I1104 12:09:58.275568   86402 logs.go:282] 0 containers: []
	W1104 12:09:58.275576   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:09:58.275582   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:09:58.275630   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:09:58.314909   86402 cri.go:89] found id: ""
	I1104 12:09:58.314935   86402 logs.go:282] 0 containers: []
	W1104 12:09:58.314943   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:09:58.314952   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:09:58.314962   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:09:58.364361   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:09:58.364390   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:09:58.378483   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:09:58.378517   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:09:58.442012   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:09:58.442033   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:09:58.442045   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:09:58.517260   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:09:58.517298   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:10:01.057203   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:10:01.069937   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:10:01.070008   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:10:01.101672   86402 cri.go:89] found id: ""
	I1104 12:10:01.101698   86402 logs.go:282] 0 containers: []
	W1104 12:10:01.101709   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:10:01.101716   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:10:01.101779   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:10:01.134672   86402 cri.go:89] found id: ""
	I1104 12:10:01.134701   86402 logs.go:282] 0 containers: []
	W1104 12:10:01.134712   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:10:01.134719   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:10:01.134789   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:10:01.167784   86402 cri.go:89] found id: ""
	I1104 12:10:01.167833   86402 logs.go:282] 0 containers: []
	W1104 12:10:01.167845   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:10:01.167853   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:10:01.167945   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:10:01.201218   86402 cri.go:89] found id: ""
	I1104 12:10:01.201260   86402 logs.go:282] 0 containers: []
	W1104 12:10:01.201271   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:10:01.201281   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:10:01.201338   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:10:01.234964   86402 cri.go:89] found id: ""
	I1104 12:10:01.234991   86402 logs.go:282] 0 containers: []
	W1104 12:10:01.235000   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:10:01.235007   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:10:01.235069   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:10:01.267809   86402 cri.go:89] found id: ""
	I1104 12:10:01.267848   86402 logs.go:282] 0 containers: []
	W1104 12:10:01.267881   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:10:01.267890   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:10:01.267942   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:10:01.303567   86402 cri.go:89] found id: ""
	I1104 12:10:01.303590   86402 logs.go:282] 0 containers: []
	W1104 12:10:01.303598   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:10:01.303604   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:10:01.303648   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:10:01.342059   86402 cri.go:89] found id: ""
	I1104 12:10:01.342088   86402 logs.go:282] 0 containers: []
	W1104 12:10:01.342099   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:10:01.342109   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:10:01.342142   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:10:01.354845   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:10:01.354867   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:10:01.423426   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:10:01.423447   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:10:01.423459   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:10:01.498979   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:10:01.499018   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:10:01.537658   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:10:01.537691   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:10:02.208192   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:04.209058   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:01.055266   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:03.056457   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:01.350820   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:03.850435   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:04.088653   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:10:04.103506   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:10:04.103576   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:10:04.137574   86402 cri.go:89] found id: ""
	I1104 12:10:04.137602   86402 logs.go:282] 0 containers: []
	W1104 12:10:04.137612   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:10:04.137620   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:10:04.137684   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:10:04.177624   86402 cri.go:89] found id: ""
	I1104 12:10:04.177662   86402 logs.go:282] 0 containers: []
	W1104 12:10:04.177673   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:10:04.177681   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:10:04.177750   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:10:04.213829   86402 cri.go:89] found id: ""
	I1104 12:10:04.213850   86402 logs.go:282] 0 containers: []
	W1104 12:10:04.213862   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:10:04.213870   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:10:04.213929   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:10:04.251112   86402 cri.go:89] found id: ""
	I1104 12:10:04.251143   86402 logs.go:282] 0 containers: []
	W1104 12:10:04.251154   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:10:04.251162   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:10:04.251227   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:10:04.286005   86402 cri.go:89] found id: ""
	I1104 12:10:04.286036   86402 logs.go:282] 0 containers: []
	W1104 12:10:04.286046   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:10:04.286053   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:10:04.286118   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:10:04.317628   86402 cri.go:89] found id: ""
	I1104 12:10:04.317656   86402 logs.go:282] 0 containers: []
	W1104 12:10:04.317667   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:10:04.317674   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:10:04.317742   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:10:04.351663   86402 cri.go:89] found id: ""
	I1104 12:10:04.351687   86402 logs.go:282] 0 containers: []
	W1104 12:10:04.351695   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:10:04.351700   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:10:04.351755   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:10:04.385818   86402 cri.go:89] found id: ""
	I1104 12:10:04.385842   86402 logs.go:282] 0 containers: []
	W1104 12:10:04.385850   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:10:04.385858   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:10:04.385880   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:10:04.467141   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:10:04.467179   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:10:04.503669   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:10:04.503700   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:10:04.557237   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:10:04.557303   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:10:04.570484   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:10:04.570520   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:10:04.635099   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:10:06.708483   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:09.207171   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:05.556612   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:08.056976   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:06.350422   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:08.351537   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:10.351962   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:07.135741   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:10:07.148039   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:10:07.148132   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:10:07.185171   86402 cri.go:89] found id: ""
	I1104 12:10:07.185196   86402 logs.go:282] 0 containers: []
	W1104 12:10:07.185205   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:10:07.185211   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:10:07.185280   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:10:07.217097   86402 cri.go:89] found id: ""
	I1104 12:10:07.217126   86402 logs.go:282] 0 containers: []
	W1104 12:10:07.217137   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:10:07.217144   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:10:07.217204   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:10:07.250079   86402 cri.go:89] found id: ""
	I1104 12:10:07.250108   86402 logs.go:282] 0 containers: []
	W1104 12:10:07.250116   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:10:07.250121   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:10:07.250169   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:10:07.283423   86402 cri.go:89] found id: ""
	I1104 12:10:07.283463   86402 logs.go:282] 0 containers: []
	W1104 12:10:07.283475   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:10:07.283482   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:10:07.283554   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:10:07.316461   86402 cri.go:89] found id: ""
	I1104 12:10:07.316490   86402 logs.go:282] 0 containers: []
	W1104 12:10:07.316507   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:10:07.316513   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:10:07.316569   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:10:07.361981   86402 cri.go:89] found id: ""
	I1104 12:10:07.362010   86402 logs.go:282] 0 containers: []
	W1104 12:10:07.362018   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:10:07.362024   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:10:07.362087   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:10:07.397834   86402 cri.go:89] found id: ""
	I1104 12:10:07.397867   86402 logs.go:282] 0 containers: []
	W1104 12:10:07.397878   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:10:07.397886   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:10:07.397948   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:10:07.429379   86402 cri.go:89] found id: ""
	I1104 12:10:07.429407   86402 logs.go:282] 0 containers: []
	W1104 12:10:07.429416   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:10:07.429425   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:10:07.429438   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:10:07.495294   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:10:07.495322   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:10:07.495334   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:10:07.578504   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:10:07.578546   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:10:07.617172   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:10:07.617201   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:10:07.667168   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:10:07.667204   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:10:10.181802   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:10:10.196017   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:10:10.196084   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:10:10.228243   86402 cri.go:89] found id: ""
	I1104 12:10:10.228272   86402 logs.go:282] 0 containers: []
	W1104 12:10:10.228282   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:10:10.228289   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:10:10.228347   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:10:10.262110   86402 cri.go:89] found id: ""
	I1104 12:10:10.262143   86402 logs.go:282] 0 containers: []
	W1104 12:10:10.262152   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:10:10.262161   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:10:10.262218   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:10:10.297776   86402 cri.go:89] found id: ""
	I1104 12:10:10.297812   86402 logs.go:282] 0 containers: []
	W1104 12:10:10.297823   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:10:10.297830   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:10:10.297877   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:10:10.332645   86402 cri.go:89] found id: ""
	I1104 12:10:10.332672   86402 logs.go:282] 0 containers: []
	W1104 12:10:10.332680   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:10:10.332685   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:10:10.332730   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:10:10.366703   86402 cri.go:89] found id: ""
	I1104 12:10:10.366735   86402 logs.go:282] 0 containers: []
	W1104 12:10:10.366746   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:10:10.366754   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:10:10.366809   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:10:10.399500   86402 cri.go:89] found id: ""
	I1104 12:10:10.399526   86402 logs.go:282] 0 containers: []
	W1104 12:10:10.399534   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:10:10.399539   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:10:10.399634   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:10:10.434898   86402 cri.go:89] found id: ""
	I1104 12:10:10.434932   86402 logs.go:282] 0 containers: []
	W1104 12:10:10.434943   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:10:10.434951   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:10:10.435022   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:10:10.472159   86402 cri.go:89] found id: ""
	I1104 12:10:10.472189   86402 logs.go:282] 0 containers: []
	W1104 12:10:10.472201   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:10:10.472225   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:10:10.472246   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:10:10.528710   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:10:10.528769   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:10:10.541943   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:10:10.541973   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:10:10.621819   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:10:10.621843   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:10:10.621855   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:10:10.698301   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:10:10.698335   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:10:11.208069   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:13.707594   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:10.556520   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:13.056160   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:15.056984   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:12.851001   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:14.851591   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:13.235151   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:10:13.247511   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:10:13.247585   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:10:13.278546   86402 cri.go:89] found id: ""
	I1104 12:10:13.278576   86402 logs.go:282] 0 containers: []
	W1104 12:10:13.278586   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:10:13.278592   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:10:13.278655   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:10:13.310297   86402 cri.go:89] found id: ""
	I1104 12:10:13.310325   86402 logs.go:282] 0 containers: []
	W1104 12:10:13.310334   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:10:13.310340   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:10:13.310394   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:10:13.344110   86402 cri.go:89] found id: ""
	I1104 12:10:13.344139   86402 logs.go:282] 0 containers: []
	W1104 12:10:13.344150   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:10:13.344158   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:10:13.344210   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:10:13.379778   86402 cri.go:89] found id: ""
	I1104 12:10:13.379806   86402 logs.go:282] 0 containers: []
	W1104 12:10:13.379817   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:10:13.379824   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:10:13.379872   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:10:13.411763   86402 cri.go:89] found id: ""
	I1104 12:10:13.411795   86402 logs.go:282] 0 containers: []
	W1104 12:10:13.411806   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:10:13.411813   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:10:13.411872   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:10:13.445192   86402 cri.go:89] found id: ""
	I1104 12:10:13.445217   86402 logs.go:282] 0 containers: []
	W1104 12:10:13.445235   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:10:13.445243   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:10:13.445297   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:10:13.478518   86402 cri.go:89] found id: ""
	I1104 12:10:13.478549   86402 logs.go:282] 0 containers: []
	W1104 12:10:13.478561   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:10:13.478569   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:10:13.478710   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:10:13.513852   86402 cri.go:89] found id: ""
	I1104 12:10:13.513878   86402 logs.go:282] 0 containers: []
	W1104 12:10:13.513886   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:10:13.513895   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:10:13.513909   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:10:13.590413   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:10:13.590439   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:10:13.590454   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:10:13.664575   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:10:13.664608   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:10:13.700616   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:10:13.700644   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:10:13.751113   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:10:13.751147   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:10:16.264311   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:10:16.277443   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:10:16.277508   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:10:16.309983   86402 cri.go:89] found id: ""
	I1104 12:10:16.310010   86402 logs.go:282] 0 containers: []
	W1104 12:10:16.310020   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:10:16.310025   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:10:16.310073   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:10:16.358281   86402 cri.go:89] found id: ""
	I1104 12:10:16.358305   86402 logs.go:282] 0 containers: []
	W1104 12:10:16.358312   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:10:16.358317   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:10:16.358376   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:10:16.394455   86402 cri.go:89] found id: ""
	I1104 12:10:16.394485   86402 logs.go:282] 0 containers: []
	W1104 12:10:16.394497   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:10:16.394503   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:10:16.394571   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:10:16.430606   86402 cri.go:89] found id: ""
	I1104 12:10:16.430638   86402 logs.go:282] 0 containers: []
	W1104 12:10:16.430648   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:10:16.430655   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:10:16.430716   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:10:16.464402   86402 cri.go:89] found id: ""
	I1104 12:10:16.464439   86402 logs.go:282] 0 containers: []
	W1104 12:10:16.464450   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:10:16.464458   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:10:16.464517   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:10:16.497985   86402 cri.go:89] found id: ""
	I1104 12:10:16.498009   86402 logs.go:282] 0 containers: []
	W1104 12:10:16.498017   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:10:16.498022   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:10:16.498076   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:10:16.531255   86402 cri.go:89] found id: ""
	I1104 12:10:16.531289   86402 logs.go:282] 0 containers: []
	W1104 12:10:16.531301   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:10:16.531309   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:10:16.531372   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:10:16.566176   86402 cri.go:89] found id: ""
	I1104 12:10:16.566204   86402 logs.go:282] 0 containers: []
	W1104 12:10:16.566213   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:10:16.566228   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:10:16.566243   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:10:16.634157   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:10:16.634196   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:10:16.634218   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:10:16.206939   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:18.208360   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:17.555513   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:19.556105   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:17.351026   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:19.351294   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:16.710518   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:10:16.710550   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:10:16.746572   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:10:16.746608   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:10:16.797146   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:10:16.797179   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:10:19.310286   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:10:19.323409   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:10:19.323473   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:10:19.360864   86402 cri.go:89] found id: ""
	I1104 12:10:19.360893   86402 logs.go:282] 0 containers: []
	W1104 12:10:19.360902   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:10:19.360907   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:10:19.360962   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:10:19.400127   86402 cri.go:89] found id: ""
	I1104 12:10:19.400155   86402 logs.go:282] 0 containers: []
	W1104 12:10:19.400167   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:10:19.400174   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:10:19.400230   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:10:19.433023   86402 cri.go:89] found id: ""
	I1104 12:10:19.433049   86402 logs.go:282] 0 containers: []
	W1104 12:10:19.433057   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:10:19.433062   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:10:19.433123   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:10:19.467786   86402 cri.go:89] found id: ""
	I1104 12:10:19.467810   86402 logs.go:282] 0 containers: []
	W1104 12:10:19.467819   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:10:19.467825   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:10:19.467875   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:10:19.498411   86402 cri.go:89] found id: ""
	I1104 12:10:19.498436   86402 logs.go:282] 0 containers: []
	W1104 12:10:19.498444   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:10:19.498455   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:10:19.498502   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:10:19.532146   86402 cri.go:89] found id: ""
	I1104 12:10:19.532171   86402 logs.go:282] 0 containers: []
	W1104 12:10:19.532179   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:10:19.532184   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:10:19.532234   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:10:19.567271   86402 cri.go:89] found id: ""
	I1104 12:10:19.567294   86402 logs.go:282] 0 containers: []
	W1104 12:10:19.567302   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:10:19.567308   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:10:19.567369   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:10:19.608233   86402 cri.go:89] found id: ""
	I1104 12:10:19.608265   86402 logs.go:282] 0 containers: []
	W1104 12:10:19.608279   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:10:19.608289   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:10:19.608304   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:10:19.649039   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:10:19.649071   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:10:19.702129   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:10:19.702168   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:10:19.716749   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:10:19.716776   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:10:19.787538   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:10:19.787560   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:10:19.787572   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:10:20.208694   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:22.708289   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:21.556715   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:23.557173   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:21.851010   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:23.852944   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:22.368982   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:10:22.382889   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:10:22.382962   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:10:22.418672   86402 cri.go:89] found id: ""
	I1104 12:10:22.418698   86402 logs.go:282] 0 containers: []
	W1104 12:10:22.418709   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:10:22.418716   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:10:22.418782   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:10:22.451675   86402 cri.go:89] found id: ""
	I1104 12:10:22.451704   86402 logs.go:282] 0 containers: []
	W1104 12:10:22.451715   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:10:22.451723   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:10:22.451785   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:10:22.488520   86402 cri.go:89] found id: ""
	I1104 12:10:22.488549   86402 logs.go:282] 0 containers: []
	W1104 12:10:22.488561   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:10:22.488567   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:10:22.488631   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:10:22.530288   86402 cri.go:89] found id: ""
	I1104 12:10:22.530312   86402 logs.go:282] 0 containers: []
	W1104 12:10:22.530321   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:10:22.530326   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:10:22.530382   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:10:22.564929   86402 cri.go:89] found id: ""
	I1104 12:10:22.564958   86402 logs.go:282] 0 containers: []
	W1104 12:10:22.564970   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:10:22.564977   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:10:22.565036   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:10:22.598015   86402 cri.go:89] found id: ""
	I1104 12:10:22.598042   86402 logs.go:282] 0 containers: []
	W1104 12:10:22.598051   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:10:22.598056   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:10:22.598160   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:10:22.632894   86402 cri.go:89] found id: ""
	I1104 12:10:22.632921   86402 logs.go:282] 0 containers: []
	W1104 12:10:22.632930   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:10:22.632935   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:10:22.633001   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:10:22.665194   86402 cri.go:89] found id: ""
	I1104 12:10:22.665218   86402 logs.go:282] 0 containers: []
	W1104 12:10:22.665245   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:10:22.665257   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:10:22.665272   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:10:22.717731   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:10:22.717763   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:10:22.732671   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:10:22.732698   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:10:22.823908   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:10:22.823946   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:10:22.823963   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:10:22.907812   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:10:22.907848   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:10:25.449308   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:10:25.461694   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:10:25.461751   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:10:25.493036   86402 cri.go:89] found id: ""
	I1104 12:10:25.493061   86402 logs.go:282] 0 containers: []
	W1104 12:10:25.493068   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:10:25.493075   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:10:25.493122   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:10:25.525084   86402 cri.go:89] found id: ""
	I1104 12:10:25.525116   86402 logs.go:282] 0 containers: []
	W1104 12:10:25.525128   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:10:25.525135   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:10:25.525196   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:10:25.561380   86402 cri.go:89] found id: ""
	I1104 12:10:25.561424   86402 logs.go:282] 0 containers: []
	W1104 12:10:25.561436   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:10:25.561444   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:10:25.561499   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:10:25.595429   86402 cri.go:89] found id: ""
	I1104 12:10:25.595453   86402 logs.go:282] 0 containers: []
	W1104 12:10:25.595468   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:10:25.595474   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:10:25.595521   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:10:25.627409   86402 cri.go:89] found id: ""
	I1104 12:10:25.627436   86402 logs.go:282] 0 containers: []
	W1104 12:10:25.627445   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:10:25.627450   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:10:25.627497   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:10:25.661048   86402 cri.go:89] found id: ""
	I1104 12:10:25.661073   86402 logs.go:282] 0 containers: []
	W1104 12:10:25.661082   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:10:25.661088   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:10:25.661135   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:10:25.698882   86402 cri.go:89] found id: ""
	I1104 12:10:25.698912   86402 logs.go:282] 0 containers: []
	W1104 12:10:25.698920   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:10:25.698926   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:10:25.698978   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:10:25.733355   86402 cri.go:89] found id: ""
	I1104 12:10:25.733397   86402 logs.go:282] 0 containers: []
	W1104 12:10:25.733409   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:10:25.733420   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:10:25.733435   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:10:25.784871   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:10:25.784908   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:10:25.798715   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:10:25.798740   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:10:25.870362   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:10:25.870383   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:10:25.870397   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:10:25.950565   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:10:25.950598   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:10:25.209496   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:27.706991   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:29.708209   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:26.055597   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:28.055845   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:30.056584   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:26.351027   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:28.851204   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:28.488258   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:10:28.506058   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:10:28.506114   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:10:28.566325   86402 cri.go:89] found id: ""
	I1104 12:10:28.566351   86402 logs.go:282] 0 containers: []
	W1104 12:10:28.566358   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:10:28.566364   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:10:28.566413   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:10:28.612753   86402 cri.go:89] found id: ""
	I1104 12:10:28.612781   86402 logs.go:282] 0 containers: []
	W1104 12:10:28.612790   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:10:28.612796   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:10:28.612854   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:10:28.647082   86402 cri.go:89] found id: ""
	I1104 12:10:28.647109   86402 logs.go:282] 0 containers: []
	W1104 12:10:28.647120   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:10:28.647128   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:10:28.647205   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:10:28.683197   86402 cri.go:89] found id: ""
	I1104 12:10:28.683227   86402 logs.go:282] 0 containers: []
	W1104 12:10:28.683239   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:10:28.683247   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:10:28.683299   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:10:28.718139   86402 cri.go:89] found id: ""
	I1104 12:10:28.718175   86402 logs.go:282] 0 containers: []
	W1104 12:10:28.718186   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:10:28.718194   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:10:28.718253   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:10:28.749689   86402 cri.go:89] found id: ""
	I1104 12:10:28.749721   86402 logs.go:282] 0 containers: []
	W1104 12:10:28.749732   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:10:28.749739   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:10:28.749803   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:10:28.786824   86402 cri.go:89] found id: ""
	I1104 12:10:28.786851   86402 logs.go:282] 0 containers: []
	W1104 12:10:28.786859   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:10:28.786864   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:10:28.786925   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:10:28.822833   86402 cri.go:89] found id: ""
	I1104 12:10:28.822856   86402 logs.go:282] 0 containers: []
	W1104 12:10:28.822865   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:10:28.822872   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:10:28.822884   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:10:28.835267   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:10:28.835298   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:10:28.900051   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:10:28.900076   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:10:28.900089   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:10:28.979867   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:10:28.979912   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:10:29.017294   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:10:29.017327   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:10:31.569559   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:10:31.582065   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:10:31.582136   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:10:31.614924   86402 cri.go:89] found id: ""
	I1104 12:10:31.614952   86402 logs.go:282] 0 containers: []
	W1104 12:10:31.614960   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:10:31.614966   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:10:31.615029   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:10:31.647178   86402 cri.go:89] found id: ""
	I1104 12:10:31.647204   86402 logs.go:282] 0 containers: []
	W1104 12:10:31.647212   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:10:31.647218   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:10:31.647277   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:10:31.678723   86402 cri.go:89] found id: ""
	I1104 12:10:31.678749   86402 logs.go:282] 0 containers: []
	W1104 12:10:31.678761   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:10:31.678769   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:10:31.678819   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:10:31.709787   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:34.208234   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:32.555978   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:34.557026   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:31.351700   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:33.850976   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:35.851636   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:31.713013   86402 cri.go:89] found id: ""
	I1104 12:10:31.713036   86402 logs.go:282] 0 containers: []
	W1104 12:10:31.713043   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:10:31.713048   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:10:31.713092   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:10:31.746564   86402 cri.go:89] found id: ""
	I1104 12:10:31.746591   86402 logs.go:282] 0 containers: []
	W1104 12:10:31.746600   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:10:31.746605   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:10:31.746658   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:10:31.779559   86402 cri.go:89] found id: ""
	I1104 12:10:31.779586   86402 logs.go:282] 0 containers: []
	W1104 12:10:31.779594   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:10:31.779601   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:10:31.779652   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:10:31.812047   86402 cri.go:89] found id: ""
	I1104 12:10:31.812076   86402 logs.go:282] 0 containers: []
	W1104 12:10:31.812087   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:10:31.812094   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:10:31.812163   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:10:31.845479   86402 cri.go:89] found id: ""
	I1104 12:10:31.845510   86402 logs.go:282] 0 containers: []
	W1104 12:10:31.845522   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:10:31.845532   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:10:31.845551   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:10:31.909399   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:10:31.909423   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:10:31.909434   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:10:31.985994   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:10:31.986031   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:10:32.023222   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:10:32.023255   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:10:32.074429   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:10:32.074467   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:10:34.588202   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:10:34.600925   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:10:34.600994   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:10:34.632718   86402 cri.go:89] found id: ""
	I1104 12:10:34.632743   86402 logs.go:282] 0 containers: []
	W1104 12:10:34.632754   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:10:34.632763   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:10:34.632813   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:10:34.665553   86402 cri.go:89] found id: ""
	I1104 12:10:34.665576   86402 logs.go:282] 0 containers: []
	W1104 12:10:34.665585   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:10:34.665590   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:10:34.665641   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:10:34.700059   86402 cri.go:89] found id: ""
	I1104 12:10:34.700081   86402 logs.go:282] 0 containers: []
	W1104 12:10:34.700089   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:10:34.700094   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:10:34.700141   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:10:34.732940   86402 cri.go:89] found id: ""
	I1104 12:10:34.732962   86402 logs.go:282] 0 containers: []
	W1104 12:10:34.732970   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:10:34.732978   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:10:34.733023   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:10:34.764580   86402 cri.go:89] found id: ""
	I1104 12:10:34.764610   86402 logs.go:282] 0 containers: []
	W1104 12:10:34.764618   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:10:34.764624   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:10:34.764680   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:10:34.798030   86402 cri.go:89] found id: ""
	I1104 12:10:34.798053   86402 logs.go:282] 0 containers: []
	W1104 12:10:34.798061   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:10:34.798067   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:10:34.798115   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:10:34.829847   86402 cri.go:89] found id: ""
	I1104 12:10:34.829876   86402 logs.go:282] 0 containers: []
	W1104 12:10:34.829884   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:10:34.829889   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:10:34.829946   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:10:34.862764   86402 cri.go:89] found id: ""
	I1104 12:10:34.862792   86402 logs.go:282] 0 containers: []
	W1104 12:10:34.862804   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:10:34.862815   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:10:34.862828   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:10:34.912367   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:10:34.912397   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:10:34.925347   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:10:34.925383   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:10:34.990459   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:10:34.990486   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:10:34.990502   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:10:35.066765   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:10:35.066796   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:10:36.706912   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:38.707144   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:37.056279   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:39.555433   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:38.349986   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:40.354694   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:37.602696   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:10:37.615041   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:10:37.615115   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:10:37.646872   86402 cri.go:89] found id: ""
	I1104 12:10:37.646900   86402 logs.go:282] 0 containers: []
	W1104 12:10:37.646911   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:10:37.646918   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:10:37.646977   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:10:37.679770   86402 cri.go:89] found id: ""
	I1104 12:10:37.679797   86402 logs.go:282] 0 containers: []
	W1104 12:10:37.679805   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:10:37.679810   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:10:37.679867   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:10:37.711693   86402 cri.go:89] found id: ""
	I1104 12:10:37.711720   86402 logs.go:282] 0 containers: []
	W1104 12:10:37.711733   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:10:37.711743   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:10:37.711803   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:10:37.746605   86402 cri.go:89] found id: ""
	I1104 12:10:37.746636   86402 logs.go:282] 0 containers: []
	W1104 12:10:37.746648   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:10:37.746656   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:10:37.746716   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:10:37.778983   86402 cri.go:89] found id: ""
	I1104 12:10:37.779010   86402 logs.go:282] 0 containers: []
	W1104 12:10:37.779020   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:10:37.779026   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:10:37.779086   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:10:37.813293   86402 cri.go:89] found id: ""
	I1104 12:10:37.813321   86402 logs.go:282] 0 containers: []
	W1104 12:10:37.813330   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:10:37.813335   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:10:37.813387   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:10:37.846181   86402 cri.go:89] found id: ""
	I1104 12:10:37.846209   86402 logs.go:282] 0 containers: []
	W1104 12:10:37.846219   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:10:37.846226   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:10:37.846287   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:10:37.877485   86402 cri.go:89] found id: ""
	I1104 12:10:37.877520   86402 logs.go:282] 0 containers: []
	W1104 12:10:37.877531   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:10:37.877541   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:10:37.877558   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:10:37.926704   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:10:37.926733   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:10:37.939771   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:10:37.939796   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:10:38.003762   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:10:38.003783   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:10:38.003800   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:10:38.085419   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:10:38.085456   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:10:40.625351   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:10:40.637380   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:10:40.637459   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:10:40.670274   86402 cri.go:89] found id: ""
	I1104 12:10:40.670303   86402 logs.go:282] 0 containers: []
	W1104 12:10:40.670315   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:10:40.670322   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:10:40.670382   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:10:40.703383   86402 cri.go:89] found id: ""
	I1104 12:10:40.703414   86402 logs.go:282] 0 containers: []
	W1104 12:10:40.703427   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:10:40.703434   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:10:40.703481   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:10:40.739549   86402 cri.go:89] found id: ""
	I1104 12:10:40.739576   86402 logs.go:282] 0 containers: []
	W1104 12:10:40.739586   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:10:40.739594   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:10:40.739651   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:10:40.775466   86402 cri.go:89] found id: ""
	I1104 12:10:40.775492   86402 logs.go:282] 0 containers: []
	W1104 12:10:40.775502   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:10:40.775513   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:10:40.775567   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:10:40.810486   86402 cri.go:89] found id: ""
	I1104 12:10:40.810515   86402 logs.go:282] 0 containers: []
	W1104 12:10:40.810525   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:10:40.810533   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:10:40.810593   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:10:40.844277   86402 cri.go:89] found id: ""
	I1104 12:10:40.844309   86402 logs.go:282] 0 containers: []
	W1104 12:10:40.844321   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:10:40.844329   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:10:40.844391   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:10:40.878699   86402 cri.go:89] found id: ""
	I1104 12:10:40.878728   86402 logs.go:282] 0 containers: []
	W1104 12:10:40.878739   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:10:40.878746   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:10:40.878804   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:10:40.913888   86402 cri.go:89] found id: ""
	I1104 12:10:40.913913   86402 logs.go:282] 0 containers: []
	W1104 12:10:40.913921   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:10:40.913929   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:10:40.913939   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:10:40.966854   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:10:40.966892   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:10:40.980483   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:10:40.980510   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:10:41.046059   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:10:41.046085   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:10:41.046100   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:10:41.129746   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:10:41.129779   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:10:40.707964   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:43.207804   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:42.057019   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:44.555947   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:42.850057   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:44.851467   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:43.667029   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:10:43.680024   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:10:43.680092   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:10:43.714185   86402 cri.go:89] found id: ""
	I1104 12:10:43.714218   86402 logs.go:282] 0 containers: []
	W1104 12:10:43.714227   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:10:43.714235   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:10:43.714294   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:10:43.749493   86402 cri.go:89] found id: ""
	I1104 12:10:43.749515   86402 logs.go:282] 0 containers: []
	W1104 12:10:43.749523   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:10:43.749529   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:10:43.749588   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:10:43.785400   86402 cri.go:89] found id: ""
	I1104 12:10:43.785426   86402 logs.go:282] 0 containers: []
	W1104 12:10:43.785437   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:10:43.785444   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:10:43.785507   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:10:43.818465   86402 cri.go:89] found id: ""
	I1104 12:10:43.818505   86402 logs.go:282] 0 containers: []
	W1104 12:10:43.818517   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:10:43.818524   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:10:43.818573   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:10:43.850232   86402 cri.go:89] found id: ""
	I1104 12:10:43.850262   86402 logs.go:282] 0 containers: []
	W1104 12:10:43.850272   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:10:43.850279   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:10:43.850337   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:10:43.882806   86402 cri.go:89] found id: ""
	I1104 12:10:43.882840   86402 logs.go:282] 0 containers: []
	W1104 12:10:43.882851   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:10:43.882859   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:10:43.882920   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:10:43.919449   86402 cri.go:89] found id: ""
	I1104 12:10:43.919476   86402 logs.go:282] 0 containers: []
	W1104 12:10:43.919486   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:10:43.919493   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:10:43.919556   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:10:43.953761   86402 cri.go:89] found id: ""
	I1104 12:10:43.953791   86402 logs.go:282] 0 containers: []
	W1104 12:10:43.953801   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:10:43.953812   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:10:43.953825   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:10:44.005559   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:10:44.005594   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:10:44.019431   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:10:44.019456   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:10:44.094436   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:10:44.094457   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:10:44.094470   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:10:44.174026   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:10:44.174061   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:10:45.707449   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:47.709901   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:46.557050   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:48.557552   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:46.851720   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:49.350269   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:46.712021   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:10:46.724258   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:10:46.724318   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:10:46.754472   86402 cri.go:89] found id: ""
	I1104 12:10:46.754501   86402 logs.go:282] 0 containers: []
	W1104 12:10:46.754510   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:10:46.754515   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:10:46.754563   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:10:46.790184   86402 cri.go:89] found id: ""
	I1104 12:10:46.790209   86402 logs.go:282] 0 containers: []
	W1104 12:10:46.790219   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:10:46.790226   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:10:46.790284   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:10:46.824840   86402 cri.go:89] found id: ""
	I1104 12:10:46.824865   86402 logs.go:282] 0 containers: []
	W1104 12:10:46.824875   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:10:46.824882   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:10:46.824952   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:10:46.857295   86402 cri.go:89] found id: ""
	I1104 12:10:46.857329   86402 logs.go:282] 0 containers: []
	W1104 12:10:46.857360   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:10:46.857369   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:10:46.857430   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:10:46.889540   86402 cri.go:89] found id: ""
	I1104 12:10:46.889571   86402 logs.go:282] 0 containers: []
	W1104 12:10:46.889582   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:10:46.889588   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:10:46.889652   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:10:46.930165   86402 cri.go:89] found id: ""
	I1104 12:10:46.930195   86402 logs.go:282] 0 containers: []
	W1104 12:10:46.930204   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:10:46.930210   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:10:46.930266   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:10:46.965964   86402 cri.go:89] found id: ""
	I1104 12:10:46.965994   86402 logs.go:282] 0 containers: []
	W1104 12:10:46.966006   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:10:46.966013   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:10:46.966060   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:10:47.002700   86402 cri.go:89] found id: ""
	I1104 12:10:47.002732   86402 logs.go:282] 0 containers: []
	W1104 12:10:47.002741   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:10:47.002749   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:10:47.002760   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:10:47.056362   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:10:47.056392   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:10:47.070447   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:10:47.070472   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:10:47.143207   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:10:47.143240   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:10:47.143256   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:10:47.223985   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:10:47.224015   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:10:49.765870   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:10:49.778288   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:10:49.778352   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:10:49.812012   86402 cri.go:89] found id: ""
	I1104 12:10:49.812044   86402 logs.go:282] 0 containers: []
	W1104 12:10:49.812054   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:10:49.812064   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:10:49.812115   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:10:49.847260   86402 cri.go:89] found id: ""
	I1104 12:10:49.847290   86402 logs.go:282] 0 containers: []
	W1104 12:10:49.847301   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:10:49.847308   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:10:49.847361   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:10:49.877397   86402 cri.go:89] found id: ""
	I1104 12:10:49.877419   86402 logs.go:282] 0 containers: []
	W1104 12:10:49.877427   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:10:49.877432   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:10:49.877486   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:10:49.912453   86402 cri.go:89] found id: ""
	I1104 12:10:49.912484   86402 logs.go:282] 0 containers: []
	W1104 12:10:49.912499   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:10:49.912506   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:10:49.912572   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:10:49.948374   86402 cri.go:89] found id: ""
	I1104 12:10:49.948404   86402 logs.go:282] 0 containers: []
	W1104 12:10:49.948416   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:10:49.948422   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:10:49.948488   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:10:49.982190   86402 cri.go:89] found id: ""
	I1104 12:10:49.982216   86402 logs.go:282] 0 containers: []
	W1104 12:10:49.982228   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:10:49.982236   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:10:49.982294   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:10:50.014396   86402 cri.go:89] found id: ""
	I1104 12:10:50.014426   86402 logs.go:282] 0 containers: []
	W1104 12:10:50.014437   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:10:50.014445   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:10:50.014507   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:10:50.051770   86402 cri.go:89] found id: ""
	I1104 12:10:50.051793   86402 logs.go:282] 0 containers: []
	W1104 12:10:50.051801   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:10:50.051809   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:10:50.051820   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:10:50.116158   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:10:50.116185   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:10:50.116202   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:10:50.194382   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:10:50.194431   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:10:50.235957   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:10:50.235983   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:10:50.290720   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:10:50.290750   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:10:50.207837   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:52.207972   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:54.208026   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:51.055965   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:53.056014   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:55.056318   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:51.850513   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:54.351193   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:52.805144   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:10:52.817686   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:10:52.817753   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:10:52.852470   86402 cri.go:89] found id: ""
	I1104 12:10:52.852492   86402 logs.go:282] 0 containers: []
	W1104 12:10:52.852546   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:10:52.852559   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:10:52.852603   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:10:52.889682   86402 cri.go:89] found id: ""
	I1104 12:10:52.889705   86402 logs.go:282] 0 containers: []
	W1104 12:10:52.889714   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:10:52.889720   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:10:52.889773   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:10:52.924490   86402 cri.go:89] found id: ""
	I1104 12:10:52.924525   86402 logs.go:282] 0 containers: []
	W1104 12:10:52.924537   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:10:52.924544   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:10:52.924604   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:10:52.957055   86402 cri.go:89] found id: ""
	I1104 12:10:52.957085   86402 logs.go:282] 0 containers: []
	W1104 12:10:52.957094   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:10:52.957099   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:10:52.957143   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:10:52.993379   86402 cri.go:89] found id: ""
	I1104 12:10:52.993411   86402 logs.go:282] 0 containers: []
	W1104 12:10:52.993423   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:10:52.993430   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:10:52.993493   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:10:53.027365   86402 cri.go:89] found id: ""
	I1104 12:10:53.027398   86402 logs.go:282] 0 containers: []
	W1104 12:10:53.027407   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:10:53.027412   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:10:53.027488   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:10:53.061048   86402 cri.go:89] found id: ""
	I1104 12:10:53.061074   86402 logs.go:282] 0 containers: []
	W1104 12:10:53.061082   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:10:53.061089   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:10:53.061163   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:10:53.101867   86402 cri.go:89] found id: ""
	I1104 12:10:53.101894   86402 logs.go:282] 0 containers: []
	W1104 12:10:53.101904   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:10:53.101915   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:10:53.101927   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:10:53.152314   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:10:53.152351   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:10:53.165630   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:10:53.165657   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:10:53.239717   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:10:53.239739   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:10:53.239753   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:10:53.318140   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:10:53.318186   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:10:55.857443   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:10:55.869524   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:10:55.869608   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:10:55.900719   86402 cri.go:89] found id: ""
	I1104 12:10:55.900743   86402 logs.go:282] 0 containers: []
	W1104 12:10:55.900753   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:10:55.900761   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:10:55.900821   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:10:55.932699   86402 cri.go:89] found id: ""
	I1104 12:10:55.932724   86402 logs.go:282] 0 containers: []
	W1104 12:10:55.932734   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:10:55.932741   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:10:55.932798   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:10:55.964729   86402 cri.go:89] found id: ""
	I1104 12:10:55.964758   86402 logs.go:282] 0 containers: []
	W1104 12:10:55.964767   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:10:55.964775   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:10:55.964823   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:10:55.997870   86402 cri.go:89] found id: ""
	I1104 12:10:55.997897   86402 logs.go:282] 0 containers: []
	W1104 12:10:55.997907   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:10:55.997915   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:10:55.997977   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:10:56.031707   86402 cri.go:89] found id: ""
	I1104 12:10:56.031736   86402 logs.go:282] 0 containers: []
	W1104 12:10:56.031744   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:10:56.031749   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:10:56.031805   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:10:56.070839   86402 cri.go:89] found id: ""
	I1104 12:10:56.070863   86402 logs.go:282] 0 containers: []
	W1104 12:10:56.070871   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:10:56.070877   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:10:56.070922   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:10:56.109364   86402 cri.go:89] found id: ""
	I1104 12:10:56.109393   86402 logs.go:282] 0 containers: []
	W1104 12:10:56.109404   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:10:56.109412   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:10:56.109474   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:10:56.143369   86402 cri.go:89] found id: ""
	I1104 12:10:56.143402   86402 logs.go:282] 0 containers: []
	W1104 12:10:56.143414   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:10:56.143424   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:10:56.143437   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:10:56.156924   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:10:56.156952   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:10:56.223624   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:10:56.223647   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:10:56.223659   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:10:56.302040   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:10:56.302082   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:10:56.343102   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:10:56.343150   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
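The block above is one pass of the harness's readiness probe against the v1.20.0 control plane: a pgrep for a kube-apiserver process, a crictl lookup for each expected container, and, when nothing is found, a fallback to gathering kubelet, dmesg, describe-nodes, CRI-O and container-status output. Below is a minimal shell sketch of the same per-container check, built only from the crictl invocation shown in the log; the component list is copied from the log and the 3-second retry interval is inferred from the timestamps, so treat both as assumptions rather than the harness's actual implementation.

    #!/bin/bash
    # Sketch of the container probe visible in the log above (assumed ~3s retry).
    components="kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard"
    while true; do
      missing=0
      for name in $components; do
        # Same lookup the harness runs: list all containers matching the name.
        ids=$(sudo crictl ps -a --quiet --name="$name")
        if [ -z "$ids" ]; then
          echo "No container was found matching \"$name\""
          missing=$((missing + 1))
        fi
      done
      [ "$missing" -eq 0 ] && break
      sleep 3
    done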
	I1104 12:10:56.209085   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:58.712250   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:57.056463   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:59.555744   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:56.850242   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:58.850955   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:58.896551   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:10:58.909034   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:10:58.909110   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:10:58.944520   86402 cri.go:89] found id: ""
	I1104 12:10:58.944550   86402 logs.go:282] 0 containers: []
	W1104 12:10:58.944559   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:10:58.944565   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:10:58.944612   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:10:58.980137   86402 cri.go:89] found id: ""
	I1104 12:10:58.980167   86402 logs.go:282] 0 containers: []
	W1104 12:10:58.980176   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:10:58.980181   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:10:58.980231   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:10:59.014505   86402 cri.go:89] found id: ""
	I1104 12:10:59.014536   86402 logs.go:282] 0 containers: []
	W1104 12:10:59.014545   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:10:59.014551   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:10:59.014602   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:10:59.050616   86402 cri.go:89] found id: ""
	I1104 12:10:59.050642   86402 logs.go:282] 0 containers: []
	W1104 12:10:59.050652   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:10:59.050659   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:10:59.050718   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:10:59.084328   86402 cri.go:89] found id: ""
	I1104 12:10:59.084358   86402 logs.go:282] 0 containers: []
	W1104 12:10:59.084369   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:10:59.084376   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:10:59.084449   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:10:59.116607   86402 cri.go:89] found id: ""
	I1104 12:10:59.116633   86402 logs.go:282] 0 containers: []
	W1104 12:10:59.116642   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:10:59.116649   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:10:59.116711   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:10:59.149727   86402 cri.go:89] found id: ""
	I1104 12:10:59.149754   86402 logs.go:282] 0 containers: []
	W1104 12:10:59.149765   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:10:59.149773   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:10:59.149832   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:10:59.182992   86402 cri.go:89] found id: ""
	I1104 12:10:59.183023   86402 logs.go:282] 0 containers: []
	W1104 12:10:59.183035   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:10:59.183045   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:10:59.183059   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:10:59.234826   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:10:59.234862   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:10:59.248401   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:10:59.248427   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:10:59.317143   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:10:59.317171   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:10:59.317186   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:10:59.397294   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:10:59.397336   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:11:01.208022   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:03.707297   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:01.556680   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:04.055902   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:01.350865   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:03.850510   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:01.933617   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:11:01.946458   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:11:01.946537   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:11:01.981652   86402 cri.go:89] found id: ""
	I1104 12:11:01.981682   86402 logs.go:282] 0 containers: []
	W1104 12:11:01.981693   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:11:01.981701   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:11:01.981757   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:11:02.014245   86402 cri.go:89] found id: ""
	I1104 12:11:02.014273   86402 logs.go:282] 0 containers: []
	W1104 12:11:02.014282   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:11:02.014287   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:11:02.014350   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:11:02.047386   86402 cri.go:89] found id: ""
	I1104 12:11:02.047409   86402 logs.go:282] 0 containers: []
	W1104 12:11:02.047420   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:11:02.047427   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:11:02.047488   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:11:02.086427   86402 cri.go:89] found id: ""
	I1104 12:11:02.086464   86402 logs.go:282] 0 containers: []
	W1104 12:11:02.086475   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:11:02.086483   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:11:02.086544   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:11:02.120219   86402 cri.go:89] found id: ""
	I1104 12:11:02.120246   86402 logs.go:282] 0 containers: []
	W1104 12:11:02.120255   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:11:02.120260   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:11:02.120318   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:11:02.153832   86402 cri.go:89] found id: ""
	I1104 12:11:02.153864   86402 logs.go:282] 0 containers: []
	W1104 12:11:02.153876   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:11:02.153884   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:11:02.153950   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:11:02.186237   86402 cri.go:89] found id: ""
	I1104 12:11:02.186266   86402 logs.go:282] 0 containers: []
	W1104 12:11:02.186278   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:11:02.186285   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:11:02.186351   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:11:02.219238   86402 cri.go:89] found id: ""
	I1104 12:11:02.219269   86402 logs.go:282] 0 containers: []
	W1104 12:11:02.219280   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:11:02.219290   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:11:02.219301   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:11:02.301062   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:11:02.301099   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:11:02.358585   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:11:02.358617   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:11:02.414153   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:11:02.414200   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:11:02.428429   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:11:02.428456   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:11:02.497040   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:11:04.998089   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:11:05.010890   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:11:05.010947   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:11:05.046483   86402 cri.go:89] found id: ""
	I1104 12:11:05.046513   86402 logs.go:282] 0 containers: []
	W1104 12:11:05.046523   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:11:05.046534   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:11:05.046594   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:11:05.079487   86402 cri.go:89] found id: ""
	I1104 12:11:05.079516   86402 logs.go:282] 0 containers: []
	W1104 12:11:05.079527   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:11:05.079535   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:11:05.079595   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:11:05.110968   86402 cri.go:89] found id: ""
	I1104 12:11:05.110997   86402 logs.go:282] 0 containers: []
	W1104 12:11:05.111004   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:11:05.111010   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:11:05.111057   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:11:05.143372   86402 cri.go:89] found id: ""
	I1104 12:11:05.143398   86402 logs.go:282] 0 containers: []
	W1104 12:11:05.143408   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:11:05.143415   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:11:05.143484   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:11:05.174691   86402 cri.go:89] found id: ""
	I1104 12:11:05.174717   86402 logs.go:282] 0 containers: []
	W1104 12:11:05.174730   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:11:05.174737   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:11:05.174802   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:11:05.210005   86402 cri.go:89] found id: ""
	I1104 12:11:05.210025   86402 logs.go:282] 0 containers: []
	W1104 12:11:05.210033   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:11:05.210041   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:11:05.210085   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:11:05.244874   86402 cri.go:89] found id: ""
	I1104 12:11:05.244899   86402 logs.go:282] 0 containers: []
	W1104 12:11:05.244908   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:11:05.244913   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:11:05.244956   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:11:05.276517   86402 cri.go:89] found id: ""
	I1104 12:11:05.276547   86402 logs.go:282] 0 containers: []
	W1104 12:11:05.276557   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:11:05.276568   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:11:05.276581   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:11:05.354057   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:11:05.354087   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:11:05.390848   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:11:05.390887   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:11:05.442659   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:11:05.442692   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:11:05.456290   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:11:05.456315   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:11:05.530310   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:11:06.207301   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:08.208333   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:06.056314   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:08.556910   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:06.350241   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:08.350774   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:10.351274   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
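Interleaved with that retry loop, three other test processes (85500, 86301, 85759) keep polling their metrics-server pods for the Ready condition. A rough manual equivalent of that poll with plain kubectl is sketched below; the minikube profile context for each process is not shown in these lines, so it is left out and would need to be supplied.

    # Hypothetical manual check mirroring the pod_ready poll above.
    kubectl -n kube-system get pod metrics-server-6867b74b74-2lxlg \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}{"\n"}'
    # Prints "False" while the pod is not Ready, matching the log output.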
	I1104 12:11:08.030545   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:11:08.043598   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:11:08.043654   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:11:08.081604   86402 cri.go:89] found id: ""
	I1104 12:11:08.081634   86402 logs.go:282] 0 containers: []
	W1104 12:11:08.081644   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:11:08.081652   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:11:08.081712   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:11:08.135357   86402 cri.go:89] found id: ""
	I1104 12:11:08.135388   86402 logs.go:282] 0 containers: []
	W1104 12:11:08.135398   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:11:08.135405   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:11:08.135470   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:11:08.173275   86402 cri.go:89] found id: ""
	I1104 12:11:08.173298   86402 logs.go:282] 0 containers: []
	W1104 12:11:08.173306   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:11:08.173311   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:11:08.173371   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:11:08.213415   86402 cri.go:89] found id: ""
	I1104 12:11:08.213439   86402 logs.go:282] 0 containers: []
	W1104 12:11:08.213448   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:11:08.213454   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:11:08.213507   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:11:08.244759   86402 cri.go:89] found id: ""
	I1104 12:11:08.244791   86402 logs.go:282] 0 containers: []
	W1104 12:11:08.244802   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:11:08.244809   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:11:08.244870   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:11:08.276643   86402 cri.go:89] found id: ""
	I1104 12:11:08.276666   86402 logs.go:282] 0 containers: []
	W1104 12:11:08.276675   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:11:08.276682   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:11:08.276751   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:11:08.308425   86402 cri.go:89] found id: ""
	I1104 12:11:08.308451   86402 logs.go:282] 0 containers: []
	W1104 12:11:08.308462   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:11:08.308469   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:11:08.308527   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:11:08.340645   86402 cri.go:89] found id: ""
	I1104 12:11:08.340675   86402 logs.go:282] 0 containers: []
	W1104 12:11:08.340687   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:11:08.340698   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:11:08.340712   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:11:08.413171   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:11:08.413196   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:11:08.413214   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:11:08.496208   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:11:08.496246   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:11:08.534527   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:11:08.534560   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:11:08.583515   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:11:08.583550   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:11:11.099000   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:11:11.112158   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:11:11.112236   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:11:11.145718   86402 cri.go:89] found id: ""
	I1104 12:11:11.145748   86402 logs.go:282] 0 containers: []
	W1104 12:11:11.145758   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:11:11.145765   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:11:11.145958   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:11:11.177270   86402 cri.go:89] found id: ""
	I1104 12:11:11.177301   86402 logs.go:282] 0 containers: []
	W1104 12:11:11.177317   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:11:11.177325   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:11:11.177396   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:11:11.209696   86402 cri.go:89] found id: ""
	I1104 12:11:11.209722   86402 logs.go:282] 0 containers: []
	W1104 12:11:11.209737   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:11:11.209742   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:11:11.209789   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:11:11.244034   86402 cri.go:89] found id: ""
	I1104 12:11:11.244061   86402 logs.go:282] 0 containers: []
	W1104 12:11:11.244069   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:11:11.244078   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:11:11.244135   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:11:11.276437   86402 cri.go:89] found id: ""
	I1104 12:11:11.276462   86402 logs.go:282] 0 containers: []
	W1104 12:11:11.276470   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:11:11.276476   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:11:11.276530   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:11:11.308954   86402 cri.go:89] found id: ""
	I1104 12:11:11.308980   86402 logs.go:282] 0 containers: []
	W1104 12:11:11.308988   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:11:11.308994   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:11:11.309057   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:11:11.342175   86402 cri.go:89] found id: ""
	I1104 12:11:11.342199   86402 logs.go:282] 0 containers: []
	W1104 12:11:11.342207   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:11:11.342211   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:11:11.342266   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:11:11.374810   86402 cri.go:89] found id: ""
	I1104 12:11:11.374839   86402 logs.go:282] 0 containers: []
	W1104 12:11:11.374851   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:11:11.374860   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:11:11.374875   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:11:11.443638   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:11:11.443667   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:11:11.443681   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:11:11.526996   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:11:11.527031   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:11:11.568297   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:11:11.568325   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:11:11.616229   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:11:11.616264   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:11:10.707934   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:12.708053   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:11.055469   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:13.055645   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:15.057348   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:12.851253   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:15.350857   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:14.130707   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:11:14.143045   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:11:14.143116   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:11:14.185422   86402 cri.go:89] found id: ""
	I1104 12:11:14.185461   86402 logs.go:282] 0 containers: []
	W1104 12:11:14.185471   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:11:14.185477   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:11:14.185525   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:11:14.219890   86402 cri.go:89] found id: ""
	I1104 12:11:14.219918   86402 logs.go:282] 0 containers: []
	W1104 12:11:14.219928   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:11:14.219938   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:11:14.219985   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:11:14.253256   86402 cri.go:89] found id: ""
	I1104 12:11:14.253286   86402 logs.go:282] 0 containers: []
	W1104 12:11:14.253296   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:11:14.253304   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:11:14.253364   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:11:14.286228   86402 cri.go:89] found id: ""
	I1104 12:11:14.286259   86402 logs.go:282] 0 containers: []
	W1104 12:11:14.286271   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:11:14.286279   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:11:14.286342   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:11:14.317065   86402 cri.go:89] found id: ""
	I1104 12:11:14.317091   86402 logs.go:282] 0 containers: []
	W1104 12:11:14.317101   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:11:14.317106   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:11:14.317168   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:11:14.348540   86402 cri.go:89] found id: ""
	I1104 12:11:14.348575   86402 logs.go:282] 0 containers: []
	W1104 12:11:14.348583   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:11:14.348589   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:11:14.348647   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:11:14.380824   86402 cri.go:89] found id: ""
	I1104 12:11:14.380849   86402 logs.go:282] 0 containers: []
	W1104 12:11:14.380858   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:11:14.380863   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:11:14.380924   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:11:14.413757   86402 cri.go:89] found id: ""
	I1104 12:11:14.413785   86402 logs.go:282] 0 containers: []
	W1104 12:11:14.413796   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:11:14.413806   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:11:14.413822   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:11:14.479311   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:11:14.479336   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:11:14.479349   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:11:14.572923   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:11:14.572959   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:11:14.620277   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:11:14.620359   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:11:14.674276   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:11:14.674310   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:11:15.208704   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:17.708523   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:17.555941   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:19.556233   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:17.351751   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:19.851087   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:17.187062   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:11:17.200179   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:11:17.200260   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:11:17.232208   86402 cri.go:89] found id: ""
	I1104 12:11:17.232231   86402 logs.go:282] 0 containers: []
	W1104 12:11:17.232238   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:11:17.232244   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:11:17.232298   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:11:17.266224   86402 cri.go:89] found id: ""
	I1104 12:11:17.266248   86402 logs.go:282] 0 containers: []
	W1104 12:11:17.266257   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:11:17.266262   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:11:17.266320   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:11:17.301909   86402 cri.go:89] found id: ""
	I1104 12:11:17.301940   86402 logs.go:282] 0 containers: []
	W1104 12:11:17.301948   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:11:17.301953   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:11:17.302005   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:11:17.339493   86402 cri.go:89] found id: ""
	I1104 12:11:17.339517   86402 logs.go:282] 0 containers: []
	W1104 12:11:17.339530   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:11:17.339537   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:11:17.339600   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:11:17.373879   86402 cri.go:89] found id: ""
	I1104 12:11:17.373927   86402 logs.go:282] 0 containers: []
	W1104 12:11:17.373938   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:11:17.373945   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:11:17.373996   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:11:17.405533   86402 cri.go:89] found id: ""
	I1104 12:11:17.405562   86402 logs.go:282] 0 containers: []
	W1104 12:11:17.405573   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:11:17.405583   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:11:17.405645   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:11:17.439421   86402 cri.go:89] found id: ""
	I1104 12:11:17.439451   86402 logs.go:282] 0 containers: []
	W1104 12:11:17.439460   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:11:17.439468   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:11:17.439532   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:11:17.474573   86402 cri.go:89] found id: ""
	I1104 12:11:17.474602   86402 logs.go:282] 0 containers: []
	W1104 12:11:17.474613   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:11:17.474623   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:11:17.474636   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:11:17.524497   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:11:17.524536   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:11:17.538421   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:11:17.538460   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:11:17.607299   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:11:17.607323   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:11:17.607337   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:11:17.684181   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:11:17.684224   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:11:20.223600   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:11:20.237793   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:11:20.237865   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:11:20.279656   86402 cri.go:89] found id: ""
	I1104 12:11:20.279682   86402 logs.go:282] 0 containers: []
	W1104 12:11:20.279693   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:11:20.279700   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:11:20.279767   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:11:20.337980   86402 cri.go:89] found id: ""
	I1104 12:11:20.338009   86402 logs.go:282] 0 containers: []
	W1104 12:11:20.338020   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:11:20.338027   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:11:20.338087   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:11:20.383183   86402 cri.go:89] found id: ""
	I1104 12:11:20.383217   86402 logs.go:282] 0 containers: []
	W1104 12:11:20.383226   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:11:20.383231   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:11:20.383282   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:11:20.416470   86402 cri.go:89] found id: ""
	I1104 12:11:20.416495   86402 logs.go:282] 0 containers: []
	W1104 12:11:20.416505   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:11:20.416512   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:11:20.416570   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:11:20.451968   86402 cri.go:89] found id: ""
	I1104 12:11:20.452000   86402 logs.go:282] 0 containers: []
	W1104 12:11:20.452011   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:11:20.452017   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:11:20.452074   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:11:20.484800   86402 cri.go:89] found id: ""
	I1104 12:11:20.484823   86402 logs.go:282] 0 containers: []
	W1104 12:11:20.484831   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:11:20.484837   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:11:20.484893   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:11:20.516263   86402 cri.go:89] found id: ""
	I1104 12:11:20.516292   86402 logs.go:282] 0 containers: []
	W1104 12:11:20.516300   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:11:20.516306   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:11:20.516364   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:11:20.548616   86402 cri.go:89] found id: ""
	I1104 12:11:20.548640   86402 logs.go:282] 0 containers: []
	W1104 12:11:20.548651   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:11:20.548661   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:11:20.548674   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:11:20.599338   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:11:20.599368   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:11:20.613116   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:11:20.613148   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:11:20.678898   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:11:20.678924   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:11:20.678936   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:11:20.757570   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:11:20.757606   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:11:20.206649   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:22.207379   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:24.207579   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:22.056670   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:24.555101   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:22.350891   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:24.351318   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:23.293912   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:11:23.307037   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:11:23.307110   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:11:23.341161   86402 cri.go:89] found id: ""
	I1104 12:11:23.341186   86402 logs.go:282] 0 containers: []
	W1104 12:11:23.341195   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:11:23.341200   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:11:23.341277   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:11:23.373462   86402 cri.go:89] found id: ""
	I1104 12:11:23.373491   86402 logs.go:282] 0 containers: []
	W1104 12:11:23.373503   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:11:23.373510   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:11:23.373568   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:11:23.404439   86402 cri.go:89] found id: ""
	I1104 12:11:23.404471   86402 logs.go:282] 0 containers: []
	W1104 12:11:23.404482   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:11:23.404489   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:11:23.404548   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:11:23.435224   86402 cri.go:89] found id: ""
	I1104 12:11:23.435256   86402 logs.go:282] 0 containers: []
	W1104 12:11:23.435267   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:11:23.435274   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:11:23.435336   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:11:23.472593   86402 cri.go:89] found id: ""
	I1104 12:11:23.472622   86402 logs.go:282] 0 containers: []
	W1104 12:11:23.472633   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:11:23.472641   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:11:23.472693   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:11:23.503413   86402 cri.go:89] found id: ""
	I1104 12:11:23.503438   86402 logs.go:282] 0 containers: []
	W1104 12:11:23.503447   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:11:23.503454   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:11:23.503516   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:11:23.537582   86402 cri.go:89] found id: ""
	I1104 12:11:23.537610   86402 logs.go:282] 0 containers: []
	W1104 12:11:23.537621   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:11:23.537628   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:11:23.537689   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:11:23.573799   86402 cri.go:89] found id: ""
	I1104 12:11:23.573824   86402 logs.go:282] 0 containers: []
	W1104 12:11:23.573831   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:11:23.573838   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:11:23.573851   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:11:23.649239   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:11:23.649273   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:11:23.686518   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:11:23.686548   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:11:23.738955   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:11:23.738987   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:11:23.751909   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:11:23.751935   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:11:23.827244   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
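Each "describe nodes" attempt fails identically because nothing is serving on localhost:8443 yet. The sketch below is one way to confirm that from the node, reusing the kubectl binary and kubeconfig paths printed in the log; the direct healthz probe is an added assumption, not a command the harness runs.

    # Paths are taken from the log above; run on the minikube node.
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl \
      --kubeconfig=/var/lib/minikube/kubeconfig get nodes
    # Expect "connection refused" until kube-apiserver is up; probe the port directly:
    curl -ksS https://localhost:8443/healthz || echo "apiserver not reachable on :8443"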
	I1104 12:11:26.327902   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:11:26.339708   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:11:26.339784   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:11:26.369615   86402 cri.go:89] found id: ""
	I1104 12:11:26.369644   86402 logs.go:282] 0 containers: []
	W1104 12:11:26.369653   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:11:26.369659   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:11:26.369715   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:11:26.402027   86402 cri.go:89] found id: ""
	I1104 12:11:26.402056   86402 logs.go:282] 0 containers: []
	W1104 12:11:26.402065   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:11:26.402070   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:11:26.402123   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:11:26.433483   86402 cri.go:89] found id: ""
	I1104 12:11:26.433512   86402 logs.go:282] 0 containers: []
	W1104 12:11:26.433523   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:11:26.433529   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:11:26.433637   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:11:26.466403   86402 cri.go:89] found id: ""
	I1104 12:11:26.466442   86402 logs.go:282] 0 containers: []
	W1104 12:11:26.466453   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:11:26.466468   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:11:26.466524   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:11:26.499818   86402 cri.go:89] found id: ""
	I1104 12:11:26.499853   86402 logs.go:282] 0 containers: []
	W1104 12:11:26.499864   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:11:26.499871   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:11:26.499930   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:11:26.537782   86402 cri.go:89] found id: ""
	I1104 12:11:26.537809   86402 logs.go:282] 0 containers: []
	W1104 12:11:26.537822   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:11:26.537830   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:11:26.537890   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:11:26.574091   86402 cri.go:89] found id: ""
	I1104 12:11:26.574120   86402 logs.go:282] 0 containers: []
	W1104 12:11:26.574131   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:11:26.574138   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:11:26.574199   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:11:26.607554   86402 cri.go:89] found id: ""
	I1104 12:11:26.607584   86402 logs.go:282] 0 containers: []
	W1104 12:11:26.607596   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:11:26.607606   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:11:26.607620   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:11:26.657405   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:11:26.657443   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:11:26.670022   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:11:26.670046   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1104 12:11:26.707958   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:29.207380   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:26.556568   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:28.557276   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:26.852761   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:29.351303   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	W1104 12:11:26.736238   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:11:26.736266   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:11:26.736278   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:11:26.816277   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:11:26.816309   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:11:29.357639   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:11:29.371116   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:11:29.371204   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:11:29.405569   86402 cri.go:89] found id: ""
	I1104 12:11:29.405595   86402 logs.go:282] 0 containers: []
	W1104 12:11:29.405604   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:11:29.405611   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:11:29.405668   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:11:29.435669   86402 cri.go:89] found id: ""
	I1104 12:11:29.435697   86402 logs.go:282] 0 containers: []
	W1104 12:11:29.435709   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:11:29.435716   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:11:29.435781   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:11:29.476208   86402 cri.go:89] found id: ""
	I1104 12:11:29.476236   86402 logs.go:282] 0 containers: []
	W1104 12:11:29.476245   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:11:29.476251   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:11:29.476305   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:11:29.511446   86402 cri.go:89] found id: ""
	I1104 12:11:29.511474   86402 logs.go:282] 0 containers: []
	W1104 12:11:29.511483   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:11:29.511489   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:11:29.511541   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:11:29.543714   86402 cri.go:89] found id: ""
	I1104 12:11:29.543742   86402 logs.go:282] 0 containers: []
	W1104 12:11:29.543754   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:11:29.543761   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:11:29.543840   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:11:29.577429   86402 cri.go:89] found id: ""
	I1104 12:11:29.577456   86402 logs.go:282] 0 containers: []
	W1104 12:11:29.577466   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:11:29.577473   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:11:29.577534   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:11:29.608430   86402 cri.go:89] found id: ""
	I1104 12:11:29.608457   86402 logs.go:282] 0 containers: []
	W1104 12:11:29.608475   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:11:29.608483   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:11:29.608539   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:11:29.640029   86402 cri.go:89] found id: ""
	I1104 12:11:29.640057   86402 logs.go:282] 0 containers: []
	W1104 12:11:29.640068   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:11:29.640078   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:11:29.640092   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:11:29.691170   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:11:29.691202   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:11:29.704949   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:11:29.704987   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:11:29.766856   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:11:29.766884   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:11:29.766898   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:11:29.847487   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:11:29.847525   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:11:31.208725   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:33.709593   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:30.557500   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:33.056569   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:31.851101   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:34.350356   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:32.382925   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:11:32.395889   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:11:32.395943   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:11:32.428711   86402 cri.go:89] found id: ""
	I1104 12:11:32.428736   86402 logs.go:282] 0 containers: []
	W1104 12:11:32.428749   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:11:32.428755   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:11:32.428810   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:11:32.463269   86402 cri.go:89] found id: ""
	I1104 12:11:32.463295   86402 logs.go:282] 0 containers: []
	W1104 12:11:32.463307   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:11:32.463313   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:11:32.463372   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:11:32.496098   86402 cri.go:89] found id: ""
	I1104 12:11:32.496125   86402 logs.go:282] 0 containers: []
	W1104 12:11:32.496135   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:11:32.496142   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:11:32.496213   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:11:32.528729   86402 cri.go:89] found id: ""
	I1104 12:11:32.528760   86402 logs.go:282] 0 containers: []
	W1104 12:11:32.528771   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:11:32.528778   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:11:32.528860   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:11:32.567290   86402 cri.go:89] found id: ""
	I1104 12:11:32.567321   86402 logs.go:282] 0 containers: []
	W1104 12:11:32.567332   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:11:32.567338   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:11:32.567397   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:11:32.608932   86402 cri.go:89] found id: ""
	I1104 12:11:32.608962   86402 logs.go:282] 0 containers: []
	W1104 12:11:32.608973   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:11:32.608980   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:11:32.609037   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:11:32.641128   86402 cri.go:89] found id: ""
	I1104 12:11:32.641155   86402 logs.go:282] 0 containers: []
	W1104 12:11:32.641164   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:11:32.641171   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:11:32.641239   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:11:32.675651   86402 cri.go:89] found id: ""
	I1104 12:11:32.675682   86402 logs.go:282] 0 containers: []
	W1104 12:11:32.675694   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:11:32.675704   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:11:32.675719   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:11:32.742369   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:11:32.742406   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:11:32.742419   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:11:32.823371   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:11:32.823412   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:11:32.862243   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:11:32.862270   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:11:32.910961   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:11:32.910987   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:11:35.425742   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:11:35.438553   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:11:35.438615   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:11:35.475160   86402 cri.go:89] found id: ""
	I1104 12:11:35.475189   86402 logs.go:282] 0 containers: []
	W1104 12:11:35.475201   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:11:35.475209   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:11:35.475267   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:11:35.517193   86402 cri.go:89] found id: ""
	I1104 12:11:35.517239   86402 logs.go:282] 0 containers: []
	W1104 12:11:35.517252   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:11:35.517260   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:11:35.517329   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:11:35.552941   86402 cri.go:89] found id: ""
	I1104 12:11:35.552967   86402 logs.go:282] 0 containers: []
	W1104 12:11:35.552978   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:11:35.552985   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:11:35.553056   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:11:35.589960   86402 cri.go:89] found id: ""
	I1104 12:11:35.589983   86402 logs.go:282] 0 containers: []
	W1104 12:11:35.589994   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:11:35.590001   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:11:35.590063   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:11:35.624546   86402 cri.go:89] found id: ""
	I1104 12:11:35.624575   86402 logs.go:282] 0 containers: []
	W1104 12:11:35.624587   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:11:35.624595   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:11:35.624655   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:11:35.657855   86402 cri.go:89] found id: ""
	I1104 12:11:35.657885   86402 logs.go:282] 0 containers: []
	W1104 12:11:35.657896   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:11:35.657903   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:11:35.657957   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:11:35.691465   86402 cri.go:89] found id: ""
	I1104 12:11:35.691498   86402 logs.go:282] 0 containers: []
	W1104 12:11:35.691509   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:11:35.691516   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:11:35.691587   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:11:35.727520   86402 cri.go:89] found id: ""
	I1104 12:11:35.727548   86402 logs.go:282] 0 containers: []
	W1104 12:11:35.727558   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:11:35.727569   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:11:35.727584   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:11:35.777876   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:11:35.777912   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:11:35.790790   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:11:35.790817   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:11:35.856780   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:11:35.856805   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:11:35.856819   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:11:35.936769   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:11:35.936812   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:11:36.207096   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:38.707776   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:35.556694   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:38.055778   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:36.850946   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:39.350058   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:38.474827   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:11:38.488151   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:11:38.488221   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:11:38.523010   86402 cri.go:89] found id: ""
	I1104 12:11:38.523042   86402 logs.go:282] 0 containers: []
	W1104 12:11:38.523053   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:11:38.523061   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:11:38.523117   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:11:38.558065   86402 cri.go:89] found id: ""
	I1104 12:11:38.558093   86402 logs.go:282] 0 containers: []
	W1104 12:11:38.558102   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:11:38.558107   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:11:38.558153   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:11:38.590676   86402 cri.go:89] found id: ""
	I1104 12:11:38.590704   86402 logs.go:282] 0 containers: []
	W1104 12:11:38.590715   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:11:38.590723   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:11:38.590780   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:11:38.623762   86402 cri.go:89] found id: ""
	I1104 12:11:38.623793   86402 logs.go:282] 0 containers: []
	W1104 12:11:38.623804   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:11:38.623811   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:11:38.623870   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:11:38.655918   86402 cri.go:89] found id: ""
	I1104 12:11:38.655947   86402 logs.go:282] 0 containers: []
	W1104 12:11:38.655958   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:11:38.655966   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:11:38.656028   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:11:38.691200   86402 cri.go:89] found id: ""
	I1104 12:11:38.691228   86402 logs.go:282] 0 containers: []
	W1104 12:11:38.691238   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:11:38.691245   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:11:38.691302   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:11:38.724725   86402 cri.go:89] found id: ""
	I1104 12:11:38.724748   86402 logs.go:282] 0 containers: []
	W1104 12:11:38.724756   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:11:38.724761   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:11:38.724819   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:11:38.756333   86402 cri.go:89] found id: ""
	I1104 12:11:38.756360   86402 logs.go:282] 0 containers: []
	W1104 12:11:38.756370   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:11:38.756381   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:11:38.756395   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:11:38.807722   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:11:38.807756   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:11:38.821055   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:11:38.821079   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:11:38.886629   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:11:38.886656   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:11:38.886671   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:11:38.960958   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:11:38.960999   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:11:41.503471   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:11:41.515994   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:11:41.516065   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:11:41.549936   86402 cri.go:89] found id: ""
	I1104 12:11:41.549960   86402 logs.go:282] 0 containers: []
	W1104 12:11:41.549968   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:11:41.549975   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:11:41.550033   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:11:41.584565   86402 cri.go:89] found id: ""
	I1104 12:11:41.584590   86402 logs.go:282] 0 containers: []
	W1104 12:11:41.584602   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:11:41.584610   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:11:41.584660   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:11:41.616427   86402 cri.go:89] found id: ""
	I1104 12:11:41.616450   86402 logs.go:282] 0 containers: []
	W1104 12:11:41.616458   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:11:41.616463   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:11:41.616510   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:11:41.650835   86402 cri.go:89] found id: ""
	I1104 12:11:41.650864   86402 logs.go:282] 0 containers: []
	W1104 12:11:41.650875   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:11:41.650882   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:11:41.650946   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:11:40.707926   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:43.207969   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:40.555616   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:42.555839   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:44.556749   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:41.351131   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:43.851925   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:41.685899   86402 cri.go:89] found id: ""
	I1104 12:11:41.685921   86402 logs.go:282] 0 containers: []
	W1104 12:11:41.685928   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:11:41.685934   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:11:41.685979   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:11:41.718730   86402 cri.go:89] found id: ""
	I1104 12:11:41.718757   86402 logs.go:282] 0 containers: []
	W1104 12:11:41.718773   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:11:41.718782   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:11:41.718837   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:11:41.748843   86402 cri.go:89] found id: ""
	I1104 12:11:41.748875   86402 logs.go:282] 0 containers: []
	W1104 12:11:41.748887   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:11:41.748895   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:11:41.748963   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:11:41.780225   86402 cri.go:89] found id: ""
	I1104 12:11:41.780251   86402 logs.go:282] 0 containers: []
	W1104 12:11:41.780260   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:11:41.780268   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:11:41.780285   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:11:41.830864   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:11:41.830893   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:11:41.844252   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:11:41.844279   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:11:41.908514   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:11:41.908542   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:11:41.908554   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:11:41.988545   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:11:41.988582   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:11:44.527641   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:11:44.540026   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:11:44.540108   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:11:44.574530   86402 cri.go:89] found id: ""
	I1104 12:11:44.574559   86402 logs.go:282] 0 containers: []
	W1104 12:11:44.574570   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:11:44.574577   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:11:44.574638   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:11:44.606073   86402 cri.go:89] found id: ""
	I1104 12:11:44.606103   86402 logs.go:282] 0 containers: []
	W1104 12:11:44.606114   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:11:44.606121   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:11:44.606185   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:11:44.639750   86402 cri.go:89] found id: ""
	I1104 12:11:44.639775   86402 logs.go:282] 0 containers: []
	W1104 12:11:44.639784   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:11:44.639792   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:11:44.639850   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:11:44.673528   86402 cri.go:89] found id: ""
	I1104 12:11:44.673557   86402 logs.go:282] 0 containers: []
	W1104 12:11:44.673565   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:11:44.673573   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:11:44.673625   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:11:44.705928   86402 cri.go:89] found id: ""
	I1104 12:11:44.705956   86402 logs.go:282] 0 containers: []
	W1104 12:11:44.705966   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:11:44.705973   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:11:44.706032   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:11:44.736779   86402 cri.go:89] found id: ""
	I1104 12:11:44.736811   86402 logs.go:282] 0 containers: []
	W1104 12:11:44.736822   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:11:44.736830   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:11:44.736886   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:11:44.769929   86402 cri.go:89] found id: ""
	I1104 12:11:44.769956   86402 logs.go:282] 0 containers: []
	W1104 12:11:44.769964   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:11:44.769970   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:11:44.770015   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:11:44.800818   86402 cri.go:89] found id: ""
	I1104 12:11:44.800846   86402 logs.go:282] 0 containers: []
	W1104 12:11:44.800855   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:11:44.800863   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:11:44.800873   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:11:44.853610   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:11:44.853641   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:11:44.866656   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:11:44.866683   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:11:44.936386   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:11:44.936412   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:11:44.936425   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:11:45.011789   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:11:45.011823   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:11:45.707030   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:47.707464   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:49.711329   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:46.557112   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:49.055647   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:46.351055   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:48.850134   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:50.851867   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:47.548672   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:11:47.563082   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:11:47.563157   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:11:47.598722   86402 cri.go:89] found id: ""
	I1104 12:11:47.598748   86402 logs.go:282] 0 containers: []
	W1104 12:11:47.598756   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:11:47.598762   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:11:47.598809   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:11:47.633376   86402 cri.go:89] found id: ""
	I1104 12:11:47.633412   86402 logs.go:282] 0 containers: []
	W1104 12:11:47.633421   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:11:47.633428   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:11:47.633486   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:11:47.666059   86402 cri.go:89] found id: ""
	I1104 12:11:47.666087   86402 logs.go:282] 0 containers: []
	W1104 12:11:47.666095   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:11:47.666101   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:11:47.666147   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:11:47.700659   86402 cri.go:89] found id: ""
	I1104 12:11:47.700690   86402 logs.go:282] 0 containers: []
	W1104 12:11:47.700704   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:11:47.700711   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:11:47.700771   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:11:47.732901   86402 cri.go:89] found id: ""
	I1104 12:11:47.732927   86402 logs.go:282] 0 containers: []
	W1104 12:11:47.732934   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:11:47.732940   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:11:47.732984   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:11:47.765371   86402 cri.go:89] found id: ""
	I1104 12:11:47.765398   86402 logs.go:282] 0 containers: []
	W1104 12:11:47.765418   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:11:47.765425   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:11:47.765487   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:11:47.797043   86402 cri.go:89] found id: ""
	I1104 12:11:47.797077   86402 logs.go:282] 0 containers: []
	W1104 12:11:47.797089   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:11:47.797096   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:11:47.797159   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:11:47.828140   86402 cri.go:89] found id: ""
	I1104 12:11:47.828172   86402 logs.go:282] 0 containers: []
	W1104 12:11:47.828184   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:11:47.828194   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:11:47.828208   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:11:47.911398   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:11:47.911434   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:11:47.948042   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:11:47.948071   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:11:47.999603   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:11:47.999638   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:11:48.013818   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:11:48.013856   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:11:48.082679   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:11:50.583325   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:11:50.595272   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:11:50.595346   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:11:50.630857   86402 cri.go:89] found id: ""
	I1104 12:11:50.630883   86402 logs.go:282] 0 containers: []
	W1104 12:11:50.630892   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:11:50.630899   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:11:50.630965   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:11:50.663025   86402 cri.go:89] found id: ""
	I1104 12:11:50.663049   86402 logs.go:282] 0 containers: []
	W1104 12:11:50.663058   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:11:50.663063   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:11:50.663109   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:11:50.695371   86402 cri.go:89] found id: ""
	I1104 12:11:50.695402   86402 logs.go:282] 0 containers: []
	W1104 12:11:50.695413   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:11:50.695421   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:11:50.695480   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:11:50.728805   86402 cri.go:89] found id: ""
	I1104 12:11:50.728827   86402 logs.go:282] 0 containers: []
	W1104 12:11:50.728836   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:11:50.728841   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:11:50.728902   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:11:50.762837   86402 cri.go:89] found id: ""
	I1104 12:11:50.762868   86402 logs.go:282] 0 containers: []
	W1104 12:11:50.762878   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:11:50.762885   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:11:50.762941   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:11:50.802531   86402 cri.go:89] found id: ""
	I1104 12:11:50.802556   86402 logs.go:282] 0 containers: []
	W1104 12:11:50.802564   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:11:50.802569   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:11:50.802613   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:11:50.835124   86402 cri.go:89] found id: ""
	I1104 12:11:50.835161   86402 logs.go:282] 0 containers: []
	W1104 12:11:50.835173   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:11:50.835180   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:11:50.835234   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:11:50.869265   86402 cri.go:89] found id: ""
	I1104 12:11:50.869295   86402 logs.go:282] 0 containers: []
	W1104 12:11:50.869308   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:11:50.869318   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:11:50.869330   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:11:50.919371   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:11:50.919405   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:11:50.932165   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:11:50.932195   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:11:50.993935   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:11:50.993959   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:11:50.993972   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:11:51.071816   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:11:51.071848   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:11:52.208101   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:54.707467   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:51.056129   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:53.057025   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:53.349902   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:55.350302   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:53.608347   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:11:53.620842   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:11:53.620902   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:11:53.652870   86402 cri.go:89] found id: ""
	I1104 12:11:53.652896   86402 logs.go:282] 0 containers: []
	W1104 12:11:53.652909   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:11:53.652917   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:11:53.652980   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:11:53.684842   86402 cri.go:89] found id: ""
	I1104 12:11:53.684878   86402 logs.go:282] 0 containers: []
	W1104 12:11:53.684889   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:11:53.684897   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:11:53.684956   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:11:53.722505   86402 cri.go:89] found id: ""
	I1104 12:11:53.722531   86402 logs.go:282] 0 containers: []
	W1104 12:11:53.722539   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:11:53.722544   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:11:53.722603   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:11:53.753831   86402 cri.go:89] found id: ""
	I1104 12:11:53.753858   86402 logs.go:282] 0 containers: []
	W1104 12:11:53.753866   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:11:53.753872   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:11:53.753918   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:11:53.786112   86402 cri.go:89] found id: ""
	I1104 12:11:53.786139   86402 logs.go:282] 0 containers: []
	W1104 12:11:53.786150   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:11:53.786157   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:11:53.786218   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:11:53.820446   86402 cri.go:89] found id: ""
	I1104 12:11:53.820472   86402 logs.go:282] 0 containers: []
	W1104 12:11:53.820487   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:11:53.820493   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:11:53.820552   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:11:53.855631   86402 cri.go:89] found id: ""
	I1104 12:11:53.855655   86402 logs.go:282] 0 containers: []
	W1104 12:11:53.855665   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:11:53.855673   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:11:53.855727   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:11:53.887953   86402 cri.go:89] found id: ""
	I1104 12:11:53.887983   86402 logs.go:282] 0 containers: []
	W1104 12:11:53.887994   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:11:53.888004   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:11:53.888023   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:11:53.954408   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:11:53.954430   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:11:53.954442   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:11:54.028549   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:11:54.028584   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:11:54.070869   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:11:54.070895   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:11:54.123676   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:11:54.123715   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:11:56.639480   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:11:56.652651   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:11:56.652709   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:11:56.708211   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:59.207617   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:55.555992   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:58.056271   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:57.350474   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:59.850830   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:56.689397   86402 cri.go:89] found id: ""
	I1104 12:11:56.689425   86402 logs.go:282] 0 containers: []
	W1104 12:11:56.689443   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:11:56.689452   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:11:56.689517   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:11:56.725197   86402 cri.go:89] found id: ""
	I1104 12:11:56.725234   86402 logs.go:282] 0 containers: []
	W1104 12:11:56.725246   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:11:56.725254   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:11:56.725308   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:11:56.759043   86402 cri.go:89] found id: ""
	I1104 12:11:56.759073   86402 logs.go:282] 0 containers: []
	W1104 12:11:56.759084   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:11:56.759090   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:11:56.759141   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:11:56.792268   86402 cri.go:89] found id: ""
	I1104 12:11:56.792296   86402 logs.go:282] 0 containers: []
	W1104 12:11:56.792307   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:11:56.792314   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:11:56.792375   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:11:56.823668   86402 cri.go:89] found id: ""
	I1104 12:11:56.823692   86402 logs.go:282] 0 containers: []
	W1104 12:11:56.823702   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:11:56.823709   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:11:56.823769   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:11:56.861812   86402 cri.go:89] found id: ""
	I1104 12:11:56.861837   86402 logs.go:282] 0 containers: []
	W1104 12:11:56.861845   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:11:56.861851   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:11:56.861902   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:11:56.894037   86402 cri.go:89] found id: ""
	I1104 12:11:56.894067   86402 logs.go:282] 0 containers: []
	W1104 12:11:56.894075   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:11:56.894080   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:11:56.894133   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:11:56.925603   86402 cri.go:89] found id: ""
	I1104 12:11:56.925634   86402 logs.go:282] 0 containers: []
	W1104 12:11:56.925646   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:11:56.925656   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:11:56.925669   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:11:56.961504   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:11:56.961530   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:11:57.012666   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:11:57.012700   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:11:57.025887   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:11:57.025921   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:11:57.097219   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:11:57.097257   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:11:57.097272   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:11:59.671179   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:11:59.684642   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:11:59.684718   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:11:59.721599   86402 cri.go:89] found id: ""
	I1104 12:11:59.721622   86402 logs.go:282] 0 containers: []
	W1104 12:11:59.721631   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:11:59.721640   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:11:59.721693   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:11:59.757423   86402 cri.go:89] found id: ""
	I1104 12:11:59.757453   86402 logs.go:282] 0 containers: []
	W1104 12:11:59.757461   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:11:59.757466   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:11:59.757525   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:11:59.794036   86402 cri.go:89] found id: ""
	I1104 12:11:59.794071   86402 logs.go:282] 0 containers: []
	W1104 12:11:59.794081   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:11:59.794089   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:11:59.794148   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:11:59.830098   86402 cri.go:89] found id: ""
	I1104 12:11:59.830123   86402 logs.go:282] 0 containers: []
	W1104 12:11:59.830134   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:11:59.830142   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:11:59.830207   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:11:59.867791   86402 cri.go:89] found id: ""
	I1104 12:11:59.867815   86402 logs.go:282] 0 containers: []
	W1104 12:11:59.867823   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:11:59.867828   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:11:59.867879   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:11:59.903579   86402 cri.go:89] found id: ""
	I1104 12:11:59.903607   86402 logs.go:282] 0 containers: []
	W1104 12:11:59.903614   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:11:59.903620   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:11:59.903667   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:11:59.940955   86402 cri.go:89] found id: ""
	I1104 12:11:59.940977   86402 logs.go:282] 0 containers: []
	W1104 12:11:59.940984   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:11:59.940989   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:11:59.941034   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:11:59.977626   86402 cri.go:89] found id: ""
	I1104 12:11:59.977653   86402 logs.go:282] 0 containers: []
	W1104 12:11:59.977663   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:11:59.977674   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:11:59.977687   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:12:00.032280   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:12:00.032312   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:12:00.045965   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:12:00.045991   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:12:00.123578   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:12:00.123608   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:12:00.123625   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:12:00.208309   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:12:00.208340   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:12:01.707661   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:04.207140   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:00.555683   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:02.555797   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:04.556558   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:01.851646   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:01.851680   85759 pod_ready.go:82] duration metric: took 4m0.007179751s for pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace to be "Ready" ...
	E1104 12:12:01.851691   85759 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1104 12:12:01.851701   85759 pod_ready.go:39] duration metric: took 4m4.052369029s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1104 12:12:01.851721   85759 api_server.go:52] waiting for apiserver process to appear ...
	I1104 12:12:01.851752   85759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:12:01.851805   85759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:12:01.891029   85759 cri.go:89] found id: "6e7999c6e5a24f7d8706b6722e1fe66996ecec4550d4a901729cac1f3f108f28"
	I1104 12:12:01.891056   85759 cri.go:89] found id: ""
	I1104 12:12:01.891066   85759 logs.go:282] 1 containers: [6e7999c6e5a24f7d8706b6722e1fe66996ecec4550d4a901729cac1f3f108f28]
	I1104 12:12:01.891128   85759 ssh_runner.go:195] Run: which crictl
	I1104 12:12:01.895134   85759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:12:01.895243   85759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:12:01.928058   85759 cri.go:89] found id: "5b575c045ea6e47b4e5a1384b6b473570efefa6f645c9c75d1eedf037230bc06"
	I1104 12:12:01.928081   85759 cri.go:89] found id: ""
	I1104 12:12:01.928089   85759 logs.go:282] 1 containers: [5b575c045ea6e47b4e5a1384b6b473570efefa6f645c9c75d1eedf037230bc06]
	I1104 12:12:01.928134   85759 ssh_runner.go:195] Run: which crictl
	I1104 12:12:01.931906   85759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:12:01.931974   85759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:12:01.972023   85759 cri.go:89] found id: "d1f0c1ed5e8911dccb619576be93e1623936a2b05ccc1e80d52ffe8ba1292d27"
	I1104 12:12:01.972052   85759 cri.go:89] found id: ""
	I1104 12:12:01.972062   85759 logs.go:282] 1 containers: [d1f0c1ed5e8911dccb619576be93e1623936a2b05ccc1e80d52ffe8ba1292d27]
	I1104 12:12:01.972116   85759 ssh_runner.go:195] Run: which crictl
	I1104 12:12:01.980811   85759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:12:01.980894   85759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:12:02.024013   85759 cri.go:89] found id: "a5a0cb5f09f9920c57a05bcc9c16cdcab6806c42458b625d4d075a9d1bc3f80f"
	I1104 12:12:02.024038   85759 cri.go:89] found id: ""
	I1104 12:12:02.024046   85759 logs.go:282] 1 containers: [a5a0cb5f09f9920c57a05bcc9c16cdcab6806c42458b625d4d075a9d1bc3f80f]
	I1104 12:12:02.024108   85759 ssh_runner.go:195] Run: which crictl
	I1104 12:12:02.028571   85759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:12:02.028641   85759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:12:02.063545   85759 cri.go:89] found id: "512d8563ff2ef5d1f8021fa8815d00d2705c8df3b079a23ee6fc909af1c980f0"
	I1104 12:12:02.063570   85759 cri.go:89] found id: ""
	I1104 12:12:02.063580   85759 logs.go:282] 1 containers: [512d8563ff2ef5d1f8021fa8815d00d2705c8df3b079a23ee6fc909af1c980f0]
	I1104 12:12:02.063635   85759 ssh_runner.go:195] Run: which crictl
	I1104 12:12:02.067582   85759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:12:02.067652   85759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:12:02.100954   85759 cri.go:89] found id: "5751adaa2cf784f7619b281e11898b312ecc8186b36029e9e8a3b8e484cd703b"
	I1104 12:12:02.100979   85759 cri.go:89] found id: ""
	I1104 12:12:02.100989   85759 logs.go:282] 1 containers: [5751adaa2cf784f7619b281e11898b312ecc8186b36029e9e8a3b8e484cd703b]
	I1104 12:12:02.101038   85759 ssh_runner.go:195] Run: which crictl
	I1104 12:12:02.105121   85759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:12:02.105182   85759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:12:02.137206   85759 cri.go:89] found id: ""
	I1104 12:12:02.137249   85759 logs.go:282] 0 containers: []
	W1104 12:12:02.137260   85759 logs.go:284] No container was found matching "kindnet"
	I1104 12:12:02.137268   85759 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1104 12:12:02.137317   85759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1104 12:12:02.171499   85759 cri.go:89] found id: "95a9eb50a127a39e6276205bf440bcbc662ba25b24e029dc1a667f48f8481dde"
	I1104 12:12:02.171520   85759 cri.go:89] found id: "c7558f4e108716f1710271ea75be748ffd928b609788942a3658f0d2237ebcc7"
	I1104 12:12:02.171526   85759 cri.go:89] found id: ""
	I1104 12:12:02.171535   85759 logs.go:282] 2 containers: [95a9eb50a127a39e6276205bf440bcbc662ba25b24e029dc1a667f48f8481dde c7558f4e108716f1710271ea75be748ffd928b609788942a3658f0d2237ebcc7]
	I1104 12:12:02.171587   85759 ssh_runner.go:195] Run: which crictl
	I1104 12:12:02.175646   85759 ssh_runner.go:195] Run: which crictl
	I1104 12:12:02.179066   85759 logs.go:123] Gathering logs for kubelet ...
	I1104 12:12:02.179084   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:12:02.249087   85759 logs.go:123] Gathering logs for dmesg ...
	I1104 12:12:02.249126   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:12:02.262666   85759 logs.go:123] Gathering logs for kube-apiserver [6e7999c6e5a24f7d8706b6722e1fe66996ecec4550d4a901729cac1f3f108f28] ...
	I1104 12:12:02.262692   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6e7999c6e5a24f7d8706b6722e1fe66996ecec4550d4a901729cac1f3f108f28"
	I1104 12:12:02.316826   85759 logs.go:123] Gathering logs for kube-scheduler [a5a0cb5f09f9920c57a05bcc9c16cdcab6806c42458b625d4d075a9d1bc3f80f] ...
	I1104 12:12:02.316856   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a5a0cb5f09f9920c57a05bcc9c16cdcab6806c42458b625d4d075a9d1bc3f80f"
	I1104 12:12:02.351741   85759 logs.go:123] Gathering logs for kube-controller-manager [5751adaa2cf784f7619b281e11898b312ecc8186b36029e9e8a3b8e484cd703b] ...
	I1104 12:12:02.351766   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5751adaa2cf784f7619b281e11898b312ecc8186b36029e9e8a3b8e484cd703b"
	I1104 12:12:02.400377   85759 logs.go:123] Gathering logs for storage-provisioner [c7558f4e108716f1710271ea75be748ffd928b609788942a3658f0d2237ebcc7] ...
	I1104 12:12:02.400409   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c7558f4e108716f1710271ea75be748ffd928b609788942a3658f0d2237ebcc7"
	I1104 12:12:02.448029   85759 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:12:02.448059   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:12:02.975331   85759 logs.go:123] Gathering logs for container status ...
	I1104 12:12:02.975367   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:12:03.026697   85759 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:12:03.026739   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1104 12:12:03.140704   85759 logs.go:123] Gathering logs for etcd [5b575c045ea6e47b4e5a1384b6b473570efefa6f645c9c75d1eedf037230bc06] ...
	I1104 12:12:03.140753   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b575c045ea6e47b4e5a1384b6b473570efefa6f645c9c75d1eedf037230bc06"
	I1104 12:12:03.192394   85759 logs.go:123] Gathering logs for coredns [d1f0c1ed5e8911dccb619576be93e1623936a2b05ccc1e80d52ffe8ba1292d27] ...
	I1104 12:12:03.192427   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d1f0c1ed5e8911dccb619576be93e1623936a2b05ccc1e80d52ffe8ba1292d27"
	I1104 12:12:03.236040   85759 logs.go:123] Gathering logs for kube-proxy [512d8563ff2ef5d1f8021fa8815d00d2705c8df3b079a23ee6fc909af1c980f0] ...
	I1104 12:12:03.236071   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 512d8563ff2ef5d1f8021fa8815d00d2705c8df3b079a23ee6fc909af1c980f0"
	I1104 12:12:03.275166   85759 logs.go:123] Gathering logs for storage-provisioner [95a9eb50a127a39e6276205bf440bcbc662ba25b24e029dc1a667f48f8481dde] ...
	I1104 12:12:03.275194   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 95a9eb50a127a39e6276205bf440bcbc662ba25b24e029dc1a667f48f8481dde"
	I1104 12:12:05.813333   85759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:12:05.827697   85759 api_server.go:72] duration metric: took 4m15.741105379s to wait for apiserver process to appear ...
	I1104 12:12:05.827725   85759 api_server.go:88] waiting for apiserver healthz status ...
	I1104 12:12:05.827763   85759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:12:05.827826   85759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:12:05.869552   85759 cri.go:89] found id: "6e7999c6e5a24f7d8706b6722e1fe66996ecec4550d4a901729cac1f3f108f28"
	I1104 12:12:05.869580   85759 cri.go:89] found id: ""
	I1104 12:12:05.869590   85759 logs.go:282] 1 containers: [6e7999c6e5a24f7d8706b6722e1fe66996ecec4550d4a901729cac1f3f108f28]
	I1104 12:12:05.869642   85759 ssh_runner.go:195] Run: which crictl
	I1104 12:12:05.873890   85759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:12:05.873954   85759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:12:05.914131   85759 cri.go:89] found id: "5b575c045ea6e47b4e5a1384b6b473570efefa6f645c9c75d1eedf037230bc06"
	I1104 12:12:05.914153   85759 cri.go:89] found id: ""
	I1104 12:12:05.914161   85759 logs.go:282] 1 containers: [5b575c045ea6e47b4e5a1384b6b473570efefa6f645c9c75d1eedf037230bc06]
	I1104 12:12:05.914216   85759 ssh_runner.go:195] Run: which crictl
	I1104 12:12:05.920980   85759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:12:05.921042   85759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:12:05.960930   85759 cri.go:89] found id: "d1f0c1ed5e8911dccb619576be93e1623936a2b05ccc1e80d52ffe8ba1292d27"
	I1104 12:12:05.960953   85759 cri.go:89] found id: ""
	I1104 12:12:05.960962   85759 logs.go:282] 1 containers: [d1f0c1ed5e8911dccb619576be93e1623936a2b05ccc1e80d52ffe8ba1292d27]
	I1104 12:12:05.961018   85759 ssh_runner.go:195] Run: which crictl
	I1104 12:12:05.965591   85759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:12:05.965653   85759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:12:06.000500   85759 cri.go:89] found id: "a5a0cb5f09f9920c57a05bcc9c16cdcab6806c42458b625d4d075a9d1bc3f80f"
	I1104 12:12:06.000520   85759 cri.go:89] found id: ""
	I1104 12:12:06.000526   85759 logs.go:282] 1 containers: [a5a0cb5f09f9920c57a05bcc9c16cdcab6806c42458b625d4d075a9d1bc3f80f]
	I1104 12:12:06.000576   85759 ssh_runner.go:195] Run: which crictl
	I1104 12:12:06.004775   85759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:12:06.004835   85759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:12:06.042011   85759 cri.go:89] found id: "512d8563ff2ef5d1f8021fa8815d00d2705c8df3b079a23ee6fc909af1c980f0"
	I1104 12:12:06.042032   85759 cri.go:89] found id: ""
	I1104 12:12:06.042041   85759 logs.go:282] 1 containers: [512d8563ff2ef5d1f8021fa8815d00d2705c8df3b079a23ee6fc909af1c980f0]
	I1104 12:12:06.042102   85759 ssh_runner.go:195] Run: which crictl
	I1104 12:12:06.047885   85759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:12:06.047952   85759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:12:06.084318   85759 cri.go:89] found id: "5751adaa2cf784f7619b281e11898b312ecc8186b36029e9e8a3b8e484cd703b"
	I1104 12:12:06.084341   85759 cri.go:89] found id: ""
	I1104 12:12:06.084349   85759 logs.go:282] 1 containers: [5751adaa2cf784f7619b281e11898b312ecc8186b36029e9e8a3b8e484cd703b]
	I1104 12:12:06.084410   85759 ssh_runner.go:195] Run: which crictl
	I1104 12:12:06.088487   85759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:12:06.088564   85759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:12:06.127693   85759 cri.go:89] found id: ""
	I1104 12:12:06.127721   85759 logs.go:282] 0 containers: []
	W1104 12:12:06.127730   85759 logs.go:284] No container was found matching "kindnet"
	I1104 12:12:06.127736   85759 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1104 12:12:06.127780   85759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1104 12:12:06.165173   85759 cri.go:89] found id: "95a9eb50a127a39e6276205bf440bcbc662ba25b24e029dc1a667f48f8481dde"
	I1104 12:12:06.165199   85759 cri.go:89] found id: "c7558f4e108716f1710271ea75be748ffd928b609788942a3658f0d2237ebcc7"
	I1104 12:12:06.165206   85759 cri.go:89] found id: ""
	I1104 12:12:06.165215   85759 logs.go:282] 2 containers: [95a9eb50a127a39e6276205bf440bcbc662ba25b24e029dc1a667f48f8481dde c7558f4e108716f1710271ea75be748ffd928b609788942a3658f0d2237ebcc7]
	I1104 12:12:06.165302   85759 ssh_runner.go:195] Run: which crictl
	I1104 12:12:06.169479   85759 ssh_runner.go:195] Run: which crictl
	I1104 12:12:06.173154   85759 logs.go:123] Gathering logs for kubelet ...
	I1104 12:12:06.173177   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:12:02.746303   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:12:02.758892   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:12:02.758967   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:12:02.792775   86402 cri.go:89] found id: ""
	I1104 12:12:02.792803   86402 logs.go:282] 0 containers: []
	W1104 12:12:02.792815   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:12:02.792822   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:12:02.792878   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:12:02.831073   86402 cri.go:89] found id: ""
	I1104 12:12:02.831097   86402 logs.go:282] 0 containers: []
	W1104 12:12:02.831108   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:12:02.831115   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:12:02.831174   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:12:02.863530   86402 cri.go:89] found id: ""
	I1104 12:12:02.863557   86402 logs.go:282] 0 containers: []
	W1104 12:12:02.863568   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:12:02.863574   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:12:02.863641   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:12:02.894894   86402 cri.go:89] found id: ""
	I1104 12:12:02.894924   86402 logs.go:282] 0 containers: []
	W1104 12:12:02.894934   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:12:02.894942   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:12:02.894996   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:12:02.930052   86402 cri.go:89] found id: ""
	I1104 12:12:02.930081   86402 logs.go:282] 0 containers: []
	W1104 12:12:02.930092   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:12:02.930100   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:12:02.930160   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:12:02.964503   86402 cri.go:89] found id: ""
	I1104 12:12:02.964532   86402 logs.go:282] 0 containers: []
	W1104 12:12:02.964544   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:12:02.964551   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:12:02.964610   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:12:02.998065   86402 cri.go:89] found id: ""
	I1104 12:12:02.998088   86402 logs.go:282] 0 containers: []
	W1104 12:12:02.998096   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:12:02.998102   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:12:02.998148   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:12:03.033579   86402 cri.go:89] found id: ""
	I1104 12:12:03.033604   86402 logs.go:282] 0 containers: []
	W1104 12:12:03.033613   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:12:03.033621   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:12:03.033630   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:12:03.086215   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:12:03.086249   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:12:03.100100   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:12:03.100136   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:12:03.168116   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:12:03.168150   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:12:03.168165   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:12:03.253608   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:12:03.253642   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:12:05.792913   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:12:05.806494   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:12:05.806568   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:12:05.854379   86402 cri.go:89] found id: ""
	I1104 12:12:05.854406   86402 logs.go:282] 0 containers: []
	W1104 12:12:05.854417   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:12:05.854425   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:12:05.854503   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:12:05.886144   86402 cri.go:89] found id: ""
	I1104 12:12:05.886169   86402 logs.go:282] 0 containers: []
	W1104 12:12:05.886179   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:12:05.886186   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:12:05.886248   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:12:05.917462   86402 cri.go:89] found id: ""
	I1104 12:12:05.917482   86402 logs.go:282] 0 containers: []
	W1104 12:12:05.917492   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:12:05.917499   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:12:05.917550   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:12:05.954065   86402 cri.go:89] found id: ""
	I1104 12:12:05.954099   86402 logs.go:282] 0 containers: []
	W1104 12:12:05.954110   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:12:05.954120   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:12:05.954194   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:12:05.990935   86402 cri.go:89] found id: ""
	I1104 12:12:05.990966   86402 logs.go:282] 0 containers: []
	W1104 12:12:05.990977   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:12:05.990984   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:12:05.991050   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:12:06.032175   86402 cri.go:89] found id: ""
	I1104 12:12:06.032198   86402 logs.go:282] 0 containers: []
	W1104 12:12:06.032206   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:12:06.032211   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:12:06.032269   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:12:06.069215   86402 cri.go:89] found id: ""
	I1104 12:12:06.069262   86402 logs.go:282] 0 containers: []
	W1104 12:12:06.069275   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:12:06.069282   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:12:06.069340   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:12:06.103065   86402 cri.go:89] found id: ""
	I1104 12:12:06.103106   86402 logs.go:282] 0 containers: []
	W1104 12:12:06.103117   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:12:06.103127   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:12:06.103145   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:12:06.184111   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:12:06.184135   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:12:06.184149   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:12:06.272720   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:12:06.272760   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:12:06.315596   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:12:06.315636   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:12:06.376054   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:12:06.376110   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:12:06.214237   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:08.707098   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:07.056531   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:09.056763   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:06.252295   85759 logs.go:123] Gathering logs for kube-apiserver [6e7999c6e5a24f7d8706b6722e1fe66996ecec4550d4a901729cac1f3f108f28] ...
	I1104 12:12:06.252326   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6e7999c6e5a24f7d8706b6722e1fe66996ecec4550d4a901729cac1f3f108f28"
	I1104 12:12:06.302739   85759 logs.go:123] Gathering logs for etcd [5b575c045ea6e47b4e5a1384b6b473570efefa6f645c9c75d1eedf037230bc06] ...
	I1104 12:12:06.302769   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b575c045ea6e47b4e5a1384b6b473570efefa6f645c9c75d1eedf037230bc06"
	I1104 12:12:06.361279   85759 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:12:06.361307   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:12:06.811335   85759 logs.go:123] Gathering logs for container status ...
	I1104 12:12:06.811380   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:12:06.851356   85759 logs.go:123] Gathering logs for storage-provisioner [95a9eb50a127a39e6276205bf440bcbc662ba25b24e029dc1a667f48f8481dde] ...
	I1104 12:12:06.851387   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 95a9eb50a127a39e6276205bf440bcbc662ba25b24e029dc1a667f48f8481dde"
	I1104 12:12:06.888753   85759 logs.go:123] Gathering logs for storage-provisioner [c7558f4e108716f1710271ea75be748ffd928b609788942a3658f0d2237ebcc7] ...
	I1104 12:12:06.888789   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c7558f4e108716f1710271ea75be748ffd928b609788942a3658f0d2237ebcc7"
	I1104 12:12:06.922406   85759 logs.go:123] Gathering logs for dmesg ...
	I1104 12:12:06.922438   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:12:06.935028   85759 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:12:06.935057   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1104 12:12:07.033975   85759 logs.go:123] Gathering logs for coredns [d1f0c1ed5e8911dccb619576be93e1623936a2b05ccc1e80d52ffe8ba1292d27] ...
	I1104 12:12:07.034019   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d1f0c1ed5e8911dccb619576be93e1623936a2b05ccc1e80d52ffe8ba1292d27"
	I1104 12:12:07.068680   85759 logs.go:123] Gathering logs for kube-scheduler [a5a0cb5f09f9920c57a05bcc9c16cdcab6806c42458b625d4d075a9d1bc3f80f] ...
	I1104 12:12:07.068706   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a5a0cb5f09f9920c57a05bcc9c16cdcab6806c42458b625d4d075a9d1bc3f80f"
	I1104 12:12:07.105150   85759 logs.go:123] Gathering logs for kube-proxy [512d8563ff2ef5d1f8021fa8815d00d2705c8df3b079a23ee6fc909af1c980f0] ...
	I1104 12:12:07.105182   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 512d8563ff2ef5d1f8021fa8815d00d2705c8df3b079a23ee6fc909af1c980f0"
	I1104 12:12:07.139258   85759 logs.go:123] Gathering logs for kube-controller-manager [5751adaa2cf784f7619b281e11898b312ecc8186b36029e9e8a3b8e484cd703b] ...
	I1104 12:12:07.139290   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5751adaa2cf784f7619b281e11898b312ecc8186b36029e9e8a3b8e484cd703b"
	I1104 12:12:09.695630   85759 api_server.go:253] Checking apiserver healthz at https://192.168.39.47:8443/healthz ...
	I1104 12:12:09.701156   85759 api_server.go:279] https://192.168.39.47:8443/healthz returned 200:
	ok
	I1104 12:12:09.702414   85759 api_server.go:141] control plane version: v1.31.2
	I1104 12:12:09.702441   85759 api_server.go:131] duration metric: took 3.874707524s to wait for apiserver health ...
	I1104 12:12:09.702451   85759 system_pods.go:43] waiting for kube-system pods to appear ...
	I1104 12:12:09.702475   85759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:12:09.702530   85759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:12:09.736867   85759 cri.go:89] found id: "6e7999c6e5a24f7d8706b6722e1fe66996ecec4550d4a901729cac1f3f108f28"
	I1104 12:12:09.736891   85759 cri.go:89] found id: ""
	I1104 12:12:09.736901   85759 logs.go:282] 1 containers: [6e7999c6e5a24f7d8706b6722e1fe66996ecec4550d4a901729cac1f3f108f28]
	I1104 12:12:09.736956   85759 ssh_runner.go:195] Run: which crictl
	I1104 12:12:09.741108   85759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:12:09.741176   85759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:12:09.780460   85759 cri.go:89] found id: "5b575c045ea6e47b4e5a1384b6b473570efefa6f645c9c75d1eedf037230bc06"
	I1104 12:12:09.780483   85759 cri.go:89] found id: ""
	I1104 12:12:09.780490   85759 logs.go:282] 1 containers: [5b575c045ea6e47b4e5a1384b6b473570efefa6f645c9c75d1eedf037230bc06]
	I1104 12:12:09.780535   85759 ssh_runner.go:195] Run: which crictl
	I1104 12:12:09.784698   85759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:12:09.784756   85759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:12:09.823042   85759 cri.go:89] found id: "d1f0c1ed5e8911dccb619576be93e1623936a2b05ccc1e80d52ffe8ba1292d27"
	I1104 12:12:09.823059   85759 cri.go:89] found id: ""
	I1104 12:12:09.823068   85759 logs.go:282] 1 containers: [d1f0c1ed5e8911dccb619576be93e1623936a2b05ccc1e80d52ffe8ba1292d27]
	I1104 12:12:09.823123   85759 ssh_runner.go:195] Run: which crictl
	I1104 12:12:09.826750   85759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:12:09.826803   85759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:12:09.859148   85759 cri.go:89] found id: "a5a0cb5f09f9920c57a05bcc9c16cdcab6806c42458b625d4d075a9d1bc3f80f"
	I1104 12:12:09.859175   85759 cri.go:89] found id: ""
	I1104 12:12:09.859185   85759 logs.go:282] 1 containers: [a5a0cb5f09f9920c57a05bcc9c16cdcab6806c42458b625d4d075a9d1bc3f80f]
	I1104 12:12:09.859245   85759 ssh_runner.go:195] Run: which crictl
	I1104 12:12:09.863676   85759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:12:09.863739   85759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:12:09.901737   85759 cri.go:89] found id: "512d8563ff2ef5d1f8021fa8815d00d2705c8df3b079a23ee6fc909af1c980f0"
	I1104 12:12:09.901766   85759 cri.go:89] found id: ""
	I1104 12:12:09.901783   85759 logs.go:282] 1 containers: [512d8563ff2ef5d1f8021fa8815d00d2705c8df3b079a23ee6fc909af1c980f0]
	I1104 12:12:09.901843   85759 ssh_runner.go:195] Run: which crictl
	I1104 12:12:09.905931   85759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:12:09.905993   85759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:12:09.942617   85759 cri.go:89] found id: "5751adaa2cf784f7619b281e11898b312ecc8186b36029e9e8a3b8e484cd703b"
	I1104 12:12:09.942637   85759 cri.go:89] found id: ""
	I1104 12:12:09.942644   85759 logs.go:282] 1 containers: [5751adaa2cf784f7619b281e11898b312ecc8186b36029e9e8a3b8e484cd703b]
	I1104 12:12:09.942687   85759 ssh_runner.go:195] Run: which crictl
	I1104 12:12:09.946420   85759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:12:09.946481   85759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:12:09.984891   85759 cri.go:89] found id: ""
	I1104 12:12:09.984921   85759 logs.go:282] 0 containers: []
	W1104 12:12:09.984932   85759 logs.go:284] No container was found matching "kindnet"
	I1104 12:12:09.984939   85759 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1104 12:12:09.985000   85759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1104 12:12:10.018332   85759 cri.go:89] found id: "95a9eb50a127a39e6276205bf440bcbc662ba25b24e029dc1a667f48f8481dde"
	I1104 12:12:10.018357   85759 cri.go:89] found id: "c7558f4e108716f1710271ea75be748ffd928b609788942a3658f0d2237ebcc7"
	I1104 12:12:10.018363   85759 cri.go:89] found id: ""
	I1104 12:12:10.018374   85759 logs.go:282] 2 containers: [95a9eb50a127a39e6276205bf440bcbc662ba25b24e029dc1a667f48f8481dde c7558f4e108716f1710271ea75be748ffd928b609788942a3658f0d2237ebcc7]
	I1104 12:12:10.018434   85759 ssh_runner.go:195] Run: which crictl
	I1104 12:12:10.022995   85759 ssh_runner.go:195] Run: which crictl
	I1104 12:12:10.026853   85759 logs.go:123] Gathering logs for etcd [5b575c045ea6e47b4e5a1384b6b473570efefa6f645c9c75d1eedf037230bc06] ...
	I1104 12:12:10.026878   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b575c045ea6e47b4e5a1384b6b473570efefa6f645c9c75d1eedf037230bc06"
	I1104 12:12:10.083384   85759 logs.go:123] Gathering logs for kube-controller-manager [5751adaa2cf784f7619b281e11898b312ecc8186b36029e9e8a3b8e484cd703b] ...
	I1104 12:12:10.083421   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5751adaa2cf784f7619b281e11898b312ecc8186b36029e9e8a3b8e484cd703b"
	I1104 12:12:10.136576   85759 logs.go:123] Gathering logs for storage-provisioner [95a9eb50a127a39e6276205bf440bcbc662ba25b24e029dc1a667f48f8481dde] ...
	I1104 12:12:10.136608   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 95a9eb50a127a39e6276205bf440bcbc662ba25b24e029dc1a667f48f8481dde"
	I1104 12:12:10.182808   85759 logs.go:123] Gathering logs for storage-provisioner [c7558f4e108716f1710271ea75be748ffd928b609788942a3658f0d2237ebcc7] ...
	I1104 12:12:10.182837   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c7558f4e108716f1710271ea75be748ffd928b609788942a3658f0d2237ebcc7"
	I1104 12:12:10.217017   85759 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:12:10.217047   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:12:10.598972   85759 logs.go:123] Gathering logs for container status ...
	I1104 12:12:10.599010   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:12:10.638827   85759 logs.go:123] Gathering logs for dmesg ...
	I1104 12:12:10.638868   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:12:10.652880   85759 logs.go:123] Gathering logs for kube-apiserver [6e7999c6e5a24f7d8706b6722e1fe66996ecec4550d4a901729cac1f3f108f28] ...
	I1104 12:12:10.652923   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6e7999c6e5a24f7d8706b6722e1fe66996ecec4550d4a901729cac1f3f108f28"
	I1104 12:12:10.700645   85759 logs.go:123] Gathering logs for coredns [d1f0c1ed5e8911dccb619576be93e1623936a2b05ccc1e80d52ffe8ba1292d27] ...
	I1104 12:12:10.700675   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d1f0c1ed5e8911dccb619576be93e1623936a2b05ccc1e80d52ffe8ba1292d27"
	I1104 12:12:10.734860   85759 logs.go:123] Gathering logs for kube-scheduler [a5a0cb5f09f9920c57a05bcc9c16cdcab6806c42458b625d4d075a9d1bc3f80f] ...
	I1104 12:12:10.734890   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a5a0cb5f09f9920c57a05bcc9c16cdcab6806c42458b625d4d075a9d1bc3f80f"
	I1104 12:12:10.774613   85759 logs.go:123] Gathering logs for kube-proxy [512d8563ff2ef5d1f8021fa8815d00d2705c8df3b079a23ee6fc909af1c980f0] ...
	I1104 12:12:10.774647   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 512d8563ff2ef5d1f8021fa8815d00d2705c8df3b079a23ee6fc909af1c980f0"
	I1104 12:12:10.808375   85759 logs.go:123] Gathering logs for kubelet ...
	I1104 12:12:10.808403   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:12:10.876130   85759 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:12:10.876165   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1104 12:12:08.890463   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:12:08.904272   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:12:08.904354   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:12:08.935677   86402 cri.go:89] found id: ""
	I1104 12:12:08.935701   86402 logs.go:282] 0 containers: []
	W1104 12:12:08.935710   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:12:08.935715   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:12:08.935761   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:12:08.966969   86402 cri.go:89] found id: ""
	I1104 12:12:08.966993   86402 logs.go:282] 0 containers: []
	W1104 12:12:08.967004   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:12:08.967011   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:12:08.967072   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:12:08.998753   86402 cri.go:89] found id: ""
	I1104 12:12:08.998778   86402 logs.go:282] 0 containers: []
	W1104 12:12:08.998786   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:12:08.998790   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:12:08.998852   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:12:09.031901   86402 cri.go:89] found id: ""
	I1104 12:12:09.031925   86402 logs.go:282] 0 containers: []
	W1104 12:12:09.031934   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:12:09.031940   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:12:09.032000   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:12:09.071478   86402 cri.go:89] found id: ""
	I1104 12:12:09.071500   86402 logs.go:282] 0 containers: []
	W1104 12:12:09.071508   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:12:09.071513   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:12:09.071564   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:12:09.107593   86402 cri.go:89] found id: ""
	I1104 12:12:09.107621   86402 logs.go:282] 0 containers: []
	W1104 12:12:09.107629   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:12:09.107635   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:12:09.107693   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:12:09.140899   86402 cri.go:89] found id: ""
	I1104 12:12:09.140923   86402 logs.go:282] 0 containers: []
	W1104 12:12:09.140934   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:12:09.140942   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:12:09.141000   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:12:09.174279   86402 cri.go:89] found id: ""
	I1104 12:12:09.174307   86402 logs.go:282] 0 containers: []
	W1104 12:12:09.174318   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:12:09.174330   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:12:09.174405   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:12:09.226340   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:12:09.226371   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:12:09.239573   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:12:09.239600   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:12:09.306180   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:12:09.306201   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:12:09.306212   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:12:09.385039   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:12:09.385072   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:12:13.475909   85759 system_pods.go:59] 8 kube-system pods found
	I1104 12:12:13.475946   85759 system_pods.go:61] "coredns-7c65d6cfc9-mf8xg" [c0162005-7971-4161-9575-9f36c13d54f2] Running
	I1104 12:12:13.475954   85759 system_pods.go:61] "etcd-embed-certs-325116" [4cfeeefb-d7e4-48b6-bea0-e9d967750770] Running
	I1104 12:12:13.475960   85759 system_pods.go:61] "kube-apiserver-embed-certs-325116" [69ad8209-af11-4479-b4f7-9991f98d74b9] Running
	I1104 12:12:13.475965   85759 system_pods.go:61] "kube-controller-manager-embed-certs-325116" [1ba1fbaf-e1e2-4ca7-aef5-84c4410143c4] Running
	I1104 12:12:13.475970   85759 system_pods.go:61] "kube-proxy-phzgx" [4ea64f2c-7568-486d-9941-f89ed4221f35] Running
	I1104 12:12:13.475975   85759 system_pods.go:61] "kube-scheduler-embed-certs-325116" [168359e4-eda1-4fb6-ab45-03e888466702] Running
	I1104 12:12:13.475985   85759 system_pods.go:61] "metrics-server-6867b74b74-knfd4" [5b3ef856-5b69-44b1-ae29-4a58bf235e41] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1104 12:12:13.475994   85759 system_pods.go:61] "storage-provisioner" [0dabcf5a-028b-4ab6-8af4-be25abaeb9b5] Running
	I1104 12:12:13.476008   85759 system_pods.go:74] duration metric: took 3.773548162s to wait for pod list to return data ...
	I1104 12:12:13.476020   85759 default_sa.go:34] waiting for default service account to be created ...
	I1104 12:12:13.478598   85759 default_sa.go:45] found service account: "default"
	I1104 12:12:13.478618   85759 default_sa.go:55] duration metric: took 2.591186ms for default service account to be created ...
	I1104 12:12:13.478628   85759 system_pods.go:116] waiting for k8s-apps to be running ...
	I1104 12:12:13.483285   85759 system_pods.go:86] 8 kube-system pods found
	I1104 12:12:13.483308   85759 system_pods.go:89] "coredns-7c65d6cfc9-mf8xg" [c0162005-7971-4161-9575-9f36c13d54f2] Running
	I1104 12:12:13.483314   85759 system_pods.go:89] "etcd-embed-certs-325116" [4cfeeefb-d7e4-48b6-bea0-e9d967750770] Running
	I1104 12:12:13.483318   85759 system_pods.go:89] "kube-apiserver-embed-certs-325116" [69ad8209-af11-4479-b4f7-9991f98d74b9] Running
	I1104 12:12:13.483322   85759 system_pods.go:89] "kube-controller-manager-embed-certs-325116" [1ba1fbaf-e1e2-4ca7-aef5-84c4410143c4] Running
	I1104 12:12:13.483325   85759 system_pods.go:89] "kube-proxy-phzgx" [4ea64f2c-7568-486d-9941-f89ed4221f35] Running
	I1104 12:12:13.483329   85759 system_pods.go:89] "kube-scheduler-embed-certs-325116" [168359e4-eda1-4fb6-ab45-03e888466702] Running
	I1104 12:12:13.483336   85759 system_pods.go:89] "metrics-server-6867b74b74-knfd4" [5b3ef856-5b69-44b1-ae29-4a58bf235e41] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1104 12:12:13.483340   85759 system_pods.go:89] "storage-provisioner" [0dabcf5a-028b-4ab6-8af4-be25abaeb9b5] Running
	I1104 12:12:13.483347   85759 system_pods.go:126] duration metric: took 4.713256ms to wait for k8s-apps to be running ...
	I1104 12:12:13.483355   85759 system_svc.go:44] waiting for kubelet service to be running ....
	I1104 12:12:13.483398   85759 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1104 12:12:13.497748   85759 system_svc.go:56] duration metric: took 14.381722ms WaitForService to wait for kubelet
	I1104 12:12:13.497812   85759 kubeadm.go:582] duration metric: took 4m23.411218278s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1104 12:12:13.497843   85759 node_conditions.go:102] verifying NodePressure condition ...
	I1104 12:12:13.500813   85759 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1104 12:12:13.500833   85759 node_conditions.go:123] node cpu capacity is 2
	I1104 12:12:13.500843   85759 node_conditions.go:105] duration metric: took 2.993771ms to run NodePressure ...
	I1104 12:12:13.500854   85759 start.go:241] waiting for startup goroutines ...
	I1104 12:12:13.500860   85759 start.go:246] waiting for cluster config update ...
	I1104 12:12:13.500870   85759 start.go:255] writing updated cluster config ...
	I1104 12:12:13.501122   85759 ssh_runner.go:195] Run: rm -f paused
	I1104 12:12:13.548293   85759 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1104 12:12:13.550203   85759 out.go:177] * Done! kubectl is now configured to use "embed-certs-325116" cluster and "default" namespace by default
	I1104 12:12:10.707746   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:12.708477   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:11.555266   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:13.555498   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:11.924105   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:12:11.936623   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:12:11.936685   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:12:11.968026   86402 cri.go:89] found id: ""
	I1104 12:12:11.968056   86402 logs.go:282] 0 containers: []
	W1104 12:12:11.968067   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:12:11.968074   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:12:11.968139   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:12:12.001193   86402 cri.go:89] found id: ""
	I1104 12:12:12.001218   86402 logs.go:282] 0 containers: []
	W1104 12:12:12.001245   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:12:12.001252   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:12:12.001311   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:12:12.035167   86402 cri.go:89] found id: ""
	I1104 12:12:12.035190   86402 logs.go:282] 0 containers: []
	W1104 12:12:12.035199   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:12:12.035204   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:12:12.035250   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:12:12.068412   86402 cri.go:89] found id: ""
	I1104 12:12:12.068440   86402 logs.go:282] 0 containers: []
	W1104 12:12:12.068450   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:12:12.068458   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:12:12.068515   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:12:12.099965   86402 cri.go:89] found id: ""
	I1104 12:12:12.099991   86402 logs.go:282] 0 containers: []
	W1104 12:12:12.100002   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:12:12.100009   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:12:12.100066   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:12:12.133413   86402 cri.go:89] found id: ""
	I1104 12:12:12.133442   86402 logs.go:282] 0 containers: []
	W1104 12:12:12.133453   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:12:12.133460   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:12:12.133520   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:12:12.169007   86402 cri.go:89] found id: ""
	I1104 12:12:12.169036   86402 logs.go:282] 0 containers: []
	W1104 12:12:12.169046   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:12:12.169053   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:12:12.169112   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:12:12.200592   86402 cri.go:89] found id: ""
	I1104 12:12:12.200621   86402 logs.go:282] 0 containers: []
	W1104 12:12:12.200635   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:12:12.200643   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:12:12.200657   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:12:12.244609   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:12:12.244644   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:12:12.299770   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:12:12.299804   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:12:12.324354   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:12:12.324395   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:12:12.385605   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:12:12.385632   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:12:12.385661   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:12:14.964867   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:12:14.977918   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:12:14.977991   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:12:15.012865   86402 cri.go:89] found id: ""
	I1104 12:12:15.012894   86402 logs.go:282] 0 containers: []
	W1104 12:12:15.012906   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:12:15.012913   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:12:15.012977   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:12:15.046548   86402 cri.go:89] found id: ""
	I1104 12:12:15.046574   86402 logs.go:282] 0 containers: []
	W1104 12:12:15.046583   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:12:15.046589   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:12:15.046636   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:12:15.079310   86402 cri.go:89] found id: ""
	I1104 12:12:15.079336   86402 logs.go:282] 0 containers: []
	W1104 12:12:15.079347   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:12:15.079353   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:12:15.079412   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:12:15.110595   86402 cri.go:89] found id: ""
	I1104 12:12:15.110625   86402 logs.go:282] 0 containers: []
	W1104 12:12:15.110636   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:12:15.110648   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:12:15.110716   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:12:15.143362   86402 cri.go:89] found id: ""
	I1104 12:12:15.143391   86402 logs.go:282] 0 containers: []
	W1104 12:12:15.143403   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:12:15.143410   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:12:15.143533   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:12:15.173973   86402 cri.go:89] found id: ""
	I1104 12:12:15.174000   86402 logs.go:282] 0 containers: []
	W1104 12:12:15.174009   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:12:15.174017   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:12:15.174081   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:12:15.205021   86402 cri.go:89] found id: ""
	I1104 12:12:15.205049   86402 logs.go:282] 0 containers: []
	W1104 12:12:15.205060   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:12:15.205067   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:12:15.205113   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:12:15.240190   86402 cri.go:89] found id: ""
	I1104 12:12:15.240220   86402 logs.go:282] 0 containers: []
	W1104 12:12:15.240231   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:12:15.240249   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:12:15.240263   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:12:15.290208   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:12:15.290241   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:12:15.305216   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:12:15.305258   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:12:15.375713   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:12:15.375735   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:12:15.375746   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:12:15.456517   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:12:15.456552   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:12:15.209380   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:17.708299   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:16.056359   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:18.556166   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:20.050834   86301 pod_ready.go:82] duration metric: took 4m0.001048639s for pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace to be "Ready" ...
	E1104 12:12:20.050863   86301 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1104 12:12:20.050874   86301 pod_ready.go:39] duration metric: took 4m5.585310983s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1104 12:12:20.050889   86301 api_server.go:52] waiting for apiserver process to appear ...
	I1104 12:12:20.050919   86301 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:12:20.050968   86301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:12:20.088440   86301 cri.go:89] found id: "2e1787441f88b7be6fbfb08abfa621dbc26e984bbd40544c92bf02bae7c7709a"
	I1104 12:12:20.088466   86301 cri.go:89] found id: ""
	I1104 12:12:20.088476   86301 logs.go:282] 1 containers: [2e1787441f88b7be6fbfb08abfa621dbc26e984bbd40544c92bf02bae7c7709a]
	I1104 12:12:20.088523   86301 ssh_runner.go:195] Run: which crictl
	I1104 12:12:20.092502   86301 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:12:20.092575   86301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:12:20.126599   86301 cri.go:89] found id: "1bc906f9e4e940157a2bb10397ebc3ba90f93123792d5de597633f6c5b3c64d7"
	I1104 12:12:20.126621   86301 cri.go:89] found id: ""
	I1104 12:12:20.126629   86301 logs.go:282] 1 containers: [1bc906f9e4e940157a2bb10397ebc3ba90f93123792d5de597633f6c5b3c64d7]
	I1104 12:12:20.126687   86301 ssh_runner.go:195] Run: which crictl
	I1104 12:12:20.130617   86301 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:12:20.130686   86301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:12:20.169664   86301 cri.go:89] found id: "51442200af1bb501b83cd7280eeee93590e5cb241e91cf5af1608a6eccfdf5a1"
	I1104 12:12:20.169687   86301 cri.go:89] found id: ""
	I1104 12:12:20.169696   86301 logs.go:282] 1 containers: [51442200af1bb501b83cd7280eeee93590e5cb241e91cf5af1608a6eccfdf5a1]
	I1104 12:12:20.169750   86301 ssh_runner.go:195] Run: which crictl
	I1104 12:12:20.173881   86301 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:12:20.173920   86301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:12:20.209271   86301 cri.go:89] found id: "c33ea99d25624abd85dcac43e17e1dd6dcea3a7b6333e91e8dc99ad02c037e07"
	I1104 12:12:20.209292   86301 cri.go:89] found id: ""
	I1104 12:12:20.209299   86301 logs.go:282] 1 containers: [c33ea99d25624abd85dcac43e17e1dd6dcea3a7b6333e91e8dc99ad02c037e07]
	I1104 12:12:20.209354   86301 ssh_runner.go:195] Run: which crictl
	I1104 12:12:20.214187   86301 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:12:20.214254   86301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:12:20.248683   86301 cri.go:89] found id: "9e60ae78d5610ae63ce7016b58d60da05791a055324eea67efd0feb374bdd4b4"
	I1104 12:12:20.248702   86301 cri.go:89] found id: ""
	I1104 12:12:20.248709   86301 logs.go:282] 1 containers: [9e60ae78d5610ae63ce7016b58d60da05791a055324eea67efd0feb374bdd4b4]
	I1104 12:12:20.248757   86301 ssh_runner.go:195] Run: which crictl
	I1104 12:12:20.252501   86301 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:12:20.252574   86301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:12:20.286367   86301 cri.go:89] found id: "1346cefb5059440d0c41bba9a5526748e6c783b8f935f34dcf419f728abcd35e"
	I1104 12:12:20.286406   86301 cri.go:89] found id: ""
	I1104 12:12:20.286415   86301 logs.go:282] 1 containers: [1346cefb5059440d0c41bba9a5526748e6c783b8f935f34dcf419f728abcd35e]
	I1104 12:12:20.286491   86301 ssh_runner.go:195] Run: which crictl
	I1104 12:12:17.992855   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:12:18.011370   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:12:18.011446   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:12:18.054937   86402 cri.go:89] found id: ""
	I1104 12:12:18.054961   86402 logs.go:282] 0 containers: []
	W1104 12:12:18.054968   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:12:18.054974   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:12:18.055026   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:12:18.107769   86402 cri.go:89] found id: ""
	I1104 12:12:18.107802   86402 logs.go:282] 0 containers: []
	W1104 12:12:18.107814   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:12:18.107821   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:12:18.107887   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:12:18.141932   86402 cri.go:89] found id: ""
	I1104 12:12:18.141959   86402 logs.go:282] 0 containers: []
	W1104 12:12:18.141968   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:12:18.141974   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:12:18.142021   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:12:18.174322   86402 cri.go:89] found id: ""
	I1104 12:12:18.174345   86402 logs.go:282] 0 containers: []
	W1104 12:12:18.174353   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:12:18.174361   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:12:18.174514   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:12:18.206742   86402 cri.go:89] found id: ""
	I1104 12:12:18.206766   86402 logs.go:282] 0 containers: []
	W1104 12:12:18.206776   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:12:18.206782   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:12:18.206840   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:12:18.240322   86402 cri.go:89] found id: ""
	I1104 12:12:18.240345   86402 logs.go:282] 0 containers: []
	W1104 12:12:18.240358   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:12:18.240363   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:12:18.240420   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:12:18.272081   86402 cri.go:89] found id: ""
	I1104 12:12:18.272110   86402 logs.go:282] 0 containers: []
	W1104 12:12:18.272121   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:12:18.272128   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:12:18.272211   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:12:18.308604   86402 cri.go:89] found id: ""
	I1104 12:12:18.308629   86402 logs.go:282] 0 containers: []
	W1104 12:12:18.308637   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:12:18.308646   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:12:18.308655   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:12:18.392854   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:12:18.392892   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:12:18.429632   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:12:18.429665   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:12:18.481082   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:12:18.481120   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:12:18.494730   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:12:18.494758   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:12:18.562098   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:12:21.063223   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:12:21.075655   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:12:21.075714   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:12:21.117762   86402 cri.go:89] found id: ""
	I1104 12:12:21.117794   86402 logs.go:282] 0 containers: []
	W1104 12:12:21.117807   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:12:21.117817   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:12:21.117881   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:12:21.153256   86402 cri.go:89] found id: ""
	I1104 12:12:21.153281   86402 logs.go:282] 0 containers: []
	W1104 12:12:21.153289   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:12:21.153295   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:12:21.153355   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:12:21.191477   86402 cri.go:89] found id: ""
	I1104 12:12:21.191519   86402 logs.go:282] 0 containers: []
	W1104 12:12:21.191539   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:12:21.191547   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:12:21.191618   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:12:21.228378   86402 cri.go:89] found id: ""
	I1104 12:12:21.228411   86402 logs.go:282] 0 containers: []
	W1104 12:12:21.228424   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:12:21.228431   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:12:21.228495   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:12:21.265452   86402 cri.go:89] found id: ""
	I1104 12:12:21.265483   86402 logs.go:282] 0 containers: []
	W1104 12:12:21.265493   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:12:21.265501   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:12:21.265561   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:12:21.301073   86402 cri.go:89] found id: ""
	I1104 12:12:21.301099   86402 logs.go:282] 0 containers: []
	W1104 12:12:21.301108   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:12:21.301114   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:12:21.301182   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:12:21.337952   86402 cri.go:89] found id: ""
	I1104 12:12:21.337977   86402 logs.go:282] 0 containers: []
	W1104 12:12:21.337986   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:12:21.337996   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:12:21.338053   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:12:21.371895   86402 cri.go:89] found id: ""
	I1104 12:12:21.371920   86402 logs.go:282] 0 containers: []
	W1104 12:12:21.371929   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:12:21.371937   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:12:21.371950   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:12:21.429757   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:12:21.429789   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:12:21.444365   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:12:21.444418   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:12:21.510971   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:12:21.510990   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:12:21.511002   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:12:21.593605   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:12:21.593639   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:12:20.208004   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:22.706901   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:24.708795   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:20.290832   86301 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:12:20.290885   86301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:12:20.324359   86301 cri.go:89] found id: ""
	I1104 12:12:20.324383   86301 logs.go:282] 0 containers: []
	W1104 12:12:20.324391   86301 logs.go:284] No container was found matching "kindnet"
	I1104 12:12:20.324397   86301 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1104 12:12:20.324442   86301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1104 12:12:20.364466   86301 cri.go:89] found id: "9e9ecf7280a07a47b274097b461b9f3467e388751471e9da1adfae3166380516"
	I1104 12:12:20.364488   86301 cri.go:89] found id: "f8d8096ede6a877ac87491c90844f6cccb8fad4309f4a01e86bf1f7e3a4c9823"
	I1104 12:12:20.364492   86301 cri.go:89] found id: ""
	I1104 12:12:20.364500   86301 logs.go:282] 2 containers: [9e9ecf7280a07a47b274097b461b9f3467e388751471e9da1adfae3166380516 f8d8096ede6a877ac87491c90844f6cccb8fad4309f4a01e86bf1f7e3a4c9823]
	I1104 12:12:20.364557   86301 ssh_runner.go:195] Run: which crictl
	I1104 12:12:20.368440   86301 ssh_runner.go:195] Run: which crictl
	I1104 12:12:20.371967   86301 logs.go:123] Gathering logs for kube-scheduler [c33ea99d25624abd85dcac43e17e1dd6dcea3a7b6333e91e8dc99ad02c037e07] ...
	I1104 12:12:20.371991   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c33ea99d25624abd85dcac43e17e1dd6dcea3a7b6333e91e8dc99ad02c037e07"
	I1104 12:12:20.405547   86301 logs.go:123] Gathering logs for kube-proxy [9e60ae78d5610ae63ce7016b58d60da05791a055324eea67efd0feb374bdd4b4] ...
	I1104 12:12:20.405572   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e60ae78d5610ae63ce7016b58d60da05791a055324eea67efd0feb374bdd4b4"
	I1104 12:12:20.446936   86301 logs.go:123] Gathering logs for storage-provisioner [9e9ecf7280a07a47b274097b461b9f3467e388751471e9da1adfae3166380516] ...
	I1104 12:12:20.446962   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e9ecf7280a07a47b274097b461b9f3467e388751471e9da1adfae3166380516"
	I1104 12:12:20.485811   86301 logs.go:123] Gathering logs for container status ...
	I1104 12:12:20.485838   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:12:20.530775   86301 logs.go:123] Gathering logs for kubelet ...
	I1104 12:12:20.530803   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:12:20.599495   86301 logs.go:123] Gathering logs for dmesg ...
	I1104 12:12:20.599542   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:12:20.614511   86301 logs.go:123] Gathering logs for kube-apiserver [2e1787441f88b7be6fbfb08abfa621dbc26e984bbd40544c92bf02bae7c7709a] ...
	I1104 12:12:20.614543   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2e1787441f88b7be6fbfb08abfa621dbc26e984bbd40544c92bf02bae7c7709a"
	I1104 12:12:20.659277   86301 logs.go:123] Gathering logs for coredns [51442200af1bb501b83cd7280eeee93590e5cb241e91cf5af1608a6eccfdf5a1] ...
	I1104 12:12:20.659316   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 51442200af1bb501b83cd7280eeee93590e5cb241e91cf5af1608a6eccfdf5a1"
	I1104 12:12:20.694675   86301 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:12:20.694707   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:12:21.187670   86301 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:12:21.187705   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1104 12:12:21.308477   86301 logs.go:123] Gathering logs for etcd [1bc906f9e4e940157a2bb10397ebc3ba90f93123792d5de597633f6c5b3c64d7] ...
	I1104 12:12:21.308501   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1bc906f9e4e940157a2bb10397ebc3ba90f93123792d5de597633f6c5b3c64d7"
	I1104 12:12:21.365526   86301 logs.go:123] Gathering logs for kube-controller-manager [1346cefb5059440d0c41bba9a5526748e6c783b8f935f34dcf419f728abcd35e] ...
	I1104 12:12:21.365562   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1346cefb5059440d0c41bba9a5526748e6c783b8f935f34dcf419f728abcd35e"
	I1104 12:12:21.431350   86301 logs.go:123] Gathering logs for storage-provisioner [f8d8096ede6a877ac87491c90844f6cccb8fad4309f4a01e86bf1f7e3a4c9823] ...
	I1104 12:12:21.431381   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f8d8096ede6a877ac87491c90844f6cccb8fad4309f4a01e86bf1f7e3a4c9823"
	I1104 12:12:23.969966   86301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:12:23.984866   86301 api_server.go:72] duration metric: took 4m16.75797908s to wait for apiserver process to appear ...
	I1104 12:12:23.984895   86301 api_server.go:88] waiting for apiserver healthz status ...
	I1104 12:12:23.984937   86301 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:12:23.984989   86301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:12:24.022326   86301 cri.go:89] found id: "2e1787441f88b7be6fbfb08abfa621dbc26e984bbd40544c92bf02bae7c7709a"
	I1104 12:12:24.022348   86301 cri.go:89] found id: ""
	I1104 12:12:24.022357   86301 logs.go:282] 1 containers: [2e1787441f88b7be6fbfb08abfa621dbc26e984bbd40544c92bf02bae7c7709a]
	I1104 12:12:24.022428   86301 ssh_runner.go:195] Run: which crictl
	I1104 12:12:24.027288   86301 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:12:24.027377   86301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:12:24.064963   86301 cri.go:89] found id: "1bc906f9e4e940157a2bb10397ebc3ba90f93123792d5de597633f6c5b3c64d7"
	I1104 12:12:24.064986   86301 cri.go:89] found id: ""
	I1104 12:12:24.064993   86301 logs.go:282] 1 containers: [1bc906f9e4e940157a2bb10397ebc3ba90f93123792d5de597633f6c5b3c64d7]
	I1104 12:12:24.065045   86301 ssh_runner.go:195] Run: which crictl
	I1104 12:12:24.072027   86301 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:12:24.072089   86301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:12:24.106618   86301 cri.go:89] found id: "51442200af1bb501b83cd7280eeee93590e5cb241e91cf5af1608a6eccfdf5a1"
	I1104 12:12:24.106648   86301 cri.go:89] found id: ""
	I1104 12:12:24.106659   86301 logs.go:282] 1 containers: [51442200af1bb501b83cd7280eeee93590e5cb241e91cf5af1608a6eccfdf5a1]
	I1104 12:12:24.106719   86301 ssh_runner.go:195] Run: which crictl
	I1104 12:12:24.110696   86301 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:12:24.110762   86301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:12:24.148575   86301 cri.go:89] found id: "c33ea99d25624abd85dcac43e17e1dd6dcea3a7b6333e91e8dc99ad02c037e07"
	I1104 12:12:24.148600   86301 cri.go:89] found id: ""
	I1104 12:12:24.148621   86301 logs.go:282] 1 containers: [c33ea99d25624abd85dcac43e17e1dd6dcea3a7b6333e91e8dc99ad02c037e07]
	I1104 12:12:24.148687   86301 ssh_runner.go:195] Run: which crictl
	I1104 12:12:24.152673   86301 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:12:24.152741   86301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:12:24.187739   86301 cri.go:89] found id: "9e60ae78d5610ae63ce7016b58d60da05791a055324eea67efd0feb374bdd4b4"
	I1104 12:12:24.187763   86301 cri.go:89] found id: ""
	I1104 12:12:24.187771   86301 logs.go:282] 1 containers: [9e60ae78d5610ae63ce7016b58d60da05791a055324eea67efd0feb374bdd4b4]
	I1104 12:12:24.187817   86301 ssh_runner.go:195] Run: which crictl
	I1104 12:12:24.191551   86301 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:12:24.191610   86301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:12:24.229634   86301 cri.go:89] found id: "1346cefb5059440d0c41bba9a5526748e6c783b8f935f34dcf419f728abcd35e"
	I1104 12:12:24.229656   86301 cri.go:89] found id: ""
	I1104 12:12:24.229667   86301 logs.go:282] 1 containers: [1346cefb5059440d0c41bba9a5526748e6c783b8f935f34dcf419f728abcd35e]
	I1104 12:12:24.229720   86301 ssh_runner.go:195] Run: which crictl
	I1104 12:12:24.234342   86301 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:12:24.234426   86301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:12:24.268339   86301 cri.go:89] found id: ""
	I1104 12:12:24.268363   86301 logs.go:282] 0 containers: []
	W1104 12:12:24.268370   86301 logs.go:284] No container was found matching "kindnet"
	I1104 12:12:24.268375   86301 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1104 12:12:24.268426   86301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1104 12:12:24.302347   86301 cri.go:89] found id: "9e9ecf7280a07a47b274097b461b9f3467e388751471e9da1adfae3166380516"
	I1104 12:12:24.302369   86301 cri.go:89] found id: "f8d8096ede6a877ac87491c90844f6cccb8fad4309f4a01e86bf1f7e3a4c9823"
	I1104 12:12:24.302374   86301 cri.go:89] found id: ""
	I1104 12:12:24.302382   86301 logs.go:282] 2 containers: [9e9ecf7280a07a47b274097b461b9f3467e388751471e9da1adfae3166380516 f8d8096ede6a877ac87491c90844f6cccb8fad4309f4a01e86bf1f7e3a4c9823]
	I1104 12:12:24.302446   86301 ssh_runner.go:195] Run: which crictl
	I1104 12:12:24.306761   86301 ssh_runner.go:195] Run: which crictl
	I1104 12:12:24.310867   86301 logs.go:123] Gathering logs for coredns [51442200af1bb501b83cd7280eeee93590e5cb241e91cf5af1608a6eccfdf5a1] ...
	I1104 12:12:24.310888   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 51442200af1bb501b83cd7280eeee93590e5cb241e91cf5af1608a6eccfdf5a1"
	I1104 12:12:24.353396   86301 logs.go:123] Gathering logs for kube-controller-manager [1346cefb5059440d0c41bba9a5526748e6c783b8f935f34dcf419f728abcd35e] ...
	I1104 12:12:24.353421   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1346cefb5059440d0c41bba9a5526748e6c783b8f935f34dcf419f728abcd35e"
	I1104 12:12:24.408025   86301 logs.go:123] Gathering logs for storage-provisioner [9e9ecf7280a07a47b274097b461b9f3467e388751471e9da1adfae3166380516] ...
	I1104 12:12:24.408054   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e9ecf7280a07a47b274097b461b9f3467e388751471e9da1adfae3166380516"
	I1104 12:12:24.446150   86301 logs.go:123] Gathering logs for container status ...
	I1104 12:12:24.446177   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:12:24.495479   86301 logs.go:123] Gathering logs for kubelet ...
	I1104 12:12:24.495505   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:12:24.568973   86301 logs.go:123] Gathering logs for dmesg ...
	I1104 12:12:24.569008   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:12:24.585522   86301 logs.go:123] Gathering logs for kube-apiserver [2e1787441f88b7be6fbfb08abfa621dbc26e984bbd40544c92bf02bae7c7709a] ...
	I1104 12:12:24.585552   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2e1787441f88b7be6fbfb08abfa621dbc26e984bbd40544c92bf02bae7c7709a"
	I1104 12:12:24.630483   86301 logs.go:123] Gathering logs for etcd [1bc906f9e4e940157a2bb10397ebc3ba90f93123792d5de597633f6c5b3c64d7] ...
	I1104 12:12:24.630516   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1bc906f9e4e940157a2bb10397ebc3ba90f93123792d5de597633f6c5b3c64d7"
	I1104 12:12:24.675828   86301 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:12:24.675865   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:12:25.094412   86301 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:12:25.094457   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1104 12:12:25.191547   86301 logs.go:123] Gathering logs for kube-scheduler [c33ea99d25624abd85dcac43e17e1dd6dcea3a7b6333e91e8dc99ad02c037e07] ...
	I1104 12:12:25.191576   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c33ea99d25624abd85dcac43e17e1dd6dcea3a7b6333e91e8dc99ad02c037e07"
	I1104 12:12:25.227482   86301 logs.go:123] Gathering logs for kube-proxy [9e60ae78d5610ae63ce7016b58d60da05791a055324eea67efd0feb374bdd4b4] ...
	I1104 12:12:25.227509   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e60ae78d5610ae63ce7016b58d60da05791a055324eea67efd0feb374bdd4b4"
	I1104 12:12:25.261150   86301 logs.go:123] Gathering logs for storage-provisioner [f8d8096ede6a877ac87491c90844f6cccb8fad4309f4a01e86bf1f7e3a4c9823] ...
	I1104 12:12:25.261184   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f8d8096ede6a877ac87491c90844f6cccb8fad4309f4a01e86bf1f7e3a4c9823"
	I1104 12:12:24.130961   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:12:24.143387   86402 kubeadm.go:597] duration metric: took 4m4.25221988s to restartPrimaryControlPlane
	W1104 12:12:24.143472   86402 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1104 12:12:24.143499   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1104 12:12:27.207964   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:29.208705   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:27.799329   86301 api_server.go:253] Checking apiserver healthz at https://192.168.72.130:8444/healthz ...
	I1104 12:12:27.803543   86301 api_server.go:279] https://192.168.72.130:8444/healthz returned 200:
	ok
	I1104 12:12:27.804545   86301 api_server.go:141] control plane version: v1.31.2
	I1104 12:12:27.804568   86301 api_server.go:131] duration metric: took 3.819666619s to wait for apiserver health ...
	I1104 12:12:27.804576   86301 system_pods.go:43] waiting for kube-system pods to appear ...
	I1104 12:12:27.804596   86301 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:12:27.804639   86301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:12:27.842317   86301 cri.go:89] found id: "2e1787441f88b7be6fbfb08abfa621dbc26e984bbd40544c92bf02bae7c7709a"
	I1104 12:12:27.842339   86301 cri.go:89] found id: ""
	I1104 12:12:27.842348   86301 logs.go:282] 1 containers: [2e1787441f88b7be6fbfb08abfa621dbc26e984bbd40544c92bf02bae7c7709a]
	I1104 12:12:27.842403   86301 ssh_runner.go:195] Run: which crictl
	I1104 12:12:27.846107   86301 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:12:27.846167   86301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:12:27.878833   86301 cri.go:89] found id: "1bc906f9e4e940157a2bb10397ebc3ba90f93123792d5de597633f6c5b3c64d7"
	I1104 12:12:27.878854   86301 cri.go:89] found id: ""
	I1104 12:12:27.878864   86301 logs.go:282] 1 containers: [1bc906f9e4e940157a2bb10397ebc3ba90f93123792d5de597633f6c5b3c64d7]
	I1104 12:12:27.878923   86301 ssh_runner.go:195] Run: which crictl
	I1104 12:12:27.882562   86301 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:12:27.882614   86301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:12:27.914077   86301 cri.go:89] found id: "51442200af1bb501b83cd7280eeee93590e5cb241e91cf5af1608a6eccfdf5a1"
	I1104 12:12:27.914098   86301 cri.go:89] found id: ""
	I1104 12:12:27.914106   86301 logs.go:282] 1 containers: [51442200af1bb501b83cd7280eeee93590e5cb241e91cf5af1608a6eccfdf5a1]
	I1104 12:12:27.914150   86301 ssh_runner.go:195] Run: which crictl
	I1104 12:12:27.917756   86301 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:12:27.917807   86301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:12:27.949534   86301 cri.go:89] found id: "c33ea99d25624abd85dcac43e17e1dd6dcea3a7b6333e91e8dc99ad02c037e07"
	I1104 12:12:27.949555   86301 cri.go:89] found id: ""
	I1104 12:12:27.949562   86301 logs.go:282] 1 containers: [c33ea99d25624abd85dcac43e17e1dd6dcea3a7b6333e91e8dc99ad02c037e07]
	I1104 12:12:27.949606   86301 ssh_runner.go:195] Run: which crictl
	I1104 12:12:27.953176   86301 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:12:27.953235   86301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:12:27.984491   86301 cri.go:89] found id: "9e60ae78d5610ae63ce7016b58d60da05791a055324eea67efd0feb374bdd4b4"
	I1104 12:12:27.984509   86301 cri.go:89] found id: ""
	I1104 12:12:27.984516   86301 logs.go:282] 1 containers: [9e60ae78d5610ae63ce7016b58d60da05791a055324eea67efd0feb374bdd4b4]
	I1104 12:12:27.984566   86301 ssh_runner.go:195] Run: which crictl
	I1104 12:12:27.988283   86301 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:12:27.988342   86301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:12:28.022752   86301 cri.go:89] found id: "1346cefb5059440d0c41bba9a5526748e6c783b8f935f34dcf419f728abcd35e"
	I1104 12:12:28.022775   86301 cri.go:89] found id: ""
	I1104 12:12:28.022783   86301 logs.go:282] 1 containers: [1346cefb5059440d0c41bba9a5526748e6c783b8f935f34dcf419f728abcd35e]
	I1104 12:12:28.022829   86301 ssh_runner.go:195] Run: which crictl
	I1104 12:12:28.026702   86301 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:12:28.026767   86301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:12:28.062501   86301 cri.go:89] found id: ""
	I1104 12:12:28.062534   86301 logs.go:282] 0 containers: []
	W1104 12:12:28.062545   86301 logs.go:284] No container was found matching "kindnet"
	I1104 12:12:28.062556   86301 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1104 12:12:28.062608   86301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1104 12:12:28.097167   86301 cri.go:89] found id: "9e9ecf7280a07a47b274097b461b9f3467e388751471e9da1adfae3166380516"
	I1104 12:12:28.097195   86301 cri.go:89] found id: "f8d8096ede6a877ac87491c90844f6cccb8fad4309f4a01e86bf1f7e3a4c9823"
	I1104 12:12:28.097201   86301 cri.go:89] found id: ""
	I1104 12:12:28.097211   86301 logs.go:282] 2 containers: [9e9ecf7280a07a47b274097b461b9f3467e388751471e9da1adfae3166380516 f8d8096ede6a877ac87491c90844f6cccb8fad4309f4a01e86bf1f7e3a4c9823]
	I1104 12:12:28.097276   86301 ssh_runner.go:195] Run: which crictl
	I1104 12:12:28.101192   86301 ssh_runner.go:195] Run: which crictl
	I1104 12:12:28.104712   86301 logs.go:123] Gathering logs for dmesg ...
	I1104 12:12:28.104731   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:12:28.118886   86301 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:12:28.118911   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1104 12:12:28.220480   86301 logs.go:123] Gathering logs for etcd [1bc906f9e4e940157a2bb10397ebc3ba90f93123792d5de597633f6c5b3c64d7] ...
	I1104 12:12:28.220512   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1bc906f9e4e940157a2bb10397ebc3ba90f93123792d5de597633f6c5b3c64d7"
	I1104 12:12:28.264205   86301 logs.go:123] Gathering logs for coredns [51442200af1bb501b83cd7280eeee93590e5cb241e91cf5af1608a6eccfdf5a1] ...
	I1104 12:12:28.264239   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 51442200af1bb501b83cd7280eeee93590e5cb241e91cf5af1608a6eccfdf5a1"
	I1104 12:12:28.299241   86301 logs.go:123] Gathering logs for kube-scheduler [c33ea99d25624abd85dcac43e17e1dd6dcea3a7b6333e91e8dc99ad02c037e07] ...
	I1104 12:12:28.299274   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c33ea99d25624abd85dcac43e17e1dd6dcea3a7b6333e91e8dc99ad02c037e07"
	I1104 12:12:28.339817   86301 logs.go:123] Gathering logs for kube-proxy [9e60ae78d5610ae63ce7016b58d60da05791a055324eea67efd0feb374bdd4b4] ...
	I1104 12:12:28.339847   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e60ae78d5610ae63ce7016b58d60da05791a055324eea67efd0feb374bdd4b4"
	I1104 12:12:28.377987   86301 logs.go:123] Gathering logs for container status ...
	I1104 12:12:28.378014   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:12:28.416746   86301 logs.go:123] Gathering logs for kubelet ...
	I1104 12:12:28.416772   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:12:28.484743   86301 logs.go:123] Gathering logs for kube-apiserver [2e1787441f88b7be6fbfb08abfa621dbc26e984bbd40544c92bf02bae7c7709a] ...
	I1104 12:12:28.484777   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2e1787441f88b7be6fbfb08abfa621dbc26e984bbd40544c92bf02bae7c7709a"
	I1104 12:12:28.532089   86301 logs.go:123] Gathering logs for kube-controller-manager [1346cefb5059440d0c41bba9a5526748e6c783b8f935f34dcf419f728abcd35e] ...
	I1104 12:12:28.532128   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1346cefb5059440d0c41bba9a5526748e6c783b8f935f34dcf419f728abcd35e"
	I1104 12:12:28.589039   86301 logs.go:123] Gathering logs for storage-provisioner [9e9ecf7280a07a47b274097b461b9f3467e388751471e9da1adfae3166380516] ...
	I1104 12:12:28.589072   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e9ecf7280a07a47b274097b461b9f3467e388751471e9da1adfae3166380516"
	I1104 12:12:28.623955   86301 logs.go:123] Gathering logs for storage-provisioner [f8d8096ede6a877ac87491c90844f6cccb8fad4309f4a01e86bf1f7e3a4c9823] ...
	I1104 12:12:28.623987   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f8d8096ede6a877ac87491c90844f6cccb8fad4309f4a01e86bf1f7e3a4c9823"
	I1104 12:12:28.657953   86301 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:12:28.657986   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:12:31.547595   86301 system_pods.go:59] 8 kube-system pods found
	I1104 12:12:31.547624   86301 system_pods.go:61] "coredns-7c65d6cfc9-zw2tv" [71ce75a4-f051-4014-9ed0-7b275ea940a9] Running
	I1104 12:12:31.547629   86301 system_pods.go:61] "etcd-default-k8s-diff-port-036892" [7e46d97c-96b5-4301-b98a-f33b69937411] Running
	I1104 12:12:31.547633   86301 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-036892" [483cebd0-7ceb-4bf4-b1f9-e33be61b136e] Running
	I1104 12:12:31.547637   86301 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-036892" [c2dc4343-177a-4a4c-8a25-47078ec664f1] Running
	I1104 12:12:31.547640   86301 system_pods.go:61] "kube-proxy-j2srm" [9450cebd-aefb-4f1a-bb99-7d1dab054dd7] Running
	I1104 12:12:31.547643   86301 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-036892" [505d8202-5e02-4abd-9eff-163810a91eb2] Running
	I1104 12:12:31.547649   86301 system_pods.go:61] "metrics-server-6867b74b74-2wl94" [7f7cc9c1-420c-480e-b6b7-1a2027bf2f9d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1104 12:12:31.547653   86301 system_pods.go:61] "storage-provisioner" [18745f89-fc15-4a4c-b68b-7e80cd4f393b] Running
	I1104 12:12:31.547661   86301 system_pods.go:74] duration metric: took 3.743079115s to wait for pod list to return data ...
	I1104 12:12:31.547667   86301 default_sa.go:34] waiting for default service account to be created ...
	I1104 12:12:31.550088   86301 default_sa.go:45] found service account: "default"
	I1104 12:12:31.550108   86301 default_sa.go:55] duration metric: took 2.435317ms for default service account to be created ...
	I1104 12:12:31.550114   86301 system_pods.go:116] waiting for k8s-apps to be running ...
	I1104 12:12:31.554898   86301 system_pods.go:86] 8 kube-system pods found
	I1104 12:12:31.554924   86301 system_pods.go:89] "coredns-7c65d6cfc9-zw2tv" [71ce75a4-f051-4014-9ed0-7b275ea940a9] Running
	I1104 12:12:31.554929   86301 system_pods.go:89] "etcd-default-k8s-diff-port-036892" [7e46d97c-96b5-4301-b98a-f33b69937411] Running
	I1104 12:12:31.554933   86301 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-036892" [483cebd0-7ceb-4bf4-b1f9-e33be61b136e] Running
	I1104 12:12:31.554937   86301 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-036892" [c2dc4343-177a-4a4c-8a25-47078ec664f1] Running
	I1104 12:12:31.554941   86301 system_pods.go:89] "kube-proxy-j2srm" [9450cebd-aefb-4f1a-bb99-7d1dab054dd7] Running
	I1104 12:12:31.554945   86301 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-036892" [505d8202-5e02-4abd-9eff-163810a91eb2] Running
	I1104 12:12:31.554952   86301 system_pods.go:89] "metrics-server-6867b74b74-2wl94" [7f7cc9c1-420c-480e-b6b7-1a2027bf2f9d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1104 12:12:31.554955   86301 system_pods.go:89] "storage-provisioner" [18745f89-fc15-4a4c-b68b-7e80cd4f393b] Running
	I1104 12:12:31.554962   86301 system_pods.go:126] duration metric: took 4.842911ms to wait for k8s-apps to be running ...
	I1104 12:12:31.554968   86301 system_svc.go:44] waiting for kubelet service to be running ....
	I1104 12:12:31.555008   86301 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1104 12:12:31.568927   86301 system_svc.go:56] duration metric: took 13.948557ms WaitForService to wait for kubelet
	I1104 12:12:31.568958   86301 kubeadm.go:582] duration metric: took 4m24.342075873s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1104 12:12:31.568987   86301 node_conditions.go:102] verifying NodePressure condition ...
	I1104 12:12:31.571962   86301 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1104 12:12:31.571983   86301 node_conditions.go:123] node cpu capacity is 2
	I1104 12:12:31.571993   86301 node_conditions.go:105] duration metric: took 3.000591ms to run NodePressure ...
	I1104 12:12:31.572004   86301 start.go:241] waiting for startup goroutines ...
	I1104 12:12:31.572010   86301 start.go:246] waiting for cluster config update ...
	I1104 12:12:31.572019   86301 start.go:255] writing updated cluster config ...
	I1104 12:12:31.572277   86301 ssh_runner.go:195] Run: rm -f paused
	I1104 12:12:31.620935   86301 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1104 12:12:31.623672   86301 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-036892" cluster and "default" namespace by default
	I1104 12:12:28.876306   86402 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.732783523s)
	I1104 12:12:28.876377   86402 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1104 12:12:28.890455   86402 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1104 12:12:28.899660   86402 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1104 12:12:28.908658   86402 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1104 12:12:28.908675   86402 kubeadm.go:157] found existing configuration files:
	
	I1104 12:12:28.908715   86402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1104 12:12:28.916955   86402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1104 12:12:28.917013   86402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1104 12:12:28.927198   86402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1104 12:12:28.936868   86402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1104 12:12:28.936924   86402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1104 12:12:28.947246   86402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1104 12:12:28.956962   86402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1104 12:12:28.957015   86402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1104 12:12:28.967293   86402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1104 12:12:28.976975   86402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1104 12:12:28.977030   86402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1104 12:12:28.988547   86402 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1104 12:12:29.198333   86402 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1104 12:12:31.709511   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:34.207341   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:36.707962   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:39.208138   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:41.208806   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:43.707896   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:46.207316   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:48.707107   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:50.707644   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:52.708268   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:54.708517   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:57.206564   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:59.207122   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:13:01.207195   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:13:03.207617   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:13:05.707763   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:13:07.708314   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:13:09.708374   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:13:10.702085   85500 pod_ready.go:82] duration metric: took 4m0.000587313s for pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace to be "Ready" ...
	E1104 12:13:10.702115   85500 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1104 12:13:10.702126   85500 pod_ready.go:39] duration metric: took 4m5.542549912s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1104 12:13:10.702144   85500 api_server.go:52] waiting for apiserver process to appear ...
	I1104 12:13:10.702191   85500 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:13:10.702246   85500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:13:10.743079   85500 cri.go:89] found id: "e74398c77b3ca0adb85872c2f97209b51f4c13968147f500a41590f72d758dea"
	I1104 12:13:10.743102   85500 cri.go:89] found id: ""
	I1104 12:13:10.743110   85500 logs.go:282] 1 containers: [e74398c77b3ca0adb85872c2f97209b51f4c13968147f500a41590f72d758dea]
	I1104 12:13:10.743176   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:13:10.747213   85500 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:13:10.747275   85500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:13:10.781435   85500 cri.go:89] found id: "1390676564c7e4039f73139668fc6a3c321c186b612f1f1837e21788a5c0aa82"
	I1104 12:13:10.781465   85500 cri.go:89] found id: ""
	I1104 12:13:10.781474   85500 logs.go:282] 1 containers: [1390676564c7e4039f73139668fc6a3c321c186b612f1f1837e21788a5c0aa82]
	I1104 12:13:10.781597   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:13:10.785383   85500 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:13:10.785453   85500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:13:10.825927   85500 cri.go:89] found id: "6dcd13443296397949a6314fd6007ac667c05b70bd43f14e2e9b54f3313440de"
	I1104 12:13:10.825956   85500 cri.go:89] found id: ""
	I1104 12:13:10.825965   85500 logs.go:282] 1 containers: [6dcd13443296397949a6314fd6007ac667c05b70bd43f14e2e9b54f3313440de]
	I1104 12:13:10.826023   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:13:10.829834   85500 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:13:10.829899   85500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:13:10.872447   85500 cri.go:89] found id: "5546d06c4d51e3fa5699a7351286904174305e9c759397f8d2f640c6ce17d456"
	I1104 12:13:10.872468   85500 cri.go:89] found id: ""
	I1104 12:13:10.872475   85500 logs.go:282] 1 containers: [5546d06c4d51e3fa5699a7351286904174305e9c759397f8d2f640c6ce17d456]
	I1104 12:13:10.872524   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:13:10.876428   85500 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:13:10.876483   85500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:13:10.911092   85500 cri.go:89] found id: "33418a9cb2f8aea69efe66db3f38e3a66b699fea5455b72e6b94484067b704a3"
	I1104 12:13:10.911125   85500 cri.go:89] found id: ""
	I1104 12:13:10.911134   85500 logs.go:282] 1 containers: [33418a9cb2f8aea69efe66db3f38e3a66b699fea5455b72e6b94484067b704a3]
	I1104 12:13:10.911190   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:13:10.915021   85500 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:13:10.915076   85500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:13:10.950838   85500 cri.go:89] found id: "9c3fa7870c72468d3a6283be42fb2724275d8c5937f6728c8bc97103c58b2ebd"
	I1104 12:13:10.950863   85500 cri.go:89] found id: ""
	I1104 12:13:10.950873   85500 logs.go:282] 1 containers: [9c3fa7870c72468d3a6283be42fb2724275d8c5937f6728c8bc97103c58b2ebd]
	I1104 12:13:10.950935   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:13:10.954889   85500 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:13:10.954938   85500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:13:10.991580   85500 cri.go:89] found id: ""
	I1104 12:13:10.991609   85500 logs.go:282] 0 containers: []
	W1104 12:13:10.991618   85500 logs.go:284] No container was found matching "kindnet"
	I1104 12:13:10.991625   85500 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1104 12:13:10.991689   85500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1104 12:13:11.031428   85500 cri.go:89] found id: "d4f6c824f92eecdb4ebc13e783585c25292f412e0f0a0d7b9fd0eab092fa8e41"
	I1104 12:13:11.031469   85500 cri.go:89] found id: "162e3330ff77f795d15e11b3abf1d3019490bff78148c8d6b470ef1507dcb67d"
	I1104 12:13:11.031474   85500 cri.go:89] found id: ""
	I1104 12:13:11.031484   85500 logs.go:282] 2 containers: [d4f6c824f92eecdb4ebc13e783585c25292f412e0f0a0d7b9fd0eab092fa8e41 162e3330ff77f795d15e11b3abf1d3019490bff78148c8d6b470ef1507dcb67d]
	I1104 12:13:11.031557   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:13:11.035810   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:13:11.039555   85500 logs.go:123] Gathering logs for coredns [6dcd13443296397949a6314fd6007ac667c05b70bd43f14e2e9b54f3313440de] ...
	I1104 12:13:11.039582   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6dcd13443296397949a6314fd6007ac667c05b70bd43f14e2e9b54f3313440de"
	I1104 12:13:11.076837   85500 logs.go:123] Gathering logs for kube-scheduler [5546d06c4d51e3fa5699a7351286904174305e9c759397f8d2f640c6ce17d456] ...
	I1104 12:13:11.076865   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5546d06c4d51e3fa5699a7351286904174305e9c759397f8d2f640c6ce17d456"
	I1104 12:13:11.114534   85500 logs.go:123] Gathering logs for kube-proxy [33418a9cb2f8aea69efe66db3f38e3a66b699fea5455b72e6b94484067b704a3] ...
	I1104 12:13:11.114561   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33418a9cb2f8aea69efe66db3f38e3a66b699fea5455b72e6b94484067b704a3"
	I1104 12:13:11.148897   85500 logs.go:123] Gathering logs for storage-provisioner [d4f6c824f92eecdb4ebc13e783585c25292f412e0f0a0d7b9fd0eab092fa8e41] ...
	I1104 12:13:11.148935   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d4f6c824f92eecdb4ebc13e783585c25292f412e0f0a0d7b9fd0eab092fa8e41"
	I1104 12:13:11.184480   85500 logs.go:123] Gathering logs for kubelet ...
	I1104 12:13:11.184511   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:13:11.256197   85500 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:13:11.256237   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1104 12:13:11.368984   85500 logs.go:123] Gathering logs for kube-apiserver [e74398c77b3ca0adb85872c2f97209b51f4c13968147f500a41590f72d758dea] ...
	I1104 12:13:11.369014   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e74398c77b3ca0adb85872c2f97209b51f4c13968147f500a41590f72d758dea"
	I1104 12:13:11.414219   85500 logs.go:123] Gathering logs for etcd [1390676564c7e4039f73139668fc6a3c321c186b612f1f1837e21788a5c0aa82] ...
	I1104 12:13:11.414253   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1390676564c7e4039f73139668fc6a3c321c186b612f1f1837e21788a5c0aa82"
	I1104 12:13:11.455746   85500 logs.go:123] Gathering logs for storage-provisioner [162e3330ff77f795d15e11b3abf1d3019490bff78148c8d6b470ef1507dcb67d] ...
	I1104 12:13:11.455776   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 162e3330ff77f795d15e11b3abf1d3019490bff78148c8d6b470ef1507dcb67d"
	I1104 12:13:11.491699   85500 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:13:11.491726   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:13:11.962368   85500 logs.go:123] Gathering logs for dmesg ...
	I1104 12:13:11.962400   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:13:11.975564   85500 logs.go:123] Gathering logs for kube-controller-manager [9c3fa7870c72468d3a6283be42fb2724275d8c5937f6728c8bc97103c58b2ebd] ...
	I1104 12:13:11.975590   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9c3fa7870c72468d3a6283be42fb2724275d8c5937f6728c8bc97103c58b2ebd"
	I1104 12:13:12.031427   85500 logs.go:123] Gathering logs for container status ...
	I1104 12:13:12.031461   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:13:14.572933   85500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:13:14.588140   85500 api_server.go:72] duration metric: took 4m17.141131339s to wait for apiserver process to appear ...
	I1104 12:13:14.588168   85500 api_server.go:88] waiting for apiserver healthz status ...
	I1104 12:13:14.588196   85500 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:13:14.588243   85500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:13:14.621509   85500 cri.go:89] found id: "e74398c77b3ca0adb85872c2f97209b51f4c13968147f500a41590f72d758dea"
	I1104 12:13:14.621534   85500 cri.go:89] found id: ""
	I1104 12:13:14.621543   85500 logs.go:282] 1 containers: [e74398c77b3ca0adb85872c2f97209b51f4c13968147f500a41590f72d758dea]
	I1104 12:13:14.621601   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:13:14.626328   85500 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:13:14.626384   85500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:13:14.662052   85500 cri.go:89] found id: "1390676564c7e4039f73139668fc6a3c321c186b612f1f1837e21788a5c0aa82"
	I1104 12:13:14.662079   85500 cri.go:89] found id: ""
	I1104 12:13:14.662115   85500 logs.go:282] 1 containers: [1390676564c7e4039f73139668fc6a3c321c186b612f1f1837e21788a5c0aa82]
	I1104 12:13:14.662174   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:13:14.666018   85500 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:13:14.666089   85500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:13:14.702872   85500 cri.go:89] found id: "6dcd13443296397949a6314fd6007ac667c05b70bd43f14e2e9b54f3313440de"
	I1104 12:13:14.702897   85500 cri.go:89] found id: ""
	I1104 12:13:14.702910   85500 logs.go:282] 1 containers: [6dcd13443296397949a6314fd6007ac667c05b70bd43f14e2e9b54f3313440de]
	I1104 12:13:14.702968   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:13:14.706809   85500 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:13:14.706883   85500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:13:14.744985   85500 cri.go:89] found id: "5546d06c4d51e3fa5699a7351286904174305e9c759397f8d2f640c6ce17d456"
	I1104 12:13:14.745005   85500 cri.go:89] found id: ""
	I1104 12:13:14.745012   85500 logs.go:282] 1 containers: [5546d06c4d51e3fa5699a7351286904174305e9c759397f8d2f640c6ce17d456]
	I1104 12:13:14.745058   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:13:14.749441   85500 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:13:14.749497   85500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:13:14.781617   85500 cri.go:89] found id: "33418a9cb2f8aea69efe66db3f38e3a66b699fea5455b72e6b94484067b704a3"
	I1104 12:13:14.781644   85500 cri.go:89] found id: ""
	I1104 12:13:14.781653   85500 logs.go:282] 1 containers: [33418a9cb2f8aea69efe66db3f38e3a66b699fea5455b72e6b94484067b704a3]
	I1104 12:13:14.781709   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:13:14.785971   85500 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:13:14.786046   85500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:13:14.819002   85500 cri.go:89] found id: "9c3fa7870c72468d3a6283be42fb2724275d8c5937f6728c8bc97103c58b2ebd"
	I1104 12:13:14.819029   85500 cri.go:89] found id: ""
	I1104 12:13:14.819038   85500 logs.go:282] 1 containers: [9c3fa7870c72468d3a6283be42fb2724275d8c5937f6728c8bc97103c58b2ebd]
	I1104 12:13:14.819101   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:13:14.823075   85500 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:13:14.823143   85500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:13:14.858936   85500 cri.go:89] found id: ""
	I1104 12:13:14.858965   85500 logs.go:282] 0 containers: []
	W1104 12:13:14.858977   85500 logs.go:284] No container was found matching "kindnet"
	I1104 12:13:14.858984   85500 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1104 12:13:14.859048   85500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1104 12:13:14.898303   85500 cri.go:89] found id: "d4f6c824f92eecdb4ebc13e783585c25292f412e0f0a0d7b9fd0eab092fa8e41"
	I1104 12:13:14.898327   85500 cri.go:89] found id: "162e3330ff77f795d15e11b3abf1d3019490bff78148c8d6b470ef1507dcb67d"
	I1104 12:13:14.898333   85500 cri.go:89] found id: ""
	I1104 12:13:14.898341   85500 logs.go:282] 2 containers: [d4f6c824f92eecdb4ebc13e783585c25292f412e0f0a0d7b9fd0eab092fa8e41 162e3330ff77f795d15e11b3abf1d3019490bff78148c8d6b470ef1507dcb67d]
	I1104 12:13:14.898402   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:13:14.902325   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:13:14.905855   85500 logs.go:123] Gathering logs for kubelet ...
	I1104 12:13:14.905880   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:13:14.973356   85500 logs.go:123] Gathering logs for dmesg ...
	I1104 12:13:14.973389   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:13:14.988655   85500 logs.go:123] Gathering logs for kube-scheduler [5546d06c4d51e3fa5699a7351286904174305e9c759397f8d2f640c6ce17d456] ...
	I1104 12:13:14.988696   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5546d06c4d51e3fa5699a7351286904174305e9c759397f8d2f640c6ce17d456"
	I1104 12:13:15.023407   85500 logs.go:123] Gathering logs for kube-controller-manager [9c3fa7870c72468d3a6283be42fb2724275d8c5937f6728c8bc97103c58b2ebd] ...
	I1104 12:13:15.023443   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9c3fa7870c72468d3a6283be42fb2724275d8c5937f6728c8bc97103c58b2ebd"
	I1104 12:13:15.078974   85500 logs.go:123] Gathering logs for storage-provisioner [162e3330ff77f795d15e11b3abf1d3019490bff78148c8d6b470ef1507dcb67d] ...
	I1104 12:13:15.079007   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 162e3330ff77f795d15e11b3abf1d3019490bff78148c8d6b470ef1507dcb67d"
	I1104 12:13:15.114147   85500 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:13:15.114180   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:13:15.559434   85500 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:13:15.559477   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1104 12:13:15.666481   85500 logs.go:123] Gathering logs for kube-apiserver [e74398c77b3ca0adb85872c2f97209b51f4c13968147f500a41590f72d758dea] ...
	I1104 12:13:15.666509   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e74398c77b3ca0adb85872c2f97209b51f4c13968147f500a41590f72d758dea"
	I1104 12:13:15.728066   85500 logs.go:123] Gathering logs for etcd [1390676564c7e4039f73139668fc6a3c321c186b612f1f1837e21788a5c0aa82] ...
	I1104 12:13:15.728101   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1390676564c7e4039f73139668fc6a3c321c186b612f1f1837e21788a5c0aa82"
	I1104 12:13:15.769721   85500 logs.go:123] Gathering logs for coredns [6dcd13443296397949a6314fd6007ac667c05b70bd43f14e2e9b54f3313440de] ...
	I1104 12:13:15.769759   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6dcd13443296397949a6314fd6007ac667c05b70bd43f14e2e9b54f3313440de"
	I1104 12:13:15.802131   85500 logs.go:123] Gathering logs for kube-proxy [33418a9cb2f8aea69efe66db3f38e3a66b699fea5455b72e6b94484067b704a3] ...
	I1104 12:13:15.802170   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33418a9cb2f8aea69efe66db3f38e3a66b699fea5455b72e6b94484067b704a3"
	I1104 12:13:15.837613   85500 logs.go:123] Gathering logs for storage-provisioner [d4f6c824f92eecdb4ebc13e783585c25292f412e0f0a0d7b9fd0eab092fa8e41] ...
	I1104 12:13:15.837639   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d4f6c824f92eecdb4ebc13e783585c25292f412e0f0a0d7b9fd0eab092fa8e41"
	I1104 12:13:15.874374   85500 logs.go:123] Gathering logs for container status ...
	I1104 12:13:15.874407   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:13:18.413199   85500 api_server.go:253] Checking apiserver healthz at https://192.168.61.91:8443/healthz ...
	I1104 12:13:18.418522   85500 api_server.go:279] https://192.168.61.91:8443/healthz returned 200:
	ok
	I1104 12:13:18.419487   85500 api_server.go:141] control plane version: v1.31.2
	I1104 12:13:18.419512   85500 api_server.go:131] duration metric: took 3.831337085s to wait for apiserver health ...
	I1104 12:13:18.419521   85500 system_pods.go:43] waiting for kube-system pods to appear ...
	I1104 12:13:18.419549   85500 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:13:18.419605   85500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:13:18.453835   85500 cri.go:89] found id: "e74398c77b3ca0adb85872c2f97209b51f4c13968147f500a41590f72d758dea"
	I1104 12:13:18.453856   85500 cri.go:89] found id: ""
	I1104 12:13:18.453865   85500 logs.go:282] 1 containers: [e74398c77b3ca0adb85872c2f97209b51f4c13968147f500a41590f72d758dea]
	I1104 12:13:18.453927   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:13:18.458136   85500 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:13:18.458198   85500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:13:18.496587   85500 cri.go:89] found id: "1390676564c7e4039f73139668fc6a3c321c186b612f1f1837e21788a5c0aa82"
	I1104 12:13:18.496623   85500 cri.go:89] found id: ""
	I1104 12:13:18.496634   85500 logs.go:282] 1 containers: [1390676564c7e4039f73139668fc6a3c321c186b612f1f1837e21788a5c0aa82]
	I1104 12:13:18.496691   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:13:18.500451   85500 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:13:18.500523   85500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:13:18.532756   85500 cri.go:89] found id: "6dcd13443296397949a6314fd6007ac667c05b70bd43f14e2e9b54f3313440de"
	I1104 12:13:18.532785   85500 cri.go:89] found id: ""
	I1104 12:13:18.532795   85500 logs.go:282] 1 containers: [6dcd13443296397949a6314fd6007ac667c05b70bd43f14e2e9b54f3313440de]
	I1104 12:13:18.532857   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:13:18.537239   85500 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:13:18.537293   85500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:13:18.569348   85500 cri.go:89] found id: "5546d06c4d51e3fa5699a7351286904174305e9c759397f8d2f640c6ce17d456"
	I1104 12:13:18.569374   85500 cri.go:89] found id: ""
	I1104 12:13:18.569382   85500 logs.go:282] 1 containers: [5546d06c4d51e3fa5699a7351286904174305e9c759397f8d2f640c6ce17d456]
	I1104 12:13:18.569440   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:13:18.573491   85500 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:13:18.573563   85500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:13:18.606857   85500 cri.go:89] found id: "33418a9cb2f8aea69efe66db3f38e3a66b699fea5455b72e6b94484067b704a3"
	I1104 12:13:18.606886   85500 cri.go:89] found id: ""
	I1104 12:13:18.606896   85500 logs.go:282] 1 containers: [33418a9cb2f8aea69efe66db3f38e3a66b699fea5455b72e6b94484067b704a3]
	I1104 12:13:18.606951   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:13:18.611158   85500 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:13:18.611229   85500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:13:18.645448   85500 cri.go:89] found id: "9c3fa7870c72468d3a6283be42fb2724275d8c5937f6728c8bc97103c58b2ebd"
	I1104 12:13:18.645467   85500 cri.go:89] found id: ""
	I1104 12:13:18.645474   85500 logs.go:282] 1 containers: [9c3fa7870c72468d3a6283be42fb2724275d8c5937f6728c8bc97103c58b2ebd]
	I1104 12:13:18.645527   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:13:18.649014   85500 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:13:18.649062   85500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:13:18.693641   85500 cri.go:89] found id: ""
	I1104 12:13:18.693668   85500 logs.go:282] 0 containers: []
	W1104 12:13:18.693676   85500 logs.go:284] No container was found matching "kindnet"
	I1104 12:13:18.693681   85500 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1104 12:13:18.693728   85500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1104 12:13:18.733668   85500 cri.go:89] found id: "d4f6c824f92eecdb4ebc13e783585c25292f412e0f0a0d7b9fd0eab092fa8e41"
	I1104 12:13:18.733690   85500 cri.go:89] found id: "162e3330ff77f795d15e11b3abf1d3019490bff78148c8d6b470ef1507dcb67d"
	I1104 12:13:18.733695   85500 cri.go:89] found id: ""
	I1104 12:13:18.733702   85500 logs.go:282] 2 containers: [d4f6c824f92eecdb4ebc13e783585c25292f412e0f0a0d7b9fd0eab092fa8e41 162e3330ff77f795d15e11b3abf1d3019490bff78148c8d6b470ef1507dcb67d]
	I1104 12:13:18.733745   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:13:18.737419   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:13:18.740993   85500 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:13:18.741014   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:13:19.135942   85500 logs.go:123] Gathering logs for kubelet ...
	I1104 12:13:19.135980   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:13:19.206586   85500 logs.go:123] Gathering logs for dmesg ...
	I1104 12:13:19.206623   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:13:19.222135   85500 logs.go:123] Gathering logs for etcd [1390676564c7e4039f73139668fc6a3c321c186b612f1f1837e21788a5c0aa82] ...
	I1104 12:13:19.222164   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1390676564c7e4039f73139668fc6a3c321c186b612f1f1837e21788a5c0aa82"
	I1104 12:13:19.262746   85500 logs.go:123] Gathering logs for kube-scheduler [5546d06c4d51e3fa5699a7351286904174305e9c759397f8d2f640c6ce17d456] ...
	I1104 12:13:19.262774   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5546d06c4d51e3fa5699a7351286904174305e9c759397f8d2f640c6ce17d456"
	I1104 12:13:19.298259   85500 logs.go:123] Gathering logs for kube-proxy [33418a9cb2f8aea69efe66db3f38e3a66b699fea5455b72e6b94484067b704a3] ...
	I1104 12:13:19.298287   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33418a9cb2f8aea69efe66db3f38e3a66b699fea5455b72e6b94484067b704a3"
	I1104 12:13:19.338304   85500 logs.go:123] Gathering logs for storage-provisioner [d4f6c824f92eecdb4ebc13e783585c25292f412e0f0a0d7b9fd0eab092fa8e41] ...
	I1104 12:13:19.338332   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d4f6c824f92eecdb4ebc13e783585c25292f412e0f0a0d7b9fd0eab092fa8e41"
	I1104 12:13:19.375163   85500 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:13:19.375195   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1104 12:13:19.478206   85500 logs.go:123] Gathering logs for kube-apiserver [e74398c77b3ca0adb85872c2f97209b51f4c13968147f500a41590f72d758dea] ...
	I1104 12:13:19.478234   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e74398c77b3ca0adb85872c2f97209b51f4c13968147f500a41590f72d758dea"
	I1104 12:13:19.526261   85500 logs.go:123] Gathering logs for coredns [6dcd13443296397949a6314fd6007ac667c05b70bd43f14e2e9b54f3313440de] ...
	I1104 12:13:19.526291   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6dcd13443296397949a6314fd6007ac667c05b70bd43f14e2e9b54f3313440de"
	I1104 12:13:19.559922   85500 logs.go:123] Gathering logs for kube-controller-manager [9c3fa7870c72468d3a6283be42fb2724275d8c5937f6728c8bc97103c58b2ebd] ...
	I1104 12:13:19.559954   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9c3fa7870c72468d3a6283be42fb2724275d8c5937f6728c8bc97103c58b2ebd"
	I1104 12:13:19.609848   85500 logs.go:123] Gathering logs for storage-provisioner [162e3330ff77f795d15e11b3abf1d3019490bff78148c8d6b470ef1507dcb67d] ...
	I1104 12:13:19.609879   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 162e3330ff77f795d15e11b3abf1d3019490bff78148c8d6b470ef1507dcb67d"
	I1104 12:13:19.648804   85500 logs.go:123] Gathering logs for container status ...
	I1104 12:13:19.648829   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:13:22.210690   85500 system_pods.go:59] 8 kube-system pods found
	I1104 12:13:22.210718   85500 system_pods.go:61] "coredns-7c65d6cfc9-vv4kq" [f2518f86-9653-4e98-9193-9d2a76838117] Running
	I1104 12:13:22.210723   85500 system_pods.go:61] "etcd-no-preload-908370" [cc23ebc2-c49f-403c-8128-98bb08459592] Running
	I1104 12:13:22.210727   85500 system_pods.go:61] "kube-apiserver-no-preload-908370" [37532b3e-f683-4420-a5e4-280744f2bdf9] Running
	I1104 12:13:22.210730   85500 system_pods.go:61] "kube-controller-manager-no-preload-908370" [81d30255-758e-4661-bec2-c6aa6773923a] Running
	I1104 12:13:22.210733   85500 system_pods.go:61] "kube-proxy-w9hbz" [9d494697-ff2b-4600-9c11-b704de9be2a3] Running
	I1104 12:13:22.210737   85500 system_pods.go:61] "kube-scheduler-no-preload-908370" [9b0ff34e-1795-4f7c-b511-822a02c4af7b] Running
	I1104 12:13:22.210752   85500 system_pods.go:61] "metrics-server-6867b74b74-2lxlg" [bf328856-ad19-47b3-a40d-282cd4fdec4b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1104 12:13:22.210758   85500 system_pods.go:61] "storage-provisioner" [d11c9416-6236-4c81-9626-d5e040acea8a] Running
	I1104 12:13:22.210768   85500 system_pods.go:74] duration metric: took 3.791240483s to wait for pod list to return data ...
	I1104 12:13:22.210780   85500 default_sa.go:34] waiting for default service account to be created ...
	I1104 12:13:22.213688   85500 default_sa.go:45] found service account: "default"
	I1104 12:13:22.213709   85500 default_sa.go:55] duration metric: took 2.921691ms for default service account to be created ...
	I1104 12:13:22.213717   85500 system_pods.go:116] waiting for k8s-apps to be running ...
	I1104 12:13:22.219436   85500 system_pods.go:86] 8 kube-system pods found
	I1104 12:13:22.219466   85500 system_pods.go:89] "coredns-7c65d6cfc9-vv4kq" [f2518f86-9653-4e98-9193-9d2a76838117] Running
	I1104 12:13:22.219475   85500 system_pods.go:89] "etcd-no-preload-908370" [cc23ebc2-c49f-403c-8128-98bb08459592] Running
	I1104 12:13:22.219480   85500 system_pods.go:89] "kube-apiserver-no-preload-908370" [37532b3e-f683-4420-a5e4-280744f2bdf9] Running
	I1104 12:13:22.219489   85500 system_pods.go:89] "kube-controller-manager-no-preload-908370" [81d30255-758e-4661-bec2-c6aa6773923a] Running
	I1104 12:13:22.219495   85500 system_pods.go:89] "kube-proxy-w9hbz" [9d494697-ff2b-4600-9c11-b704de9be2a3] Running
	I1104 12:13:22.219501   85500 system_pods.go:89] "kube-scheduler-no-preload-908370" [9b0ff34e-1795-4f7c-b511-822a02c4af7b] Running
	I1104 12:13:22.219512   85500 system_pods.go:89] "metrics-server-6867b74b74-2lxlg" [bf328856-ad19-47b3-a40d-282cd4fdec4b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1104 12:13:22.219523   85500 system_pods.go:89] "storage-provisioner" [d11c9416-6236-4c81-9626-d5e040acea8a] Running
	I1104 12:13:22.219537   85500 system_pods.go:126] duration metric: took 5.813462ms to wait for k8s-apps to be running ...
	I1104 12:13:22.219551   85500 system_svc.go:44] waiting for kubelet service to be running ....
	I1104 12:13:22.219612   85500 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1104 12:13:22.232887   85500 system_svc.go:56] duration metric: took 13.328078ms WaitForService to wait for kubelet
	I1104 12:13:22.232918   85500 kubeadm.go:582] duration metric: took 4m24.785911082s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1104 12:13:22.232941   85500 node_conditions.go:102] verifying NodePressure condition ...
	I1104 12:13:22.235641   85500 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1104 12:13:22.235662   85500 node_conditions.go:123] node cpu capacity is 2
	I1104 12:13:22.235675   85500 node_conditions.go:105] duration metric: took 2.728232ms to run NodePressure ...
	I1104 12:13:22.235687   85500 start.go:241] waiting for startup goroutines ...
	I1104 12:13:22.235695   85500 start.go:246] waiting for cluster config update ...
	I1104 12:13:22.235707   85500 start.go:255] writing updated cluster config ...
	I1104 12:13:22.235962   85500 ssh_runner.go:195] Run: rm -f paused
	I1104 12:13:22.284583   85500 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1104 12:13:22.287448   85500 out.go:177] * Done! kubectl is now configured to use "no-preload-908370" cluster and "default" namespace by default
	I1104 12:14:25.090113   86402 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1104 12:14:25.090254   86402 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1104 12:14:25.091997   86402 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1104 12:14:25.092065   86402 kubeadm.go:310] [preflight] Running pre-flight checks
	I1104 12:14:25.092204   86402 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1104 12:14:25.092341   86402 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1104 12:14:25.092480   86402 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1104 12:14:25.092569   86402 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1104 12:14:25.094485   86402 out.go:235]   - Generating certificates and keys ...
	I1104 12:14:25.094582   86402 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1104 12:14:25.094664   86402 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1104 12:14:25.094799   86402 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1104 12:14:25.094891   86402 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1104 12:14:25.095003   86402 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1104 12:14:25.095086   86402 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1104 12:14:25.095186   86402 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1104 12:14:25.095240   86402 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1104 12:14:25.095319   86402 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1104 12:14:25.095403   86402 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1104 12:14:25.095481   86402 kubeadm.go:310] [certs] Using the existing "sa" key
	I1104 12:14:25.095554   86402 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1104 12:14:25.095614   86402 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1104 12:14:25.095676   86402 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1104 12:14:25.095752   86402 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1104 12:14:25.095828   86402 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1104 12:14:25.095970   86402 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1104 12:14:25.096102   86402 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1104 12:14:25.096169   86402 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1104 12:14:25.096262   86402 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1104 12:14:25.097799   86402 out.go:235]   - Booting up control plane ...
	I1104 12:14:25.097920   86402 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1104 12:14:25.098018   86402 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1104 12:14:25.098126   86402 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1104 12:14:25.098211   86402 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1104 12:14:25.098333   86402 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1104 12:14:25.098393   86402 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1104 12:14:25.098487   86402 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1104 12:14:25.098633   86402 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1104 12:14:25.098690   86402 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1104 12:14:25.098940   86402 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1104 12:14:25.099074   86402 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1104 12:14:25.099307   86402 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1104 12:14:25.099370   86402 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1104 12:14:25.099528   86402 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1104 12:14:25.099582   86402 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1104 12:14:25.099740   86402 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1104 12:14:25.099758   86402 kubeadm.go:310] 
	I1104 12:14:25.099815   86402 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1104 12:14:25.099880   86402 kubeadm.go:310] 		timed out waiting for the condition
	I1104 12:14:25.099889   86402 kubeadm.go:310] 
	I1104 12:14:25.099923   86402 kubeadm.go:310] 	This error is likely caused by:
	I1104 12:14:25.099952   86402 kubeadm.go:310] 		- The kubelet is not running
	I1104 12:14:25.100036   86402 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1104 12:14:25.100044   86402 kubeadm.go:310] 
	I1104 12:14:25.100197   86402 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1104 12:14:25.100237   86402 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1104 12:14:25.100267   86402 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1104 12:14:25.100273   86402 kubeadm.go:310] 
	I1104 12:14:25.100367   86402 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1104 12:14:25.100454   86402 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1104 12:14:25.100468   86402 kubeadm.go:310] 
	I1104 12:14:25.100600   86402 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1104 12:14:25.100718   86402 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1104 12:14:25.100821   86402 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1104 12:14:25.100903   86402 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1104 12:14:25.100970   86402 kubeadm.go:310] 
	W1104 12:14:25.101033   86402 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1104 12:14:25.101071   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1104 12:14:25.536184   86402 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1104 12:14:25.550453   86402 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1104 12:14:25.560308   86402 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1104 12:14:25.560327   86402 kubeadm.go:157] found existing configuration files:
	
	I1104 12:14:25.560368   86402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1104 12:14:25.569106   86402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1104 12:14:25.569189   86402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1104 12:14:25.578395   86402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1104 12:14:25.587402   86402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1104 12:14:25.587473   86402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1104 12:14:25.596827   86402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1104 12:14:25.605359   86402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1104 12:14:25.605420   86402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1104 12:14:25.614266   86402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1104 12:14:25.622522   86402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1104 12:14:25.622582   86402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1104 12:14:25.631876   86402 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1104 12:14:25.701080   86402 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1104 12:14:25.701168   86402 kubeadm.go:310] [preflight] Running pre-flight checks
	I1104 12:14:25.833997   86402 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1104 12:14:25.834138   86402 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1104 12:14:25.834258   86402 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1104 12:14:26.009165   86402 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1104 12:14:26.011976   86402 out.go:235]   - Generating certificates and keys ...
	I1104 12:14:26.012090   86402 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1104 12:14:26.012183   86402 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1104 12:14:26.012333   86402 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1104 12:14:26.012422   86402 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1104 12:14:26.012532   86402 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1104 12:14:26.012619   86402 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1104 12:14:26.012689   86402 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1104 12:14:26.012748   86402 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1104 12:14:26.012851   86402 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1104 12:14:26.012978   86402 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1104 12:14:26.013025   86402 kubeadm.go:310] [certs] Using the existing "sa" key
	I1104 12:14:26.013102   86402 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1104 12:14:26.399153   86402 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1104 12:14:26.470449   86402 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1104 12:14:27.078991   86402 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1104 12:14:27.181622   86402 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1104 12:14:27.205149   86402 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1104 12:14:27.205300   86402 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1104 12:14:27.205383   86402 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1104 12:14:27.355614   86402 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1104 12:14:27.357678   86402 out.go:235]   - Booting up control plane ...
	I1104 12:14:27.357840   86402 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1104 12:14:27.363942   86402 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1104 12:14:27.365004   86402 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1104 12:14:27.367237   86402 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1104 12:14:27.368087   86402 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1104 12:15:07.369845   86402 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1104 12:15:07.370222   86402 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1104 12:15:07.370464   86402 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1104 12:15:12.370802   86402 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1104 12:15:12.371041   86402 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1104 12:15:22.371417   86402 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1104 12:15:22.371584   86402 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1104 12:15:42.371725   86402 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1104 12:15:42.371932   86402 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1104 12:16:22.370871   86402 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1104 12:16:22.371150   86402 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1104 12:16:22.371181   86402 kubeadm.go:310] 
	I1104 12:16:22.371222   86402 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1104 12:16:22.371297   86402 kubeadm.go:310] 		timed out waiting for the condition
	I1104 12:16:22.371309   86402 kubeadm.go:310] 
	I1104 12:16:22.371371   86402 kubeadm.go:310] 	This error is likely caused by:
	I1104 12:16:22.371435   86402 kubeadm.go:310] 		- The kubelet is not running
	I1104 12:16:22.371576   86402 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1104 12:16:22.371588   86402 kubeadm.go:310] 
	I1104 12:16:22.371726   86402 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1104 12:16:22.371784   86402 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1104 12:16:22.371863   86402 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1104 12:16:22.371879   86402 kubeadm.go:310] 
	I1104 12:16:22.372004   86402 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1104 12:16:22.372155   86402 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1104 12:16:22.372172   86402 kubeadm.go:310] 
	I1104 12:16:22.372338   86402 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1104 12:16:22.372435   86402 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1104 12:16:22.372566   86402 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1104 12:16:22.372680   86402 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1104 12:16:22.372718   86402 kubeadm.go:310] 
	I1104 12:16:22.372948   86402 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1104 12:16:22.373110   86402 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1104 12:16:22.373289   86402 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1104 12:16:22.373328   86402 kubeadm.go:394] duration metric: took 8m2.53443537s to StartCluster
	I1104 12:16:22.373379   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:16:22.373431   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:16:22.410373   86402 cri.go:89] found id: ""
	I1104 12:16:22.410409   86402 logs.go:282] 0 containers: []
	W1104 12:16:22.410418   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:16:22.410424   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:16:22.410485   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:16:22.447939   86402 cri.go:89] found id: ""
	I1104 12:16:22.447963   86402 logs.go:282] 0 containers: []
	W1104 12:16:22.447971   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:16:22.447977   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:16:22.448021   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:16:22.479234   86402 cri.go:89] found id: ""
	I1104 12:16:22.479263   86402 logs.go:282] 0 containers: []
	W1104 12:16:22.479274   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:16:22.479280   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:16:22.479341   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:16:22.512783   86402 cri.go:89] found id: ""
	I1104 12:16:22.512814   86402 logs.go:282] 0 containers: []
	W1104 12:16:22.512825   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:16:22.512832   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:16:22.512895   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:16:22.549483   86402 cri.go:89] found id: ""
	I1104 12:16:22.549510   86402 logs.go:282] 0 containers: []
	W1104 12:16:22.549520   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:16:22.549527   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:16:22.549593   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:16:22.582339   86402 cri.go:89] found id: ""
	I1104 12:16:22.582382   86402 logs.go:282] 0 containers: []
	W1104 12:16:22.582393   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:16:22.582402   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:16:22.582471   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:16:22.613545   86402 cri.go:89] found id: ""
	I1104 12:16:22.613574   86402 logs.go:282] 0 containers: []
	W1104 12:16:22.613585   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:16:22.613593   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:16:22.613656   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:16:22.644488   86402 cri.go:89] found id: ""
	I1104 12:16:22.644517   86402 logs.go:282] 0 containers: []
	W1104 12:16:22.644528   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:16:22.644539   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:16:22.644551   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:16:22.681138   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:16:22.681169   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:16:22.734551   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:16:22.734586   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:16:22.750140   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:16:22.750178   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:16:22.837631   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:16:22.837657   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:16:22.837673   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W1104 12:16:22.961154   86402 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1104 12:16:22.961221   86402 out.go:270] * 
	W1104 12:16:22.961295   86402 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1104 12:16:22.961310   86402 out.go:270] * 
	W1104 12:16:22.962053   86402 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1104 12:16:22.965021   86402 out.go:201] 
	W1104 12:16:22.966262   86402 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1104 12:16:22.966326   86402 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1104 12:16:22.966377   86402 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1104 12:16:22.967953   86402 out.go:201] 
	
	
	==> CRI-O <==
	Nov 04 12:25:28 old-k8s-version-589257 crio[626]: time="2024-11-04 12:25:28.221837088Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730723128221817318,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4b62fcff-4e6d-4d6e-86bf-e3b429e26e60 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 04 12:25:28 old-k8s-version-589257 crio[626]: time="2024-11-04 12:25:28.222370907Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=16b144b9-7d50-4da7-a0d8-c843756a9905 name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 12:25:28 old-k8s-version-589257 crio[626]: time="2024-11-04 12:25:28.222419528Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=16b144b9-7d50-4da7-a0d8-c843756a9905 name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 12:25:28 old-k8s-version-589257 crio[626]: time="2024-11-04 12:25:28.222448716Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=16b144b9-7d50-4da7-a0d8-c843756a9905 name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 12:25:28 old-k8s-version-589257 crio[626]: time="2024-11-04 12:25:28.251006986Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=290084ec-91b1-4cc7-b871-c24f00fb6259 name=/runtime.v1.RuntimeService/Version
	Nov 04 12:25:28 old-k8s-version-589257 crio[626]: time="2024-11-04 12:25:28.251084342Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=290084ec-91b1-4cc7-b871-c24f00fb6259 name=/runtime.v1.RuntimeService/Version
	Nov 04 12:25:28 old-k8s-version-589257 crio[626]: time="2024-11-04 12:25:28.252208553Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e6a5d09f-450f-4d29-be55-3d93cf3f1c65 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 04 12:25:28 old-k8s-version-589257 crio[626]: time="2024-11-04 12:25:28.252586316Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730723128252564041,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e6a5d09f-450f-4d29-be55-3d93cf3f1c65 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 04 12:25:28 old-k8s-version-589257 crio[626]: time="2024-11-04 12:25:28.253234822Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=348a63b1-eddc-4eed-bbbb-8250161e6f73 name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 12:25:28 old-k8s-version-589257 crio[626]: time="2024-11-04 12:25:28.253322883Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=348a63b1-eddc-4eed-bbbb-8250161e6f73 name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 12:25:28 old-k8s-version-589257 crio[626]: time="2024-11-04 12:25:28.253356378Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=348a63b1-eddc-4eed-bbbb-8250161e6f73 name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 12:25:28 old-k8s-version-589257 crio[626]: time="2024-11-04 12:25:28.282189228Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f41d9814-8f3a-4155-85ad-a1acba4d2c97 name=/runtime.v1.RuntimeService/Version
	Nov 04 12:25:28 old-k8s-version-589257 crio[626]: time="2024-11-04 12:25:28.282256829Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f41d9814-8f3a-4155-85ad-a1acba4d2c97 name=/runtime.v1.RuntimeService/Version
	Nov 04 12:25:28 old-k8s-version-589257 crio[626]: time="2024-11-04 12:25:28.283139240Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=774a202d-1f8d-4c8a-a2af-fe1fabc77714 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 04 12:25:28 old-k8s-version-589257 crio[626]: time="2024-11-04 12:25:28.283508750Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730723128283489722,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=774a202d-1f8d-4c8a-a2af-fe1fabc77714 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 04 12:25:28 old-k8s-version-589257 crio[626]: time="2024-11-04 12:25:28.283979359Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5a76a50f-3094-4249-a54e-12cf0c1cada8 name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 12:25:28 old-k8s-version-589257 crio[626]: time="2024-11-04 12:25:28.284024774Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5a76a50f-3094-4249-a54e-12cf0c1cada8 name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 12:25:28 old-k8s-version-589257 crio[626]: time="2024-11-04 12:25:28.284053568Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=5a76a50f-3094-4249-a54e-12cf0c1cada8 name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 12:25:28 old-k8s-version-589257 crio[626]: time="2024-11-04 12:25:28.313810380Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=aa889c0c-39a5-4029-878c-2ba766861711 name=/runtime.v1.RuntimeService/Version
	Nov 04 12:25:28 old-k8s-version-589257 crio[626]: time="2024-11-04 12:25:28.313879662Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=aa889c0c-39a5-4029-878c-2ba766861711 name=/runtime.v1.RuntimeService/Version
	Nov 04 12:25:28 old-k8s-version-589257 crio[626]: time="2024-11-04 12:25:28.315012233Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c30a9497-d248-45ad-b705-c7a6c4e6de15 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 04 12:25:28 old-k8s-version-589257 crio[626]: time="2024-11-04 12:25:28.315467928Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730723128315445257,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c30a9497-d248-45ad-b705-c7a6c4e6de15 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 04 12:25:28 old-k8s-version-589257 crio[626]: time="2024-11-04 12:25:28.315984554Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=16d9e903-5fc5-436f-be60-dc646bdadd82 name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 12:25:28 old-k8s-version-589257 crio[626]: time="2024-11-04 12:25:28.316042739Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=16d9e903-5fc5-436f-be60-dc646bdadd82 name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 12:25:28 old-k8s-version-589257 crio[626]: time="2024-11-04 12:25:28.316073417Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=16d9e903-5fc5-436f-be60-dc646bdadd82 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Nov 4 12:07] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051714] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037451] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Nov 4 12:08] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.909177] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.435497] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.440051] systemd-fstab-generator[554]: Ignoring "noauto" option for root device
	[  +0.115131] systemd-fstab-generator[566]: Ignoring "noauto" option for root device
	[  +0.206664] systemd-fstab-generator[580]: Ignoring "noauto" option for root device
	[  +0.118752] systemd-fstab-generator[592]: Ignoring "noauto" option for root device
	[  +0.257608] systemd-fstab-generator[617]: Ignoring "noauto" option for root device
	[  +6.231117] systemd-fstab-generator[880]: Ignoring "noauto" option for root device
	[  +0.063384] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.883713] systemd-fstab-generator[1002]: Ignoring "noauto" option for root device
	[ +13.758834] kauditd_printk_skb: 46 callbacks suppressed
	[Nov 4 12:12] systemd-fstab-generator[5108]: Ignoring "noauto" option for root device
	[Nov 4 12:14] systemd-fstab-generator[5387]: Ignoring "noauto" option for root device
	[  +0.067248] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 12:25:28 up 17 min,  0 users,  load average: 0.03, 0.02, 0.00
	Linux old-k8s-version-589257 5.10.207 #1 SMP Wed Oct 30 13:38:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Nov 04 12:25:28 old-k8s-version-589257 kubelet[6584]: net.(*sysDialer).dialSerial(0xc00097b800, 0x4f7fe40, 0xc000ae3200, 0xc000aa9740, 0x1, 0x1, 0x0, 0x0, 0x0, 0x0)
	Nov 04 12:25:28 old-k8s-version-589257 kubelet[6584]:         /usr/local/go/src/net/dial.go:548 +0x152
	Nov 04 12:25:28 old-k8s-version-589257 kubelet[6584]: net.(*Dialer).DialContext(0xc0001e4120, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc000abc930, 0x24, 0x0, 0x0, 0x0, ...)
	Nov 04 12:25:28 old-k8s-version-589257 kubelet[6584]:         /usr/local/go/src/net/dial.go:425 +0x6e5
	Nov 04 12:25:28 old-k8s-version-589257 kubelet[6584]: k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation.(*Dialer).DialContext(0xc000a86480, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc000abc930, 0x24, 0x60, 0x7fdfc84dd6f8, 0x118, ...)
	Nov 04 12:25:28 old-k8s-version-589257 kubelet[6584]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation/connrotation.go:73 +0x7e
	Nov 04 12:25:28 old-k8s-version-589257 kubelet[6584]: net/http.(*Transport).dial(0xc00025b2c0, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc000abc930, 0x24, 0x0, 0x0, 0x0, ...)
	Nov 04 12:25:28 old-k8s-version-589257 kubelet[6584]:         /usr/local/go/src/net/http/transport.go:1141 +0x1fd
	Nov 04 12:25:28 old-k8s-version-589257 kubelet[6584]: net/http.(*Transport).dialConn(0xc00025b2c0, 0x4f7fe00, 0xc000120018, 0x0, 0xc000ad2a80, 0x5, 0xc000abc930, 0x24, 0x0, 0xc000aa4ea0, ...)
	Nov 04 12:25:28 old-k8s-version-589257 kubelet[6584]:         /usr/local/go/src/net/http/transport.go:1575 +0x1abb
	Nov 04 12:25:28 old-k8s-version-589257 kubelet[6584]: net/http.(*Transport).dialConnFor(0xc00025b2c0, 0xc000a84580)
	Nov 04 12:25:28 old-k8s-version-589257 kubelet[6584]:         /usr/local/go/src/net/http/transport.go:1421 +0xc6
	Nov 04 12:25:28 old-k8s-version-589257 kubelet[6584]: created by net/http.(*Transport).queueForDial
	Nov 04 12:25:28 old-k8s-version-589257 kubelet[6584]:         /usr/local/go/src/net/http/transport.go:1390 +0x40f
	Nov 04 12:25:28 old-k8s-version-589257 kubelet[6584]: goroutine 166 [select]:
	Nov 04 12:25:28 old-k8s-version-589257 kubelet[6584]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*controlBuffer).get(0xc000aaf4a0, 0x1, 0x0, 0x0, 0x0, 0x0)
	Nov 04 12:25:28 old-k8s-version-589257 kubelet[6584]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:395 +0x125
	Nov 04 12:25:28 old-k8s-version-589257 kubelet[6584]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*loopyWriter).run(0xc000ae3620, 0x0, 0x0)
	Nov 04 12:25:28 old-k8s-version-589257 kubelet[6584]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:513 +0x1d3
	Nov 04 12:25:28 old-k8s-version-589257 kubelet[6584]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client.func3(0xc00002d180)
	Nov 04 12:25:28 old-k8s-version-589257 kubelet[6584]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:346 +0x7b
	Nov 04 12:25:28 old-k8s-version-589257 kubelet[6584]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Nov 04 12:25:28 old-k8s-version-589257 kubelet[6584]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:344 +0xefc
	Nov 04 12:25:28 old-k8s-version-589257 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Nov 04 12:25:28 old-k8s-version-589257 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-589257 -n old-k8s-version-589257
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-589257 -n old-k8s-version-589257: exit status 2 (232.151605ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-589257" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.37s)
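For context, the kubeadm output above shows the kubelet on this profile never answering its healthz probe, and the minikube log suggests retrying with an explicit cgroup driver. A minimal manual follow-up sketch, assuming the profile name and flags recorded in this run; the --extra-config value is the one suggested in the log above, not a verified fix:

	# inspect the kubelet on the node, per the kubeadm hint above
	minikube ssh -p old-k8s-version-589257 sudo systemctl status kubelet
	minikube ssh -p old-k8s-version-589257 sudo journalctl -xeu kubelet
	# retry the start with the cgroup driver suggested in the log
	minikube start -p old-k8s-version-589257 --driver=kvm2 --container-runtime=crio \
	  --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd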

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (451.11s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-325116 -n embed-certs-325116
start_stop_delete_test.go:287: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-11-04 12:28:47.396154899 +0000 UTC m=+6718.751213687
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-325116 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-325116 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.104µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-325116 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
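The assertion above expects the dashboard-metrics-scraper deployment to carry the overridden image registry.k8s.io/echoserver:1.4, set via the "addons enable dashboard ... --images=MetricsScraper=registry.k8s.io/echoserver:1.4" invocation recorded in the audit log below. A minimal manual check sketch, assuming the profile's apiserver is reachable (it was not within the deadline here):

	kubectl --context embed-certs-325116 -n kubernetes-dashboard \
	  get deploy dashboard-metrics-scraper \
	  -o jsonpath='{.spec.template.spec.containers[*].image}'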
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-325116 -n embed-certs-325116
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-325116 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-325116 logs -n 25: (1.096221009s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p calico-528108 sudo cat                              | calico-528108                | jenkins | v1.34.0 | 04 Nov 24 12:00 UTC | 04 Nov 24 12:00 UTC |
	|         | /lib/systemd/system/containerd.service                 |                              |         |         |                     |                     |
	| ssh     | -p calico-528108 sudo cat                              | calico-528108                | jenkins | v1.34.0 | 04 Nov 24 12:00 UTC | 04 Nov 24 12:00 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p calico-528108 sudo                                  | calico-528108                | jenkins | v1.34.0 | 04 Nov 24 12:00 UTC | 04 Nov 24 12:00 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p calico-528108 sudo                                  | calico-528108                | jenkins | v1.34.0 | 04 Nov 24 12:00 UTC | 04 Nov 24 12:00 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p calico-528108 sudo                                  | calico-528108                | jenkins | v1.34.0 | 04 Nov 24 12:00 UTC | 04 Nov 24 12:00 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p calico-528108 sudo find                             | calico-528108                | jenkins | v1.34.0 | 04 Nov 24 12:00 UTC | 04 Nov 24 12:00 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p calico-528108 sudo crio                             | calico-528108                | jenkins | v1.34.0 | 04 Nov 24 12:00 UTC | 04 Nov 24 12:00 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p calico-528108                                       | calico-528108                | jenkins | v1.34.0 | 04 Nov 24 12:00 UTC | 04 Nov 24 12:00 UTC |
	| delete  | -p                                                     | disable-driver-mounts-457408 | jenkins | v1.34.0 | 04 Nov 24 12:00 UTC | 04 Nov 24 12:00 UTC |
	|         | disable-driver-mounts-457408                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-036892 | jenkins | v1.34.0 | 04 Nov 24 12:00 UTC | 04 Nov 24 12:01 UTC |
	|         | default-k8s-diff-port-036892                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-036892  | default-k8s-diff-port-036892 | jenkins | v1.34.0 | 04 Nov 24 12:01 UTC | 04 Nov 24 12:01 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-036892 | jenkins | v1.34.0 | 04 Nov 24 12:01 UTC |                     |
	|         | default-k8s-diff-port-036892                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-908370                  | no-preload-908370            | jenkins | v1.34.0 | 04 Nov 24 12:02 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-908370                                   | no-preload-908370            | jenkins | v1.34.0 | 04 Nov 24 12:02 UTC | 04 Nov 24 12:13 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-325116                 | embed-certs-325116           | jenkins | v1.34.0 | 04 Nov 24 12:02 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-589257        | old-k8s-version-589257       | jenkins | v1.34.0 | 04 Nov 24 12:02 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p embed-certs-325116                                  | embed-certs-325116           | jenkins | v1.34.0 | 04 Nov 24 12:02 UTC | 04 Nov 24 12:12 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-036892       | default-k8s-diff-port-036892 | jenkins | v1.34.0 | 04 Nov 24 12:04 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-589257                              | old-k8s-version-589257       | jenkins | v1.34.0 | 04 Nov 24 12:04 UTC | 04 Nov 24 12:04 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-036892 | jenkins | v1.34.0 | 04 Nov 24 12:04 UTC | 04 Nov 24 12:12 UTC |
	|         | default-k8s-diff-port-036892                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-589257             | old-k8s-version-589257       | jenkins | v1.34.0 | 04 Nov 24 12:04 UTC | 04 Nov 24 12:04 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-589257                              | old-k8s-version-589257       | jenkins | v1.34.0 | 04 Nov 24 12:04 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-589257                              | old-k8s-version-589257       | jenkins | v1.34.0 | 04 Nov 24 12:28 UTC | 04 Nov 24 12:28 UTC |
	| start   | -p newest-cni-374564 --memory=2200 --alsologtostderr   | newest-cni-374564            | jenkins | v1.34.0 | 04 Nov 24 12:28 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| delete  | -p no-preload-908370                                   | no-preload-908370            | jenkins | v1.34.0 | 04 Nov 24 12:28 UTC | 04 Nov 24 12:28 UTC |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
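
For reference, the last start command recorded in the table (the newest-cni-374564 profile) can be replayed outside the test harness. The following Go sketch is illustrative only and not part of the report; it simply shells out with the exact flags shown above and assumes a minikube binary on PATH plus a working kvm2/libvirt setup.

package main

import (
	"os"
	"os/exec"
)

// Replays the "start -p newest-cni-374564" invocation captured in the audit table.
// Illustrative only: assumes minikube is installed and kvm2/libvirt is available.
func main() {
	cmd := exec.Command("minikube",
		"start", "-p", "newest-cni-374564",
		"--memory=2200", "--alsologtostderr",
		"--wait=apiserver,system_pods,default_sa",
		"--feature-gates", "ServerSideApply=true",
		"--network-plugin=cni",
		"--extra-config=kubeadm.pod-network-cidr=10.42.0.0/16",
		"--driver=kvm2", "--container-runtime=crio",
		"--kubernetes-version=v1.31.2",
	)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		os.Exit(1)
	}
}
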
	
	
	==> Last Start <==
	Log file created at: 2024/11/04 12:28:26
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
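
The entries below follow the klog format declared above. As an illustrative aid only (not part of the captured log), a small self-contained Go sketch that splits one such line into its fields:

package main

import (
	"fmt"
	"regexp"
)

// Matches the "[IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg" format
// declared in the log header. Illustrative only.
var klogLine = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^:]+):(\d+)\] (.*)$`)

func main() {
	sample := "I1104 12:28:26.031348   93099 out.go:345] Setting OutFile to fd 1 ..."
	m := klogLine.FindStringSubmatch(sample)
	if m == nil {
		fmt.Println("no match")
		return
	}
	fmt.Printf("severity=%s date=%s time=%s thread=%s file=%s line=%s msg=%q\n",
		m[1], m[2], m[3], m[4], m[5], m[6], m[7])
}
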
	I1104 12:28:26.031348   93099 out.go:345] Setting OutFile to fd 1 ...
	I1104 12:28:26.031583   93099 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1104 12:28:26.031592   93099 out.go:358] Setting ErrFile to fd 2...
	I1104 12:28:26.031596   93099 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1104 12:28:26.031816   93099 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19906-19898/.minikube/bin
	I1104 12:28:26.032390   93099 out.go:352] Setting JSON to false
	I1104 12:28:26.033458   93099 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":11457,"bootTime":1730711849,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1104 12:28:26.033558   93099 start.go:139] virtualization: kvm guest
	I1104 12:28:26.035932   93099 out.go:177] * [newest-cni-374564] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1104 12:28:26.037153   93099 out.go:177]   - MINIKUBE_LOCATION=19906
	I1104 12:28:26.037149   93099 notify.go:220] Checking for updates...
	I1104 12:28:26.039600   93099 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1104 12:28:26.040638   93099 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19906-19898/kubeconfig
	I1104 12:28:26.041742   93099 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19906-19898/.minikube
	I1104 12:28:26.042804   93099 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1104 12:28:26.043976   93099 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1104 12:28:26.045588   93099 config.go:182] Loaded profile config "default-k8s-diff-port-036892": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1104 12:28:26.045710   93099 config.go:182] Loaded profile config "embed-certs-325116": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1104 12:28:26.045845   93099 config.go:182] Loaded profile config "no-preload-908370": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1104 12:28:26.045960   93099 driver.go:394] Setting default libvirt URI to qemu:///system
	I1104 12:28:26.084159   93099 out.go:177] * Using the kvm2 driver based on user configuration
	I1104 12:28:26.085469   93099 start.go:297] selected driver: kvm2
	I1104 12:28:26.085485   93099 start.go:901] validating driver "kvm2" against <nil>
	I1104 12:28:26.085500   93099 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1104 12:28:26.086349   93099 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1104 12:28:26.086436   93099 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19906-19898/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1104 12:28:26.104641   93099 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1104 12:28:26.104690   93099 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W1104 12:28:26.104728   93099 out.go:270] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1104 12:28:26.104996   93099 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1104 12:28:26.105035   93099 cni.go:84] Creating CNI manager for ""
	I1104 12:28:26.105067   93099 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1104 12:28:26.105076   93099 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1104 12:28:26.105130   93099 start.go:340] cluster config:
	{Name:newest-cni-374564 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:newest-cni-374564 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false C
ustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1104 12:28:26.105247   93099 iso.go:125] acquiring lock: {Name:mk00c7dd6e02d348844f079f7574057e15cae010 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1104 12:28:26.107718   93099 out.go:177] * Starting "newest-cni-374564" primary control-plane node in "newest-cni-374564" cluster
	I1104 12:28:26.108868   93099 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1104 12:28:26.108906   93099 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1104 12:28:26.108913   93099 cache.go:56] Caching tarball of preloaded images
	I1104 12:28:26.108980   93099 preload.go:172] Found /home/jenkins/minikube-integration/19906-19898/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1104 12:28:26.108991   93099 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1104 12:28:26.109069   93099 profile.go:143] Saving config to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/newest-cni-374564/config.json ...
	I1104 12:28:26.109085   93099 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/newest-cni-374564/config.json: {Name:mke6f417518eaaf58f73c80ff80519f51eb2dc8c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 12:28:26.109274   93099 start.go:360] acquireMachinesLock for newest-cni-374564: {Name:mka4ed965095cfb58c9b0f5916edff39b4236a2f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1104 12:28:26.109324   93099 start.go:364] duration metric: took 27.554µs to acquireMachinesLock for "newest-cni-374564"
	I1104 12:28:26.109349   93099 start.go:93] Provisioning new machine with config: &{Name:newest-cni-374564 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.31.2 ClusterName:newest-cni-374564 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount
9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1104 12:28:26.109443   93099 start.go:125] createHost starting for "" (driver="kvm2")
	I1104 12:28:26.111061   93099 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1104 12:28:26.111177   93099 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:28:26.111204   93099 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:28:26.127675   93099 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40515
	I1104 12:28:26.128216   93099 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:28:26.128780   93099 main.go:141] libmachine: Using API Version  1
	I1104 12:28:26.128803   93099 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:28:26.129155   93099 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:28:26.129354   93099 main.go:141] libmachine: (newest-cni-374564) Calling .GetMachineName
	I1104 12:28:26.129507   93099 main.go:141] libmachine: (newest-cni-374564) Calling .DriverName
	I1104 12:28:26.129647   93099 start.go:159] libmachine.API.Create for "newest-cni-374564" (driver="kvm2")
	I1104 12:28:26.129682   93099 client.go:168] LocalClient.Create starting
	I1104 12:28:26.129715   93099 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem
	I1104 12:28:26.129751   93099 main.go:141] libmachine: Decoding PEM data...
	I1104 12:28:26.129770   93099 main.go:141] libmachine: Parsing certificate...
	I1104 12:28:26.129828   93099 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem
	I1104 12:28:26.129856   93099 main.go:141] libmachine: Decoding PEM data...
	I1104 12:28:26.129871   93099 main.go:141] libmachine: Parsing certificate...
	I1104 12:28:26.129893   93099 main.go:141] libmachine: Running pre-create checks...
	I1104 12:28:26.129904   93099 main.go:141] libmachine: (newest-cni-374564) Calling .PreCreateCheck
	I1104 12:28:26.130267   93099 main.go:141] libmachine: (newest-cni-374564) Calling .GetConfigRaw
	I1104 12:28:26.130685   93099 main.go:141] libmachine: Creating machine...
	I1104 12:28:26.130703   93099 main.go:141] libmachine: (newest-cni-374564) Calling .Create
	I1104 12:28:26.130831   93099 main.go:141] libmachine: (newest-cni-374564) Creating KVM machine...
	I1104 12:28:26.132104   93099 main.go:141] libmachine: (newest-cni-374564) DBG | found existing default KVM network
	I1104 12:28:26.133686   93099 main.go:141] libmachine: (newest-cni-374564) DBG | I1104 12:28:26.133533   93122 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:1b:56:a9} reservation:<nil>}
	I1104 12:28:26.134731   93099 main.go:141] libmachine: (newest-cni-374564) DBG | I1104 12:28:26.134667   93122 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000348010}
	I1104 12:28:26.134769   93099 main.go:141] libmachine: (newest-cni-374564) DBG | created network xml: 
	I1104 12:28:26.134781   93099 main.go:141] libmachine: (newest-cni-374564) DBG | <network>
	I1104 12:28:26.134832   93099 main.go:141] libmachine: (newest-cni-374564) DBG |   <name>mk-newest-cni-374564</name>
	I1104 12:28:26.134858   93099 main.go:141] libmachine: (newest-cni-374564) DBG |   <dns enable='no'/>
	I1104 12:28:26.134876   93099 main.go:141] libmachine: (newest-cni-374564) DBG |   
	I1104 12:28:26.134890   93099 main.go:141] libmachine: (newest-cni-374564) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I1104 12:28:26.134917   93099 main.go:141] libmachine: (newest-cni-374564) DBG |     <dhcp>
	I1104 12:28:26.134946   93099 main.go:141] libmachine: (newest-cni-374564) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I1104 12:28:26.134962   93099 main.go:141] libmachine: (newest-cni-374564) DBG |     </dhcp>
	I1104 12:28:26.134973   93099 main.go:141] libmachine: (newest-cni-374564) DBG |   </ip>
	I1104 12:28:26.134981   93099 main.go:141] libmachine: (newest-cni-374564) DBG |   
	I1104 12:28:26.134992   93099 main.go:141] libmachine: (newest-cni-374564) DBG | </network>
	I1104 12:28:26.135001   93099 main.go:141] libmachine: (newest-cni-374564) DBG | 
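
The network definition logged above maps directly onto libvirt's network XML schema. Purely as an illustration of that structure (this is not minikube's code, and the struct field names are this sketch's own), an equivalent definition can be rendered with Go's encoding/xml:

package main

import (
	"encoding/xml"
	"fmt"
)

// network mirrors the elements visible in the generated XML above.
type network struct {
	XMLName xml.Name `xml:"network"`
	Name    string   `xml:"name"`
	DNS     struct {
		Enable string `xml:"enable,attr"`
	} `xml:"dns"`
	IP struct {
		Address string `xml:"address,attr"`
		Netmask string `xml:"netmask,attr"`
		DHCP    struct {
			Range struct {
				Start string `xml:"start,attr"`
				End   string `xml:"end,attr"`
			} `xml:"range"`
		} `xml:"dhcp"`
	} `xml:"ip"`
}

func main() {
	n := network{Name: "mk-newest-cni-374564"}
	n.DNS.Enable = "no"
	n.IP.Address = "192.168.50.1"
	n.IP.Netmask = "255.255.255.0"
	n.IP.DHCP.Range.Start = "192.168.50.2"
	n.IP.DHCP.Range.End = "192.168.50.253"
	out, _ := xml.MarshalIndent(n, "", "  ")
	fmt.Println(string(out))
}
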
	I1104 12:28:26.140456   93099 main.go:141] libmachine: (newest-cni-374564) DBG | trying to create private KVM network mk-newest-cni-374564 192.168.50.0/24...
	I1104 12:28:26.211953   93099 main.go:141] libmachine: (newest-cni-374564) DBG | private KVM network mk-newest-cni-374564 192.168.50.0/24 created
	I1104 12:28:26.211982   93099 main.go:141] libmachine: (newest-cni-374564) Setting up store path in /home/jenkins/minikube-integration/19906-19898/.minikube/machines/newest-cni-374564 ...
	I1104 12:28:26.211997   93099 main.go:141] libmachine: (newest-cni-374564) DBG | I1104 12:28:26.211938   93122 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19906-19898/.minikube
	I1104 12:28:26.212017   93099 main.go:141] libmachine: (newest-cni-374564) Building disk image from file:///home/jenkins/minikube-integration/19906-19898/.minikube/cache/iso/amd64/minikube-v1.34.0-1730282777-19883-amd64.iso
	I1104 12:28:26.212039   93099 main.go:141] libmachine: (newest-cni-374564) Downloading /home/jenkins/minikube-integration/19906-19898/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19906-19898/.minikube/cache/iso/amd64/minikube-v1.34.0-1730282777-19883-amd64.iso...
	I1104 12:28:26.469549   93099 main.go:141] libmachine: (newest-cni-374564) DBG | I1104 12:28:26.469413   93122 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/newest-cni-374564/id_rsa...
	I1104 12:28:26.733079   93099 main.go:141] libmachine: (newest-cni-374564) DBG | I1104 12:28:26.732973   93122 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/newest-cni-374564/newest-cni-374564.rawdisk...
	I1104 12:28:26.733106   93099 main.go:141] libmachine: (newest-cni-374564) DBG | Writing magic tar header
	I1104 12:28:26.733121   93099 main.go:141] libmachine: (newest-cni-374564) DBG | Writing SSH key tar header
	I1104 12:28:26.733130   93099 main.go:141] libmachine: (newest-cni-374564) DBG | I1104 12:28:26.733093   93122 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19906-19898/.minikube/machines/newest-cni-374564 ...
	I1104 12:28:26.733218   93099 main.go:141] libmachine: (newest-cni-374564) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/newest-cni-374564
	I1104 12:28:26.733275   93099 main.go:141] libmachine: (newest-cni-374564) Setting executable bit set on /home/jenkins/minikube-integration/19906-19898/.minikube/machines/newest-cni-374564 (perms=drwx------)
	I1104 12:28:26.733291   93099 main.go:141] libmachine: (newest-cni-374564) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19906-19898/.minikube/machines
	I1104 12:28:26.733303   93099 main.go:141] libmachine: (newest-cni-374564) Setting executable bit set on /home/jenkins/minikube-integration/19906-19898/.minikube/machines (perms=drwxr-xr-x)
	I1104 12:28:26.733322   93099 main.go:141] libmachine: (newest-cni-374564) Setting executable bit set on /home/jenkins/minikube-integration/19906-19898/.minikube (perms=drwxr-xr-x)
	I1104 12:28:26.733335   93099 main.go:141] libmachine: (newest-cni-374564) Setting executable bit set on /home/jenkins/minikube-integration/19906-19898 (perms=drwxrwxr-x)
	I1104 12:28:26.733353   93099 main.go:141] libmachine: (newest-cni-374564) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19906-19898/.minikube
	I1104 12:28:26.733366   93099 main.go:141] libmachine: (newest-cni-374564) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1104 12:28:26.733379   93099 main.go:141] libmachine: (newest-cni-374564) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19906-19898
	I1104 12:28:26.733392   93099 main.go:141] libmachine: (newest-cni-374564) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1104 12:28:26.733409   93099 main.go:141] libmachine: (newest-cni-374564) Creating domain...
	I1104 12:28:26.733965   93099 main.go:141] libmachine: (newest-cni-374564) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1104 12:28:26.734006   93099 main.go:141] libmachine: (newest-cni-374564) DBG | Checking permissions on dir: /home/jenkins
	I1104 12:28:26.734029   93099 main.go:141] libmachine: (newest-cni-374564) DBG | Checking permissions on dir: /home
	I1104 12:28:26.734048   93099 main.go:141] libmachine: (newest-cni-374564) DBG | Skipping /home - not owner
	I1104 12:28:26.734684   93099 main.go:141] libmachine: (newest-cni-374564) define libvirt domain using xml: 
	I1104 12:28:26.734762   93099 main.go:141] libmachine: (newest-cni-374564) <domain type='kvm'>
	I1104 12:28:26.734798   93099 main.go:141] libmachine: (newest-cni-374564)   <name>newest-cni-374564</name>
	I1104 12:28:26.734830   93099 main.go:141] libmachine: (newest-cni-374564)   <memory unit='MiB'>2200</memory>
	I1104 12:28:26.734853   93099 main.go:141] libmachine: (newest-cni-374564)   <vcpu>2</vcpu>
	I1104 12:28:26.734874   93099 main.go:141] libmachine: (newest-cni-374564)   <features>
	I1104 12:28:26.734905   93099 main.go:141] libmachine: (newest-cni-374564)     <acpi/>
	I1104 12:28:26.734926   93099 main.go:141] libmachine: (newest-cni-374564)     <apic/>
	I1104 12:28:26.734988   93099 main.go:141] libmachine: (newest-cni-374564)     <pae/>
	I1104 12:28:26.735013   93099 main.go:141] libmachine: (newest-cni-374564)     
	I1104 12:28:26.735047   93099 main.go:141] libmachine: (newest-cni-374564)   </features>
	I1104 12:28:26.735064   93099 main.go:141] libmachine: (newest-cni-374564)   <cpu mode='host-passthrough'>
	I1104 12:28:26.735096   93099 main.go:141] libmachine: (newest-cni-374564)   
	I1104 12:28:26.735121   93099 main.go:141] libmachine: (newest-cni-374564)   </cpu>
	I1104 12:28:26.735138   93099 main.go:141] libmachine: (newest-cni-374564)   <os>
	I1104 12:28:26.735154   93099 main.go:141] libmachine: (newest-cni-374564)     <type>hvm</type>
	I1104 12:28:26.735195   93099 main.go:141] libmachine: (newest-cni-374564)     <boot dev='cdrom'/>
	I1104 12:28:26.735217   93099 main.go:141] libmachine: (newest-cni-374564)     <boot dev='hd'/>
	I1104 12:28:26.735238   93099 main.go:141] libmachine: (newest-cni-374564)     <bootmenu enable='no'/>
	I1104 12:28:26.735298   93099 main.go:141] libmachine: (newest-cni-374564)   </os>
	I1104 12:28:26.735431   93099 main.go:141] libmachine: (newest-cni-374564)   <devices>
	I1104 12:28:26.735635   93099 main.go:141] libmachine: (newest-cni-374564)     <disk type='file' device='cdrom'>
	I1104 12:28:26.735657   93099 main.go:141] libmachine: (newest-cni-374564)       <source file='/home/jenkins/minikube-integration/19906-19898/.minikube/machines/newest-cni-374564/boot2docker.iso'/>
	I1104 12:28:26.735665   93099 main.go:141] libmachine: (newest-cni-374564)       <target dev='hdc' bus='scsi'/>
	I1104 12:28:26.735673   93099 main.go:141] libmachine: (newest-cni-374564)       <readonly/>
	I1104 12:28:26.735679   93099 main.go:141] libmachine: (newest-cni-374564)     </disk>
	I1104 12:28:26.735688   93099 main.go:141] libmachine: (newest-cni-374564)     <disk type='file' device='disk'>
	I1104 12:28:26.735696   93099 main.go:141] libmachine: (newest-cni-374564)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1104 12:28:26.735709   93099 main.go:141] libmachine: (newest-cni-374564)       <source file='/home/jenkins/minikube-integration/19906-19898/.minikube/machines/newest-cni-374564/newest-cni-374564.rawdisk'/>
	I1104 12:28:26.735722   93099 main.go:141] libmachine: (newest-cni-374564)       <target dev='hda' bus='virtio'/>
	I1104 12:28:26.735733   93099 main.go:141] libmachine: (newest-cni-374564)     </disk>
	I1104 12:28:26.735744   93099 main.go:141] libmachine: (newest-cni-374564)     <interface type='network'>
	I1104 12:28:26.735756   93099 main.go:141] libmachine: (newest-cni-374564)       <source network='mk-newest-cni-374564'/>
	I1104 12:28:26.735765   93099 main.go:141] libmachine: (newest-cni-374564)       <model type='virtio'/>
	I1104 12:28:26.735781   93099 main.go:141] libmachine: (newest-cni-374564)     </interface>
	I1104 12:28:26.735791   93099 main.go:141] libmachine: (newest-cni-374564)     <interface type='network'>
	I1104 12:28:26.735799   93099 main.go:141] libmachine: (newest-cni-374564)       <source network='default'/>
	I1104 12:28:26.735808   93099 main.go:141] libmachine: (newest-cni-374564)       <model type='virtio'/>
	I1104 12:28:26.735835   93099 main.go:141] libmachine: (newest-cni-374564)     </interface>
	I1104 12:28:26.735855   93099 main.go:141] libmachine: (newest-cni-374564)     <serial type='pty'>
	I1104 12:28:26.735863   93099 main.go:141] libmachine: (newest-cni-374564)       <target port='0'/>
	I1104 12:28:26.735871   93099 main.go:141] libmachine: (newest-cni-374564)     </serial>
	I1104 12:28:26.735877   93099 main.go:141] libmachine: (newest-cni-374564)     <console type='pty'>
	I1104 12:28:26.735885   93099 main.go:141] libmachine: (newest-cni-374564)       <target type='serial' port='0'/>
	I1104 12:28:26.735891   93099 main.go:141] libmachine: (newest-cni-374564)     </console>
	I1104 12:28:26.735897   93099 main.go:141] libmachine: (newest-cni-374564)     <rng model='virtio'>
	I1104 12:28:26.735904   93099 main.go:141] libmachine: (newest-cni-374564)       <backend model='random'>/dev/random</backend>
	I1104 12:28:26.735911   93099 main.go:141] libmachine: (newest-cni-374564)     </rng>
	I1104 12:28:26.735916   93099 main.go:141] libmachine: (newest-cni-374564)     
	I1104 12:28:26.735925   93099 main.go:141] libmachine: (newest-cni-374564)     
	I1104 12:28:26.735930   93099 main.go:141] libmachine: (newest-cni-374564)   </devices>
	I1104 12:28:26.735941   93099 main.go:141] libmachine: (newest-cni-374564) </domain>
	I1104 12:28:26.735992   93099 main.go:141] libmachine: (newest-cni-374564) 
	I1104 12:28:26.739700   93099 main.go:141] libmachine: (newest-cni-374564) DBG | domain newest-cni-374564 has defined MAC address 52:54:00:7f:a4:ba in network default
	I1104 12:28:26.740337   93099 main.go:141] libmachine: (newest-cni-374564) Ensuring networks are active...
	I1104 12:28:26.740361   93099 main.go:141] libmachine: (newest-cni-374564) DBG | domain newest-cni-374564 has defined MAC address 52:54:00:ed:c9:c8 in network mk-newest-cni-374564
	I1104 12:28:26.741038   93099 main.go:141] libmachine: (newest-cni-374564) Ensuring network default is active
	I1104 12:28:26.741377   93099 main.go:141] libmachine: (newest-cni-374564) Ensuring network mk-newest-cni-374564 is active
	I1104 12:28:26.741883   93099 main.go:141] libmachine: (newest-cni-374564) Getting domain xml...
	I1104 12:28:26.742607   93099 main.go:141] libmachine: (newest-cni-374564) Creating domain...
	I1104 12:28:27.995552   93099 main.go:141] libmachine: (newest-cni-374564) Waiting to get IP...
	I1104 12:28:27.996372   93099 main.go:141] libmachine: (newest-cni-374564) DBG | domain newest-cni-374564 has defined MAC address 52:54:00:ed:c9:c8 in network mk-newest-cni-374564
	I1104 12:28:27.996841   93099 main.go:141] libmachine: (newest-cni-374564) DBG | unable to find current IP address of domain newest-cni-374564 in network mk-newest-cni-374564
	I1104 12:28:27.996883   93099 main.go:141] libmachine: (newest-cni-374564) DBG | I1104 12:28:27.996806   93122 retry.go:31] will retry after 260.605084ms: waiting for machine to come up
	I1104 12:28:28.259416   93099 main.go:141] libmachine: (newest-cni-374564) DBG | domain newest-cni-374564 has defined MAC address 52:54:00:ed:c9:c8 in network mk-newest-cni-374564
	I1104 12:28:28.260045   93099 main.go:141] libmachine: (newest-cni-374564) DBG | unable to find current IP address of domain newest-cni-374564 in network mk-newest-cni-374564
	I1104 12:28:28.260071   93099 main.go:141] libmachine: (newest-cni-374564) DBG | I1104 12:28:28.260009   93122 retry.go:31] will retry after 347.511287ms: waiting for machine to come up
	I1104 12:28:28.608775   93099 main.go:141] libmachine: (newest-cni-374564) DBG | domain newest-cni-374564 has defined MAC address 52:54:00:ed:c9:c8 in network mk-newest-cni-374564
	I1104 12:28:28.609383   93099 main.go:141] libmachine: (newest-cni-374564) DBG | unable to find current IP address of domain newest-cni-374564 in network mk-newest-cni-374564
	I1104 12:28:28.609411   93099 main.go:141] libmachine: (newest-cni-374564) DBG | I1104 12:28:28.609328   93122 retry.go:31] will retry after 406.675413ms: waiting for machine to come up
	I1104 12:28:29.017963   93099 main.go:141] libmachine: (newest-cni-374564) DBG | domain newest-cni-374564 has defined MAC address 52:54:00:ed:c9:c8 in network mk-newest-cni-374564
	I1104 12:28:29.018422   93099 main.go:141] libmachine: (newest-cni-374564) DBG | unable to find current IP address of domain newest-cni-374564 in network mk-newest-cni-374564
	I1104 12:28:29.018451   93099 main.go:141] libmachine: (newest-cni-374564) DBG | I1104 12:28:29.018377   93122 retry.go:31] will retry after 376.93871ms: waiting for machine to come up
	I1104 12:28:29.397938   93099 main.go:141] libmachine: (newest-cni-374564) DBG | domain newest-cni-374564 has defined MAC address 52:54:00:ed:c9:c8 in network mk-newest-cni-374564
	I1104 12:28:29.398555   93099 main.go:141] libmachine: (newest-cni-374564) DBG | unable to find current IP address of domain newest-cni-374564 in network mk-newest-cni-374564
	I1104 12:28:29.398613   93099 main.go:141] libmachine: (newest-cni-374564) DBG | I1104 12:28:29.398484   93122 retry.go:31] will retry after 578.578185ms: waiting for machine to come up
	I1104 12:28:29.979210   93099 main.go:141] libmachine: (newest-cni-374564) DBG | domain newest-cni-374564 has defined MAC address 52:54:00:ed:c9:c8 in network mk-newest-cni-374564
	I1104 12:28:29.979812   93099 main.go:141] libmachine: (newest-cni-374564) DBG | unable to find current IP address of domain newest-cni-374564 in network mk-newest-cni-374564
	I1104 12:28:29.979836   93099 main.go:141] libmachine: (newest-cni-374564) DBG | I1104 12:28:29.979765   93122 retry.go:31] will retry after 919.921832ms: waiting for machine to come up
	I1104 12:28:30.901292   93099 main.go:141] libmachine: (newest-cni-374564) DBG | domain newest-cni-374564 has defined MAC address 52:54:00:ed:c9:c8 in network mk-newest-cni-374564
	I1104 12:28:30.901926   93099 main.go:141] libmachine: (newest-cni-374564) DBG | unable to find current IP address of domain newest-cni-374564 in network mk-newest-cni-374564
	I1104 12:28:30.901953   93099 main.go:141] libmachine: (newest-cni-374564) DBG | I1104 12:28:30.901873   93122 retry.go:31] will retry after 752.352357ms: waiting for machine to come up
	I1104 12:28:31.656265   93099 main.go:141] libmachine: (newest-cni-374564) DBG | domain newest-cni-374564 has defined MAC address 52:54:00:ed:c9:c8 in network mk-newest-cni-374564
	I1104 12:28:31.656753   93099 main.go:141] libmachine: (newest-cni-374564) DBG | unable to find current IP address of domain newest-cni-374564 in network mk-newest-cni-374564
	I1104 12:28:31.656777   93099 main.go:141] libmachine: (newest-cni-374564) DBG | I1104 12:28:31.656716   93122 retry.go:31] will retry after 943.494098ms: waiting for machine to come up
	I1104 12:28:32.601758   93099 main.go:141] libmachine: (newest-cni-374564) DBG | domain newest-cni-374564 has defined MAC address 52:54:00:ed:c9:c8 in network mk-newest-cni-374564
	I1104 12:28:32.602205   93099 main.go:141] libmachine: (newest-cni-374564) DBG | unable to find current IP address of domain newest-cni-374564 in network mk-newest-cni-374564
	I1104 12:28:32.602221   93099 main.go:141] libmachine: (newest-cni-374564) DBG | I1104 12:28:32.602156   93122 retry.go:31] will retry after 1.460514677s: waiting for machine to come up
	I1104 12:28:34.064516   93099 main.go:141] libmachine: (newest-cni-374564) DBG | domain newest-cni-374564 has defined MAC address 52:54:00:ed:c9:c8 in network mk-newest-cni-374564
	I1104 12:28:34.064959   93099 main.go:141] libmachine: (newest-cni-374564) DBG | unable to find current IP address of domain newest-cni-374564 in network mk-newest-cni-374564
	I1104 12:28:34.064979   93099 main.go:141] libmachine: (newest-cni-374564) DBG | I1104 12:28:34.064929   93122 retry.go:31] will retry after 1.730969554s: waiting for machine to come up
	I1104 12:28:35.798032   93099 main.go:141] libmachine: (newest-cni-374564) DBG | domain newest-cni-374564 has defined MAC address 52:54:00:ed:c9:c8 in network mk-newest-cni-374564
	I1104 12:28:35.798554   93099 main.go:141] libmachine: (newest-cni-374564) DBG | unable to find current IP address of domain newest-cni-374564 in network mk-newest-cni-374564
	I1104 12:28:35.798582   93099 main.go:141] libmachine: (newest-cni-374564) DBG | I1104 12:28:35.798522   93122 retry.go:31] will retry after 1.904553517s: waiting for machine to come up
	I1104 12:28:37.704694   93099 main.go:141] libmachine: (newest-cni-374564) DBG | domain newest-cni-374564 has defined MAC address 52:54:00:ed:c9:c8 in network mk-newest-cni-374564
	I1104 12:28:37.705133   93099 main.go:141] libmachine: (newest-cni-374564) DBG | unable to find current IP address of domain newest-cni-374564 in network mk-newest-cni-374564
	I1104 12:28:37.705157   93099 main.go:141] libmachine: (newest-cni-374564) DBG | I1104 12:28:37.705087   93122 retry.go:31] will retry after 3.591154101s: waiting for machine to come up
	I1104 12:28:41.298580   93099 main.go:141] libmachine: (newest-cni-374564) DBG | domain newest-cni-374564 has defined MAC address 52:54:00:ed:c9:c8 in network mk-newest-cni-374564
	I1104 12:28:41.299016   93099 main.go:141] libmachine: (newest-cni-374564) DBG | unable to find current IP address of domain newest-cni-374564 in network mk-newest-cni-374564
	I1104 12:28:41.299041   93099 main.go:141] libmachine: (newest-cni-374564) DBG | I1104 12:28:41.298997   93122 retry.go:31] will retry after 3.991284243s: waiting for machine to come up
	I1104 12:28:45.294586   93099 main.go:141] libmachine: (newest-cni-374564) DBG | domain newest-cni-374564 has defined MAC address 52:54:00:ed:c9:c8 in network mk-newest-cni-374564
	I1104 12:28:45.295143   93099 main.go:141] libmachine: (newest-cni-374564) DBG | unable to find current IP address of domain newest-cni-374564 in network mk-newest-cni-374564
	I1104 12:28:45.295169   93099 main.go:141] libmachine: (newest-cni-374564) DBG | I1104 12:28:45.295104   93122 retry.go:31] will retry after 5.533756654s: waiting for machine to come up
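
The tail of the Last Start log shows the driver polling for the new VM's DHCP lease, retrying with progressively longer delays (from a few hundred milliseconds up to several seconds). As a rough illustration of that pattern only, and not minikube's actual retry code, a self-contained Go sketch:

package main

import (
	"errors"
	"fmt"
	"time"
)

// waitForIP polls lookup until it returns an address, sleeping a little longer
// after every failed attempt, similar to the "will retry after ..." lines above.
// Illustrative only; lookup stands in for querying the libvirt DHCP leases.
func waitForIP(lookup func() (string, error), attempts int) (string, error) {
	delay := 250 * time.Millisecond
	for i := 0; i < attempts; i++ {
		if ip, err := lookup(); err == nil {
			return ip, nil
		}
		fmt.Printf("attempt %d: no IP yet, will retry after %v\n", i+1, delay)
		time.Sleep(delay)
		delay += delay / 2 // grow the backoff between attempts
	}
	return "", errors.New("machine did not get an IP in time")
}

func main() {
	calls := 0
	ip, err := waitForIP(func() (string, error) {
		calls++
		if calls < 4 {
			return "", errors.New("lease not found")
		}
		return "192.168.50.2", nil // example address from the subnet above
	}, 10)
	fmt.Println(ip, err)
}
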
	
	
	==> CRI-O <==
	Nov 04 12:28:47 embed-certs-325116 crio[700]: time="2024-11-04 12:28:47.962818787Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8f7364ca-3726-455d-aa1e-5e9a37a01e94 name=/runtime.v1.RuntimeService/Version
	Nov 04 12:28:47 embed-certs-325116 crio[700]: time="2024-11-04 12:28:47.963759631Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d7645a78-ec5e-4390-bdd9-3946add21184 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 04 12:28:47 embed-certs-325116 crio[700]: time="2024-11-04 12:28:47.964191445Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730723327964167392,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d7645a78-ec5e-4390-bdd9-3946add21184 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 04 12:28:47 embed-certs-325116 crio[700]: time="2024-11-04 12:28:47.964617634Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1437f2c4-aee8-4b1d-8d41-de575a00a56d name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 12:28:47 embed-certs-325116 crio[700]: time="2024-11-04 12:28:47.964666652Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1437f2c4-aee8-4b1d-8d41-de575a00a56d name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 12:28:47 embed-certs-325116 crio[700]: time="2024-11-04 12:28:47.964850520Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:95a9eb50a127a39e6276205bf440bcbc662ba25b24e029dc1a667f48f8481dde,PodSandboxId:336518a304965b369441b5169d9fa9f4497228136703b26edd53c087aee1b3ad,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730722099106749149,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0dabcf5a-028b-4ab6-8af4-be25abaeb9b5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17930a8c9f8feb57100ebdda160aeff0994c0ea14c95c6a20b8274d3fb3353c7,PodSandboxId:253c7105adc503a8f3b09c0483c61da8474fc59d515f9eda1bababbc055c7042,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1730722078227013809,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: faedbe05-e667-443f-9df2-18bb9bf19f99,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1f0c1ed5e8911dccb619576be93e1623936a2b05ccc1e80d52ffe8ba1292d27,PodSandboxId:586d31f23777792aee21d5492feb154dfd04c1e307d27b64490b656e37921d93,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730722075872483907,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-mf8xg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0162005-7971-4161-9575-9f36c13d54f2,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7558f4e108716f1710271ea75be748ffd928b609788942a3658f0d2237ebcc7,PodSandboxId:336518a304965b369441b5169d9fa9f4497228136703b26edd53c087aee1b3ad,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1730722068303069673,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
0dabcf5a-028b-4ab6-8af4-be25abaeb9b5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:512d8563ff2ef5d1f8021fa8815d00d2705c8df3b079a23ee6fc909af1c980f0,PodSandboxId:aca6b94caae07b5a74ad36f9c57730f991bec959a1ccd9c1c56e745dca69115a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730722068259990465,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-phzgx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ea64f2c-7568-486d-9941-f89ed4221
f35,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b575c045ea6e47b4e5a1384b6b473570efefa6f645c9c75d1eedf037230bc06,PodSandboxId:68350a02deb9f96554682e48c2d4afb346b74aa306b27cc9bc532880c812da53,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730722063732871849,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-325116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1dc053128fa3b82a73e126c6c1d3a428,},Annotations:map[string]string{io.kub
ernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5a0cb5f09f9920c57a05bcc9c16cdcab6806c42458b625d4d075a9d1bc3f80f,PodSandboxId:0b2c49eb7440715520d33371c0a313d168f992bed024b5b71d1cc12b2b7b61a8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730722063720504983,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-325116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05da92dcb57907443316e8d42e4f92f6,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e7999c6e5a24f7d8706b6722e1fe66996ecec4550d4a901729cac1f3f108f28,PodSandboxId:9ae27a866cd677f921ca52e40a7502b004e363809577197176b61038e3645206,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730722063722106945,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-325116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2734426f909645ac2df56eef2ee66f9,},Annotations:map[string]string{io.kubernetes.container.hash:
c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5751adaa2cf784f7619b281e11898b312ecc8186b36029e9e8a3b8e484cd703b,PodSandboxId:61b4c93a5104c14282a46db42c284d53e0810b86bf3875378a3bef79d4690984,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730722063702577217,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-325116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e356f340fd1b91ab3c1748076b1b8c75,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1437f2c4-aee8-4b1d-8d41-de575a00a56d name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 12:28:48 embed-certs-325116 crio[700]: time="2024-11-04 12:28:48.008483909Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=39ade4fd-989f-425c-a627-7a9168b78de2 name=/runtime.v1.RuntimeService/Version
	Nov 04 12:28:48 embed-certs-325116 crio[700]: time="2024-11-04 12:28:48.008596333Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=39ade4fd-989f-425c-a627-7a9168b78de2 name=/runtime.v1.RuntimeService/Version
	Nov 04 12:28:48 embed-certs-325116 crio[700]: time="2024-11-04 12:28:48.009795433Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0957f036-d04f-45b0-9107-d927ae43bbe2 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 04 12:28:48 embed-certs-325116 crio[700]: time="2024-11-04 12:28:48.010361135Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730723328010333149,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0957f036-d04f-45b0-9107-d927ae43bbe2 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 04 12:28:48 embed-certs-325116 crio[700]: time="2024-11-04 12:28:48.010929441Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fd8c7884-e6cf-4563-832a-3c3b969bb53f name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 12:28:48 embed-certs-325116 crio[700]: time="2024-11-04 12:28:48.010984660Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fd8c7884-e6cf-4563-832a-3c3b969bb53f name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 12:28:48 embed-certs-325116 crio[700]: time="2024-11-04 12:28:48.011258236Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:95a9eb50a127a39e6276205bf440bcbc662ba25b24e029dc1a667f48f8481dde,PodSandboxId:336518a304965b369441b5169d9fa9f4497228136703b26edd53c087aee1b3ad,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730722099106749149,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0dabcf5a-028b-4ab6-8af4-be25abaeb9b5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17930a8c9f8feb57100ebdda160aeff0994c0ea14c95c6a20b8274d3fb3353c7,PodSandboxId:253c7105adc503a8f3b09c0483c61da8474fc59d515f9eda1bababbc055c7042,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1730722078227013809,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: faedbe05-e667-443f-9df2-18bb9bf19f99,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1f0c1ed5e8911dccb619576be93e1623936a2b05ccc1e80d52ffe8ba1292d27,PodSandboxId:586d31f23777792aee21d5492feb154dfd04c1e307d27b64490b656e37921d93,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730722075872483907,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-mf8xg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0162005-7971-4161-9575-9f36c13d54f2,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7558f4e108716f1710271ea75be748ffd928b609788942a3658f0d2237ebcc7,PodSandboxId:336518a304965b369441b5169d9fa9f4497228136703b26edd53c087aee1b3ad,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1730722068303069673,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
0dabcf5a-028b-4ab6-8af4-be25abaeb9b5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:512d8563ff2ef5d1f8021fa8815d00d2705c8df3b079a23ee6fc909af1c980f0,PodSandboxId:aca6b94caae07b5a74ad36f9c57730f991bec959a1ccd9c1c56e745dca69115a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730722068259990465,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-phzgx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ea64f2c-7568-486d-9941-f89ed4221
f35,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b575c045ea6e47b4e5a1384b6b473570efefa6f645c9c75d1eedf037230bc06,PodSandboxId:68350a02deb9f96554682e48c2d4afb346b74aa306b27cc9bc532880c812da53,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730722063732871849,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-325116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1dc053128fa3b82a73e126c6c1d3a428,},Annotations:map[string]string{io.kub
ernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5a0cb5f09f9920c57a05bcc9c16cdcab6806c42458b625d4d075a9d1bc3f80f,PodSandboxId:0b2c49eb7440715520d33371c0a313d168f992bed024b5b71d1cc12b2b7b61a8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730722063720504983,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-325116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05da92dcb57907443316e8d42e4f92f6,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e7999c6e5a24f7d8706b6722e1fe66996ecec4550d4a901729cac1f3f108f28,PodSandboxId:9ae27a866cd677f921ca52e40a7502b004e363809577197176b61038e3645206,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730722063722106945,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-325116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2734426f909645ac2df56eef2ee66f9,},Annotations:map[string]string{io.kubernetes.container.hash:
c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5751adaa2cf784f7619b281e11898b312ecc8186b36029e9e8a3b8e484cd703b,PodSandboxId:61b4c93a5104c14282a46db42c284d53e0810b86bf3875378a3bef79d4690984,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730722063702577217,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-325116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e356f340fd1b91ab3c1748076b1b8c75,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fd8c7884-e6cf-4563-832a-3c3b969bb53f name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 12:28:48 embed-certs-325116 crio[700]: time="2024-11-04 12:28:48.044174596Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1a477c91-15e9-4fb1-baa6-8e30be04799a name=/runtime.v1.RuntimeService/Version
	Nov 04 12:28:48 embed-certs-325116 crio[700]: time="2024-11-04 12:28:48.044279809Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1a477c91-15e9-4fb1-baa6-8e30be04799a name=/runtime.v1.RuntimeService/Version
	Nov 04 12:28:48 embed-certs-325116 crio[700]: time="2024-11-04 12:28:48.046591205Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b3d87c18-944a-4166-8a14-6a3708014e85 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 04 12:28:48 embed-certs-325116 crio[700]: time="2024-11-04 12:28:48.046991884Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730723328046967405,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b3d87c18-944a-4166-8a14-6a3708014e85 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 04 12:28:48 embed-certs-325116 crio[700]: time="2024-11-04 12:28:48.048096107Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=19c5c184-1591-4711-8dcf-7dae8c0bf9d2 name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 12:28:48 embed-certs-325116 crio[700]: time="2024-11-04 12:28:48.048201125Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=19c5c184-1591-4711-8dcf-7dae8c0bf9d2 name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 12:28:48 embed-certs-325116 crio[700]: time="2024-11-04 12:28:48.048442632Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:95a9eb50a127a39e6276205bf440bcbc662ba25b24e029dc1a667f48f8481dde,PodSandboxId:336518a304965b369441b5169d9fa9f4497228136703b26edd53c087aee1b3ad,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730722099106749149,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0dabcf5a-028b-4ab6-8af4-be25abaeb9b5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17930a8c9f8feb57100ebdda160aeff0994c0ea14c95c6a20b8274d3fb3353c7,PodSandboxId:253c7105adc503a8f3b09c0483c61da8474fc59d515f9eda1bababbc055c7042,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1730722078227013809,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: faedbe05-e667-443f-9df2-18bb9bf19f99,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1f0c1ed5e8911dccb619576be93e1623936a2b05ccc1e80d52ffe8ba1292d27,PodSandboxId:586d31f23777792aee21d5492feb154dfd04c1e307d27b64490b656e37921d93,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730722075872483907,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-mf8xg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0162005-7971-4161-9575-9f36c13d54f2,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7558f4e108716f1710271ea75be748ffd928b609788942a3658f0d2237ebcc7,PodSandboxId:336518a304965b369441b5169d9fa9f4497228136703b26edd53c087aee1b3ad,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1730722068303069673,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
0dabcf5a-028b-4ab6-8af4-be25abaeb9b5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:512d8563ff2ef5d1f8021fa8815d00d2705c8df3b079a23ee6fc909af1c980f0,PodSandboxId:aca6b94caae07b5a74ad36f9c57730f991bec959a1ccd9c1c56e745dca69115a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730722068259990465,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-phzgx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ea64f2c-7568-486d-9941-f89ed4221
f35,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b575c045ea6e47b4e5a1384b6b473570efefa6f645c9c75d1eedf037230bc06,PodSandboxId:68350a02deb9f96554682e48c2d4afb346b74aa306b27cc9bc532880c812da53,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730722063732871849,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-325116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1dc053128fa3b82a73e126c6c1d3a428,},Annotations:map[string]string{io.kub
ernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5a0cb5f09f9920c57a05bcc9c16cdcab6806c42458b625d4d075a9d1bc3f80f,PodSandboxId:0b2c49eb7440715520d33371c0a313d168f992bed024b5b71d1cc12b2b7b61a8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730722063720504983,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-325116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05da92dcb57907443316e8d42e4f92f6,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e7999c6e5a24f7d8706b6722e1fe66996ecec4550d4a901729cac1f3f108f28,PodSandboxId:9ae27a866cd677f921ca52e40a7502b004e363809577197176b61038e3645206,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730722063722106945,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-325116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2734426f909645ac2df56eef2ee66f9,},Annotations:map[string]string{io.kubernetes.container.hash:
c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5751adaa2cf784f7619b281e11898b312ecc8186b36029e9e8a3b8e484cd703b,PodSandboxId:61b4c93a5104c14282a46db42c284d53e0810b86bf3875378a3bef79d4690984,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730722063702577217,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-325116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e356f340fd1b91ab3c1748076b1b8c75,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=19c5c184-1591-4711-8dcf-7dae8c0bf9d2 name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 12:28:48 embed-certs-325116 crio[700]: time="2024-11-04 12:28:48.055746923Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=42dca681-de7c-4921-9950-bcfa02bf9177 name=/runtime.v1.RuntimeService/ListPodSandbox
	Nov 04 12:28:48 embed-certs-325116 crio[700]: time="2024-11-04 12:28:48.055954137Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:253c7105adc503a8f3b09c0483c61da8474fc59d515f9eda1bababbc055c7042,Metadata:&PodSandboxMetadata{Name:busybox,Uid:faedbe05-e667-443f-9df2-18bb9bf19f99,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1730722075694828195,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: faedbe05-e667-443f-9df2-18bb9bf19f99,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-11-04T12:07:47.818698038Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:586d31f23777792aee21d5492feb154dfd04c1e307d27b64490b656e37921d93,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-mf8xg,Uid:c0162005-7971-4161-9575-9f36c13d54f2,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1730722075597183
268,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-mf8xg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0162005-7971-4161-9575-9f36c13d54f2,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-11-04T12:07:47.818743269Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2f19fe0600c083a72986d7a4012e850ad00dc9dbd8a51efa5f384b6cc7382869,Metadata:&PodSandboxMetadata{Name:metrics-server-6867b74b74-knfd4,Uid:5b3ef856-5b69-44b1-ae29-4a58bf235e41,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1730722073897454248,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-6867b74b74-knfd4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b3ef856-5b69-44b1-ae29-4a58bf235e41,k8s-app: metrics-server,pod-template-hash: 6867b74b74,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-11-04T12:07:47.
818693207Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:aca6b94caae07b5a74ad36f9c57730f991bec959a1ccd9c1c56e745dca69115a,Metadata:&PodSandboxMetadata{Name:kube-proxy-phzgx,Uid:4ea64f2c-7568-486d-9941-f89ed4221f35,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1730722068131535157,Labels:map[string]string{controller-revision-hash: 77987969cc,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-phzgx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ea64f2c-7568-486d-9941-f89ed4221f35,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-11-04T12:07:47.818745955Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:336518a304965b369441b5169d9fa9f4497228136703b26edd53c087aee1b3ad,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:0dabcf5a-028b-4ab6-8af4-be25abaeb9b5,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1730722068127073847,Labels:map[string
]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0dabcf5a-028b-4ab6-8af4-be25abaeb9b5,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.i
o/config.seen: 2024-11-04T12:07:47.818748840Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:68350a02deb9f96554682e48c2d4afb346b74aa306b27cc9bc532880c812da53,Metadata:&PodSandboxMetadata{Name:etcd-embed-certs-325116,Uid:1dc053128fa3b82a73e126c6c1d3a428,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1730722062328743897,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-embed-certs-325116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1dc053128fa3b82a73e126c6c1d3a428,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.47:2379,kubernetes.io/config.hash: 1dc053128fa3b82a73e126c6c1d3a428,kubernetes.io/config.seen: 2024-11-04T12:07:41.869744617Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:0b2c49eb7440715520d33371c0a313d168f992bed024b5b71d1cc12b2b7b61a8,Metadata:&PodSandboxMetadata{Name:kube-scheduler-embed-certs-3251
16,Uid:05da92dcb57907443316e8d42e4f92f6,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1730722062323934763,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-embed-certs-325116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05da92dcb57907443316e8d42e4f92f6,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 05da92dcb57907443316e8d42e4f92f6,kubernetes.io/config.seen: 2024-11-04T12:07:41.819255692Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:9ae27a866cd677f921ca52e40a7502b004e363809577197176b61038e3645206,Metadata:&PodSandboxMetadata{Name:kube-apiserver-embed-certs-325116,Uid:c2734426f909645ac2df56eef2ee66f9,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1730722062322300320,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-embed-certs-325116,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: c2734426f909645ac2df56eef2ee66f9,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.47:8443,kubernetes.io/config.hash: c2734426f909645ac2df56eef2ee66f9,kubernetes.io/config.seen: 2024-11-04T12:07:41.819249651Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:61b4c93a5104c14282a46db42c284d53e0810b86bf3875378a3bef79d4690984,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-embed-certs-325116,Uid:e356f340fd1b91ab3c1748076b1b8c75,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1730722062316826738,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-embed-certs-325116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e356f340fd1b91ab3c1748076b1b8c75,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: e356f340fd1b91ab3c1748076b1b
8c75,kubernetes.io/config.seen: 2024-11-04T12:07:41.819254208Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=42dca681-de7c-4921-9950-bcfa02bf9177 name=/runtime.v1.RuntimeService/ListPodSandbox
	Nov 04 12:28:48 embed-certs-325116 crio[700]: time="2024-11-04 12:28:48.056768895Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=11cddc39-9e65-47cf-92fd-3d3ed56921c1 name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 12:28:48 embed-certs-325116 crio[700]: time="2024-11-04 12:28:48.056859484Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=11cddc39-9e65-47cf-92fd-3d3ed56921c1 name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 12:28:48 embed-certs-325116 crio[700]: time="2024-11-04 12:28:48.057796434Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:95a9eb50a127a39e6276205bf440bcbc662ba25b24e029dc1a667f48f8481dde,PodSandboxId:336518a304965b369441b5169d9fa9f4497228136703b26edd53c087aee1b3ad,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730722099106749149,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0dabcf5a-028b-4ab6-8af4-be25abaeb9b5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17930a8c9f8feb57100ebdda160aeff0994c0ea14c95c6a20b8274d3fb3353c7,PodSandboxId:253c7105adc503a8f3b09c0483c61da8474fc59d515f9eda1bababbc055c7042,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1730722078227013809,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: faedbe05-e667-443f-9df2-18bb9bf19f99,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1f0c1ed5e8911dccb619576be93e1623936a2b05ccc1e80d52ffe8ba1292d27,PodSandboxId:586d31f23777792aee21d5492feb154dfd04c1e307d27b64490b656e37921d93,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730722075872483907,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-mf8xg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0162005-7971-4161-9575-9f36c13d54f2,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7558f4e108716f1710271ea75be748ffd928b609788942a3658f0d2237ebcc7,PodSandboxId:336518a304965b369441b5169d9fa9f4497228136703b26edd53c087aee1b3ad,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1730722068303069673,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
0dabcf5a-028b-4ab6-8af4-be25abaeb9b5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:512d8563ff2ef5d1f8021fa8815d00d2705c8df3b079a23ee6fc909af1c980f0,PodSandboxId:aca6b94caae07b5a74ad36f9c57730f991bec959a1ccd9c1c56e745dca69115a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730722068259990465,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-phzgx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ea64f2c-7568-486d-9941-f89ed4221
f35,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b575c045ea6e47b4e5a1384b6b473570efefa6f645c9c75d1eedf037230bc06,PodSandboxId:68350a02deb9f96554682e48c2d4afb346b74aa306b27cc9bc532880c812da53,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730722063732871849,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-325116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1dc053128fa3b82a73e126c6c1d3a428,},Annotations:map[string]string{io.kub
ernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5a0cb5f09f9920c57a05bcc9c16cdcab6806c42458b625d4d075a9d1bc3f80f,PodSandboxId:0b2c49eb7440715520d33371c0a313d168f992bed024b5b71d1cc12b2b7b61a8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730722063720504983,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-325116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05da92dcb57907443316e8d42e4f92f6,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e7999c6e5a24f7d8706b6722e1fe66996ecec4550d4a901729cac1f3f108f28,PodSandboxId:9ae27a866cd677f921ca52e40a7502b004e363809577197176b61038e3645206,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730722063722106945,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-325116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2734426f909645ac2df56eef2ee66f9,},Annotations:map[string]string{io.kubernetes.container.hash:
c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5751adaa2cf784f7619b281e11898b312ecc8186b36029e9e8a3b8e484cd703b,PodSandboxId:61b4c93a5104c14282a46db42c284d53e0810b86bf3875378a3bef79d4690984,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730722063702577217,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-325116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e356f340fd1b91ab3c1748076b1b8c75,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=11cddc39-9e65-47cf-92fd-3d3ed56921c1 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	95a9eb50a127a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      20 minutes ago      Running             storage-provisioner       2                   336518a304965       storage-provisioner
	17930a8c9f8fe       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   20 minutes ago      Running             busybox                   1                   253c7105adc50       busybox
	d1f0c1ed5e891       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      20 minutes ago      Running             coredns                   1                   586d31f237777       coredns-7c65d6cfc9-mf8xg
	c7558f4e10871       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      20 minutes ago      Exited              storage-provisioner       1                   336518a304965       storage-provisioner
	512d8563ff2ef       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                      20 minutes ago      Running             kube-proxy                1                   aca6b94caae07       kube-proxy-phzgx
	5b575c045ea6e       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      21 minutes ago      Running             etcd                      1                   68350a02deb9f       etcd-embed-certs-325116
	6e7999c6e5a24       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                      21 minutes ago      Running             kube-apiserver            1                   9ae27a866cd67       kube-apiserver-embed-certs-325116
	a5a0cb5f09f99       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                      21 minutes ago      Running             kube-scheduler            1                   0b2c49eb74407       kube-scheduler-embed-certs-325116
	5751adaa2cf78       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                      21 minutes ago      Running             kube-controller-manager   1                   61b4c93a5104c       kube-controller-manager-embed-certs-325116
	
	
	==> coredns [d1f0c1ed5e8911dccb619576be93e1623936a2b05ccc1e80d52ffe8ba1292d27] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:40946 - 37698 "HINFO IN 7585893187643998144.4477262375756637392. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.020423582s
	
	
	==> describe nodes <==
	Name:               embed-certs-325116
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-325116
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c03dd974d73f8853a1a57928c124797a5ae24dc4
	                    minikube.k8s.io/name=embed-certs-325116
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_11_04T11_59_54_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 04 Nov 2024 11:59:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-325116
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 04 Nov 2024 12:28:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 04 Nov 2024 12:28:43 +0000   Mon, 04 Nov 2024 11:59:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 04 Nov 2024 12:28:43 +0000   Mon, 04 Nov 2024 11:59:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 04 Nov 2024 12:28:43 +0000   Mon, 04 Nov 2024 11:59:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 04 Nov 2024 12:28:43 +0000   Mon, 04 Nov 2024 12:07:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.47
	  Hostname:    embed-certs-325116
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 14a53ffef4d24b9fac22919b5bf74740
	  System UUID:                14a53ffe-f4d2-4b9f-ac22-919b5bf74740
	  Boot ID:                    ce287235-6473-48ce-bd28-1f33727daed3
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 coredns-7c65d6cfc9-mf8xg                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     28m
	  kube-system                 etcd-embed-certs-325116                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         28m
	  kube-system                 kube-apiserver-embed-certs-325116             250m (12%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-controller-manager-embed-certs-325116    200m (10%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-proxy-phzgx                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-scheduler-embed-certs-325116             100m (5%)     0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 metrics-server-6867b74b74-knfd4               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         28m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 28m                kube-proxy       
	  Normal  Starting                 20m                kube-proxy       
	  Normal  NodeHasSufficientPID     28m                kubelet          Node embed-certs-325116 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  28m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  28m                kubelet          Node embed-certs-325116 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28m                kubelet          Node embed-certs-325116 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 28m                kubelet          Starting kubelet.
	  Normal  NodeReady                28m                kubelet          Node embed-certs-325116 status is now: NodeReady
	  Normal  RegisteredNode           28m                node-controller  Node embed-certs-325116 event: Registered Node embed-certs-325116 in Controller
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  21m (x8 over 21m)  kubelet          Node embed-certs-325116 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m (x8 over 21m)  kubelet          Node embed-certs-325116 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21m (x7 over 21m)  kubelet          Node embed-certs-325116 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           20m                node-controller  Node embed-certs-325116 event: Registered Node embed-certs-325116 in Controller
	
	
	==> dmesg <==
	[Nov 4 12:07] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.047803] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.036594] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.786556] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.902099] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.528907] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.035515] systemd-fstab-generator[621]: Ignoring "noauto" option for root device
	[  +0.054885] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.053207] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +0.187720] systemd-fstab-generator[648]: Ignoring "noauto" option for root device
	[  +0.131084] systemd-fstab-generator[660]: Ignoring "noauto" option for root device
	[  +0.271873] systemd-fstab-generator[690]: Ignoring "noauto" option for root device
	[  +3.915390] systemd-fstab-generator[781]: Ignoring "noauto" option for root device
	[  +1.600210] systemd-fstab-generator[899]: Ignoring "noauto" option for root device
	[  +0.059958] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.486181] kauditd_printk_skb: 69 callbacks suppressed
	[  +1.986083] systemd-fstab-generator[1529]: Ignoring "noauto" option for root device
	[  +3.751816] kauditd_printk_skb: 64 callbacks suppressed
	[Nov 4 12:08] kauditd_printk_skb: 49 callbacks suppressed
	
	
	==> etcd [5b575c045ea6e47b4e5a1384b6b473570efefa6f645c9c75d1eedf037230bc06] <==
	{"level":"warn","ts":"2024-11-04T12:08:20.285245Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"625.294499ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/kube-system/storage-provisioner.1804c28d2aa51540\" ","response":"range_response_count:1 size:766"}
	{"level":"info","ts":"2024-11-04T12:08:20.285284Z","caller":"traceutil/trace.go:171","msg":"trace[567260375] range","detail":"{range_begin:/registry/events/kube-system/storage-provisioner.1804c28d2aa51540; range_end:; response_count:1; response_revision:590; }","duration":"625.375971ms","start":"2024-11-04T12:08:19.659901Z","end":"2024-11-04T12:08:20.285277Z","steps":["trace[567260375] 'agreement among raft nodes before linearized reading'  (duration: 625.211838ms)"],"step_count":1}
	{"level":"warn","ts":"2024-11-04T12:08:20.285304Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-11-04T12:08:19.659887Z","time spent":"625.411945ms","remote":"127.0.0.1:50152","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":1,"response size":790,"request content":"key:\"/registry/events/kube-system/storage-provisioner.1804c28d2aa51540\" "}
	{"level":"warn","ts":"2024-11-04T12:08:20.681378Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"266.414124ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13046526760410608442 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/storage-provisioner.1804c28d2aa51540\" mod_revision:505 > success:<request_put:<key:\"/registry/events/kube-system/storage-provisioner.1804c28d2aa51540\" value_size:668 lease:3823154723555831928 >> failure:<request_range:<key:\"/registry/events/kube-system/storage-provisioner.1804c28d2aa51540\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-11-04T12:08:20.681536Z","caller":"traceutil/trace.go:171","msg":"trace[1578497151] linearizableReadLoop","detail":"{readStateIndex:630; appliedIndex:629; }","duration":"390.360247ms","start":"2024-11-04T12:08:20.291162Z","end":"2024-11-04T12:08:20.681522Z","steps":["trace[1578497151] 'read index received'  (duration: 123.674841ms)","trace[1578497151] 'applied index is now lower than readState.Index'  (duration: 266.684382ms)"],"step_count":2}
	{"level":"info","ts":"2024-11-04T12:08:20.681569Z","caller":"traceutil/trace.go:171","msg":"trace[610576868] transaction","detail":"{read_only:false; response_revision:591; number_of_response:1; }","duration":"391.236232ms","start":"2024-11-04T12:08:20.290320Z","end":"2024-11-04T12:08:20.681556Z","steps":["trace[610576868] 'process raft request'  (duration: 124.592168ms)","trace[610576868] 'compare'  (duration: 266.270817ms)"],"step_count":2}
	{"level":"warn","ts":"2024-11-04T12:08:20.681666Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-11-04T12:08:20.290299Z","time spent":"391.320764ms","remote":"127.0.0.1:50152","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":751,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/events/kube-system/storage-provisioner.1804c28d2aa51540\" mod_revision:505 > success:<request_put:<key:\"/registry/events/kube-system/storage-provisioner.1804c28d2aa51540\" value_size:668 lease:3823154723555831928 >> failure:<request_range:<key:\"/registry/events/kube-system/storage-provisioner.1804c28d2aa51540\" > >"}
	{"level":"warn","ts":"2024-11-04T12:08:20.681740Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"390.572432ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-11-04T12:08:20.681777Z","caller":"traceutil/trace.go:171","msg":"trace[1183697477] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:591; }","duration":"390.61325ms","start":"2024-11-04T12:08:20.291158Z","end":"2024-11-04T12:08:20.681771Z","steps":["trace[1183697477] 'agreement among raft nodes before linearized reading'  (duration: 390.479384ms)"],"step_count":1}
	{"level":"warn","ts":"2024-11-04T12:08:20.681846Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-11-04T12:08:20.291093Z","time spent":"390.746632ms","remote":"127.0.0.1:50084","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2024-11-04T12:08:20.681957Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"390.743758ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/embed-certs-325116\" ","response":"range_response_count:1 size:5752"}
	{"level":"info","ts":"2024-11-04T12:08:20.682943Z","caller":"traceutil/trace.go:171","msg":"trace[1992447961] range","detail":"{range_begin:/registry/minions/embed-certs-325116; range_end:; response_count:1; response_revision:591; }","duration":"391.725677ms","start":"2024-11-04T12:08:20.291207Z","end":"2024-11-04T12:08:20.682932Z","steps":["trace[1992447961] 'agreement among raft nodes before linearized reading'  (duration: 390.692407ms)"],"step_count":1}
	{"level":"warn","ts":"2024-11-04T12:08:20.683881Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-11-04T12:08:20.291186Z","time spent":"392.68281ms","remote":"127.0.0.1:50240","response type":"/etcdserverpb.KV/Range","request count":0,"request size":38,"response count":1,"response size":5776,"request content":"key:\"/registry/minions/embed-certs-325116\" "}
	{"level":"warn","ts":"2024-11-04T12:08:20.683031Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"391.789202ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-6867b74b74-knfd4\" ","response":"range_response_count:1 size:4340"}
	{"level":"info","ts":"2024-11-04T12:08:20.684311Z","caller":"traceutil/trace.go:171","msg":"trace[1716411033] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-6867b74b74-knfd4; range_end:; response_count:1; response_revision:591; }","duration":"393.070573ms","start":"2024-11-04T12:08:20.291231Z","end":"2024-11-04T12:08:20.684301Z","steps":["trace[1716411033] 'agreement among raft nodes before linearized reading'  (duration: 391.21946ms)"],"step_count":1}
	{"level":"warn","ts":"2024-11-04T12:08:20.684839Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-11-04T12:08:20.291215Z","time spent":"393.611864ms","remote":"127.0.0.1:50250","response type":"/etcdserverpb.KV/Range","request count":0,"request size":60,"response count":1,"response size":4364,"request content":"key:\"/registry/pods/kube-system/metrics-server-6867b74b74-knfd4\" "}
	{"level":"info","ts":"2024-11-04T12:17:45.942110Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":820}
	{"level":"info","ts":"2024-11-04T12:17:45.951356Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":820,"took":"8.984362ms","hash":2746580650,"current-db-size-bytes":2609152,"current-db-size":"2.6 MB","current-db-size-in-use-bytes":2609152,"current-db-size-in-use":"2.6 MB"}
	{"level":"info","ts":"2024-11-04T12:17:45.951414Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2746580650,"revision":820,"compact-revision":-1}
	{"level":"info","ts":"2024-11-04T12:22:45.947921Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1062}
	{"level":"info","ts":"2024-11-04T12:22:45.950947Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1062,"took":"2.820845ms","hash":2452062974,"current-db-size-bytes":2609152,"current-db-size":"2.6 MB","current-db-size-in-use-bytes":1605632,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-11-04T12:22:45.950987Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2452062974,"revision":1062,"compact-revision":820}
	{"level":"info","ts":"2024-11-04T12:27:45.955266Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1305}
	{"level":"info","ts":"2024-11-04T12:27:45.958573Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1305,"took":"3.1196ms","hash":2473740031,"current-db-size-bytes":2609152,"current-db-size":"2.6 MB","current-db-size-in-use-bytes":1622016,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-11-04T12:27:45.958618Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2473740031,"revision":1305,"compact-revision":1062}
	
	
	==> kernel <==
	 12:28:48 up 21 min,  0 users,  load average: 0.11, 0.07, 0.08
	Linux embed-certs-325116 5.10.207 #1 SMP Wed Oct 30 13:38:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [6e7999c6e5a24f7d8706b6722e1fe66996ecec4550d4a901729cac1f3f108f28] <==
	I1104 12:25:48.131689       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1104 12:25:48.132864       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1104 12:27:47.131250       1 handler_proxy.go:99] no RequestInfo found in the context
	E1104 12:27:47.131492       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1104 12:27:48.133965       1 handler_proxy.go:99] no RequestInfo found in the context
	E1104 12:27:48.134070       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1104 12:27:48.134179       1 handler_proxy.go:99] no RequestInfo found in the context
	E1104 12:27:48.134197       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1104 12:27:48.135194       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1104 12:27:48.135275       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1104 12:28:48.136178       1 handler_proxy.go:99] no RequestInfo found in the context
	E1104 12:28:48.136230       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W1104 12:28:48.136266       1 handler_proxy.go:99] no RequestInfo found in the context
	E1104 12:28:48.136291       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1104 12:28:48.137439       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1104 12:28:48.137502       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [5751adaa2cf784f7619b281e11898b312ecc8186b36029e9e8a3b8e484cd703b] <==
	I1104 12:23:21.348792       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1104 12:23:37.366661       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="embed-certs-325116"
	E1104 12:23:50.883591       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1104 12:23:51.355075       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1104 12:23:54.902407       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="86.11µs"
	I1104 12:24:08.904552       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="124.339µs"
	E1104 12:24:20.891629       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1104 12:24:21.361845       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1104 12:24:50.897566       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1104 12:24:51.370935       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1104 12:25:20.906649       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1104 12:25:21.381962       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1104 12:25:50.913475       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1104 12:25:51.389768       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1104 12:26:20.920099       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1104 12:26:21.396206       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1104 12:26:50.926945       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1104 12:26:51.402618       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1104 12:27:20.934401       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1104 12:27:21.409354       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1104 12:27:50.940500       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1104 12:27:51.416688       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1104 12:28:20.945989       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1104 12:28:21.423903       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1104 12:28:43.674898       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="embed-certs-325116"
	
	
	==> kube-proxy [512d8563ff2ef5d1f8021fa8815d00d2705c8df3b079a23ee6fc909af1c980f0] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1104 12:07:48.514943       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1104 12:07:48.528426       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.47"]
	E1104 12:07:48.528639       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1104 12:07:48.603457       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1104 12:07:48.603577       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1104 12:07:48.603656       1 server_linux.go:169] "Using iptables Proxier"
	I1104 12:07:48.607344       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1104 12:07:48.607632       1 server.go:483] "Version info" version="v1.31.2"
	I1104 12:07:48.607643       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1104 12:07:48.608451       1 config.go:199] "Starting service config controller"
	I1104 12:07:48.608537       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1104 12:07:48.608502       1 config.go:105] "Starting endpoint slice config controller"
	I1104 12:07:48.608660       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1104 12:07:48.608791       1 config.go:328] "Starting node config controller"
	I1104 12:07:48.608812       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1104 12:07:48.709462       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1104 12:07:48.709504       1 shared_informer.go:320] Caches are synced for service config
	I1104 12:07:48.709576       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [a5a0cb5f09f9920c57a05bcc9c16cdcab6806c42458b625d4d075a9d1bc3f80f] <==
	I1104 12:07:44.389836       1 serving.go:386] Generated self-signed cert in-memory
	W1104 12:07:47.075348       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1104 12:07:47.075394       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1104 12:07:47.075435       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1104 12:07:47.075443       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1104 12:07:47.118024       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.2"
	I1104 12:07:47.118059       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1104 12:07:47.120160       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1104 12:07:47.120289       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1104 12:07:47.120411       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1104 12:07:47.120612       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1104 12:07:47.221791       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 04 12:27:50 embed-certs-325116 kubelet[906]: E1104 12:27:50.888245     906 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-knfd4" podUID="5b3ef856-5b69-44b1-ae29-4a58bf235e41"
	Nov 04 12:27:52 embed-certs-325116 kubelet[906]: E1104 12:27:52.141340     906 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730723272140249104,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 04 12:27:52 embed-certs-325116 kubelet[906]: E1104 12:27:52.141383     906 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730723272140249104,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 04 12:28:02 embed-certs-325116 kubelet[906]: E1104 12:28:02.144146     906 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730723282143221325,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 04 12:28:02 embed-certs-325116 kubelet[906]: E1104 12:28:02.144502     906 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730723282143221325,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 04 12:28:03 embed-certs-325116 kubelet[906]: E1104 12:28:03.888610     906 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-knfd4" podUID="5b3ef856-5b69-44b1-ae29-4a58bf235e41"
	Nov 04 12:28:12 embed-certs-325116 kubelet[906]: E1104 12:28:12.146358     906 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730723292145793711,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 04 12:28:12 embed-certs-325116 kubelet[906]: E1104 12:28:12.146915     906 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730723292145793711,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 04 12:28:18 embed-certs-325116 kubelet[906]: E1104 12:28:18.889017     906 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-knfd4" podUID="5b3ef856-5b69-44b1-ae29-4a58bf235e41"
	Nov 04 12:28:22 embed-certs-325116 kubelet[906]: E1104 12:28:22.151090     906 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730723302149340412,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 04 12:28:22 embed-certs-325116 kubelet[906]: E1104 12:28:22.151957     906 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730723302149340412,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 04 12:28:29 embed-certs-325116 kubelet[906]: E1104 12:28:29.891446     906 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-knfd4" podUID="5b3ef856-5b69-44b1-ae29-4a58bf235e41"
	Nov 04 12:28:32 embed-certs-325116 kubelet[906]: E1104 12:28:32.154263     906 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730723312153661290,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 04 12:28:32 embed-certs-325116 kubelet[906]: E1104 12:28:32.154625     906 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730723312153661290,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 04 12:28:41 embed-certs-325116 kubelet[906]: E1104 12:28:41.916351     906 iptables.go:577] "Could not set up iptables canary" err=<
	Nov 04 12:28:41 embed-certs-325116 kubelet[906]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Nov 04 12:28:41 embed-certs-325116 kubelet[906]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 04 12:28:41 embed-certs-325116 kubelet[906]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 04 12:28:41 embed-certs-325116 kubelet[906]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Nov 04 12:28:42 embed-certs-325116 kubelet[906]: E1104 12:28:42.156566     906 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730723322156306633,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 04 12:28:42 embed-certs-325116 kubelet[906]: E1104 12:28:42.156590     906 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730723322156306633,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 04 12:28:42 embed-certs-325116 kubelet[906]: E1104 12:28:42.906351     906 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Nov 04 12:28:42 embed-certs-325116 kubelet[906]: E1104 12:28:42.906415     906 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Nov 04 12:28:42 embed-certs-325116 kubelet[906]: E1104 12:28:42.906655     906 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z8sp9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:
nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdi
n:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-6867b74b74-knfd4_kube-system(5b3ef856-5b69-44b1-ae29-4a58bf235e41): ErrImagePull: pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" logger="UnhandledError"
	Nov 04 12:28:42 embed-certs-325116 kubelet[906]: E1104 12:28:42.907942     906 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-6867b74b74-knfd4" podUID="5b3ef856-5b69-44b1-ae29-4a58bf235e41"
	
	
	==> storage-provisioner [95a9eb50a127a39e6276205bf440bcbc662ba25b24e029dc1a667f48f8481dde] <==
	I1104 12:08:19.682687       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1104 12:08:19.694503       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1104 12:08:19.694582       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1104 12:08:37.687106       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1104 12:08:37.687318       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-325116_2b58ea8e-9e9e-47f4-91d4-f8a31f78c568!
	I1104 12:08:37.687310       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ad2eac65-348b-49fe-a8c6-4504e588ecb5", APIVersion:"v1", ResourceVersion:"604", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-325116_2b58ea8e-9e9e-47f4-91d4-f8a31f78c568 became leader
	I1104 12:08:37.788414       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-325116_2b58ea8e-9e9e-47f4-91d4-f8a31f78c568!
	
	
	==> storage-provisioner [c7558f4e108716f1710271ea75be748ffd928b609788942a3658f0d2237ebcc7] <==
	I1104 12:07:48.416484       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1104 12:08:18.423015       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-325116 -n embed-certs-325116
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-325116 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-knfd4
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-325116 describe pod metrics-server-6867b74b74-knfd4
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-325116 describe pod metrics-server-6867b74b74-knfd4: exit status 1 (58.379155ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-knfd4" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-325116 describe pod metrics-server-6867b74b74-knfd4: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (451.11s)
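The kubelet log above shows why metrics-server never became ready on this profile: its image was rewritten to pull from fake.domain, which does not resolve ("dial tcp: lookup fake.domain: no such host"), so the pod stays in ImagePullBackOff. A minimal sketch of how that could have been confirmed manually while the embed-certs-325116 profile was still up; the context and namespace come from the logs above, the deployment name is inferred from the metrics-server-6867b74b74 ReplicaSet, and the jsonpath expression is only an illustration, not part of the test:

    # Print the image the metrics-server deployment was rewritten to use
    kubectl --context embed-certs-325116 -n kube-system get deploy metrics-server \
      -o jsonpath='{.spec.template.spec.containers[0].image}'
    # Expected on this run: fake.domain/registry.k8s.io/echoserver:1.4

    # Show the recent image-pull failures behind the ImagePullBackOff
    kubectl --context embed-certs-325116 -n kube-system get events \
      --field-selector reason=Failed --sort-by=.lastTimestamp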

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (542.05s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:287: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-036892 -n default-k8s-diff-port-036892
start_stop_delete_test.go:287: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-11-04 12:30:36.34547172 +0000 UTC m=+6827.700530504
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-036892 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-036892 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (56.768239ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): namespaces "kubernetes-dashboard" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-036892 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
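The assertion at start_stop_delete_test.go:297 checks that the dashboard-metrics-scraper deployment lists an image containing registry.k8s.io/echoserver:1.4, but the kubernetes-dashboard namespace was never created after the restart, so there is no deployment to inspect. A rough manual equivalent of that check is sketched below; the context, namespace, and deployment name are taken from the kubectl command above, and the jsonpath expression is just an illustration:

    # Would print the MetricsScraper image if the dashboard addon had deployed
    kubectl --context default-k8s-diff-port-036892 -n kubernetes-dashboard \
      get deploy dashboard-metrics-scraper \
      -o jsonpath='{.spec.template.spec.containers[*].image}'
    # On this run it instead fails with: namespaces "kubernetes-dashboard" not found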
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-036892 -n default-k8s-diff-port-036892
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-036892 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-036892 logs -n 25: (1.068680519s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable metrics-server -p default-k8s-diff-port-036892  | default-k8s-diff-port-036892 | jenkins | v1.34.0 | 04 Nov 24 12:01 UTC | 04 Nov 24 12:01 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-036892 | jenkins | v1.34.0 | 04 Nov 24 12:01 UTC |                     |
	|         | default-k8s-diff-port-036892                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-908370                  | no-preload-908370            | jenkins | v1.34.0 | 04 Nov 24 12:02 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-908370                                   | no-preload-908370            | jenkins | v1.34.0 | 04 Nov 24 12:02 UTC | 04 Nov 24 12:13 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-325116                 | embed-certs-325116           | jenkins | v1.34.0 | 04 Nov 24 12:02 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-589257        | old-k8s-version-589257       | jenkins | v1.34.0 | 04 Nov 24 12:02 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p embed-certs-325116                                  | embed-certs-325116           | jenkins | v1.34.0 | 04 Nov 24 12:02 UTC | 04 Nov 24 12:12 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-036892       | default-k8s-diff-port-036892 | jenkins | v1.34.0 | 04 Nov 24 12:04 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-589257                              | old-k8s-version-589257       | jenkins | v1.34.0 | 04 Nov 24 12:04 UTC | 04 Nov 24 12:04 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-036892 | jenkins | v1.34.0 | 04 Nov 24 12:04 UTC | 04 Nov 24 12:12 UTC |
	|         | default-k8s-diff-port-036892                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-589257             | old-k8s-version-589257       | jenkins | v1.34.0 | 04 Nov 24 12:04 UTC | 04 Nov 24 12:04 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-589257                              | old-k8s-version-589257       | jenkins | v1.34.0 | 04 Nov 24 12:04 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-589257                              | old-k8s-version-589257       | jenkins | v1.34.0 | 04 Nov 24 12:28 UTC | 04 Nov 24 12:28 UTC |
	| start   | -p newest-cni-374564 --memory=2200 --alsologtostderr   | newest-cni-374564            | jenkins | v1.34.0 | 04 Nov 24 12:28 UTC | 04 Nov 24 12:29 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| delete  | -p no-preload-908370                                   | no-preload-908370            | jenkins | v1.34.0 | 04 Nov 24 12:28 UTC | 04 Nov 24 12:28 UTC |
	| delete  | -p embed-certs-325116                                  | embed-certs-325116           | jenkins | v1.34.0 | 04 Nov 24 12:28 UTC | 04 Nov 24 12:28 UTC |
	| addons  | enable metrics-server -p newest-cni-374564             | newest-cni-374564            | jenkins | v1.34.0 | 04 Nov 24 12:29 UTC | 04 Nov 24 12:29 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-374564                                   | newest-cni-374564            | jenkins | v1.34.0 | 04 Nov 24 12:29 UTC | 04 Nov 24 12:29 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-374564                  | newest-cni-374564            | jenkins | v1.34.0 | 04 Nov 24 12:29 UTC | 04 Nov 24 12:29 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-374564 --memory=2200 --alsologtostderr   | newest-cni-374564            | jenkins | v1.34.0 | 04 Nov 24 12:29 UTC | 04 Nov 24 12:29 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| image   | newest-cni-374564 image list                           | newest-cni-374564            | jenkins | v1.34.0 | 04 Nov 24 12:29 UTC | 04 Nov 24 12:29 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-374564                                   | newest-cni-374564            | jenkins | v1.34.0 | 04 Nov 24 12:29 UTC | 04 Nov 24 12:29 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-374564                                   | newest-cni-374564            | jenkins | v1.34.0 | 04 Nov 24 12:30 UTC | 04 Nov 24 12:30 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-374564                                   | newest-cni-374564            | jenkins | v1.34.0 | 04 Nov 24 12:30 UTC | 04 Nov 24 12:30 UTC |
	| delete  | -p newest-cni-374564                                   | newest-cni-374564            | jenkins | v1.34.0 | 04 Nov 24 12:30 UTC | 04 Nov 24 12:30 UTC |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/11/04 12:29:23
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1104 12:29:23.696968   94038 out.go:345] Setting OutFile to fd 1 ...
	I1104 12:29:23.697087   94038 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1104 12:29:23.697098   94038 out.go:358] Setting ErrFile to fd 2...
	I1104 12:29:23.697105   94038 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1104 12:29:23.697328   94038 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19906-19898/.minikube/bin
	I1104 12:29:23.697879   94038 out.go:352] Setting JSON to false
	I1104 12:29:23.698822   94038 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":11515,"bootTime":1730711849,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1104 12:29:23.698915   94038 start.go:139] virtualization: kvm guest
	I1104 12:29:23.701267   94038 out.go:177] * [newest-cni-374564] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1104 12:29:23.702572   94038 out.go:177]   - MINIKUBE_LOCATION=19906
	I1104 12:29:23.702611   94038 notify.go:220] Checking for updates...
	I1104 12:29:23.705168   94038 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1104 12:29:23.706500   94038 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19906-19898/kubeconfig
	I1104 12:29:23.707683   94038 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19906-19898/.minikube
	I1104 12:29:23.708992   94038 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1104 12:29:23.710221   94038 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1104 12:29:23.711818   94038 config.go:182] Loaded profile config "newest-cni-374564": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1104 12:29:23.712197   94038 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:29:23.712238   94038 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:29:23.727335   94038 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41417
	I1104 12:29:23.727928   94038 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:29:23.728489   94038 main.go:141] libmachine: Using API Version  1
	I1104 12:29:23.728509   94038 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:29:23.728861   94038 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:29:23.729016   94038 main.go:141] libmachine: (newest-cni-374564) Calling .DriverName
	I1104 12:29:23.729276   94038 driver.go:394] Setting default libvirt URI to qemu:///system
	I1104 12:29:23.729578   94038 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:29:23.729618   94038 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:29:23.744143   94038 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41565
	I1104 12:29:23.744546   94038 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:29:23.744999   94038 main.go:141] libmachine: Using API Version  1
	I1104 12:29:23.745018   94038 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:29:23.745333   94038 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:29:23.745494   94038 main.go:141] libmachine: (newest-cni-374564) Calling .DriverName
	I1104 12:29:23.781515   94038 out.go:177] * Using the kvm2 driver based on existing profile
	I1104 12:29:23.782796   94038 start.go:297] selected driver: kvm2
	I1104 12:29:23.782806   94038 start.go:901] validating driver "kvm2" against &{Name:newest-cni-374564 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.2 ClusterName:newest-cni-374564 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.24 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] Sta
rtHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1104 12:29:23.782914   94038 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1104 12:29:23.783674   94038 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1104 12:29:23.783757   94038 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19906-19898/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1104 12:29:23.799325   94038 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1104 12:29:23.799771   94038 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1104 12:29:23.799801   94038 cni.go:84] Creating CNI manager for ""
	I1104 12:29:23.799836   94038 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1104 12:29:23.799872   94038 start.go:340] cluster config:
	{Name:newest-cni-374564 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:newest-cni-374564 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.24 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:
Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1104 12:29:23.800000   94038 iso.go:125] acquiring lock: {Name:mk00c7dd6e02d348844f079f7574057e15cae010 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1104 12:29:23.802905   94038 out.go:177] * Starting "newest-cni-374564" primary control-plane node in "newest-cni-374564" cluster
	I1104 12:29:23.804086   94038 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1104 12:29:23.804120   94038 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1104 12:29:23.804141   94038 cache.go:56] Caching tarball of preloaded images
	I1104 12:29:23.804230   94038 preload.go:172] Found /home/jenkins/minikube-integration/19906-19898/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1104 12:29:23.804246   94038 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1104 12:29:23.804360   94038 profile.go:143] Saving config to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/newest-cni-374564/config.json ...
	I1104 12:29:23.804572   94038 start.go:360] acquireMachinesLock for newest-cni-374564: {Name:mka4ed965095cfb58c9b0f5916edff39b4236a2f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1104 12:29:23.804616   94038 start.go:364] duration metric: took 25.686µs to acquireMachinesLock for "newest-cni-374564"
	I1104 12:29:23.804635   94038 start.go:96] Skipping create...Using existing machine configuration
	I1104 12:29:23.804644   94038 fix.go:54] fixHost starting: 
	I1104 12:29:23.804919   94038 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:29:23.804954   94038 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:29:23.819683   94038 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46795
	I1104 12:29:23.820082   94038 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:29:23.820524   94038 main.go:141] libmachine: Using API Version  1
	I1104 12:29:23.820538   94038 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:29:23.820793   94038 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:29:23.820961   94038 main.go:141] libmachine: (newest-cni-374564) Calling .DriverName
	I1104 12:29:23.821071   94038 main.go:141] libmachine: (newest-cni-374564) Calling .GetState
	I1104 12:29:23.822787   94038 fix.go:112] recreateIfNeeded on newest-cni-374564: state=Stopped err=<nil>
	I1104 12:29:23.822813   94038 main.go:141] libmachine: (newest-cni-374564) Calling .DriverName
	W1104 12:29:23.822965   94038 fix.go:138] unexpected machine state, will restart: <nil>
	I1104 12:29:23.825134   94038 out.go:177] * Restarting existing kvm2 VM for "newest-cni-374564" ...
	I1104 12:29:23.826617   94038 main.go:141] libmachine: (newest-cni-374564) Calling .Start
	I1104 12:29:23.826797   94038 main.go:141] libmachine: (newest-cni-374564) Ensuring networks are active...
	I1104 12:29:23.827563   94038 main.go:141] libmachine: (newest-cni-374564) Ensuring network default is active
	I1104 12:29:23.827899   94038 main.go:141] libmachine: (newest-cni-374564) Ensuring network mk-newest-cni-374564 is active
	I1104 12:29:23.828282   94038 main.go:141] libmachine: (newest-cni-374564) Getting domain xml...
	I1104 12:29:23.829086   94038 main.go:141] libmachine: (newest-cni-374564) Creating domain...
	I1104 12:29:25.036543   94038 main.go:141] libmachine: (newest-cni-374564) Waiting to get IP...
	I1104 12:29:25.037374   94038 main.go:141] libmachine: (newest-cni-374564) DBG | domain newest-cni-374564 has defined MAC address 52:54:00:ed:c9:c8 in network mk-newest-cni-374564
	I1104 12:29:25.037784   94038 main.go:141] libmachine: (newest-cni-374564) DBG | unable to find current IP address of domain newest-cni-374564 in network mk-newest-cni-374564
	I1104 12:29:25.037849   94038 main.go:141] libmachine: (newest-cni-374564) DBG | I1104 12:29:25.037786   94073 retry.go:31] will retry after 247.092036ms: waiting for machine to come up
	I1104 12:29:25.286188   94038 main.go:141] libmachine: (newest-cni-374564) DBG | domain newest-cni-374564 has defined MAC address 52:54:00:ed:c9:c8 in network mk-newest-cni-374564
	I1104 12:29:25.286614   94038 main.go:141] libmachine: (newest-cni-374564) DBG | unable to find current IP address of domain newest-cni-374564 in network mk-newest-cni-374564
	I1104 12:29:25.286643   94038 main.go:141] libmachine: (newest-cni-374564) DBG | I1104 12:29:25.286568   94073 retry.go:31] will retry after 324.722588ms: waiting for machine to come up
	I1104 12:29:25.613260   94038 main.go:141] libmachine: (newest-cni-374564) DBG | domain newest-cni-374564 has defined MAC address 52:54:00:ed:c9:c8 in network mk-newest-cni-374564
	I1104 12:29:25.613618   94038 main.go:141] libmachine: (newest-cni-374564) DBG | unable to find current IP address of domain newest-cni-374564 in network mk-newest-cni-374564
	I1104 12:29:25.613640   94038 main.go:141] libmachine: (newest-cni-374564) DBG | I1104 12:29:25.613588   94073 retry.go:31] will retry after 403.9618ms: waiting for machine to come up
	I1104 12:29:26.019055   94038 main.go:141] libmachine: (newest-cni-374564) DBG | domain newest-cni-374564 has defined MAC address 52:54:00:ed:c9:c8 in network mk-newest-cni-374564
	I1104 12:29:26.019513   94038 main.go:141] libmachine: (newest-cni-374564) DBG | unable to find current IP address of domain newest-cni-374564 in network mk-newest-cni-374564
	I1104 12:29:26.019541   94038 main.go:141] libmachine: (newest-cni-374564) DBG | I1104 12:29:26.019460   94073 retry.go:31] will retry after 383.831281ms: waiting for machine to come up
	I1104 12:29:26.404904   94038 main.go:141] libmachine: (newest-cni-374564) DBG | domain newest-cni-374564 has defined MAC address 52:54:00:ed:c9:c8 in network mk-newest-cni-374564
	I1104 12:29:26.405337   94038 main.go:141] libmachine: (newest-cni-374564) DBG | unable to find current IP address of domain newest-cni-374564 in network mk-newest-cni-374564
	I1104 12:29:26.405359   94038 main.go:141] libmachine: (newest-cni-374564) DBG | I1104 12:29:26.405307   94073 retry.go:31] will retry after 734.016738ms: waiting for machine to come up
	I1104 12:29:27.141434   94038 main.go:141] libmachine: (newest-cni-374564) DBG | domain newest-cni-374564 has defined MAC address 52:54:00:ed:c9:c8 in network mk-newest-cni-374564
	I1104 12:29:27.141900   94038 main.go:141] libmachine: (newest-cni-374564) DBG | unable to find current IP address of domain newest-cni-374564 in network mk-newest-cni-374564
	I1104 12:29:27.141932   94038 main.go:141] libmachine: (newest-cni-374564) DBG | I1104 12:29:27.141841   94073 retry.go:31] will retry after 916.776981ms: waiting for machine to come up
	I1104 12:29:28.059723   94038 main.go:141] libmachine: (newest-cni-374564) DBG | domain newest-cni-374564 has defined MAC address 52:54:00:ed:c9:c8 in network mk-newest-cni-374564
	I1104 12:29:28.060094   94038 main.go:141] libmachine: (newest-cni-374564) DBG | unable to find current IP address of domain newest-cni-374564 in network mk-newest-cni-374564
	I1104 12:29:28.060128   94038 main.go:141] libmachine: (newest-cni-374564) DBG | I1104 12:29:28.060058   94073 retry.go:31] will retry after 1.09327384s: waiting for machine to come up
	I1104 12:29:29.154723   94038 main.go:141] libmachine: (newest-cni-374564) DBG | domain newest-cni-374564 has defined MAC address 52:54:00:ed:c9:c8 in network mk-newest-cni-374564
	I1104 12:29:29.155086   94038 main.go:141] libmachine: (newest-cni-374564) DBG | unable to find current IP address of domain newest-cni-374564 in network mk-newest-cni-374564
	I1104 12:29:29.155108   94038 main.go:141] libmachine: (newest-cni-374564) DBG | I1104 12:29:29.155038   94073 retry.go:31] will retry after 1.449634082s: waiting for machine to come up
	I1104 12:29:30.606689   94038 main.go:141] libmachine: (newest-cni-374564) DBG | domain newest-cni-374564 has defined MAC address 52:54:00:ed:c9:c8 in network mk-newest-cni-374564
	I1104 12:29:30.607256   94038 main.go:141] libmachine: (newest-cni-374564) DBG | unable to find current IP address of domain newest-cni-374564 in network mk-newest-cni-374564
	I1104 12:29:30.607285   94038 main.go:141] libmachine: (newest-cni-374564) DBG | I1104 12:29:30.607206   94073 retry.go:31] will retry after 1.16356841s: waiting for machine to come up
	I1104 12:29:31.772607   94038 main.go:141] libmachine: (newest-cni-374564) DBG | domain newest-cni-374564 has defined MAC address 52:54:00:ed:c9:c8 in network mk-newest-cni-374564
	I1104 12:29:31.773083   94038 main.go:141] libmachine: (newest-cni-374564) DBG | unable to find current IP address of domain newest-cni-374564 in network mk-newest-cni-374564
	I1104 12:29:31.773112   94038 main.go:141] libmachine: (newest-cni-374564) DBG | I1104 12:29:31.773042   94073 retry.go:31] will retry after 1.74964573s: waiting for machine to come up
	I1104 12:29:33.524578   94038 main.go:141] libmachine: (newest-cni-374564) DBG | domain newest-cni-374564 has defined MAC address 52:54:00:ed:c9:c8 in network mk-newest-cni-374564
	I1104 12:29:33.525125   94038 main.go:141] libmachine: (newest-cni-374564) DBG | unable to find current IP address of domain newest-cni-374564 in network mk-newest-cni-374564
	I1104 12:29:33.525176   94038 main.go:141] libmachine: (newest-cni-374564) DBG | I1104 12:29:33.525081   94073 retry.go:31] will retry after 1.990600001s: waiting for machine to come up
	I1104 12:29:35.518217   94038 main.go:141] libmachine: (newest-cni-374564) DBG | domain newest-cni-374564 has defined MAC address 52:54:00:ed:c9:c8 in network mk-newest-cni-374564
	I1104 12:29:35.518684   94038 main.go:141] libmachine: (newest-cni-374564) DBG | unable to find current IP address of domain newest-cni-374564 in network mk-newest-cni-374564
	I1104 12:29:35.518708   94038 main.go:141] libmachine: (newest-cni-374564) DBG | I1104 12:29:35.518644   94073 retry.go:31] will retry after 2.481296069s: waiting for machine to come up
	I1104 12:29:38.002319   94038 main.go:141] libmachine: (newest-cni-374564) DBG | domain newest-cni-374564 has defined MAC address 52:54:00:ed:c9:c8 in network mk-newest-cni-374564
	I1104 12:29:38.002859   94038 main.go:141] libmachine: (newest-cni-374564) DBG | unable to find current IP address of domain newest-cni-374564 in network mk-newest-cni-374564
	I1104 12:29:38.002908   94038 main.go:141] libmachine: (newest-cni-374564) DBG | I1104 12:29:38.002839   94073 retry.go:31] will retry after 3.404048748s: waiting for machine to come up
	I1104 12:29:41.410058   94038 main.go:141] libmachine: (newest-cni-374564) DBG | domain newest-cni-374564 has defined MAC address 52:54:00:ed:c9:c8 in network mk-newest-cni-374564
	I1104 12:29:41.410606   94038 main.go:141] libmachine: (newest-cni-374564) DBG | domain newest-cni-374564 has current primary IP address 192.168.50.24 and MAC address 52:54:00:ed:c9:c8 in network mk-newest-cni-374564
	I1104 12:29:41.410629   94038 main.go:141] libmachine: (newest-cni-374564) Found IP for machine: 192.168.50.24
	I1104 12:29:41.410643   94038 main.go:141] libmachine: (newest-cni-374564) Reserving static IP address...
	I1104 12:29:41.411031   94038 main.go:141] libmachine: (newest-cni-374564) DBG | found host DHCP lease matching {name: "newest-cni-374564", mac: "52:54:00:ed:c9:c8", ip: "192.168.50.24"} in network mk-newest-cni-374564: {Iface:virbr2 ExpiryTime:2024-11-04 13:29:34 +0000 UTC Type:0 Mac:52:54:00:ed:c9:c8 Iaid: IPaddr:192.168.50.24 Prefix:24 Hostname:newest-cni-374564 Clientid:01:52:54:00:ed:c9:c8}
	I1104 12:29:41.411055   94038 main.go:141] libmachine: (newest-cni-374564) Reserved static IP address: 192.168.50.24
	I1104 12:29:41.411067   94038 main.go:141] libmachine: (newest-cni-374564) DBG | skip adding static IP to network mk-newest-cni-374564 - found existing host DHCP lease matching {name: "newest-cni-374564", mac: "52:54:00:ed:c9:c8", ip: "192.168.50.24"}
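The "will retry after …" waits above come from a growing, jittered backoff between polls of the libvirt DHCP lease table. A minimal sketch of that retry pattern in Go (illustrative only, not minikube's actual retry.go; the helper name and parameters are hypothetical):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff calls fn until it succeeds or attempts are exhausted,
// sleeping a jittered, growing interval between tries, similar in spirit
// to the 247ms, 324ms, 403ms, ... waits in the log above.
func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		wait := base*time.Duration(i+1) + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: %v\n", wait, err)
		time.Sleep(wait)
	}
	return fmt.Errorf("gave up after %d attempts: %w", attempts, err)
}

func main() {
	polls := 0
	err := retryWithBackoff(10, 250*time.Millisecond, func() error {
		polls++
		if polls < 4 {
			return errors.New("waiting for machine to come up")
		}
		return nil
	})
	fmt.Println("result:", err)
}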
	I1104 12:29:41.411086   94038 main.go:141] libmachine: (newest-cni-374564) Waiting for SSH to be available...
	I1104 12:29:41.411099   94038 main.go:141] libmachine: (newest-cni-374564) DBG | Getting to WaitForSSH function...
	I1104 12:29:41.413259   94038 main.go:141] libmachine: (newest-cni-374564) DBG | domain newest-cni-374564 has defined MAC address 52:54:00:ed:c9:c8 in network mk-newest-cni-374564
	I1104 12:29:41.413624   94038 main.go:141] libmachine: (newest-cni-374564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:c9:c8", ip: ""} in network mk-newest-cni-374564: {Iface:virbr2 ExpiryTime:2024-11-04 13:29:34 +0000 UTC Type:0 Mac:52:54:00:ed:c9:c8 Iaid: IPaddr:192.168.50.24 Prefix:24 Hostname:newest-cni-374564 Clientid:01:52:54:00:ed:c9:c8}
	I1104 12:29:41.413651   94038 main.go:141] libmachine: (newest-cni-374564) DBG | domain newest-cni-374564 has defined IP address 192.168.50.24 and MAC address 52:54:00:ed:c9:c8 in network mk-newest-cni-374564
	I1104 12:29:41.413735   94038 main.go:141] libmachine: (newest-cni-374564) DBG | Using SSH client type: external
	I1104 12:29:41.413758   94038 main.go:141] libmachine: (newest-cni-374564) DBG | Using SSH private key: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/newest-cni-374564/id_rsa (-rw-------)
	I1104 12:29:41.413785   94038 main.go:141] libmachine: (newest-cni-374564) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.24 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19906-19898/.minikube/machines/newest-cni-374564/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1104 12:29:41.413797   94038 main.go:141] libmachine: (newest-cni-374564) DBG | About to run SSH command:
	I1104 12:29:41.413810   94038 main.go:141] libmachine: (newest-cni-374564) DBG | exit 0
	I1104 12:29:41.533361   94038 main.go:141] libmachine: (newest-cni-374564) DBG | SSH cmd err, output: <nil>: 
	I1104 12:29:41.533703   94038 main.go:141] libmachine: (newest-cni-374564) Calling .GetConfigRaw
	I1104 12:29:41.534283   94038 main.go:141] libmachine: (newest-cni-374564) Calling .GetIP
	I1104 12:29:41.536751   94038 main.go:141] libmachine: (newest-cni-374564) DBG | domain newest-cni-374564 has defined MAC address 52:54:00:ed:c9:c8 in network mk-newest-cni-374564
	I1104 12:29:41.537043   94038 main.go:141] libmachine: (newest-cni-374564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:c9:c8", ip: ""} in network mk-newest-cni-374564: {Iface:virbr2 ExpiryTime:2024-11-04 13:29:34 +0000 UTC Type:0 Mac:52:54:00:ed:c9:c8 Iaid: IPaddr:192.168.50.24 Prefix:24 Hostname:newest-cni-374564 Clientid:01:52:54:00:ed:c9:c8}
	I1104 12:29:41.537071   94038 main.go:141] libmachine: (newest-cni-374564) DBG | domain newest-cni-374564 has defined IP address 192.168.50.24 and MAC address 52:54:00:ed:c9:c8 in network mk-newest-cni-374564
	I1104 12:29:41.537366   94038 profile.go:143] Saving config to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/newest-cni-374564/config.json ...
	I1104 12:29:41.537600   94038 machine.go:93] provisionDockerMachine start ...
	I1104 12:29:41.537623   94038 main.go:141] libmachine: (newest-cni-374564) Calling .DriverName
	I1104 12:29:41.537824   94038 main.go:141] libmachine: (newest-cni-374564) Calling .GetSSHHostname
	I1104 12:29:41.540043   94038 main.go:141] libmachine: (newest-cni-374564) DBG | domain newest-cni-374564 has defined MAC address 52:54:00:ed:c9:c8 in network mk-newest-cni-374564
	I1104 12:29:41.540411   94038 main.go:141] libmachine: (newest-cni-374564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:c9:c8", ip: ""} in network mk-newest-cni-374564: {Iface:virbr2 ExpiryTime:2024-11-04 13:29:34 +0000 UTC Type:0 Mac:52:54:00:ed:c9:c8 Iaid: IPaddr:192.168.50.24 Prefix:24 Hostname:newest-cni-374564 Clientid:01:52:54:00:ed:c9:c8}
	I1104 12:29:41.540441   94038 main.go:141] libmachine: (newest-cni-374564) DBG | domain newest-cni-374564 has defined IP address 192.168.50.24 and MAC address 52:54:00:ed:c9:c8 in network mk-newest-cni-374564
	I1104 12:29:41.540482   94038 main.go:141] libmachine: (newest-cni-374564) Calling .GetSSHPort
	I1104 12:29:41.540698   94038 main.go:141] libmachine: (newest-cni-374564) Calling .GetSSHKeyPath
	I1104 12:29:41.540831   94038 main.go:141] libmachine: (newest-cni-374564) Calling .GetSSHKeyPath
	I1104 12:29:41.541002   94038 main.go:141] libmachine: (newest-cni-374564) Calling .GetSSHUsername
	I1104 12:29:41.541257   94038 main.go:141] libmachine: Using SSH client type: native
	I1104 12:29:41.541477   94038 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.24 22 <nil> <nil>}
	I1104 12:29:41.541489   94038 main.go:141] libmachine: About to run SSH command:
	hostname
	I1104 12:29:41.641311   94038 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1104 12:29:41.641338   94038 main.go:141] libmachine: (newest-cni-374564) Calling .GetMachineName
	I1104 12:29:41.641615   94038 buildroot.go:166] provisioning hostname "newest-cni-374564"
	I1104 12:29:41.641647   94038 main.go:141] libmachine: (newest-cni-374564) Calling .GetMachineName
	I1104 12:29:41.641857   94038 main.go:141] libmachine: (newest-cni-374564) Calling .GetSSHHostname
	I1104 12:29:41.644512   94038 main.go:141] libmachine: (newest-cni-374564) DBG | domain newest-cni-374564 has defined MAC address 52:54:00:ed:c9:c8 in network mk-newest-cni-374564
	I1104 12:29:41.644874   94038 main.go:141] libmachine: (newest-cni-374564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:c9:c8", ip: ""} in network mk-newest-cni-374564: {Iface:virbr2 ExpiryTime:2024-11-04 13:29:34 +0000 UTC Type:0 Mac:52:54:00:ed:c9:c8 Iaid: IPaddr:192.168.50.24 Prefix:24 Hostname:newest-cni-374564 Clientid:01:52:54:00:ed:c9:c8}
	I1104 12:29:41.644897   94038 main.go:141] libmachine: (newest-cni-374564) DBG | domain newest-cni-374564 has defined IP address 192.168.50.24 and MAC address 52:54:00:ed:c9:c8 in network mk-newest-cni-374564
	I1104 12:29:41.645035   94038 main.go:141] libmachine: (newest-cni-374564) Calling .GetSSHPort
	I1104 12:29:41.645272   94038 main.go:141] libmachine: (newest-cni-374564) Calling .GetSSHKeyPath
	I1104 12:29:41.645499   94038 main.go:141] libmachine: (newest-cni-374564) Calling .GetSSHKeyPath
	I1104 12:29:41.645705   94038 main.go:141] libmachine: (newest-cni-374564) Calling .GetSSHUsername
	I1104 12:29:41.645965   94038 main.go:141] libmachine: Using SSH client type: native
	I1104 12:29:41.646126   94038 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.24 22 <nil> <nil>}
	I1104 12:29:41.646138   94038 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-374564 && echo "newest-cni-374564" | sudo tee /etc/hostname
	I1104 12:29:41.754022   94038 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-374564
	
	I1104 12:29:41.754052   94038 main.go:141] libmachine: (newest-cni-374564) Calling .GetSSHHostname
	I1104 12:29:41.756741   94038 main.go:141] libmachine: (newest-cni-374564) DBG | domain newest-cni-374564 has defined MAC address 52:54:00:ed:c9:c8 in network mk-newest-cni-374564
	I1104 12:29:41.757096   94038 main.go:141] libmachine: (newest-cni-374564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:c9:c8", ip: ""} in network mk-newest-cni-374564: {Iface:virbr2 ExpiryTime:2024-11-04 13:29:34 +0000 UTC Type:0 Mac:52:54:00:ed:c9:c8 Iaid: IPaddr:192.168.50.24 Prefix:24 Hostname:newest-cni-374564 Clientid:01:52:54:00:ed:c9:c8}
	I1104 12:29:41.757120   94038 main.go:141] libmachine: (newest-cni-374564) DBG | domain newest-cni-374564 has defined IP address 192.168.50.24 and MAC address 52:54:00:ed:c9:c8 in network mk-newest-cni-374564
	I1104 12:29:41.757307   94038 main.go:141] libmachine: (newest-cni-374564) Calling .GetSSHPort
	I1104 12:29:41.757495   94038 main.go:141] libmachine: (newest-cni-374564) Calling .GetSSHKeyPath
	I1104 12:29:41.757656   94038 main.go:141] libmachine: (newest-cni-374564) Calling .GetSSHKeyPath
	I1104 12:29:41.757766   94038 main.go:141] libmachine: (newest-cni-374564) Calling .GetSSHUsername
	I1104 12:29:41.757904   94038 main.go:141] libmachine: Using SSH client type: native
	I1104 12:29:41.758068   94038 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.24 22 <nil> <nil>}
	I1104 12:29:41.758084   94038 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-374564' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-374564/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-374564' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1104 12:29:41.865218   94038 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1104 12:29:41.865270   94038 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19906-19898/.minikube CaCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19906-19898/.minikube}
	I1104 12:29:41.865326   94038 buildroot.go:174] setting up certificates
	I1104 12:29:41.865338   94038 provision.go:84] configureAuth start
	I1104 12:29:41.865354   94038 main.go:141] libmachine: (newest-cni-374564) Calling .GetMachineName
	I1104 12:29:41.865664   94038 main.go:141] libmachine: (newest-cni-374564) Calling .GetIP
	I1104 12:29:41.868201   94038 main.go:141] libmachine: (newest-cni-374564) DBG | domain newest-cni-374564 has defined MAC address 52:54:00:ed:c9:c8 in network mk-newest-cni-374564
	I1104 12:29:41.868508   94038 main.go:141] libmachine: (newest-cni-374564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:c9:c8", ip: ""} in network mk-newest-cni-374564: {Iface:virbr2 ExpiryTime:2024-11-04 13:29:34 +0000 UTC Type:0 Mac:52:54:00:ed:c9:c8 Iaid: IPaddr:192.168.50.24 Prefix:24 Hostname:newest-cni-374564 Clientid:01:52:54:00:ed:c9:c8}
	I1104 12:29:41.868535   94038 main.go:141] libmachine: (newest-cni-374564) DBG | domain newest-cni-374564 has defined IP address 192.168.50.24 and MAC address 52:54:00:ed:c9:c8 in network mk-newest-cni-374564
	I1104 12:29:41.868729   94038 main.go:141] libmachine: (newest-cni-374564) Calling .GetSSHHostname
	I1104 12:29:41.870919   94038 main.go:141] libmachine: (newest-cni-374564) DBG | domain newest-cni-374564 has defined MAC address 52:54:00:ed:c9:c8 in network mk-newest-cni-374564
	I1104 12:29:41.871269   94038 main.go:141] libmachine: (newest-cni-374564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:c9:c8", ip: ""} in network mk-newest-cni-374564: {Iface:virbr2 ExpiryTime:2024-11-04 13:29:34 +0000 UTC Type:0 Mac:52:54:00:ed:c9:c8 Iaid: IPaddr:192.168.50.24 Prefix:24 Hostname:newest-cni-374564 Clientid:01:52:54:00:ed:c9:c8}
	I1104 12:29:41.871297   94038 main.go:141] libmachine: (newest-cni-374564) DBG | domain newest-cni-374564 has defined IP address 192.168.50.24 and MAC address 52:54:00:ed:c9:c8 in network mk-newest-cni-374564
	I1104 12:29:41.871463   94038 provision.go:143] copyHostCerts
	I1104 12:29:41.871521   94038 exec_runner.go:144] found /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem, removing ...
	I1104 12:29:41.871531   94038 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem
	I1104 12:29:41.871592   94038 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem (1078 bytes)
	I1104 12:29:41.871691   94038 exec_runner.go:144] found /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem, removing ...
	I1104 12:29:41.871700   94038 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem
	I1104 12:29:41.871725   94038 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem (1123 bytes)
	I1104 12:29:41.871778   94038 exec_runner.go:144] found /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem, removing ...
	I1104 12:29:41.871786   94038 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem
	I1104 12:29:41.871807   94038 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem (1679 bytes)
	I1104 12:29:41.871851   94038 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem org=jenkins.newest-cni-374564 san=[127.0.0.1 192.168.50.24 localhost minikube newest-cni-374564]
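The provisioning step above issues a server certificate whose subject-alternative names are the SAN list printed in the log (loopback, the node IP, localhost, minikube, and the node name), signed by the shared minikube CA. A minimal, self-contained sketch of issuing such a certificate with Go's crypto/x509 (a throwaway CA stands in for the files under .minikube/certs; this is not minikube's actual provisioning code, and error handling is elided for brevity):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Throwaway CA key pair and self-signed CA certificate (stand-in for ca.pem/ca-key.pem).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with the same kind of SAN list as the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-374564"}},
		DNSNames:     []string{"localhost", "minikube", "newest-cni-374564"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.24")},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	fmt.Println(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: srvDER})))
}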
	I1104 12:29:41.953946   94038 provision.go:177] copyRemoteCerts
	I1104 12:29:41.954003   94038 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1104 12:29:41.954027   94038 main.go:141] libmachine: (newest-cni-374564) Calling .GetSSHHostname
	I1104 12:29:41.956781   94038 main.go:141] libmachine: (newest-cni-374564) DBG | domain newest-cni-374564 has defined MAC address 52:54:00:ed:c9:c8 in network mk-newest-cni-374564
	I1104 12:29:41.957193   94038 main.go:141] libmachine: (newest-cni-374564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:c9:c8", ip: ""} in network mk-newest-cni-374564: {Iface:virbr2 ExpiryTime:2024-11-04 13:29:34 +0000 UTC Type:0 Mac:52:54:00:ed:c9:c8 Iaid: IPaddr:192.168.50.24 Prefix:24 Hostname:newest-cni-374564 Clientid:01:52:54:00:ed:c9:c8}
	I1104 12:29:41.957216   94038 main.go:141] libmachine: (newest-cni-374564) DBG | domain newest-cni-374564 has defined IP address 192.168.50.24 and MAC address 52:54:00:ed:c9:c8 in network mk-newest-cni-374564
	I1104 12:29:41.957468   94038 main.go:141] libmachine: (newest-cni-374564) Calling .GetSSHPort
	I1104 12:29:41.957645   94038 main.go:141] libmachine: (newest-cni-374564) Calling .GetSSHKeyPath
	I1104 12:29:41.957823   94038 main.go:141] libmachine: (newest-cni-374564) Calling .GetSSHUsername
	I1104 12:29:41.957937   94038 sshutil.go:53] new ssh client: &{IP:192.168.50.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/newest-cni-374564/id_rsa Username:docker}
	I1104 12:29:42.035129   94038 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1104 12:29:42.059221   94038 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1104 12:29:42.084704   94038 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1104 12:29:42.107746   94038 provision.go:87] duration metric: took 242.393566ms to configureAuth
	I1104 12:29:42.107774   94038 buildroot.go:189] setting minikube options for container-runtime
	I1104 12:29:42.107953   94038 config.go:182] Loaded profile config "newest-cni-374564": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1104 12:29:42.108033   94038 main.go:141] libmachine: (newest-cni-374564) Calling .GetSSHHostname
	I1104 12:29:42.110452   94038 main.go:141] libmachine: (newest-cni-374564) DBG | domain newest-cni-374564 has defined MAC address 52:54:00:ed:c9:c8 in network mk-newest-cni-374564
	I1104 12:29:42.110788   94038 main.go:141] libmachine: (newest-cni-374564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:c9:c8", ip: ""} in network mk-newest-cni-374564: {Iface:virbr2 ExpiryTime:2024-11-04 13:29:34 +0000 UTC Type:0 Mac:52:54:00:ed:c9:c8 Iaid: IPaddr:192.168.50.24 Prefix:24 Hostname:newest-cni-374564 Clientid:01:52:54:00:ed:c9:c8}
	I1104 12:29:42.110810   94038 main.go:141] libmachine: (newest-cni-374564) DBG | domain newest-cni-374564 has defined IP address 192.168.50.24 and MAC address 52:54:00:ed:c9:c8 in network mk-newest-cni-374564
	I1104 12:29:42.110996   94038 main.go:141] libmachine: (newest-cni-374564) Calling .GetSSHPort
	I1104 12:29:42.111193   94038 main.go:141] libmachine: (newest-cni-374564) Calling .GetSSHKeyPath
	I1104 12:29:42.111363   94038 main.go:141] libmachine: (newest-cni-374564) Calling .GetSSHKeyPath
	I1104 12:29:42.111519   94038 main.go:141] libmachine: (newest-cni-374564) Calling .GetSSHUsername
	I1104 12:29:42.111677   94038 main.go:141] libmachine: Using SSH client type: native
	I1104 12:29:42.111824   94038 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.24 22 <nil> <nil>}
	I1104 12:29:42.111840   94038 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1104 12:29:42.315276   94038 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1104 12:29:42.315302   94038 machine.go:96] duration metric: took 777.68732ms to provisionDockerMachine
	I1104 12:29:42.315313   94038 start.go:293] postStartSetup for "newest-cni-374564" (driver="kvm2")
	I1104 12:29:42.315323   94038 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1104 12:29:42.315338   94038 main.go:141] libmachine: (newest-cni-374564) Calling .DriverName
	I1104 12:29:42.315656   94038 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1104 12:29:42.315685   94038 main.go:141] libmachine: (newest-cni-374564) Calling .GetSSHHostname
	I1104 12:29:42.318256   94038 main.go:141] libmachine: (newest-cni-374564) DBG | domain newest-cni-374564 has defined MAC address 52:54:00:ed:c9:c8 in network mk-newest-cni-374564
	I1104 12:29:42.318612   94038 main.go:141] libmachine: (newest-cni-374564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:c9:c8", ip: ""} in network mk-newest-cni-374564: {Iface:virbr2 ExpiryTime:2024-11-04 13:29:34 +0000 UTC Type:0 Mac:52:54:00:ed:c9:c8 Iaid: IPaddr:192.168.50.24 Prefix:24 Hostname:newest-cni-374564 Clientid:01:52:54:00:ed:c9:c8}
	I1104 12:29:42.318654   94038 main.go:141] libmachine: (newest-cni-374564) DBG | domain newest-cni-374564 has defined IP address 192.168.50.24 and MAC address 52:54:00:ed:c9:c8 in network mk-newest-cni-374564
	I1104 12:29:42.318734   94038 main.go:141] libmachine: (newest-cni-374564) Calling .GetSSHPort
	I1104 12:29:42.318911   94038 main.go:141] libmachine: (newest-cni-374564) Calling .GetSSHKeyPath
	I1104 12:29:42.319053   94038 main.go:141] libmachine: (newest-cni-374564) Calling .GetSSHUsername
	I1104 12:29:42.319362   94038 sshutil.go:53] new ssh client: &{IP:192.168.50.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/newest-cni-374564/id_rsa Username:docker}
	I1104 12:29:42.395003   94038 ssh_runner.go:195] Run: cat /etc/os-release
	I1104 12:29:42.398642   94038 info.go:137] Remote host: Buildroot 2023.02.9
	I1104 12:29:42.398664   94038 filesync.go:126] Scanning /home/jenkins/minikube-integration/19906-19898/.minikube/addons for local assets ...
	I1104 12:29:42.398736   94038 filesync.go:126] Scanning /home/jenkins/minikube-integration/19906-19898/.minikube/files for local assets ...
	I1104 12:29:42.398821   94038 filesync.go:149] local asset: /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem -> 272182.pem in /etc/ssl/certs
	I1104 12:29:42.398926   94038 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1104 12:29:42.407928   94038 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem --> /etc/ssl/certs/272182.pem (1708 bytes)
	I1104 12:29:42.431055   94038 start.go:296] duration metric: took 115.729365ms for postStartSetup
	I1104 12:29:42.431099   94038 fix.go:56] duration metric: took 18.626453595s for fixHost
	I1104 12:29:42.431121   94038 main.go:141] libmachine: (newest-cni-374564) Calling .GetSSHHostname
	I1104 12:29:42.433772   94038 main.go:141] libmachine: (newest-cni-374564) DBG | domain newest-cni-374564 has defined MAC address 52:54:00:ed:c9:c8 in network mk-newest-cni-374564
	I1104 12:29:42.434041   94038 main.go:141] libmachine: (newest-cni-374564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:c9:c8", ip: ""} in network mk-newest-cni-374564: {Iface:virbr2 ExpiryTime:2024-11-04 13:29:34 +0000 UTC Type:0 Mac:52:54:00:ed:c9:c8 Iaid: IPaddr:192.168.50.24 Prefix:24 Hostname:newest-cni-374564 Clientid:01:52:54:00:ed:c9:c8}
	I1104 12:29:42.434071   94038 main.go:141] libmachine: (newest-cni-374564) DBG | domain newest-cni-374564 has defined IP address 192.168.50.24 and MAC address 52:54:00:ed:c9:c8 in network mk-newest-cni-374564
	I1104 12:29:42.434195   94038 main.go:141] libmachine: (newest-cni-374564) Calling .GetSSHPort
	I1104 12:29:42.434375   94038 main.go:141] libmachine: (newest-cni-374564) Calling .GetSSHKeyPath
	I1104 12:29:42.434517   94038 main.go:141] libmachine: (newest-cni-374564) Calling .GetSSHKeyPath
	I1104 12:29:42.434663   94038 main.go:141] libmachine: (newest-cni-374564) Calling .GetSSHUsername
	I1104 12:29:42.434806   94038 main.go:141] libmachine: Using SSH client type: native
	I1104 12:29:42.434988   94038 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.24 22 <nil> <nil>}
	I1104 12:29:42.435001   94038 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1104 12:29:42.529677   94038 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730723382.491468157
	
	I1104 12:29:42.529702   94038 fix.go:216] guest clock: 1730723382.491468157
	I1104 12:29:42.529712   94038 fix.go:229] Guest: 2024-11-04 12:29:42.491468157 +0000 UTC Remote: 2024-11-04 12:29:42.431103501 +0000 UTC m=+18.772839907 (delta=60.364656ms)
	I1104 12:29:42.529736   94038 fix.go:200] guest clock delta is within tolerance: 60.364656ms
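The clock check above compares the guest's `date +%s.%N` output against the host clock and only resynchronizes when the delta exceeds a tolerance. A minimal sketch of that comparison (the 2s tolerance here is an assumption for illustration, not necessarily the value minikube uses):

package main

import (
	"fmt"
	"time"
)

// clockDelta returns the absolute difference between guest and host clocks
// and whether it falls within the allowed tolerance.
func clockDelta(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	host := time.Now()
	guest := host.Add(60 * time.Millisecond)            // roughly the 60.364656ms delta seen in the log
	delta, ok := clockDelta(guest, host, 2*time.Second) // assumed tolerance
	fmt.Printf("guest clock delta %v within tolerance: %v\n", delta, ok)
}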
	I1104 12:29:42.529757   94038 start.go:83] releasing machines lock for "newest-cni-374564", held for 18.725128762s
	I1104 12:29:42.529787   94038 main.go:141] libmachine: (newest-cni-374564) Calling .DriverName
	I1104 12:29:42.530041   94038 main.go:141] libmachine: (newest-cni-374564) Calling .GetIP
	I1104 12:29:42.532584   94038 main.go:141] libmachine: (newest-cni-374564) DBG | domain newest-cni-374564 has defined MAC address 52:54:00:ed:c9:c8 in network mk-newest-cni-374564
	I1104 12:29:42.532915   94038 main.go:141] libmachine: (newest-cni-374564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:c9:c8", ip: ""} in network mk-newest-cni-374564: {Iface:virbr2 ExpiryTime:2024-11-04 13:29:34 +0000 UTC Type:0 Mac:52:54:00:ed:c9:c8 Iaid: IPaddr:192.168.50.24 Prefix:24 Hostname:newest-cni-374564 Clientid:01:52:54:00:ed:c9:c8}
	I1104 12:29:42.532944   94038 main.go:141] libmachine: (newest-cni-374564) DBG | domain newest-cni-374564 has defined IP address 192.168.50.24 and MAC address 52:54:00:ed:c9:c8 in network mk-newest-cni-374564
	I1104 12:29:42.533116   94038 main.go:141] libmachine: (newest-cni-374564) Calling .DriverName
	I1104 12:29:42.533696   94038 main.go:141] libmachine: (newest-cni-374564) Calling .DriverName
	I1104 12:29:42.533890   94038 main.go:141] libmachine: (newest-cni-374564) Calling .DriverName
	I1104 12:29:42.533978   94038 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1104 12:29:42.534031   94038 main.go:141] libmachine: (newest-cni-374564) Calling .GetSSHHostname
	I1104 12:29:42.534088   94038 ssh_runner.go:195] Run: cat /version.json
	I1104 12:29:42.534115   94038 main.go:141] libmachine: (newest-cni-374564) Calling .GetSSHHostname
	I1104 12:29:42.537016   94038 main.go:141] libmachine: (newest-cni-374564) DBG | domain newest-cni-374564 has defined MAC address 52:54:00:ed:c9:c8 in network mk-newest-cni-374564
	I1104 12:29:42.537184   94038 main.go:141] libmachine: (newest-cni-374564) DBG | domain newest-cni-374564 has defined MAC address 52:54:00:ed:c9:c8 in network mk-newest-cni-374564
	I1104 12:29:42.537379   94038 main.go:141] libmachine: (newest-cni-374564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:c9:c8", ip: ""} in network mk-newest-cni-374564: {Iface:virbr2 ExpiryTime:2024-11-04 13:29:34 +0000 UTC Type:0 Mac:52:54:00:ed:c9:c8 Iaid: IPaddr:192.168.50.24 Prefix:24 Hostname:newest-cni-374564 Clientid:01:52:54:00:ed:c9:c8}
	I1104 12:29:42.537410   94038 main.go:141] libmachine: (newest-cni-374564) DBG | domain newest-cni-374564 has defined IP address 192.168.50.24 and MAC address 52:54:00:ed:c9:c8 in network mk-newest-cni-374564
	I1104 12:29:42.537598   94038 main.go:141] libmachine: (newest-cni-374564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:c9:c8", ip: ""} in network mk-newest-cni-374564: {Iface:virbr2 ExpiryTime:2024-11-04 13:29:34 +0000 UTC Type:0 Mac:52:54:00:ed:c9:c8 Iaid: IPaddr:192.168.50.24 Prefix:24 Hostname:newest-cni-374564 Clientid:01:52:54:00:ed:c9:c8}
	I1104 12:29:42.537629   94038 main.go:141] libmachine: (newest-cni-374564) DBG | domain newest-cni-374564 has defined IP address 192.168.50.24 and MAC address 52:54:00:ed:c9:c8 in network mk-newest-cni-374564
	I1104 12:29:42.537633   94038 main.go:141] libmachine: (newest-cni-374564) Calling .GetSSHPort
	I1104 12:29:42.537780   94038 main.go:141] libmachine: (newest-cni-374564) Calling .GetSSHPort
	I1104 12:29:42.537838   94038 main.go:141] libmachine: (newest-cni-374564) Calling .GetSSHKeyPath
	I1104 12:29:42.537908   94038 main.go:141] libmachine: (newest-cni-374564) Calling .GetSSHKeyPath
	I1104 12:29:42.537992   94038 main.go:141] libmachine: (newest-cni-374564) Calling .GetSSHUsername
	I1104 12:29:42.538059   94038 main.go:141] libmachine: (newest-cni-374564) Calling .GetSSHUsername
	I1104 12:29:42.538122   94038 sshutil.go:53] new ssh client: &{IP:192.168.50.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/newest-cni-374564/id_rsa Username:docker}
	I1104 12:29:42.538154   94038 sshutil.go:53] new ssh client: &{IP:192.168.50.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/newest-cni-374564/id_rsa Username:docker}
	I1104 12:29:42.617712   94038 ssh_runner.go:195] Run: systemctl --version
	I1104 12:29:42.637944   94038 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1104 12:29:42.776598   94038 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1104 12:29:42.782747   94038 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1104 12:29:42.782812   94038 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1104 12:29:42.798737   94038 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1104 12:29:42.798762   94038 start.go:495] detecting cgroup driver to use...
	I1104 12:29:42.798817   94038 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1104 12:29:42.814683   94038 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1104 12:29:42.827722   94038 docker.go:217] disabling cri-docker service (if available) ...
	I1104 12:29:42.827775   94038 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1104 12:29:42.841700   94038 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1104 12:29:42.855608   94038 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1104 12:29:42.968967   94038 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1104 12:29:43.107609   94038 docker.go:233] disabling docker service ...
	I1104 12:29:43.107687   94038 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1104 12:29:43.122059   94038 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1104 12:29:43.134861   94038 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1104 12:29:43.265462   94038 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1104 12:29:43.395304   94038 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1104 12:29:43.408611   94038 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1104 12:29:43.425634   94038 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1104 12:29:43.425696   94038 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:29:43.435188   94038 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1104 12:29:43.435259   94038 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:29:43.444885   94038 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:29:43.454269   94038 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:29:43.464038   94038 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1104 12:29:43.473809   94038 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:29:43.483546   94038 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:29:43.500212   94038 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:29:43.510009   94038 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1104 12:29:43.518786   94038 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1104 12:29:43.518863   94038 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1104 12:29:43.531575   94038 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1104 12:29:43.541061   94038 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1104 12:29:43.658364   94038 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1104 12:29:43.748324   94038 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1104 12:29:43.748398   94038 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1104 12:29:43.752980   94038 start.go:563] Will wait 60s for crictl version
	I1104 12:29:43.753026   94038 ssh_runner.go:195] Run: which crictl
	I1104 12:29:43.756731   94038 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1104 12:29:43.791536   94038 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
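Both 60-second waits above (for the /var/run/crio/crio.sock socket path and for `crictl version` to answer) follow the same poll-until-deadline shape. A minimal sketch of that loop in Go (illustrative only; the function name is hypothetical):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitFor polls check at the given interval until it returns nil or the
// timeout elapses, returning the last error on failure.
func waitFor(check func() error, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	var err error
	for time.Now().Before(deadline) {
		if err = check(); err == nil {
			return nil
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("timed out after %v: %w", timeout, err)
}

func main() {
	// Example: wait up to 60s for the CRI-O socket path to exist.
	err := waitFor(func() error {
		_, statErr := os.Stat("/var/run/crio/crio.sock")
		return statErr
	}, time.Second, 60*time.Second)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("crio socket is ready")
}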
	I1104 12:29:43.791643   94038 ssh_runner.go:195] Run: crio --version
	I1104 12:29:43.817629   94038 ssh_runner.go:195] Run: crio --version
	I1104 12:29:43.845854   94038 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1104 12:29:43.847241   94038 main.go:141] libmachine: (newest-cni-374564) Calling .GetIP
	I1104 12:29:43.849730   94038 main.go:141] libmachine: (newest-cni-374564) DBG | domain newest-cni-374564 has defined MAC address 52:54:00:ed:c9:c8 in network mk-newest-cni-374564
	I1104 12:29:43.850049   94038 main.go:141] libmachine: (newest-cni-374564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:c9:c8", ip: ""} in network mk-newest-cni-374564: {Iface:virbr2 ExpiryTime:2024-11-04 13:29:34 +0000 UTC Type:0 Mac:52:54:00:ed:c9:c8 Iaid: IPaddr:192.168.50.24 Prefix:24 Hostname:newest-cni-374564 Clientid:01:52:54:00:ed:c9:c8}
	I1104 12:29:43.850077   94038 main.go:141] libmachine: (newest-cni-374564) DBG | domain newest-cni-374564 has defined IP address 192.168.50.24 and MAC address 52:54:00:ed:c9:c8 in network mk-newest-cni-374564
	I1104 12:29:43.850457   94038 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1104 12:29:43.854687   94038 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1104 12:29:43.868366   94038 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1104 12:29:43.869623   94038 kubeadm.go:883] updating cluster {Name:newest-cni-374564 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:newest-cni-374564 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.24 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1104 12:29:43.869730   94038 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1104 12:29:43.869789   94038 ssh_runner.go:195] Run: sudo crictl images --output json
	I1104 12:29:43.903050   94038 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1104 12:29:43.903105   94038 ssh_runner.go:195] Run: which lz4
	I1104 12:29:43.906862   94038 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1104 12:29:43.910482   94038 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1104 12:29:43.910506   94038 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1104 12:29:45.119899   94038 crio.go:462] duration metric: took 1.213052585s to copy over tarball
	I1104 12:29:45.119975   94038 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1104 12:29:47.216612   94038 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.096602473s)
	I1104 12:29:47.216653   94038 crio.go:469] duration metric: took 2.096721558s to extract the tarball
	I1104 12:29:47.216661   94038 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1104 12:29:47.253202   94038 ssh_runner.go:195] Run: sudo crictl images --output json
	I1104 12:29:47.292146   94038 crio.go:514] all images are preloaded for cri-o runtime.
	I1104 12:29:47.292171   94038 cache_images.go:84] Images are preloaded, skipping loading
	I1104 12:29:47.292179   94038 kubeadm.go:934] updating node { 192.168.50.24 8443 v1.31.2 crio true true} ...
	I1104 12:29:47.292295   94038 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --feature-gates=ServerSideApply=true --hostname-override=newest-cni-374564 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.24
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:newest-cni-374564 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1104 12:29:47.292377   94038 ssh_runner.go:195] Run: crio config
	I1104 12:29:47.348642   94038 cni.go:84] Creating CNI manager for ""
	I1104 12:29:47.348666   94038 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1104 12:29:47.348679   94038 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I1104 12:29:47.348709   94038 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.50.24 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-374564 NodeName:newest-cni-374564 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.24"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.24 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1104 12:29:47.348896   94038 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.24
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-374564"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.24"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.24"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    - name: "feature-gates"
	      value: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "feature-gates"
	      value: "ServerSideApply=true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "feature-gates"
	      value: "ServerSideApply=true"
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1104 12:29:47.348975   94038 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1104 12:29:47.359151   94038 binaries.go:44] Found k8s binaries, skipping transfer
	I1104 12:29:47.359215   94038 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1104 12:29:47.368514   94038 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (353 bytes)
	I1104 12:29:47.384605   94038 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1104 12:29:47.399758   94038 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2484 bytes)
	I1104 12:29:47.415862   94038 ssh_runner.go:195] Run: grep 192.168.50.24	control-plane.minikube.internal$ /etc/hosts
	I1104 12:29:47.419453   94038 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.24	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1104 12:29:47.430426   94038 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1104 12:29:47.562182   94038 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1104 12:29:47.579139   94038 certs.go:68] Setting up /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/newest-cni-374564 for IP: 192.168.50.24
	I1104 12:29:47.579167   94038 certs.go:194] generating shared ca certs ...
	I1104 12:29:47.579187   94038 certs.go:226] acquiring lock for ca certs: {Name:mk4fb0469da6697654d0290fd25edddd463f47c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 12:29:47.579373   94038 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.key
	I1104 12:29:47.579435   94038 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.key
	I1104 12:29:47.579451   94038 certs.go:256] generating profile certs ...
	I1104 12:29:47.579529   94038 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/newest-cni-374564/client.key
	I1104 12:29:47.579596   94038 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/newest-cni-374564/apiserver.key.95eb1bf9
	I1104 12:29:47.579653   94038 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/newest-cni-374564/proxy-client.key
	I1104 12:29:47.579815   94038 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218.pem (1338 bytes)
	W1104 12:29:47.579862   94038 certs.go:480] ignoring /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218_empty.pem, impossibly tiny 0 bytes
	I1104 12:29:47.579876   94038 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem (1675 bytes)
	I1104 12:29:47.579910   94038 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem (1078 bytes)
	I1104 12:29:47.579968   94038 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem (1123 bytes)
	I1104 12:29:47.580004   94038 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem (1679 bytes)
	I1104 12:29:47.580060   94038 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem (1708 bytes)
	I1104 12:29:47.580916   94038 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1104 12:29:47.619024   94038 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1104 12:29:47.651022   94038 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1104 12:29:47.674993   94038 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1104 12:29:47.703428   94038 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/newest-cni-374564/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1104 12:29:47.739305   94038 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/newest-cni-374564/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1104 12:29:47.762725   94038 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/newest-cni-374564/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1104 12:29:47.785317   94038 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/newest-cni-374564/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1104 12:29:47.807928   94038 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1104 12:29:47.829913   94038 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218.pem --> /usr/share/ca-certificates/27218.pem (1338 bytes)
	I1104 12:29:47.851692   94038 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem --> /usr/share/ca-certificates/272182.pem (1708 bytes)
	I1104 12:29:47.874121   94038 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1104 12:29:47.890122   94038 ssh_runner.go:195] Run: openssl version
	I1104 12:29:47.895491   94038 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1104 12:29:47.905417   94038 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1104 12:29:47.909539   94038 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  4 10:38 /usr/share/ca-certificates/minikubeCA.pem
	I1104 12:29:47.909593   94038 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1104 12:29:47.914979   94038 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1104 12:29:47.924587   94038 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/27218.pem && ln -fs /usr/share/ca-certificates/27218.pem /etc/ssl/certs/27218.pem"
	I1104 12:29:47.934248   94038 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/27218.pem
	I1104 12:29:47.938236   94038 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  4 10:49 /usr/share/ca-certificates/27218.pem
	I1104 12:29:47.938278   94038 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/27218.pem
	I1104 12:29:47.943428   94038 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/27218.pem /etc/ssl/certs/51391683.0"
	I1104 12:29:47.959121   94038 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/272182.pem && ln -fs /usr/share/ca-certificates/272182.pem /etc/ssl/certs/272182.pem"
	I1104 12:29:47.970328   94038 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/272182.pem
	I1104 12:29:47.974491   94038 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  4 10:49 /usr/share/ca-certificates/272182.pem
	I1104 12:29:47.974550   94038 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/272182.pem
	I1104 12:29:47.980052   94038 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/272182.pem /etc/ssl/certs/3ec20f2e.0"
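Editor's note: the three-step pattern above (openssl x509 -hash, then ln -fs into /etc/ssl/certs/<hash>.0) is how the guest is taught to trust the minikube CA bundles. A minimal standalone Go sketch of the same idea follows; it is illustrative only, not minikube's implementation, and the paths are the ones visible in this log.

// Illustrative sketch: compute the OpenSSL subject hash for a CA PEM and
// link it under /etc/ssl/certs/<hash>.0 so system TLS clients trust it.
// Force-replaces any existing link (like `ln -fs`); run as root.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func trustCA(pemPath string) error {
	// `openssl x509 -hash -noout -in <pem>` prints the subject hash, e.g. "b5213941".
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // ignore "not found"
	return os.Symlink(pemPath, link)
}

func main() {
	if err := trustCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}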
	I1104 12:29:47.989930   94038 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1104 12:29:47.994239   94038 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1104 12:29:47.999859   94038 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1104 12:29:48.005698   94038 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1104 12:29:48.011514   94038 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1104 12:29:48.016948   94038 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1104 12:29:48.022227   94038 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
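Editor's note: the `openssl x509 -checkend 86400` calls above ask whether each control-plane certificate expires within the next 24 hours. The same check can be done with Go's standard library; this is a hedged sketch using one of the cert paths from the log, not part of minikube.

// Illustrative sketch: report whether a PEM certificate expires within d.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}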
	I1104 12:29:48.027311   94038 kubeadm.go:392] StartCluster: {Name:newest-cni-374564 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:newest-cni-374564 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.24 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1104 12:29:48.027403   94038 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1104 12:29:48.027442   94038 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1104 12:29:48.061018   94038 cri.go:89] found id: ""
	I1104 12:29:48.061077   94038 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1104 12:29:48.070658   94038 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1104 12:29:48.070678   94038 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1104 12:29:48.070722   94038 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1104 12:29:48.079575   94038 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1104 12:29:48.080111   94038 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-374564" does not appear in /home/jenkins/minikube-integration/19906-19898/kubeconfig
	I1104 12:29:48.080376   94038 kubeconfig.go:62] /home/jenkins/minikube-integration/19906-19898/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-374564" cluster setting kubeconfig missing "newest-cni-374564" context setting]
	I1104 12:29:48.080775   94038 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19906-19898/kubeconfig: {Name:mk164657c178e6abe086acb5c1cadb969cb2b0cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 12:29:48.081914   94038 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1104 12:29:48.090579   94038 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.24
	I1104 12:29:48.090602   94038 kubeadm.go:1160] stopping kube-system containers ...
	I1104 12:29:48.090614   94038 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1104 12:29:48.090657   94038 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1104 12:29:48.122915   94038 cri.go:89] found id: ""
	I1104 12:29:48.122983   94038 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1104 12:29:48.137942   94038 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1104 12:29:48.147351   94038 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1104 12:29:48.147376   94038 kubeadm.go:157] found existing configuration files:
	
	I1104 12:29:48.147418   94038 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1104 12:29:48.156445   94038 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1104 12:29:48.156502   94038 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1104 12:29:48.165894   94038 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1104 12:29:48.174581   94038 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1104 12:29:48.174649   94038 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1104 12:29:48.183789   94038 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1104 12:29:48.192357   94038 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1104 12:29:48.192419   94038 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1104 12:29:48.201567   94038 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1104 12:29:48.209868   94038 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1104 12:29:48.209915   94038 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
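Editor's note: the block above is the stale-config sweep: for each of admin.conf, kubelet.conf, controller-manager.conf and scheduler.conf, grep for the expected control-plane endpoint and `rm -f` the file if the endpoint is missing (here every grep fails because the files do not exist yet). A small Go sketch of that logic, assuming it runs as root on the node:

// Illustrative sketch of the cleanup loop: any kubeconfig under
// /etc/kubernetes that does not reference the expected control-plane
// endpoint is removed so `kubeadm init phase kubeconfig` regenerates it.
package main

import (
	"bytes"
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	files := []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"}
	for _, f := range files {
		path := filepath.Join("/etc/kubernetes", f)
		data, err := os.ReadFile(path)
		if err == nil && bytes.Contains(data, []byte(endpoint)) {
			continue // already points at the right endpoint, keep it
		}
		// Missing file or wrong endpoint: delete it, ignoring errors (like rm -f).
		_ = os.Remove(path)
		fmt.Println("removed stale", path)
	}
}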
	I1104 12:29:48.218676   94038 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1104 12:29:48.227597   94038 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1104 12:29:48.321532   94038 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1104 12:29:49.310971   94038 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1104 12:29:49.509772   94038 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1104 12:29:49.577191   94038 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
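Editor's note: instead of a full `kubeadm init`, the restart path regenerates each piece with individual `kubeadm init phase` invocations (certs, kubeconfig, kubelet-start, control-plane, etcd) against the generated kubeadm.yaml. A hedged Go sketch of that sequence follows; the binary and config paths are copied from the log, and this is not minikube's exact invocation (the log runs kubeadm via `sudo env PATH=...`).

// Illustrative sketch: run the kubeadm init phases in the order seen above.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	kubeadm := "/var/lib/minikube/binaries/v1.31.2/kubeadm"
	cfg := "/var/tmp/minikube/kubeadm.yaml"
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, p := range phases {
		args := append([]string{kubeadm}, p...)
		args = append(args, "--config", cfg)
		cmd := exec.Command("sudo", args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintf(os.Stderr, "phase %v failed: %v\n", p, err)
			os.Exit(1)
		}
	}
}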
	I1104 12:29:49.661072   94038 api_server.go:52] waiting for apiserver process to appear ...
	I1104 12:29:49.661221   94038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:29:50.161342   94038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:29:50.661437   94038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:29:51.162114   94038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:29:51.661403   94038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:29:51.674840   94038 api_server.go:72] duration metric: took 2.013767843s to wait for apiserver process to appear ...
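Editor's note: the repeated `pgrep -xnf kube-apiserver.*minikube.*` lines above are a simple poll (roughly every 500ms) waiting for the apiserver process to appear after kubelet-start. A minimal Go sketch of that wait, using the same pgrep expression and an assumed two-minute deadline:

// Illustrative sketch: poll until a kube-apiserver process exists or time out.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		// pgrep exits 0 only when a matching process is found.
		if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			fmt.Println("kube-apiserver process found")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Fprintln(os.Stderr, "timed out waiting for kube-apiserver")
	os.Exit(1)
}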
	I1104 12:29:51.674872   94038 api_server.go:88] waiting for apiserver healthz status ...
	I1104 12:29:51.674895   94038 api_server.go:253] Checking apiserver healthz at https://192.168.50.24:8443/healthz ...
	I1104 12:29:54.018893   94038 api_server.go:279] https://192.168.50.24:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1104 12:29:54.018921   94038 api_server.go:103] status: https://192.168.50.24:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1104 12:29:54.018933   94038 api_server.go:253] Checking apiserver healthz at https://192.168.50.24:8443/healthz ...
	I1104 12:29:54.053468   94038 api_server.go:279] https://192.168.50.24:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1104 12:29:54.053500   94038 api_server.go:103] status: https://192.168.50.24:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1104 12:29:54.175809   94038 api_server.go:253] Checking apiserver healthz at https://192.168.50.24:8443/healthz ...
	I1104 12:29:54.181016   94038 api_server.go:279] https://192.168.50.24:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1104 12:29:54.181048   94038 api_server.go:103] status: https://192.168.50.24:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1104 12:29:54.675687   94038 api_server.go:253] Checking apiserver healthz at https://192.168.50.24:8443/healthz ...
	I1104 12:29:54.682715   94038 api_server.go:279] https://192.168.50.24:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1104 12:29:54.682740   94038 api_server.go:103] status: https://192.168.50.24:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1104 12:29:55.175277   94038 api_server.go:253] Checking apiserver healthz at https://192.168.50.24:8443/healthz ...
	I1104 12:29:55.180914   94038 api_server.go:279] https://192.168.50.24:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1104 12:29:55.180945   94038 api_server.go:103] status: https://192.168.50.24:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1104 12:29:55.675033   94038 api_server.go:253] Checking apiserver healthz at https://192.168.50.24:8443/healthz ...
	I1104 12:29:55.679418   94038 api_server.go:279] https://192.168.50.24:8443/healthz returned 200:
	ok
	I1104 12:29:55.685671   94038 api_server.go:141] control plane version: v1.31.2
	I1104 12:29:55.685712   94038 api_server.go:131] duration metric: took 4.010831889s to wait for apiserver health ...
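Editor's note: the healthz wait above shows the typical restart progression: 403 while RBAC is not yet bootstrapped for anonymous requests, 500 while poststarthooks (rbac/bootstrap-roles, apiservice registration, and others) are still failing, then 200 "ok". A hedged Go sketch of such a wait loop, using the endpoint from this run; TLS verification is skipped purely to keep the sketch short, which minikube itself does not need to do since it holds the CA.

// Illustrative sketch: poll /healthz until it returns 200, treating 403/500
// responses and connection errors as "not ready yet".
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"os"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.50.24:8443/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("healthz:", string(body)) // "ok"
				return
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Fprintln(os.Stderr, "apiserver never became healthy")
	os.Exit(1)
}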
	I1104 12:29:55.685723   94038 cni.go:84] Creating CNI manager for ""
	I1104 12:29:55.685730   94038 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1104 12:29:55.687642   94038 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1104 12:29:55.689147   94038 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1104 12:29:55.700417   94038 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
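Editor's note: "Configuring bridge CNI" amounts to writing a 496-byte conflist to /etc/cni/net.d/1-k8s.conflist. The exact contents are not shown in the log; the JSON below is a plausible stand-in (pod subnet 10.42.0.0/16 taken from the ExtraOptions above), written from a small Go sketch, and should not be read as minikube's real file.

// Illustrative sketch: write a minimal bridge CNI conflist for crio to pick up.
package main

import "os"

const conflist = `{
  "cniVersion": "0.4.0",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipam": { "type": "host-local", "subnet": "10.42.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
`

func main() {
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		panic(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		panic(err)
	}
}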
	I1104 12:29:55.718656   94038 system_pods.go:43] waiting for kube-system pods to appear ...
	I1104 12:29:55.729381   94038 system_pods.go:59] 8 kube-system pods found
	I1104 12:29:55.729413   94038 system_pods.go:61] "coredns-7c65d6cfc9-p4c7f" [412a6743-ca36-422f-b909-5ef1bf6ca37d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1104 12:29:55.729421   94038 system_pods.go:61] "etcd-newest-cni-374564" [6517ee44-4959-43ee-b1f9-8d5222159e7a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1104 12:29:55.729430   94038 system_pods.go:61] "kube-apiserver-newest-cni-374564" [97880f7f-4e79-47d9-987c-12b14dd803c6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1104 12:29:55.729439   94038 system_pods.go:61] "kube-controller-manager-newest-cni-374564" [8270ba8b-02ac-4862-9cb1-862472435483] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1104 12:29:55.729446   94038 system_pods.go:61] "kube-proxy-f6lzk" [3b6de1bb-7076-4192-8176-4e8c74f4f760] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1104 12:29:55.729453   94038 system_pods.go:61] "kube-scheduler-newest-cni-374564" [2569aefd-4380-4be1-934c-e48c56d6588d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1104 12:29:55.729460   94038 system_pods.go:61] "metrics-server-6867b74b74-86sz7" [2c2b7246-afde-45a5-9a76-98260f48b46e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1104 12:29:55.729469   94038 system_pods.go:61] "storage-provisioner" [c6185f14-417a-4660-b40f-b715ce278ecf] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1104 12:29:55.729514   94038 system_pods.go:74] duration metric: took 10.838961ms to wait for pod list to return data ...
	I1104 12:29:55.729521   94038 node_conditions.go:102] verifying NodePressure condition ...
	I1104 12:29:55.737595   94038 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1104 12:29:55.737625   94038 node_conditions.go:123] node cpu capacity is 2
	I1104 12:29:55.737636   94038 node_conditions.go:105] duration metric: took 8.111559ms to run NodePressure ...
	I1104 12:29:55.737655   94038 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1104 12:29:56.005442   94038 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1104 12:29:56.016246   94038 ops.go:34] apiserver oom_adj: -16
	I1104 12:29:56.016277   94038 kubeadm.go:597] duration metric: took 7.945590917s to restartPrimaryControlPlane
	I1104 12:29:56.016288   94038 kubeadm.go:394] duration metric: took 7.988981982s to StartCluster
	I1104 12:29:56.016307   94038 settings.go:142] acquiring lock: {Name:mk5550833c1f1a4ab4fbb2bf42ff8bd7f6341220 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 12:29:56.016393   94038 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19906-19898/kubeconfig
	I1104 12:29:56.017198   94038 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19906-19898/kubeconfig: {Name:mk164657c178e6abe086acb5c1cadb969cb2b0cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 12:29:56.017469   94038 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.24 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1104 12:29:56.017553   94038 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1104 12:29:56.017652   94038 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-374564"
	I1104 12:29:56.017671   94038 addons.go:234] Setting addon storage-provisioner=true in "newest-cni-374564"
	W1104 12:29:56.017682   94038 addons.go:243] addon storage-provisioner should already be in state true
	I1104 12:29:56.017687   94038 addons.go:69] Setting default-storageclass=true in profile "newest-cni-374564"
	I1104 12:29:56.017704   94038 config.go:182] Loaded profile config "newest-cni-374564": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1104 12:29:56.017713   94038 host.go:66] Checking if "newest-cni-374564" exists ...
	I1104 12:29:56.017720   94038 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-374564"
	I1104 12:29:56.017722   94038 addons.go:69] Setting dashboard=true in profile "newest-cni-374564"
	I1104 12:29:56.017750   94038 addons.go:234] Setting addon dashboard=true in "newest-cni-374564"
	W1104 12:29:56.017760   94038 addons.go:243] addon dashboard should already be in state true
	I1104 12:29:56.017797   94038 host.go:66] Checking if "newest-cni-374564" exists ...
	I1104 12:29:56.017748   94038 addons.go:69] Setting metrics-server=true in profile "newest-cni-374564"
	I1104 12:29:56.017863   94038 addons.go:234] Setting addon metrics-server=true in "newest-cni-374564"
	W1104 12:29:56.017872   94038 addons.go:243] addon metrics-server should already be in state true
	I1104 12:29:56.017897   94038 host.go:66] Checking if "newest-cni-374564" exists ...
	I1104 12:29:56.018068   94038 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:29:56.018120   94038 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:29:56.018150   94038 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:29:56.018129   94038 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:29:56.018181   94038 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:29:56.018246   94038 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:29:56.018257   94038 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:29:56.018270   94038 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:29:56.019956   94038 out.go:177] * Verifying Kubernetes components...
	I1104 12:29:56.021427   94038 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1104 12:29:56.035337   94038 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42217
	I1104 12:29:56.035513   94038 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35767
	I1104 12:29:56.035990   94038 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:29:56.036023   94038 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:29:56.036503   94038 main.go:141] libmachine: Using API Version  1
	I1104 12:29:56.036523   94038 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:29:56.036611   94038 main.go:141] libmachine: Using API Version  1
	I1104 12:29:56.036623   94038 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:29:56.036954   94038 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:29:56.036954   94038 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:29:56.037517   94038 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:29:56.037539   94038 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:29:56.037899   94038 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41337
	I1104 12:29:56.037905   94038 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46577
	I1104 12:29:56.037906   94038 main.go:141] libmachine: (newest-cni-374564) Calling .GetState
	I1104 12:29:56.038454   94038 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:29:56.038483   94038 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:29:56.038975   94038 main.go:141] libmachine: Using API Version  1
	I1104 12:29:56.038992   94038 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:29:56.039149   94038 main.go:141] libmachine: Using API Version  1
	I1104 12:29:56.039163   94038 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:29:56.039558   94038 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:29:56.039597   94038 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:29:56.040057   94038 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:29:56.040072   94038 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:29:56.040100   94038 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:29:56.040125   94038 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:29:56.054345   94038 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39439
	I1104 12:29:56.054850   94038 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:29:56.055276   94038 main.go:141] libmachine: Using API Version  1
	I1104 12:29:56.055294   94038 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:29:56.055436   94038 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37123
	I1104 12:29:56.055667   94038 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:29:56.055804   94038 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:29:56.055884   94038 main.go:141] libmachine: (newest-cni-374564) Calling .GetState
	I1104 12:29:56.056337   94038 main.go:141] libmachine: Using API Version  1
	I1104 12:29:56.056355   94038 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:29:56.056735   94038 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:29:56.056922   94038 main.go:141] libmachine: (newest-cni-374564) Calling .GetState
	I1104 12:29:56.057479   94038 main.go:141] libmachine: (newest-cni-374564) Calling .DriverName
	I1104 12:29:56.058674   94038 main.go:141] libmachine: (newest-cni-374564) Calling .DriverName
	I1104 12:29:56.059471   94038 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1104 12:29:56.060301   94038 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1104 12:29:56.060998   94038 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1104 12:29:56.061008   94038 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1104 12:29:56.061023   94038 main.go:141] libmachine: (newest-cni-374564) Calling .GetSSHHostname
	I1104 12:29:56.062090   94038 addons.go:234] Setting addon default-storageclass=true in "newest-cni-374564"
	W1104 12:29:56.062105   94038 addons.go:243] addon default-storageclass should already be in state true
	I1104 12:29:56.062127   94038 host.go:66] Checking if "newest-cni-374564" exists ...
	I1104 12:29:56.062464   94038 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:29:56.062498   94038 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:29:56.063066   94038 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I1104 12:29:56.064372   94038 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1104 12:29:56.064391   94038 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1104 12:29:56.064411   94038 main.go:141] libmachine: (newest-cni-374564) Calling .GetSSHHostname
	I1104 12:29:56.065838   94038 main.go:141] libmachine: (newest-cni-374564) DBG | domain newest-cni-374564 has defined MAC address 52:54:00:ed:c9:c8 in network mk-newest-cni-374564
	I1104 12:29:56.067426   94038 main.go:141] libmachine: (newest-cni-374564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:c9:c8", ip: ""} in network mk-newest-cni-374564: {Iface:virbr2 ExpiryTime:2024-11-04 13:29:34 +0000 UTC Type:0 Mac:52:54:00:ed:c9:c8 Iaid: IPaddr:192.168.50.24 Prefix:24 Hostname:newest-cni-374564 Clientid:01:52:54:00:ed:c9:c8}
	I1104 12:29:56.067453   94038 main.go:141] libmachine: (newest-cni-374564) DBG | domain newest-cni-374564 has defined IP address 192.168.50.24 and MAC address 52:54:00:ed:c9:c8 in network mk-newest-cni-374564
	I1104 12:29:56.067922   94038 main.go:141] libmachine: (newest-cni-374564) Calling .GetSSHPort
	I1104 12:29:56.068165   94038 main.go:141] libmachine: (newest-cni-374564) Calling .GetSSHKeyPath
	I1104 12:29:56.068283   94038 main.go:141] libmachine: (newest-cni-374564) Calling .GetSSHUsername
	I1104 12:29:56.068372   94038 sshutil.go:53] new ssh client: &{IP:192.168.50.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/newest-cni-374564/id_rsa Username:docker}
	I1104 12:29:56.069040   94038 main.go:141] libmachine: (newest-cni-374564) DBG | domain newest-cni-374564 has defined MAC address 52:54:00:ed:c9:c8 in network mk-newest-cni-374564
	I1104 12:29:56.069391   94038 main.go:141] libmachine: (newest-cni-374564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:c9:c8", ip: ""} in network mk-newest-cni-374564: {Iface:virbr2 ExpiryTime:2024-11-04 13:29:34 +0000 UTC Type:0 Mac:52:54:00:ed:c9:c8 Iaid: IPaddr:192.168.50.24 Prefix:24 Hostname:newest-cni-374564 Clientid:01:52:54:00:ed:c9:c8}
	I1104 12:29:56.069410   94038 main.go:141] libmachine: (newest-cni-374564) DBG | domain newest-cni-374564 has defined IP address 192.168.50.24 and MAC address 52:54:00:ed:c9:c8 in network mk-newest-cni-374564
	I1104 12:29:56.069662   94038 main.go:141] libmachine: (newest-cni-374564) Calling .GetSSHPort
	I1104 12:29:56.069837   94038 main.go:141] libmachine: (newest-cni-374564) Calling .GetSSHKeyPath
	I1104 12:29:56.069985   94038 main.go:141] libmachine: (newest-cni-374564) Calling .GetSSHUsername
	I1104 12:29:56.070090   94038 sshutil.go:53] new ssh client: &{IP:192.168.50.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/newest-cni-374564/id_rsa Username:docker}
	I1104 12:29:56.078504   94038 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46849
	I1104 12:29:56.078966   94038 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:29:56.079474   94038 main.go:141] libmachine: Using API Version  1
	I1104 12:29:56.079520   94038 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:29:56.079647   94038 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33705
	I1104 12:29:56.079835   94038 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:29:56.080009   94038 main.go:141] libmachine: (newest-cni-374564) Calling .GetState
	I1104 12:29:56.080155   94038 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:29:56.080617   94038 main.go:141] libmachine: Using API Version  1
	I1104 12:29:56.080633   94038 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:29:56.081037   94038 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:29:56.081598   94038 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:29:56.081632   94038 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:29:56.082081   94038 main.go:141] libmachine: (newest-cni-374564) Calling .DriverName
	I1104 12:29:56.084212   94038 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1104 12:29:56.085669   94038 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1104 12:29:56.085687   94038 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1104 12:29:56.085703   94038 main.go:141] libmachine: (newest-cni-374564) Calling .GetSSHHostname
	I1104 12:29:56.088442   94038 main.go:141] libmachine: (newest-cni-374564) DBG | domain newest-cni-374564 has defined MAC address 52:54:00:ed:c9:c8 in network mk-newest-cni-374564
	I1104 12:29:56.088718   94038 main.go:141] libmachine: (newest-cni-374564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:c9:c8", ip: ""} in network mk-newest-cni-374564: {Iface:virbr2 ExpiryTime:2024-11-04 13:29:34 +0000 UTC Type:0 Mac:52:54:00:ed:c9:c8 Iaid: IPaddr:192.168.50.24 Prefix:24 Hostname:newest-cni-374564 Clientid:01:52:54:00:ed:c9:c8}
	I1104 12:29:56.088752   94038 main.go:141] libmachine: (newest-cni-374564) DBG | domain newest-cni-374564 has defined IP address 192.168.50.24 and MAC address 52:54:00:ed:c9:c8 in network mk-newest-cni-374564
	I1104 12:29:56.088923   94038 main.go:141] libmachine: (newest-cni-374564) Calling .GetSSHPort
	I1104 12:29:56.089060   94038 main.go:141] libmachine: (newest-cni-374564) Calling .GetSSHKeyPath
	I1104 12:29:56.089144   94038 main.go:141] libmachine: (newest-cni-374564) Calling .GetSSHUsername
	I1104 12:29:56.089250   94038 sshutil.go:53] new ssh client: &{IP:192.168.50.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/newest-cni-374564/id_rsa Username:docker}
	I1104 12:29:56.099846   94038 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39883
	I1104 12:29:56.100420   94038 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:29:56.100984   94038 main.go:141] libmachine: Using API Version  1
	I1104 12:29:56.101006   94038 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:29:56.101392   94038 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:29:56.101583   94038 main.go:141] libmachine: (newest-cni-374564) Calling .GetState
	I1104 12:29:56.103244   94038 main.go:141] libmachine: (newest-cni-374564) Calling .DriverName
	I1104 12:29:56.103458   94038 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1104 12:29:56.103474   94038 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1104 12:29:56.103487   94038 main.go:141] libmachine: (newest-cni-374564) Calling .GetSSHHostname
	I1104 12:29:56.106714   94038 main.go:141] libmachine: (newest-cni-374564) DBG | domain newest-cni-374564 has defined MAC address 52:54:00:ed:c9:c8 in network mk-newest-cni-374564
	I1104 12:29:56.107053   94038 main.go:141] libmachine: (newest-cni-374564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:c9:c8", ip: ""} in network mk-newest-cni-374564: {Iface:virbr2 ExpiryTime:2024-11-04 13:29:34 +0000 UTC Type:0 Mac:52:54:00:ed:c9:c8 Iaid: IPaddr:192.168.50.24 Prefix:24 Hostname:newest-cni-374564 Clientid:01:52:54:00:ed:c9:c8}
	I1104 12:29:56.107111   94038 main.go:141] libmachine: (newest-cni-374564) DBG | domain newest-cni-374564 has defined IP address 192.168.50.24 and MAC address 52:54:00:ed:c9:c8 in network mk-newest-cni-374564
	I1104 12:29:56.107225   94038 main.go:141] libmachine: (newest-cni-374564) Calling .GetSSHPort
	I1104 12:29:56.107408   94038 main.go:141] libmachine: (newest-cni-374564) Calling .GetSSHKeyPath
	I1104 12:29:56.107546   94038 main.go:141] libmachine: (newest-cni-374564) Calling .GetSSHUsername
	I1104 12:29:56.107670   94038 sshutil.go:53] new ssh client: &{IP:192.168.50.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/newest-cni-374564/id_rsa Username:docker}
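Editor's note: each "new ssh client" line above (sshutil.go) opens a key-authenticated SSH session to the node so the addon manifests can be copied and applied. A hedged sketch of an equivalent client using golang.org/x/crypto/ssh, with the key path, user, and address taken from the log; host-key verification is skipped here only for brevity.

// Illustrative sketch: dial the node over SSH with public-key auth.
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	keyPath := "/home/jenkins/minikube-integration/19906-19898/.minikube/machines/newest-cni-374564/id_rsa"
	key, err := os.ReadFile(keyPath)
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // sketch only; verify in real code
	}
	client, err := ssh.Dial("tcp", "192.168.50.24:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	fmt.Println("connected:", string(client.ServerVersion()))
}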
	I1104 12:29:56.251142   94038 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1104 12:29:56.281262   94038 api_server.go:52] waiting for apiserver process to appear ...
	I1104 12:29:56.281351   94038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:29:56.295959   94038 api_server.go:72] duration metric: took 278.453941ms to wait for apiserver process to appear ...
	I1104 12:29:56.295982   94038 api_server.go:88] waiting for apiserver healthz status ...
	I1104 12:29:56.296002   94038 api_server.go:253] Checking apiserver healthz at https://192.168.50.24:8443/healthz ...
	I1104 12:29:56.305788   94038 api_server.go:279] https://192.168.50.24:8443/healthz returned 200:
	ok
	I1104 12:29:56.306877   94038 api_server.go:141] control plane version: v1.31.2
	I1104 12:29:56.306895   94038 api_server.go:131] duration metric: took 10.906949ms to wait for apiserver health ...
	I1104 12:29:56.306902   94038 system_pods.go:43] waiting for kube-system pods to appear ...
	I1104 12:29:56.314780   94038 system_pods.go:59] 8 kube-system pods found
	I1104 12:29:56.314823   94038 system_pods.go:61] "coredns-7c65d6cfc9-p4c7f" [412a6743-ca36-422f-b909-5ef1bf6ca37d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1104 12:29:56.314835   94038 system_pods.go:61] "etcd-newest-cni-374564" [6517ee44-4959-43ee-b1f9-8d5222159e7a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1104 12:29:56.314846   94038 system_pods.go:61] "kube-apiserver-newest-cni-374564" [97880f7f-4e79-47d9-987c-12b14dd803c6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1104 12:29:56.314866   94038 system_pods.go:61] "kube-controller-manager-newest-cni-374564" [8270ba8b-02ac-4862-9cb1-862472435483] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1104 12:29:56.314877   94038 system_pods.go:61] "kube-proxy-f6lzk" [3b6de1bb-7076-4192-8176-4e8c74f4f760] Running
	I1104 12:29:56.314887   94038 system_pods.go:61] "kube-scheduler-newest-cni-374564" [2569aefd-4380-4be1-934c-e48c56d6588d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1104 12:29:56.314904   94038 system_pods.go:61] "metrics-server-6867b74b74-86sz7" [2c2b7246-afde-45a5-9a76-98260f48b46e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1104 12:29:56.314919   94038 system_pods.go:61] "storage-provisioner" [c6185f14-417a-4660-b40f-b715ce278ecf] Running
	I1104 12:29:56.314933   94038 system_pods.go:74] duration metric: took 8.024744ms to wait for pod list to return data ...
	I1104 12:29:56.314942   94038 default_sa.go:34] waiting for default service account to be created ...
	I1104 12:29:56.319205   94038 default_sa.go:45] found service account: "default"
	I1104 12:29:56.319234   94038 default_sa.go:55] duration metric: took 4.2795ms for default service account to be created ...
	I1104 12:29:56.319247   94038 kubeadm.go:582] duration metric: took 301.74553ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1104 12:29:56.319271   94038 node_conditions.go:102] verifying NodePressure condition ...
	I1104 12:29:56.325411   94038 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1104 12:29:56.325459   94038 node_conditions.go:123] node cpu capacity is 2
	I1104 12:29:56.325476   94038 node_conditions.go:105] duration metric: took 6.199714ms to run NodePressure ...
	I1104 12:29:56.325489   94038 start.go:241] waiting for startup goroutines ...
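Editor's note: the verification pass above (kube-system pods listed, "default" service account found, NodePressure checked) maps onto ordinary Kubernetes API calls. A hedged client-go sketch of the first two checks, using the kubeconfig path from this run; it is a simplified illustration, not minikube's verifier.

// Illustrative sketch: list kube-system pods and confirm the default
// service account exists, via client-go.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19906-19898/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
	if _, err := cs.CoreV1().ServiceAccounts("default").Get(context.TODO(), "default", metav1.GetOptions{}); err == nil {
		fmt.Println(`found service account: "default"`)
	}
}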
	I1104 12:29:56.332665   94038 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1104 12:29:56.332686   94038 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1104 12:29:56.350854   94038 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1104 12:29:56.351700   94038 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1104 12:29:56.351721   94038 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1104 12:29:56.364671   94038 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1104 12:29:56.372313   94038 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1104 12:29:56.372340   94038 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1104 12:29:56.400303   94038 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1104 12:29:56.400326   94038 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1104 12:29:56.424916   94038 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1104 12:29:56.424944   94038 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1104 12:29:56.453609   94038 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1104 12:29:56.453637   94038 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1104 12:29:56.481543   94038 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1104 12:29:56.481573   94038 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1104 12:29:56.508171   94038 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1104 12:29:56.508200   94038 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1104 12:29:56.524865   94038 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1104 12:29:56.524891   94038 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1104 12:29:56.543725   94038 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1104 12:29:56.543752   94038 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1104 12:29:56.581284   94038 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1104 12:29:56.581308   94038 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1104 12:29:56.612069   94038 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1104 12:29:56.643280   94038 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1104 12:29:56.643317   94038 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1104 12:29:56.697052   94038 main.go:141] libmachine: Making call to close driver server
	I1104 12:29:56.697080   94038 main.go:141] libmachine: (newest-cni-374564) Calling .Close
	I1104 12:29:56.697418   94038 main.go:141] libmachine: Successfully made call to close driver server
	I1104 12:29:56.697440   94038 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 12:29:56.697450   94038 main.go:141] libmachine: Making call to close driver server
	I1104 12:29:56.697458   94038 main.go:141] libmachine: (newest-cni-374564) Calling .Close
	I1104 12:29:56.697420   94038 main.go:141] libmachine: (newest-cni-374564) DBG | Closing plugin on server side
	I1104 12:29:56.697686   94038 main.go:141] libmachine: Successfully made call to close driver server
	I1104 12:29:56.697706   94038 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 12:29:56.697734   94038 main.go:141] libmachine: (newest-cni-374564) DBG | Closing plugin on server side
	I1104 12:29:56.699804   94038 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
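Editor's note: the addon installs above all follow one pattern: copy each manifest under /etc/kubernetes/addons/ on the node, then run a single `sudo KUBECONFIG=/var/lib/minikube/kubeconfig kubectl apply` with one -f flag per manifest. A small Go sketch of building that invocation, with the kubectl path from the log and a hypothetical subset of the dashboard manifests as arguments:

// Illustrative sketch: apply several addon manifests in one kubectl call.
package main

import (
	"os"
	"os/exec"
)

func applyAddon(manifests ...string) error {
	args := []string{"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.31.2/kubectl", "apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	cmd := exec.Command("sudo", args...) // sudo VAR=value cmd works as in the log
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	_ = applyAddon(
		"/etc/kubernetes/addons/dashboard-ns.yaml",
		"/etc/kubernetes/addons/dashboard-dp.yaml",
		"/etc/kubernetes/addons/dashboard-svc.yaml",
	)
}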
	I1104 12:29:56.708523   94038 main.go:141] libmachine: Making call to close driver server
	I1104 12:29:56.708540   94038 main.go:141] libmachine: (newest-cni-374564) Calling .Close
	I1104 12:29:56.708812   94038 main.go:141] libmachine: (newest-cni-374564) DBG | Closing plugin on server side
	I1104 12:29:56.708867   94038 main.go:141] libmachine: Successfully made call to close driver server
	I1104 12:29:56.708882   94038 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 12:29:58.162174   94038 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.797464013s)
	I1104 12:29:58.162227   94038 main.go:141] libmachine: Making call to close driver server
	I1104 12:29:58.162241   94038 main.go:141] libmachine: (newest-cni-374564) Calling .Close
	I1104 12:29:58.162335   94038 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.550224387s)
	I1104 12:29:58.162423   94038 main.go:141] libmachine: Making call to close driver server
	I1104 12:29:58.162437   94038 main.go:141] libmachine: (newest-cni-374564) Calling .Close
	I1104 12:29:58.162552   94038 main.go:141] libmachine: Successfully made call to close driver server
	I1104 12:29:58.162570   94038 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 12:29:58.162580   94038 main.go:141] libmachine: Making call to close driver server
	I1104 12:29:58.162587   94038 main.go:141] libmachine: (newest-cni-374564) Calling .Close
	I1104 12:29:58.162736   94038 main.go:141] libmachine: Successfully made call to close driver server
	I1104 12:29:58.162748   94038 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 12:29:58.162770   94038 main.go:141] libmachine: Making call to close driver server
	I1104 12:29:58.162810   94038 main.go:141] libmachine: (newest-cni-374564) Calling .Close
	I1104 12:29:58.162845   94038 main.go:141] libmachine: (newest-cni-374564) DBG | Closing plugin on server side
	I1104 12:29:58.162883   94038 main.go:141] libmachine: Successfully made call to close driver server
	I1104 12:29:58.162894   94038 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 12:29:58.163019   94038 main.go:141] libmachine: (newest-cni-374564) DBG | Closing plugin on server side
	I1104 12:29:58.163021   94038 main.go:141] libmachine: Successfully made call to close driver server
	I1104 12:29:58.163043   94038 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 12:29:58.163058   94038 addons.go:475] Verifying addon metrics-server=true in "newest-cni-374564"
	I1104 12:29:58.266743   94038 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.566894743s)
	I1104 12:29:58.266819   94038 main.go:141] libmachine: Making call to close driver server
	I1104 12:29:58.266840   94038 main.go:141] libmachine: (newest-cni-374564) Calling .Close
	I1104 12:29:58.267166   94038 main.go:141] libmachine: Successfully made call to close driver server
	I1104 12:29:58.267182   94038 main.go:141] libmachine: (newest-cni-374564) DBG | Closing plugin on server side
	I1104 12:29:58.267184   94038 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 12:29:58.267196   94038 main.go:141] libmachine: Making call to close driver server
	I1104 12:29:58.267204   94038 main.go:141] libmachine: (newest-cni-374564) Calling .Close
	I1104 12:29:58.267380   94038 main.go:141] libmachine: Successfully made call to close driver server
	I1104 12:29:58.267391   94038 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 12:29:58.267442   94038 main.go:141] libmachine: (newest-cni-374564) DBG | Closing plugin on server side
	I1104 12:29:58.269130   94038 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-374564 addons enable metrics-server
	
	I1104 12:29:58.270666   94038 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I1104 12:29:58.271892   94038 addons.go:510] duration metric: took 2.254347526s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I1104 12:29:58.271937   94038 start.go:246] waiting for cluster config update ...
	I1104 12:29:58.271952   94038 start.go:255] writing updated cluster config ...
	I1104 12:29:58.272264   94038 ssh_runner.go:195] Run: rm -f paused
	I1104 12:29:58.324286   94038 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1104 12:29:58.325602   94038 out.go:177] * Done! kubectl is now configured to use "newest-cni-374564" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 04 12:30:36 default-k8s-diff-port-036892 crio[715]: time="2024-11-04 12:30:36.958762418Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730723436958327538,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ebb01305-61a9-409a-b03b-cacab6e7ef17 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 04 12:30:36 default-k8s-diff-port-036892 crio[715]: time="2024-11-04 12:30:36.959367278Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=33bbcef5-4f95-4ed9-8b94-047118efd5be name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 12:30:36 default-k8s-diff-port-036892 crio[715]: time="2024-11-04 12:30:36.959562992Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=33bbcef5-4f95-4ed9-8b94-047118efd5be name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 12:30:36 default-k8s-diff-port-036892 crio[715]: time="2024-11-04 12:30:36.959959688Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9e9ecf7280a07a47b274097b461b9f3467e388751471e9da1adfae3166380516,PodSandboxId:63dde0eedfb8d2dc8f1fef3fbb14464b019df60274a7b6baadc8d57e687012cb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730722116245829761,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18745f89-fc15-4a4c-b68b-7e80cd4f393b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2daa9e013a548a6a85a13d6376c8f84998afdea5203603471083f9888dd28723,PodSandboxId:715148c45c3ccdb0ca8f9eb3afec309ea7e06c18aa5e22c8cc1026dac37e6e77,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1730722095455175945,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ddc847de-e4e6-4c3d-b91d-835709a0fc1e,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51442200af1bb501b83cd7280eeee93590e5cb241e91cf5af1608a6eccfdf5a1,PodSandboxId:bdd6613591b3ac6bdb8f3bc3145cd2f9f793f9a128c14c90da944eea288da25b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730722093219795018,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-zw2tv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71ce75a4-f051-4014-9ed0-7b275ea940a9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e60ae78d5610ae63ce7016b58d60da05791a055324eea67efd0feb374bdd4b4,PodSandboxId:a0029a9d0f6992e93adae3e3901e285958292ff56d2ea538267b1812f994cdb7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730722085454886824,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j2srm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9450cebd-a
efb-4f1a-bb99-7d1dab054dd7,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8d8096ede6a877ac87491c90844f6cccb8fad4309f4a01e86bf1f7e3a4c9823,PodSandboxId:63dde0eedfb8d2dc8f1fef3fbb14464b019df60274a7b6baadc8d57e687012cb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1730722085420997112,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18745f89-fc15-4a4c-b68b
-7e80cd4f393b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c33ea99d25624abd85dcac43e17e1dd6dcea3a7b6333e91e8dc99ad02c037e07,PodSandboxId:da5c364a1d9a4546aad1aa3a3846f63c091adaa50442c5400adac188a78360ed,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730722081010046880,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-036892,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d35e6b1145643d0efcfc
d4f272e0a6f,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e1787441f88b7be6fbfb08abfa621dbc26e984bbd40544c92bf02bae7c7709a,PodSandboxId:a09666f80e3ece07b2519ef7517aa8ae9e7635c0a74127c95d2f2e28e7f92431,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730722080992915019,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-036892,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8278064e03f128ec447844
a988b7d9b,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1346cefb5059440d0c41bba9a5526748e6c783b8f935f34dcf419f728abcd35e,PodSandboxId:b461843050d213d7949ade519775a62037be2b31ff8de72478643015d7f9c4ee,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730722080983525413,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-036892,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: 5e254e23fc4144569eb1973ac1dd1e60,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bc906f9e4e940157a2bb10397ebc3ba90f93123792d5de597633f6c5b3c64d7,PodSandboxId:35aa8150d803368ef95b4a27e05df9c96245cdbcc529ead202eeade3475dda06,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730722081002731797,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-036892,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c538bf12a0f213511743ecaca4b746
f1,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=33bbcef5-4f95-4ed9-8b94-047118efd5be name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 12:30:36 default-k8s-diff-port-036892 crio[715]: time="2024-11-04 12:30:36.995324372Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8356f834-fb51-45d6-82ad-73d688b19ec7 name=/runtime.v1.RuntimeService/Version
	Nov 04 12:30:36 default-k8s-diff-port-036892 crio[715]: time="2024-11-04 12:30:36.995439524Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8356f834-fb51-45d6-82ad-73d688b19ec7 name=/runtime.v1.RuntimeService/Version
	Nov 04 12:30:36 default-k8s-diff-port-036892 crio[715]: time="2024-11-04 12:30:36.996702589Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=03ebfa14-3d54-4cd7-b44f-6c9925d936b1 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 04 12:30:36 default-k8s-diff-port-036892 crio[715]: time="2024-11-04 12:30:36.997096619Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730723436997075068,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=03ebfa14-3d54-4cd7-b44f-6c9925d936b1 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 04 12:30:36 default-k8s-diff-port-036892 crio[715]: time="2024-11-04 12:30:36.997678349Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=769aeebf-45ee-4081-a736-eafd35091407 name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 12:30:36 default-k8s-diff-port-036892 crio[715]: time="2024-11-04 12:30:36.997742139Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=769aeebf-45ee-4081-a736-eafd35091407 name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 12:30:36 default-k8s-diff-port-036892 crio[715]: time="2024-11-04 12:30:36.997926280Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9e9ecf7280a07a47b274097b461b9f3467e388751471e9da1adfae3166380516,PodSandboxId:63dde0eedfb8d2dc8f1fef3fbb14464b019df60274a7b6baadc8d57e687012cb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730722116245829761,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18745f89-fc15-4a4c-b68b-7e80cd4f393b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2daa9e013a548a6a85a13d6376c8f84998afdea5203603471083f9888dd28723,PodSandboxId:715148c45c3ccdb0ca8f9eb3afec309ea7e06c18aa5e22c8cc1026dac37e6e77,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1730722095455175945,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ddc847de-e4e6-4c3d-b91d-835709a0fc1e,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51442200af1bb501b83cd7280eeee93590e5cb241e91cf5af1608a6eccfdf5a1,PodSandboxId:bdd6613591b3ac6bdb8f3bc3145cd2f9f793f9a128c14c90da944eea288da25b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730722093219795018,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-zw2tv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71ce75a4-f051-4014-9ed0-7b275ea940a9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e60ae78d5610ae63ce7016b58d60da05791a055324eea67efd0feb374bdd4b4,PodSandboxId:a0029a9d0f6992e93adae3e3901e285958292ff56d2ea538267b1812f994cdb7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730722085454886824,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j2srm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9450cebd-a
efb-4f1a-bb99-7d1dab054dd7,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8d8096ede6a877ac87491c90844f6cccb8fad4309f4a01e86bf1f7e3a4c9823,PodSandboxId:63dde0eedfb8d2dc8f1fef3fbb14464b019df60274a7b6baadc8d57e687012cb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1730722085420997112,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18745f89-fc15-4a4c-b68b
-7e80cd4f393b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c33ea99d25624abd85dcac43e17e1dd6dcea3a7b6333e91e8dc99ad02c037e07,PodSandboxId:da5c364a1d9a4546aad1aa3a3846f63c091adaa50442c5400adac188a78360ed,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730722081010046880,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-036892,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d35e6b1145643d0efcfc
d4f272e0a6f,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e1787441f88b7be6fbfb08abfa621dbc26e984bbd40544c92bf02bae7c7709a,PodSandboxId:a09666f80e3ece07b2519ef7517aa8ae9e7635c0a74127c95d2f2e28e7f92431,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730722080992915019,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-036892,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8278064e03f128ec447844
a988b7d9b,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1346cefb5059440d0c41bba9a5526748e6c783b8f935f34dcf419f728abcd35e,PodSandboxId:b461843050d213d7949ade519775a62037be2b31ff8de72478643015d7f9c4ee,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730722080983525413,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-036892,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: 5e254e23fc4144569eb1973ac1dd1e60,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bc906f9e4e940157a2bb10397ebc3ba90f93123792d5de597633f6c5b3c64d7,PodSandboxId:35aa8150d803368ef95b4a27e05df9c96245cdbcc529ead202eeade3475dda06,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730722081002731797,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-036892,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c538bf12a0f213511743ecaca4b746
f1,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=769aeebf-45ee-4081-a736-eafd35091407 name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 12:30:37 default-k8s-diff-port-036892 crio[715]: time="2024-11-04 12:30:37.038489009Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=54b39d2a-3b35-4cb5-a630-dd4441699cb1 name=/runtime.v1.RuntimeService/Version
	Nov 04 12:30:37 default-k8s-diff-port-036892 crio[715]: time="2024-11-04 12:30:37.038585217Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=54b39d2a-3b35-4cb5-a630-dd4441699cb1 name=/runtime.v1.RuntimeService/Version
	Nov 04 12:30:37 default-k8s-diff-port-036892 crio[715]: time="2024-11-04 12:30:37.039742933Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4340f0f5-d482-4b1a-8d9e-4461027478f5 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 04 12:30:37 default-k8s-diff-port-036892 crio[715]: time="2024-11-04 12:30:37.040144281Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730723437040119844,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4340f0f5-d482-4b1a-8d9e-4461027478f5 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 04 12:30:37 default-k8s-diff-port-036892 crio[715]: time="2024-11-04 12:30:37.040612579Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d6cd85fe-0187-4385-b114-383fd4dc45c2 name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 12:30:37 default-k8s-diff-port-036892 crio[715]: time="2024-11-04 12:30:37.040691119Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d6cd85fe-0187-4385-b114-383fd4dc45c2 name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 12:30:37 default-k8s-diff-port-036892 crio[715]: time="2024-11-04 12:30:37.040886238Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9e9ecf7280a07a47b274097b461b9f3467e388751471e9da1adfae3166380516,PodSandboxId:63dde0eedfb8d2dc8f1fef3fbb14464b019df60274a7b6baadc8d57e687012cb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730722116245829761,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18745f89-fc15-4a4c-b68b-7e80cd4f393b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2daa9e013a548a6a85a13d6376c8f84998afdea5203603471083f9888dd28723,PodSandboxId:715148c45c3ccdb0ca8f9eb3afec309ea7e06c18aa5e22c8cc1026dac37e6e77,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1730722095455175945,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ddc847de-e4e6-4c3d-b91d-835709a0fc1e,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51442200af1bb501b83cd7280eeee93590e5cb241e91cf5af1608a6eccfdf5a1,PodSandboxId:bdd6613591b3ac6bdb8f3bc3145cd2f9f793f9a128c14c90da944eea288da25b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730722093219795018,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-zw2tv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71ce75a4-f051-4014-9ed0-7b275ea940a9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e60ae78d5610ae63ce7016b58d60da05791a055324eea67efd0feb374bdd4b4,PodSandboxId:a0029a9d0f6992e93adae3e3901e285958292ff56d2ea538267b1812f994cdb7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730722085454886824,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j2srm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9450cebd-a
efb-4f1a-bb99-7d1dab054dd7,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8d8096ede6a877ac87491c90844f6cccb8fad4309f4a01e86bf1f7e3a4c9823,PodSandboxId:63dde0eedfb8d2dc8f1fef3fbb14464b019df60274a7b6baadc8d57e687012cb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1730722085420997112,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18745f89-fc15-4a4c-b68b
-7e80cd4f393b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c33ea99d25624abd85dcac43e17e1dd6dcea3a7b6333e91e8dc99ad02c037e07,PodSandboxId:da5c364a1d9a4546aad1aa3a3846f63c091adaa50442c5400adac188a78360ed,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730722081010046880,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-036892,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d35e6b1145643d0efcfc
d4f272e0a6f,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e1787441f88b7be6fbfb08abfa621dbc26e984bbd40544c92bf02bae7c7709a,PodSandboxId:a09666f80e3ece07b2519ef7517aa8ae9e7635c0a74127c95d2f2e28e7f92431,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730722080992915019,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-036892,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8278064e03f128ec447844
a988b7d9b,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1346cefb5059440d0c41bba9a5526748e6c783b8f935f34dcf419f728abcd35e,PodSandboxId:b461843050d213d7949ade519775a62037be2b31ff8de72478643015d7f9c4ee,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730722080983525413,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-036892,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: 5e254e23fc4144569eb1973ac1dd1e60,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bc906f9e4e940157a2bb10397ebc3ba90f93123792d5de597633f6c5b3c64d7,PodSandboxId:35aa8150d803368ef95b4a27e05df9c96245cdbcc529ead202eeade3475dda06,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730722081002731797,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-036892,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c538bf12a0f213511743ecaca4b746
f1,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d6cd85fe-0187-4385-b114-383fd4dc45c2 name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 12:30:37 default-k8s-diff-port-036892 crio[715]: time="2024-11-04 12:30:37.072327904Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8e050bd3-45e6-44da-95b8-2cc1a7ba3069 name=/runtime.v1.RuntimeService/Version
	Nov 04 12:30:37 default-k8s-diff-port-036892 crio[715]: time="2024-11-04 12:30:37.072444421Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8e050bd3-45e6-44da-95b8-2cc1a7ba3069 name=/runtime.v1.RuntimeService/Version
	Nov 04 12:30:37 default-k8s-diff-port-036892 crio[715]: time="2024-11-04 12:30:37.073199354Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=10c3b839-7e06-41e4-b873-4e6ca252cd34 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 04 12:30:37 default-k8s-diff-port-036892 crio[715]: time="2024-11-04 12:30:37.073602669Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730723437073582336,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=10c3b839-7e06-41e4-b873-4e6ca252cd34 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 04 12:30:37 default-k8s-diff-port-036892 crio[715]: time="2024-11-04 12:30:37.074150139Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=adc159df-ef52-4a7a-8003-b8a6c6bf2068 name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 12:30:37 default-k8s-diff-port-036892 crio[715]: time="2024-11-04 12:30:37.074205814Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=adc159df-ef52-4a7a-8003-b8a6c6bf2068 name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 12:30:37 default-k8s-diff-port-036892 crio[715]: time="2024-11-04 12:30:37.074446149Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9e9ecf7280a07a47b274097b461b9f3467e388751471e9da1adfae3166380516,PodSandboxId:63dde0eedfb8d2dc8f1fef3fbb14464b019df60274a7b6baadc8d57e687012cb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730722116245829761,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18745f89-fc15-4a4c-b68b-7e80cd4f393b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2daa9e013a548a6a85a13d6376c8f84998afdea5203603471083f9888dd28723,PodSandboxId:715148c45c3ccdb0ca8f9eb3afec309ea7e06c18aa5e22c8cc1026dac37e6e77,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1730722095455175945,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ddc847de-e4e6-4c3d-b91d-835709a0fc1e,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51442200af1bb501b83cd7280eeee93590e5cb241e91cf5af1608a6eccfdf5a1,PodSandboxId:bdd6613591b3ac6bdb8f3bc3145cd2f9f793f9a128c14c90da944eea288da25b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730722093219795018,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-zw2tv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71ce75a4-f051-4014-9ed0-7b275ea940a9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e60ae78d5610ae63ce7016b58d60da05791a055324eea67efd0feb374bdd4b4,PodSandboxId:a0029a9d0f6992e93adae3e3901e285958292ff56d2ea538267b1812f994cdb7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730722085454886824,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j2srm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9450cebd-a
efb-4f1a-bb99-7d1dab054dd7,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8d8096ede6a877ac87491c90844f6cccb8fad4309f4a01e86bf1f7e3a4c9823,PodSandboxId:63dde0eedfb8d2dc8f1fef3fbb14464b019df60274a7b6baadc8d57e687012cb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1730722085420997112,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18745f89-fc15-4a4c-b68b
-7e80cd4f393b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c33ea99d25624abd85dcac43e17e1dd6dcea3a7b6333e91e8dc99ad02c037e07,PodSandboxId:da5c364a1d9a4546aad1aa3a3846f63c091adaa50442c5400adac188a78360ed,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730722081010046880,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-036892,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d35e6b1145643d0efcfc
d4f272e0a6f,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e1787441f88b7be6fbfb08abfa621dbc26e984bbd40544c92bf02bae7c7709a,PodSandboxId:a09666f80e3ece07b2519ef7517aa8ae9e7635c0a74127c95d2f2e28e7f92431,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730722080992915019,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-036892,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8278064e03f128ec447844
a988b7d9b,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1346cefb5059440d0c41bba9a5526748e6c783b8f935f34dcf419f728abcd35e,PodSandboxId:b461843050d213d7949ade519775a62037be2b31ff8de72478643015d7f9c4ee,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730722080983525413,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-036892,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: 5e254e23fc4144569eb1973ac1dd1e60,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bc906f9e4e940157a2bb10397ebc3ba90f93123792d5de597633f6c5b3c64d7,PodSandboxId:35aa8150d803368ef95b4a27e05df9c96245cdbcc529ead202eeade3475dda06,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730722081002731797,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-036892,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c538bf12a0f213511743ecaca4b746
f1,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=adc159df-ef52-4a7a-8003-b8a6c6bf2068 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	9e9ecf7280a07       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      22 minutes ago      Running             storage-provisioner       2                   63dde0eedfb8d       storage-provisioner
	2daa9e013a548       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   22 minutes ago      Running             busybox                   1                   715148c45c3cc       busybox
	51442200af1bb       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      22 minutes ago      Running             coredns                   1                   bdd6613591b3a       coredns-7c65d6cfc9-zw2tv
	9e60ae78d5610       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                      22 minutes ago      Running             kube-proxy                1                   a0029a9d0f699       kube-proxy-j2srm
	f8d8096ede6a8       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      22 minutes ago      Exited              storage-provisioner       1                   63dde0eedfb8d       storage-provisioner
	c33ea99d25624       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                      22 minutes ago      Running             kube-scheduler            1                   da5c364a1d9a4       kube-scheduler-default-k8s-diff-port-036892
	1bc906f9e4e94       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      22 minutes ago      Running             etcd                      1                   35aa8150d8033       etcd-default-k8s-diff-port-036892
	2e1787441f88b       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                      22 minutes ago      Running             kube-apiserver            1                   a09666f80e3ec       kube-apiserver-default-k8s-diff-port-036892
	1346cefb50594       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                      22 minutes ago      Running             kube-controller-manager   1                   b461843050d21       kube-controller-manager-default-k8s-diff-port-036892
	
	
	==> coredns [51442200af1bb501b83cd7280eeee93590e5cb241e91cf5af1608a6eccfdf5a1] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = efc9280483fcc4efbcb768f0bb0eae8d655d9a6698f75ec2e812845ddebe357ede76757b8e27e6a6ace121d67e74d46126ca7de2feaa2fdd891a0e8e676dd4cb
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:52348 - 51558 "HINFO IN 9177553418246579717.8006546208789792964. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.071307243s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-036892
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-036892
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c03dd974d73f8853a1a57928c124797a5ae24dc4
	                    minikube.k8s.io/name=default-k8s-diff-port-036892
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_11_04T12_01_25_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 04 Nov 2024 12:01:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-036892
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 04 Nov 2024 12:30:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 04 Nov 2024 12:29:01 +0000   Mon, 04 Nov 2024 12:01:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 04 Nov 2024 12:29:01 +0000   Mon, 04 Nov 2024 12:01:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 04 Nov 2024 12:29:01 +0000   Mon, 04 Nov 2024 12:01:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 04 Nov 2024 12:29:01 +0000   Mon, 04 Nov 2024 12:08:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.130
	  Hostname:    default-k8s-diff-port-036892
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 6d4dd104d5b64dbfb562ff8f868b347e
	  System UUID:                6d4dd104-d5b6-4dbf-b562-ff8f868b347e
	  Boot ID:                    e89b510a-06e9-4ef5-83b8-ce13092721c7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 coredns-7c65d6cfc9-zw2tv                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     29m
	  kube-system                 etcd-default-k8s-diff-port-036892                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         29m
	  kube-system                 kube-apiserver-default-k8s-diff-port-036892             250m (12%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-036892    200m (10%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-proxy-j2srm                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-scheduler-default-k8s-diff-port-036892             100m (5%)     0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 metrics-server-6867b74b74-2wl94                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         28m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 29m                kube-proxy       
	  Normal  Starting                 22m                kube-proxy       
	  Normal  NodeHasSufficientPID     29m (x7 over 29m)  kubelet          Node default-k8s-diff-port-036892 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    29m (x8 over 29m)  kubelet          Node default-k8s-diff-port-036892 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  29m (x8 over 29m)  kubelet          Node default-k8s-diff-port-036892 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 29m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  29m                kubelet          Node default-k8s-diff-port-036892 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29m                kubelet          Node default-k8s-diff-port-036892 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29m                kubelet          Node default-k8s-diff-port-036892 status is now: NodeHasSufficientPID
	  Normal  NodeReady                29m                kubelet          Node default-k8s-diff-port-036892 status is now: NodeReady
	  Normal  RegisteredNode           29m                node-controller  Node default-k8s-diff-port-036892 event: Registered Node default-k8s-diff-port-036892 in Controller
	  Normal  Starting                 22m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  22m (x8 over 22m)  kubelet          Node default-k8s-diff-port-036892 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22m (x8 over 22m)  kubelet          Node default-k8s-diff-port-036892 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22m (x7 over 22m)  kubelet          Node default-k8s-diff-port-036892 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           22m                node-controller  Node default-k8s-diff-port-036892 event: Registered Node default-k8s-diff-port-036892 in Controller
	
	
	==> dmesg <==
	[Nov 4 12:07] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.046872] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.038833] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.888579] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.778977] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.417030] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.067376] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +0.058345] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.064977] systemd-fstab-generator[650]: Ignoring "noauto" option for root device
	[  +0.172794] systemd-fstab-generator[664]: Ignoring "noauto" option for root device
	[  +0.155941] systemd-fstab-generator[677]: Ignoring "noauto" option for root device
	[  +0.301994] systemd-fstab-generator[706]: Ignoring "noauto" option for root device
	[  +4.279220] systemd-fstab-generator[797]: Ignoring "noauto" option for root device
	[  +0.062203] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.739363] systemd-fstab-generator[919]: Ignoring "noauto" option for root device
	[Nov 4 12:08] kauditd_printk_skb: 97 callbacks suppressed
	[  +1.942612] systemd-fstab-generator[1531]: Ignoring "noauto" option for root device
	[  +3.766955] kauditd_printk_skb: 64 callbacks suppressed
	[  +5.875778] kauditd_printk_skb: 46 callbacks suppressed
	
	
	==> etcd [1bc906f9e4e940157a2bb10397ebc3ba90f93123792d5de597633f6c5b3c64d7] <==
	{"level":"info","ts":"2024-11-04T12:28:02.839751Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2000551304,"revision":1344,"compact-revision":1100}
	{"level":"info","ts":"2024-11-04T12:28:59.563862Z","caller":"traceutil/trace.go:171","msg":"trace[471217728] transaction","detail":"{read_only:false; response_revision:1632; number_of_response:1; }","duration":"272.119327ms","start":"2024-11-04T12:28:59.291724Z","end":"2024-11-04T12:28:59.563843Z","steps":["trace[471217728] 'process raft request'  (duration: 272.009566ms)"],"step_count":1}
	{"level":"info","ts":"2024-11-04T12:28:59.564072Z","caller":"traceutil/trace.go:171","msg":"trace[1041434081] linearizableReadLoop","detail":"{readStateIndex:1924; appliedIndex:1924; }","duration":"257.950612ms","start":"2024-11-04T12:28:59.306099Z","end":"2024-11-04T12:28:59.564050Z","steps":["trace[1041434081] 'read index received'  (duration: 257.81854ms)","trace[1041434081] 'applied index is now lower than readState.Index'  (duration: 130.588µs)"],"step_count":2}
	{"level":"warn","ts":"2024-11-04T12:28:59.564256Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"258.138086ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-11-04T12:28:59.565109Z","caller":"traceutil/trace.go:171","msg":"trace[1193471370] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1632; }","duration":"259.002856ms","start":"2024-11-04T12:28:59.306095Z","end":"2024-11-04T12:28:59.565098Z","steps":["trace[1193471370] 'agreement among raft nodes before linearized reading'  (duration: 258.065282ms)"],"step_count":1}
	{"level":"warn","ts":"2024-11-04T12:28:59.819761Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"128.203591ms","expected-duration":"100ms","prefix":"","request":"header:<ID:14102057923079971000 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-node-lease/default-k8s-diff-port-036892\" mod_revision:1625 > success:<request_put:<key:\"/registry/leases/kube-node-lease/default-k8s-diff-port-036892\" value_size:532 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/default-k8s-diff-port-036892\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-11-04T12:28:59.819909Z","caller":"traceutil/trace.go:171","msg":"trace[122216368] linearizableReadLoop","detail":"{readStateIndex:1925; appliedIndex:1924; }","duration":"255.754568ms","start":"2024-11-04T12:28:59.564143Z","end":"2024-11-04T12:28:59.819898Z","steps":["trace[122216368] 'read index received'  (duration: 127.235559ms)","trace[122216368] 'applied index is now lower than readState.Index'  (duration: 128.517669ms)"],"step_count":2}
	{"level":"info","ts":"2024-11-04T12:28:59.820084Z","caller":"traceutil/trace.go:171","msg":"trace[462335977] transaction","detail":"{read_only:false; response_revision:1633; number_of_response:1; }","duration":"407.544567ms","start":"2024-11-04T12:28:59.412532Z","end":"2024-11-04T12:28:59.820076Z","steps":["trace[462335977] 'process raft request'  (duration: 278.965061ms)","trace[462335977] 'compare'  (duration: 128.117592ms)"],"step_count":2}
	{"level":"warn","ts":"2024-11-04T12:28:59.820168Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-11-04T12:28:59.412513Z","time spent":"407.616273ms","remote":"127.0.0.1:37476","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":601,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/default-k8s-diff-port-036892\" mod_revision:1625 > success:<request_put:<key:\"/registry/leases/kube-node-lease/default-k8s-diff-port-036892\" value_size:532 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/default-k8s-diff-port-036892\" > >"}
	{"level":"warn","ts":"2024-11-04T12:28:59.820311Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"429.722482ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/validatingadmissionpolicybindings/\" range_end:\"/registry/validatingadmissionpolicybindings0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-11-04T12:28:59.820351Z","caller":"traceutil/trace.go:171","msg":"trace[230874493] range","detail":"{range_begin:/registry/validatingadmissionpolicybindings/; range_end:/registry/validatingadmissionpolicybindings0; response_count:0; response_revision:1633; }","duration":"429.76483ms","start":"2024-11-04T12:28:59.390580Z","end":"2024-11-04T12:28:59.820345Z","steps":["trace[230874493] 'agreement among raft nodes before linearized reading'  (duration: 429.698418ms)"],"step_count":1}
	{"level":"warn","ts":"2024-11-04T12:28:59.820453Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-11-04T12:28:59.390547Z","time spent":"429.834663ms","remote":"127.0.0.1:37716","response type":"/etcdserverpb.KV/Range","request count":0,"request size":94,"response count":0,"response size":28,"request content":"key:\"/registry/validatingadmissionpolicybindings/\" range_end:\"/registry/validatingadmissionpolicybindings0\" count_only:true "}
	{"level":"warn","ts":"2024-11-04T12:28:59.820608Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"254.247684ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-11-04T12:28:59.820659Z","caller":"traceutil/trace.go:171","msg":"trace[878491871] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1633; }","duration":"254.296972ms","start":"2024-11-04T12:28:59.566354Z","end":"2024-11-04T12:28:59.820651Z","steps":["trace[878491871] 'agreement among raft nodes before linearized reading'  (duration: 254.234645ms)"],"step_count":1}
	{"level":"info","ts":"2024-11-04T12:29:01.378026Z","caller":"traceutil/trace.go:171","msg":"trace[1429986060] transaction","detail":"{read_only:false; response_revision:1635; number_of_response:1; }","duration":"161.783568ms","start":"2024-11-04T12:29:01.216229Z","end":"2024-11-04T12:29:01.378013Z","steps":["trace[1429986060] 'process raft request'  (duration: 161.474933ms)"],"step_count":1}
	{"level":"warn","ts":"2024-11-04T12:29:50.133483Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"125.967354ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-11-04T12:29:50.133575Z","caller":"traceutil/trace.go:171","msg":"trace[86880137] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1675; }","duration":"126.070498ms","start":"2024-11-04T12:29:50.007489Z","end":"2024-11-04T12:29:50.133559Z","steps":["trace[86880137] 'range keys from in-memory index tree'  (duration: 125.9549ms)"],"step_count":1}
	{"level":"warn","ts":"2024-11-04T12:29:50.133727Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"110.489136ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-11-04T12:29:50.133819Z","caller":"traceutil/trace.go:171","msg":"trace[2003271172] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1675; }","duration":"110.58977ms","start":"2024-11-04T12:29:50.023216Z","end":"2024-11-04T12:29:50.133806Z","steps":["trace[2003271172] 'range keys from in-memory index tree'  (duration: 110.442754ms)"],"step_count":1}
	{"level":"warn","ts":"2024-11-04T12:29:51.134147Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"100.35407ms","expected-duration":"100ms","prefix":"","request":"header:<ID:14102057923079971304 > lease_revoke:<id:43b492f7132a2191>","response":"size:28"}
	{"level":"info","ts":"2024-11-04T12:29:51.134234Z","caller":"traceutil/trace.go:171","msg":"trace[1489030830] linearizableReadLoop","detail":"{readStateIndex:1979; appliedIndex:1978; }","duration":"110.98094ms","start":"2024-11-04T12:29:51.023241Z","end":"2024-11-04T12:29:51.134222Z","steps":["trace[1489030830] 'read index received'  (duration: 10.364044ms)","trace[1489030830] 'applied index is now lower than readState.Index'  (duration: 100.616071ms)"],"step_count":2}
	{"level":"warn","ts":"2024-11-04T12:29:51.134316Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"111.06292ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-11-04T12:29:51.134345Z","caller":"traceutil/trace.go:171","msg":"trace[738663889] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1676; }","duration":"111.102689ms","start":"2024-11-04T12:29:51.023236Z","end":"2024-11-04T12:29:51.134339Z","steps":["trace[738663889] 'agreement among raft nodes before linearized reading'  (duration: 111.044531ms)"],"step_count":1}
	{"level":"warn","ts":"2024-11-04T12:29:51.134587Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"100.245473ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-11-04T12:29:51.134621Z","caller":"traceutil/trace.go:171","msg":"trace[1839355732] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1676; }","duration":"100.283528ms","start":"2024-11-04T12:29:51.034332Z","end":"2024-11-04T12:29:51.134615Z","steps":["trace[1839355732] 'agreement among raft nodes before linearized reading'  (duration: 100.232263ms)"],"step_count":1}
	
	
	==> kernel <==
	 12:30:37 up 22 min,  0 users,  load average: 0.10, 0.13, 0.09
	Linux default-k8s-diff-port-036892 5.10.207 #1 SMP Wed Oct 30 13:38:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [2e1787441f88b7be6fbfb08abfa621dbc26e984bbd40544c92bf02bae7c7709a] <==
	I1104 12:26:05.056673       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1104 12:26:05.056746       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1104 12:28:04.056453       1 handler_proxy.go:99] no RequestInfo found in the context
	E1104 12:28:04.056522       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1104 12:28:05.058852       1 handler_proxy.go:99] no RequestInfo found in the context
	E1104 12:28:05.058950       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1104 12:28:05.059017       1 handler_proxy.go:99] no RequestInfo found in the context
	E1104 12:28:05.059045       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1104 12:28:05.060089       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1104 12:28:05.060177       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1104 12:29:05.060903       1 handler_proxy.go:99] no RequestInfo found in the context
	W1104 12:29:05.060972       1 handler_proxy.go:99] no RequestInfo found in the context
	E1104 12:29:05.061121       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E1104 12:29:05.061231       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1104 12:29:05.062322       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1104 12:29:05.062374       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [1346cefb5059440d0c41bba9a5526748e6c783b8f935f34dcf419f728abcd35e] <==
	E1104 12:25:07.856132       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1104 12:25:08.330318       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1104 12:25:37.864037       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1104 12:25:38.336491       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1104 12:26:07.870952       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1104 12:26:08.345927       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1104 12:26:37.876244       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1104 12:26:38.352925       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1104 12:27:07.882304       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1104 12:27:08.361251       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1104 12:27:37.891085       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1104 12:27:38.367514       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1104 12:28:07.897120       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1104 12:28:08.374544       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1104 12:28:37.903268       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1104 12:28:38.382166       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1104 12:29:01.381290       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-036892"
	E1104 12:29:07.909101       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1104 12:29:08.389509       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1104 12:29:15.066153       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="150.825µs"
	I1104 12:29:27.061796       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="81.684µs"
	E1104 12:29:37.915454       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1104 12:29:38.396299       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1104 12:30:07.921020       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1104 12:30:08.403123       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [9e60ae78d5610ae63ce7016b58d60da05791a055324eea67efd0feb374bdd4b4] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1104 12:08:05.657346       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1104 12:08:05.674182       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.72.130"]
	E1104 12:08:05.674247       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1104 12:08:05.747188       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1104 12:08:05.747217       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1104 12:08:05.747239       1 server_linux.go:169] "Using iptables Proxier"
	I1104 12:08:05.749572       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1104 12:08:05.749987       1 server.go:483] "Version info" version="v1.31.2"
	I1104 12:08:05.750203       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1104 12:08:05.752623       1 config.go:328] "Starting node config controller"
	I1104 12:08:05.752682       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1104 12:08:05.754355       1 config.go:199] "Starting service config controller"
	I1104 12:08:05.754378       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1104 12:08:05.754431       1 config.go:105] "Starting endpoint slice config controller"
	I1104 12:08:05.754436       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1104 12:08:05.853324       1 shared_informer.go:320] Caches are synced for node config
	I1104 12:08:05.854480       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1104 12:08:05.854524       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [c33ea99d25624abd85dcac43e17e1dd6dcea3a7b6333e91e8dc99ad02c037e07] <==
	I1104 12:08:01.910029       1 serving.go:386] Generated self-signed cert in-memory
	W1104 12:08:04.014787       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1104 12:08:04.014824       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1104 12:08:04.014834       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1104 12:08:04.014840       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1104 12:08:04.075552       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.2"
	I1104 12:08:04.075620       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1104 12:08:04.077999       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1104 12:08:04.078120       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1104 12:08:04.078156       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1104 12:08:04.078170       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1104 12:08:04.179044       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 04 12:29:27 default-k8s-diff-port-036892 kubelet[926]: E1104 12:29:27.048338     926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-2wl94" podUID="7f7cc9c1-420c-480e-b6b7-1a2027bf2f9d"
	Nov 04 12:29:30 default-k8s-diff-port-036892 kubelet[926]: E1104 12:29:30.344839     926 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730723370344548174,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 04 12:29:30 default-k8s-diff-port-036892 kubelet[926]: E1104 12:29:30.344874     926 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730723370344548174,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 04 12:29:39 default-k8s-diff-port-036892 kubelet[926]: E1104 12:29:39.049687     926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-2wl94" podUID="7f7cc9c1-420c-480e-b6b7-1a2027bf2f9d"
	Nov 04 12:29:40 default-k8s-diff-port-036892 kubelet[926]: E1104 12:29:40.346051     926 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730723380345489688,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 04 12:29:40 default-k8s-diff-port-036892 kubelet[926]: E1104 12:29:40.346149     926 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730723380345489688,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 04 12:29:50 default-k8s-diff-port-036892 kubelet[926]: E1104 12:29:50.347256     926 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730723390347039234,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 04 12:29:50 default-k8s-diff-port-036892 kubelet[926]: E1104 12:29:50.347280     926 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730723390347039234,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 04 12:29:52 default-k8s-diff-port-036892 kubelet[926]: E1104 12:29:52.050381     926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-2wl94" podUID="7f7cc9c1-420c-480e-b6b7-1a2027bf2f9d"
	Nov 04 12:30:00 default-k8s-diff-port-036892 kubelet[926]: E1104 12:30:00.076958     926 iptables.go:577] "Could not set up iptables canary" err=<
	Nov 04 12:30:00 default-k8s-diff-port-036892 kubelet[926]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Nov 04 12:30:00 default-k8s-diff-port-036892 kubelet[926]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 04 12:30:00 default-k8s-diff-port-036892 kubelet[926]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 04 12:30:00 default-k8s-diff-port-036892 kubelet[926]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Nov 04 12:30:00 default-k8s-diff-port-036892 kubelet[926]: E1104 12:30:00.348693     926 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730723400348286049,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 04 12:30:00 default-k8s-diff-port-036892 kubelet[926]: E1104 12:30:00.348718     926 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730723400348286049,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 04 12:30:07 default-k8s-diff-port-036892 kubelet[926]: E1104 12:30:07.049198     926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-2wl94" podUID="7f7cc9c1-420c-480e-b6b7-1a2027bf2f9d"
	Nov 04 12:30:10 default-k8s-diff-port-036892 kubelet[926]: E1104 12:30:10.350894     926 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730723410349959470,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 04 12:30:10 default-k8s-diff-port-036892 kubelet[926]: E1104 12:30:10.350929     926 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730723410349959470,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 04 12:30:19 default-k8s-diff-port-036892 kubelet[926]: E1104 12:30:19.048763     926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-2wl94" podUID="7f7cc9c1-420c-480e-b6b7-1a2027bf2f9d"
	Nov 04 12:30:20 default-k8s-diff-port-036892 kubelet[926]: E1104 12:30:20.352308     926 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730723420351731194,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 04 12:30:20 default-k8s-diff-port-036892 kubelet[926]: E1104 12:30:20.352946     926 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730723420351731194,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 04 12:30:30 default-k8s-diff-port-036892 kubelet[926]: E1104 12:30:30.049482     926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-2wl94" podUID="7f7cc9c1-420c-480e-b6b7-1a2027bf2f9d"
	Nov 04 12:30:30 default-k8s-diff-port-036892 kubelet[926]: E1104 12:30:30.355885     926 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730723430354503706,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 04 12:30:30 default-k8s-diff-port-036892 kubelet[926]: E1104 12:30:30.355973     926 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730723430354503706,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [9e9ecf7280a07a47b274097b461b9f3467e388751471e9da1adfae3166380516] <==
	I1104 12:08:36.317612       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1104 12:08:36.326812       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1104 12:08:36.326926       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1104 12:08:53.724287       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1104 12:08:53.724511       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-036892_8faba631-96c2-45db-944a-7948a126e32b!
	I1104 12:08:53.725980       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"36bca52f-4741-4cfb-b07f-d82a6fe85686", APIVersion:"v1", ResourceVersion:"637", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-036892_8faba631-96c2-45db-944a-7948a126e32b became leader
	I1104 12:08:53.825109       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-036892_8faba631-96c2-45db-944a-7948a126e32b!
	
	
	==> storage-provisioner [f8d8096ede6a877ac87491c90844f6cccb8fad4309f4a01e86bf1f7e3a4c9823] <==
	I1104 12:08:05.525458       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1104 12:08:35.529117       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-036892 -n default-k8s-diff-port-036892
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-036892 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-2wl94
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-036892 describe pod metrics-server-6867b74b74-2wl94
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-036892 describe pod metrics-server-6867b74b74-2wl94: exit status 1 (59.874586ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-2wl94" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-036892 describe pod metrics-server-6867b74b74-2wl94: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (542.05s)
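Note: the describe step above appears to have raced the pod's deletion (metrics-server-6867b74b74-2wl94 was listed as non-running one command earlier, then NotFound by the time describe ran). A rough manual re-check against the same profile, assuming the default-k8s-diff-port-036892 kubeconfig context is still reachable, would be:

	kubectl --context default-k8s-diff-port-036892 get pods -A --field-selector=status.phase!=Running
	kubectl --context default-k8s-diff-port-036892 -n kube-system describe deploy metrics-server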

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (364s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-908370 -n no-preload-908370
start_stop_delete_test.go:287: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-11-04 12:28:28.858923931 +0000 UTC m=+6700.213982708
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-908370 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-908370 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.249µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-908370 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
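Note: the assertion at start_stop_delete_test.go:297 checks that the dashboard-metrics-scraper deployment's image contains registry.k8s.io/echoserver:1.4, i.e. the override passed via --images=MetricsScraper=registry.k8s.io/echoserver:1.4 in the Audit log below. A hand re-run of that check, assuming the no-preload-908370 context is still live, could look like:

	kubectl --context no-preload-908370 -n kubernetes-dashboard \
	  get deploy dashboard-metrics-scraper -o jsonpath='{.spec.template.spec.containers[*].image}'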
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-908370 -n no-preload-908370
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-908370 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-908370 logs -n 25: (1.185729113s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p calico-528108 sudo                                  | calico-528108                | jenkins | v1.34.0 | 04 Nov 24 12:00 UTC | 04 Nov 24 12:00 UTC |
	|         | systemctl cat containerd                               |                              |         |         |                     |                     |
	|         | --no-pager                                             |                              |         |         |                     |                     |
	| ssh     | -p calico-528108 sudo cat                              | calico-528108                | jenkins | v1.34.0 | 04 Nov 24 12:00 UTC | 04 Nov 24 12:00 UTC |
	|         | /lib/systemd/system/containerd.service                 |                              |         |         |                     |                     |
	| ssh     | -p calico-528108 sudo cat                              | calico-528108                | jenkins | v1.34.0 | 04 Nov 24 12:00 UTC | 04 Nov 24 12:00 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p calico-528108 sudo                                  | calico-528108                | jenkins | v1.34.0 | 04 Nov 24 12:00 UTC | 04 Nov 24 12:00 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p calico-528108 sudo                                  | calico-528108                | jenkins | v1.34.0 | 04 Nov 24 12:00 UTC | 04 Nov 24 12:00 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p calico-528108 sudo                                  | calico-528108                | jenkins | v1.34.0 | 04 Nov 24 12:00 UTC | 04 Nov 24 12:00 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p calico-528108 sudo find                             | calico-528108                | jenkins | v1.34.0 | 04 Nov 24 12:00 UTC | 04 Nov 24 12:00 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p calico-528108 sudo crio                             | calico-528108                | jenkins | v1.34.0 | 04 Nov 24 12:00 UTC | 04 Nov 24 12:00 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p calico-528108                                       | calico-528108                | jenkins | v1.34.0 | 04 Nov 24 12:00 UTC | 04 Nov 24 12:00 UTC |
	| delete  | -p                                                     | disable-driver-mounts-457408 | jenkins | v1.34.0 | 04 Nov 24 12:00 UTC | 04 Nov 24 12:00 UTC |
	|         | disable-driver-mounts-457408                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-036892 | jenkins | v1.34.0 | 04 Nov 24 12:00 UTC | 04 Nov 24 12:01 UTC |
	|         | default-k8s-diff-port-036892                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-036892  | default-k8s-diff-port-036892 | jenkins | v1.34.0 | 04 Nov 24 12:01 UTC | 04 Nov 24 12:01 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-036892 | jenkins | v1.34.0 | 04 Nov 24 12:01 UTC |                     |
	|         | default-k8s-diff-port-036892                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-908370                  | no-preload-908370            | jenkins | v1.34.0 | 04 Nov 24 12:02 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-908370                                   | no-preload-908370            | jenkins | v1.34.0 | 04 Nov 24 12:02 UTC | 04 Nov 24 12:13 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-325116                 | embed-certs-325116           | jenkins | v1.34.0 | 04 Nov 24 12:02 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-589257        | old-k8s-version-589257       | jenkins | v1.34.0 | 04 Nov 24 12:02 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p embed-certs-325116                                  | embed-certs-325116           | jenkins | v1.34.0 | 04 Nov 24 12:02 UTC | 04 Nov 24 12:12 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-036892       | default-k8s-diff-port-036892 | jenkins | v1.34.0 | 04 Nov 24 12:04 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-589257                              | old-k8s-version-589257       | jenkins | v1.34.0 | 04 Nov 24 12:04 UTC | 04 Nov 24 12:04 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-036892 | jenkins | v1.34.0 | 04 Nov 24 12:04 UTC | 04 Nov 24 12:12 UTC |
	|         | default-k8s-diff-port-036892                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-589257             | old-k8s-version-589257       | jenkins | v1.34.0 | 04 Nov 24 12:04 UTC | 04 Nov 24 12:04 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-589257                              | old-k8s-version-589257       | jenkins | v1.34.0 | 04 Nov 24 12:04 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-589257                              | old-k8s-version-589257       | jenkins | v1.34.0 | 04 Nov 24 12:28 UTC | 04 Nov 24 12:28 UTC |
	| start   | -p newest-cni-374564 --memory=2200 --alsologtostderr   | newest-cni-374564            | jenkins | v1.34.0 | 04 Nov 24 12:28 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
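	(Editorial note, not part of the captured history: the last entry in the table above is the start that the "Last Start" log below traces. Reassembled from the wrapped table cells, the invocation would read roughly as follows; the flags are exactly those listed in the table, while the bare "minikube" binary name is an assumption — the jobs actually invoke the build under test, out/minikube-linux-amd64 per MINIKUBE_BIN in the log below.)
	
	  minikube start -p newest-cni-374564 --memory=2200 --alsologtostderr \
	    --wait=apiserver,system_pods,default_sa \
	    --feature-gates ServerSideApply=true \
	    --network-plugin=cni \
	    --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
	    --driver=kvm2 --container-runtime=crio \
	    --kubernetes-version=v1.31.2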
	
	
	==> Last Start <==
	Log file created at: 2024/11/04 12:28:26
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1104 12:28:26.031348   93099 out.go:345] Setting OutFile to fd 1 ...
	I1104 12:28:26.031583   93099 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1104 12:28:26.031592   93099 out.go:358] Setting ErrFile to fd 2...
	I1104 12:28:26.031596   93099 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1104 12:28:26.031816   93099 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19906-19898/.minikube/bin
	I1104 12:28:26.032390   93099 out.go:352] Setting JSON to false
	I1104 12:28:26.033458   93099 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":11457,"bootTime":1730711849,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1104 12:28:26.033558   93099 start.go:139] virtualization: kvm guest
	I1104 12:28:26.035932   93099 out.go:177] * [newest-cni-374564] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1104 12:28:26.037153   93099 out.go:177]   - MINIKUBE_LOCATION=19906
	I1104 12:28:26.037149   93099 notify.go:220] Checking for updates...
	I1104 12:28:26.039600   93099 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1104 12:28:26.040638   93099 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19906-19898/kubeconfig
	I1104 12:28:26.041742   93099 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19906-19898/.minikube
	I1104 12:28:26.042804   93099 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1104 12:28:26.043976   93099 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1104 12:28:26.045588   93099 config.go:182] Loaded profile config "default-k8s-diff-port-036892": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1104 12:28:26.045710   93099 config.go:182] Loaded profile config "embed-certs-325116": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1104 12:28:26.045845   93099 config.go:182] Loaded profile config "no-preload-908370": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1104 12:28:26.045960   93099 driver.go:394] Setting default libvirt URI to qemu:///system
	I1104 12:28:26.084159   93099 out.go:177] * Using the kvm2 driver based on user configuration
	I1104 12:28:26.085469   93099 start.go:297] selected driver: kvm2
	I1104 12:28:26.085485   93099 start.go:901] validating driver "kvm2" against <nil>
	I1104 12:28:26.085500   93099 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1104 12:28:26.086349   93099 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1104 12:28:26.086436   93099 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19906-19898/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1104 12:28:26.104641   93099 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1104 12:28:26.104690   93099 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W1104 12:28:26.104728   93099 out.go:270] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1104 12:28:26.104996   93099 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1104 12:28:26.105035   93099 cni.go:84] Creating CNI manager for ""
	I1104 12:28:26.105067   93099 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1104 12:28:26.105076   93099 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1104 12:28:26.105130   93099 start.go:340] cluster config:
	{Name:newest-cni-374564 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:newest-cni-374564 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false C
ustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1104 12:28:26.105247   93099 iso.go:125] acquiring lock: {Name:mk00c7dd6e02d348844f079f7574057e15cae010 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1104 12:28:26.107718   93099 out.go:177] * Starting "newest-cni-374564" primary control-plane node in "newest-cni-374564" cluster
	I1104 12:28:26.108868   93099 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1104 12:28:26.108906   93099 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1104 12:28:26.108913   93099 cache.go:56] Caching tarball of preloaded images
	I1104 12:28:26.108980   93099 preload.go:172] Found /home/jenkins/minikube-integration/19906-19898/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1104 12:28:26.108991   93099 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1104 12:28:26.109069   93099 profile.go:143] Saving config to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/newest-cni-374564/config.json ...
	I1104 12:28:26.109085   93099 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/newest-cni-374564/config.json: {Name:mke6f417518eaaf58f73c80ff80519f51eb2dc8c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 12:28:26.109274   93099 start.go:360] acquireMachinesLock for newest-cni-374564: {Name:mka4ed965095cfb58c9b0f5916edff39b4236a2f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1104 12:28:26.109324   93099 start.go:364] duration metric: took 27.554µs to acquireMachinesLock for "newest-cni-374564"
	I1104 12:28:26.109349   93099 start.go:93] Provisioning new machine with config: &{Name:newest-cni-374564 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.31.2 ClusterName:newest-cni-374564 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount
9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1104 12:28:26.109443   93099 start.go:125] createHost starting for "" (driver="kvm2")
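	(Editorial note, not part of the captured log: a minimal sketch for inspecting this run on the build host. The profile config written and the preload tarball checked above can be examined directly; both paths are copied verbatim from the log lines.)
	
	  # paths taken from the "Last Start" log above
	  cat /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/newest-cni-374564/config.json
	  ls -lh /home/jenkins/minikube-integration/19906-19898/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4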
	
	
	==> CRI-O <==
	Nov 04 12:28:29 no-preload-908370 crio[703]: time="2024-11-04 12:28:29.510823342Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730723309510796002,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c14886e2-ad5d-4291-9172-88238a81b45d name=/runtime.v1.ImageService/ImageFsInfo
	Nov 04 12:28:29 no-preload-908370 crio[703]: time="2024-11-04 12:28:29.511590750Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=022e517d-48e4-4dbd-8cff-4f70e976555f name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 12:28:29 no-preload-908370 crio[703]: time="2024-11-04 12:28:29.511643428Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=022e517d-48e4-4dbd-8cff-4f70e976555f name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 12:28:29 no-preload-908370 crio[703]: time="2024-11-04 12:28:29.511828007Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d4f6c824f92eecdb4ebc13e783585c25292f412e0f0a0d7b9fd0eab092fa8e41,PodSandboxId:71b9c2ed6c6e155981398f1b0e2ea01fe6fa1e090814ec2859b6f705b8703c7c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730722166447073472,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d11c9416-6236-4c81-9626-d5e040acea8a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d88fa3ae4d36b499a8d6f18f4cca6442025a510017fc7729008bfb5b56c39cb5,PodSandboxId:0d05f2ac4365063d3cd2710a12624b520de2ef9d8bd085bfb67cba38c30a3906,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1730722145461257501,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 211134d2-72ed-4243-818e-81755db54f57,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6dcd13443296397949a6314fd6007ac667c05b70bd43f14e2e9b54f3313440de,PodSandboxId:7933cfebeb6afe3bb96349152367107d7427b22832bafb4f648d56a3df845af5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730722143333511955,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-vv4kq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2518f86-9653-4e98-9193-9d2a76838117,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33418a9cb2f8aea69efe66db3f38e3a66b699fea5455b72e6b94484067b704a3,PodSandboxId:9941f6065c0062fac156e7d39c07019811475186bb9a9ca02516002a86c0156f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730722135746244903,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w9hbz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d494697-ff2b-4600-9c
11-b704de9be2a3,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:162e3330ff77f795d15e11b3abf1d3019490bff78148c8d6b470ef1507dcb67d,PodSandboxId:71b9c2ed6c6e155981398f1b0e2ea01fe6fa1e090814ec2859b6f705b8703c7c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1730722135692369603,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d11c9416-6236-4c81-9626-d5e040acea
8a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1390676564c7e4039f73139668fc6a3c321c186b612f1f1837e21788a5c0aa82,PodSandboxId:033e135e95f2c7e1d82f90fb383c167b1a8dfd9f6624e30379e16e9f5075de0d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730722130930823542,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-908370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7eff80bc42a9693bbf2b1daa559d69a2,},Annotations:map[string]string{io.kuber
netes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e74398c77b3ca0adb85872c2f97209b51f4c13968147f500a41590f72d758dea,PodSandboxId:1ea43d435da914e034af9d2d37c4d064ab7aa027ee415bed08eecf36ccb3f1f8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730722130932750428,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-908370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19ac9ab9ae348d75e1aa7bf64e83b0e1,},Annotations:map[string]string{io.kubernetes.contain
er.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c3fa7870c72468d3a6283be42fb2724275d8c5937f6728c8bc97103c58b2ebd,PodSandboxId:52f547f09dd1b9e4463cc131cde74a2fc68c6f42c8bdf3623a262a6a879f2c71,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730722130884363361,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-908370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e9dfac04069601a52c15f5a2321bfff,},Annotations:map[string]string{io.kube
rnetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5546d06c4d51e3fa5699a7351286904174305e9c759397f8d2f640c6ce17d456,PodSandboxId:c02100f7b4561243c0f92a52bd9ef84896df70a17b0f0f7b3c0b0f155571d8fa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730722130878690593,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-908370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8f32f53f7238f9b51ee01846536440c,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=022e517d-48e4-4dbd-8cff-4f70e976555f name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 12:28:29 no-preload-908370 crio[703]: time="2024-11-04 12:28:29.555606487Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a64d78b3-ffce-480b-bb35-8003a7ecd108 name=/runtime.v1.RuntimeService/Version
	Nov 04 12:28:29 no-preload-908370 crio[703]: time="2024-11-04 12:28:29.555678307Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a64d78b3-ffce-480b-bb35-8003a7ecd108 name=/runtime.v1.RuntimeService/Version
	Nov 04 12:28:29 no-preload-908370 crio[703]: time="2024-11-04 12:28:29.556731753Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c2ab7026-6cd1-40fc-b570-9ac3e07f5060 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 04 12:28:29 no-preload-908370 crio[703]: time="2024-11-04 12:28:29.557119930Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730723309557087162,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c2ab7026-6cd1-40fc-b570-9ac3e07f5060 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 04 12:28:29 no-preload-908370 crio[703]: time="2024-11-04 12:28:29.557959600Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=abc72a57-5dae-46d7-a3d4-14677043dce4 name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 12:28:29 no-preload-908370 crio[703]: time="2024-11-04 12:28:29.558017504Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=abc72a57-5dae-46d7-a3d4-14677043dce4 name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 12:28:29 no-preload-908370 crio[703]: time="2024-11-04 12:28:29.558202363Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d4f6c824f92eecdb4ebc13e783585c25292f412e0f0a0d7b9fd0eab092fa8e41,PodSandboxId:71b9c2ed6c6e155981398f1b0e2ea01fe6fa1e090814ec2859b6f705b8703c7c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730722166447073472,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d11c9416-6236-4c81-9626-d5e040acea8a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d88fa3ae4d36b499a8d6f18f4cca6442025a510017fc7729008bfb5b56c39cb5,PodSandboxId:0d05f2ac4365063d3cd2710a12624b520de2ef9d8bd085bfb67cba38c30a3906,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1730722145461257501,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 211134d2-72ed-4243-818e-81755db54f57,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6dcd13443296397949a6314fd6007ac667c05b70bd43f14e2e9b54f3313440de,PodSandboxId:7933cfebeb6afe3bb96349152367107d7427b22832bafb4f648d56a3df845af5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730722143333511955,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-vv4kq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2518f86-9653-4e98-9193-9d2a76838117,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33418a9cb2f8aea69efe66db3f38e3a66b699fea5455b72e6b94484067b704a3,PodSandboxId:9941f6065c0062fac156e7d39c07019811475186bb9a9ca02516002a86c0156f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730722135746244903,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w9hbz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d494697-ff2b-4600-9c
11-b704de9be2a3,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:162e3330ff77f795d15e11b3abf1d3019490bff78148c8d6b470ef1507dcb67d,PodSandboxId:71b9c2ed6c6e155981398f1b0e2ea01fe6fa1e090814ec2859b6f705b8703c7c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1730722135692369603,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d11c9416-6236-4c81-9626-d5e040acea
8a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1390676564c7e4039f73139668fc6a3c321c186b612f1f1837e21788a5c0aa82,PodSandboxId:033e135e95f2c7e1d82f90fb383c167b1a8dfd9f6624e30379e16e9f5075de0d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730722130930823542,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-908370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7eff80bc42a9693bbf2b1daa559d69a2,},Annotations:map[string]string{io.kuber
netes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e74398c77b3ca0adb85872c2f97209b51f4c13968147f500a41590f72d758dea,PodSandboxId:1ea43d435da914e034af9d2d37c4d064ab7aa027ee415bed08eecf36ccb3f1f8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730722130932750428,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-908370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19ac9ab9ae348d75e1aa7bf64e83b0e1,},Annotations:map[string]string{io.kubernetes.contain
er.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c3fa7870c72468d3a6283be42fb2724275d8c5937f6728c8bc97103c58b2ebd,PodSandboxId:52f547f09dd1b9e4463cc131cde74a2fc68c6f42c8bdf3623a262a6a879f2c71,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730722130884363361,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-908370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e9dfac04069601a52c15f5a2321bfff,},Annotations:map[string]string{io.kube
rnetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5546d06c4d51e3fa5699a7351286904174305e9c759397f8d2f640c6ce17d456,PodSandboxId:c02100f7b4561243c0f92a52bd9ef84896df70a17b0f0f7b3c0b0f155571d8fa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730722130878690593,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-908370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8f32f53f7238f9b51ee01846536440c,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=abc72a57-5dae-46d7-a3d4-14677043dce4 name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 12:28:29 no-preload-908370 crio[703]: time="2024-11-04 12:28:29.601238825Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1bca660d-5b2f-4e53-a4e8-e2eb91354bb3 name=/runtime.v1.RuntimeService/Version
	Nov 04 12:28:29 no-preload-908370 crio[703]: time="2024-11-04 12:28:29.601313837Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1bca660d-5b2f-4e53-a4e8-e2eb91354bb3 name=/runtime.v1.RuntimeService/Version
	Nov 04 12:28:29 no-preload-908370 crio[703]: time="2024-11-04 12:28:29.602764517Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=60bacd60-b5b4-406b-a618-7909674117f4 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 04 12:28:29 no-preload-908370 crio[703]: time="2024-11-04 12:28:29.603490411Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730723309603453955,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=60bacd60-b5b4-406b-a618-7909674117f4 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 04 12:28:29 no-preload-908370 crio[703]: time="2024-11-04 12:28:29.604110471Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9e75eac0-cde1-4b96-8416-dd922dd7cce7 name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 12:28:29 no-preload-908370 crio[703]: time="2024-11-04 12:28:29.604171654Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9e75eac0-cde1-4b96-8416-dd922dd7cce7 name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 12:28:29 no-preload-908370 crio[703]: time="2024-11-04 12:28:29.604350806Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d4f6c824f92eecdb4ebc13e783585c25292f412e0f0a0d7b9fd0eab092fa8e41,PodSandboxId:71b9c2ed6c6e155981398f1b0e2ea01fe6fa1e090814ec2859b6f705b8703c7c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730722166447073472,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d11c9416-6236-4c81-9626-d5e040acea8a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d88fa3ae4d36b499a8d6f18f4cca6442025a510017fc7729008bfb5b56c39cb5,PodSandboxId:0d05f2ac4365063d3cd2710a12624b520de2ef9d8bd085bfb67cba38c30a3906,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1730722145461257501,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 211134d2-72ed-4243-818e-81755db54f57,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6dcd13443296397949a6314fd6007ac667c05b70bd43f14e2e9b54f3313440de,PodSandboxId:7933cfebeb6afe3bb96349152367107d7427b22832bafb4f648d56a3df845af5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730722143333511955,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-vv4kq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2518f86-9653-4e98-9193-9d2a76838117,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33418a9cb2f8aea69efe66db3f38e3a66b699fea5455b72e6b94484067b704a3,PodSandboxId:9941f6065c0062fac156e7d39c07019811475186bb9a9ca02516002a86c0156f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730722135746244903,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w9hbz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d494697-ff2b-4600-9c
11-b704de9be2a3,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:162e3330ff77f795d15e11b3abf1d3019490bff78148c8d6b470ef1507dcb67d,PodSandboxId:71b9c2ed6c6e155981398f1b0e2ea01fe6fa1e090814ec2859b6f705b8703c7c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1730722135692369603,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d11c9416-6236-4c81-9626-d5e040acea
8a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1390676564c7e4039f73139668fc6a3c321c186b612f1f1837e21788a5c0aa82,PodSandboxId:033e135e95f2c7e1d82f90fb383c167b1a8dfd9f6624e30379e16e9f5075de0d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730722130930823542,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-908370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7eff80bc42a9693bbf2b1daa559d69a2,},Annotations:map[string]string{io.kuber
netes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e74398c77b3ca0adb85872c2f97209b51f4c13968147f500a41590f72d758dea,PodSandboxId:1ea43d435da914e034af9d2d37c4d064ab7aa027ee415bed08eecf36ccb3f1f8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730722130932750428,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-908370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19ac9ab9ae348d75e1aa7bf64e83b0e1,},Annotations:map[string]string{io.kubernetes.contain
er.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c3fa7870c72468d3a6283be42fb2724275d8c5937f6728c8bc97103c58b2ebd,PodSandboxId:52f547f09dd1b9e4463cc131cde74a2fc68c6f42c8bdf3623a262a6a879f2c71,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730722130884363361,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-908370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e9dfac04069601a52c15f5a2321bfff,},Annotations:map[string]string{io.kube
rnetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5546d06c4d51e3fa5699a7351286904174305e9c759397f8d2f640c6ce17d456,PodSandboxId:c02100f7b4561243c0f92a52bd9ef84896df70a17b0f0f7b3c0b0f155571d8fa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730722130878690593,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-908370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8f32f53f7238f9b51ee01846536440c,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9e75eac0-cde1-4b96-8416-dd922dd7cce7 name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 12:28:29 no-preload-908370 crio[703]: time="2024-11-04 12:28:29.644778943Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ce05cbaf-627d-4c92-b24e-30bc133807cf name=/runtime.v1.RuntimeService/Version
	Nov 04 12:28:29 no-preload-908370 crio[703]: time="2024-11-04 12:28:29.644855673Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ce05cbaf-627d-4c92-b24e-30bc133807cf name=/runtime.v1.RuntimeService/Version
	Nov 04 12:28:29 no-preload-908370 crio[703]: time="2024-11-04 12:28:29.647114877Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b7961469-b382-402d-bbc0-99e43899c8ca name=/runtime.v1.ImageService/ImageFsInfo
	Nov 04 12:28:29 no-preload-908370 crio[703]: time="2024-11-04 12:28:29.647542573Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730723309647506202,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b7961469-b382-402d-bbc0-99e43899c8ca name=/runtime.v1.ImageService/ImageFsInfo
	Nov 04 12:28:29 no-preload-908370 crio[703]: time="2024-11-04 12:28:29.648123576Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2cac1817-8877-4e74-a215-2d1aac9206f7 name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 12:28:29 no-preload-908370 crio[703]: time="2024-11-04 12:28:29.648190259Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2cac1817-8877-4e74-a215-2d1aac9206f7 name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 12:28:29 no-preload-908370 crio[703]: time="2024-11-04 12:28:29.648461388Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d4f6c824f92eecdb4ebc13e783585c25292f412e0f0a0d7b9fd0eab092fa8e41,PodSandboxId:71b9c2ed6c6e155981398f1b0e2ea01fe6fa1e090814ec2859b6f705b8703c7c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730722166447073472,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d11c9416-6236-4c81-9626-d5e040acea8a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d88fa3ae4d36b499a8d6f18f4cca6442025a510017fc7729008bfb5b56c39cb5,PodSandboxId:0d05f2ac4365063d3cd2710a12624b520de2ef9d8bd085bfb67cba38c30a3906,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1730722145461257501,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 211134d2-72ed-4243-818e-81755db54f57,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6dcd13443296397949a6314fd6007ac667c05b70bd43f14e2e9b54f3313440de,PodSandboxId:7933cfebeb6afe3bb96349152367107d7427b22832bafb4f648d56a3df845af5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730722143333511955,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-vv4kq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2518f86-9653-4e98-9193-9d2a76838117,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33418a9cb2f8aea69efe66db3f38e3a66b699fea5455b72e6b94484067b704a3,PodSandboxId:9941f6065c0062fac156e7d39c07019811475186bb9a9ca02516002a86c0156f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730722135746244903,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w9hbz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d494697-ff2b-4600-9c
11-b704de9be2a3,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:162e3330ff77f795d15e11b3abf1d3019490bff78148c8d6b470ef1507dcb67d,PodSandboxId:71b9c2ed6c6e155981398f1b0e2ea01fe6fa1e090814ec2859b6f705b8703c7c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1730722135692369603,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d11c9416-6236-4c81-9626-d5e040acea
8a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1390676564c7e4039f73139668fc6a3c321c186b612f1f1837e21788a5c0aa82,PodSandboxId:033e135e95f2c7e1d82f90fb383c167b1a8dfd9f6624e30379e16e9f5075de0d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730722130930823542,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-908370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7eff80bc42a9693bbf2b1daa559d69a2,},Annotations:map[string]string{io.kuber
netes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e74398c77b3ca0adb85872c2f97209b51f4c13968147f500a41590f72d758dea,PodSandboxId:1ea43d435da914e034af9d2d37c4d064ab7aa027ee415bed08eecf36ccb3f1f8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730722130932750428,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-908370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19ac9ab9ae348d75e1aa7bf64e83b0e1,},Annotations:map[string]string{io.kubernetes.contain
er.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c3fa7870c72468d3a6283be42fb2724275d8c5937f6728c8bc97103c58b2ebd,PodSandboxId:52f547f09dd1b9e4463cc131cde74a2fc68c6f42c8bdf3623a262a6a879f2c71,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730722130884363361,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-908370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e9dfac04069601a52c15f5a2321bfff,},Annotations:map[string]string{io.kube
rnetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5546d06c4d51e3fa5699a7351286904174305e9c759397f8d2f640c6ce17d456,PodSandboxId:c02100f7b4561243c0f92a52bd9ef84896df70a17b0f0f7b3c0b0f155571d8fa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730722130878690593,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-908370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8f32f53f7238f9b51ee01846536440c,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2cac1817-8877-4e74-a215-2d1aac9206f7 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	d4f6c824f92ee       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      19 minutes ago      Running             storage-provisioner       2                   71b9c2ed6c6e1       storage-provisioner
	d88fa3ae4d36b       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   19 minutes ago      Running             busybox                   1                   0d05f2ac43650       busybox
	6dcd134432963       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      19 minutes ago      Running             coredns                   1                   7933cfebeb6af       coredns-7c65d6cfc9-vv4kq
	33418a9cb2f8a       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                      19 minutes ago      Running             kube-proxy                1                   9941f6065c006       kube-proxy-w9hbz
	162e3330ff77f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      19 minutes ago      Exited              storage-provisioner       1                   71b9c2ed6c6e1       storage-provisioner
	e74398c77b3ca       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                      19 minutes ago      Running             kube-apiserver            1                   1ea43d435da91       kube-apiserver-no-preload-908370
	1390676564c7e       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      19 minutes ago      Running             etcd                      1                   033e135e95f2c       etcd-no-preload-908370
	9c3fa7870c724       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                      19 minutes ago      Running             kube-controller-manager   1                   52f547f09dd1b       kube-controller-manager-no-preload-908370
	5546d06c4d51e       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                      19 minutes ago      Running             kube-scheduler            1                   c02100f7b4561       kube-scheduler-no-preload-908370
	
	
	==> coredns [6dcd13443296397949a6314fd6007ac667c05b70bd43f14e2e9b54f3313440de] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:57837 - 23655 "HINFO IN 6065787258555663794.2382023106679684931. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.045711979s
	
	
	==> describe nodes <==
	Name:               no-preload-908370
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-908370
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c03dd974d73f8853a1a57928c124797a5ae24dc4
	                    minikube.k8s.io/name=no-preload-908370
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_11_04T11_59_46_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 04 Nov 2024 11:59:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-908370
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 04 Nov 2024 12:28:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 04 Nov 2024 12:24:43 +0000   Mon, 04 Nov 2024 11:59:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 04 Nov 2024 12:24:43 +0000   Mon, 04 Nov 2024 11:59:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 04 Nov 2024 12:24:43 +0000   Mon, 04 Nov 2024 11:59:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 04 Nov 2024 12:24:43 +0000   Mon, 04 Nov 2024 12:09:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.91
	  Hostname:    no-preload-908370
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d3c247408ace4ec48e7dca6349f98e18
	  System UUID:                d3c24740-8ace-4ec4-8e7d-ca6349f98e18
	  Boot ID:                    8b562791-7b0f-4c3e-8b7e-0c9c5aabd773
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 coredns-7c65d6cfc9-vv4kq                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     28m
	  kube-system                 etcd-no-preload-908370                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         28m
	  kube-system                 kube-apiserver-no-preload-908370             250m (12%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-controller-manager-no-preload-908370    200m (10%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-proxy-w9hbz                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-scheduler-no-preload-908370             100m (5%)     0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 metrics-server-6867b74b74-2lxlg              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         28m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 19m                kube-proxy       
	  Normal  Starting                 28m                kube-proxy       
	  Normal  NodeAllocatableEnforced  28m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     28m (x7 over 28m)  kubelet          Node no-preload-908370 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  28m (x8 over 28m)  kubelet          Node no-preload-908370 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28m (x8 over 28m)  kubelet          Node no-preload-908370 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     28m                kubelet          Node no-preload-908370 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  28m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  28m                kubelet          Node no-preload-908370 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28m                kubelet          Node no-preload-908370 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 28m                kubelet          Starting kubelet.
	  Normal  NodeReady                28m                kubelet          Node no-preload-908370 status is now: NodeReady
	  Normal  RegisteredNode           28m                node-controller  Node no-preload-908370 event: Registered Node no-preload-908370 in Controller
	  Normal  Starting                 19m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  19m (x8 over 19m)  kubelet          Node no-preload-908370 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m (x8 over 19m)  kubelet          Node no-preload-908370 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m (x7 over 19m)  kubelet          Node no-preload-908370 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  19m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           19m                node-controller  Node no-preload-908370 event: Registered Node no-preload-908370 in Controller
	
	
	==> dmesg <==
	[Nov 4 12:08] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.049425] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.038929] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.132883] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.838836] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.538809] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.085643] systemd-fstab-generator[627]: Ignoring "noauto" option for root device
	[  +0.058621] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.070153] systemd-fstab-generator[639]: Ignoring "noauto" option for root device
	[  +0.200310] systemd-fstab-generator[653]: Ignoring "noauto" option for root device
	[  +0.096820] systemd-fstab-generator[665]: Ignoring "noauto" option for root device
	[  +0.259802] systemd-fstab-generator[694]: Ignoring "noauto" option for root device
	[ +15.224691] systemd-fstab-generator[1298]: Ignoring "noauto" option for root device
	[  +0.059593] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.527010] systemd-fstab-generator[1418]: Ignoring "noauto" option for root device
	[  +3.769282] kauditd_printk_skb: 97 callbacks suppressed
	[  +3.695492] systemd-fstab-generator[2054]: Ignoring "noauto" option for root device
	[Nov 4 12:09] kauditd_printk_skb: 61 callbacks suppressed
	[ +25.176073] kauditd_printk_skb: 46 callbacks suppressed
	
	
	==> etcd [1390676564c7e4039f73139668fc6a3c321c186b612f1f1837e21788a5c0aa82] <==
	{"level":"info","ts":"2024-11-04T12:08:51.621518Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"da56312a125ec6d7","local-member-id":"43b28b444dd15097","added-peer-id":"43b28b444dd15097","added-peer-peer-urls":["https://192.168.61.91:2380"]}
	{"level":"info","ts":"2024-11-04T12:08:51.621663Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"da56312a125ec6d7","local-member-id":"43b28b444dd15097","cluster-version":"3.5"}
	{"level":"info","ts":"2024-11-04T12:08:51.621712Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-11-04T12:08:53.263462Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"43b28b444dd15097 is starting a new election at term 2"}
	{"level":"info","ts":"2024-11-04T12:08:53.263521Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"43b28b444dd15097 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-11-04T12:08:53.263559Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"43b28b444dd15097 received MsgPreVoteResp from 43b28b444dd15097 at term 2"}
	{"level":"info","ts":"2024-11-04T12:08:53.263573Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"43b28b444dd15097 became candidate at term 3"}
	{"level":"info","ts":"2024-11-04T12:08:53.263578Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"43b28b444dd15097 received MsgVoteResp from 43b28b444dd15097 at term 3"}
	{"level":"info","ts":"2024-11-04T12:08:53.263587Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"43b28b444dd15097 became leader at term 3"}
	{"level":"info","ts":"2024-11-04T12:08:53.263594Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 43b28b444dd15097 elected leader 43b28b444dd15097 at term 3"}
	{"level":"info","ts":"2024-11-04T12:08:53.280988Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"43b28b444dd15097","local-member-attributes":"{Name:no-preload-908370 ClientURLs:[https://192.168.61.91:2379]}","request-path":"/0/members/43b28b444dd15097/attributes","cluster-id":"da56312a125ec6d7","publish-timeout":"7s"}
	{"level":"info","ts":"2024-11-04T12:08:53.280999Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-11-04T12:08:53.281178Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-11-04T12:08:53.281568Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-11-04T12:08:53.281623Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-11-04T12:08:53.282194Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-11-04T12:08:53.282204Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-11-04T12:08:53.283377Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.91:2379"}
	{"level":"info","ts":"2024-11-04T12:08:53.283977Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-11-04T12:18:53.313861Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":851}
	{"level":"info","ts":"2024-11-04T12:18:53.322016Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":851,"took":"7.699676ms","hash":2669187108,"current-db-size-bytes":2678784,"current-db-size":"2.7 MB","current-db-size-in-use-bytes":2678784,"current-db-size-in-use":"2.7 MB"}
	{"level":"info","ts":"2024-11-04T12:18:53.322106Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2669187108,"revision":851,"compact-revision":-1}
	{"level":"info","ts":"2024-11-04T12:23:53.323719Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1094}
	{"level":"info","ts":"2024-11-04T12:23:53.326822Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1094,"took":"2.878825ms","hash":2024641182,"current-db-size-bytes":2678784,"current-db-size":"2.7 MB","current-db-size-in-use-bytes":1613824,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-11-04T12:23:53.326867Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2024641182,"revision":1094,"compact-revision":851}
	
	
	==> kernel <==
	 12:28:29 up 20 min,  0 users,  load average: 0.01, 0.10, 0.13
	Linux no-preload-908370 5.10.207 #1 SMP Wed Oct 30 13:38:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [e74398c77b3ca0adb85872c2f97209b51f4c13968147f500a41590f72d758dea] <==
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1104 12:23:55.569541       1 handler_proxy.go:99] no RequestInfo found in the context
	E1104 12:23:55.569615       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1104 12:23:55.570650       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1104 12:23:55.570744       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1104 12:24:55.570883       1 handler_proxy.go:99] no RequestInfo found in the context
	E1104 12:24:55.570941       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W1104 12:24:55.570900       1 handler_proxy.go:99] no RequestInfo found in the context
	E1104 12:24:55.571029       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1104 12:24:55.572156       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1104 12:24:55.572189       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1104 12:26:55.572564       1 handler_proxy.go:99] no RequestInfo found in the context
	E1104 12:26:55.572614       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W1104 12:26:55.572584       1 handler_proxy.go:99] no RequestInfo found in the context
	E1104 12:26:55.572785       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1104 12:26:55.573748       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1104 12:26:55.574824       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [9c3fa7870c72468d3a6283be42fb2724275d8c5937f6728c8bc97103c58b2ebd] <==
	E1104 12:23:28.237078       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1104 12:23:28.728624       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1104 12:23:58.243126       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1104 12:23:58.736570       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1104 12:24:28.248996       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1104 12:24:28.747379       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1104 12:24:43.306382       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-908370"
	E1104 12:24:58.254491       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1104 12:24:58.755042       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1104 12:25:19.289014       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="86.388µs"
	E1104 12:25:28.259309       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1104 12:25:28.762752       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1104 12:25:34.288559       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="46.288µs"
	E1104 12:25:58.265318       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1104 12:25:58.771014       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1104 12:26:28.270184       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1104 12:26:28.777925       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1104 12:26:58.278344       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1104 12:26:58.786219       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1104 12:27:28.283892       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1104 12:27:28.793535       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1104 12:27:58.289700       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1104 12:27:58.801927       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1104 12:28:28.295462       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1104 12:28:28.809137       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [33418a9cb2f8aea69efe66db3f38e3a66b699fea5455b72e6b94484067b704a3] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1104 12:08:55.968629       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1104 12:08:55.978964       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.61.91"]
	E1104 12:08:55.979023       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1104 12:08:56.040533       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1104 12:08:56.040616       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1104 12:08:56.040653       1 server_linux.go:169] "Using iptables Proxier"
	I1104 12:08:56.044549       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1104 12:08:56.045493       1 server.go:483] "Version info" version="v1.31.2"
	I1104 12:08:56.045581       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1104 12:08:56.053060       1 config.go:199] "Starting service config controller"
	I1104 12:08:56.053128       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1104 12:08:56.053166       1 config.go:105] "Starting endpoint slice config controller"
	I1104 12:08:56.053182       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1104 12:08:56.054946       1 config.go:328] "Starting node config controller"
	I1104 12:08:56.054997       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1104 12:08:56.153617       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1104 12:08:56.153738       1 shared_informer.go:320] Caches are synced for service config
	I1104 12:08:56.155156       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [5546d06c4d51e3fa5699a7351286904174305e9c759397f8d2f640c6ce17d456] <==
	I1104 12:08:52.195934       1 serving.go:386] Generated self-signed cert in-memory
	W1104 12:08:54.474953       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1104 12:08:54.475050       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1104 12:08:54.475063       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1104 12:08:54.475070       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1104 12:08:54.607183       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.2"
	I1104 12:08:54.607212       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1104 12:08:54.609870       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1104 12:08:54.609986       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1104 12:08:54.610207       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1104 12:08:54.610223       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1104 12:08:54.711129       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 04 12:27:18 no-preload-908370 kubelet[1425]: E1104 12:27:18.274026    1425 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-2lxlg" podUID="bf328856-ad19-47b3-a40d-282cd4fdec4b"
	Nov 04 12:27:20 no-preload-908370 kubelet[1425]: E1104 12:27:20.516903    1425 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730723240516284298,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 04 12:27:20 no-preload-908370 kubelet[1425]: E1104 12:27:20.519005    1425 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730723240516284298,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 04 12:27:30 no-preload-908370 kubelet[1425]: E1104 12:27:30.521023    1425 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730723250520537355,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 04 12:27:30 no-preload-908370 kubelet[1425]: E1104 12:27:30.521277    1425 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730723250520537355,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 04 12:27:33 no-preload-908370 kubelet[1425]: E1104 12:27:33.273637    1425 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-2lxlg" podUID="bf328856-ad19-47b3-a40d-282cd4fdec4b"
	Nov 04 12:27:40 no-preload-908370 kubelet[1425]: E1104 12:27:40.522360    1425 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730723260522113780,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 04 12:27:40 no-preload-908370 kubelet[1425]: E1104 12:27:40.522433    1425 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730723260522113780,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 04 12:27:45 no-preload-908370 kubelet[1425]: E1104 12:27:45.272744    1425 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-2lxlg" podUID="bf328856-ad19-47b3-a40d-282cd4fdec4b"
	Nov 04 12:27:50 no-preload-908370 kubelet[1425]: E1104 12:27:50.296971    1425 iptables.go:577] "Could not set up iptables canary" err=<
	Nov 04 12:27:50 no-preload-908370 kubelet[1425]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Nov 04 12:27:50 no-preload-908370 kubelet[1425]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 04 12:27:50 no-preload-908370 kubelet[1425]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 04 12:27:50 no-preload-908370 kubelet[1425]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Nov 04 12:27:50 no-preload-908370 kubelet[1425]: E1104 12:27:50.527365    1425 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730723270523377435,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 04 12:27:50 no-preload-908370 kubelet[1425]: E1104 12:27:50.527416    1425 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730723270523377435,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 04 12:27:58 no-preload-908370 kubelet[1425]: E1104 12:27:58.272841    1425 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-2lxlg" podUID="bf328856-ad19-47b3-a40d-282cd4fdec4b"
	Nov 04 12:28:00 no-preload-908370 kubelet[1425]: E1104 12:28:00.530864    1425 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730723280530380729,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 04 12:28:00 no-preload-908370 kubelet[1425]: E1104 12:28:00.530907    1425 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730723280530380729,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 04 12:28:10 no-preload-908370 kubelet[1425]: E1104 12:28:10.532319    1425 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730723290531911607,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 04 12:28:10 no-preload-908370 kubelet[1425]: E1104 12:28:10.532909    1425 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730723290531911607,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 04 12:28:11 no-preload-908370 kubelet[1425]: E1104 12:28:11.273470    1425 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-2lxlg" podUID="bf328856-ad19-47b3-a40d-282cd4fdec4b"
	Nov 04 12:28:20 no-preload-908370 kubelet[1425]: E1104 12:28:20.534227    1425 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730723300533983286,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 04 12:28:20 no-preload-908370 kubelet[1425]: E1104 12:28:20.534264    1425 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730723300533983286,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 04 12:28:24 no-preload-908370 kubelet[1425]: E1104 12:28:24.273912    1425 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-2lxlg" podUID="bf328856-ad19-47b3-a40d-282cd4fdec4b"
	
	
	==> storage-provisioner [162e3330ff77f795d15e11b3abf1d3019490bff78148c8d6b470ef1507dcb67d] <==
	I1104 12:08:55.798706       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1104 12:09:25.811449       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [d4f6c824f92eecdb4ebc13e783585c25292f412e0f0a0d7b9fd0eab092fa8e41] <==
	I1104 12:09:26.520284       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1104 12:09:26.529035       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1104 12:09:26.529216       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1104 12:09:26.536744       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1104 12:09:26.537237       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c4c3f43b-8157-4af6-9328-9b01a4a9eade", APIVersion:"v1", ResourceVersion:"615", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-908370_3c50268b-57e2-4975-98d9-556c4271abb3 became leader
	I1104 12:09:26.537311       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-908370_3c50268b-57e2-4975-98d9-556c4271abb3!
	I1104 12:09:26.637903       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-908370_3c50268b-57e2-4975-98d9-556c4271abb3!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-908370 -n no-preload-908370
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-908370 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-2lxlg
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-908370 describe pod metrics-server-6867b74b74-2lxlg
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-908370 describe pod metrics-server-6867b74b74-2lxlg: exit status 1 (74.220004ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-2lxlg" not found

** /stderr **
helpers_test.go:279: kubectl --context no-preload-908370 describe pod metrics-server-6867b74b74-2lxlg: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (364.00s)

x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (175.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.180:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.180:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.180:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.180:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.180:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.180:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.180:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.180:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.180:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.180:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.180:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.180:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.180:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.180:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.180:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.180:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.180:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.180:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.180:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.180:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.180:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.180:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.180:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.180:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.180:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.180:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.180:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.180:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.180:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.180:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.180:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.180:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.180:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.180:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.180:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.180:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.180:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.180:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.180:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.180:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.180:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.180:8443: connect: connection refused
E1104 12:25:50.827845   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/kindnet-528108/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.180:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.180:8443: connect: connection refused
E1104 12:26:33.164790   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/functional-762465/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.180:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.180:8443: connect: connection refused
E1104 12:26:43.769379   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/enable-default-cni-528108/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.180:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.180:8443: connect: connection refused
E1104 12:27:46.501105   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/flannel-528108/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.180:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.180:8443: connect: connection refused
E1104 12:28:01.267133   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/custom-flannel-528108/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.180:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.180:8443: connect: connection refused
E1104 12:28:20.019744   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/bridge-528108/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.180:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.180:8443: connect: connection refused
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-589257 -n old-k8s-version-589257
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-589257 -n old-k8s-version-589257: exit status 2 (239.088603ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:287: status error: exit status 2 (may be ok)
start_stop_delete_test.go:287: "old-k8s-version-589257" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-589257 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-589257 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.017µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-589257 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
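Note: the repeated warnings above come from the test helper polling the pod list for the dashboard addon against the profile's apiserver. A minimal manual reproduction of the same checks, using only commands and names that appear in this log (profile old-k8s-version-589257, namespace kubernetes-dashboard, selector k8s-app=kubernetes-dashboard), would be:

	# check the apiserver state for the profile (same status query the test runs)
	out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-589257 -n old-k8s-version-589257
	# re-issue the pod-list query that keeps returning "connection refused"
	kubectl --context old-k8s-version-589257 get pods -n kubernetes-dashboard -l k8s-app=kubernetes-dashboard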
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-589257 -n old-k8s-version-589257
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-589257 -n old-k8s-version-589257: exit status 2 (233.319704ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-589257 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-589257 logs -n 25: (1.516184298s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p calico-528108 sudo                                  | calico-528108                | jenkins | v1.34.0 | 04 Nov 24 12:00 UTC | 04 Nov 24 12:00 UTC |
	|         | cri-dockerd --version                                  |                              |         |         |                     |                     |
	| ssh     | -p calico-528108 sudo                                  | calico-528108                | jenkins | v1.34.0 | 04 Nov 24 12:00 UTC |                     |
	|         | systemctl status containerd                            |                              |         |         |                     |                     |
	|         | --all --full --no-pager                                |                              |         |         |                     |                     |
	| ssh     | -p calico-528108 sudo                                  | calico-528108                | jenkins | v1.34.0 | 04 Nov 24 12:00 UTC | 04 Nov 24 12:00 UTC |
	|         | systemctl cat containerd                               |                              |         |         |                     |                     |
	|         | --no-pager                                             |                              |         |         |                     |                     |
	| ssh     | -p calico-528108 sudo cat                              | calico-528108                | jenkins | v1.34.0 | 04 Nov 24 12:00 UTC | 04 Nov 24 12:00 UTC |
	|         | /lib/systemd/system/containerd.service                 |                              |         |         |                     |                     |
	| ssh     | -p calico-528108 sudo cat                              | calico-528108                | jenkins | v1.34.0 | 04 Nov 24 12:00 UTC | 04 Nov 24 12:00 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p calico-528108 sudo                                  | calico-528108                | jenkins | v1.34.0 | 04 Nov 24 12:00 UTC | 04 Nov 24 12:00 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p calico-528108 sudo                                  | calico-528108                | jenkins | v1.34.0 | 04 Nov 24 12:00 UTC | 04 Nov 24 12:00 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p calico-528108 sudo                                  | calico-528108                | jenkins | v1.34.0 | 04 Nov 24 12:00 UTC | 04 Nov 24 12:00 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p calico-528108 sudo find                             | calico-528108                | jenkins | v1.34.0 | 04 Nov 24 12:00 UTC | 04 Nov 24 12:00 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p calico-528108 sudo crio                             | calico-528108                | jenkins | v1.34.0 | 04 Nov 24 12:00 UTC | 04 Nov 24 12:00 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p calico-528108                                       | calico-528108                | jenkins | v1.34.0 | 04 Nov 24 12:00 UTC | 04 Nov 24 12:00 UTC |
	| delete  | -p                                                     | disable-driver-mounts-457408 | jenkins | v1.34.0 | 04 Nov 24 12:00 UTC | 04 Nov 24 12:00 UTC |
	|         | disable-driver-mounts-457408                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-036892 | jenkins | v1.34.0 | 04 Nov 24 12:00 UTC | 04 Nov 24 12:01 UTC |
	|         | default-k8s-diff-port-036892                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-036892  | default-k8s-diff-port-036892 | jenkins | v1.34.0 | 04 Nov 24 12:01 UTC | 04 Nov 24 12:01 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-036892 | jenkins | v1.34.0 | 04 Nov 24 12:01 UTC |                     |
	|         | default-k8s-diff-port-036892                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-908370                  | no-preload-908370            | jenkins | v1.34.0 | 04 Nov 24 12:02 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-908370                                   | no-preload-908370            | jenkins | v1.34.0 | 04 Nov 24 12:02 UTC | 04 Nov 24 12:13 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-325116                 | embed-certs-325116           | jenkins | v1.34.0 | 04 Nov 24 12:02 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-589257        | old-k8s-version-589257       | jenkins | v1.34.0 | 04 Nov 24 12:02 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p embed-certs-325116                                  | embed-certs-325116           | jenkins | v1.34.0 | 04 Nov 24 12:02 UTC | 04 Nov 24 12:12 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-036892       | default-k8s-diff-port-036892 | jenkins | v1.34.0 | 04 Nov 24 12:04 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-589257                              | old-k8s-version-589257       | jenkins | v1.34.0 | 04 Nov 24 12:04 UTC | 04 Nov 24 12:04 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-036892 | jenkins | v1.34.0 | 04 Nov 24 12:04 UTC | 04 Nov 24 12:12 UTC |
	|         | default-k8s-diff-port-036892                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-589257             | old-k8s-version-589257       | jenkins | v1.34.0 | 04 Nov 24 12:04 UTC | 04 Nov 24 12:04 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-589257                              | old-k8s-version-589257       | jenkins | v1.34.0 | 04 Nov 24 12:04 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/11/04 12:04:21
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1104 12:04:21.684777   86402 out.go:345] Setting OutFile to fd 1 ...
	I1104 12:04:21.684885   86402 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1104 12:04:21.684893   86402 out.go:358] Setting ErrFile to fd 2...
	I1104 12:04:21.684897   86402 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1104 12:04:21.685085   86402 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19906-19898/.minikube/bin
	I1104 12:04:21.685618   86402 out.go:352] Setting JSON to false
	I1104 12:04:21.686501   86402 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":10013,"bootTime":1730711849,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1104 12:04:21.686603   86402 start.go:139] virtualization: kvm guest
	I1104 12:04:21.688652   86402 out.go:177] * [old-k8s-version-589257] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1104 12:04:21.690121   86402 notify.go:220] Checking for updates...
	I1104 12:04:21.690173   86402 out.go:177]   - MINIKUBE_LOCATION=19906
	I1104 12:04:21.691712   86402 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1104 12:04:21.693100   86402 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19906-19898/kubeconfig
	I1104 12:04:21.694334   86402 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19906-19898/.minikube
	I1104 12:04:21.695431   86402 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1104 12:04:21.696680   86402 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1104 12:04:21.698271   86402 config.go:182] Loaded profile config "old-k8s-version-589257": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1104 12:04:21.698697   86402 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:04:21.698738   86402 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:04:21.713382   86402 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46731
	I1104 12:04:21.713861   86402 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:04:21.714357   86402 main.go:141] libmachine: Using API Version  1
	I1104 12:04:21.714378   86402 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:04:21.714696   86402 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:04:21.714872   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .DriverName
	I1104 12:04:21.716711   86402 out.go:177] * Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	I1104 12:04:21.718136   86402 driver.go:394] Setting default libvirt URI to qemu:///system
	I1104 12:04:21.718573   86402 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:04:21.718617   86402 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:04:21.733074   86402 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45605
	I1104 12:04:21.733525   86402 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:04:21.733939   86402 main.go:141] libmachine: Using API Version  1
	I1104 12:04:21.733955   86402 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:04:21.734252   86402 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:04:21.734410   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .DriverName
	I1104 12:04:21.770049   86402 out.go:177] * Using the kvm2 driver based on existing profile
	I1104 12:04:21.771735   86402 start.go:297] selected driver: kvm2
	I1104 12:04:21.771748   86402 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-589257 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-589257 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.180 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1104 12:04:21.771851   86402 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1104 12:04:21.772615   86402 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1104 12:04:21.772709   86402 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19906-19898/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1104 12:04:21.787662   86402 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1104 12:04:21.788158   86402 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1104 12:04:21.788201   86402 cni.go:84] Creating CNI manager for ""
	I1104 12:04:21.788238   86402 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1104 12:04:21.788282   86402 start.go:340] cluster config:
	{Name:old-k8s-version-589257 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-589257 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.180 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1104 12:04:21.788422   86402 iso.go:125] acquiring lock: {Name:mk00c7dd6e02d348844f079f7574057e15cae010 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1104 12:04:21.790364   86402 out.go:177] * Starting "old-k8s-version-589257" primary control-plane node in "old-k8s-version-589257" cluster
	I1104 12:04:20.849476   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:04:20.393451   86301 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1104 12:04:20.393484   86301 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1104 12:04:20.393492   86301 cache.go:56] Caching tarball of preloaded images
	I1104 12:04:20.393580   86301 preload.go:172] Found /home/jenkins/minikube-integration/19906-19898/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1104 12:04:20.393594   86301 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1104 12:04:20.393670   86301 profile.go:143] Saving config to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/default-k8s-diff-port-036892/config.json ...
	I1104 12:04:20.393863   86301 start.go:360] acquireMachinesLock for default-k8s-diff-port-036892: {Name:mka4ed965095cfb58c9b0f5916edff39b4236a2f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1104 12:04:21.791568   86402 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1104 12:04:21.791599   86402 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1104 12:04:21.791608   86402 cache.go:56] Caching tarball of preloaded images
	I1104 12:04:21.791668   86402 preload.go:172] Found /home/jenkins/minikube-integration/19906-19898/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1104 12:04:21.791678   86402 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1104 12:04:21.791755   86402 profile.go:143] Saving config to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/old-k8s-version-589257/config.json ...
	I1104 12:04:21.791918   86402 start.go:360] acquireMachinesLock for old-k8s-version-589257: {Name:mka4ed965095cfb58c9b0f5916edff39b4236a2f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1104 12:04:26.929512   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:04:30.001546   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:04:36.081486   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:04:39.153496   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:04:45.233535   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:04:48.305510   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:04:54.385555   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:04:57.457513   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:05:03.537513   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:05:06.609487   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:05:12.689475   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:05:15.761508   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:05:21.841502   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:05:24.913609   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:05:30.993499   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:05:34.065502   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:05:40.145511   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:05:43.217478   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:05:49.297518   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:05:52.369526   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:05:58.449509   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:06:01.521498   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:06:07.601506   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:06:10.673509   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:06:16.753487   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:06:19.825549   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:06:25.905526   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:06:28.977526   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:06:35.057466   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:06:38.129670   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:06:44.209517   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:06:47.281541   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:06:53.361542   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:06:56.433564   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:07:02.513462   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:07:05.585513   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:07:11.665480   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:07:14.737542   85500 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.91:22: connect: no route to host
	I1104 12:07:17.742001   85759 start.go:364] duration metric: took 4m26.438155925s to acquireMachinesLock for "embed-certs-325116"
	I1104 12:07:17.742060   85759 start.go:96] Skipping create...Using existing machine configuration
	I1104 12:07:17.742068   85759 fix.go:54] fixHost starting: 
	I1104 12:07:17.742418   85759 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:07:17.742470   85759 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:07:17.758611   85759 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43597
	I1104 12:07:17.759173   85759 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:07:17.759750   85759 main.go:141] libmachine: Using API Version  1
	I1104 12:07:17.759774   85759 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:07:17.760116   85759 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:07:17.760326   85759 main.go:141] libmachine: (embed-certs-325116) Calling .DriverName
	I1104 12:07:17.760498   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetState
	I1104 12:07:17.762313   85759 fix.go:112] recreateIfNeeded on embed-certs-325116: state=Stopped err=<nil>
	I1104 12:07:17.762335   85759 main.go:141] libmachine: (embed-certs-325116) Calling .DriverName
	W1104 12:07:17.762469   85759 fix.go:138] unexpected machine state, will restart: <nil>
	I1104 12:07:17.764411   85759 out.go:177] * Restarting existing kvm2 VM for "embed-certs-325116" ...
	I1104 12:07:17.739255   85500 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1104 12:07:17.739306   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetMachineName
	I1104 12:07:17.739691   85500 buildroot.go:166] provisioning hostname "no-preload-908370"
	I1104 12:07:17.739718   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetMachineName
	I1104 12:07:17.739888   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHHostname
	I1104 12:07:17.741864   85500 machine.go:96] duration metric: took 4m37.421766695s to provisionDockerMachine
	I1104 12:07:17.741908   85500 fix.go:56] duration metric: took 4m37.442993443s for fixHost
	I1104 12:07:17.741918   85500 start.go:83] releasing machines lock for "no-preload-908370", held for 4m37.443015642s
	W1104 12:07:17.741938   85500 start.go:714] error starting host: provision: host is not running
	W1104 12:07:17.742034   85500 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I1104 12:07:17.742044   85500 start.go:729] Will try again in 5 seconds ...
	I1104 12:07:17.765958   85759 main.go:141] libmachine: (embed-certs-325116) Calling .Start
	I1104 12:07:17.766220   85759 main.go:141] libmachine: (embed-certs-325116) Ensuring networks are active...
	I1104 12:07:17.767191   85759 main.go:141] libmachine: (embed-certs-325116) Ensuring network default is active
	I1104 12:07:17.767589   85759 main.go:141] libmachine: (embed-certs-325116) Ensuring network mk-embed-certs-325116 is active
	I1104 12:07:17.767984   85759 main.go:141] libmachine: (embed-certs-325116) Getting domain xml...
	I1104 12:07:17.768804   85759 main.go:141] libmachine: (embed-certs-325116) Creating domain...
	I1104 12:07:18.996135   85759 main.go:141] libmachine: (embed-certs-325116) Waiting to get IP...
	I1104 12:07:18.997002   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:18.997542   85759 main.go:141] libmachine: (embed-certs-325116) DBG | unable to find current IP address of domain embed-certs-325116 in network mk-embed-certs-325116
	I1104 12:07:18.997615   85759 main.go:141] libmachine: (embed-certs-325116) DBG | I1104 12:07:18.997513   87021 retry.go:31] will retry after 239.606839ms: waiting for machine to come up
	I1104 12:07:19.239054   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:19.239579   85759 main.go:141] libmachine: (embed-certs-325116) DBG | unable to find current IP address of domain embed-certs-325116 in network mk-embed-certs-325116
	I1104 12:07:19.239602   85759 main.go:141] libmachine: (embed-certs-325116) DBG | I1104 12:07:19.239528   87021 retry.go:31] will retry after 303.459257ms: waiting for machine to come up
	I1104 12:07:19.545134   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:19.545597   85759 main.go:141] libmachine: (embed-certs-325116) DBG | unable to find current IP address of domain embed-certs-325116 in network mk-embed-certs-325116
	I1104 12:07:19.545633   85759 main.go:141] libmachine: (embed-certs-325116) DBG | I1104 12:07:19.545544   87021 retry.go:31] will retry after 394.511523ms: waiting for machine to come up
	I1104 12:07:19.942226   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:19.942607   85759 main.go:141] libmachine: (embed-certs-325116) DBG | unable to find current IP address of domain embed-certs-325116 in network mk-embed-certs-325116
	I1104 12:07:19.942630   85759 main.go:141] libmachine: (embed-certs-325116) DBG | I1104 12:07:19.942576   87021 retry.go:31] will retry after 381.618515ms: waiting for machine to come up
	I1104 12:07:20.326265   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:20.326707   85759 main.go:141] libmachine: (embed-certs-325116) DBG | unable to find current IP address of domain embed-certs-325116 in network mk-embed-certs-325116
	I1104 12:07:20.326738   85759 main.go:141] libmachine: (embed-certs-325116) DBG | I1104 12:07:20.326651   87021 retry.go:31] will retry after 584.226748ms: waiting for machine to come up
	I1104 12:07:20.912117   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:20.912575   85759 main.go:141] libmachine: (embed-certs-325116) DBG | unable to find current IP address of domain embed-certs-325116 in network mk-embed-certs-325116
	I1104 12:07:20.912607   85759 main.go:141] libmachine: (embed-certs-325116) DBG | I1104 12:07:20.912524   87021 retry.go:31] will retry after 770.080519ms: waiting for machine to come up
	I1104 12:07:22.742250   85500 start.go:360] acquireMachinesLock for no-preload-908370: {Name:mka4ed965095cfb58c9b0f5916edff39b4236a2f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1104 12:07:21.684620   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:21.685074   85759 main.go:141] libmachine: (embed-certs-325116) DBG | unable to find current IP address of domain embed-certs-325116 in network mk-embed-certs-325116
	I1104 12:07:21.685103   85759 main.go:141] libmachine: (embed-certs-325116) DBG | I1104 12:07:21.685026   87021 retry.go:31] will retry after 1.170018806s: waiting for machine to come up
	I1104 12:07:22.856736   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:22.857104   85759 main.go:141] libmachine: (embed-certs-325116) DBG | unable to find current IP address of domain embed-certs-325116 in network mk-embed-certs-325116
	I1104 12:07:22.857132   85759 main.go:141] libmachine: (embed-certs-325116) DBG | I1104 12:07:22.857048   87021 retry.go:31] will retry after 1.467304538s: waiting for machine to come up
	I1104 12:07:24.326735   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:24.327197   85759 main.go:141] libmachine: (embed-certs-325116) DBG | unable to find current IP address of domain embed-certs-325116 in network mk-embed-certs-325116
	I1104 12:07:24.327220   85759 main.go:141] libmachine: (embed-certs-325116) DBG | I1104 12:07:24.327148   87021 retry.go:31] will retry after 1.676202737s: waiting for machine to come up
	I1104 12:07:26.006035   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:26.006515   85759 main.go:141] libmachine: (embed-certs-325116) DBG | unable to find current IP address of domain embed-certs-325116 in network mk-embed-certs-325116
	I1104 12:07:26.006538   85759 main.go:141] libmachine: (embed-certs-325116) DBG | I1104 12:07:26.006460   87021 retry.go:31] will retry after 1.8778328s: waiting for machine to come up
	I1104 12:07:27.886226   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:27.886634   85759 main.go:141] libmachine: (embed-certs-325116) DBG | unable to find current IP address of domain embed-certs-325116 in network mk-embed-certs-325116
	I1104 12:07:27.886656   85759 main.go:141] libmachine: (embed-certs-325116) DBG | I1104 12:07:27.886579   87021 retry.go:31] will retry after 2.886548821s: waiting for machine to come up
	I1104 12:07:30.776677   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:30.777080   85759 main.go:141] libmachine: (embed-certs-325116) DBG | unable to find current IP address of domain embed-certs-325116 in network mk-embed-certs-325116
	I1104 12:07:30.777102   85759 main.go:141] libmachine: (embed-certs-325116) DBG | I1104 12:07:30.777039   87021 retry.go:31] will retry after 3.108966144s: waiting for machine to come up
	I1104 12:07:35.049920   86301 start.go:364] duration metric: took 3m14.656022924s to acquireMachinesLock for "default-k8s-diff-port-036892"
	I1104 12:07:35.050007   86301 start.go:96] Skipping create...Using existing machine configuration
	I1104 12:07:35.050019   86301 fix.go:54] fixHost starting: 
	I1104 12:07:35.050381   86301 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:07:35.050436   86301 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:07:35.067928   86301 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38865
	I1104 12:07:35.068445   86301 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:07:35.068953   86301 main.go:141] libmachine: Using API Version  1
	I1104 12:07:35.068976   86301 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:07:35.069353   86301 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:07:35.069560   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .DriverName
	I1104 12:07:35.069692   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetState
	I1104 12:07:35.071231   86301 fix.go:112] recreateIfNeeded on default-k8s-diff-port-036892: state=Stopped err=<nil>
	I1104 12:07:35.071252   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .DriverName
	W1104 12:07:35.071401   86301 fix.go:138] unexpected machine state, will restart: <nil>
	I1104 12:07:35.073762   86301 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-036892" ...
	I1104 12:07:35.075114   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .Start
	I1104 12:07:35.075311   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Ensuring networks are active...
	I1104 12:07:35.076105   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Ensuring network default is active
	I1104 12:07:35.076534   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Ensuring network mk-default-k8s-diff-port-036892 is active
	I1104 12:07:35.076946   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Getting domain xml...
	I1104 12:07:35.077641   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Creating domain...
	I1104 12:07:33.887738   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:33.888147   85759 main.go:141] libmachine: (embed-certs-325116) Found IP for machine: 192.168.39.47
	I1104 12:07:33.888176   85759 main.go:141] libmachine: (embed-certs-325116) Reserving static IP address...
	I1104 12:07:33.888206   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has current primary IP address 192.168.39.47 and MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:33.888737   85759 main.go:141] libmachine: (embed-certs-325116) DBG | found host DHCP lease matching {name: "embed-certs-325116", mac: "52:54:00:bd:ab:49", ip: "192.168.39.47"} in network mk-embed-certs-325116: {Iface:virbr1 ExpiryTime:2024-11-04 13:07:28 +0000 UTC Type:0 Mac:52:54:00:bd:ab:49 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:embed-certs-325116 Clientid:01:52:54:00:bd:ab:49}
	I1104 12:07:33.888769   85759 main.go:141] libmachine: (embed-certs-325116) DBG | skip adding static IP to network mk-embed-certs-325116 - found existing host DHCP lease matching {name: "embed-certs-325116", mac: "52:54:00:bd:ab:49", ip: "192.168.39.47"}
	I1104 12:07:33.888783   85759 main.go:141] libmachine: (embed-certs-325116) Reserved static IP address: 192.168.39.47
	I1104 12:07:33.888795   85759 main.go:141] libmachine: (embed-certs-325116) Waiting for SSH to be available...
	I1104 12:07:33.888812   85759 main.go:141] libmachine: (embed-certs-325116) DBG | Getting to WaitForSSH function...
	I1104 12:07:33.891130   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:33.891493   85759 main.go:141] libmachine: (embed-certs-325116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:ab:49", ip: ""} in network mk-embed-certs-325116: {Iface:virbr1 ExpiryTime:2024-11-04 13:07:28 +0000 UTC Type:0 Mac:52:54:00:bd:ab:49 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:embed-certs-325116 Clientid:01:52:54:00:bd:ab:49}
	I1104 12:07:33.891520   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined IP address 192.168.39.47 and MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:33.891670   85759 main.go:141] libmachine: (embed-certs-325116) DBG | Using SSH client type: external
	I1104 12:07:33.891693   85759 main.go:141] libmachine: (embed-certs-325116) DBG | Using SSH private key: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/embed-certs-325116/id_rsa (-rw-------)
	I1104 12:07:33.891732   85759 main.go:141] libmachine: (embed-certs-325116) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.47 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19906-19898/.minikube/machines/embed-certs-325116/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1104 12:07:33.891748   85759 main.go:141] libmachine: (embed-certs-325116) DBG | About to run SSH command:
	I1104 12:07:33.891773   85759 main.go:141] libmachine: (embed-certs-325116) DBG | exit 0
	I1104 12:07:34.012989   85759 main.go:141] libmachine: (embed-certs-325116) DBG | SSH cmd err, output: <nil>: 
	I1104 12:07:34.013457   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetConfigRaw
	I1104 12:07:34.014162   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetIP
	I1104 12:07:34.016645   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:34.017028   85759 main.go:141] libmachine: (embed-certs-325116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:ab:49", ip: ""} in network mk-embed-certs-325116: {Iface:virbr1 ExpiryTime:2024-11-04 13:07:28 +0000 UTC Type:0 Mac:52:54:00:bd:ab:49 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:embed-certs-325116 Clientid:01:52:54:00:bd:ab:49}
	I1104 12:07:34.017062   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined IP address 192.168.39.47 and MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:34.017347   85759 profile.go:143] Saving config to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/embed-certs-325116/config.json ...
	I1104 12:07:34.017577   85759 machine.go:93] provisionDockerMachine start ...
	I1104 12:07:34.017596   85759 main.go:141] libmachine: (embed-certs-325116) Calling .DriverName
	I1104 12:07:34.017824   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHHostname
	I1104 12:07:34.020134   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:34.020416   85759 main.go:141] libmachine: (embed-certs-325116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:ab:49", ip: ""} in network mk-embed-certs-325116: {Iface:virbr1 ExpiryTime:2024-11-04 13:07:28 +0000 UTC Type:0 Mac:52:54:00:bd:ab:49 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:embed-certs-325116 Clientid:01:52:54:00:bd:ab:49}
	I1104 12:07:34.020449   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined IP address 192.168.39.47 and MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:34.020570   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHPort
	I1104 12:07:34.020745   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHKeyPath
	I1104 12:07:34.020905   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHKeyPath
	I1104 12:07:34.021059   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHUsername
	I1104 12:07:34.021313   85759 main.go:141] libmachine: Using SSH client type: native
	I1104 12:07:34.021505   85759 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.47 22 <nil> <nil>}
	I1104 12:07:34.021515   85759 main.go:141] libmachine: About to run SSH command:
	hostname
	I1104 12:07:34.125266   85759 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1104 12:07:34.125305   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetMachineName
	I1104 12:07:34.125556   85759 buildroot.go:166] provisioning hostname "embed-certs-325116"
	I1104 12:07:34.125583   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetMachineName
	I1104 12:07:34.125781   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHHostname
	I1104 12:07:34.128180   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:34.128486   85759 main.go:141] libmachine: (embed-certs-325116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:ab:49", ip: ""} in network mk-embed-certs-325116: {Iface:virbr1 ExpiryTime:2024-11-04 13:07:28 +0000 UTC Type:0 Mac:52:54:00:bd:ab:49 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:embed-certs-325116 Clientid:01:52:54:00:bd:ab:49}
	I1104 12:07:34.128514   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined IP address 192.168.39.47 and MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:34.128603   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHPort
	I1104 12:07:34.128758   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHKeyPath
	I1104 12:07:34.128890   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHKeyPath
	I1104 12:07:34.128996   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHUsername
	I1104 12:07:34.129166   85759 main.go:141] libmachine: Using SSH client type: native
	I1104 12:07:34.129371   85759 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.47 22 <nil> <nil>}
	I1104 12:07:34.129394   85759 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-325116 && echo "embed-certs-325116" | sudo tee /etc/hostname
	I1104 12:07:34.242027   85759 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-325116
	
	I1104 12:07:34.242054   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHHostname
	I1104 12:07:34.244671   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:34.244984   85759 main.go:141] libmachine: (embed-certs-325116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:ab:49", ip: ""} in network mk-embed-certs-325116: {Iface:virbr1 ExpiryTime:2024-11-04 13:07:28 +0000 UTC Type:0 Mac:52:54:00:bd:ab:49 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:embed-certs-325116 Clientid:01:52:54:00:bd:ab:49}
	I1104 12:07:34.245019   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined IP address 192.168.39.47 and MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:34.245159   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHPort
	I1104 12:07:34.245337   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHKeyPath
	I1104 12:07:34.245514   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHKeyPath
	I1104 12:07:34.245661   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHUsername
	I1104 12:07:34.245810   85759 main.go:141] libmachine: Using SSH client type: native
	I1104 12:07:34.245971   85759 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.47 22 <nil> <nil>}
	I1104 12:07:34.245986   85759 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-325116' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-325116/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-325116' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1104 12:07:34.357178   85759 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1104 12:07:34.357204   85759 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19906-19898/.minikube CaCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19906-19898/.minikube}
	I1104 12:07:34.357220   85759 buildroot.go:174] setting up certificates
	I1104 12:07:34.357241   85759 provision.go:84] configureAuth start
	I1104 12:07:34.357250   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetMachineName
	I1104 12:07:34.357533   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetIP
	I1104 12:07:34.359993   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:34.360308   85759 main.go:141] libmachine: (embed-certs-325116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:ab:49", ip: ""} in network mk-embed-certs-325116: {Iface:virbr1 ExpiryTime:2024-11-04 13:07:28 +0000 UTC Type:0 Mac:52:54:00:bd:ab:49 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:embed-certs-325116 Clientid:01:52:54:00:bd:ab:49}
	I1104 12:07:34.360327   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined IP address 192.168.39.47 and MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:34.360533   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHHostname
	I1104 12:07:34.362459   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:34.362750   85759 main.go:141] libmachine: (embed-certs-325116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:ab:49", ip: ""} in network mk-embed-certs-325116: {Iface:virbr1 ExpiryTime:2024-11-04 13:07:28 +0000 UTC Type:0 Mac:52:54:00:bd:ab:49 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:embed-certs-325116 Clientid:01:52:54:00:bd:ab:49}
	I1104 12:07:34.362786   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined IP address 192.168.39.47 and MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:34.362932   85759 provision.go:143] copyHostCerts
	I1104 12:07:34.362986   85759 exec_runner.go:144] found /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem, removing ...
	I1104 12:07:34.363022   85759 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem
	I1104 12:07:34.363109   85759 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem (1123 bytes)
	I1104 12:07:34.363231   85759 exec_runner.go:144] found /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem, removing ...
	I1104 12:07:34.363242   85759 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem
	I1104 12:07:34.363282   85759 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem (1679 bytes)
	I1104 12:07:34.363357   85759 exec_runner.go:144] found /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem, removing ...
	I1104 12:07:34.363368   85759 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem
	I1104 12:07:34.363399   85759 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem (1078 bytes)
	I1104 12:07:34.363503   85759 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem org=jenkins.embed-certs-325116 san=[127.0.0.1 192.168.39.47 embed-certs-325116 localhost minikube]
	I1104 12:07:34.453223   85759 provision.go:177] copyRemoteCerts
	I1104 12:07:34.453295   85759 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1104 12:07:34.453317   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHHostname
	I1104 12:07:34.455736   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:34.456022   85759 main.go:141] libmachine: (embed-certs-325116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:ab:49", ip: ""} in network mk-embed-certs-325116: {Iface:virbr1 ExpiryTime:2024-11-04 13:07:28 +0000 UTC Type:0 Mac:52:54:00:bd:ab:49 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:embed-certs-325116 Clientid:01:52:54:00:bd:ab:49}
	I1104 12:07:34.456054   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined IP address 192.168.39.47 and MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:34.456230   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHPort
	I1104 12:07:34.456406   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHKeyPath
	I1104 12:07:34.456539   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHUsername
	I1104 12:07:34.456631   85759 sshutil.go:53] new ssh client: &{IP:192.168.39.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/embed-certs-325116/id_rsa Username:docker}
	I1104 12:07:34.539172   85759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1104 12:07:34.561889   85759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1104 12:07:34.585111   85759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1104 12:07:34.607449   85759 provision.go:87] duration metric: took 250.195255ms to configureAuth
	I1104 12:07:34.607495   85759 buildroot.go:189] setting minikube options for container-runtime
	I1104 12:07:34.607809   85759 config.go:182] Loaded profile config "embed-certs-325116": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1104 12:07:34.607952   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHHostname
	I1104 12:07:34.610672   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:34.611009   85759 main.go:141] libmachine: (embed-certs-325116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:ab:49", ip: ""} in network mk-embed-certs-325116: {Iface:virbr1 ExpiryTime:2024-11-04 13:07:28 +0000 UTC Type:0 Mac:52:54:00:bd:ab:49 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:embed-certs-325116 Clientid:01:52:54:00:bd:ab:49}
	I1104 12:07:34.611032   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined IP address 192.168.39.47 and MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:34.611253   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHPort
	I1104 12:07:34.611444   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHKeyPath
	I1104 12:07:34.611600   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHKeyPath
	I1104 12:07:34.611739   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHUsername
	I1104 12:07:34.611917   85759 main.go:141] libmachine: Using SSH client type: native
	I1104 12:07:34.612086   85759 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.47 22 <nil> <nil>}
	I1104 12:07:34.612101   85759 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1104 12:07:34.823086   85759 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1104 12:07:34.823114   85759 machine.go:96] duration metric: took 805.522353ms to provisionDockerMachine
	I1104 12:07:34.823128   85759 start.go:293] postStartSetup for "embed-certs-325116" (driver="kvm2")
	I1104 12:07:34.823138   85759 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1104 12:07:34.823174   85759 main.go:141] libmachine: (embed-certs-325116) Calling .DriverName
	I1104 12:07:34.823451   85759 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1104 12:07:34.823489   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHHostname
	I1104 12:07:34.826112   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:34.826453   85759 main.go:141] libmachine: (embed-certs-325116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:ab:49", ip: ""} in network mk-embed-certs-325116: {Iface:virbr1 ExpiryTime:2024-11-04 13:07:28 +0000 UTC Type:0 Mac:52:54:00:bd:ab:49 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:embed-certs-325116 Clientid:01:52:54:00:bd:ab:49}
	I1104 12:07:34.826482   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined IP address 192.168.39.47 and MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:34.826581   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHPort
	I1104 12:07:34.826756   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHKeyPath
	I1104 12:07:34.826886   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHUsername
	I1104 12:07:34.826998   85759 sshutil.go:53] new ssh client: &{IP:192.168.39.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/embed-certs-325116/id_rsa Username:docker}
	I1104 12:07:34.907354   85759 ssh_runner.go:195] Run: cat /etc/os-release
	I1104 12:07:34.911229   85759 info.go:137] Remote host: Buildroot 2023.02.9
	I1104 12:07:34.911246   85759 filesync.go:126] Scanning /home/jenkins/minikube-integration/19906-19898/.minikube/addons for local assets ...
	I1104 12:07:34.911316   85759 filesync.go:126] Scanning /home/jenkins/minikube-integration/19906-19898/.minikube/files for local assets ...
	I1104 12:07:34.911402   85759 filesync.go:149] local asset: /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem -> 272182.pem in /etc/ssl/certs
	I1104 12:07:34.911516   85759 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1104 12:07:34.920149   85759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem --> /etc/ssl/certs/272182.pem (1708 bytes)
	I1104 12:07:34.942468   85759 start.go:296] duration metric: took 119.32654ms for postStartSetup
	I1104 12:07:34.942517   85759 fix.go:56] duration metric: took 17.200448721s for fixHost
	I1104 12:07:34.942540   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHHostname
	I1104 12:07:34.945295   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:34.945659   85759 main.go:141] libmachine: (embed-certs-325116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:ab:49", ip: ""} in network mk-embed-certs-325116: {Iface:virbr1 ExpiryTime:2024-11-04 13:07:28 +0000 UTC Type:0 Mac:52:54:00:bd:ab:49 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:embed-certs-325116 Clientid:01:52:54:00:bd:ab:49}
	I1104 12:07:34.945685   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined IP address 192.168.39.47 and MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:34.945847   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHPort
	I1104 12:07:34.946006   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHKeyPath
	I1104 12:07:34.946173   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHKeyPath
	I1104 12:07:34.946311   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHUsername
	I1104 12:07:34.946442   85759 main.go:141] libmachine: Using SSH client type: native
	I1104 12:07:34.946583   85759 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.47 22 <nil> <nil>}
	I1104 12:07:34.946592   85759 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1104 12:07:35.049767   85759 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730722055.017047529
	
	I1104 12:07:35.049790   85759 fix.go:216] guest clock: 1730722055.017047529
	I1104 12:07:35.049797   85759 fix.go:229] Guest: 2024-11-04 12:07:35.017047529 +0000 UTC Remote: 2024-11-04 12:07:34.942522008 +0000 UTC m=+283.781167350 (delta=74.525521ms)
	I1104 12:07:35.049829   85759 fix.go:200] guest clock delta is within tolerance: 74.525521ms
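fix.go reads the guest clock over SSH with `date +%s.%N`, compares it to the host clock, and only resyncs when the difference exceeds a tolerance; here the 74.5ms delta passes. A rough sketch of that comparison, assuming a simple helper (clockDeltaOK is not the actual minikube function):

// Minimal sketch of the guest-vs-host clock comparison described above.
package main

import (
	"fmt"
	"time"
)

// clockDeltaOK reports the absolute difference between guest and host clocks
// and whether it falls within the given tolerance.
func clockDeltaOK(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	host := time.Now()
	guest := host.Add(74525521 * time.Nanosecond) // the delta seen in the log
	d, ok := clockDeltaOK(guest, host, 2*time.Second)
	fmt.Printf("delta=%v within tolerance=%v\n", d, ok)
}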
	I1104 12:07:35.049834   85759 start.go:83] releasing machines lock for "embed-certs-325116", held for 17.307794416s
	I1104 12:07:35.049859   85759 main.go:141] libmachine: (embed-certs-325116) Calling .DriverName
	I1104 12:07:35.050137   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetIP
	I1104 12:07:35.052845   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:35.053238   85759 main.go:141] libmachine: (embed-certs-325116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:ab:49", ip: ""} in network mk-embed-certs-325116: {Iface:virbr1 ExpiryTime:2024-11-04 13:07:28 +0000 UTC Type:0 Mac:52:54:00:bd:ab:49 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:embed-certs-325116 Clientid:01:52:54:00:bd:ab:49}
	I1104 12:07:35.053269   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined IP address 192.168.39.47 and MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:35.053454   85759 main.go:141] libmachine: (embed-certs-325116) Calling .DriverName
	I1104 12:07:35.054050   85759 main.go:141] libmachine: (embed-certs-325116) Calling .DriverName
	I1104 12:07:35.054239   85759 main.go:141] libmachine: (embed-certs-325116) Calling .DriverName
	I1104 12:07:35.054337   85759 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1104 12:07:35.054383   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHHostname
	I1104 12:07:35.054502   85759 ssh_runner.go:195] Run: cat /version.json
	I1104 12:07:35.054539   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHHostname
	I1104 12:07:35.057289   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:35.057391   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:35.057733   85759 main.go:141] libmachine: (embed-certs-325116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:ab:49", ip: ""} in network mk-embed-certs-325116: {Iface:virbr1 ExpiryTime:2024-11-04 13:07:28 +0000 UTC Type:0 Mac:52:54:00:bd:ab:49 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:embed-certs-325116 Clientid:01:52:54:00:bd:ab:49}
	I1104 12:07:35.057778   85759 main.go:141] libmachine: (embed-certs-325116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:ab:49", ip: ""} in network mk-embed-certs-325116: {Iface:virbr1 ExpiryTime:2024-11-04 13:07:28 +0000 UTC Type:0 Mac:52:54:00:bd:ab:49 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:embed-certs-325116 Clientid:01:52:54:00:bd:ab:49}
	I1104 12:07:35.057802   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined IP address 192.168.39.47 and MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:35.057820   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined IP address 192.168.39.47 and MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:35.057959   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHPort
	I1104 12:07:35.057996   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHPort
	I1104 12:07:35.058110   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHKeyPath
	I1104 12:07:35.058296   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHKeyPath
	I1104 12:07:35.058316   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHUsername
	I1104 12:07:35.058485   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHUsername
	I1104 12:07:35.058485   85759 sshutil.go:53] new ssh client: &{IP:192.168.39.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/embed-certs-325116/id_rsa Username:docker}
	I1104 12:07:35.058658   85759 sshutil.go:53] new ssh client: &{IP:192.168.39.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/embed-certs-325116/id_rsa Username:docker}
	I1104 12:07:35.134602   85759 ssh_runner.go:195] Run: systemctl --version
	I1104 12:07:35.158961   85759 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1104 12:07:35.303038   85759 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1104 12:07:35.309611   85759 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1104 12:07:35.309674   85759 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1104 12:07:35.325082   85759 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1104 12:07:35.325142   85759 start.go:495] detecting cgroup driver to use...
	I1104 12:07:35.325211   85759 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1104 12:07:35.341673   85759 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1104 12:07:35.355506   85759 docker.go:217] disabling cri-docker service (if available) ...
	I1104 12:07:35.355569   85759 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1104 12:07:35.369017   85759 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1104 12:07:35.382745   85759 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1104 12:07:35.498985   85759 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1104 12:07:35.648628   85759 docker.go:233] disabling docker service ...
	I1104 12:07:35.648702   85759 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1104 12:07:35.666912   85759 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1104 12:07:35.679786   85759 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1104 12:07:35.799284   85759 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1104 12:07:35.931842   85759 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1104 12:07:35.945707   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1104 12:07:35.965183   85759 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1104 12:07:35.965269   85759 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:07:35.975446   85759 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1104 12:07:35.975514   85759 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:07:35.985968   85759 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:07:35.996462   85759 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:07:36.006840   85759 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1104 12:07:36.017174   85759 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:07:36.027013   85759 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:07:36.044572   85759 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:07:36.054046   85759 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1104 12:07:36.063355   85759 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1104 12:07:36.063399   85759 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1104 12:07:36.075157   85759 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1104 12:07:36.084713   85759 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1104 12:07:36.205088   85759 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1104 12:07:36.299330   85759 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1104 12:07:36.299423   85759 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
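After restarting CRI-O, the flow waits up to 60s for /var/run/crio/crio.sock to appear before probing crictl; the log shows this done by running stat over SSH. A self-contained local sketch of such a wait loop (waitForSocket is an assumed helper, not the ssh_runner-based original):

// Minimal sketch: poll for a socket path until it exists or a timeout expires.
package main

import (
	"fmt"
	"os"
	"time"
)

func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
	}
}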
	I1104 12:07:36.304194   85759 start.go:563] Will wait 60s for crictl version
	I1104 12:07:36.304248   85759 ssh_runner.go:195] Run: which crictl
	I1104 12:07:36.308041   85759 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1104 12:07:36.349114   85759 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1104 12:07:36.349264   85759 ssh_runner.go:195] Run: crio --version
	I1104 12:07:36.378677   85759 ssh_runner.go:195] Run: crio --version
	I1104 12:07:36.406751   85759 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1104 12:07:36.335603   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Waiting to get IP...
	I1104 12:07:36.336431   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:36.336921   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | unable to find current IP address of domain default-k8s-diff-port-036892 in network mk-default-k8s-diff-port-036892
	I1104 12:07:36.337007   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | I1104 12:07:36.336911   87142 retry.go:31] will retry after 289.750795ms: waiting for machine to come up
	I1104 12:07:36.628712   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:36.629301   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | unable to find current IP address of domain default-k8s-diff-port-036892 in network mk-default-k8s-diff-port-036892
	I1104 12:07:36.629419   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | I1104 12:07:36.629345   87142 retry.go:31] will retry after 356.596321ms: waiting for machine to come up
	I1104 12:07:36.988173   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:36.988663   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | unable to find current IP address of domain default-k8s-diff-port-036892 in network mk-default-k8s-diff-port-036892
	I1104 12:07:36.988713   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | I1104 12:07:36.988626   87142 retry.go:31] will retry after 446.62367ms: waiting for machine to come up
	I1104 12:07:37.437529   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:37.438120   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | unable to find current IP address of domain default-k8s-diff-port-036892 in network mk-default-k8s-diff-port-036892
	I1104 12:07:37.438174   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | I1104 12:07:37.438023   87142 retry.go:31] will retry after 482.072639ms: waiting for machine to come up
	I1104 12:07:37.921514   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:37.922025   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | unable to find current IP address of domain default-k8s-diff-port-036892 in network mk-default-k8s-diff-port-036892
	I1104 12:07:37.922056   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | I1104 12:07:37.921983   87142 retry.go:31] will retry after 645.10615ms: waiting for machine to come up
	I1104 12:07:38.569009   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:38.569524   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | unable to find current IP address of domain default-k8s-diff-port-036892 in network mk-default-k8s-diff-port-036892
	I1104 12:07:38.569566   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | I1104 12:07:38.569432   87142 retry.go:31] will retry after 841.352802ms: waiting for machine to come up
	I1104 12:07:39.412662   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:39.413091   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | unable to find current IP address of domain default-k8s-diff-port-036892 in network mk-default-k8s-diff-port-036892
	I1104 12:07:39.413112   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | I1104 12:07:39.413047   87142 retry.go:31] will retry after 878.218722ms: waiting for machine to come up
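Meanwhile the default-k8s-diff-port-036892 goroutine is still waiting for libvirt to hand its VM an IP, retrying with growing, jittered delays (the retry.go lines above). A simplified stand-in for that loop, assuming a generic retryUntil helper rather than minikube's retry package:

// Minimal sketch: retry a check with growing, jittered backoff until it
// succeeds or an overall deadline is hit, as the "will retry after" lines show.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func retryUntil(timeout time.Duration, check func() error) error {
	deadline := time.Now().Add(timeout)
	backoff := 200 * time.Millisecond
	for attempt := 1; time.Now().Before(deadline); attempt++ {
		if err := check(); err == nil {
			return nil
		}
		wait := backoff + time.Duration(rand.Int63n(int64(backoff)))
		fmt.Printf("attempt %d failed, will retry after %v\n", attempt, wait)
		time.Sleep(wait)
		backoff += backoff / 2
	}
	return errors.New("timed out waiting for machine to come up")
}

func main() {
	_ = retryUntil(5*time.Second, func() error { return errors.New("no IP yet") })
}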
	I1104 12:07:36.407939   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetIP
	I1104 12:07:36.411021   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:36.411378   85759 main.go:141] libmachine: (embed-certs-325116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:ab:49", ip: ""} in network mk-embed-certs-325116: {Iface:virbr1 ExpiryTime:2024-11-04 13:07:28 +0000 UTC Type:0 Mac:52:54:00:bd:ab:49 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:embed-certs-325116 Clientid:01:52:54:00:bd:ab:49}
	I1104 12:07:36.411408   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined IP address 192.168.39.47 and MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:36.411599   85759 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1104 12:07:36.415528   85759 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1104 12:07:36.427484   85759 kubeadm.go:883] updating cluster {Name:embed-certs-325116 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.2 ClusterName:embed-certs-325116 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.47 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1104 12:07:36.427616   85759 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1104 12:07:36.427684   85759 ssh_runner.go:195] Run: sudo crictl images --output json
	I1104 12:07:36.460332   85759 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1104 12:07:36.460406   85759 ssh_runner.go:195] Run: which lz4
	I1104 12:07:36.464187   85759 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1104 12:07:36.468140   85759 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1104 12:07:36.468177   85759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1104 12:07:37.703067   85759 crio.go:462] duration metric: took 1.238901186s to copy over tarball
	I1104 12:07:37.703136   85759 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1104 12:07:39.803761   85759 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.100578378s)
	I1104 12:07:39.803795   85759 crio.go:469] duration metric: took 2.100697698s to extract the tarball
	I1104 12:07:39.803805   85759 ssh_runner.go:146] rm: /preloaded.tar.lz4
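Because no preloaded images were found in the CRI-O store, the ~392MB preload tarball is copied to the node and unpacked into /var with xattrs preserved, then removed. A rough local sketch of that check-then-extract step (ensurePreload is hypothetical; the real flow first copies the tarball over SSH with scp):

// Minimal sketch: if the preload tarball is present, unpack it with the same
// tar flags the log shows (preserve security xattrs, decompress with lz4).
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func ensurePreload(tarball, destDir string) error {
	if _, err := os.Stat(tarball); err != nil {
		return fmt.Errorf("preload tarball missing (would be copied here): %w", err)
	}
	cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", destDir, "-xf", tarball)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	if err := ensurePreload("/preloaded.tar.lz4", "/var"); err != nil {
		fmt.Println(err)
	}
}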
	I1104 12:07:39.840536   85759 ssh_runner.go:195] Run: sudo crictl images --output json
	I1104 12:07:39.883410   85759 crio.go:514] all images are preloaded for cri-o runtime.
	I1104 12:07:39.883431   85759 cache_images.go:84] Images are preloaded, skipping loading
	I1104 12:07:39.883438   85759 kubeadm.go:934] updating node { 192.168.39.47 8443 v1.31.2 crio true true} ...
	I1104 12:07:39.883531   85759 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-325116 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.47
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:embed-certs-325116 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
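The kubelet drop-in printed above is rendered from the node's name, IP, and Kubernetes version. One way to reproduce that rendering with text/template, as a sketch (the template string mirrors the logged unit; the field names are assumptions, not minikube's actual config types):

// Illustrative sketch: render a kubelet systemd drop-in like the one above.
package main

import (
	"os"
	"text/template"
)

const unit = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(unit))
	_ = t.Execute(os.Stdout, map[string]string{
		"KubernetesVersion": "v1.31.2",
		"NodeName":          "embed-certs-325116",
		"NodeIP":            "192.168.39.47",
	})
}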
	I1104 12:07:39.883608   85759 ssh_runner.go:195] Run: crio config
	I1104 12:07:39.928280   85759 cni.go:84] Creating CNI manager for ""
	I1104 12:07:39.928303   85759 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1104 12:07:39.928313   85759 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1104 12:07:39.928333   85759 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.47 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-325116 NodeName:embed-certs-325116 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.47"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.47 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1104 12:07:39.928440   85759 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.47
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-325116"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.47"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.47"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1104 12:07:39.928495   85759 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1104 12:07:39.938496   85759 binaries.go:44] Found k8s binaries, skipping transfer
	I1104 12:07:39.938568   85759 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1104 12:07:39.947809   85759 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1104 12:07:39.963319   85759 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1104 12:07:39.978789   85759 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2295 bytes)
	I1104 12:07:39.994910   85759 ssh_runner.go:195] Run: grep 192.168.39.47	control-plane.minikube.internal$ /etc/hosts
	I1104 12:07:39.998355   85759 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.47	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1104 12:07:40.010097   85759 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1104 12:07:40.118679   85759 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1104 12:07:40.134369   85759 certs.go:68] Setting up /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/embed-certs-325116 for IP: 192.168.39.47
	I1104 12:07:40.134391   85759 certs.go:194] generating shared ca certs ...
	I1104 12:07:40.134429   85759 certs.go:226] acquiring lock for ca certs: {Name:mk4fb0469da6697654d0290fd25edddd463f47c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 12:07:40.134612   85759 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.key
	I1104 12:07:40.134666   85759 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.key
	I1104 12:07:40.134680   85759 certs.go:256] generating profile certs ...
	I1104 12:07:40.134782   85759 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/embed-certs-325116/client.key
	I1104 12:07:40.134880   85759 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/embed-certs-325116/apiserver.key.36f6fb66
	I1104 12:07:40.134929   85759 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/embed-certs-325116/proxy-client.key
	I1104 12:07:40.135083   85759 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218.pem (1338 bytes)
	W1104 12:07:40.135124   85759 certs.go:480] ignoring /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218_empty.pem, impossibly tiny 0 bytes
	I1104 12:07:40.135140   85759 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem (1675 bytes)
	I1104 12:07:40.135225   85759 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem (1078 bytes)
	I1104 12:07:40.135281   85759 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem (1123 bytes)
	I1104 12:07:40.135315   85759 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem (1679 bytes)
	I1104 12:07:40.135380   85759 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem (1708 bytes)
	I1104 12:07:40.136240   85759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1104 12:07:40.179608   85759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1104 12:07:40.227851   85759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1104 12:07:40.255791   85759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1104 12:07:40.281672   85759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/embed-certs-325116/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1104 12:07:40.305960   85759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/embed-certs-325116/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1104 12:07:40.332465   85759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/embed-certs-325116/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1104 12:07:40.354950   85759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/embed-certs-325116/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1104 12:07:40.377476   85759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218.pem --> /usr/share/ca-certificates/27218.pem (1338 bytes)
	I1104 12:07:40.399291   85759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem --> /usr/share/ca-certificates/272182.pem (1708 bytes)
	I1104 12:07:40.420689   85759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1104 12:07:40.443610   85759 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1104 12:07:40.459706   85759 ssh_runner.go:195] Run: openssl version
	I1104 12:07:40.465244   85759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/27218.pem && ln -fs /usr/share/ca-certificates/27218.pem /etc/ssl/certs/27218.pem"
	I1104 12:07:40.475375   85759 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/27218.pem
	I1104 12:07:40.479676   85759 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  4 10:49 /usr/share/ca-certificates/27218.pem
	I1104 12:07:40.479748   85759 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/27218.pem
	I1104 12:07:40.485523   85759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/27218.pem /etc/ssl/certs/51391683.0"
	I1104 12:07:40.497163   85759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/272182.pem && ln -fs /usr/share/ca-certificates/272182.pem /etc/ssl/certs/272182.pem"
	I1104 12:07:40.509090   85759 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/272182.pem
	I1104 12:07:40.513617   85759 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  4 10:49 /usr/share/ca-certificates/272182.pem
	I1104 12:07:40.513685   85759 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/272182.pem
	I1104 12:07:40.519372   85759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/272182.pem /etc/ssl/certs/3ec20f2e.0"
	I1104 12:07:40.530944   85759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1104 12:07:40.542569   85759 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1104 12:07:40.546965   85759 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  4 10:38 /usr/share/ca-certificates/minikubeCA.pem
	I1104 12:07:40.547019   85759 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1104 12:07:40.552470   85759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1104 12:07:40.562456   85759 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1104 12:07:40.566967   85759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1104 12:07:40.572778   85759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1104 12:07:40.578409   85759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1104 12:07:40.584134   85759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1104 12:07:40.589880   85759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1104 12:07:40.595604   85759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
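Each control-plane certificate above is checked with `openssl x509 -checkend 86400`, i.e. "does it expire within the next 24 hours". The same test in pure Go, as a sketch (expiresWithin is a hypothetical helper, not minikube's cert code):

// Minimal sketch: Go equivalent of "openssl x509 -noout -checkend 86400".
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(certPath string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(certPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", certPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}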
	I1104 12:07:40.601191   85759 kubeadm.go:392] StartCluster: {Name:embed-certs-325116 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.2 ClusterName:embed-certs-325116 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.47 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1104 12:07:40.601329   85759 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1104 12:07:40.601385   85759 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1104 12:07:40.642970   85759 cri.go:89] found id: ""
	I1104 12:07:40.643034   85759 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1104 12:07:40.653420   85759 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1104 12:07:40.653446   85759 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1104 12:07:40.653496   85759 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1104 12:07:40.663023   85759 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1104 12:07:40.664008   85759 kubeconfig.go:125] found "embed-certs-325116" server: "https://192.168.39.47:8443"
	I1104 12:07:40.665967   85759 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1104 12:07:40.675296   85759 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.47
	I1104 12:07:40.675324   85759 kubeadm.go:1160] stopping kube-system containers ...
	I1104 12:07:40.675336   85759 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1104 12:07:40.675384   85759 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1104 12:07:40.718457   85759 cri.go:89] found id: ""
	I1104 12:07:40.718543   85759 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1104 12:07:40.733875   85759 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1104 12:07:40.743811   85759 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1104 12:07:40.743835   85759 kubeadm.go:157] found existing configuration files:
	
	I1104 12:07:40.743889   85759 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1104 12:07:40.752987   85759 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1104 12:07:40.753048   85759 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1104 12:07:40.762296   85759 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1104 12:07:40.771048   85759 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1104 12:07:40.771112   85759 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1104 12:07:40.780163   85759 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1104 12:07:40.789500   85759 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1104 12:07:40.789566   85759 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1104 12:07:40.799200   85759 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1104 12:07:40.808061   85759 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1104 12:07:40.808121   85759 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
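Since none of the /etc/kubernetes/*.conf files exist yet on the freshly restarted VM, each grep for the control-plane endpoint fails and the file is removed with rm -f so kubeadm can regenerate it below. A compact sketch of that keep-or-remove decision (cleanupStaleConfig is assumed, not the kubeadm.go implementation):

// Minimal sketch: keep a kubeconfig only if it already references the expected
// control-plane endpoint; otherwise remove it, mirroring the grep + rm -f above.
package main

import (
	"bytes"
	"fmt"
	"os"
)

func cleanupStaleConfig(endpoint string, paths []string) {
	for _, p := range paths {
		data, err := os.ReadFile(p)
		if err != nil || !bytes.Contains(data, []byte(endpoint)) {
			fmt.Printf("%q may not be in %s - will remove\n", endpoint, p)
			os.Remove(p) // missing files are fine, like rm -f
		}
	}
}

func main() {
	cleanupStaleConfig("https://control-plane.minikube.internal:8443", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}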
	I1104 12:07:40.817445   85759 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1104 12:07:40.826803   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1104 12:07:40.934345   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1104 12:07:40.292591   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:40.293050   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | unable to find current IP address of domain default-k8s-diff-port-036892 in network mk-default-k8s-diff-port-036892
	I1104 12:07:40.293084   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | I1104 12:07:40.292988   87142 retry.go:31] will retry after 1.110341741s: waiting for machine to come up
	I1104 12:07:41.405407   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:41.405858   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | unable to find current IP address of domain default-k8s-diff-port-036892 in network mk-default-k8s-diff-port-036892
	I1104 12:07:41.405885   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | I1104 12:07:41.405800   87142 retry.go:31] will retry after 1.311587036s: waiting for machine to come up
	I1104 12:07:42.719157   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:42.719540   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | unable to find current IP address of domain default-k8s-diff-port-036892 in network mk-default-k8s-diff-port-036892
	I1104 12:07:42.719591   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | I1104 12:07:42.719530   87142 retry.go:31] will retry after 1.999866716s: waiting for machine to come up
	I1104 12:07:44.721872   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:44.722324   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | unable to find current IP address of domain default-k8s-diff-port-036892 in network mk-default-k8s-diff-port-036892
	I1104 12:07:44.722351   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | I1104 12:07:44.722278   87142 retry.go:31] will retry after 2.895241769s: waiting for machine to come up
	I1104 12:07:41.512710   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1104 12:07:41.729355   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1104 12:07:41.807064   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1104 12:07:41.888493   85759 api_server.go:52] waiting for apiserver process to appear ...
	I1104 12:07:41.888593   85759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:07:42.389674   85759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:07:42.889373   85759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:07:43.389705   85759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:07:43.889548   85759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:07:43.924248   85759 api_server.go:72] duration metric: took 2.035753888s to wait for apiserver process to appear ...
	I1104 12:07:43.924277   85759 api_server.go:88] waiting for apiserver healthz status ...
	I1104 12:07:43.924320   85759 api_server.go:253] Checking apiserver healthz at https://192.168.39.47:8443/healthz ...
	I1104 12:07:43.924831   85759 api_server.go:269] stopped: https://192.168.39.47:8443/healthz: Get "https://192.168.39.47:8443/healthz": dial tcp 192.168.39.47:8443: connect: connection refused
	I1104 12:07:44.424651   85759 api_server.go:253] Checking apiserver healthz at https://192.168.39.47:8443/healthz ...
	I1104 12:07:47.043002   85759 api_server.go:279] https://192.168.39.47:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1104 12:07:47.043037   85759 api_server.go:103] status: https://192.168.39.47:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1104 12:07:47.043054   85759 api_server.go:253] Checking apiserver healthz at https://192.168.39.47:8443/healthz ...
	I1104 12:07:47.104246   85759 api_server.go:279] https://192.168.39.47:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1104 12:07:47.104276   85759 api_server.go:103] status: https://192.168.39.47:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1104 12:07:47.424506   85759 api_server.go:253] Checking apiserver healthz at https://192.168.39.47:8443/healthz ...
	I1104 12:07:47.430506   85759 api_server.go:279] https://192.168.39.47:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1104 12:07:47.430544   85759 api_server.go:103] status: https://192.168.39.47:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1104 12:07:47.924409   85759 api_server.go:253] Checking apiserver healthz at https://192.168.39.47:8443/healthz ...
	I1104 12:07:47.937055   85759 api_server.go:279] https://192.168.39.47:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1104 12:07:47.937083   85759 api_server.go:103] status: https://192.168.39.47:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1104 12:07:48.424568   85759 api_server.go:253] Checking apiserver healthz at https://192.168.39.47:8443/healthz ...
	I1104 12:07:48.428850   85759 api_server.go:279] https://192.168.39.47:8443/healthz returned 200:
	ok
	I1104 12:07:48.436388   85759 api_server.go:141] control plane version: v1.31.2
	I1104 12:07:48.436411   85759 api_server.go:131] duration metric: took 4.512127349s to wait for apiserver health ...
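	The 403 and 500 responses above are the normal progression for a restarting apiserver: anonymous access to /healthz is rejected until the RBAC bootstrap roles exist, and the verbose 500 output then lists the post-start hooks that have not yet finished. As a minimal sketch, the same probe can be run by hand against this profile (context name and IP taken from the log; the ?verbose parameter is standard apiserver behaviour):

	    kubectl --context embed-certs-325116 get --raw '/healthz?verbose'
	    curl -k https://192.168.39.47:8443/healthz?verbose    # anonymous, like the probe in the log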
	I1104 12:07:48.436420   85759 cni.go:84] Creating CNI manager for ""
	I1104 12:07:48.436427   85759 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1104 12:07:48.438220   85759 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1104 12:07:48.439495   85759 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1104 12:07:48.449650   85759 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
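	The 496-byte file copied here is minikube's bridge CNI config; its contents are not shown in the log, but it is typically a bridge-plus-portmap plugin chain. One way to inspect what was actually written on this node, as a sketch (profile name taken from the log):

	    minikube -p embed-certs-325116 ssh -- sudo cat /etc/cni/net.d/1-k8s.conflist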
	I1104 12:07:48.467313   85759 system_pods.go:43] waiting for kube-system pods to appear ...
	I1104 12:07:48.480777   85759 system_pods.go:59] 8 kube-system pods found
	I1104 12:07:48.480823   85759 system_pods.go:61] "coredns-7c65d6cfc9-mf8xg" [c0162005-7971-4161-9575-9f36c13d54f2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1104 12:07:48.480834   85759 system_pods.go:61] "etcd-embed-certs-325116" [4cfeeefb-d7e4-48b6-bea0-e9d967750770] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1104 12:07:48.480845   85759 system_pods.go:61] "kube-apiserver-embed-certs-325116" [69ad8209-af11-4479-b4f7-9991f98d74b9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1104 12:07:48.480859   85759 system_pods.go:61] "kube-controller-manager-embed-certs-325116" [1ba1fbaf-e1e2-4ca7-aef5-84c4410143c4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1104 12:07:48.480876   85759 system_pods.go:61] "kube-proxy-phzgx" [4ea64f2c-7568-486d-9941-f89ed4221f35] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1104 12:07:48.480893   85759 system_pods.go:61] "kube-scheduler-embed-certs-325116" [168359e4-eda1-4fb6-ab45-03e888466702] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1104 12:07:48.480907   85759 system_pods.go:61] "metrics-server-6867b74b74-knfd4" [5b3ef856-5b69-44b1-ae29-4a58bf235e41] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1104 12:07:48.480916   85759 system_pods.go:61] "storage-provisioner" [0dabcf5a-028b-4ab6-8af4-be25abaeb9b5] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1104 12:07:48.480928   85759 system_pods.go:74] duration metric: took 13.592864ms to wait for pod list to return data ...
	I1104 12:07:48.480947   85759 node_conditions.go:102] verifying NodePressure condition ...
	I1104 12:07:48.487234   85759 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1104 12:07:48.487271   85759 node_conditions.go:123] node cpu capacity is 2
	I1104 12:07:48.487284   85759 node_conditions.go:105] duration metric: took 6.331259ms to run NodePressure ...
	I1104 12:07:48.487313   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1104 12:07:48.756654   85759 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1104 12:07:48.764840   85759 kubeadm.go:739] kubelet initialised
	I1104 12:07:48.764863   85759 kubeadm.go:740] duration metric: took 8.175857ms waiting for restarted kubelet to initialise ...
	I1104 12:07:48.764871   85759 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1104 12:07:48.772653   85759 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-mf8xg" in "kube-system" namespace to be "Ready" ...
	I1104 12:07:48.784158   85759 pod_ready.go:98] node "embed-certs-325116" hosting pod "coredns-7c65d6cfc9-mf8xg" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-325116" has status "Ready":"False"
	I1104 12:07:48.784198   85759 pod_ready.go:82] duration metric: took 11.515605ms for pod "coredns-7c65d6cfc9-mf8xg" in "kube-system" namespace to be "Ready" ...
	E1104 12:07:48.784211   85759 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-325116" hosting pod "coredns-7c65d6cfc9-mf8xg" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-325116" has status "Ready":"False"
	I1104 12:07:48.784220   85759 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-325116" in "kube-system" namespace to be "Ready" ...
	I1104 12:07:48.791264   85759 pod_ready.go:98] node "embed-certs-325116" hosting pod "etcd-embed-certs-325116" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-325116" has status "Ready":"False"
	I1104 12:07:48.791297   85759 pod_ready.go:82] duration metric: took 7.066247ms for pod "etcd-embed-certs-325116" in "kube-system" namespace to be "Ready" ...
	E1104 12:07:48.791310   85759 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-325116" hosting pod "etcd-embed-certs-325116" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-325116" has status "Ready":"False"
	I1104 12:07:48.791326   85759 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-325116" in "kube-system" namespace to be "Ready" ...
	I1104 12:07:48.798259   85759 pod_ready.go:98] node "embed-certs-325116" hosting pod "kube-apiserver-embed-certs-325116" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-325116" has status "Ready":"False"
	I1104 12:07:48.798294   85759 pod_ready.go:82] duration metric: took 6.954559ms for pod "kube-apiserver-embed-certs-325116" in "kube-system" namespace to be "Ready" ...
	E1104 12:07:48.798304   85759 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-325116" hosting pod "kube-apiserver-embed-certs-325116" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-325116" has status "Ready":"False"
	I1104 12:07:48.798312   85759 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-325116" in "kube-system" namespace to be "Ready" ...
	I1104 12:07:48.872019   85759 pod_ready.go:98] node "embed-certs-325116" hosting pod "kube-controller-manager-embed-certs-325116" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-325116" has status "Ready":"False"
	I1104 12:07:48.872058   85759 pod_ready.go:82] duration metric: took 73.723761ms for pod "kube-controller-manager-embed-certs-325116" in "kube-system" namespace to be "Ready" ...
	E1104 12:07:48.872069   85759 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-325116" hosting pod "kube-controller-manager-embed-certs-325116" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-325116" has status "Ready":"False"
	I1104 12:07:48.872075   85759 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-phzgx" in "kube-system" namespace to be "Ready" ...
	I1104 12:07:49.271210   85759 pod_ready.go:98] node "embed-certs-325116" hosting pod "kube-proxy-phzgx" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-325116" has status "Ready":"False"
	I1104 12:07:49.271252   85759 pod_ready.go:82] duration metric: took 399.167509ms for pod "kube-proxy-phzgx" in "kube-system" namespace to be "Ready" ...
	E1104 12:07:49.271264   85759 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-325116" hosting pod "kube-proxy-phzgx" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-325116" has status "Ready":"False"
	I1104 12:07:49.271272   85759 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-325116" in "kube-system" namespace to be "Ready" ...
	I1104 12:07:49.671430   85759 pod_ready.go:98] node "embed-certs-325116" hosting pod "kube-scheduler-embed-certs-325116" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-325116" has status "Ready":"False"
	I1104 12:07:49.671453   85759 pod_ready.go:82] duration metric: took 400.174495ms for pod "kube-scheduler-embed-certs-325116" in "kube-system" namespace to be "Ready" ...
	E1104 12:07:49.671469   85759 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-325116" hosting pod "kube-scheduler-embed-certs-325116" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-325116" has status "Ready":"False"
	I1104 12:07:49.671475   85759 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace to be "Ready" ...
	I1104 12:07:50.070546   85759 pod_ready.go:98] node "embed-certs-325116" hosting pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-325116" has status "Ready":"False"
	I1104 12:07:50.070576   85759 pod_ready.go:82] duration metric: took 399.092108ms for pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace to be "Ready" ...
	E1104 12:07:50.070587   85759 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-325116" hosting pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-325116" has status "Ready":"False"
	I1104 12:07:50.070596   85759 pod_ready.go:39] duration metric: took 1.305717298s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1104 12:07:50.070615   85759 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1104 12:07:50.082815   85759 ops.go:34] apiserver oom_adj: -16
	I1104 12:07:50.082839   85759 kubeadm.go:597] duration metric: took 9.429385589s to restartPrimaryControlPlane
	I1104 12:07:50.082850   85759 kubeadm.go:394] duration metric: took 9.481667011s to StartCluster
	I1104 12:07:50.082871   85759 settings.go:142] acquiring lock: {Name:mk5550833c1f1a4ab4fbb2bf42ff8bd7f6341220 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 12:07:50.082952   85759 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19906-19898/kubeconfig
	I1104 12:07:50.086014   85759 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19906-19898/kubeconfig: {Name:mk164657c178e6abe086acb5c1cadb969cb2b0cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 12:07:50.086562   85759 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.47 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1104 12:07:50.086628   85759 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1104 12:07:50.086740   85759 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-325116"
	I1104 12:07:50.086763   85759 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-325116"
	I1104 12:07:50.086765   85759 config.go:182] Loaded profile config "embed-certs-325116": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	W1104 12:07:50.086776   85759 addons.go:243] addon storage-provisioner should already be in state true
	I1104 12:07:50.086774   85759 addons.go:69] Setting default-storageclass=true in profile "embed-certs-325116"
	I1104 12:07:50.086803   85759 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-325116"
	I1104 12:07:50.086787   85759 addons.go:69] Setting metrics-server=true in profile "embed-certs-325116"
	I1104 12:07:50.086812   85759 host.go:66] Checking if "embed-certs-325116" exists ...
	I1104 12:07:50.086825   85759 addons.go:234] Setting addon metrics-server=true in "embed-certs-325116"
	W1104 12:07:50.086837   85759 addons.go:243] addon metrics-server should already be in state true
	I1104 12:07:50.086866   85759 host.go:66] Checking if "embed-certs-325116" exists ...
	I1104 12:07:50.087120   85759 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:07:50.087148   85759 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:07:50.087160   85759 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:07:50.087178   85759 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:07:50.087247   85759 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:07:50.087286   85759 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:07:50.088320   85759 out.go:177] * Verifying Kubernetes components...
	I1104 12:07:50.089814   85759 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1104 12:07:50.102796   85759 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44903
	I1104 12:07:50.102976   85759 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36761
	I1104 12:07:50.103076   85759 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42697
	I1104 12:07:50.103462   85759 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:07:50.103491   85759 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:07:50.103566   85759 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:07:50.103990   85759 main.go:141] libmachine: Using API Version  1
	I1104 12:07:50.104014   85759 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:07:50.104085   85759 main.go:141] libmachine: Using API Version  1
	I1104 12:07:50.104101   85759 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:07:50.104199   85759 main.go:141] libmachine: Using API Version  1
	I1104 12:07:50.104223   85759 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:07:50.104368   85759 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:07:50.104402   85759 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:07:50.104545   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetState
	I1104 12:07:50.104559   85759 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:07:50.104949   85759 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:07:50.104987   85759 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:07:50.105081   85759 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:07:50.105116   85759 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:07:50.108134   85759 addons.go:234] Setting addon default-storageclass=true in "embed-certs-325116"
	W1104 12:07:50.108161   85759 addons.go:243] addon default-storageclass should already be in state true
	I1104 12:07:50.108193   85759 host.go:66] Checking if "embed-certs-325116" exists ...
	I1104 12:07:50.108597   85759 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:07:50.108648   85759 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:07:50.121556   85759 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39445
	I1104 12:07:50.122038   85759 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:07:50.122504   85759 main.go:141] libmachine: Using API Version  1
	I1104 12:07:50.122527   85759 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:07:50.122869   85759 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:07:50.123107   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetState
	I1104 12:07:50.125142   85759 main.go:141] libmachine: (embed-certs-325116) Calling .DriverName
	I1104 12:07:50.125294   85759 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44771
	I1104 12:07:50.125613   85759 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:07:50.125972   85759 main.go:141] libmachine: Using API Version  1
	I1104 12:07:50.125988   85759 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:07:50.126279   85759 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:07:50.126399   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetState
	I1104 12:07:50.127256   85759 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1104 12:07:50.127993   85759 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40487
	I1104 12:07:50.128235   85759 main.go:141] libmachine: (embed-certs-325116) Calling .DriverName
	I1104 12:07:50.128597   85759 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:07:50.128843   85759 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1104 12:07:50.128864   85759 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1104 12:07:50.128883   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHHostname
	I1104 12:07:50.129066   85759 main.go:141] libmachine: Using API Version  1
	I1104 12:07:50.129088   85759 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:07:50.129389   85759 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:07:50.129882   85759 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1104 12:07:47.619516   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:47.620045   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | unable to find current IP address of domain default-k8s-diff-port-036892 in network mk-default-k8s-diff-port-036892
	I1104 12:07:47.620072   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | I1104 12:07:47.620000   87142 retry.go:31] will retry after 3.554669963s: waiting for machine to come up
	I1104 12:07:50.130127   85759 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:07:50.130187   85759 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:07:50.131115   85759 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1104 12:07:50.131134   85759 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1104 12:07:50.131154   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHHostname
	I1104 12:07:50.131899   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:50.132352   85759 main.go:141] libmachine: (embed-certs-325116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:ab:49", ip: ""} in network mk-embed-certs-325116: {Iface:virbr1 ExpiryTime:2024-11-04 13:07:28 +0000 UTC Type:0 Mac:52:54:00:bd:ab:49 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:embed-certs-325116 Clientid:01:52:54:00:bd:ab:49}
	I1104 12:07:50.132375   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined IP address 192.168.39.47 and MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:50.132664   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHPort
	I1104 12:07:50.132830   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHKeyPath
	I1104 12:07:50.132986   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHUsername
	I1104 12:07:50.133099   85759 sshutil.go:53] new ssh client: &{IP:192.168.39.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/embed-certs-325116/id_rsa Username:docker}
	I1104 12:07:50.134698   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:50.135217   85759 main.go:141] libmachine: (embed-certs-325116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:ab:49", ip: ""} in network mk-embed-certs-325116: {Iface:virbr1 ExpiryTime:2024-11-04 13:07:28 +0000 UTC Type:0 Mac:52:54:00:bd:ab:49 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:embed-certs-325116 Clientid:01:52:54:00:bd:ab:49}
	I1104 12:07:50.135246   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined IP address 192.168.39.47 and MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:50.135454   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHPort
	I1104 12:07:50.135629   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHKeyPath
	I1104 12:07:50.135765   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHUsername
	I1104 12:07:50.135908   85759 sshutil.go:53] new ssh client: &{IP:192.168.39.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/embed-certs-325116/id_rsa Username:docker}
	I1104 12:07:50.146618   85759 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37765
	I1104 12:07:50.147639   85759 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:07:50.148281   85759 main.go:141] libmachine: Using API Version  1
	I1104 12:07:50.148307   85759 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:07:50.148617   85759 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:07:50.148860   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetState
	I1104 12:07:50.150751   85759 main.go:141] libmachine: (embed-certs-325116) Calling .DriverName
	I1104 12:07:50.151010   85759 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1104 12:07:50.151028   85759 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1104 12:07:50.151050   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHHostname
	I1104 12:07:50.153947   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:50.154385   85759 main.go:141] libmachine: (embed-certs-325116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:ab:49", ip: ""} in network mk-embed-certs-325116: {Iface:virbr1 ExpiryTime:2024-11-04 13:07:28 +0000 UTC Type:0 Mac:52:54:00:bd:ab:49 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:embed-certs-325116 Clientid:01:52:54:00:bd:ab:49}
	I1104 12:07:50.154418   85759 main.go:141] libmachine: (embed-certs-325116) DBG | domain embed-certs-325116 has defined IP address 192.168.39.47 and MAC address 52:54:00:bd:ab:49 in network mk-embed-certs-325116
	I1104 12:07:50.154560   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHPort
	I1104 12:07:50.154749   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHKeyPath
	I1104 12:07:50.154886   85759 main.go:141] libmachine: (embed-certs-325116) Calling .GetSSHUsername
	I1104 12:07:50.155028   85759 sshutil.go:53] new ssh client: &{IP:192.168.39.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/embed-certs-325116/id_rsa Username:docker}
	I1104 12:07:50.278380   85759 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1104 12:07:50.294682   85759 node_ready.go:35] waiting up to 6m0s for node "embed-certs-325116" to be "Ready" ...
	I1104 12:07:50.355769   85759 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1104 12:07:50.355790   85759 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1104 12:07:50.375818   85759 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1104 12:07:50.404741   85759 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1104 12:07:50.404766   85759 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1104 12:07:50.466718   85759 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1104 12:07:50.466748   85759 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1104 12:07:50.493662   85759 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1104 12:07:50.503255   85759 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1104 12:07:50.799735   85759 main.go:141] libmachine: Making call to close driver server
	I1104 12:07:50.799772   85759 main.go:141] libmachine: (embed-certs-325116) Calling .Close
	I1104 12:07:50.800039   85759 main.go:141] libmachine: Successfully made call to close driver server
	I1104 12:07:50.800086   85759 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 12:07:50.800094   85759 main.go:141] libmachine: (embed-certs-325116) DBG | Closing plugin on server side
	I1104 12:07:50.800107   85759 main.go:141] libmachine: Making call to close driver server
	I1104 12:07:50.800159   85759 main.go:141] libmachine: (embed-certs-325116) Calling .Close
	I1104 12:07:50.800382   85759 main.go:141] libmachine: Successfully made call to close driver server
	I1104 12:07:50.800394   85759 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 12:07:50.810559   85759 main.go:141] libmachine: Making call to close driver server
	I1104 12:07:50.810586   85759 main.go:141] libmachine: (embed-certs-325116) Calling .Close
	I1104 12:07:50.810857   85759 main.go:141] libmachine: Successfully made call to close driver server
	I1104 12:07:50.810876   85759 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 12:07:50.810893   85759 main.go:141] libmachine: (embed-certs-325116) DBG | Closing plugin on server side
	I1104 12:07:51.484326   85759 main.go:141] libmachine: Making call to close driver server
	I1104 12:07:51.484356   85759 main.go:141] libmachine: (embed-certs-325116) Calling .Close
	I1104 12:07:51.484671   85759 main.go:141] libmachine: Successfully made call to close driver server
	I1104 12:07:51.484687   85759 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 12:07:51.484695   85759 main.go:141] libmachine: Making call to close driver server
	I1104 12:07:51.484702   85759 main.go:141] libmachine: (embed-certs-325116) Calling .Close
	I1104 12:07:51.484899   85759 main.go:141] libmachine: Successfully made call to close driver server
	I1104 12:07:51.484938   85759 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 12:07:51.484950   85759 addons.go:475] Verifying addon metrics-server=true in "embed-certs-325116"
	I1104 12:07:51.549507   85759 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.046214827s)
	I1104 12:07:51.549559   85759 main.go:141] libmachine: Making call to close driver server
	I1104 12:07:51.549569   85759 main.go:141] libmachine: (embed-certs-325116) Calling .Close
	I1104 12:07:51.549886   85759 main.go:141] libmachine: Successfully made call to close driver server
	I1104 12:07:51.549906   85759 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 12:07:51.549909   85759 main.go:141] libmachine: (embed-certs-325116) DBG | Closing plugin on server side
	I1104 12:07:51.549916   85759 main.go:141] libmachine: Making call to close driver server
	I1104 12:07:51.549923   85759 main.go:141] libmachine: (embed-certs-325116) Calling .Close
	I1104 12:07:51.550143   85759 main.go:141] libmachine: Successfully made call to close driver server
	I1104 12:07:51.550164   85759 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 12:07:51.552039   85759 out.go:177] * Enabled addons: default-storageclass, metrics-server, storage-provisioner
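	At this point the addon manifests have been applied, but the metrics-server APIService can still take a short while to become Available (its pod was Pending in the pod list above). A few standard kubectl checks against this profile, as a sketch (context name taken from the log):

	    kubectl --context embed-certs-325116 -n kube-system get deploy metrics-server
	    kubectl --context embed-certs-325116 get apiservice v1beta1.metrics.k8s.io
	    kubectl --context embed-certs-325116 top nodes    # succeeds once the APIService reports Available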
	I1104 12:07:52.573915   86402 start.go:364] duration metric: took 3m30.781955626s to acquireMachinesLock for "old-k8s-version-589257"
	I1104 12:07:52.573984   86402 start.go:96] Skipping create...Using existing machine configuration
	I1104 12:07:52.573996   86402 fix.go:54] fixHost starting: 
	I1104 12:07:52.574443   86402 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:07:52.574500   86402 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:07:52.594310   86402 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33975
	I1104 12:07:52.594822   86402 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:07:52.595317   86402 main.go:141] libmachine: Using API Version  1
	I1104 12:07:52.595347   86402 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:07:52.595727   86402 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:07:52.595924   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .DriverName
	I1104 12:07:52.596093   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetState
	I1104 12:07:52.597578   86402 fix.go:112] recreateIfNeeded on old-k8s-version-589257: state=Stopped err=<nil>
	I1104 12:07:52.597615   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .DriverName
	W1104 12:07:52.597752   86402 fix.go:138] unexpected machine state, will restart: <nil>
	I1104 12:07:52.599659   86402 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-589257" ...
	I1104 12:07:51.176791   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:51.177282   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Found IP for machine: 192.168.72.130
	I1104 12:07:51.177313   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has current primary IP address 192.168.72.130 and MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:51.177325   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Reserving static IP address...
	I1104 12:07:51.177817   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-036892", mac: "52:54:00:da:02:d6", ip: "192.168.72.130"} in network mk-default-k8s-diff-port-036892: {Iface:virbr4 ExpiryTime:2024-11-04 13:07:45 +0000 UTC Type:0 Mac:52:54:00:da:02:d6 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:default-k8s-diff-port-036892 Clientid:01:52:54:00:da:02:d6}
	I1104 12:07:51.177863   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | skip adding static IP to network mk-default-k8s-diff-port-036892 - found existing host DHCP lease matching {name: "default-k8s-diff-port-036892", mac: "52:54:00:da:02:d6", ip: "192.168.72.130"}
	I1104 12:07:51.177876   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Reserved static IP address: 192.168.72.130
	I1104 12:07:51.177890   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Waiting for SSH to be available...
	I1104 12:07:51.177897   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | Getting to WaitForSSH function...
	I1104 12:07:51.180038   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:51.180440   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:02:d6", ip: ""} in network mk-default-k8s-diff-port-036892: {Iface:virbr4 ExpiryTime:2024-11-04 13:07:45 +0000 UTC Type:0 Mac:52:54:00:da:02:d6 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:default-k8s-diff-port-036892 Clientid:01:52:54:00:da:02:d6}
	I1104 12:07:51.180466   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined IP address 192.168.72.130 and MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:51.180581   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | Using SSH client type: external
	I1104 12:07:51.180611   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | Using SSH private key: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/default-k8s-diff-port-036892/id_rsa (-rw-------)
	I1104 12:07:51.180747   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.130 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19906-19898/.minikube/machines/default-k8s-diff-port-036892/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1104 12:07:51.180777   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | About to run SSH command:
	I1104 12:07:51.180795   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | exit 0
	I1104 12:07:51.309075   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | SSH cmd err, output: <nil>: 
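	The WaitForSSH probe above simply runs `exit 0` over SSH with the machine's private key. The same session can be opened by hand, either through minikube or with the options logged above (key path and IP copied from the log):

	    minikube ssh -p default-k8s-diff-port-036892
	    ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
	        -i /home/jenkins/minikube-integration/19906-19898/.minikube/machines/default-k8s-diff-port-036892/id_rsa \
	        docker@192.168.72.130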
	I1104 12:07:51.309445   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetConfigRaw
	I1104 12:07:51.310162   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetIP
	I1104 12:07:51.312651   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:51.313061   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:02:d6", ip: ""} in network mk-default-k8s-diff-port-036892: {Iface:virbr4 ExpiryTime:2024-11-04 13:07:45 +0000 UTC Type:0 Mac:52:54:00:da:02:d6 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:default-k8s-diff-port-036892 Clientid:01:52:54:00:da:02:d6}
	I1104 12:07:51.313090   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined IP address 192.168.72.130 and MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:51.313460   86301 profile.go:143] Saving config to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/default-k8s-diff-port-036892/config.json ...
	I1104 12:07:51.313720   86301 machine.go:93] provisionDockerMachine start ...
	I1104 12:07:51.313747   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .DriverName
	I1104 12:07:51.313926   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHHostname
	I1104 12:07:51.316269   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:51.316782   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:02:d6", ip: ""} in network mk-default-k8s-diff-port-036892: {Iface:virbr4 ExpiryTime:2024-11-04 13:07:45 +0000 UTC Type:0 Mac:52:54:00:da:02:d6 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:default-k8s-diff-port-036892 Clientid:01:52:54:00:da:02:d6}
	I1104 12:07:51.316829   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined IP address 192.168.72.130 and MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:51.316937   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHPort
	I1104 12:07:51.317162   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHKeyPath
	I1104 12:07:51.317335   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHKeyPath
	I1104 12:07:51.317598   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHUsername
	I1104 12:07:51.317777   86301 main.go:141] libmachine: Using SSH client type: native
	I1104 12:07:51.317981   86301 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.130 22 <nil> <nil>}
	I1104 12:07:51.317994   86301 main.go:141] libmachine: About to run SSH command:
	hostname
	I1104 12:07:51.441588   86301 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1104 12:07:51.441626   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetMachineName
	I1104 12:07:51.441876   86301 buildroot.go:166] provisioning hostname "default-k8s-diff-port-036892"
	I1104 12:07:51.441902   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetMachineName
	I1104 12:07:51.442097   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHHostname
	I1104 12:07:51.445155   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:51.445637   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:02:d6", ip: ""} in network mk-default-k8s-diff-port-036892: {Iface:virbr4 ExpiryTime:2024-11-04 13:07:45 +0000 UTC Type:0 Mac:52:54:00:da:02:d6 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:default-k8s-diff-port-036892 Clientid:01:52:54:00:da:02:d6}
	I1104 12:07:51.445670   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined IP address 192.168.72.130 and MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:51.445820   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHPort
	I1104 12:07:51.446013   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHKeyPath
	I1104 12:07:51.446186   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHKeyPath
	I1104 12:07:51.446352   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHUsername
	I1104 12:07:51.446539   86301 main.go:141] libmachine: Using SSH client type: native
	I1104 12:07:51.446753   86301 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.130 22 <nil> <nil>}
	I1104 12:07:51.446773   86301 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-036892 && echo "default-k8s-diff-port-036892" | sudo tee /etc/hostname
	I1104 12:07:51.578973   86301 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-036892
	
	I1104 12:07:51.579004   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHHostname
	I1104 12:07:51.581759   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:51.582105   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:02:d6", ip: ""} in network mk-default-k8s-diff-port-036892: {Iface:virbr4 ExpiryTime:2024-11-04 13:07:45 +0000 UTC Type:0 Mac:52:54:00:da:02:d6 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:default-k8s-diff-port-036892 Clientid:01:52:54:00:da:02:d6}
	I1104 12:07:51.582135   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined IP address 192.168.72.130 and MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:51.582299   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHPort
	I1104 12:07:51.582455   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHKeyPath
	I1104 12:07:51.582582   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHKeyPath
	I1104 12:07:51.582712   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHUsername
	I1104 12:07:51.582834   86301 main.go:141] libmachine: Using SSH client type: native
	I1104 12:07:51.583006   86301 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.130 22 <nil> <nil>}
	I1104 12:07:51.583022   86301 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-036892' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-036892/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-036892' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1104 12:07:51.702410   86301 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1104 12:07:51.702441   86301 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19906-19898/.minikube CaCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19906-19898/.minikube}
	I1104 12:07:51.702471   86301 buildroot.go:174] setting up certificates
	I1104 12:07:51.702483   86301 provision.go:84] configureAuth start
	I1104 12:07:51.702492   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetMachineName
	I1104 12:07:51.702789   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetIP
	I1104 12:07:51.705067   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:51.705419   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:02:d6", ip: ""} in network mk-default-k8s-diff-port-036892: {Iface:virbr4 ExpiryTime:2024-11-04 13:07:45 +0000 UTC Type:0 Mac:52:54:00:da:02:d6 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:default-k8s-diff-port-036892 Clientid:01:52:54:00:da:02:d6}
	I1104 12:07:51.705449   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined IP address 192.168.72.130 and MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:51.705567   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHHostname
	I1104 12:07:51.707341   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:51.707627   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:02:d6", ip: ""} in network mk-default-k8s-diff-port-036892: {Iface:virbr4 ExpiryTime:2024-11-04 13:07:45 +0000 UTC Type:0 Mac:52:54:00:da:02:d6 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:default-k8s-diff-port-036892 Clientid:01:52:54:00:da:02:d6}
	I1104 12:07:51.707658   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined IP address 192.168.72.130 and MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:51.707748   86301 provision.go:143] copyHostCerts
	I1104 12:07:51.707805   86301 exec_runner.go:144] found /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem, removing ...
	I1104 12:07:51.707818   86301 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem
	I1104 12:07:51.707870   86301 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem (1078 bytes)
	I1104 12:07:51.707969   86301 exec_runner.go:144] found /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem, removing ...
	I1104 12:07:51.707978   86301 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem
	I1104 12:07:51.707999   86301 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem (1123 bytes)
	I1104 12:07:51.708061   86301 exec_runner.go:144] found /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem, removing ...
	I1104 12:07:51.708067   86301 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem
	I1104 12:07:51.708085   86301 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem (1679 bytes)
	I1104 12:07:51.708132   86301 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-036892 san=[127.0.0.1 192.168.72.130 default-k8s-diff-port-036892 localhost minikube]
	I1104 12:07:51.935898   86301 provision.go:177] copyRemoteCerts
	I1104 12:07:51.935973   86301 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1104 12:07:51.936008   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHHostname
	I1104 12:07:51.938722   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:51.939100   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:02:d6", ip: ""} in network mk-default-k8s-diff-port-036892: {Iface:virbr4 ExpiryTime:2024-11-04 13:07:45 +0000 UTC Type:0 Mac:52:54:00:da:02:d6 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:default-k8s-diff-port-036892 Clientid:01:52:54:00:da:02:d6}
	I1104 12:07:51.939134   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined IP address 192.168.72.130 and MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:51.939266   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHPort
	I1104 12:07:51.939462   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHKeyPath
	I1104 12:07:51.939609   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHUsername
	I1104 12:07:51.939786   86301 sshutil.go:53] new ssh client: &{IP:192.168.72.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/default-k8s-diff-port-036892/id_rsa Username:docker}
	I1104 12:07:52.027147   86301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1104 12:07:52.054828   86301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1104 12:07:52.078755   86301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1104 12:07:52.101312   86301 provision.go:87] duration metric: took 398.817409ms to configureAuth
	I1104 12:07:52.101338   86301 buildroot.go:189] setting minikube options for container-runtime
	I1104 12:07:52.101523   86301 config.go:182] Loaded profile config "default-k8s-diff-port-036892": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1104 12:07:52.101608   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHHostname
	I1104 12:07:52.104187   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:52.104549   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:02:d6", ip: ""} in network mk-default-k8s-diff-port-036892: {Iface:virbr4 ExpiryTime:2024-11-04 13:07:45 +0000 UTC Type:0 Mac:52:54:00:da:02:d6 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:default-k8s-diff-port-036892 Clientid:01:52:54:00:da:02:d6}
	I1104 12:07:52.104581   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined IP address 192.168.72.130 and MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:52.104700   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHPort
	I1104 12:07:52.104890   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHKeyPath
	I1104 12:07:52.105028   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHKeyPath
	I1104 12:07:52.105157   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHUsername
	I1104 12:07:52.105319   86301 main.go:141] libmachine: Using SSH client type: native
	I1104 12:07:52.105490   86301 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.130 22 <nil> <nil>}
	I1104 12:07:52.105514   86301 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1104 12:07:52.331840   86301 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1104 12:07:52.331865   86301 machine.go:96] duration metric: took 1.018128337s to provisionDockerMachine
	I1104 12:07:52.331875   86301 start.go:293] postStartSetup for "default-k8s-diff-port-036892" (driver="kvm2")
	I1104 12:07:52.331884   86301 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1104 12:07:52.331898   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .DriverName
	I1104 12:07:52.332229   86301 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1104 12:07:52.332261   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHHostname
	I1104 12:07:52.334710   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:52.335005   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:02:d6", ip: ""} in network mk-default-k8s-diff-port-036892: {Iface:virbr4 ExpiryTime:2024-11-04 13:07:45 +0000 UTC Type:0 Mac:52:54:00:da:02:d6 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:default-k8s-diff-port-036892 Clientid:01:52:54:00:da:02:d6}
	I1104 12:07:52.335036   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined IP address 192.168.72.130 and MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:52.335176   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHPort
	I1104 12:07:52.335342   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHKeyPath
	I1104 12:07:52.335447   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHUsername
	I1104 12:07:52.335547   86301 sshutil.go:53] new ssh client: &{IP:192.168.72.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/default-k8s-diff-port-036892/id_rsa Username:docker}
	I1104 12:07:52.419392   86301 ssh_runner.go:195] Run: cat /etc/os-release
	I1104 12:07:52.423306   86301 info.go:137] Remote host: Buildroot 2023.02.9
	I1104 12:07:52.423335   86301 filesync.go:126] Scanning /home/jenkins/minikube-integration/19906-19898/.minikube/addons for local assets ...
	I1104 12:07:52.423396   86301 filesync.go:126] Scanning /home/jenkins/minikube-integration/19906-19898/.minikube/files for local assets ...
	I1104 12:07:52.423483   86301 filesync.go:149] local asset: /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem -> 272182.pem in /etc/ssl/certs
	I1104 12:07:52.423575   86301 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1104 12:07:52.432625   86301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem --> /etc/ssl/certs/272182.pem (1708 bytes)
	I1104 12:07:52.456616   86301 start.go:296] duration metric: took 124.726284ms for postStartSetup
	I1104 12:07:52.456664   86301 fix.go:56] duration metric: took 17.406645021s for fixHost
	I1104 12:07:52.456689   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHHostname
	I1104 12:07:52.459189   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:52.459540   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:02:d6", ip: ""} in network mk-default-k8s-diff-port-036892: {Iface:virbr4 ExpiryTime:2024-11-04 13:07:45 +0000 UTC Type:0 Mac:52:54:00:da:02:d6 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:default-k8s-diff-port-036892 Clientid:01:52:54:00:da:02:d6}
	I1104 12:07:52.459573   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined IP address 192.168.72.130 and MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:52.459777   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHPort
	I1104 12:07:52.459967   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHKeyPath
	I1104 12:07:52.460093   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHKeyPath
	I1104 12:07:52.460218   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHUsername
	I1104 12:07:52.460349   86301 main.go:141] libmachine: Using SSH client type: native
	I1104 12:07:52.460521   86301 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.130 22 <nil> <nil>}
	I1104 12:07:52.460533   86301 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1104 12:07:52.573760   86301 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730722072.546172571
	
	I1104 12:07:52.573781   86301 fix.go:216] guest clock: 1730722072.546172571
	I1104 12:07:52.573787   86301 fix.go:229] Guest: 2024-11-04 12:07:52.546172571 +0000 UTC Remote: 2024-11-04 12:07:52.45666981 +0000 UTC m=+212.207079326 (delta=89.502761ms)
	I1104 12:07:52.573827   86301 fix.go:200] guest clock delta is within tolerance: 89.502761ms
	I1104 12:07:52.573832   86301 start.go:83] releasing machines lock for "default-k8s-diff-port-036892", held for 17.523849814s
	I1104 12:07:52.573856   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .DriverName
	I1104 12:07:52.574109   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetIP
	I1104 12:07:52.576773   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:52.577125   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:02:d6", ip: ""} in network mk-default-k8s-diff-port-036892: {Iface:virbr4 ExpiryTime:2024-11-04 13:07:45 +0000 UTC Type:0 Mac:52:54:00:da:02:d6 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:default-k8s-diff-port-036892 Clientid:01:52:54:00:da:02:d6}
	I1104 12:07:52.577151   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined IP address 192.168.72.130 and MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:52.577304   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .DriverName
	I1104 12:07:52.577776   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .DriverName
	I1104 12:07:52.577950   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .DriverName
	I1104 12:07:52.578043   86301 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1104 12:07:52.578079   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHHostname
	I1104 12:07:52.578133   86301 ssh_runner.go:195] Run: cat /version.json
	I1104 12:07:52.578159   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHHostname
	I1104 12:07:52.580773   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:52.580909   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:52.581128   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:02:d6", ip: ""} in network mk-default-k8s-diff-port-036892: {Iface:virbr4 ExpiryTime:2024-11-04 13:07:45 +0000 UTC Type:0 Mac:52:54:00:da:02:d6 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:default-k8s-diff-port-036892 Clientid:01:52:54:00:da:02:d6}
	I1104 12:07:52.581154   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined IP address 192.168.72.130 and MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:52.581179   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:02:d6", ip: ""} in network mk-default-k8s-diff-port-036892: {Iface:virbr4 ExpiryTime:2024-11-04 13:07:45 +0000 UTC Type:0 Mac:52:54:00:da:02:d6 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:default-k8s-diff-port-036892 Clientid:01:52:54:00:da:02:d6}
	I1104 12:07:52.581196   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined IP address 192.168.72.130 and MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:52.581286   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHPort
	I1104 12:07:52.581488   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHPort
	I1104 12:07:52.581529   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHKeyPath
	I1104 12:07:52.581660   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHUsername
	I1104 12:07:52.581677   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHKeyPath
	I1104 12:07:52.581770   86301 sshutil.go:53] new ssh client: &{IP:192.168.72.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/default-k8s-diff-port-036892/id_rsa Username:docker}
	I1104 12:07:52.581823   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHUsername
	I1104 12:07:52.581946   86301 sshutil.go:53] new ssh client: &{IP:192.168.72.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/default-k8s-diff-port-036892/id_rsa Username:docker}
	I1104 12:07:52.683801   86301 ssh_runner.go:195] Run: systemctl --version
	I1104 12:07:52.689498   86301 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1104 12:07:52.830236   86301 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1104 12:07:52.835868   86301 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1104 12:07:52.835951   86301 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1104 12:07:52.851557   86301 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1104 12:07:52.851586   86301 start.go:495] detecting cgroup driver to use...
	I1104 12:07:52.851656   86301 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1104 12:07:52.868648   86301 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1104 12:07:52.883434   86301 docker.go:217] disabling cri-docker service (if available) ...
	I1104 12:07:52.883507   86301 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1104 12:07:52.898233   86301 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1104 12:07:52.912615   86301 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1104 12:07:53.036342   86301 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1104 12:07:53.183326   86301 docker.go:233] disabling docker service ...
	I1104 12:07:53.183407   86301 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1104 12:07:53.197465   86301 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1104 12:07:53.210118   86301 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1104 12:07:53.354857   86301 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1104 12:07:53.490760   86301 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1104 12:07:53.506829   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1104 12:07:53.526401   86301 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1104 12:07:53.526464   86301 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:07:53.537264   86301 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1104 12:07:53.537339   86301 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:07:53.547882   86301 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:07:53.558039   86301 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:07:53.569347   86301 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1104 12:07:53.579931   86301 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:07:53.589594   86301 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:07:53.606753   86301 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:07:53.623316   86301 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1104 12:07:53.638183   86301 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1104 12:07:53.638245   86301 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1104 12:07:53.656452   86301 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1104 12:07:53.666343   86301 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1104 12:07:53.784882   86301 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1104 12:07:53.879727   86301 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1104 12:07:53.879790   86301 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1104 12:07:53.884438   86301 start.go:563] Will wait 60s for crictl version
	I1104 12:07:53.884494   86301 ssh_runner.go:195] Run: which crictl
	I1104 12:07:53.887785   86301 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1104 12:07:53.926395   86301 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1104 12:07:53.926496   86301 ssh_runner.go:195] Run: crio --version
	I1104 12:07:53.963049   86301 ssh_runner.go:195] Run: crio --version
	I1104 12:07:53.996513   86301 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1104 12:07:53.997774   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetIP
	I1104 12:07:54.000829   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:54.001214   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:02:d6", ip: ""} in network mk-default-k8s-diff-port-036892: {Iface:virbr4 ExpiryTime:2024-11-04 13:07:45 +0000 UTC Type:0 Mac:52:54:00:da:02:d6 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:default-k8s-diff-port-036892 Clientid:01:52:54:00:da:02:d6}
	I1104 12:07:54.001300   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined IP address 192.168.72.130 and MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:07:54.001469   86301 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1104 12:07:54.005521   86301 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1104 12:07:54.021723   86301 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-036892 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-036892 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.130 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1104 12:07:54.021915   86301 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1104 12:07:54.021979   86301 ssh_runner.go:195] Run: sudo crictl images --output json
	I1104 12:07:54.072114   86301 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1104 12:07:54.072178   86301 ssh_runner.go:195] Run: which lz4
	I1104 12:07:54.077106   86301 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1104 12:07:54.081979   86301 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1104 12:07:54.082018   86301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1104 12:07:51.553141   85759 addons.go:510] duration metric: took 1.466523338s for enable addons: enabled=[default-storageclass metrics-server storage-provisioner]
	I1104 12:07:52.298494   85759 node_ready.go:53] node "embed-certs-325116" has status "Ready":"False"
	I1104 12:07:54.299895   85759 node_ready.go:53] node "embed-certs-325116" has status "Ready":"False"
	I1104 12:07:52.600997   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .Start
	I1104 12:07:52.601180   86402 main.go:141] libmachine: (old-k8s-version-589257) Ensuring networks are active...
	I1104 12:07:52.602131   86402 main.go:141] libmachine: (old-k8s-version-589257) Ensuring network default is active
	I1104 12:07:52.602560   86402 main.go:141] libmachine: (old-k8s-version-589257) Ensuring network mk-old-k8s-version-589257 is active
	I1104 12:07:52.603030   86402 main.go:141] libmachine: (old-k8s-version-589257) Getting domain xml...
	I1104 12:07:52.603859   86402 main.go:141] libmachine: (old-k8s-version-589257) Creating domain...
	I1104 12:07:53.855214   86402 main.go:141] libmachine: (old-k8s-version-589257) Waiting to get IP...
	I1104 12:07:53.856063   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:07:53.856539   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | unable to find current IP address of domain old-k8s-version-589257 in network mk-old-k8s-version-589257
	I1104 12:07:53.856594   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | I1104 12:07:53.856513   87367 retry.go:31] will retry after 268.725451ms: waiting for machine to come up
	I1104 12:07:54.127094   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:07:54.127584   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | unable to find current IP address of domain old-k8s-version-589257 in network mk-old-k8s-version-589257
	I1104 12:07:54.127612   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | I1104 12:07:54.127560   87367 retry.go:31] will retry after 239.665225ms: waiting for machine to come up
	I1104 12:07:54.369139   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:07:54.369777   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | unable to find current IP address of domain old-k8s-version-589257 in network mk-old-k8s-version-589257
	I1104 12:07:54.369798   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | I1104 12:07:54.369710   87367 retry.go:31] will retry after 386.228261ms: waiting for machine to come up
	I1104 12:07:54.757191   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:07:54.757637   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | unable to find current IP address of domain old-k8s-version-589257 in network mk-old-k8s-version-589257
	I1104 12:07:54.757665   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | I1104 12:07:54.757591   87367 retry.go:31] will retry after 571.244573ms: waiting for machine to come up
	I1104 12:07:55.330439   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:07:55.331187   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | unable to find current IP address of domain old-k8s-version-589257 in network mk-old-k8s-version-589257
	I1104 12:07:55.331216   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | I1104 12:07:55.331144   87367 retry.go:31] will retry after 539.328185ms: waiting for machine to come up
	I1104 12:07:55.871869   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:07:55.872373   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | unable to find current IP address of domain old-k8s-version-589257 in network mk-old-k8s-version-589257
	I1104 12:07:55.872403   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | I1104 12:07:55.872335   87367 retry.go:31] will retry after 879.285089ms: waiting for machine to come up
	I1104 12:07:55.376802   86301 crio.go:462] duration metric: took 1.299729399s to copy over tarball
	I1104 12:07:55.376881   86301 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1104 12:07:57.716230   86301 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.339307666s)
	I1104 12:07:57.716268   86301 crio.go:469] duration metric: took 2.339436958s to extract the tarball
	I1104 12:07:57.716277   86301 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1104 12:07:57.753216   86301 ssh_runner.go:195] Run: sudo crictl images --output json
	I1104 12:07:57.799042   86301 crio.go:514] all images are preloaded for cri-o runtime.
	I1104 12:07:57.799145   86301 cache_images.go:84] Images are preloaded, skipping loading
	I1104 12:07:57.799161   86301 kubeadm.go:934] updating node { 192.168.72.130 8444 v1.31.2 crio true true} ...
	I1104 12:07:57.799273   86301 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-036892 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.130
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-036892 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1104 12:07:57.799347   86301 ssh_runner.go:195] Run: crio config
	I1104 12:07:57.851871   86301 cni.go:84] Creating CNI manager for ""
	I1104 12:07:57.851892   86301 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1104 12:07:57.851900   86301 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1104 12:07:57.851919   86301 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.130 APIServerPort:8444 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-036892 NodeName:default-k8s-diff-port-036892 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.130"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.130 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1104 12:07:57.852056   86301 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.130
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-036892"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.130"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.130"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1104 12:07:57.852116   86301 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1104 12:07:57.862269   86301 binaries.go:44] Found k8s binaries, skipping transfer
	I1104 12:07:57.862343   86301 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1104 12:07:57.872253   86301 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1104 12:07:57.889328   86301 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1104 12:07:57.908250   86301 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2308 bytes)
	I1104 12:07:57.926081   86301 ssh_runner.go:195] Run: grep 192.168.72.130	control-plane.minikube.internal$ /etc/hosts
	I1104 12:07:57.929870   86301 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.130	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1104 12:07:57.943872   86301 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1104 12:07:58.070141   86301 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1104 12:07:58.089370   86301 certs.go:68] Setting up /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/default-k8s-diff-port-036892 for IP: 192.168.72.130
	I1104 12:07:58.089397   86301 certs.go:194] generating shared ca certs ...
	I1104 12:07:58.089423   86301 certs.go:226] acquiring lock for ca certs: {Name:mk4fb0469da6697654d0290fd25edddd463f47c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 12:07:58.089596   86301 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.key
	I1104 12:07:58.089647   86301 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.key
	I1104 12:07:58.089659   86301 certs.go:256] generating profile certs ...
	I1104 12:07:58.089765   86301 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/default-k8s-diff-port-036892/client.key
	I1104 12:07:58.089831   86301 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/default-k8s-diff-port-036892/apiserver.key.713851b2
	I1104 12:07:58.089889   86301 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/default-k8s-diff-port-036892/proxy-client.key
	I1104 12:07:58.090054   86301 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218.pem (1338 bytes)
	W1104 12:07:58.090100   86301 certs.go:480] ignoring /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218_empty.pem, impossibly tiny 0 bytes
	I1104 12:07:58.090116   86301 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem (1675 bytes)
	I1104 12:07:58.090149   86301 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem (1078 bytes)
	I1104 12:07:58.090184   86301 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem (1123 bytes)
	I1104 12:07:58.090219   86301 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem (1679 bytes)
	I1104 12:07:58.090279   86301 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem (1708 bytes)
	I1104 12:07:58.090977   86301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1104 12:07:58.125282   86301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1104 12:07:58.168289   86301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1104 12:07:58.210967   86301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1104 12:07:58.253986   86301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/default-k8s-diff-port-036892/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1104 12:07:58.280769   86301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/default-k8s-diff-port-036892/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1104 12:07:58.308406   86301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/default-k8s-diff-port-036892/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1104 12:07:58.334250   86301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/default-k8s-diff-port-036892/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1104 12:07:58.363224   86301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem --> /usr/share/ca-certificates/272182.pem (1708 bytes)
	I1104 12:07:58.391795   86301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1104 12:07:58.420782   86301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218.pem --> /usr/share/ca-certificates/27218.pem (1338 bytes)
	I1104 12:07:58.446611   86301 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1104 12:07:58.465895   86301 ssh_runner.go:195] Run: openssl version
	I1104 12:07:58.471614   86301 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/27218.pem && ln -fs /usr/share/ca-certificates/27218.pem /etc/ssl/certs/27218.pem"
	I1104 12:07:58.482139   86301 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/27218.pem
	I1104 12:07:58.486533   86301 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  4 10:49 /usr/share/ca-certificates/27218.pem
	I1104 12:07:58.486591   86301 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/27218.pem
	I1104 12:07:58.492217   86301 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/27218.pem /etc/ssl/certs/51391683.0"
	I1104 12:07:58.502724   86301 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/272182.pem && ln -fs /usr/share/ca-certificates/272182.pem /etc/ssl/certs/272182.pem"
	I1104 12:07:58.514146   86301 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/272182.pem
	I1104 12:07:58.518243   86301 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  4 10:49 /usr/share/ca-certificates/272182.pem
	I1104 12:07:58.518303   86301 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/272182.pem
	I1104 12:07:58.523579   86301 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/272182.pem /etc/ssl/certs/3ec20f2e.0"
	I1104 12:07:58.533993   86301 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1104 12:07:58.544137   86301 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1104 12:07:58.548190   86301 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  4 10:38 /usr/share/ca-certificates/minikubeCA.pem
	I1104 12:07:58.548250   86301 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1104 12:07:58.553714   86301 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1104 12:07:58.564221   86301 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1104 12:07:58.568445   86301 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1104 12:07:58.574072   86301 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1104 12:07:58.579551   86301 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1104 12:07:58.584909   86301 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1104 12:07:58.590102   86301 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1104 12:07:58.595227   86301 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1104 12:07:58.600338   86301 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-036892 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-036892 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.130 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1104 12:07:58.600445   86301 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1104 12:07:58.600492   86301 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1104 12:07:58.634282   86301 cri.go:89] found id: ""
	I1104 12:07:58.634352   86301 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1104 12:07:58.644578   86301 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1104 12:07:58.644597   86301 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1104 12:07:58.644635   86301 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1104 12:07:58.654412   86301 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1104 12:07:58.655638   86301 kubeconfig.go:125] found "default-k8s-diff-port-036892" server: "https://192.168.72.130:8444"
	I1104 12:07:58.658639   86301 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1104 12:07:58.667867   86301 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.130
	I1104 12:07:58.667900   86301 kubeadm.go:1160] stopping kube-system containers ...
	I1104 12:07:58.667913   86301 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1104 12:07:58.667971   86301 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1104 12:07:58.702765   86301 cri.go:89] found id: ""
	I1104 12:07:58.702844   86301 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1104 12:07:58.718368   86301 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1104 12:07:58.727671   86301 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1104 12:07:58.727690   86301 kubeadm.go:157] found existing configuration files:
	
	I1104 12:07:58.727750   86301 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1104 12:07:58.736350   86301 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1104 12:07:58.736424   86301 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1104 12:07:58.745441   86301 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1104 12:07:58.753945   86301 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1104 12:07:58.754011   86301 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1104 12:07:58.763134   86301 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1104 12:07:58.771588   86301 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1104 12:07:58.771651   86301 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1104 12:07:58.780623   86301 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1104 12:07:58.788962   86301 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1104 12:07:58.789036   86301 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1104 12:07:58.798472   86301 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1104 12:07:58.808209   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1104 12:07:58.919153   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1104 12:07:59.679355   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1104 12:07:59.889628   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1104 12:07:59.958981   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1104 12:08:00.048061   86301 api_server.go:52] waiting for apiserver process to appear ...
	I1104 12:08:00.048158   86301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:07:56.798747   85759 node_ready.go:53] node "embed-certs-325116" has status "Ready":"False"
	I1104 12:07:57.799286   85759 node_ready.go:49] node "embed-certs-325116" has status "Ready":"True"
	I1104 12:07:57.799308   85759 node_ready.go:38] duration metric: took 7.504592975s for node "embed-certs-325116" to be "Ready" ...
	I1104 12:07:57.799319   85759 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1104 12:07:57.805595   85759 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-mf8xg" in "kube-system" namespace to be "Ready" ...
	I1104 12:07:57.812394   85759 pod_ready.go:93] pod "coredns-7c65d6cfc9-mf8xg" in "kube-system" namespace has status "Ready":"True"
	I1104 12:07:57.812421   85759 pod_ready.go:82] duration metric: took 6.791823ms for pod "coredns-7c65d6cfc9-mf8xg" in "kube-system" namespace to be "Ready" ...
	I1104 12:07:57.812434   85759 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-325116" in "kube-system" namespace to be "Ready" ...
	I1104 12:07:57.818338   85759 pod_ready.go:93] pod "etcd-embed-certs-325116" in "kube-system" namespace has status "Ready":"True"
	I1104 12:07:57.818359   85759 pod_ready.go:82] duration metric: took 5.916571ms for pod "etcd-embed-certs-325116" in "kube-system" namespace to be "Ready" ...
	I1104 12:07:57.818400   85759 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-325116" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:00.015222   85759 pod_ready.go:103] pod "kube-apiserver-embed-certs-325116" in "kube-system" namespace has status "Ready":"False"
	I1104 12:07:56.752983   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:07:56.753577   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | unable to find current IP address of domain old-k8s-version-589257 in network mk-old-k8s-version-589257
	I1104 12:07:56.753613   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | I1104 12:07:56.753542   87367 retry.go:31] will retry after 1.081359862s: waiting for machine to come up
	I1104 12:07:57.836518   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:07:57.836963   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | unable to find current IP address of domain old-k8s-version-589257 in network mk-old-k8s-version-589257
	I1104 12:07:57.836990   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | I1104 12:07:57.836914   87367 retry.go:31] will retry after 1.149571097s: waiting for machine to come up
	I1104 12:07:58.987694   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:07:58.988125   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | unable to find current IP address of domain old-k8s-version-589257 in network mk-old-k8s-version-589257
	I1104 12:07:58.988152   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | I1104 12:07:58.988077   87367 retry.go:31] will retry after 1.247311806s: waiting for machine to come up
	I1104 12:08:00.237634   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:00.238147   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | unable to find current IP address of domain old-k8s-version-589257 in network mk-old-k8s-version-589257
	I1104 12:08:00.238217   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | I1104 12:08:00.238109   87367 retry.go:31] will retry after 2.058125339s: waiting for machine to come up
	I1104 12:08:00.549003   86301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:01.048325   86301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:01.548502   86301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:01.563976   86301 api_server.go:72] duration metric: took 1.515915725s to wait for apiserver process to appear ...
	I1104 12:08:01.564003   86301 api_server.go:88] waiting for apiserver healthz status ...
	I1104 12:08:01.564021   86301 api_server.go:253] Checking apiserver healthz at https://192.168.72.130:8444/healthz ...
	I1104 12:08:04.008662   86301 api_server.go:279] https://192.168.72.130:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1104 12:08:04.008689   86301 api_server.go:103] status: https://192.168.72.130:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1104 12:08:04.008701   86301 api_server.go:253] Checking apiserver healthz at https://192.168.72.130:8444/healthz ...
	I1104 12:08:04.033053   86301 api_server.go:279] https://192.168.72.130:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1104 12:08:04.033085   86301 api_server.go:103] status: https://192.168.72.130:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1104 12:08:04.064261   86301 api_server.go:253] Checking apiserver healthz at https://192.168.72.130:8444/healthz ...
	I1104 12:08:04.084034   86301 api_server.go:279] https://192.168.72.130:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1104 12:08:04.084062   86301 api_server.go:103] status: https://192.168.72.130:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1104 12:08:04.564564   86301 api_server.go:253] Checking apiserver healthz at https://192.168.72.130:8444/healthz ...
	I1104 12:08:04.570062   86301 api_server.go:279] https://192.168.72.130:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1104 12:08:04.570090   86301 api_server.go:103] status: https://192.168.72.130:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1104 12:08:05.064688   86301 api_server.go:253] Checking apiserver healthz at https://192.168.72.130:8444/healthz ...
	I1104 12:08:05.069572   86301 api_server.go:279] https://192.168.72.130:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1104 12:08:05.069600   86301 api_server.go:103] status: https://192.168.72.130:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1104 12:08:05.564628   86301 api_server.go:253] Checking apiserver healthz at https://192.168.72.130:8444/healthz ...
	I1104 12:08:05.570537   86301 api_server.go:279] https://192.168.72.130:8444/healthz returned 200:
	ok
	I1104 12:08:05.577335   86301 api_server.go:141] control plane version: v1.31.2
	I1104 12:08:05.577360   86301 api_server.go:131] duration metric: took 4.01335048s to wait for apiserver health ...
	I1104 12:08:05.577371   86301 cni.go:84] Creating CNI manager for ""
	I1104 12:08:05.577379   86301 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1104 12:08:05.578990   86301 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1104 12:08:01.824677   85759 pod_ready.go:93] pod "kube-apiserver-embed-certs-325116" in "kube-system" namespace has status "Ready":"True"
	I1104 12:08:01.824703   85759 pod_ready.go:82] duration metric: took 4.006292816s for pod "kube-apiserver-embed-certs-325116" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:01.824717   85759 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-325116" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:01.833386   85759 pod_ready.go:93] pod "kube-controller-manager-embed-certs-325116" in "kube-system" namespace has status "Ready":"True"
	I1104 12:08:01.833415   85759 pod_ready.go:82] duration metric: took 8.688522ms for pod "kube-controller-manager-embed-certs-325116" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:01.833428   85759 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-phzgx" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:01.839346   85759 pod_ready.go:93] pod "kube-proxy-phzgx" in "kube-system" namespace has status "Ready":"True"
	I1104 12:08:01.839370   85759 pod_ready.go:82] duration metric: took 5.933971ms for pod "kube-proxy-phzgx" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:01.839379   85759 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-325116" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:01.844449   85759 pod_ready.go:93] pod "kube-scheduler-embed-certs-325116" in "kube-system" namespace has status "Ready":"True"
	I1104 12:08:01.844476   85759 pod_ready.go:82] duration metric: took 5.08969ms for pod "kube-scheduler-embed-certs-325116" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:01.844490   85759 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:03.852871   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:02.298631   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:02.299046   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | unable to find current IP address of domain old-k8s-version-589257 in network mk-old-k8s-version-589257
	I1104 12:08:02.299079   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | I1104 12:08:02.298978   87367 retry.go:31] will retry after 2.664667046s: waiting for machine to come up
	I1104 12:08:04.964700   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:04.965185   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | unable to find current IP address of domain old-k8s-version-589257 in network mk-old-k8s-version-589257
	I1104 12:08:04.965209   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | I1104 12:08:04.965135   87367 retry.go:31] will retry after 2.716802395s: waiting for machine to come up
	I1104 12:08:05.580188   86301 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1104 12:08:05.591930   86301 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1104 12:08:05.609969   86301 system_pods.go:43] waiting for kube-system pods to appear ...
	I1104 12:08:05.621524   86301 system_pods.go:59] 8 kube-system pods found
	I1104 12:08:05.621559   86301 system_pods.go:61] "coredns-7c65d6cfc9-zw2tv" [71ce75a4-f051-4014-9ed0-7b275ea940a9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1104 12:08:05.621579   86301 system_pods.go:61] "etcd-default-k8s-diff-port-036892" [7e46d97c-96b5-4301-b98a-f33b69937411] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1104 12:08:05.621590   86301 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-036892" [483cebd0-7ceb-4bf4-b1f9-e33be61b136e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1104 12:08:05.621599   86301 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-036892" [c2dc4343-177a-4a4c-8a25-47078ec664f1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1104 12:08:05.621609   86301 system_pods.go:61] "kube-proxy-j2srm" [9450cebd-aefb-4f1a-bb99-7d1dab054dd7] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1104 12:08:05.621623   86301 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-036892" [505d8202-5e02-4abd-9eff-163810a91eb2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1104 12:08:05.621637   86301 system_pods.go:61] "metrics-server-6867b74b74-2wl94" [7f7cc9c1-420c-480e-b6b7-1a2027bf2f9d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1104 12:08:05.621646   86301 system_pods.go:61] "storage-provisioner" [18745f89-fc15-4a4c-b68b-7e80cd4f393b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1104 12:08:05.621656   86301 system_pods.go:74] duration metric: took 11.668493ms to wait for pod list to return data ...
	I1104 12:08:05.621669   86301 node_conditions.go:102] verifying NodePressure condition ...
	I1104 12:08:05.626555   86301 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1104 12:08:05.626583   86301 node_conditions.go:123] node cpu capacity is 2
	I1104 12:08:05.626600   86301 node_conditions.go:105] duration metric: took 4.924748ms to run NodePressure ...
	I1104 12:08:05.626620   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1104 12:08:05.899159   86301 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1104 12:08:05.905004   86301 kubeadm.go:739] kubelet initialised
	I1104 12:08:05.905027   86301 kubeadm.go:740] duration metric: took 5.831926ms waiting for restarted kubelet to initialise ...
	I1104 12:08:05.905035   86301 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1104 12:08:05.910301   86301 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-zw2tv" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:05.917517   86301 pod_ready.go:98] node "default-k8s-diff-port-036892" hosting pod "coredns-7c65d6cfc9-zw2tv" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-036892" has status "Ready":"False"
	I1104 12:08:05.917552   86301 pod_ready.go:82] duration metric: took 7.223252ms for pod "coredns-7c65d6cfc9-zw2tv" in "kube-system" namespace to be "Ready" ...
	E1104 12:08:05.917564   86301 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-036892" hosting pod "coredns-7c65d6cfc9-zw2tv" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-036892" has status "Ready":"False"
	I1104 12:08:05.917577   86301 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-036892" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:05.924077   86301 pod_ready.go:98] node "default-k8s-diff-port-036892" hosting pod "etcd-default-k8s-diff-port-036892" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-036892" has status "Ready":"False"
	I1104 12:08:05.924108   86301 pod_ready.go:82] duration metric: took 6.519268ms for pod "etcd-default-k8s-diff-port-036892" in "kube-system" namespace to be "Ready" ...
	E1104 12:08:05.924123   86301 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-036892" hosting pod "etcd-default-k8s-diff-port-036892" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-036892" has status "Ready":"False"
	I1104 12:08:05.924133   86301 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-036892" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:05.929584   86301 pod_ready.go:98] node "default-k8s-diff-port-036892" hosting pod "kube-apiserver-default-k8s-diff-port-036892" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-036892" has status "Ready":"False"
	I1104 12:08:05.929611   86301 pod_ready.go:82] duration metric: took 5.464108ms for pod "kube-apiserver-default-k8s-diff-port-036892" in "kube-system" namespace to be "Ready" ...
	E1104 12:08:05.929625   86301 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-036892" hosting pod "kube-apiserver-default-k8s-diff-port-036892" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-036892" has status "Ready":"False"
	I1104 12:08:05.929640   86301 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-036892" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:06.013629   86301 pod_ready.go:98] node "default-k8s-diff-port-036892" hosting pod "kube-controller-manager-default-k8s-diff-port-036892" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-036892" has status "Ready":"False"
	I1104 12:08:06.013655   86301 pod_ready.go:82] duration metric: took 84.003349ms for pod "kube-controller-manager-default-k8s-diff-port-036892" in "kube-system" namespace to be "Ready" ...
	E1104 12:08:06.013666   86301 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-036892" hosting pod "kube-controller-manager-default-k8s-diff-port-036892" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-036892" has status "Ready":"False"
	I1104 12:08:06.013674   86301 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-j2srm" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:06.413337   86301 pod_ready.go:98] node "default-k8s-diff-port-036892" hosting pod "kube-proxy-j2srm" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-036892" has status "Ready":"False"
	I1104 12:08:06.413362   86301 pod_ready.go:82] duration metric: took 399.676932ms for pod "kube-proxy-j2srm" in "kube-system" namespace to be "Ready" ...
	E1104 12:08:06.413372   86301 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-036892" hosting pod "kube-proxy-j2srm" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-036892" has status "Ready":"False"
	I1104 12:08:06.413379   86301 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-036892" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:06.813910   86301 pod_ready.go:98] node "default-k8s-diff-port-036892" hosting pod "kube-scheduler-default-k8s-diff-port-036892" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-036892" has status "Ready":"False"
	I1104 12:08:06.813948   86301 pod_ready.go:82] duration metric: took 400.558541ms for pod "kube-scheduler-default-k8s-diff-port-036892" in "kube-system" namespace to be "Ready" ...
	E1104 12:08:06.813962   86301 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-036892" hosting pod "kube-scheduler-default-k8s-diff-port-036892" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-036892" has status "Ready":"False"
	I1104 12:08:06.813971   86301 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:07.213603   86301 pod_ready.go:98] node "default-k8s-diff-port-036892" hosting pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-036892" has status "Ready":"False"
	I1104 12:08:07.213632   86301 pod_ready.go:82] duration metric: took 399.645898ms for pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace to be "Ready" ...
	E1104 12:08:07.213642   86301 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-036892" hosting pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-036892" has status "Ready":"False"
	I1104 12:08:07.213650   86301 pod_ready.go:39] duration metric: took 1.308606058s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1104 12:08:07.213664   86301 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1104 12:08:07.224946   86301 ops.go:34] apiserver oom_adj: -16
	I1104 12:08:07.224972   86301 kubeadm.go:597] duration metric: took 8.580368331s to restartPrimaryControlPlane
	I1104 12:08:07.224984   86301 kubeadm.go:394] duration metric: took 8.624649305s to StartCluster
	I1104 12:08:07.225005   86301 settings.go:142] acquiring lock: {Name:mk5550833c1f1a4ab4fbb2bf42ff8bd7f6341220 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 12:08:07.225093   86301 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19906-19898/kubeconfig
	I1104 12:08:07.226601   86301 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19906-19898/kubeconfig: {Name:mk164657c178e6abe086acb5c1cadb969cb2b0cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 12:08:07.226848   86301 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.130 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1104 12:08:07.226980   86301 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1104 12:08:07.227075   86301 config.go:182] Loaded profile config "default-k8s-diff-port-036892": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1104 12:08:07.227096   86301 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-036892"
	I1104 12:08:07.227115   86301 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-036892"
	I1104 12:08:07.227110   86301 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-036892"
	W1104 12:08:07.227128   86301 addons.go:243] addon metrics-server should already be in state true
	I1104 12:08:07.227145   86301 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-036892"
	I1104 12:08:07.227161   86301 host.go:66] Checking if "default-k8s-diff-port-036892" exists ...
	I1104 12:08:07.227082   86301 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-036892"
	I1104 12:08:07.227275   86301 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-036892"
	W1104 12:08:07.227291   86301 addons.go:243] addon storage-provisioner should already be in state true
	I1104 12:08:07.227316   86301 host.go:66] Checking if "default-k8s-diff-port-036892" exists ...
	I1104 12:08:07.227494   86301 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:08:07.227529   86301 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:08:07.227592   86301 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:08:07.227620   86301 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:08:07.227634   86301 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:08:07.227655   86301 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:08:07.228583   86301 out.go:177] * Verifying Kubernetes components...
	I1104 12:08:07.229927   86301 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1104 12:08:07.242580   86301 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41275
	I1104 12:08:07.243096   86301 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:08:07.243659   86301 main.go:141] libmachine: Using API Version  1
	I1104 12:08:07.243678   86301 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:08:07.243954   86301 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45813
	I1104 12:08:07.244058   86301 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:08:07.244513   86301 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:08:07.244634   86301 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:08:07.244679   86301 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:08:07.245015   86301 main.go:141] libmachine: Using API Version  1
	I1104 12:08:07.245035   86301 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:08:07.245437   86301 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:08:07.245905   86301 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:08:07.245942   86301 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:08:07.245963   86301 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43217
	I1104 12:08:07.246281   86301 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:08:07.246725   86301 main.go:141] libmachine: Using API Version  1
	I1104 12:08:07.246748   86301 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:08:07.247084   86301 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:08:07.247294   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetState
	I1104 12:08:07.250833   86301 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-036892"
	W1104 12:08:07.250857   86301 addons.go:243] addon default-storageclass should already be in state true
	I1104 12:08:07.250884   86301 host.go:66] Checking if "default-k8s-diff-port-036892" exists ...
	I1104 12:08:07.251243   86301 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:08:07.251285   86301 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:08:07.261670   86301 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34265
	I1104 12:08:07.261736   86301 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36543
	I1104 12:08:07.262154   86301 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:08:07.262283   86301 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:08:07.262803   86301 main.go:141] libmachine: Using API Version  1
	I1104 12:08:07.262821   86301 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:08:07.262916   86301 main.go:141] libmachine: Using API Version  1
	I1104 12:08:07.262927   86301 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:08:07.263218   86301 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:08:07.263282   86301 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:08:07.263411   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetState
	I1104 12:08:07.263457   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetState
	I1104 12:08:07.265067   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .DriverName
	I1104 12:08:07.265574   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .DriverName
	I1104 12:08:07.267307   86301 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1104 12:08:07.267336   86301 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1104 12:08:07.268853   86301 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1104 12:08:07.268874   86301 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1104 12:08:07.268895   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHHostname
	I1104 12:08:07.268976   86301 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1104 12:08:07.268994   86301 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1104 12:08:07.269011   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHHostname
	I1104 12:08:07.271584   86301 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39607
	I1104 12:08:07.272047   86301 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:08:07.272347   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:08:07.272377   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:08:07.272688   86301 main.go:141] libmachine: Using API Version  1
	I1104 12:08:07.272707   86301 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:08:07.272933   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:02:d6", ip: ""} in network mk-default-k8s-diff-port-036892: {Iface:virbr4 ExpiryTime:2024-11-04 13:07:45 +0000 UTC Type:0 Mac:52:54:00:da:02:d6 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:default-k8s-diff-port-036892 Clientid:01:52:54:00:da:02:d6}
	I1104 12:08:07.272959   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined IP address 192.168.72.130 and MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:08:07.272990   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:02:d6", ip: ""} in network mk-default-k8s-diff-port-036892: {Iface:virbr4 ExpiryTime:2024-11-04 13:07:45 +0000 UTC Type:0 Mac:52:54:00:da:02:d6 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:default-k8s-diff-port-036892 Clientid:01:52:54:00:da:02:d6}
	I1104 12:08:07.273007   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined IP address 192.168.72.130 and MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:08:07.273065   86301 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:08:07.273149   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHPort
	I1104 12:08:07.273564   86301 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:08:07.273597   86301 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:08:07.273765   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHPort
	I1104 12:08:07.273767   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHKeyPath
	I1104 12:08:07.273925   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHKeyPath
	I1104 12:08:07.273966   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHUsername
	I1104 12:08:07.274049   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHUsername
	I1104 12:08:07.274098   86301 sshutil.go:53] new ssh client: &{IP:192.168.72.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/default-k8s-diff-port-036892/id_rsa Username:docker}
	I1104 12:08:07.274179   86301 sshutil.go:53] new ssh client: &{IP:192.168.72.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/default-k8s-diff-port-036892/id_rsa Username:docker}
	I1104 12:08:07.288474   86301 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36605
	I1104 12:08:07.288955   86301 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:08:07.289555   86301 main.go:141] libmachine: Using API Version  1
	I1104 12:08:07.289580   86301 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:08:07.289915   86301 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:08:07.290128   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetState
	I1104 12:08:07.291744   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .DriverName
	I1104 12:08:07.291944   86301 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1104 12:08:07.291958   86301 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1104 12:08:07.291972   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHHostname
	I1104 12:08:07.294477   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:08:07.294793   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:02:d6", ip: ""} in network mk-default-k8s-diff-port-036892: {Iface:virbr4 ExpiryTime:2024-11-04 13:07:45 +0000 UTC Type:0 Mac:52:54:00:da:02:d6 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:default-k8s-diff-port-036892 Clientid:01:52:54:00:da:02:d6}
	I1104 12:08:07.294824   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | domain default-k8s-diff-port-036892 has defined IP address 192.168.72.130 and MAC address 52:54:00:da:02:d6 in network mk-default-k8s-diff-port-036892
	I1104 12:08:07.295009   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHPort
	I1104 12:08:07.295178   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHKeyPath
	I1104 12:08:07.295326   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .GetSSHUsername
	I1104 12:08:07.295444   86301 sshutil.go:53] new ssh client: &{IP:192.168.72.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/default-k8s-diff-port-036892/id_rsa Username:docker}
	I1104 12:08:07.430295   86301 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1104 12:08:07.461396   86301 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-036892" to be "Ready" ...
	I1104 12:08:07.523117   86301 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1104 12:08:07.542339   86301 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1104 12:08:07.542361   86301 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1104 12:08:07.566207   86301 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1104 12:08:07.566232   86301 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1104 12:08:07.580871   86301 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1104 12:08:07.596309   86301 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1104 12:08:07.596338   86301 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1104 12:08:07.626662   86301 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1104 12:08:08.553268   86301 main.go:141] libmachine: Making call to close driver server
	I1104 12:08:08.553295   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .Close
	I1104 12:08:08.553315   86301 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.030165078s)
	I1104 12:08:08.553352   86301 main.go:141] libmachine: Making call to close driver server
	I1104 12:08:08.553373   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .Close
	I1104 12:08:08.553656   86301 main.go:141] libmachine: Successfully made call to close driver server
	I1104 12:08:08.553673   86301 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 12:08:08.553683   86301 main.go:141] libmachine: Making call to close driver server
	I1104 12:08:08.553692   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .Close
	I1104 12:08:08.553739   86301 main.go:141] libmachine: Successfully made call to close driver server
	I1104 12:08:08.553759   86301 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 12:08:08.553767   86301 main.go:141] libmachine: Making call to close driver server
	I1104 12:08:08.553780   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .Close
	I1104 12:08:08.553925   86301 main.go:141] libmachine: Successfully made call to close driver server
	I1104 12:08:08.553942   86301 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 12:08:08.554106   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | Closing plugin on server side
	I1104 12:08:08.554138   86301 main.go:141] libmachine: Successfully made call to close driver server
	I1104 12:08:08.554155   86301 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 12:08:08.559615   86301 main.go:141] libmachine: Making call to close driver server
	I1104 12:08:08.559635   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .Close
	I1104 12:08:08.559944   86301 main.go:141] libmachine: Successfully made call to close driver server
	I1104 12:08:08.559961   86301 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 12:08:08.563833   86301 main.go:141] libmachine: Making call to close driver server
	I1104 12:08:08.563848   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .Close
	I1104 12:08:08.564072   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | Closing plugin on server side
	I1104 12:08:08.564636   86301 main.go:141] libmachine: Successfully made call to close driver server
	I1104 12:08:08.564653   86301 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 12:08:08.564666   86301 main.go:141] libmachine: Making call to close driver server
	I1104 12:08:08.564671   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) Calling .Close
	I1104 12:08:08.564894   86301 main.go:141] libmachine: Successfully made call to close driver server
	I1104 12:08:08.564906   86301 main.go:141] libmachine: (default-k8s-diff-port-036892) DBG | Closing plugin on server side
	I1104 12:08:08.564912   86301 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 12:08:08.564940   86301 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-036892"
	I1104 12:08:08.566838   86301 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1104 12:08:08.568165   86301 addons.go:510] duration metric: took 1.341200959s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1104 12:08:09.465405   86301 node_ready.go:53] node "default-k8s-diff-port-036892" has status "Ready":"False"
	I1104 12:08:06.350759   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:08.850563   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:10.851315   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:07.683582   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:07.684143   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | unable to find current IP address of domain old-k8s-version-589257 in network mk-old-k8s-version-589257
	I1104 12:08:07.684172   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | I1104 12:08:07.684093   87367 retry.go:31] will retry after 2.880856513s: waiting for machine to come up
	I1104 12:08:10.566197   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:10.566657   86402 main.go:141] libmachine: (old-k8s-version-589257) Found IP for machine: 192.168.50.180
	I1104 12:08:10.566675   86402 main.go:141] libmachine: (old-k8s-version-589257) Reserving static IP address...
	I1104 12:08:10.566687   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has current primary IP address 192.168.50.180 and MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:10.567139   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | found host DHCP lease matching {name: "old-k8s-version-589257", mac: "52:54:00:6b:6c:11", ip: "192.168.50.180"} in network mk-old-k8s-version-589257: {Iface:virbr2 ExpiryTime:2024-11-04 13:08:03 +0000 UTC Type:0 Mac:52:54:00:6b:6c:11 Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:old-k8s-version-589257 Clientid:01:52:54:00:6b:6c:11}
	I1104 12:08:10.567166   86402 main.go:141] libmachine: (old-k8s-version-589257) Reserved static IP address: 192.168.50.180
	I1104 12:08:10.567186   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | skip adding static IP to network mk-old-k8s-version-589257 - found existing host DHCP lease matching {name: "old-k8s-version-589257", mac: "52:54:00:6b:6c:11", ip: "192.168.50.180"}
	I1104 12:08:10.567199   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | Getting to WaitForSSH function...
	I1104 12:08:10.567213   86402 main.go:141] libmachine: (old-k8s-version-589257) Waiting for SSH to be available...
	I1104 12:08:10.569500   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:10.569816   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:6c:11", ip: ""} in network mk-old-k8s-version-589257: {Iface:virbr2 ExpiryTime:2024-11-04 13:08:03 +0000 UTC Type:0 Mac:52:54:00:6b:6c:11 Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:old-k8s-version-589257 Clientid:01:52:54:00:6b:6c:11}
	I1104 12:08:10.569846   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined IP address 192.168.50.180 and MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:10.569982   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | Using SSH client type: external
	I1104 12:08:10.570004   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | Using SSH private key: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/old-k8s-version-589257/id_rsa (-rw-------)
	I1104 12:08:10.570025   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.180 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19906-19898/.minikube/machines/old-k8s-version-589257/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1104 12:08:10.570033   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | About to run SSH command:
	I1104 12:08:10.570041   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | exit 0
	I1104 12:08:10.697114   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | SSH cmd err, output: <nil>: 
	I1104 12:08:10.697552   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetConfigRaw
	I1104 12:08:10.698196   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetIP
	I1104 12:08:10.700982   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:10.701369   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:6c:11", ip: ""} in network mk-old-k8s-version-589257: {Iface:virbr2 ExpiryTime:2024-11-04 13:08:03 +0000 UTC Type:0 Mac:52:54:00:6b:6c:11 Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:old-k8s-version-589257 Clientid:01:52:54:00:6b:6c:11}
	I1104 12:08:10.701403   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined IP address 192.168.50.180 and MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:10.701649   86402 profile.go:143] Saving config to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/old-k8s-version-589257/config.json ...
	I1104 12:08:10.701875   86402 machine.go:93] provisionDockerMachine start ...
	I1104 12:08:10.701898   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .DriverName
	I1104 12:08:10.702099   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHHostname
	I1104 12:08:10.704605   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:10.704977   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:6c:11", ip: ""} in network mk-old-k8s-version-589257: {Iface:virbr2 ExpiryTime:2024-11-04 13:08:03 +0000 UTC Type:0 Mac:52:54:00:6b:6c:11 Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:old-k8s-version-589257 Clientid:01:52:54:00:6b:6c:11}
	I1104 12:08:10.705006   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined IP address 192.168.50.180 and MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:10.705151   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHPort
	I1104 12:08:10.705342   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHKeyPath
	I1104 12:08:10.705486   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHKeyPath
	I1104 12:08:10.705602   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHUsername
	I1104 12:08:10.705703   86402 main.go:141] libmachine: Using SSH client type: native
	I1104 12:08:10.705907   86402 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.180 22 <nil> <nil>}
	I1104 12:08:10.705918   86402 main.go:141] libmachine: About to run SSH command:
	hostname
	I1104 12:08:10.813494   86402 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1104 12:08:10.813544   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetMachineName
	I1104 12:08:10.813816   86402 buildroot.go:166] provisioning hostname "old-k8s-version-589257"
	I1104 12:08:10.813847   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetMachineName
	I1104 12:08:10.814034   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHHostname
	I1104 12:08:10.816782   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:10.817186   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:6c:11", ip: ""} in network mk-old-k8s-version-589257: {Iface:virbr2 ExpiryTime:2024-11-04 13:08:03 +0000 UTC Type:0 Mac:52:54:00:6b:6c:11 Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:old-k8s-version-589257 Clientid:01:52:54:00:6b:6c:11}
	I1104 12:08:10.817245   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined IP address 192.168.50.180 and MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:10.817394   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHPort
	I1104 12:08:10.817589   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHKeyPath
	I1104 12:08:10.817760   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHKeyPath
	I1104 12:08:10.817882   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHUsername
	I1104 12:08:10.818027   86402 main.go:141] libmachine: Using SSH client type: native
	I1104 12:08:10.818227   86402 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.180 22 <nil> <nil>}
	I1104 12:08:10.818245   86402 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-589257 && echo "old-k8s-version-589257" | sudo tee /etc/hostname
	I1104 12:08:10.940779   86402 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-589257
	
	I1104 12:08:10.940803   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHHostname
	I1104 12:08:10.943694   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:10.944062   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:6c:11", ip: ""} in network mk-old-k8s-version-589257: {Iface:virbr2 ExpiryTime:2024-11-04 13:08:03 +0000 UTC Type:0 Mac:52:54:00:6b:6c:11 Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:old-k8s-version-589257 Clientid:01:52:54:00:6b:6c:11}
	I1104 12:08:10.944090   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined IP address 192.168.50.180 and MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:10.944263   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHPort
	I1104 12:08:10.944452   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHKeyPath
	I1104 12:08:10.944627   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHKeyPath
	I1104 12:08:10.944767   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHUsername
	I1104 12:08:10.944910   86402 main.go:141] libmachine: Using SSH client type: native
	I1104 12:08:10.945093   86402 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.180 22 <nil> <nil>}
	I1104 12:08:10.945110   86402 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-589257' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-589257/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-589257' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1104 12:08:11.061924   86402 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1104 12:08:11.061966   86402 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19906-19898/.minikube CaCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19906-19898/.minikube}
	I1104 12:08:11.062007   86402 buildroot.go:174] setting up certificates
	I1104 12:08:11.062021   86402 provision.go:84] configureAuth start
	I1104 12:08:11.062033   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetMachineName
	I1104 12:08:11.062293   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetIP
	I1104 12:08:11.065165   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:11.065559   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:6c:11", ip: ""} in network mk-old-k8s-version-589257: {Iface:virbr2 ExpiryTime:2024-11-04 13:08:03 +0000 UTC Type:0 Mac:52:54:00:6b:6c:11 Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:old-k8s-version-589257 Clientid:01:52:54:00:6b:6c:11}
	I1104 12:08:11.065594   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined IP address 192.168.50.180 and MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:11.065834   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHHostname
	I1104 12:08:11.068257   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:11.068620   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:6c:11", ip: ""} in network mk-old-k8s-version-589257: {Iface:virbr2 ExpiryTime:2024-11-04 13:08:03 +0000 UTC Type:0 Mac:52:54:00:6b:6c:11 Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:old-k8s-version-589257 Clientid:01:52:54:00:6b:6c:11}
	I1104 12:08:11.068646   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined IP address 192.168.50.180 and MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:11.068787   86402 provision.go:143] copyHostCerts
	I1104 12:08:11.068842   86402 exec_runner.go:144] found /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem, removing ...
	I1104 12:08:11.068854   86402 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem
	I1104 12:08:11.068904   86402 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem (1078 bytes)
	I1104 12:08:11.068993   86402 exec_runner.go:144] found /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem, removing ...
	I1104 12:08:11.069000   86402 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem
	I1104 12:08:11.069019   86402 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem (1123 bytes)
	I1104 12:08:11.069072   86402 exec_runner.go:144] found /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem, removing ...
	I1104 12:08:11.069079   86402 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem
	I1104 12:08:11.069097   86402 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem (1679 bytes)
	I1104 12:08:11.069191   86402 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-589257 san=[127.0.0.1 192.168.50.180 localhost minikube old-k8s-version-589257]
	I1104 12:08:11.271880   86402 provision.go:177] copyRemoteCerts
	I1104 12:08:11.271946   86402 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1104 12:08:11.271988   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHHostname
	I1104 12:08:11.275023   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:11.275396   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:6c:11", ip: ""} in network mk-old-k8s-version-589257: {Iface:virbr2 ExpiryTime:2024-11-04 13:08:03 +0000 UTC Type:0 Mac:52:54:00:6b:6c:11 Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:old-k8s-version-589257 Clientid:01:52:54:00:6b:6c:11}
	I1104 12:08:11.275428   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined IP address 192.168.50.180 and MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:11.275701   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHPort
	I1104 12:08:11.275905   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHKeyPath
	I1104 12:08:11.276048   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHUsername
	I1104 12:08:11.276182   86402 sshutil.go:53] new ssh client: &{IP:192.168.50.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/old-k8s-version-589257/id_rsa Username:docker}
	I1104 12:08:11.362968   86402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1104 12:08:11.388401   86402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1104 12:08:11.417180   86402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1104 12:08:11.439810   86402 provision.go:87] duration metric: took 377.778325ms to configureAuth
	I1104 12:08:11.439841   86402 buildroot.go:189] setting minikube options for container-runtime
	I1104 12:08:11.440043   86402 config.go:182] Loaded profile config "old-k8s-version-589257": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1104 12:08:11.440110   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHHostname
	I1104 12:08:11.442476   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:11.442783   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:6c:11", ip: ""} in network mk-old-k8s-version-589257: {Iface:virbr2 ExpiryTime:2024-11-04 13:08:03 +0000 UTC Type:0 Mac:52:54:00:6b:6c:11 Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:old-k8s-version-589257 Clientid:01:52:54:00:6b:6c:11}
	I1104 12:08:11.442818   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined IP address 192.168.50.180 and MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:11.443005   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHPort
	I1104 12:08:11.443204   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHKeyPath
	I1104 12:08:11.443329   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHKeyPath
	I1104 12:08:11.443492   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHUsername
	I1104 12:08:11.443665   86402 main.go:141] libmachine: Using SSH client type: native
	I1104 12:08:11.443822   86402 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.180 22 <nil> <nil>}
	I1104 12:08:11.443837   86402 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1104 12:08:11.662212   86402 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1104 12:08:11.662241   86402 machine.go:96] duration metric: took 960.351823ms to provisionDockerMachine
	I1104 12:08:11.662256   86402 start.go:293] postStartSetup for "old-k8s-version-589257" (driver="kvm2")
	I1104 12:08:11.662269   86402 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1104 12:08:11.662289   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .DriverName
	I1104 12:08:11.662613   86402 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1104 12:08:11.662642   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHHostname
	I1104 12:08:11.665028   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:11.665391   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:6c:11", ip: ""} in network mk-old-k8s-version-589257: {Iface:virbr2 ExpiryTime:2024-11-04 13:08:03 +0000 UTC Type:0 Mac:52:54:00:6b:6c:11 Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:old-k8s-version-589257 Clientid:01:52:54:00:6b:6c:11}
	I1104 12:08:11.665420   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined IP address 192.168.50.180 and MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:11.665598   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHPort
	I1104 12:08:11.665776   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHKeyPath
	I1104 12:08:11.665942   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHUsername
	I1104 12:08:11.666064   86402 sshutil.go:53] new ssh client: &{IP:192.168.50.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/old-k8s-version-589257/id_rsa Username:docker}
	I1104 12:08:11.889727   85500 start.go:364] duration metric: took 49.147423989s to acquireMachinesLock for "no-preload-908370"
	I1104 12:08:11.889796   85500 start.go:96] Skipping create...Using existing machine configuration
	I1104 12:08:11.889806   85500 fix.go:54] fixHost starting: 
	I1104 12:08:11.890201   85500 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:08:11.890229   85500 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:08:11.906978   85500 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40931
	I1104 12:08:11.907524   85500 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:08:11.907916   85500 main.go:141] libmachine: Using API Version  1
	I1104 12:08:11.907939   85500 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:08:11.908319   85500 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:08:11.908518   85500 main.go:141] libmachine: (no-preload-908370) Calling .DriverName
	I1104 12:08:11.908672   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetState
	I1104 12:08:11.910182   85500 fix.go:112] recreateIfNeeded on no-preload-908370: state=Stopped err=<nil>
	I1104 12:08:11.910224   85500 main.go:141] libmachine: (no-preload-908370) Calling .DriverName
	W1104 12:08:11.910353   85500 fix.go:138] unexpected machine state, will restart: <nil>
	I1104 12:08:11.912457   85500 out.go:177] * Restarting existing kvm2 VM for "no-preload-908370" ...
	I1104 12:08:11.747199   86402 ssh_runner.go:195] Run: cat /etc/os-release
	I1104 12:08:11.751253   86402 info.go:137] Remote host: Buildroot 2023.02.9
	I1104 12:08:11.751279   86402 filesync.go:126] Scanning /home/jenkins/minikube-integration/19906-19898/.minikube/addons for local assets ...
	I1104 12:08:11.751356   86402 filesync.go:126] Scanning /home/jenkins/minikube-integration/19906-19898/.minikube/files for local assets ...
	I1104 12:08:11.751465   86402 filesync.go:149] local asset: /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem -> 272182.pem in /etc/ssl/certs
	I1104 12:08:11.751591   86402 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1104 12:08:11.760409   86402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem --> /etc/ssl/certs/272182.pem (1708 bytes)
	I1104 12:08:11.781890   86402 start.go:296] duration metric: took 119.620604ms for postStartSetup
	I1104 12:08:11.781934   86402 fix.go:56] duration metric: took 19.207938878s for fixHost
	I1104 12:08:11.781960   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHHostname
	I1104 12:08:11.784767   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:11.785058   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:6c:11", ip: ""} in network mk-old-k8s-version-589257: {Iface:virbr2 ExpiryTime:2024-11-04 13:08:03 +0000 UTC Type:0 Mac:52:54:00:6b:6c:11 Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:old-k8s-version-589257 Clientid:01:52:54:00:6b:6c:11}
	I1104 12:08:11.785084   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined IP address 192.168.50.180 and MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:11.785300   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHPort
	I1104 12:08:11.785500   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHKeyPath
	I1104 12:08:11.785644   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHKeyPath
	I1104 12:08:11.785750   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHUsername
	I1104 12:08:11.785877   86402 main.go:141] libmachine: Using SSH client type: native
	I1104 12:08:11.786047   86402 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.180 22 <nil> <nil>}
	I1104 12:08:11.786059   86402 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1104 12:08:11.889540   86402 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730722091.863405264
	
	I1104 12:08:11.889568   86402 fix.go:216] guest clock: 1730722091.863405264
	I1104 12:08:11.889578   86402 fix.go:229] Guest: 2024-11-04 12:08:11.863405264 +0000 UTC Remote: 2024-11-04 12:08:11.781939603 +0000 UTC m=+230.132769870 (delta=81.465661ms)
	I1104 12:08:11.889631   86402 fix.go:200] guest clock delta is within tolerance: 81.465661ms
	I1104 12:08:11.889641   86402 start.go:83] releasing machines lock for "old-k8s-version-589257", held for 19.315682928s
	I1104 12:08:11.889677   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .DriverName
	I1104 12:08:11.889975   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetIP
	I1104 12:08:11.892654   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:11.892982   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:6c:11", ip: ""} in network mk-old-k8s-version-589257: {Iface:virbr2 ExpiryTime:2024-11-04 13:08:03 +0000 UTC Type:0 Mac:52:54:00:6b:6c:11 Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:old-k8s-version-589257 Clientid:01:52:54:00:6b:6c:11}
	I1104 12:08:11.893012   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined IP address 192.168.50.180 and MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:11.893212   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .DriverName
	I1104 12:08:11.893706   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .DriverName
	I1104 12:08:11.893888   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .DriverName
	I1104 12:08:11.893989   86402 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1104 12:08:11.894031   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHHostname
	I1104 12:08:11.894074   86402 ssh_runner.go:195] Run: cat /version.json
	I1104 12:08:11.894094   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHHostname
	I1104 12:08:11.896812   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:11.897020   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:11.897192   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:6c:11", ip: ""} in network mk-old-k8s-version-589257: {Iface:virbr2 ExpiryTime:2024-11-04 13:08:03 +0000 UTC Type:0 Mac:52:54:00:6b:6c:11 Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:old-k8s-version-589257 Clientid:01:52:54:00:6b:6c:11}
	I1104 12:08:11.897217   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined IP address 192.168.50.180 and MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:11.897454   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:6c:11", ip: ""} in network mk-old-k8s-version-589257: {Iface:virbr2 ExpiryTime:2024-11-04 13:08:03 +0000 UTC Type:0 Mac:52:54:00:6b:6c:11 Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:old-k8s-version-589257 Clientid:01:52:54:00:6b:6c:11}
	I1104 12:08:11.897478   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined IP address 192.168.50.180 and MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:11.897492   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHPort
	I1104 12:08:11.897631   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHPort
	I1104 12:08:11.897646   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHKeyPath
	I1104 12:08:11.897778   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHKeyPath
	I1104 12:08:11.897911   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHUsername
	I1104 12:08:11.897989   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetSSHUsername
	I1104 12:08:11.898083   86402 sshutil.go:53] new ssh client: &{IP:192.168.50.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/old-k8s-version-589257/id_rsa Username:docker}
	I1104 12:08:11.898120   86402 sshutil.go:53] new ssh client: &{IP:192.168.50.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/old-k8s-version-589257/id_rsa Username:docker}
	I1104 12:08:11.998704   86402 ssh_runner.go:195] Run: systemctl --version
	I1104 12:08:12.004820   86402 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1104 12:08:12.148742   86402 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1104 12:08:12.155015   86402 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1104 12:08:12.155089   86402 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1104 12:08:12.171054   86402 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1104 12:08:12.171085   86402 start.go:495] detecting cgroup driver to use...
	I1104 12:08:12.171154   86402 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1104 12:08:12.189977   86402 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1104 12:08:12.204622   86402 docker.go:217] disabling cri-docker service (if available) ...
	I1104 12:08:12.204679   86402 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1104 12:08:12.218808   86402 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1104 12:08:12.232276   86402 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1104 12:08:12.341220   86402 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1104 12:08:12.512813   86402 docker.go:233] disabling docker service ...
	I1104 12:08:12.512893   86402 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1104 12:08:12.526784   86402 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1104 12:08:12.539774   86402 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1104 12:08:12.666162   86402 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1104 12:08:12.788317   86402 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1104 12:08:12.802703   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1104 12:08:12.820915   86402 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1104 12:08:12.820985   86402 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:08:12.831311   86402 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1104 12:08:12.831400   86402 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:08:12.841625   86402 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:08:12.852548   86402 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:08:12.864683   86402 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1104 12:08:12.876794   86402 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1104 12:08:12.886878   86402 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1104 12:08:12.886943   86402 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1104 12:08:12.902476   86402 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1104 12:08:12.914565   86402 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1104 12:08:13.044125   86402 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1104 12:08:13.149816   86402 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1104 12:08:13.149893   86402 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1104 12:08:13.154639   86402 start.go:563] Will wait 60s for crictl version
	I1104 12:08:13.154706   86402 ssh_runner.go:195] Run: which crictl
	I1104 12:08:13.158788   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1104 12:08:13.200038   86402 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1104 12:08:13.200117   86402 ssh_runner.go:195] Run: crio --version
	I1104 12:08:13.233501   86402 ssh_runner.go:195] Run: crio --version
	I1104 12:08:13.264558   86402 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1104 12:08:11.913730   85500 main.go:141] libmachine: (no-preload-908370) Calling .Start
	I1104 12:08:11.913915   85500 main.go:141] libmachine: (no-preload-908370) Ensuring networks are active...
	I1104 12:08:11.914653   85500 main.go:141] libmachine: (no-preload-908370) Ensuring network default is active
	I1104 12:08:11.915111   85500 main.go:141] libmachine: (no-preload-908370) Ensuring network mk-no-preload-908370 is active
	I1104 12:08:11.915575   85500 main.go:141] libmachine: (no-preload-908370) Getting domain xml...
	I1104 12:08:11.916375   85500 main.go:141] libmachine: (no-preload-908370) Creating domain...
	I1104 12:08:13.289793   85500 main.go:141] libmachine: (no-preload-908370) Waiting to get IP...
	I1104 12:08:13.290880   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:13.291498   85500 main.go:141] libmachine: (no-preload-908370) DBG | unable to find current IP address of domain no-preload-908370 in network mk-no-preload-908370
	I1104 12:08:13.291631   85500 main.go:141] libmachine: (no-preload-908370) DBG | I1104 12:08:13.291463   87562 retry.go:31] will retry after 277.090671ms: waiting for machine to come up
	I1104 12:08:13.570141   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:13.570726   85500 main.go:141] libmachine: (no-preload-908370) DBG | unable to find current IP address of domain no-preload-908370 in network mk-no-preload-908370
	I1104 12:08:13.570749   85500 main.go:141] libmachine: (no-preload-908370) DBG | I1104 12:08:13.570623   87562 retry.go:31] will retry after 259.985785ms: waiting for machine to come up
	I1104 12:08:13.832172   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:13.832855   85500 main.go:141] libmachine: (no-preload-908370) DBG | unable to find current IP address of domain no-preload-908370 in network mk-no-preload-908370
	I1104 12:08:13.832898   85500 main.go:141] libmachine: (no-preload-908370) DBG | I1104 12:08:13.832809   87562 retry.go:31] will retry after 473.426945ms: waiting for machine to come up
	I1104 12:08:14.308725   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:14.309273   85500 main.go:141] libmachine: (no-preload-908370) DBG | unable to find current IP address of domain no-preload-908370 in network mk-no-preload-908370
	I1104 12:08:14.309302   85500 main.go:141] libmachine: (no-preload-908370) DBG | I1104 12:08:14.309249   87562 retry.go:31] will retry after 417.466134ms: waiting for machine to come up
	I1104 12:08:14.727927   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:14.728388   85500 main.go:141] libmachine: (no-preload-908370) DBG | unable to find current IP address of domain no-preload-908370 in network mk-no-preload-908370
	I1104 12:08:14.728413   85500 main.go:141] libmachine: (no-preload-908370) DBG | I1104 12:08:14.728366   87562 retry.go:31] will retry after 734.894622ms: waiting for machine to come up
	I1104 12:08:11.465894   86301 node_ready.go:53] node "default-k8s-diff-port-036892" has status "Ready":"False"
	I1104 12:08:13.966921   86301 node_ready.go:53] node "default-k8s-diff-port-036892" has status "Ready":"False"
	I1104 12:08:14.465523   86301 node_ready.go:49] node "default-k8s-diff-port-036892" has status "Ready":"True"
	I1104 12:08:14.465545   86301 node_ready.go:38] duration metric: took 7.004111382s for node "default-k8s-diff-port-036892" to be "Ready" ...
	I1104 12:08:14.465554   86301 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1104 12:08:14.473334   86301 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-zw2tv" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:14.482486   86301 pod_ready.go:93] pod "coredns-7c65d6cfc9-zw2tv" in "kube-system" namespace has status "Ready":"True"
	I1104 12:08:14.482508   86301 pod_ready.go:82] duration metric: took 9.145998ms for pod "coredns-7c65d6cfc9-zw2tv" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:14.482518   86301 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-036892" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:13.351753   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:15.851818   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:13.266087   86402 main.go:141] libmachine: (old-k8s-version-589257) Calling .GetIP
	I1104 12:08:13.269660   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:13.270200   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:6c:11", ip: ""} in network mk-old-k8s-version-589257: {Iface:virbr2 ExpiryTime:2024-11-04 13:08:03 +0000 UTC Type:0 Mac:52:54:00:6b:6c:11 Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:old-k8s-version-589257 Clientid:01:52:54:00:6b:6c:11}
	I1104 12:08:13.270233   86402 main.go:141] libmachine: (old-k8s-version-589257) DBG | domain old-k8s-version-589257 has defined IP address 192.168.50.180 and MAC address 52:54:00:6b:6c:11 in network mk-old-k8s-version-589257
	I1104 12:08:13.270520   86402 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1104 12:08:13.274751   86402 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1104 12:08:13.290348   86402 kubeadm.go:883] updating cluster {Name:old-k8s-version-589257 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-589257 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.180 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1104 12:08:13.290483   86402 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1104 12:08:13.290547   86402 ssh_runner.go:195] Run: sudo crictl images --output json
	I1104 12:08:13.340338   86402 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1104 12:08:13.340426   86402 ssh_runner.go:195] Run: which lz4
	I1104 12:08:13.345147   86402 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1104 12:08:13.349792   86402 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1104 12:08:13.349872   86402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1104 12:08:14.842720   86402 crio.go:462] duration metric: took 1.497615031s to copy over tarball
	I1104 12:08:14.842791   86402 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1104 12:08:15.464914   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:15.465510   85500 main.go:141] libmachine: (no-preload-908370) DBG | unable to find current IP address of domain no-preload-908370 in network mk-no-preload-908370
	I1104 12:08:15.465541   85500 main.go:141] libmachine: (no-preload-908370) DBG | I1104 12:08:15.465478   87562 retry.go:31] will retry after 578.01955ms: waiting for machine to come up
	I1104 12:08:16.044861   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:16.045354   85500 main.go:141] libmachine: (no-preload-908370) DBG | unable to find current IP address of domain no-preload-908370 in network mk-no-preload-908370
	I1104 12:08:16.045380   85500 main.go:141] libmachine: (no-preload-908370) DBG | I1104 12:08:16.045313   87562 retry.go:31] will retry after 1.136035438s: waiting for machine to come up
	I1104 12:08:17.182829   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:17.183255   85500 main.go:141] libmachine: (no-preload-908370) DBG | unable to find current IP address of domain no-preload-908370 in network mk-no-preload-908370
	I1104 12:08:17.183282   85500 main.go:141] libmachine: (no-preload-908370) DBG | I1104 12:08:17.183233   87562 retry.go:31] will retry after 1.070971462s: waiting for machine to come up
	I1104 12:08:18.255532   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:18.256051   85500 main.go:141] libmachine: (no-preload-908370) DBG | unable to find current IP address of domain no-preload-908370 in network mk-no-preload-908370
	I1104 12:08:18.256078   85500 main.go:141] libmachine: (no-preload-908370) DBG | I1104 12:08:18.256007   87562 retry.go:31] will retry after 1.542250267s: waiting for machine to come up
	I1104 12:08:19.800851   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:19.801298   85500 main.go:141] libmachine: (no-preload-908370) DBG | unable to find current IP address of domain no-preload-908370 in network mk-no-preload-908370
	I1104 12:08:19.801324   85500 main.go:141] libmachine: (no-preload-908370) DBG | I1104 12:08:19.801276   87562 retry.go:31] will retry after 2.127250885s: waiting for machine to come up
	I1104 12:08:16.489394   86301 pod_ready.go:103] pod "etcd-default-k8s-diff-port-036892" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:16.994480   86301 pod_ready.go:93] pod "etcd-default-k8s-diff-port-036892" in "kube-system" namespace has status "Ready":"True"
	I1104 12:08:16.994502   86301 pod_ready.go:82] duration metric: took 2.511977586s for pod "etcd-default-k8s-diff-port-036892" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:16.994512   86301 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-036892" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:17.502472   86301 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-036892" in "kube-system" namespace has status "Ready":"True"
	I1104 12:08:17.502499   86301 pod_ready.go:82] duration metric: took 507.979218ms for pod "kube-apiserver-default-k8s-diff-port-036892" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:17.502513   86301 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-036892" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:17.507763   86301 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-036892" in "kube-system" namespace has status "Ready":"True"
	I1104 12:08:17.507785   86301 pod_ready.go:82] duration metric: took 5.264185ms for pod "kube-controller-manager-default-k8s-diff-port-036892" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:17.507795   86301 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-j2srm" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:17.514017   86301 pod_ready.go:93] pod "kube-proxy-j2srm" in "kube-system" namespace has status "Ready":"True"
	I1104 12:08:17.514045   86301 pod_ready.go:82] duration metric: took 6.241799ms for pod "kube-proxy-j2srm" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:17.514058   86301 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-036892" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:19.683083   86301 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-036892" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:20.049735   86301 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-036892" in "kube-system" namespace has status "Ready":"True"
	I1104 12:08:20.049759   86301 pod_ready.go:82] duration metric: took 2.535691306s for pod "kube-scheduler-default-k8s-diff-port-036892" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:20.049772   86301 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:18.749494   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:20.853448   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:17.837381   86402 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.994557811s)
	I1104 12:08:17.837410   86402 crio.go:469] duration metric: took 2.994665886s to extract the tarball
	I1104 12:08:17.837420   86402 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1104 12:08:17.882418   86402 ssh_runner.go:195] Run: sudo crictl images --output json
	I1104 12:08:17.917035   86402 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1104 12:08:17.917064   86402 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1104 12:08:17.917195   86402 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1104 12:08:17.917277   86402 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1104 12:08:17.917169   86402 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1104 12:08:17.917164   86402 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1104 12:08:17.917150   86402 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1104 12:08:17.917277   86402 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1104 12:08:17.917283   86402 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1104 12:08:17.917254   86402 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1104 12:08:17.918929   86402 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1104 12:08:17.918943   86402 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1104 12:08:17.918929   86402 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1104 12:08:17.918929   86402 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1104 12:08:17.918930   86402 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1104 12:08:17.918930   86402 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1104 12:08:17.919014   86402 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1104 12:08:17.919025   86402 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1104 12:08:18.070119   86402 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1104 12:08:18.076604   86402 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1104 12:08:18.078712   86402 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1104 12:08:18.083777   86402 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1104 12:08:18.087827   86402 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1104 12:08:18.092838   86402 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1104 12:08:18.110359   86402 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1104 12:08:18.165523   86402 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1104 12:08:18.165569   86402 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1104 12:08:18.165617   86402 ssh_runner.go:195] Run: which crictl
	I1104 12:08:18.213723   86402 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1104 12:08:18.213784   86402 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1104 12:08:18.213833   86402 ssh_runner.go:195] Run: which crictl
	I1104 12:08:18.252171   86402 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1104 12:08:18.252221   86402 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1104 12:08:18.252270   86402 ssh_runner.go:195] Run: which crictl
	I1104 12:08:18.256482   86402 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1104 12:08:18.256522   86402 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1104 12:08:18.256567   86402 ssh_runner.go:195] Run: which crictl
	I1104 12:08:18.256606   86402 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1104 12:08:18.256564   86402 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1104 12:08:18.256631   86402 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1104 12:08:18.256632   86402 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1104 12:08:18.256632   86402 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1104 12:08:18.256690   86402 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1104 12:08:18.256657   86402 ssh_runner.go:195] Run: which crictl
	I1104 12:08:18.256703   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1104 12:08:18.256691   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1104 12:08:18.256738   86402 ssh_runner.go:195] Run: which crictl
	I1104 12:08:18.256658   86402 ssh_runner.go:195] Run: which crictl
	I1104 12:08:18.264837   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1104 12:08:18.265836   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1104 12:08:18.349896   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1104 12:08:18.349935   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1104 12:08:18.350014   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1104 12:08:18.350077   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1104 12:08:18.368533   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1104 12:08:18.371302   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1104 12:08:18.371393   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1104 12:08:18.496042   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1104 12:08:18.496121   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1104 12:08:18.509196   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1104 12:08:18.509339   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1104 12:08:18.509247   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1104 12:08:18.509348   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1104 12:08:18.513943   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1104 12:08:18.645867   86402 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1104 12:08:18.649173   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1104 12:08:18.649276   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1104 12:08:18.656159   86402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1104 12:08:18.656193   86402 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1104 12:08:18.660309   86402 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1104 12:08:18.660384   86402 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1104 12:08:18.719995   86402 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1104 12:08:18.720033   86402 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1104 12:08:18.728304   86402 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1104 12:08:18.867880   86402 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1104 12:08:19.009342   86402 cache_images.go:92] duration metric: took 1.092257593s to LoadCachedImages
	W1104 12:08:19.009448   86402 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	I1104 12:08:19.009469   86402 kubeadm.go:934] updating node { 192.168.50.180 8443 v1.20.0 crio true true} ...
	I1104 12:08:19.009590   86402 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-589257 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.180
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-589257 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1104 12:08:19.009671   86402 ssh_runner.go:195] Run: crio config
	I1104 12:08:19.054831   86402 cni.go:84] Creating CNI manager for ""
	I1104 12:08:19.054850   86402 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1104 12:08:19.054863   86402 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1104 12:08:19.054880   86402 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.180 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-589257 NodeName:old-k8s-version-589257 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.180"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.180 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1104 12:08:19.055049   86402 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.180
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-589257"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.180
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.180"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1104 12:08:19.055125   86402 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1104 12:08:19.065804   86402 binaries.go:44] Found k8s binaries, skipping transfer
	I1104 12:08:19.065888   86402 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1104 12:08:19.075491   86402 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1104 12:08:19.092371   86402 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1104 12:08:19.108896   86402 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
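The 2123-byte file just copied is the rendered kubeadm/kubelet/kube-proxy configuration shown above; it is staged as kubeadm.yaml.new and only promoted to kubeadm.yaml after being compared against the copy already on the node (the diff and cp appear further down in this log). A minimal sketch of that staging step, using only paths that appear in the log; the shell itself is illustrative, not minikube's actual code:

    NEW=/var/tmp/minikube/kubeadm.yaml.new
    CUR=/var/tmp/minikube/kubeadm.yaml
    # the freshly rendered manifest is written to *.new first ...
    if ! sudo diff -u "$CUR" "$NEW" >/dev/null 2>&1; then
      # ... and promoted only when it differs from (or is missing as) the deployed copy
      sudo cp "$NEW" "$CUR"
    fi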
	I1104 12:08:19.127622   86402 ssh_runner.go:195] Run: grep 192.168.50.180	control-plane.minikube.internal$ /etc/hosts
	I1104 12:08:19.131597   86402 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.180	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1104 12:08:19.145142   86402 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1104 12:08:19.284780   86402 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1104 12:08:19.303843   86402 certs.go:68] Setting up /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/old-k8s-version-589257 for IP: 192.168.50.180
	I1104 12:08:19.303872   86402 certs.go:194] generating shared ca certs ...
	I1104 12:08:19.303894   86402 certs.go:226] acquiring lock for ca certs: {Name:mk4fb0469da6697654d0290fd25edddd463f47c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 12:08:19.304084   86402 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.key
	I1104 12:08:19.304148   86402 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.key
	I1104 12:08:19.304161   86402 certs.go:256] generating profile certs ...
	I1104 12:08:19.304280   86402 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/old-k8s-version-589257/client.key
	I1104 12:08:19.304347   86402 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/old-k8s-version-589257/apiserver.key.b78bafdb
	I1104 12:08:19.304401   86402 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/old-k8s-version-589257/proxy-client.key
	I1104 12:08:19.304549   86402 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218.pem (1338 bytes)
	W1104 12:08:19.304590   86402 certs.go:480] ignoring /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218_empty.pem, impossibly tiny 0 bytes
	I1104 12:08:19.304608   86402 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem (1675 bytes)
	I1104 12:08:19.304659   86402 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem (1078 bytes)
	I1104 12:08:19.304702   86402 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem (1123 bytes)
	I1104 12:08:19.304729   86402 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem (1679 bytes)
	I1104 12:08:19.304794   86402 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem (1708 bytes)
	I1104 12:08:19.305479   86402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1104 12:08:19.341333   86402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1104 12:08:19.375179   86402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1104 12:08:19.410128   86402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1104 12:08:19.452565   86402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/old-k8s-version-589257/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1104 12:08:19.493404   86402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/old-k8s-version-589257/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1104 12:08:19.521178   86402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/old-k8s-version-589257/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1104 12:08:19.550524   86402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/old-k8s-version-589257/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1104 12:08:19.574903   86402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1104 12:08:19.599308   86402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218.pem --> /usr/share/ca-certificates/27218.pem (1338 bytes)
	I1104 12:08:19.627107   86402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem --> /usr/share/ca-certificates/272182.pem (1708 bytes)
	I1104 12:08:19.657121   86402 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1104 12:08:19.679087   86402 ssh_runner.go:195] Run: openssl version
	I1104 12:08:19.687115   86402 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1104 12:08:19.702537   86402 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1104 12:08:19.707340   86402 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  4 10:38 /usr/share/ca-certificates/minikubeCA.pem
	I1104 12:08:19.707408   86402 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1104 12:08:19.714955   86402 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1104 12:08:19.727883   86402 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/27218.pem && ln -fs /usr/share/ca-certificates/27218.pem /etc/ssl/certs/27218.pem"
	I1104 12:08:19.739690   86402 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/27218.pem
	I1104 12:08:19.744600   86402 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  4 10:49 /usr/share/ca-certificates/27218.pem
	I1104 12:08:19.744656   86402 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/27218.pem
	I1104 12:08:19.750324   86402 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/27218.pem /etc/ssl/certs/51391683.0"
	I1104 12:08:19.760988   86402 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/272182.pem && ln -fs /usr/share/ca-certificates/272182.pem /etc/ssl/certs/272182.pem"
	I1104 12:08:19.772634   86402 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/272182.pem
	I1104 12:08:19.777504   86402 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  4 10:49 /usr/share/ca-certificates/272182.pem
	I1104 12:08:19.777580   86402 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/272182.pem
	I1104 12:08:19.783660   86402 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/272182.pem /etc/ssl/certs/3ec20f2e.0"
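The three openssl/ln sequences above all follow the same pattern: the PEM is exposed under /usr/share/ca-certificates, linked into /etc/ssl/certs, hashed with openssl, and then linked again under its subject hash so OpenSSL-based clients trust it. A hedged sketch of that pattern; the install_ca helper is illustrative and not part of minikube:

    install_ca() {
      local name=$1   # e.g. minikubeCA.pem, already present in /usr/share/ca-certificates
      sudo ln -fs "/usr/share/ca-certificates/${name}" "/etc/ssl/certs/${name}"
      local hash
      hash=$(openssl x509 -hash -noout -in "/usr/share/ca-certificates/${name}")
      sudo ln -fs "/etc/ssl/certs/${name}" "/etc/ssl/certs/${hash}.0"
    }
    # install_ca minikubeCA.pem   -> /etc/ssl/certs/b5213941.0 in the log above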
	I1104 12:08:19.795483   86402 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1104 12:08:19.800327   86402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1104 12:08:19.806346   86402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1104 12:08:19.813920   86402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1104 12:08:19.820358   86402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1104 12:08:19.826359   86402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1104 12:08:19.832467   86402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
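Each of the -checkend runs above asks openssl whether the certificate will still be valid in 86400 seconds (24 hours); a non-zero exit would trigger regeneration. A standalone sketch of the same check, with the cert paths copied from the log and the loop itself added for illustration:

    for crt in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
               /var/lib/minikube/certs/etcd/server.crt \
               /var/lib/minikube/certs/front-proxy-client.crt; do
      if ! sudo openssl x509 -noout -in "$crt" -checkend 86400; then
        echo "$crt expires within 24h, needs regeneration" >&2
      fi
    done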
	I1104 12:08:19.838902   86402 kubeadm.go:392] StartCluster: {Name:old-k8s-version-589257 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-589257 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.180 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1104 12:08:19.839018   86402 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1104 12:08:19.839075   86402 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1104 12:08:19.880407   86402 cri.go:89] found id: ""
	I1104 12:08:19.880486   86402 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1104 12:08:19.891135   86402 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1104 12:08:19.891156   86402 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1104 12:08:19.891219   86402 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1104 12:08:19.901437   86402 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1104 12:08:19.902325   86402 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-589257" does not appear in /home/jenkins/minikube-integration/19906-19898/kubeconfig
	I1104 12:08:19.902941   86402 kubeconfig.go:62] /home/jenkins/minikube-integration/19906-19898/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-589257" cluster setting kubeconfig missing "old-k8s-version-589257" context setting]
	I1104 12:08:19.903879   86402 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19906-19898/kubeconfig: {Name:mk164657c178e6abe086acb5c1cadb969cb2b0cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 12:08:19.937877   86402 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1104 12:08:19.948669   86402 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.180
	I1104 12:08:19.948701   86402 kubeadm.go:1160] stopping kube-system containers ...
	I1104 12:08:19.948711   86402 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1104 12:08:19.948752   86402 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1104 12:08:19.988249   86402 cri.go:89] found id: ""
	I1104 12:08:19.988344   86402 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1104 12:08:20.006949   86402 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1104 12:08:20.020677   86402 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1104 12:08:20.020700   86402 kubeadm.go:157] found existing configuration files:
	
	I1104 12:08:20.020747   86402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1104 12:08:20.031509   86402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1104 12:08:20.031566   86402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1104 12:08:20.042229   86402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1104 12:08:20.054695   86402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1104 12:08:20.054810   86402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1104 12:08:20.067410   86402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1104 12:08:20.078639   86402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1104 12:08:20.078711   86402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1104 12:08:20.091357   86402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1104 12:08:20.100986   86402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1104 12:08:20.101071   86402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1104 12:08:20.110345   86402 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1104 12:08:20.119778   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1104 12:08:20.281637   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1104 12:08:21.006838   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1104 12:08:21.234671   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1104 12:08:21.335720   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
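Because existing configuration files were found, the control plane is brought back phase by phase rather than with a full "kubeadm init". A condensed sketch of the sequence just executed; the binary and config paths are taken verbatim from the log, the loop is only a summary:

    BIN=/var/lib/minikube/binaries/v1.20.0
    CFG=/var/tmp/minikube/kubeadm.yaml
    for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
      sudo env PATH="${BIN}:$PATH" kubeadm init phase $phase --config "$CFG"
    done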
	I1104 12:08:21.437522   86402 api_server.go:52] waiting for apiserver process to appear ...
	I1104 12:08:21.437615   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:21.929963   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:21.930522   85500 main.go:141] libmachine: (no-preload-908370) DBG | unable to find current IP address of domain no-preload-908370 in network mk-no-preload-908370
	I1104 12:08:21.930552   85500 main.go:141] libmachine: (no-preload-908370) DBG | I1104 12:08:21.930461   87562 retry.go:31] will retry after 2.171964123s: waiting for machine to come up
	I1104 12:08:24.103844   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:24.104303   85500 main.go:141] libmachine: (no-preload-908370) DBG | unable to find current IP address of domain no-preload-908370 in network mk-no-preload-908370
	I1104 12:08:24.104326   85500 main.go:141] libmachine: (no-preload-908370) DBG | I1104 12:08:24.104257   87562 retry.go:31] will retry after 2.838813818s: waiting for machine to come up
	I1104 12:08:22.056858   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:24.057127   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:23.351405   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:25.850834   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:21.938086   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:22.438198   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:22.938624   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:23.438021   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:23.938119   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:24.438470   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:24.937687   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:25.438045   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:25.937696   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:26.438585   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:26.944977   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:26.945367   85500 main.go:141] libmachine: (no-preload-908370) DBG | unable to find current IP address of domain no-preload-908370 in network mk-no-preload-908370
	I1104 12:08:26.945395   85500 main.go:141] libmachine: (no-preload-908370) DBG | I1104 12:08:26.945349   87562 retry.go:31] will retry after 2.799785534s: waiting for machine to come up
	I1104 12:08:29.746349   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:29.746747   85500 main.go:141] libmachine: (no-preload-908370) Found IP for machine: 192.168.61.91
	I1104 12:08:29.746774   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has current primary IP address 192.168.61.91 and MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:29.746779   85500 main.go:141] libmachine: (no-preload-908370) Reserving static IP address...
	I1104 12:08:29.747195   85500 main.go:141] libmachine: (no-preload-908370) DBG | found host DHCP lease matching {name: "no-preload-908370", mac: "52:54:00:f8:66:d5", ip: "192.168.61.91"} in network mk-no-preload-908370: {Iface:virbr3 ExpiryTime:2024-11-04 13:08:23 +0000 UTC Type:0 Mac:52:54:00:f8:66:d5 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:no-preload-908370 Clientid:01:52:54:00:f8:66:d5}
	I1104 12:08:29.747218   85500 main.go:141] libmachine: (no-preload-908370) Reserved static IP address: 192.168.61.91
	I1104 12:08:29.747234   85500 main.go:141] libmachine: (no-preload-908370) DBG | skip adding static IP to network mk-no-preload-908370 - found existing host DHCP lease matching {name: "no-preload-908370", mac: "52:54:00:f8:66:d5", ip: "192.168.61.91"}
	I1104 12:08:29.747248   85500 main.go:141] libmachine: (no-preload-908370) DBG | Getting to WaitForSSH function...
	I1104 12:08:29.747258   85500 main.go:141] libmachine: (no-preload-908370) Waiting for SSH to be available...
	I1104 12:08:29.749405   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:29.749694   85500 main.go:141] libmachine: (no-preload-908370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:66:d5", ip: ""} in network mk-no-preload-908370: {Iface:virbr3 ExpiryTime:2024-11-04 13:08:23 +0000 UTC Type:0 Mac:52:54:00:f8:66:d5 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:no-preload-908370 Clientid:01:52:54:00:f8:66:d5}
	I1104 12:08:29.749728   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined IP address 192.168.61.91 and MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:29.749887   85500 main.go:141] libmachine: (no-preload-908370) DBG | Using SSH client type: external
	I1104 12:08:29.749908   85500 main.go:141] libmachine: (no-preload-908370) DBG | Using SSH private key: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/no-preload-908370/id_rsa (-rw-------)
	I1104 12:08:29.749933   85500 main.go:141] libmachine: (no-preload-908370) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.91 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19906-19898/.minikube/machines/no-preload-908370/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1104 12:08:29.749951   85500 main.go:141] libmachine: (no-preload-908370) DBG | About to run SSH command:
	I1104 12:08:29.749966   85500 main.go:141] libmachine: (no-preload-908370) DBG | exit 0
	I1104 12:08:29.873121   85500 main.go:141] libmachine: (no-preload-908370) DBG | SSH cmd err, output: <nil>: 
	I1104 12:08:29.873472   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetConfigRaw
	I1104 12:08:29.874081   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetIP
	I1104 12:08:29.876737   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:29.877127   85500 main.go:141] libmachine: (no-preload-908370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:66:d5", ip: ""} in network mk-no-preload-908370: {Iface:virbr3 ExpiryTime:2024-11-04 13:08:23 +0000 UTC Type:0 Mac:52:54:00:f8:66:d5 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:no-preload-908370 Clientid:01:52:54:00:f8:66:d5}
	I1104 12:08:29.877155   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined IP address 192.168.61.91 and MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:29.877473   85500 profile.go:143] Saving config to /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/no-preload-908370/config.json ...
	I1104 12:08:29.877717   85500 machine.go:93] provisionDockerMachine start ...
	I1104 12:08:29.877740   85500 main.go:141] libmachine: (no-preload-908370) Calling .DriverName
	I1104 12:08:29.877936   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHHostname
	I1104 12:08:29.880272   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:29.880522   85500 main.go:141] libmachine: (no-preload-908370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:66:d5", ip: ""} in network mk-no-preload-908370: {Iface:virbr3 ExpiryTime:2024-11-04 13:08:23 +0000 UTC Type:0 Mac:52:54:00:f8:66:d5 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:no-preload-908370 Clientid:01:52:54:00:f8:66:d5}
	I1104 12:08:29.880543   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined IP address 192.168.61.91 and MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:29.880718   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHPort
	I1104 12:08:29.880883   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHKeyPath
	I1104 12:08:29.881048   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHKeyPath
	I1104 12:08:29.881186   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHUsername
	I1104 12:08:29.881338   85500 main.go:141] libmachine: Using SSH client type: native
	I1104 12:08:29.881511   85500 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.91 22 <nil> <nil>}
	I1104 12:08:29.881524   85500 main.go:141] libmachine: About to run SSH command:
	hostname
	I1104 12:08:29.989431   85500 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1104 12:08:29.989460   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetMachineName
	I1104 12:08:29.989725   85500 buildroot.go:166] provisioning hostname "no-preload-908370"
	I1104 12:08:29.989757   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetMachineName
	I1104 12:08:29.989974   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHHostname
	I1104 12:08:29.992679   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:29.993028   85500 main.go:141] libmachine: (no-preload-908370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:66:d5", ip: ""} in network mk-no-preload-908370: {Iface:virbr3 ExpiryTime:2024-11-04 13:08:23 +0000 UTC Type:0 Mac:52:54:00:f8:66:d5 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:no-preload-908370 Clientid:01:52:54:00:f8:66:d5}
	I1104 12:08:29.993057   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined IP address 192.168.61.91 and MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:29.993222   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHPort
	I1104 12:08:29.993425   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHKeyPath
	I1104 12:08:29.993553   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHKeyPath
	I1104 12:08:29.993683   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHUsername
	I1104 12:08:29.993817   85500 main.go:141] libmachine: Using SSH client type: native
	I1104 12:08:29.994000   85500 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.91 22 <nil> <nil>}
	I1104 12:08:29.994016   85500 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-908370 && echo "no-preload-908370" | sudo tee /etc/hostname
	I1104 12:08:30.118321   85500 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-908370
	
	I1104 12:08:30.118361   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHHostname
	I1104 12:08:30.121095   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:30.121475   85500 main.go:141] libmachine: (no-preload-908370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:66:d5", ip: ""} in network mk-no-preload-908370: {Iface:virbr3 ExpiryTime:2024-11-04 13:08:23 +0000 UTC Type:0 Mac:52:54:00:f8:66:d5 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:no-preload-908370 Clientid:01:52:54:00:f8:66:d5}
	I1104 12:08:30.121509   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined IP address 192.168.61.91 and MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:30.121697   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHPort
	I1104 12:08:30.121866   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHKeyPath
	I1104 12:08:30.122040   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHKeyPath
	I1104 12:08:30.122176   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHUsername
	I1104 12:08:30.122343   85500 main.go:141] libmachine: Using SSH client type: native
	I1104 12:08:30.122525   85500 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.91 22 <nil> <nil>}
	I1104 12:08:30.122547   85500 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-908370' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-908370/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-908370' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1104 12:08:26.557368   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:29.056377   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:28.349510   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:30.350431   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:26.937831   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:27.438442   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:27.938240   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:28.438463   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:28.937958   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:29.437676   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:29.938298   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:30.438423   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:30.937953   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:31.438075   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
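The repeated pgrep runs above are a poll loop waiting for the kube-apiserver process to appear after the kubelet restart. A hedged reconstruction of that loop; the ~500 ms interval is inferred from the timestamps and the timeout value is an assumption, not taken from the log:

    wait_for_apiserver() {
      local deadline=$((SECONDS + 240))   # assumed timeout
      until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
        (( SECONDS >= deadline )) && { echo "timed out waiting for kube-apiserver" >&2; return 1; }
        sleep 0.5
      done
    }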
	I1104 12:08:30.237340   85500 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1104 12:08:30.237370   85500 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19906-19898/.minikube CaCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19906-19898/.minikube}
	I1104 12:08:30.237413   85500 buildroot.go:174] setting up certificates
	I1104 12:08:30.237429   85500 provision.go:84] configureAuth start
	I1104 12:08:30.237446   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetMachineName
	I1104 12:08:30.237725   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetIP
	I1104 12:08:30.240026   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:30.240350   85500 main.go:141] libmachine: (no-preload-908370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:66:d5", ip: ""} in network mk-no-preload-908370: {Iface:virbr3 ExpiryTime:2024-11-04 13:08:23 +0000 UTC Type:0 Mac:52:54:00:f8:66:d5 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:no-preload-908370 Clientid:01:52:54:00:f8:66:d5}
	I1104 12:08:30.240380   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined IP address 192.168.61.91 and MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:30.240472   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHHostname
	I1104 12:08:30.242777   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:30.243101   85500 main.go:141] libmachine: (no-preload-908370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:66:d5", ip: ""} in network mk-no-preload-908370: {Iface:virbr3 ExpiryTime:2024-11-04 13:08:23 +0000 UTC Type:0 Mac:52:54:00:f8:66:d5 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:no-preload-908370 Clientid:01:52:54:00:f8:66:d5}
	I1104 12:08:30.243119   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined IP address 192.168.61.91 and MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:30.243302   85500 provision.go:143] copyHostCerts
	I1104 12:08:30.243358   85500 exec_runner.go:144] found /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem, removing ...
	I1104 12:08:30.243368   85500 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem
	I1104 12:08:30.243427   85500 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/ca.pem (1078 bytes)
	I1104 12:08:30.243532   85500 exec_runner.go:144] found /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem, removing ...
	I1104 12:08:30.243542   85500 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem
	I1104 12:08:30.243565   85500 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/cert.pem (1123 bytes)
	I1104 12:08:30.243635   85500 exec_runner.go:144] found /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem, removing ...
	I1104 12:08:30.243643   85500 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem
	I1104 12:08:30.243661   85500 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19906-19898/.minikube/key.pem (1679 bytes)
	I1104 12:08:30.243719   85500 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem org=jenkins.no-preload-908370 san=[127.0.0.1 192.168.61.91 localhost minikube no-preload-908370]
	I1104 12:08:30.515270   85500 provision.go:177] copyRemoteCerts
	I1104 12:08:30.515350   85500 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1104 12:08:30.515381   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHHostname
	I1104 12:08:30.518651   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:30.519188   85500 main.go:141] libmachine: (no-preload-908370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:66:d5", ip: ""} in network mk-no-preload-908370: {Iface:virbr3 ExpiryTime:2024-11-04 13:08:23 +0000 UTC Type:0 Mac:52:54:00:f8:66:d5 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:no-preload-908370 Clientid:01:52:54:00:f8:66:d5}
	I1104 12:08:30.519218   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined IP address 192.168.61.91 and MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:30.519420   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHPort
	I1104 12:08:30.519600   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHKeyPath
	I1104 12:08:30.519777   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHUsername
	I1104 12:08:30.519896   85500 sshutil.go:53] new ssh client: &{IP:192.168.61.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/no-preload-908370/id_rsa Username:docker}
	I1104 12:08:30.603170   85500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1104 12:08:30.626226   85500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1104 12:08:30.649353   85500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1104 12:08:30.684759   85500 provision.go:87] duration metric: took 447.313588ms to configureAuth
	I1104 12:08:30.684789   85500 buildroot.go:189] setting minikube options for container-runtime
	I1104 12:08:30.684962   85500 config.go:182] Loaded profile config "no-preload-908370": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1104 12:08:30.685029   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHHostname
	I1104 12:08:30.687429   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:30.687815   85500 main.go:141] libmachine: (no-preload-908370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:66:d5", ip: ""} in network mk-no-preload-908370: {Iface:virbr3 ExpiryTime:2024-11-04 13:08:23 +0000 UTC Type:0 Mac:52:54:00:f8:66:d5 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:no-preload-908370 Clientid:01:52:54:00:f8:66:d5}
	I1104 12:08:30.687840   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined IP address 192.168.61.91 and MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:30.688015   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHPort
	I1104 12:08:30.688192   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHKeyPath
	I1104 12:08:30.688325   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHKeyPath
	I1104 12:08:30.688471   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHUsername
	I1104 12:08:30.688640   85500 main.go:141] libmachine: Using SSH client type: native
	I1104 12:08:30.688830   85500 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.91 22 <nil> <nil>}
	I1104 12:08:30.688848   85500 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1104 12:08:30.919118   85500 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1104 12:08:30.919142   85500 machine.go:96] duration metric: took 1.041410402s to provisionDockerMachine
	I1104 12:08:30.919156   85500 start.go:293] postStartSetup for "no-preload-908370" (driver="kvm2")
	I1104 12:08:30.919169   85500 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1104 12:08:30.919200   85500 main.go:141] libmachine: (no-preload-908370) Calling .DriverName
	I1104 12:08:30.919513   85500 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1104 12:08:30.919538   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHHostname
	I1104 12:08:30.922075   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:30.922485   85500 main.go:141] libmachine: (no-preload-908370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:66:d5", ip: ""} in network mk-no-preload-908370: {Iface:virbr3 ExpiryTime:2024-11-04 13:08:23 +0000 UTC Type:0 Mac:52:54:00:f8:66:d5 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:no-preload-908370 Clientid:01:52:54:00:f8:66:d5}
	I1104 12:08:30.922510   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined IP address 192.168.61.91 and MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:30.922615   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHPort
	I1104 12:08:30.922823   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHKeyPath
	I1104 12:08:30.922991   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHUsername
	I1104 12:08:30.923107   85500 sshutil.go:53] new ssh client: &{IP:192.168.61.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/no-preload-908370/id_rsa Username:docker}
	I1104 12:08:31.007598   85500 ssh_runner.go:195] Run: cat /etc/os-release
	I1104 12:08:31.011558   85500 info.go:137] Remote host: Buildroot 2023.02.9
	I1104 12:08:31.011588   85500 filesync.go:126] Scanning /home/jenkins/minikube-integration/19906-19898/.minikube/addons for local assets ...
	I1104 12:08:31.011665   85500 filesync.go:126] Scanning /home/jenkins/minikube-integration/19906-19898/.minikube/files for local assets ...
	I1104 12:08:31.011766   85500 filesync.go:149] local asset: /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem -> 272182.pem in /etc/ssl/certs
	I1104 12:08:31.011859   85500 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1104 12:08:31.020788   85500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem --> /etc/ssl/certs/272182.pem (1708 bytes)
	I1104 12:08:31.044379   85500 start.go:296] duration metric: took 125.209775ms for postStartSetup
	I1104 12:08:31.044414   85500 fix.go:56] duration metric: took 19.154609071s for fixHost
	I1104 12:08:31.044442   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHHostname
	I1104 12:08:31.047152   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:31.047426   85500 main.go:141] libmachine: (no-preload-908370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:66:d5", ip: ""} in network mk-no-preload-908370: {Iface:virbr3 ExpiryTime:2024-11-04 13:08:23 +0000 UTC Type:0 Mac:52:54:00:f8:66:d5 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:no-preload-908370 Clientid:01:52:54:00:f8:66:d5}
	I1104 12:08:31.047461   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined IP address 192.168.61.91 and MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:31.047639   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHPort
	I1104 12:08:31.047829   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHKeyPath
	I1104 12:08:31.047976   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHKeyPath
	I1104 12:08:31.048138   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHUsername
	I1104 12:08:31.048296   85500 main.go:141] libmachine: Using SSH client type: native
	I1104 12:08:31.048464   85500 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.91 22 <nil> <nil>}
	I1104 12:08:31.048474   85500 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1104 12:08:31.157723   85500 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730722111.115015995
	
	I1104 12:08:31.157747   85500 fix.go:216] guest clock: 1730722111.115015995
	I1104 12:08:31.157758   85500 fix.go:229] Guest: 2024-11-04 12:08:31.115015995 +0000 UTC Remote: 2024-11-04 12:08:31.044427312 +0000 UTC m=+350.890212897 (delta=70.588683ms)
	I1104 12:08:31.157829   85500 fix.go:200] guest clock delta is within tolerance: 70.588683ms
	I1104 12:08:31.157841   85500 start.go:83] releasing machines lock for "no-preload-908370", held for 19.268070408s
	I1104 12:08:31.157875   85500 main.go:141] libmachine: (no-preload-908370) Calling .DriverName
	I1104 12:08:31.158131   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetIP
	I1104 12:08:31.160806   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:31.161159   85500 main.go:141] libmachine: (no-preload-908370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:66:d5", ip: ""} in network mk-no-preload-908370: {Iface:virbr3 ExpiryTime:2024-11-04 13:08:23 +0000 UTC Type:0 Mac:52:54:00:f8:66:d5 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:no-preload-908370 Clientid:01:52:54:00:f8:66:d5}
	I1104 12:08:31.161191   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined IP address 192.168.61.91 and MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:31.161371   85500 main.go:141] libmachine: (no-preload-908370) Calling .DriverName
	I1104 12:08:31.161907   85500 main.go:141] libmachine: (no-preload-908370) Calling .DriverName
	I1104 12:08:31.162092   85500 main.go:141] libmachine: (no-preload-908370) Calling .DriverName
	I1104 12:08:31.162174   85500 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1104 12:08:31.162217   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHHostname
	I1104 12:08:31.162444   85500 ssh_runner.go:195] Run: cat /version.json
	I1104 12:08:31.162470   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHHostname
	I1104 12:08:31.165069   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:31.165316   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:31.165505   85500 main.go:141] libmachine: (no-preload-908370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:66:d5", ip: ""} in network mk-no-preload-908370: {Iface:virbr3 ExpiryTime:2024-11-04 13:08:23 +0000 UTC Type:0 Mac:52:54:00:f8:66:d5 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:no-preload-908370 Clientid:01:52:54:00:f8:66:d5}
	I1104 12:08:31.165532   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined IP address 192.168.61.91 and MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:31.165656   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHPort
	I1104 12:08:31.165771   85500 main.go:141] libmachine: (no-preload-908370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:66:d5", ip: ""} in network mk-no-preload-908370: {Iface:virbr3 ExpiryTime:2024-11-04 13:08:23 +0000 UTC Type:0 Mac:52:54:00:f8:66:d5 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:no-preload-908370 Clientid:01:52:54:00:f8:66:d5}
	I1104 12:08:31.165795   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined IP address 192.168.61.91 and MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:31.165842   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHKeyPath
	I1104 12:08:31.166006   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHPort
	I1104 12:08:31.166024   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHUsername
	I1104 12:08:31.166186   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHKeyPath
	I1104 12:08:31.166183   85500 sshutil.go:53] new ssh client: &{IP:192.168.61.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/no-preload-908370/id_rsa Username:docker}
	I1104 12:08:31.166327   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHUsername
	I1104 12:08:31.166449   85500 sshutil.go:53] new ssh client: &{IP:192.168.61.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/no-preload-908370/id_rsa Username:docker}
	I1104 12:08:31.267746   85500 ssh_runner.go:195] Run: systemctl --version
	I1104 12:08:31.273307   85500 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1104 12:08:31.410198   85500 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1104 12:08:31.416652   85500 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1104 12:08:31.416726   85500 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1104 12:08:31.432260   85500 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
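	The two steps above move any bridge/podman CNI configs aside so that only the loopback config stays active; the disabled files keep their original names with a .mk_disabled suffix. A minimal shell sketch to confirm that on the node, assuming SSH access to the VM (illustrative, not taken from the run):

	    # The bridge/podman configs should now end in .mk_disabled
	    ls -l /etc/cni/net.d/
	    sudo head /etc/cni/net.d/*.mk_disabled 2>/dev/null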
	I1104 12:08:31.432288   85500 start.go:495] detecting cgroup driver to use...
	I1104 12:08:31.432345   85500 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1104 12:08:31.453134   85500 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1104 12:08:31.467457   85500 docker.go:217] disabling cri-docker service (if available) ...
	I1104 12:08:31.467516   85500 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1104 12:08:31.481392   85500 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1104 12:08:31.495740   85500 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1104 12:08:31.617549   85500 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1104 12:08:31.802455   85500 docker.go:233] disabling docker service ...
	I1104 12:08:31.802511   85500 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1104 12:08:31.815534   85500 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1104 12:08:31.827495   85500 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1104 12:08:31.938344   85500 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1104 12:08:32.042827   85500 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
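	The sequence above stops and masks both cri-docker and docker so that CRI-O is left as the only container runtime on the node. A small sketch of how the resulting unit state could be checked, assuming shell access to the VM (illustrative only):

	    # Masked units report "masked"; crio should be the active runtime
	    systemctl is-enabled docker.service cri-docker.service 2>/dev/null
	    systemctl is-active docker crio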
	I1104 12:08:32.056126   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1104 12:08:32.074274   85500 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1104 12:08:32.074337   85500 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:08:32.084061   85500 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1104 12:08:32.084138   85500 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:08:32.093533   85500 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:08:32.104351   85500 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:08:32.113753   85500 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1104 12:08:32.123391   85500 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:08:32.133089   85500 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1104 12:08:32.149073   85500 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
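	The sed edits above rewrite the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf: the pause image is pinned to registry.k8s.io/pause:3.10, cgroup_manager is set to "cgroupfs", conmon_cgroup to "pod", and net.ipv4.ip_unprivileged_port_start=0 is appended to default_sysctls. A quick way to inspect the result, sketched here for illustration (not taken from the run):

	    # Show the keys the run just rewrote; CRI-O is restarted further below
	    # before these settings take effect.
	    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	        /etc/crio/crio.conf.d/02-crio.conf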
	I1104 12:08:32.159888   85500 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1104 12:08:32.169208   85500 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1104 12:08:32.169279   85500 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1104 12:08:32.181319   85500 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1104 12:08:32.192472   85500 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1104 12:08:32.300710   85500 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1104 12:08:32.386906   85500 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1104 12:08:32.386980   85500 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1104 12:08:32.391498   85500 start.go:563] Will wait 60s for crictl version
	I1104 12:08:32.391554   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:08:32.395471   85500 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1104 12:08:32.439094   85500 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1104 12:08:32.439168   85500 ssh_runner.go:195] Run: crio --version
	I1104 12:08:32.466609   85500 ssh_runner.go:195] Run: crio --version
	I1104 12:08:32.499305   85500 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1104 12:08:32.500825   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetIP
	I1104 12:08:32.503461   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:32.503827   85500 main.go:141] libmachine: (no-preload-908370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:66:d5", ip: ""} in network mk-no-preload-908370: {Iface:virbr3 ExpiryTime:2024-11-04 13:08:23 +0000 UTC Type:0 Mac:52:54:00:f8:66:d5 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:no-preload-908370 Clientid:01:52:54:00:f8:66:d5}
	I1104 12:08:32.503857   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined IP address 192.168.61.91 and MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:32.504039   85500 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1104 12:08:32.508082   85500 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1104 12:08:32.520202   85500 kubeadm.go:883] updating cluster {Name:no-preload-908370 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-908370 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.91 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1104 12:08:32.520359   85500 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1104 12:08:32.520402   85500 ssh_runner.go:195] Run: sudo crictl images --output json
	I1104 12:08:32.553752   85500 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1104 12:08:32.553781   85500 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.2 registry.k8s.io/kube-controller-manager:v1.31.2 registry.k8s.io/kube-scheduler:v1.31.2 registry.k8s.io/kube-proxy:v1.31.2 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1104 12:08:32.553844   85500 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1104 12:08:32.553844   85500 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1104 12:08:32.553868   85500 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.2
	I1104 12:08:32.553853   85500 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.2
	I1104 12:08:32.553886   85500 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1104 12:08:32.553925   85500 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1104 12:08:32.553969   85500 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1104 12:08:32.553978   85500 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.2
	I1104 12:08:32.555506   85500 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1104 12:08:32.555518   85500 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.2
	I1104 12:08:32.555510   85500 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1104 12:08:32.555513   85500 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.2
	I1104 12:08:32.555591   85500 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1104 12:08:32.555601   85500 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.2
	I1104 12:08:32.555514   85500 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I1104 12:08:32.555658   85500 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I1104 12:08:32.706982   85500 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I1104 12:08:32.707334   85500 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.2
	I1104 12:08:32.712904   85500 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.2
	I1104 12:08:32.721917   85500 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I1104 12:08:32.727829   85500 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.2
	I1104 12:08:32.741130   85500 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I1104 12:08:32.743716   85500 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.2
	I1104 12:08:32.796406   85500 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I1104 12:08:32.796448   85500 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I1104 12:08:32.796502   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:08:32.814658   85500 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.2" does not exist at hash "9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173" in container runtime
	I1104 12:08:32.814697   85500 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.2
	I1104 12:08:32.814735   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:08:32.828308   85500 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.2" does not exist at hash "847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856" in container runtime
	I1104 12:08:32.828362   85500 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.2
	I1104 12:08:32.828416   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:08:32.882090   85500 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I1104 12:08:32.882140   85500 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I1104 12:08:32.882205   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:08:32.886473   85500 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.2" does not exist at hash "0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503" in container runtime
	I1104 12:08:32.886518   85500 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1104 12:08:32.886567   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:08:32.956331   85500 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.2" needs transfer: "registry.k8s.io/kube-proxy:v1.31.2" does not exist at hash "505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38" in container runtime
	I1104 12:08:32.956394   85500 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.2
	I1104 12:08:32.956414   85500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1104 12:08:32.956462   85500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1104 12:08:32.956427   85500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1104 12:08:32.956521   85500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1104 12:08:32.956425   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:08:32.956506   85500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1104 12:08:33.061683   85500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1104 12:08:33.061723   85500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1104 12:08:33.061752   85500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1104 12:08:33.061790   85500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1104 12:08:33.061836   85500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1104 12:08:33.061893   85500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1104 12:08:33.168519   85500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1104 12:08:33.168596   85500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1104 12:08:33.187540   85500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1104 12:08:33.188933   85500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1104 12:08:33.189015   85500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1104 12:08:33.199281   85500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1104 12:08:33.285086   85500 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2
	I1104 12:08:33.285145   85500 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2
	I1104 12:08:33.285245   85500 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1104 12:08:33.285247   85500 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1104 12:08:33.307647   85500 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I1104 12:08:33.307769   85500 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I1104 12:08:33.307784   85500 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2
	I1104 12:08:33.307818   85500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1104 12:08:33.307869   85500 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1104 12:08:33.312697   85500 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I1104 12:08:33.312808   85500 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I1104 12:08:33.314341   85500 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.2 (exists)
	I1104 12:08:33.314358   85500 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1104 12:08:33.314396   85500 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1104 12:08:33.314535   85500 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.2 (exists)
	I1104 12:08:33.319449   85500 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.2 (exists)
	I1104 12:08:33.319604   85500 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I1104 12:08:33.356390   85500 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I1104 12:08:33.356478   85500 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2
	I1104 12:08:33.356569   85500 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.2
	I1104 12:08:33.512915   85500 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1104 12:08:31.057314   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:33.059599   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:32.350656   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:34.352338   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:31.938577   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:32.438561   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:32.938188   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:33.437856   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:33.938433   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:34.438381   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:34.938164   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:35.438120   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:35.937802   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:36.438365   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:35.736963   85500 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2: (2.42254522s)
	I1104 12:08:35.736994   85500 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2 from cache
	I1104 12:08:35.737014   85500 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1104 12:08:35.737027   85500 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.2: (2.380435224s)
	I1104 12:08:35.737058   85500 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.2 (exists)
	I1104 12:08:35.737063   85500 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1104 12:08:35.737104   85500 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.224165247s)
	I1104 12:08:35.737156   85500 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1104 12:08:35.737191   85500 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1104 12:08:35.737267   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:08:37.693026   85500 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2: (1.955928101s)
	I1104 12:08:37.693065   85500 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2 from cache
	I1104 12:08:37.693086   85500 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1104 12:08:37.693047   85500 ssh_runner.go:235] Completed: which crictl: (1.955763498s)
	I1104 12:08:37.693168   85500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1104 12:08:37.693131   85500 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1104 12:08:39.156860   85500 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2: (1.463570619s)
	I1104 12:08:39.156894   85500 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2 from cache
	I1104 12:08:39.156922   85500 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I1104 12:08:39.156930   85500 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.463741565s)
	I1104 12:08:39.156980   85500 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I1104 12:08:39.156998   85500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1104 12:08:35.625930   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:38.057567   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:36.850619   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:38.851157   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:40.852272   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:36.938295   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:37.437646   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:37.937807   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:38.438623   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:38.938662   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:39.438288   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:39.938048   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:40.438404   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:40.938494   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:41.437875   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:42.701724   85500 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.544718982s)
	I1104 12:08:42.701751   85500 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I1104 12:08:42.701771   85500 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I1104 12:08:42.701810   85500 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I1104 12:08:42.701826   85500 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.544784275s)
	I1104 12:08:42.701912   85500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1104 12:08:44.666599   85500 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.964646885s)
	I1104 12:08:44.666653   85500 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1104 12:08:44.666723   85500 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (1.964896366s)
	I1104 12:08:44.666744   85500 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I1104 12:08:44.666748   85500 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1104 12:08:44.666765   85500 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.2
	I1104 12:08:44.666807   85500 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2
	I1104 12:08:44.671475   85500 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1104 12:08:40.556827   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:42.557662   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:45.058481   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:43.351505   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:45.851360   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:41.938001   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:42.438702   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:42.938239   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:43.438469   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:43.938465   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:44.437744   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:44.938478   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:45.437757   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:45.938035   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:46.438173   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:46.627407   85500 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2: (1.960571593s)
	I1104 12:08:46.627437   85500 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2 from cache
	I1104 12:08:46.627473   85500 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1104 12:08:46.627537   85500 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1104 12:08:47.273537   85500 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19906-19898/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1104 12:08:47.273578   85500 cache_images.go:123] Successfully loaded all cached images
	I1104 12:08:47.273583   85500 cache_images.go:92] duration metric: took 14.719789832s to LoadCachedImages
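	Because no preload tarball matches this profile, each control-plane image is shipped from the local cache and loaded into CRI-O with "podman load", about 14.7s in total. An illustrative check that the images are now visible to the runtime (image names taken from the log above, command not part of the run):

	    # All v1.31.2 control-plane images plus etcd, coredns and the
	    # storage provisioner should be listed.
	    sudo crictl images | grep -E 'kube-(apiserver|controller-manager|scheduler|proxy)|etcd|coredns|storage-provisioner'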
	I1104 12:08:47.273594   85500 kubeadm.go:934] updating node { 192.168.61.91 8443 v1.31.2 crio true true} ...
	I1104 12:08:47.273686   85500 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-908370 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.91
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:no-preload-908370 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1104 12:08:47.273747   85500 ssh_runner.go:195] Run: crio config
	I1104 12:08:47.319888   85500 cni.go:84] Creating CNI manager for ""
	I1104 12:08:47.319916   85500 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1104 12:08:47.319929   85500 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1104 12:08:47.319952   85500 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.91 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-908370 NodeName:no-preload-908370 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.91"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.91 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1104 12:08:47.320098   85500 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.91
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-908370"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.91"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.91"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1104 12:08:47.320185   85500 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1104 12:08:47.330284   85500 binaries.go:44] Found k8s binaries, skipping transfer
	I1104 12:08:47.330352   85500 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1104 12:08:47.340015   85500 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I1104 12:08:47.356601   85500 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1104 12:08:47.371327   85500 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2294 bytes)
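	The kubeadm config generated above (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration in one multi-document file) has just been written to /var/tmp/minikube/kubeadm.yaml.new. As a sketch only, assuming the staged v1.31.2 kubeadm binary supports the "config validate" subcommand, the file could be sanity-checked before the init phases run:

	    # Illustrative validation of the generated multi-document config
	    sudo /var/lib/minikube/binaries/v1.31.2/kubeadm config validate \
	        --config /var/tmp/minikube/kubeadm.yaml.new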
	I1104 12:08:47.387251   85500 ssh_runner.go:195] Run: grep 192.168.61.91	control-plane.minikube.internal$ /etc/hosts
	I1104 12:08:47.391041   85500 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.91	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1104 12:08:47.402283   85500 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1104 12:08:47.527723   85500 ssh_runner.go:195] Run: sudo systemctl start kubelet
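	At this point the kubelet.service unit (352 bytes) and its 10-kubeadm.conf drop-in (316 bytes) have been written, systemd has been reloaded, and the kubelet started. An illustrative look at the resulting unit, assuming shell access to the node:

	    # Print the unit together with the minikube-generated drop-in,
	    # then confirm the service is running.
	    sudo systemctl cat kubelet.service
	    systemctl is-active kubelet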
	I1104 12:08:47.544017   85500 certs.go:68] Setting up /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/no-preload-908370 for IP: 192.168.61.91
	I1104 12:08:47.544041   85500 certs.go:194] generating shared ca certs ...
	I1104 12:08:47.544060   85500 certs.go:226] acquiring lock for ca certs: {Name:mk4fb0469da6697654d0290fd25edddd463f47c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 12:08:47.544244   85500 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19906-19898/.minikube/ca.key
	I1104 12:08:47.544309   85500 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.key
	I1104 12:08:47.544322   85500 certs.go:256] generating profile certs ...
	I1104 12:08:47.544412   85500 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/no-preload-908370/client.key
	I1104 12:08:47.544485   85500 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/no-preload-908370/apiserver.key.890cb7f7
	I1104 12:08:47.544522   85500 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/no-preload-908370/proxy-client.key
	I1104 12:08:47.544626   85500 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218.pem (1338 bytes)
	W1104 12:08:47.544654   85500 certs.go:480] ignoring /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218_empty.pem, impossibly tiny 0 bytes
	I1104 12:08:47.544663   85500 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca-key.pem (1675 bytes)
	I1104 12:08:47.544685   85500 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/ca.pem (1078 bytes)
	I1104 12:08:47.544706   85500 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/cert.pem (1123 bytes)
	I1104 12:08:47.544726   85500 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/certs/key.pem (1679 bytes)
	I1104 12:08:47.544774   85500 certs.go:484] found cert: /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem (1708 bytes)
	I1104 12:08:47.545439   85500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1104 12:08:47.588488   85500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1104 12:08:47.631341   85500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1104 12:08:47.666571   85500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1104 12:08:47.698703   85500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/no-preload-908370/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1104 12:08:47.725285   85500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/no-preload-908370/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1104 12:08:47.748890   85500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/no-preload-908370/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1104 12:08:47.775589   85500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/no-preload-908370/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1104 12:08:47.799507   85500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1104 12:08:47.823383   85500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/certs/27218.pem --> /usr/share/ca-certificates/27218.pem (1338 bytes)
	I1104 12:08:47.847515   85500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/ssl/certs/272182.pem --> /usr/share/ca-certificates/272182.pem (1708 bytes)
	I1104 12:08:47.869937   85500 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1104 12:08:47.886413   85500 ssh_runner.go:195] Run: openssl version
	I1104 12:08:47.892041   85500 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/272182.pem && ln -fs /usr/share/ca-certificates/272182.pem /etc/ssl/certs/272182.pem"
	I1104 12:08:47.901942   85500 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/272182.pem
	I1104 12:08:47.906128   85500 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  4 10:49 /usr/share/ca-certificates/272182.pem
	I1104 12:08:47.906182   85500 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/272182.pem
	I1104 12:08:47.911506   85500 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/272182.pem /etc/ssl/certs/3ec20f2e.0"
	I1104 12:08:47.921614   85500 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1104 12:08:47.932358   85500 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1104 12:08:47.936742   85500 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  4 10:38 /usr/share/ca-certificates/minikubeCA.pem
	I1104 12:08:47.936801   85500 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1104 12:08:47.942544   85500 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1104 12:08:47.953063   85500 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/27218.pem && ln -fs /usr/share/ca-certificates/27218.pem /etc/ssl/certs/27218.pem"
	I1104 12:08:47.963293   85500 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/27218.pem
	I1104 12:08:47.967487   85500 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  4 10:49 /usr/share/ca-certificates/27218.pem
	I1104 12:08:47.967547   85500 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/27218.pem
	I1104 12:08:47.972898   85500 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/27218.pem /etc/ssl/certs/51391683.0"
	I1104 12:08:47.983089   85500 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1104 12:08:47.987532   85500 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1104 12:08:47.993296   85500 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1104 12:08:47.999021   85500 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1104 12:08:48.004741   85500 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1104 12:08:48.010227   85500 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1104 12:08:48.015795   85500 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
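	The run of "openssl x509 ... -checkend 86400" commands above checks that none of the control-plane certificates expire within the next 24 hours (86400 seconds); a zero exit status means the certificate remains valid past that window. A minimal illustrative example for a single certificate (not taken from the run):

	    # Exit 0: the cert will not expire within 86400s; exit 1: it will.
	    sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 \
	        && echo "apiserver.crt valid for at least 24h" \
	        || echo "apiserver.crt expires within 24h"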
	I1104 12:08:48.021356   85500 kubeadm.go:392] StartCluster: {Name:no-preload-908370 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-908370 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.91 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1104 12:08:48.021431   85500 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1104 12:08:48.021471   85500 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1104 12:08:48.057729   85500 cri.go:89] found id: ""
	I1104 12:08:48.057805   85500 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1104 12:08:48.067591   85500 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1104 12:08:48.067610   85500 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1104 12:08:48.067663   85500 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1104 12:08:48.076604   85500 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1104 12:08:48.077987   85500 kubeconfig.go:125] found "no-preload-908370" server: "https://192.168.61.91:8443"
	I1104 12:08:48.080042   85500 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1104 12:08:48.089796   85500 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.91
	I1104 12:08:48.089826   85500 kubeadm.go:1160] stopping kube-system containers ...
	I1104 12:08:48.089838   85500 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1104 12:08:48.089886   85500 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1104 12:08:48.126920   85500 cri.go:89] found id: ""
	I1104 12:08:48.126998   85500 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1104 12:08:48.143409   85500 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1104 12:08:48.152783   85500 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1104 12:08:48.152809   85500 kubeadm.go:157] found existing configuration files:
	
	I1104 12:08:48.152858   85500 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1104 12:08:48.161458   85500 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1104 12:08:48.161542   85500 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1104 12:08:48.170361   85500 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1104 12:08:48.179217   85500 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1104 12:08:48.179272   85500 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1104 12:08:48.187834   85500 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1104 12:08:48.196025   85500 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1104 12:08:48.196079   85500 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1104 12:08:48.204809   85500 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1104 12:08:48.213280   85500 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1104 12:08:48.213338   85500 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1104 12:08:48.222672   85500 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1104 12:08:48.232374   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1104 12:08:48.328999   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1104 12:08:49.920988   85500 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.591954434s)
	I1104 12:08:49.921028   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1104 12:08:50.121679   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1104 12:08:50.181412   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
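	For a restart, minikube re-runs the individual "kubeadm init" phases (certs, kubeconfig, kubelet-start, control-plane, etcd) rather than a full init. Once the control-plane and etcd phases complete, the static pod manifests should be present under the staticPodPath from the config above; an illustrative check (not part of the run):

	    # Expect kube-apiserver, kube-controller-manager, kube-scheduler and etcd manifests
	    sudo ls -l /etc/kubernetes/manifests/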
	I1104 12:08:47.558137   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:49.559576   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:48.349974   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:50.350855   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:46.938016   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:47.438229   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:47.938447   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:48.437950   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:48.938450   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:49.437785   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:49.938444   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:50.438413   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:50.938514   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:51.438658   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:50.253614   85500 api_server.go:52] waiting for apiserver process to appear ...
	I1104 12:08:50.253693   85500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:50.754467   85500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:51.254553   85500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:51.271229   85500 api_server.go:72] duration metric: took 1.017613016s to wait for apiserver process to appear ...
	I1104 12:08:51.271255   85500 api_server.go:88] waiting for apiserver healthz status ...
	I1104 12:08:51.271278   85500 api_server.go:253] Checking apiserver healthz at https://192.168.61.91:8443/healthz ...
	I1104 12:08:51.271794   85500 api_server.go:269] stopped: https://192.168.61.91:8443/healthz: Get "https://192.168.61.91:8443/healthz": dial tcp 192.168.61.91:8443: connect: connection refused
	I1104 12:08:51.771551   85500 api_server.go:253] Checking apiserver healthz at https://192.168.61.91:8443/healthz ...
	I1104 12:08:54.499268   85500 api_server.go:279] https://192.168.61.91:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1104 12:08:54.499296   85500 api_server.go:103] status: https://192.168.61.91:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1104 12:08:54.499310   85500 api_server.go:253] Checking apiserver healthz at https://192.168.61.91:8443/healthz ...
	I1104 12:08:54.617672   85500 api_server.go:279] https://192.168.61.91:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1104 12:08:54.617699   85500 api_server.go:103] status: https://192.168.61.91:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1104 12:08:54.771942   85500 api_server.go:253] Checking apiserver healthz at https://192.168.61.91:8443/healthz ...
	I1104 12:08:54.776588   85500 api_server.go:279] https://192.168.61.91:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1104 12:08:54.776615   85500 api_server.go:103] status: https://192.168.61.91:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1104 12:08:52.056678   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:54.057081   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:55.272332   85500 api_server.go:253] Checking apiserver healthz at https://192.168.61.91:8443/healthz ...
	I1104 12:08:55.276594   85500 api_server.go:279] https://192.168.61.91:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1104 12:08:55.276621   85500 api_server.go:103] status: https://192.168.61.91:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1104 12:08:55.771423   85500 api_server.go:253] Checking apiserver healthz at https://192.168.61.91:8443/healthz ...
	I1104 12:08:55.776881   85500 api_server.go:279] https://192.168.61.91:8443/healthz returned 200:
	ok
	I1104 12:08:55.783842   85500 api_server.go:141] control plane version: v1.31.2
	I1104 12:08:55.783869   85500 api_server.go:131] duration metric: took 4.512606898s to wait for apiserver health ...
	I1104 12:08:55.783877   85500 cni.go:84] Creating CNI manager for ""
	I1104 12:08:55.783883   85500 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1104 12:08:55.785665   85500 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1104 12:08:52.351019   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:54.850354   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:51.938323   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:52.438464   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:52.937754   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:53.438442   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:53.938586   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:54.438288   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:54.938444   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:55.438391   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:55.938546   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:56.438433   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:55.787083   85500 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1104 12:08:55.801764   85500 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1104 12:08:55.828371   85500 system_pods.go:43] waiting for kube-system pods to appear ...
	I1104 12:08:55.847602   85500 system_pods.go:59] 8 kube-system pods found
	I1104 12:08:55.847653   85500 system_pods.go:61] "coredns-7c65d6cfc9-vv4kq" [f2518f86-9653-4e98-9193-9d2a76838117] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1104 12:08:55.847666   85500 system_pods.go:61] "etcd-no-preload-908370" [cc23ebc2-c49f-403c-8128-98bb08459592] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1104 12:08:55.847679   85500 system_pods.go:61] "kube-apiserver-no-preload-908370" [37532b3e-f683-4420-a5e4-280744f2bdf9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1104 12:08:55.847695   85500 system_pods.go:61] "kube-controller-manager-no-preload-908370" [81d30255-758e-4661-bec2-c6aa6773923a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1104 12:08:55.847707   85500 system_pods.go:61] "kube-proxy-w9hbz" [9d494697-ff2b-4600-9c11-b704de9be2a3] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1104 12:08:55.847724   85500 system_pods.go:61] "kube-scheduler-no-preload-908370" [9b0ff34e-1795-4f7c-b511-822a02c4af7b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1104 12:08:55.847733   85500 system_pods.go:61] "metrics-server-6867b74b74-2lxlg" [bf328856-ad19-47b3-a40d-282cd4fdec4b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1104 12:08:55.847743   85500 system_pods.go:61] "storage-provisioner" [d11c9416-6236-4c81-9626-d5e040acea8a] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1104 12:08:55.847753   85500 system_pods.go:74] duration metric: took 19.357387ms to wait for pod list to return data ...
	I1104 12:08:55.847762   85500 node_conditions.go:102] verifying NodePressure condition ...
	I1104 12:08:55.856783   85500 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1104 12:08:55.856820   85500 node_conditions.go:123] node cpu capacity is 2
	I1104 12:08:55.856834   85500 node_conditions.go:105] duration metric: took 9.065755ms to run NodePressure ...
	I1104 12:08:55.856856   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1104 12:08:56.143012   85500 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1104 12:08:56.148006   85500 kubeadm.go:739] kubelet initialised
	I1104 12:08:56.148026   85500 kubeadm.go:740] duration metric: took 4.987292ms waiting for restarted kubelet to initialise ...
	I1104 12:08:56.148034   85500 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1104 12:08:56.152359   85500 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-vv4kq" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:56.156700   85500 pod_ready.go:98] node "no-preload-908370" hosting pod "coredns-7c65d6cfc9-vv4kq" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-908370" has status "Ready":"False"
	I1104 12:08:56.156725   85500 pod_ready.go:82] duration metric: took 4.341093ms for pod "coredns-7c65d6cfc9-vv4kq" in "kube-system" namespace to be "Ready" ...
	E1104 12:08:56.156734   85500 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-908370" hosting pod "coredns-7c65d6cfc9-vv4kq" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-908370" has status "Ready":"False"
	I1104 12:08:56.156741   85500 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-908370" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:56.161402   85500 pod_ready.go:98] node "no-preload-908370" hosting pod "etcd-no-preload-908370" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-908370" has status "Ready":"False"
	I1104 12:08:56.161431   85500 pod_ready.go:82] duration metric: took 4.681838ms for pod "etcd-no-preload-908370" in "kube-system" namespace to be "Ready" ...
	E1104 12:08:56.161440   85500 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-908370" hosting pod "etcd-no-preload-908370" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-908370" has status "Ready":"False"
	I1104 12:08:56.161447   85500 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-908370" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:56.165738   85500 pod_ready.go:98] node "no-preload-908370" hosting pod "kube-apiserver-no-preload-908370" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-908370" has status "Ready":"False"
	I1104 12:08:56.165756   85500 pod_ready.go:82] duration metric: took 4.301197ms for pod "kube-apiserver-no-preload-908370" in "kube-system" namespace to be "Ready" ...
	E1104 12:08:56.165764   85500 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-908370" hosting pod "kube-apiserver-no-preload-908370" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-908370" has status "Ready":"False"
	I1104 12:08:56.165770   85500 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-908370" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:56.232568   85500 pod_ready.go:98] node "no-preload-908370" hosting pod "kube-controller-manager-no-preload-908370" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-908370" has status "Ready":"False"
	I1104 12:08:56.232598   85500 pod_ready.go:82] duration metric: took 66.818411ms for pod "kube-controller-manager-no-preload-908370" in "kube-system" namespace to be "Ready" ...
	E1104 12:08:56.232610   85500 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-908370" hosting pod "kube-controller-manager-no-preload-908370" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-908370" has status "Ready":"False"
	I1104 12:08:56.232620   85500 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-w9hbz" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:56.633774   85500 pod_ready.go:98] node "no-preload-908370" hosting pod "kube-proxy-w9hbz" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-908370" has status "Ready":"False"
	I1104 12:08:56.633804   85500 pod_ready.go:82] duration metric: took 401.173552ms for pod "kube-proxy-w9hbz" in "kube-system" namespace to be "Ready" ...
	E1104 12:08:56.633815   85500 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-908370" hosting pod "kube-proxy-w9hbz" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-908370" has status "Ready":"False"
	I1104 12:08:56.633824   85500 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-908370" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:57.032392   85500 pod_ready.go:98] node "no-preload-908370" hosting pod "kube-scheduler-no-preload-908370" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-908370" has status "Ready":"False"
	I1104 12:08:57.032419   85500 pod_ready.go:82] duration metric: took 398.58729ms for pod "kube-scheduler-no-preload-908370" in "kube-system" namespace to be "Ready" ...
	E1104 12:08:57.032431   85500 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-908370" hosting pod "kube-scheduler-no-preload-908370" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-908370" has status "Ready":"False"
	I1104 12:08:57.032439   85500 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace to be "Ready" ...
	I1104 12:08:57.431940   85500 pod_ready.go:98] node "no-preload-908370" hosting pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-908370" has status "Ready":"False"
	I1104 12:08:57.431976   85500 pod_ready.go:82] duration metric: took 399.525162ms for pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace to be "Ready" ...
	E1104 12:08:57.431987   85500 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-908370" hosting pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-908370" has status "Ready":"False"
	I1104 12:08:57.431997   85500 pod_ready.go:39] duration metric: took 1.283953089s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1104 12:08:57.432014   85500 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1104 12:08:57.444821   85500 ops.go:34] apiserver oom_adj: -16
	I1104 12:08:57.444845   85500 kubeadm.go:597] duration metric: took 9.377227288s to restartPrimaryControlPlane
	I1104 12:08:57.444857   85500 kubeadm.go:394] duration metric: took 9.423506415s to StartCluster
	I1104 12:08:57.444879   85500 settings.go:142] acquiring lock: {Name:mk5550833c1f1a4ab4fbb2bf42ff8bd7f6341220 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 12:08:57.444965   85500 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19906-19898/kubeconfig
	I1104 12:08:57.446715   85500 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19906-19898/kubeconfig: {Name:mk164657c178e6abe086acb5c1cadb969cb2b0cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1104 12:08:57.446981   85500 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.91 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1104 12:08:57.447059   85500 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1104 12:08:57.447172   85500 addons.go:69] Setting storage-provisioner=true in profile "no-preload-908370"
	I1104 12:08:57.447193   85500 addons.go:234] Setting addon storage-provisioner=true in "no-preload-908370"
	W1104 12:08:57.447202   85500 addons.go:243] addon storage-provisioner should already be in state true
	I1104 12:08:57.447207   85500 config.go:182] Loaded profile config "no-preload-908370": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1104 12:08:57.447237   85500 host.go:66] Checking if "no-preload-908370" exists ...
	I1104 12:08:57.447234   85500 addons.go:69] Setting default-storageclass=true in profile "no-preload-908370"
	I1104 12:08:57.447321   85500 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-908370"
	I1104 12:08:57.447222   85500 addons.go:69] Setting metrics-server=true in profile "no-preload-908370"
	I1104 12:08:57.447418   85500 addons.go:234] Setting addon metrics-server=true in "no-preload-908370"
	W1104 12:08:57.447431   85500 addons.go:243] addon metrics-server should already be in state true
	I1104 12:08:57.447461   85500 host.go:66] Checking if "no-preload-908370" exists ...
	I1104 12:08:57.447708   85500 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:08:57.447792   85500 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:08:57.447813   85500 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:08:57.447748   85500 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:08:57.447896   85500 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:08:57.447853   85500 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:08:57.449013   85500 out.go:177] * Verifying Kubernetes components...
	I1104 12:08:57.450774   85500 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1104 12:08:57.469657   85500 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42565
	I1104 12:08:57.470180   85500 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:08:57.470801   85500 main.go:141] libmachine: Using API Version  1
	I1104 12:08:57.470830   85500 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:08:57.471277   85500 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:08:57.471873   85500 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:08:57.471924   85500 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:08:57.485026   85500 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33323
	I1104 12:08:57.485330   85500 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43999
	I1104 12:08:57.485604   85500 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:08:57.485772   85500 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:08:57.486328   85500 main.go:141] libmachine: Using API Version  1
	I1104 12:08:57.486363   85500 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:08:57.486442   85500 main.go:141] libmachine: Using API Version  1
	I1104 12:08:57.486471   85500 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:08:57.486735   85500 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:08:57.486847   85500 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:08:57.487059   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetState
	I1104 12:08:57.487337   85500 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:08:57.487401   85500 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:08:57.490138   85500 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45761
	I1104 12:08:57.490611   85500 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:08:57.490705   85500 addons.go:234] Setting addon default-storageclass=true in "no-preload-908370"
	W1104 12:08:57.490724   85500 addons.go:243] addon default-storageclass should already be in state true
	I1104 12:08:57.490748   85500 host.go:66] Checking if "no-preload-908370" exists ...
	I1104 12:08:57.491098   85500 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:08:57.491140   85500 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:08:57.491153   85500 main.go:141] libmachine: Using API Version  1
	I1104 12:08:57.491177   85500 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:08:57.491549   85500 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:08:57.491718   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetState
	I1104 12:08:57.493600   85500 main.go:141] libmachine: (no-preload-908370) Calling .DriverName
	I1104 12:08:57.495883   85500 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1104 12:08:57.497200   85500 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1104 12:08:57.497217   85500 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1104 12:08:57.497245   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHHostname
	I1104 12:08:57.500402   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:57.500934   85500 main.go:141] libmachine: (no-preload-908370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:66:d5", ip: ""} in network mk-no-preload-908370: {Iface:virbr3 ExpiryTime:2024-11-04 13:08:23 +0000 UTC Type:0 Mac:52:54:00:f8:66:d5 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:no-preload-908370 Clientid:01:52:54:00:f8:66:d5}
	I1104 12:08:57.500960   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined IP address 192.168.61.91 and MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:57.501276   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHPort
	I1104 12:08:57.501483   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHKeyPath
	I1104 12:08:57.501626   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHUsername
	I1104 12:08:57.501775   85500 sshutil.go:53] new ssh client: &{IP:192.168.61.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/no-preload-908370/id_rsa Username:docker}
	I1104 12:08:57.508615   85500 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37243
	I1104 12:08:57.509102   85500 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:08:57.509582   85500 main.go:141] libmachine: Using API Version  1
	I1104 12:08:57.509606   85500 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:08:57.509948   85500 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:08:57.510115   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetState
	I1104 12:08:57.510809   85500 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40519
	I1104 12:08:57.511134   85500 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:08:57.511818   85500 main.go:141] libmachine: Using API Version  1
	I1104 12:08:57.511836   85500 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:08:57.511868   85500 main.go:141] libmachine: (no-preload-908370) Calling .DriverName
	I1104 12:08:57.512486   85500 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:08:57.513456   85500 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 12:08:57.513500   85500 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 12:08:57.513921   85500 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1104 12:08:57.515417   85500 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1104 12:08:57.515434   85500 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1104 12:08:57.515461   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHHostname
	I1104 12:08:57.518867   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:57.519216   85500 main.go:141] libmachine: (no-preload-908370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:66:d5", ip: ""} in network mk-no-preload-908370: {Iface:virbr3 ExpiryTime:2024-11-04 13:08:23 +0000 UTC Type:0 Mac:52:54:00:f8:66:d5 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:no-preload-908370 Clientid:01:52:54:00:f8:66:d5}
	I1104 12:08:57.519241   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined IP address 192.168.61.91 and MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:57.519334   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHPort
	I1104 12:08:57.519523   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHKeyPath
	I1104 12:08:57.519662   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHUsername
	I1104 12:08:57.520124   85500 sshutil.go:53] new ssh client: &{IP:192.168.61.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/no-preload-908370/id_rsa Username:docker}
	I1104 12:08:57.529448   85500 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42427
	I1104 12:08:57.529979   85500 main.go:141] libmachine: () Calling .GetVersion
	I1104 12:08:57.530374   85500 main.go:141] libmachine: Using API Version  1
	I1104 12:08:57.530389   85500 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 12:08:57.530756   85500 main.go:141] libmachine: () Calling .GetMachineName
	I1104 12:08:57.530889   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetState
	I1104 12:08:57.532430   85500 main.go:141] libmachine: (no-preload-908370) Calling .DriverName
	I1104 12:08:57.532832   85500 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1104 12:08:57.532843   85500 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1104 12:08:57.532857   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHHostname
	I1104 12:08:57.535429   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:57.535783   85500 main.go:141] libmachine: (no-preload-908370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:66:d5", ip: ""} in network mk-no-preload-908370: {Iface:virbr3 ExpiryTime:2024-11-04 13:08:23 +0000 UTC Type:0 Mac:52:54:00:f8:66:d5 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:no-preload-908370 Clientid:01:52:54:00:f8:66:d5}
	I1104 12:08:57.535809   85500 main.go:141] libmachine: (no-preload-908370) DBG | domain no-preload-908370 has defined IP address 192.168.61.91 and MAC address 52:54:00:f8:66:d5 in network mk-no-preload-908370
	I1104 12:08:57.535953   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHPort
	I1104 12:08:57.536148   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHKeyPath
	I1104 12:08:57.536245   85500 main.go:141] libmachine: (no-preload-908370) Calling .GetSSHUsername
	I1104 12:08:57.536388   85500 sshutil.go:53] new ssh client: &{IP:192.168.61.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/no-preload-908370/id_rsa Username:docker}
	I1104 12:08:57.635571   85500 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1104 12:08:57.654984   85500 node_ready.go:35] waiting up to 6m0s for node "no-preload-908370" to be "Ready" ...
	I1104 12:08:57.722564   85500 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1104 12:08:57.768850   85500 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1104 12:08:57.791069   85500 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1104 12:08:57.791090   85500 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1104 12:08:57.875966   85500 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1104 12:08:57.875997   85500 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1104 12:08:57.929834   85500 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1104 12:08:57.929867   85500 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1104 12:08:58.017927   85500 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1104 12:08:58.732204   85500 main.go:141] libmachine: Making call to close driver server
	I1104 12:08:58.732235   85500 main.go:141] libmachine: (no-preload-908370) Calling .Close
	I1104 12:08:58.732586   85500 main.go:141] libmachine: Successfully made call to close driver server
	I1104 12:08:58.732614   85500 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 12:08:58.732624   85500 main.go:141] libmachine: Making call to close driver server
	I1104 12:08:58.732635   85500 main.go:141] libmachine: (no-preload-908370) Calling .Close
	I1104 12:08:58.732640   85500 main.go:141] libmachine: (no-preload-908370) DBG | Closing plugin on server side
	I1104 12:08:58.733045   85500 main.go:141] libmachine: Successfully made call to close driver server
	I1104 12:08:58.733108   85500 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 12:08:58.733084   85500 main.go:141] libmachine: (no-preload-908370) DBG | Closing plugin on server side
	I1104 12:08:58.736737   85500 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.014142064s)
	I1104 12:08:58.736783   85500 main.go:141] libmachine: Making call to close driver server
	I1104 12:08:58.736793   85500 main.go:141] libmachine: (no-preload-908370) Calling .Close
	I1104 12:08:58.737035   85500 main.go:141] libmachine: Successfully made call to close driver server
	I1104 12:08:58.737077   85500 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 12:08:58.737090   85500 main.go:141] libmachine: Making call to close driver server
	I1104 12:08:58.737100   85500 main.go:141] libmachine: (no-preload-908370) Calling .Close
	I1104 12:08:58.737737   85500 main.go:141] libmachine: (no-preload-908370) DBG | Closing plugin on server side
	I1104 12:08:58.737756   85500 main.go:141] libmachine: Successfully made call to close driver server
	I1104 12:08:58.737770   85500 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 12:08:58.740716   85500 main.go:141] libmachine: Making call to close driver server
	I1104 12:08:58.740735   85500 main.go:141] libmachine: (no-preload-908370) Calling .Close
	I1104 12:08:58.740963   85500 main.go:141] libmachine: (no-preload-908370) DBG | Closing plugin on server side
	I1104 12:08:58.740974   85500 main.go:141] libmachine: Successfully made call to close driver server
	I1104 12:08:58.740985   85500 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 12:08:58.987200   85500 main.go:141] libmachine: Making call to close driver server
	I1104 12:08:58.987227   85500 main.go:141] libmachine: (no-preload-908370) Calling .Close
	I1104 12:08:58.987657   85500 main.go:141] libmachine: Successfully made call to close driver server
	I1104 12:08:58.987667   85500 main.go:141] libmachine: (no-preload-908370) DBG | Closing plugin on server side
	I1104 12:08:58.987676   85500 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 12:08:58.987685   85500 main.go:141] libmachine: Making call to close driver server
	I1104 12:08:58.987708   85500 main.go:141] libmachine: (no-preload-908370) Calling .Close
	I1104 12:08:58.987991   85500 main.go:141] libmachine: Successfully made call to close driver server
	I1104 12:08:58.988006   85500 main.go:141] libmachine: Making call to close connection to plugin binary
	I1104 12:08:58.988018   85500 addons.go:475] Verifying addon metrics-server=true in "no-preload-908370"
	I1104 12:08:58.989756   85500 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1104 12:08:58.991022   85500 addons.go:510] duration metric: took 1.54397104s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1104 12:08:59.659284   85500 node_ready.go:53] node "no-preload-908370" has status "Ready":"False"
	I1104 12:08:56.057497   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:58.057767   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:56.850793   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:58.852058   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:08:56.938312   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:57.437920   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:57.937779   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:58.438511   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:58.938464   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:59.438423   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:08:59.938450   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:00.438108   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:00.938053   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:01.438356   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:02.158318   85500 node_ready.go:53] node "no-preload-908370" has status "Ready":"False"
	I1104 12:09:04.658719   85500 node_ready.go:53] node "no-preload-908370" has status "Ready":"False"
	I1104 12:09:05.159526   85500 node_ready.go:49] node "no-preload-908370" has status "Ready":"True"
	I1104 12:09:05.159553   85500 node_ready.go:38] duration metric: took 7.504528904s for node "no-preload-908370" to be "Ready" ...
	I1104 12:09:05.159564   85500 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1104 12:09:05.164838   85500 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-vv4kq" in "kube-system" namespace to be "Ready" ...
	I1104 12:09:05.173888   85500 pod_ready.go:93] pod "coredns-7c65d6cfc9-vv4kq" in "kube-system" namespace has status "Ready":"True"
	I1104 12:09:05.173909   85500 pod_ready.go:82] duration metric: took 9.046581ms for pod "coredns-7c65d6cfc9-vv4kq" in "kube-system" namespace to be "Ready" ...
	I1104 12:09:05.173919   85500 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-908370" in "kube-system" namespace to be "Ready" ...
	I1104 12:09:00.556225   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:02.556893   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:05.055827   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:01.351472   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:03.851990   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:01.938447   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:02.438441   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:02.938694   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:03.438467   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:03.938445   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:04.438137   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:04.937941   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:05.438441   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:05.937760   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:06.438704   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:05.680754   85500 pod_ready.go:93] pod "etcd-no-preload-908370" in "kube-system" namespace has status "Ready":"True"
	I1104 12:09:05.680778   85500 pod_ready.go:82] duration metric: took 506.849735ms for pod "etcd-no-preload-908370" in "kube-system" namespace to be "Ready" ...
	I1104 12:09:05.680804   85500 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-908370" in "kube-system" namespace to be "Ready" ...
	I1104 12:09:07.687108   85500 pod_ready.go:103] pod "kube-apiserver-no-preload-908370" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:09.687377   85500 pod_ready.go:103] pod "kube-apiserver-no-preload-908370" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:07.556024   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:10.055613   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:06.351230   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:08.351640   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:10.850364   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:06.937956   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:07.438323   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:07.938465   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:08.438437   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:08.937675   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:09.437868   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:09.938053   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:10.438467   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:10.938703   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:11.438436   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:10.687315   85500 pod_ready.go:93] pod "kube-apiserver-no-preload-908370" in "kube-system" namespace has status "Ready":"True"
	I1104 12:09:10.687338   85500 pod_ready.go:82] duration metric: took 5.006527478s for pod "kube-apiserver-no-preload-908370" in "kube-system" namespace to be "Ready" ...
	I1104 12:09:10.687348   85500 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-908370" in "kube-system" namespace to be "Ready" ...
	I1104 12:09:10.692554   85500 pod_ready.go:93] pod "kube-controller-manager-no-preload-908370" in "kube-system" namespace has status "Ready":"True"
	I1104 12:09:10.692583   85500 pod_ready.go:82] duration metric: took 5.227048ms for pod "kube-controller-manager-no-preload-908370" in "kube-system" namespace to be "Ready" ...
	I1104 12:09:10.692597   85500 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-w9hbz" in "kube-system" namespace to be "Ready" ...
	I1104 12:09:10.697109   85500 pod_ready.go:93] pod "kube-proxy-w9hbz" in "kube-system" namespace has status "Ready":"True"
	I1104 12:09:10.697132   85500 pod_ready.go:82] duration metric: took 4.525205ms for pod "kube-proxy-w9hbz" in "kube-system" namespace to be "Ready" ...
	I1104 12:09:10.697153   85500 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-908370" in "kube-system" namespace to be "Ready" ...
	I1104 12:09:10.701450   85500 pod_ready.go:93] pod "kube-scheduler-no-preload-908370" in "kube-system" namespace has status "Ready":"True"
	I1104 12:09:10.701472   85500 pod_ready.go:82] duration metric: took 4.310973ms for pod "kube-scheduler-no-preload-908370" in "kube-system" namespace to be "Ready" ...
	I1104 12:09:10.701483   85500 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace to be "Ready" ...
	I1104 12:09:12.708631   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:14.708772   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:12.056161   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:14.556380   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:12.850721   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:14.851608   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:11.938465   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:12.437963   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:12.938515   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:13.437754   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:13.937856   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:14.438729   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:14.938439   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:15.438421   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:15.938044   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:16.438456   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:17.209025   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:19.707595   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:17.056226   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:19.555918   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:17.350266   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:19.350329   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:16.937807   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:17.438266   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:17.938153   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:18.437829   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:18.938469   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:19.438336   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:19.938284   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:20.438073   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:20.937894   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:21.438135   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:09:21.438238   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:09:21.471463   86402 cri.go:89] found id: ""
	I1104 12:09:21.471495   86402 logs.go:282] 0 containers: []
	W1104 12:09:21.471507   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:09:21.471515   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:09:21.471568   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:09:21.509336   86402 cri.go:89] found id: ""
	I1104 12:09:21.509363   86402 logs.go:282] 0 containers: []
	W1104 12:09:21.509373   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:09:21.509381   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:09:21.509441   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:09:21.545963   86402 cri.go:89] found id: ""
	I1104 12:09:21.545987   86402 logs.go:282] 0 containers: []
	W1104 12:09:21.545995   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:09:21.546000   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:09:21.546059   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:09:21.580707   86402 cri.go:89] found id: ""
	I1104 12:09:21.580737   86402 logs.go:282] 0 containers: []
	W1104 12:09:21.580748   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:09:21.580755   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:09:21.580820   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:09:21.613763   86402 cri.go:89] found id: ""
	I1104 12:09:21.613791   86402 logs.go:282] 0 containers: []
	W1104 12:09:21.613801   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:09:21.613809   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:09:21.613872   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:09:21.646559   86402 cri.go:89] found id: ""
	I1104 12:09:21.646583   86402 logs.go:282] 0 containers: []
	W1104 12:09:21.646591   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:09:21.646597   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:09:21.646643   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:09:21.681439   86402 cri.go:89] found id: ""
	I1104 12:09:21.681467   86402 logs.go:282] 0 containers: []
	W1104 12:09:21.681479   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:09:21.681486   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:09:21.681554   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:09:21.708045   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:24.207686   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:22.055637   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:24.056458   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:21.350636   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:23.850852   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:21.713875   86402 cri.go:89] found id: ""
	I1104 12:09:21.713899   86402 logs.go:282] 0 containers: []
	W1104 12:09:21.713907   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:09:21.713915   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:09:21.713925   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:09:21.763882   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:09:21.763918   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:09:21.778590   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:09:21.778615   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:09:21.892208   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:09:21.892235   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:09:21.892250   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:09:21.965946   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:09:21.965984   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:09:24.502992   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:24.514899   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:09:24.514960   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:09:24.554466   86402 cri.go:89] found id: ""
	I1104 12:09:24.554491   86402 logs.go:282] 0 containers: []
	W1104 12:09:24.554501   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:09:24.554510   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:09:24.554567   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:09:24.591532   86402 cri.go:89] found id: ""
	I1104 12:09:24.591560   86402 logs.go:282] 0 containers: []
	W1104 12:09:24.591572   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:09:24.591580   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:09:24.591638   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:09:24.625436   86402 cri.go:89] found id: ""
	I1104 12:09:24.625467   86402 logs.go:282] 0 containers: []
	W1104 12:09:24.625478   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:09:24.625485   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:09:24.625544   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:09:24.658317   86402 cri.go:89] found id: ""
	I1104 12:09:24.658346   86402 logs.go:282] 0 containers: []
	W1104 12:09:24.658357   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:09:24.658364   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:09:24.658426   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:09:24.692811   86402 cri.go:89] found id: ""
	I1104 12:09:24.692839   86402 logs.go:282] 0 containers: []
	W1104 12:09:24.692850   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:09:24.692857   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:09:24.692917   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:09:24.729677   86402 cri.go:89] found id: ""
	I1104 12:09:24.729708   86402 logs.go:282] 0 containers: []
	W1104 12:09:24.729719   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:09:24.729726   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:09:24.729773   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:09:24.768575   86402 cri.go:89] found id: ""
	I1104 12:09:24.768598   86402 logs.go:282] 0 containers: []
	W1104 12:09:24.768608   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:09:24.768615   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:09:24.768681   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:09:24.802344   86402 cri.go:89] found id: ""
	I1104 12:09:24.802368   86402 logs.go:282] 0 containers: []
	W1104 12:09:24.802375   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:09:24.802383   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:09:24.802394   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:09:24.855882   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:09:24.855915   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:09:24.869199   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:09:24.869243   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:09:24.940720   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:09:24.940744   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:09:24.940758   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:09:25.016139   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:09:25.016177   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:09:26.208422   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:28.208568   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:26.557513   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:29.055769   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:26.350171   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:28.353001   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:30.851153   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:27.553297   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:27.566857   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:09:27.566913   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:09:27.599606   86402 cri.go:89] found id: ""
	I1104 12:09:27.599641   86402 logs.go:282] 0 containers: []
	W1104 12:09:27.599653   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:09:27.599661   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:09:27.599721   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:09:27.633818   86402 cri.go:89] found id: ""
	I1104 12:09:27.633841   86402 logs.go:282] 0 containers: []
	W1104 12:09:27.633849   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:09:27.633854   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:09:27.633907   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:09:27.668088   86402 cri.go:89] found id: ""
	I1104 12:09:27.668120   86402 logs.go:282] 0 containers: []
	W1104 12:09:27.668129   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:09:27.668135   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:09:27.668185   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:09:27.699401   86402 cri.go:89] found id: ""
	I1104 12:09:27.699433   86402 logs.go:282] 0 containers: []
	W1104 12:09:27.699445   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:09:27.699453   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:09:27.699511   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:09:27.731422   86402 cri.go:89] found id: ""
	I1104 12:09:27.731448   86402 logs.go:282] 0 containers: []
	W1104 12:09:27.731459   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:09:27.731466   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:09:27.731528   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:09:27.762808   86402 cri.go:89] found id: ""
	I1104 12:09:27.762839   86402 logs.go:282] 0 containers: []
	W1104 12:09:27.762850   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:09:27.762857   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:09:27.762917   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:09:27.794729   86402 cri.go:89] found id: ""
	I1104 12:09:27.794757   86402 logs.go:282] 0 containers: []
	W1104 12:09:27.794765   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:09:27.794771   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:09:27.794826   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:09:27.825694   86402 cri.go:89] found id: ""
	I1104 12:09:27.825716   86402 logs.go:282] 0 containers: []
	W1104 12:09:27.825724   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:09:27.825731   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:09:27.825742   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:09:27.862111   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:09:27.862140   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:09:27.911169   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:09:27.911204   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:09:27.924207   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:09:27.924232   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:09:27.995123   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:09:27.995153   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:09:27.995167   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:09:30.580831   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:30.594901   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:09:30.594959   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:09:30.630936   86402 cri.go:89] found id: ""
	I1104 12:09:30.630961   86402 logs.go:282] 0 containers: []
	W1104 12:09:30.630971   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:09:30.630979   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:09:30.631034   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:09:30.669288   86402 cri.go:89] found id: ""
	I1104 12:09:30.669311   86402 logs.go:282] 0 containers: []
	W1104 12:09:30.669320   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:09:30.669328   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:09:30.669388   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:09:30.706288   86402 cri.go:89] found id: ""
	I1104 12:09:30.706312   86402 logs.go:282] 0 containers: []
	W1104 12:09:30.706319   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:09:30.706325   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:09:30.706384   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:09:30.739027   86402 cri.go:89] found id: ""
	I1104 12:09:30.739057   86402 logs.go:282] 0 containers: []
	W1104 12:09:30.739069   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:09:30.739078   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:09:30.739137   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:09:30.772247   86402 cri.go:89] found id: ""
	I1104 12:09:30.772272   86402 logs.go:282] 0 containers: []
	W1104 12:09:30.772280   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:09:30.772286   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:09:30.772338   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:09:30.810327   86402 cri.go:89] found id: ""
	I1104 12:09:30.810360   86402 logs.go:282] 0 containers: []
	W1104 12:09:30.810370   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:09:30.810375   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:09:30.810426   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:09:30.842241   86402 cri.go:89] found id: ""
	I1104 12:09:30.842271   86402 logs.go:282] 0 containers: []
	W1104 12:09:30.842279   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:09:30.842285   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:09:30.842332   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:09:30.877003   86402 cri.go:89] found id: ""
	I1104 12:09:30.877032   86402 logs.go:282] 0 containers: []
	W1104 12:09:30.877043   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:09:30.877052   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:09:30.877077   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:09:30.925783   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:09:30.925816   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:09:30.939651   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:09:30.939680   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:09:31.029176   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:09:31.029210   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:09:31.029244   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:09:31.116311   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:09:31.116348   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:09:30.708451   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:32.708661   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:31.056627   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:33.056743   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:35.057986   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:33.350420   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:35.351206   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:33.653267   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:33.665813   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:09:33.665878   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:09:33.701812   86402 cri.go:89] found id: ""
	I1104 12:09:33.701839   86402 logs.go:282] 0 containers: []
	W1104 12:09:33.701852   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:09:33.701860   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:09:33.701922   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:09:33.738816   86402 cri.go:89] found id: ""
	I1104 12:09:33.738850   86402 logs.go:282] 0 containers: []
	W1104 12:09:33.738861   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:09:33.738868   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:09:33.738928   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:09:33.773936   86402 cri.go:89] found id: ""
	I1104 12:09:33.773960   86402 logs.go:282] 0 containers: []
	W1104 12:09:33.773968   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:09:33.773976   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:09:33.774031   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:09:33.808049   86402 cri.go:89] found id: ""
	I1104 12:09:33.808079   86402 logs.go:282] 0 containers: []
	W1104 12:09:33.808091   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:09:33.808098   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:09:33.808154   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:09:33.844276   86402 cri.go:89] found id: ""
	I1104 12:09:33.844303   86402 logs.go:282] 0 containers: []
	W1104 12:09:33.844314   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:09:33.844322   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:09:33.844443   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:09:33.879736   86402 cri.go:89] found id: ""
	I1104 12:09:33.879772   86402 logs.go:282] 0 containers: []
	W1104 12:09:33.879782   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:09:33.879788   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:09:33.879843   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:09:33.913717   86402 cri.go:89] found id: ""
	I1104 12:09:33.913750   86402 logs.go:282] 0 containers: []
	W1104 12:09:33.913761   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:09:33.913769   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:09:33.913832   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:09:33.949632   86402 cri.go:89] found id: ""
	I1104 12:09:33.949658   86402 logs.go:282] 0 containers: []
	W1104 12:09:33.949667   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:09:33.949677   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:09:33.949691   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:09:34.019770   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:09:34.019790   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:09:34.019806   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:09:34.101493   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:09:34.101524   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:09:34.146723   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:09:34.146751   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:09:34.196295   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:09:34.196338   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:09:35.207223   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:37.207576   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:39.208091   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:37.556228   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:39.556548   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:37.850907   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:39.852870   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:36.709951   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:36.724723   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:09:36.724782   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:09:36.777406   86402 cri.go:89] found id: ""
	I1104 12:09:36.777440   86402 logs.go:282] 0 containers: []
	W1104 12:09:36.777451   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:09:36.777459   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:09:36.777520   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:09:36.834486   86402 cri.go:89] found id: ""
	I1104 12:09:36.834516   86402 logs.go:282] 0 containers: []
	W1104 12:09:36.834527   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:09:36.834535   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:09:36.834641   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:09:36.868828   86402 cri.go:89] found id: ""
	I1104 12:09:36.868853   86402 logs.go:282] 0 containers: []
	W1104 12:09:36.868861   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:09:36.868867   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:09:36.868912   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:09:36.900942   86402 cri.go:89] found id: ""
	I1104 12:09:36.900972   86402 logs.go:282] 0 containers: []
	W1104 12:09:36.900980   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:09:36.900986   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:09:36.901043   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:09:36.933215   86402 cri.go:89] found id: ""
	I1104 12:09:36.933265   86402 logs.go:282] 0 containers: []
	W1104 12:09:36.933276   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:09:36.933282   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:09:36.933330   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:09:36.966753   86402 cri.go:89] found id: ""
	I1104 12:09:36.966776   86402 logs.go:282] 0 containers: []
	W1104 12:09:36.966784   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:09:36.966789   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:09:36.966850   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:09:37.000050   86402 cri.go:89] found id: ""
	I1104 12:09:37.000074   86402 logs.go:282] 0 containers: []
	W1104 12:09:37.000082   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:09:37.000087   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:09:37.000144   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:09:37.033252   86402 cri.go:89] found id: ""
	I1104 12:09:37.033283   86402 logs.go:282] 0 containers: []
	W1104 12:09:37.033295   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:09:37.033305   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:09:37.033328   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:09:37.085351   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:09:37.085383   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:09:37.098556   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:09:37.098582   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:09:37.167489   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:09:37.167512   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:09:37.167525   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:09:37.243292   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:09:37.243325   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:09:39.781468   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:39.795630   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:09:39.795756   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:09:39.833745   86402 cri.go:89] found id: ""
	I1104 12:09:39.833779   86402 logs.go:282] 0 containers: []
	W1104 12:09:39.833791   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:09:39.833798   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:09:39.833862   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:09:39.870075   86402 cri.go:89] found id: ""
	I1104 12:09:39.870096   86402 logs.go:282] 0 containers: []
	W1104 12:09:39.870106   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:09:39.870119   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:09:39.870173   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:09:39.905807   86402 cri.go:89] found id: ""
	I1104 12:09:39.905836   86402 logs.go:282] 0 containers: []
	W1104 12:09:39.905846   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:09:39.905854   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:09:39.905916   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:09:39.941890   86402 cri.go:89] found id: ""
	I1104 12:09:39.941914   86402 logs.go:282] 0 containers: []
	W1104 12:09:39.941922   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:09:39.941932   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:09:39.941978   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:09:39.979123   86402 cri.go:89] found id: ""
	I1104 12:09:39.979150   86402 logs.go:282] 0 containers: []
	W1104 12:09:39.979159   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:09:39.979165   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:09:39.979220   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:09:40.014748   86402 cri.go:89] found id: ""
	I1104 12:09:40.014777   86402 logs.go:282] 0 containers: []
	W1104 12:09:40.014785   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:09:40.014791   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:09:40.014882   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:09:40.049977   86402 cri.go:89] found id: ""
	I1104 12:09:40.050004   86402 logs.go:282] 0 containers: []
	W1104 12:09:40.050014   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:09:40.050021   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:09:40.050100   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:09:40.085630   86402 cri.go:89] found id: ""
	I1104 12:09:40.085663   86402 logs.go:282] 0 containers: []
	W1104 12:09:40.085674   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:09:40.085685   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:09:40.085701   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:09:40.166611   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:09:40.166650   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:09:40.203117   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:09:40.203155   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:09:40.256233   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:09:40.256267   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:09:40.270009   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:09:40.270042   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:09:40.338672   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:09:41.707618   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:43.708915   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:42.055555   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:44.060949   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:42.351562   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:44.851599   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:42.839402   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:42.852881   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:09:42.852947   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:09:42.884587   86402 cri.go:89] found id: ""
	I1104 12:09:42.884614   86402 logs.go:282] 0 containers: []
	W1104 12:09:42.884624   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:09:42.884631   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:09:42.884690   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:09:42.915286   86402 cri.go:89] found id: ""
	I1104 12:09:42.915316   86402 logs.go:282] 0 containers: []
	W1104 12:09:42.915327   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:09:42.915337   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:09:42.915399   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:09:42.945827   86402 cri.go:89] found id: ""
	I1104 12:09:42.945857   86402 logs.go:282] 0 containers: []
	W1104 12:09:42.945868   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:09:42.945875   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:09:42.945934   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:09:42.982662   86402 cri.go:89] found id: ""
	I1104 12:09:42.982693   86402 logs.go:282] 0 containers: []
	W1104 12:09:42.982703   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:09:42.982712   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:09:42.982788   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:09:43.015337   86402 cri.go:89] found id: ""
	I1104 12:09:43.015371   86402 logs.go:282] 0 containers: []
	W1104 12:09:43.015382   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:09:43.015390   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:09:43.015453   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:09:43.048235   86402 cri.go:89] found id: ""
	I1104 12:09:43.048262   86402 logs.go:282] 0 containers: []
	W1104 12:09:43.048270   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:09:43.048276   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:09:43.048351   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:09:43.080636   86402 cri.go:89] found id: ""
	I1104 12:09:43.080668   86402 logs.go:282] 0 containers: []
	W1104 12:09:43.080679   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:09:43.080687   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:09:43.080746   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:09:43.113986   86402 cri.go:89] found id: ""
	I1104 12:09:43.114011   86402 logs.go:282] 0 containers: []
	W1104 12:09:43.114019   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:09:43.114027   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:09:43.114038   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:09:43.165356   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:09:43.165390   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:09:43.179167   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:09:43.179200   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:09:43.250054   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:09:43.250083   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:09:43.250098   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:09:43.328970   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:09:43.329002   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:09:45.869879   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:45.883262   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:09:45.883359   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:09:45.921978   86402 cri.go:89] found id: ""
	I1104 12:09:45.922003   86402 logs.go:282] 0 containers: []
	W1104 12:09:45.922011   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:09:45.922016   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:09:45.922076   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:09:45.954668   86402 cri.go:89] found id: ""
	I1104 12:09:45.954697   86402 logs.go:282] 0 containers: []
	W1104 12:09:45.954710   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:09:45.954717   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:09:45.954787   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:09:45.987793   86402 cri.go:89] found id: ""
	I1104 12:09:45.987826   86402 logs.go:282] 0 containers: []
	W1104 12:09:45.987837   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:09:45.987845   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:09:45.987906   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:09:46.028517   86402 cri.go:89] found id: ""
	I1104 12:09:46.028550   86402 logs.go:282] 0 containers: []
	W1104 12:09:46.028558   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:09:46.028563   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:09:46.028621   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:09:46.063832   86402 cri.go:89] found id: ""
	I1104 12:09:46.063859   86402 logs.go:282] 0 containers: []
	W1104 12:09:46.063870   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:09:46.063878   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:09:46.063942   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:09:46.099981   86402 cri.go:89] found id: ""
	I1104 12:09:46.100011   86402 logs.go:282] 0 containers: []
	W1104 12:09:46.100027   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:09:46.100036   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:09:46.100169   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:09:46.133060   86402 cri.go:89] found id: ""
	I1104 12:09:46.133083   86402 logs.go:282] 0 containers: []
	W1104 12:09:46.133092   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:09:46.133099   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:09:46.133165   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:09:46.170559   86402 cri.go:89] found id: ""
	I1104 12:09:46.170583   86402 logs.go:282] 0 containers: []
	W1104 12:09:46.170591   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:09:46.170599   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:09:46.170610   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:09:46.253202   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:09:46.253253   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:09:46.288468   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:09:46.288498   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:09:46.339322   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:09:46.339354   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:09:46.353020   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:09:46.353049   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:09:46.420328   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:09:46.208695   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:48.708268   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:46.556598   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:49.057461   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:47.351225   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:49.352737   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:48.920709   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:48.933443   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:09:48.933507   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:09:48.964736   86402 cri.go:89] found id: ""
	I1104 12:09:48.964759   86402 logs.go:282] 0 containers: []
	W1104 12:09:48.964770   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:09:48.964777   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:09:48.964837   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:09:48.996646   86402 cri.go:89] found id: ""
	I1104 12:09:48.996670   86402 logs.go:282] 0 containers: []
	W1104 12:09:48.996679   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:09:48.996684   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:09:48.996734   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:09:49.028899   86402 cri.go:89] found id: ""
	I1104 12:09:49.028942   86402 logs.go:282] 0 containers: []
	W1104 12:09:49.028951   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:09:49.028957   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:09:49.029015   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:09:49.065032   86402 cri.go:89] found id: ""
	I1104 12:09:49.065056   86402 logs.go:282] 0 containers: []
	W1104 12:09:49.065064   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:09:49.065075   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:09:49.065120   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:09:49.097159   86402 cri.go:89] found id: ""
	I1104 12:09:49.097183   86402 logs.go:282] 0 containers: []
	W1104 12:09:49.097191   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:09:49.097196   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:09:49.097269   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:09:49.131578   86402 cri.go:89] found id: ""
	I1104 12:09:49.131608   86402 logs.go:282] 0 containers: []
	W1104 12:09:49.131619   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:09:49.131626   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:09:49.131684   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:09:49.164307   86402 cri.go:89] found id: ""
	I1104 12:09:49.164339   86402 logs.go:282] 0 containers: []
	W1104 12:09:49.164358   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:09:49.164367   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:09:49.164430   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:09:49.197171   86402 cri.go:89] found id: ""
	I1104 12:09:49.197199   86402 logs.go:282] 0 containers: []
	W1104 12:09:49.197210   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:09:49.197220   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:09:49.197251   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:09:49.210327   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:09:49.210355   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:09:49.280226   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:09:49.280251   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:09:49.280262   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:09:49.367655   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:09:49.367691   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:09:49.408424   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:09:49.408452   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:09:50.708963   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:53.207337   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:51.555800   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:54.055622   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:51.850949   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:54.350551   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:51.958148   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:51.970451   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:09:51.970521   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:09:52.000896   86402 cri.go:89] found id: ""
	I1104 12:09:52.000929   86402 logs.go:282] 0 containers: []
	W1104 12:09:52.000940   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:09:52.000948   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:09:52.001023   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:09:52.034122   86402 cri.go:89] found id: ""
	I1104 12:09:52.034150   86402 logs.go:282] 0 containers: []
	W1104 12:09:52.034161   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:09:52.034168   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:09:52.034227   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:09:52.070834   86402 cri.go:89] found id: ""
	I1104 12:09:52.070872   86402 logs.go:282] 0 containers: []
	W1104 12:09:52.070884   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:09:52.070891   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:09:52.070950   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:09:52.103730   86402 cri.go:89] found id: ""
	I1104 12:09:52.103758   86402 logs.go:282] 0 containers: []
	W1104 12:09:52.103766   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:09:52.103772   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:09:52.103832   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:09:52.135980   86402 cri.go:89] found id: ""
	I1104 12:09:52.136006   86402 logs.go:282] 0 containers: []
	W1104 12:09:52.136014   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:09:52.136020   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:09:52.136081   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:09:52.168903   86402 cri.go:89] found id: ""
	I1104 12:09:52.168928   86402 logs.go:282] 0 containers: []
	W1104 12:09:52.168936   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:09:52.168942   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:09:52.169001   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:09:52.199499   86402 cri.go:89] found id: ""
	I1104 12:09:52.199529   86402 logs.go:282] 0 containers: []
	W1104 12:09:52.199539   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:09:52.199546   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:09:52.199610   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:09:52.232566   86402 cri.go:89] found id: ""
	I1104 12:09:52.232603   86402 logs.go:282] 0 containers: []
	W1104 12:09:52.232615   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:09:52.232626   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:09:52.232640   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:09:52.282140   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:09:52.282180   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:09:52.295079   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:09:52.295110   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:09:52.364061   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:09:52.364087   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:09:52.364102   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:09:52.437868   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:09:52.437901   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:09:54.978182   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:54.991002   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:09:54.991068   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:09:55.023628   86402 cri.go:89] found id: ""
	I1104 12:09:55.023656   86402 logs.go:282] 0 containers: []
	W1104 12:09:55.023663   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:09:55.023669   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:09:55.023715   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:09:55.058524   86402 cri.go:89] found id: ""
	I1104 12:09:55.058548   86402 logs.go:282] 0 containers: []
	W1104 12:09:55.058557   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:09:55.058564   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:09:55.058634   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:09:55.095730   86402 cri.go:89] found id: ""
	I1104 12:09:55.095760   86402 logs.go:282] 0 containers: []
	W1104 12:09:55.095772   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:09:55.095779   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:09:55.095837   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:09:55.128341   86402 cri.go:89] found id: ""
	I1104 12:09:55.128365   86402 logs.go:282] 0 containers: []
	W1104 12:09:55.128373   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:09:55.128379   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:09:55.128438   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:09:55.160655   86402 cri.go:89] found id: ""
	I1104 12:09:55.160681   86402 logs.go:282] 0 containers: []
	W1104 12:09:55.160693   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:09:55.160700   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:09:55.160754   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:09:55.194050   86402 cri.go:89] found id: ""
	I1104 12:09:55.194077   86402 logs.go:282] 0 containers: []
	W1104 12:09:55.194086   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:09:55.194091   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:09:55.194138   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:09:55.227655   86402 cri.go:89] found id: ""
	I1104 12:09:55.227694   86402 logs.go:282] 0 containers: []
	W1104 12:09:55.227705   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:09:55.227712   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:09:55.227810   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:09:55.261106   86402 cri.go:89] found id: ""
	I1104 12:09:55.261137   86402 logs.go:282] 0 containers: []
	W1104 12:09:55.261147   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:09:55.261157   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:09:55.261171   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:09:55.335577   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:09:55.335598   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:09:55.335610   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:09:55.421339   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:09:55.421375   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:09:55.459936   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:09:55.459967   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:09:55.509346   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:09:55.509382   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:09:55.208869   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:57.707576   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:59.708019   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:56.555996   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:58.556335   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:56.851071   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:58.851254   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:09:58.023608   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:09:58.036540   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:09:58.036599   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:09:58.075104   86402 cri.go:89] found id: ""
	I1104 12:09:58.075182   86402 logs.go:282] 0 containers: []
	W1104 12:09:58.075198   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:09:58.075207   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:09:58.075271   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:09:58.109910   86402 cri.go:89] found id: ""
	I1104 12:09:58.109949   86402 logs.go:282] 0 containers: []
	W1104 12:09:58.109961   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:09:58.109968   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:09:58.110038   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:09:58.142829   86402 cri.go:89] found id: ""
	I1104 12:09:58.142854   86402 logs.go:282] 0 containers: []
	W1104 12:09:58.142865   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:09:58.142873   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:09:58.142924   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:09:58.178125   86402 cri.go:89] found id: ""
	I1104 12:09:58.178153   86402 logs.go:282] 0 containers: []
	W1104 12:09:58.178161   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:09:58.178168   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:09:58.178239   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:09:58.214117   86402 cri.go:89] found id: ""
	I1104 12:09:58.214146   86402 logs.go:282] 0 containers: []
	W1104 12:09:58.214156   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:09:58.214162   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:09:58.214213   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:09:58.244728   86402 cri.go:89] found id: ""
	I1104 12:09:58.244751   86402 logs.go:282] 0 containers: []
	W1104 12:09:58.244759   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:09:58.244765   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:09:58.244809   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:09:58.275542   86402 cri.go:89] found id: ""
	I1104 12:09:58.275568   86402 logs.go:282] 0 containers: []
	W1104 12:09:58.275576   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:09:58.275582   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:09:58.275630   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:09:58.314909   86402 cri.go:89] found id: ""
	I1104 12:09:58.314935   86402 logs.go:282] 0 containers: []
	W1104 12:09:58.314943   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:09:58.314952   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:09:58.314962   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:09:58.364361   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:09:58.364390   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:09:58.378483   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:09:58.378517   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:09:58.442012   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:09:58.442033   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:09:58.442045   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:09:58.517260   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:09:58.517298   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:10:01.057203   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:10:01.069937   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:10:01.070008   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:10:01.101672   86402 cri.go:89] found id: ""
	I1104 12:10:01.101698   86402 logs.go:282] 0 containers: []
	W1104 12:10:01.101709   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:10:01.101716   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:10:01.101779   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:10:01.134672   86402 cri.go:89] found id: ""
	I1104 12:10:01.134701   86402 logs.go:282] 0 containers: []
	W1104 12:10:01.134712   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:10:01.134719   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:10:01.134789   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:10:01.167784   86402 cri.go:89] found id: ""
	I1104 12:10:01.167833   86402 logs.go:282] 0 containers: []
	W1104 12:10:01.167845   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:10:01.167853   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:10:01.167945   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:10:01.201218   86402 cri.go:89] found id: ""
	I1104 12:10:01.201260   86402 logs.go:282] 0 containers: []
	W1104 12:10:01.201271   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:10:01.201281   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:10:01.201338   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:10:01.234964   86402 cri.go:89] found id: ""
	I1104 12:10:01.234991   86402 logs.go:282] 0 containers: []
	W1104 12:10:01.235000   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:10:01.235007   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:10:01.235069   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:10:01.267809   86402 cri.go:89] found id: ""
	I1104 12:10:01.267848   86402 logs.go:282] 0 containers: []
	W1104 12:10:01.267881   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:10:01.267890   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:10:01.267942   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:10:01.303567   86402 cri.go:89] found id: ""
	I1104 12:10:01.303590   86402 logs.go:282] 0 containers: []
	W1104 12:10:01.303598   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:10:01.303604   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:10:01.303648   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:10:01.342059   86402 cri.go:89] found id: ""
	I1104 12:10:01.342088   86402 logs.go:282] 0 containers: []
	W1104 12:10:01.342099   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:10:01.342109   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:10:01.342142   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:10:01.354845   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:10:01.354867   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:10:01.423426   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:10:01.423447   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:10:01.423459   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:10:01.498979   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:10:01.499018   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:10:01.537658   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:10:01.537691   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:10:02.208192   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:04.209058   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:01.055266   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:03.056457   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:01.350820   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:03.850435   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:04.088653   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:10:04.103506   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:10:04.103576   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:10:04.137574   86402 cri.go:89] found id: ""
	I1104 12:10:04.137602   86402 logs.go:282] 0 containers: []
	W1104 12:10:04.137612   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:10:04.137620   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:10:04.137684   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:10:04.177624   86402 cri.go:89] found id: ""
	I1104 12:10:04.177662   86402 logs.go:282] 0 containers: []
	W1104 12:10:04.177673   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:10:04.177681   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:10:04.177750   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:10:04.213829   86402 cri.go:89] found id: ""
	I1104 12:10:04.213850   86402 logs.go:282] 0 containers: []
	W1104 12:10:04.213862   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:10:04.213870   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:10:04.213929   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:10:04.251112   86402 cri.go:89] found id: ""
	I1104 12:10:04.251143   86402 logs.go:282] 0 containers: []
	W1104 12:10:04.251154   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:10:04.251162   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:10:04.251227   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:10:04.286005   86402 cri.go:89] found id: ""
	I1104 12:10:04.286036   86402 logs.go:282] 0 containers: []
	W1104 12:10:04.286046   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:10:04.286053   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:10:04.286118   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:10:04.317628   86402 cri.go:89] found id: ""
	I1104 12:10:04.317656   86402 logs.go:282] 0 containers: []
	W1104 12:10:04.317667   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:10:04.317674   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:10:04.317742   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:10:04.351663   86402 cri.go:89] found id: ""
	I1104 12:10:04.351687   86402 logs.go:282] 0 containers: []
	W1104 12:10:04.351695   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:10:04.351700   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:10:04.351755   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:10:04.385818   86402 cri.go:89] found id: ""
	I1104 12:10:04.385842   86402 logs.go:282] 0 containers: []
	W1104 12:10:04.385850   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:10:04.385858   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:10:04.385880   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:10:04.467141   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:10:04.467179   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:10:04.503669   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:10:04.503700   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:10:04.557237   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:10:04.557303   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:10:04.570484   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:10:04.570520   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:10:04.635099   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:10:06.708483   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:09.207171   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:05.556612   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:08.056976   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:06.350422   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:08.351537   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:10.351962   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:07.135741   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:10:07.148039   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:10:07.148132   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:10:07.185171   86402 cri.go:89] found id: ""
	I1104 12:10:07.185196   86402 logs.go:282] 0 containers: []
	W1104 12:10:07.185205   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:10:07.185211   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:10:07.185280   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:10:07.217097   86402 cri.go:89] found id: ""
	I1104 12:10:07.217126   86402 logs.go:282] 0 containers: []
	W1104 12:10:07.217137   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:10:07.217144   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:10:07.217204   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:10:07.250079   86402 cri.go:89] found id: ""
	I1104 12:10:07.250108   86402 logs.go:282] 0 containers: []
	W1104 12:10:07.250116   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:10:07.250121   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:10:07.250169   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:10:07.283423   86402 cri.go:89] found id: ""
	I1104 12:10:07.283463   86402 logs.go:282] 0 containers: []
	W1104 12:10:07.283475   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:10:07.283482   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:10:07.283554   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:10:07.316461   86402 cri.go:89] found id: ""
	I1104 12:10:07.316490   86402 logs.go:282] 0 containers: []
	W1104 12:10:07.316507   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:10:07.316513   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:10:07.316569   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:10:07.361981   86402 cri.go:89] found id: ""
	I1104 12:10:07.362010   86402 logs.go:282] 0 containers: []
	W1104 12:10:07.362018   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:10:07.362024   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:10:07.362087   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:10:07.397834   86402 cri.go:89] found id: ""
	I1104 12:10:07.397867   86402 logs.go:282] 0 containers: []
	W1104 12:10:07.397878   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:10:07.397886   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:10:07.397948   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:10:07.429379   86402 cri.go:89] found id: ""
	I1104 12:10:07.429407   86402 logs.go:282] 0 containers: []
	W1104 12:10:07.429416   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:10:07.429425   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:10:07.429438   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:10:07.495294   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:10:07.495322   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:10:07.495334   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:10:07.578504   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:10:07.578546   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:10:07.617172   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:10:07.617201   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:10:07.667168   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:10:07.667204   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:10:10.181802   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:10:10.196017   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:10:10.196084   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:10:10.228243   86402 cri.go:89] found id: ""
	I1104 12:10:10.228272   86402 logs.go:282] 0 containers: []
	W1104 12:10:10.228282   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:10:10.228289   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:10:10.228347   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:10:10.262110   86402 cri.go:89] found id: ""
	I1104 12:10:10.262143   86402 logs.go:282] 0 containers: []
	W1104 12:10:10.262152   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:10:10.262161   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:10:10.262218   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:10:10.297776   86402 cri.go:89] found id: ""
	I1104 12:10:10.297812   86402 logs.go:282] 0 containers: []
	W1104 12:10:10.297823   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:10:10.297830   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:10:10.297877   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:10:10.332645   86402 cri.go:89] found id: ""
	I1104 12:10:10.332672   86402 logs.go:282] 0 containers: []
	W1104 12:10:10.332680   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:10:10.332685   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:10:10.332730   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:10:10.366703   86402 cri.go:89] found id: ""
	I1104 12:10:10.366735   86402 logs.go:282] 0 containers: []
	W1104 12:10:10.366746   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:10:10.366754   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:10:10.366809   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:10:10.399500   86402 cri.go:89] found id: ""
	I1104 12:10:10.399526   86402 logs.go:282] 0 containers: []
	W1104 12:10:10.399534   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:10:10.399539   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:10:10.399634   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:10:10.434898   86402 cri.go:89] found id: ""
	I1104 12:10:10.434932   86402 logs.go:282] 0 containers: []
	W1104 12:10:10.434943   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:10:10.434951   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:10:10.435022   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:10:10.472159   86402 cri.go:89] found id: ""
	I1104 12:10:10.472189   86402 logs.go:282] 0 containers: []
	W1104 12:10:10.472201   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:10:10.472225   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:10:10.472246   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:10:10.528710   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:10:10.528769   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:10:10.541943   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:10:10.541973   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:10:10.621819   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:10:10.621843   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:10:10.621855   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:10:10.698301   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:10:10.698335   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:10:11.208069   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:13.707594   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:10.556520   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:13.056160   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:15.056984   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:12.851001   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:14.851591   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:13.235151   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:10:13.247511   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:10:13.247585   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:10:13.278546   86402 cri.go:89] found id: ""
	I1104 12:10:13.278576   86402 logs.go:282] 0 containers: []
	W1104 12:10:13.278586   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:10:13.278592   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:10:13.278655   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:10:13.310297   86402 cri.go:89] found id: ""
	I1104 12:10:13.310325   86402 logs.go:282] 0 containers: []
	W1104 12:10:13.310334   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:10:13.310340   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:10:13.310394   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:10:13.344110   86402 cri.go:89] found id: ""
	I1104 12:10:13.344139   86402 logs.go:282] 0 containers: []
	W1104 12:10:13.344150   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:10:13.344158   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:10:13.344210   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:10:13.379778   86402 cri.go:89] found id: ""
	I1104 12:10:13.379806   86402 logs.go:282] 0 containers: []
	W1104 12:10:13.379817   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:10:13.379824   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:10:13.379872   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:10:13.411763   86402 cri.go:89] found id: ""
	I1104 12:10:13.411795   86402 logs.go:282] 0 containers: []
	W1104 12:10:13.411806   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:10:13.411813   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:10:13.411872   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:10:13.445192   86402 cri.go:89] found id: ""
	I1104 12:10:13.445217   86402 logs.go:282] 0 containers: []
	W1104 12:10:13.445235   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:10:13.445243   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:10:13.445297   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:10:13.478518   86402 cri.go:89] found id: ""
	I1104 12:10:13.478549   86402 logs.go:282] 0 containers: []
	W1104 12:10:13.478561   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:10:13.478569   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:10:13.478710   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:10:13.513852   86402 cri.go:89] found id: ""
	I1104 12:10:13.513878   86402 logs.go:282] 0 containers: []
	W1104 12:10:13.513886   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:10:13.513895   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:10:13.513909   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:10:13.590413   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:10:13.590439   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:10:13.590454   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:10:13.664575   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:10:13.664608   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:10:13.700616   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:10:13.700644   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:10:13.751113   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:10:13.751147   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:10:16.264311   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:10:16.277443   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:10:16.277508   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:10:16.309983   86402 cri.go:89] found id: ""
	I1104 12:10:16.310010   86402 logs.go:282] 0 containers: []
	W1104 12:10:16.310020   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:10:16.310025   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:10:16.310073   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:10:16.358281   86402 cri.go:89] found id: ""
	I1104 12:10:16.358305   86402 logs.go:282] 0 containers: []
	W1104 12:10:16.358312   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:10:16.358317   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:10:16.358376   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:10:16.394455   86402 cri.go:89] found id: ""
	I1104 12:10:16.394485   86402 logs.go:282] 0 containers: []
	W1104 12:10:16.394497   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:10:16.394503   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:10:16.394571   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:10:16.430606   86402 cri.go:89] found id: ""
	I1104 12:10:16.430638   86402 logs.go:282] 0 containers: []
	W1104 12:10:16.430648   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:10:16.430655   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:10:16.430716   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:10:16.464402   86402 cri.go:89] found id: ""
	I1104 12:10:16.464439   86402 logs.go:282] 0 containers: []
	W1104 12:10:16.464450   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:10:16.464458   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:10:16.464517   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:10:16.497985   86402 cri.go:89] found id: ""
	I1104 12:10:16.498009   86402 logs.go:282] 0 containers: []
	W1104 12:10:16.498017   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:10:16.498022   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:10:16.498076   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:10:16.531255   86402 cri.go:89] found id: ""
	I1104 12:10:16.531289   86402 logs.go:282] 0 containers: []
	W1104 12:10:16.531301   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:10:16.531309   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:10:16.531372   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:10:16.566176   86402 cri.go:89] found id: ""
	I1104 12:10:16.566204   86402 logs.go:282] 0 containers: []
	W1104 12:10:16.566213   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:10:16.566228   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:10:16.566243   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:10:16.634157   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:10:16.634196   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:10:16.634218   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:10:16.206939   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:18.208360   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:17.555513   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:19.556105   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:17.351026   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:19.351294   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:16.710518   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:10:16.710550   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:10:16.746572   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:10:16.746608   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:10:16.797146   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:10:16.797179   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:10:19.310286   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:10:19.323409   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:10:19.323473   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:10:19.360864   86402 cri.go:89] found id: ""
	I1104 12:10:19.360893   86402 logs.go:282] 0 containers: []
	W1104 12:10:19.360902   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:10:19.360907   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:10:19.360962   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:10:19.400127   86402 cri.go:89] found id: ""
	I1104 12:10:19.400155   86402 logs.go:282] 0 containers: []
	W1104 12:10:19.400167   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:10:19.400174   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:10:19.400230   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:10:19.433023   86402 cri.go:89] found id: ""
	I1104 12:10:19.433049   86402 logs.go:282] 0 containers: []
	W1104 12:10:19.433057   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:10:19.433062   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:10:19.433123   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:10:19.467786   86402 cri.go:89] found id: ""
	I1104 12:10:19.467810   86402 logs.go:282] 0 containers: []
	W1104 12:10:19.467819   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:10:19.467825   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:10:19.467875   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:10:19.498411   86402 cri.go:89] found id: ""
	I1104 12:10:19.498436   86402 logs.go:282] 0 containers: []
	W1104 12:10:19.498444   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:10:19.498455   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:10:19.498502   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:10:19.532146   86402 cri.go:89] found id: ""
	I1104 12:10:19.532171   86402 logs.go:282] 0 containers: []
	W1104 12:10:19.532179   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:10:19.532184   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:10:19.532234   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:10:19.567271   86402 cri.go:89] found id: ""
	I1104 12:10:19.567294   86402 logs.go:282] 0 containers: []
	W1104 12:10:19.567302   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:10:19.567308   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:10:19.567369   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:10:19.608233   86402 cri.go:89] found id: ""
	I1104 12:10:19.608265   86402 logs.go:282] 0 containers: []
	W1104 12:10:19.608279   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:10:19.608289   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:10:19.608304   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:10:19.649039   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:10:19.649071   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:10:19.702129   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:10:19.702168   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:10:19.716749   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:10:19.716776   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:10:19.787538   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:10:19.787560   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:10:19.787572   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:10:20.208694   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:22.708289   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:21.556715   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:23.557173   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:21.851010   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:23.852944   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:22.368982   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:10:22.382889   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:10:22.382962   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:10:22.418672   86402 cri.go:89] found id: ""
	I1104 12:10:22.418698   86402 logs.go:282] 0 containers: []
	W1104 12:10:22.418709   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:10:22.418716   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:10:22.418782   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:10:22.451675   86402 cri.go:89] found id: ""
	I1104 12:10:22.451704   86402 logs.go:282] 0 containers: []
	W1104 12:10:22.451715   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:10:22.451723   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:10:22.451785   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:10:22.488520   86402 cri.go:89] found id: ""
	I1104 12:10:22.488549   86402 logs.go:282] 0 containers: []
	W1104 12:10:22.488561   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:10:22.488567   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:10:22.488631   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:10:22.530288   86402 cri.go:89] found id: ""
	I1104 12:10:22.530312   86402 logs.go:282] 0 containers: []
	W1104 12:10:22.530321   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:10:22.530326   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:10:22.530382   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:10:22.564929   86402 cri.go:89] found id: ""
	I1104 12:10:22.564958   86402 logs.go:282] 0 containers: []
	W1104 12:10:22.564970   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:10:22.564977   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:10:22.565036   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:10:22.598015   86402 cri.go:89] found id: ""
	I1104 12:10:22.598042   86402 logs.go:282] 0 containers: []
	W1104 12:10:22.598051   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:10:22.598056   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:10:22.598160   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:10:22.632894   86402 cri.go:89] found id: ""
	I1104 12:10:22.632921   86402 logs.go:282] 0 containers: []
	W1104 12:10:22.632930   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:10:22.632935   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:10:22.633001   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:10:22.665194   86402 cri.go:89] found id: ""
	I1104 12:10:22.665218   86402 logs.go:282] 0 containers: []
	W1104 12:10:22.665245   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:10:22.665257   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:10:22.665272   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:10:22.717731   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:10:22.717763   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:10:22.732671   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:10:22.732698   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:10:22.823908   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:10:22.823946   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:10:22.823963   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:10:22.907812   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:10:22.907848   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:10:25.449308   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:10:25.461694   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:10:25.461751   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:10:25.493036   86402 cri.go:89] found id: ""
	I1104 12:10:25.493061   86402 logs.go:282] 0 containers: []
	W1104 12:10:25.493068   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:10:25.493075   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:10:25.493122   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:10:25.525084   86402 cri.go:89] found id: ""
	I1104 12:10:25.525116   86402 logs.go:282] 0 containers: []
	W1104 12:10:25.525128   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:10:25.525135   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:10:25.525196   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:10:25.561380   86402 cri.go:89] found id: ""
	I1104 12:10:25.561424   86402 logs.go:282] 0 containers: []
	W1104 12:10:25.561436   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:10:25.561444   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:10:25.561499   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:10:25.595429   86402 cri.go:89] found id: ""
	I1104 12:10:25.595453   86402 logs.go:282] 0 containers: []
	W1104 12:10:25.595468   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:10:25.595474   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:10:25.595521   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:10:25.627409   86402 cri.go:89] found id: ""
	I1104 12:10:25.627436   86402 logs.go:282] 0 containers: []
	W1104 12:10:25.627445   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:10:25.627450   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:10:25.627497   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:10:25.661048   86402 cri.go:89] found id: ""
	I1104 12:10:25.661073   86402 logs.go:282] 0 containers: []
	W1104 12:10:25.661082   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:10:25.661088   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:10:25.661135   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:10:25.698882   86402 cri.go:89] found id: ""
	I1104 12:10:25.698912   86402 logs.go:282] 0 containers: []
	W1104 12:10:25.698920   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:10:25.698926   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:10:25.698978   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:10:25.733355   86402 cri.go:89] found id: ""
	I1104 12:10:25.733397   86402 logs.go:282] 0 containers: []
	W1104 12:10:25.733409   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:10:25.733420   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:10:25.733435   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:10:25.784871   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:10:25.784908   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:10:25.798715   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:10:25.798740   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:10:25.870362   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:10:25.870383   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:10:25.870397   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:10:25.950565   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:10:25.950598   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:10:25.209496   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:27.706991   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:29.708209   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:26.055597   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:28.055845   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:30.056584   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:26.351027   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:28.851204   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:28.488258   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:10:28.506058   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:10:28.506114   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:10:28.566325   86402 cri.go:89] found id: ""
	I1104 12:10:28.566351   86402 logs.go:282] 0 containers: []
	W1104 12:10:28.566358   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:10:28.566364   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:10:28.566413   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:10:28.612753   86402 cri.go:89] found id: ""
	I1104 12:10:28.612781   86402 logs.go:282] 0 containers: []
	W1104 12:10:28.612790   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:10:28.612796   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:10:28.612854   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:10:28.647082   86402 cri.go:89] found id: ""
	I1104 12:10:28.647109   86402 logs.go:282] 0 containers: []
	W1104 12:10:28.647120   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:10:28.647128   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:10:28.647205   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:10:28.683197   86402 cri.go:89] found id: ""
	I1104 12:10:28.683227   86402 logs.go:282] 0 containers: []
	W1104 12:10:28.683239   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:10:28.683247   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:10:28.683299   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:10:28.718139   86402 cri.go:89] found id: ""
	I1104 12:10:28.718175   86402 logs.go:282] 0 containers: []
	W1104 12:10:28.718186   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:10:28.718194   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:10:28.718253   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:10:28.749689   86402 cri.go:89] found id: ""
	I1104 12:10:28.749721   86402 logs.go:282] 0 containers: []
	W1104 12:10:28.749732   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:10:28.749739   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:10:28.749803   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:10:28.786824   86402 cri.go:89] found id: ""
	I1104 12:10:28.786851   86402 logs.go:282] 0 containers: []
	W1104 12:10:28.786859   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:10:28.786864   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:10:28.786925   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:10:28.822833   86402 cri.go:89] found id: ""
	I1104 12:10:28.822856   86402 logs.go:282] 0 containers: []
	W1104 12:10:28.822865   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:10:28.822872   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:10:28.822884   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:10:28.835267   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:10:28.835298   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:10:28.900051   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:10:28.900076   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:10:28.900089   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:10:28.979867   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:10:28.979912   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:10:29.017294   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:10:29.017327   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:10:31.569559   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:10:31.582065   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:10:31.582136   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:10:31.614924   86402 cri.go:89] found id: ""
	I1104 12:10:31.614952   86402 logs.go:282] 0 containers: []
	W1104 12:10:31.614960   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:10:31.614966   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:10:31.615029   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:10:31.647178   86402 cri.go:89] found id: ""
	I1104 12:10:31.647204   86402 logs.go:282] 0 containers: []
	W1104 12:10:31.647212   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:10:31.647218   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:10:31.647277   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:10:31.678723   86402 cri.go:89] found id: ""
	I1104 12:10:31.678749   86402 logs.go:282] 0 containers: []
	W1104 12:10:31.678761   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:10:31.678769   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:10:31.678819   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:10:31.709787   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:34.208234   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:32.555978   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:34.557026   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:31.351700   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:33.850976   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:35.851636   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:31.713013   86402 cri.go:89] found id: ""
	I1104 12:10:31.713036   86402 logs.go:282] 0 containers: []
	W1104 12:10:31.713043   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:10:31.713048   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:10:31.713092   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:10:31.746564   86402 cri.go:89] found id: ""
	I1104 12:10:31.746591   86402 logs.go:282] 0 containers: []
	W1104 12:10:31.746600   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:10:31.746605   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:10:31.746658   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:10:31.779559   86402 cri.go:89] found id: ""
	I1104 12:10:31.779586   86402 logs.go:282] 0 containers: []
	W1104 12:10:31.779594   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:10:31.779601   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:10:31.779652   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:10:31.812047   86402 cri.go:89] found id: ""
	I1104 12:10:31.812076   86402 logs.go:282] 0 containers: []
	W1104 12:10:31.812087   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:10:31.812094   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:10:31.812163   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:10:31.845479   86402 cri.go:89] found id: ""
	I1104 12:10:31.845510   86402 logs.go:282] 0 containers: []
	W1104 12:10:31.845522   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:10:31.845532   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:10:31.845551   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:10:31.909399   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:10:31.909423   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:10:31.909434   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:10:31.985994   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:10:31.986031   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:10:32.023222   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:10:32.023255   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:10:32.074429   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:10:32.074467   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:10:34.588202   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:10:34.600925   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:10:34.600994   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:10:34.632718   86402 cri.go:89] found id: ""
	I1104 12:10:34.632743   86402 logs.go:282] 0 containers: []
	W1104 12:10:34.632754   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:10:34.632763   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:10:34.632813   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:10:34.665553   86402 cri.go:89] found id: ""
	I1104 12:10:34.665576   86402 logs.go:282] 0 containers: []
	W1104 12:10:34.665585   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:10:34.665590   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:10:34.665641   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:10:34.700059   86402 cri.go:89] found id: ""
	I1104 12:10:34.700081   86402 logs.go:282] 0 containers: []
	W1104 12:10:34.700089   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:10:34.700094   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:10:34.700141   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:10:34.732940   86402 cri.go:89] found id: ""
	I1104 12:10:34.732962   86402 logs.go:282] 0 containers: []
	W1104 12:10:34.732970   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:10:34.732978   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:10:34.733023   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:10:34.764580   86402 cri.go:89] found id: ""
	I1104 12:10:34.764610   86402 logs.go:282] 0 containers: []
	W1104 12:10:34.764618   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:10:34.764624   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:10:34.764680   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:10:34.798030   86402 cri.go:89] found id: ""
	I1104 12:10:34.798053   86402 logs.go:282] 0 containers: []
	W1104 12:10:34.798061   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:10:34.798067   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:10:34.798115   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:10:34.829847   86402 cri.go:89] found id: ""
	I1104 12:10:34.829876   86402 logs.go:282] 0 containers: []
	W1104 12:10:34.829884   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:10:34.829889   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:10:34.829946   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:10:34.862764   86402 cri.go:89] found id: ""
	I1104 12:10:34.862792   86402 logs.go:282] 0 containers: []
	W1104 12:10:34.862804   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:10:34.862815   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:10:34.862828   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:10:34.912367   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:10:34.912397   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:10:34.925347   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:10:34.925383   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:10:34.990459   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:10:34.990486   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:10:34.990502   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:10:35.066765   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:10:35.066796   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:10:36.706912   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:38.707144   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:37.056279   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:39.555433   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:38.349986   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:40.354694   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:37.602696   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:10:37.615041   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:10:37.615115   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:10:37.646872   86402 cri.go:89] found id: ""
	I1104 12:10:37.646900   86402 logs.go:282] 0 containers: []
	W1104 12:10:37.646911   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:10:37.646918   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:10:37.646977   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:10:37.679770   86402 cri.go:89] found id: ""
	I1104 12:10:37.679797   86402 logs.go:282] 0 containers: []
	W1104 12:10:37.679805   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:10:37.679810   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:10:37.679867   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:10:37.711693   86402 cri.go:89] found id: ""
	I1104 12:10:37.711720   86402 logs.go:282] 0 containers: []
	W1104 12:10:37.711733   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:10:37.711743   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:10:37.711803   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:10:37.746605   86402 cri.go:89] found id: ""
	I1104 12:10:37.746636   86402 logs.go:282] 0 containers: []
	W1104 12:10:37.746648   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:10:37.746656   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:10:37.746716   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:10:37.778983   86402 cri.go:89] found id: ""
	I1104 12:10:37.779010   86402 logs.go:282] 0 containers: []
	W1104 12:10:37.779020   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:10:37.779026   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:10:37.779086   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:10:37.813293   86402 cri.go:89] found id: ""
	I1104 12:10:37.813321   86402 logs.go:282] 0 containers: []
	W1104 12:10:37.813330   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:10:37.813335   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:10:37.813387   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:10:37.846181   86402 cri.go:89] found id: ""
	I1104 12:10:37.846209   86402 logs.go:282] 0 containers: []
	W1104 12:10:37.846219   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:10:37.846226   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:10:37.846287   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:10:37.877485   86402 cri.go:89] found id: ""
	I1104 12:10:37.877520   86402 logs.go:282] 0 containers: []
	W1104 12:10:37.877531   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:10:37.877541   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:10:37.877558   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:10:37.926704   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:10:37.926733   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:10:37.939771   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:10:37.939796   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:10:38.003762   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:10:38.003783   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:10:38.003800   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:10:38.085419   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:10:38.085456   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:10:40.625351   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:10:40.637380   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:10:40.637459   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:10:40.670274   86402 cri.go:89] found id: ""
	I1104 12:10:40.670303   86402 logs.go:282] 0 containers: []
	W1104 12:10:40.670315   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:10:40.670322   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:10:40.670382   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:10:40.703383   86402 cri.go:89] found id: ""
	I1104 12:10:40.703414   86402 logs.go:282] 0 containers: []
	W1104 12:10:40.703427   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:10:40.703434   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:10:40.703481   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:10:40.739549   86402 cri.go:89] found id: ""
	I1104 12:10:40.739576   86402 logs.go:282] 0 containers: []
	W1104 12:10:40.739586   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:10:40.739594   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:10:40.739651   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:10:40.775466   86402 cri.go:89] found id: ""
	I1104 12:10:40.775492   86402 logs.go:282] 0 containers: []
	W1104 12:10:40.775502   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:10:40.775513   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:10:40.775567   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:10:40.810486   86402 cri.go:89] found id: ""
	I1104 12:10:40.810515   86402 logs.go:282] 0 containers: []
	W1104 12:10:40.810525   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:10:40.810533   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:10:40.810593   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:10:40.844277   86402 cri.go:89] found id: ""
	I1104 12:10:40.844309   86402 logs.go:282] 0 containers: []
	W1104 12:10:40.844321   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:10:40.844329   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:10:40.844391   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:10:40.878699   86402 cri.go:89] found id: ""
	I1104 12:10:40.878728   86402 logs.go:282] 0 containers: []
	W1104 12:10:40.878739   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:10:40.878746   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:10:40.878804   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:10:40.913888   86402 cri.go:89] found id: ""
	I1104 12:10:40.913913   86402 logs.go:282] 0 containers: []
	W1104 12:10:40.913921   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:10:40.913929   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:10:40.913939   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:10:40.966854   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:10:40.966892   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:10:40.980483   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:10:40.980510   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:10:41.046059   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:10:41.046085   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:10:41.046100   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:10:41.129746   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:10:41.129779   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:10:40.707964   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:43.207804   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:42.057019   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:44.555947   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:42.850057   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:44.851467   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:43.667029   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:10:43.680024   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:10:43.680092   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:10:43.714185   86402 cri.go:89] found id: ""
	I1104 12:10:43.714218   86402 logs.go:282] 0 containers: []
	W1104 12:10:43.714227   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:10:43.714235   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:10:43.714294   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:10:43.749493   86402 cri.go:89] found id: ""
	I1104 12:10:43.749515   86402 logs.go:282] 0 containers: []
	W1104 12:10:43.749523   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:10:43.749529   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:10:43.749588   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:10:43.785400   86402 cri.go:89] found id: ""
	I1104 12:10:43.785426   86402 logs.go:282] 0 containers: []
	W1104 12:10:43.785437   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:10:43.785444   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:10:43.785507   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:10:43.818465   86402 cri.go:89] found id: ""
	I1104 12:10:43.818505   86402 logs.go:282] 0 containers: []
	W1104 12:10:43.818517   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:10:43.818524   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:10:43.818573   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:10:43.850232   86402 cri.go:89] found id: ""
	I1104 12:10:43.850262   86402 logs.go:282] 0 containers: []
	W1104 12:10:43.850272   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:10:43.850279   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:10:43.850337   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:10:43.882806   86402 cri.go:89] found id: ""
	I1104 12:10:43.882840   86402 logs.go:282] 0 containers: []
	W1104 12:10:43.882851   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:10:43.882859   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:10:43.882920   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:10:43.919449   86402 cri.go:89] found id: ""
	I1104 12:10:43.919476   86402 logs.go:282] 0 containers: []
	W1104 12:10:43.919486   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:10:43.919493   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:10:43.919556   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:10:43.953761   86402 cri.go:89] found id: ""
	I1104 12:10:43.953791   86402 logs.go:282] 0 containers: []
	W1104 12:10:43.953801   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:10:43.953812   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:10:43.953825   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:10:44.005559   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:10:44.005594   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:10:44.019431   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:10:44.019456   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:10:44.094436   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:10:44.094457   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:10:44.094470   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:10:44.174026   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:10:44.174061   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:10:45.707449   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:47.709901   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:46.557050   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:48.557552   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:46.851720   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:49.350269   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:46.712021   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:10:46.724258   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:10:46.724318   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:10:46.754472   86402 cri.go:89] found id: ""
	I1104 12:10:46.754501   86402 logs.go:282] 0 containers: []
	W1104 12:10:46.754510   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:10:46.754515   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:10:46.754563   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:10:46.790184   86402 cri.go:89] found id: ""
	I1104 12:10:46.790209   86402 logs.go:282] 0 containers: []
	W1104 12:10:46.790219   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:10:46.790226   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:10:46.790284   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:10:46.824840   86402 cri.go:89] found id: ""
	I1104 12:10:46.824865   86402 logs.go:282] 0 containers: []
	W1104 12:10:46.824875   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:10:46.824882   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:10:46.824952   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:10:46.857295   86402 cri.go:89] found id: ""
	I1104 12:10:46.857329   86402 logs.go:282] 0 containers: []
	W1104 12:10:46.857360   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:10:46.857369   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:10:46.857430   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:10:46.889540   86402 cri.go:89] found id: ""
	I1104 12:10:46.889571   86402 logs.go:282] 0 containers: []
	W1104 12:10:46.889582   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:10:46.889588   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:10:46.889652   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:10:46.930165   86402 cri.go:89] found id: ""
	I1104 12:10:46.930195   86402 logs.go:282] 0 containers: []
	W1104 12:10:46.930204   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:10:46.930210   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:10:46.930266   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:10:46.965964   86402 cri.go:89] found id: ""
	I1104 12:10:46.965994   86402 logs.go:282] 0 containers: []
	W1104 12:10:46.966006   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:10:46.966013   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:10:46.966060   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:10:47.002700   86402 cri.go:89] found id: ""
	I1104 12:10:47.002732   86402 logs.go:282] 0 containers: []
	W1104 12:10:47.002741   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:10:47.002749   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:10:47.002760   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:10:47.056362   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:10:47.056392   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:10:47.070447   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:10:47.070472   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:10:47.143207   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:10:47.143240   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:10:47.143256   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:10:47.223985   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:10:47.224015   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:10:49.765870   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:10:49.778288   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:10:49.778352   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:10:49.812012   86402 cri.go:89] found id: ""
	I1104 12:10:49.812044   86402 logs.go:282] 0 containers: []
	W1104 12:10:49.812054   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:10:49.812064   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:10:49.812115   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:10:49.847260   86402 cri.go:89] found id: ""
	I1104 12:10:49.847290   86402 logs.go:282] 0 containers: []
	W1104 12:10:49.847301   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:10:49.847308   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:10:49.847361   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:10:49.877397   86402 cri.go:89] found id: ""
	I1104 12:10:49.877419   86402 logs.go:282] 0 containers: []
	W1104 12:10:49.877427   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:10:49.877432   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:10:49.877486   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:10:49.912453   86402 cri.go:89] found id: ""
	I1104 12:10:49.912484   86402 logs.go:282] 0 containers: []
	W1104 12:10:49.912499   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:10:49.912506   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:10:49.912572   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:10:49.948374   86402 cri.go:89] found id: ""
	I1104 12:10:49.948404   86402 logs.go:282] 0 containers: []
	W1104 12:10:49.948416   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:10:49.948422   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:10:49.948488   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:10:49.982190   86402 cri.go:89] found id: ""
	I1104 12:10:49.982216   86402 logs.go:282] 0 containers: []
	W1104 12:10:49.982228   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:10:49.982236   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:10:49.982294   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:10:50.014396   86402 cri.go:89] found id: ""
	I1104 12:10:50.014426   86402 logs.go:282] 0 containers: []
	W1104 12:10:50.014437   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:10:50.014445   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:10:50.014507   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:10:50.051770   86402 cri.go:89] found id: ""
	I1104 12:10:50.051793   86402 logs.go:282] 0 containers: []
	W1104 12:10:50.051801   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:10:50.051809   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:10:50.051820   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:10:50.116158   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:10:50.116185   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:10:50.116202   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:10:50.194382   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:10:50.194431   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:10:50.235957   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:10:50.235983   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:10:50.290720   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:10:50.290750   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:10:50.207837   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:52.207972   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:54.208026   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:51.055965   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:53.056014   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:55.056318   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:51.850513   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:54.351193   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:52.805144   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:10:52.817686   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:10:52.817753   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:10:52.852470   86402 cri.go:89] found id: ""
	I1104 12:10:52.852492   86402 logs.go:282] 0 containers: []
	W1104 12:10:52.852546   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:10:52.852559   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:10:52.852603   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:10:52.889682   86402 cri.go:89] found id: ""
	I1104 12:10:52.889705   86402 logs.go:282] 0 containers: []
	W1104 12:10:52.889714   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:10:52.889720   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:10:52.889773   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:10:52.924490   86402 cri.go:89] found id: ""
	I1104 12:10:52.924525   86402 logs.go:282] 0 containers: []
	W1104 12:10:52.924537   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:10:52.924544   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:10:52.924604   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:10:52.957055   86402 cri.go:89] found id: ""
	I1104 12:10:52.957085   86402 logs.go:282] 0 containers: []
	W1104 12:10:52.957094   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:10:52.957099   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:10:52.957143   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:10:52.993379   86402 cri.go:89] found id: ""
	I1104 12:10:52.993411   86402 logs.go:282] 0 containers: []
	W1104 12:10:52.993423   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:10:52.993430   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:10:52.993493   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:10:53.027365   86402 cri.go:89] found id: ""
	I1104 12:10:53.027398   86402 logs.go:282] 0 containers: []
	W1104 12:10:53.027407   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:10:53.027412   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:10:53.027488   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:10:53.061048   86402 cri.go:89] found id: ""
	I1104 12:10:53.061074   86402 logs.go:282] 0 containers: []
	W1104 12:10:53.061082   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:10:53.061089   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:10:53.061163   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:10:53.101867   86402 cri.go:89] found id: ""
	I1104 12:10:53.101894   86402 logs.go:282] 0 containers: []
	W1104 12:10:53.101904   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:10:53.101915   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:10:53.101927   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:10:53.152314   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:10:53.152351   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:10:53.165630   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:10:53.165657   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:10:53.239717   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:10:53.239739   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:10:53.239753   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:10:53.318140   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:10:53.318186   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:10:55.857443   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:10:55.869524   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:10:55.869608   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:10:55.900719   86402 cri.go:89] found id: ""
	I1104 12:10:55.900743   86402 logs.go:282] 0 containers: []
	W1104 12:10:55.900753   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:10:55.900761   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:10:55.900821   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:10:55.932699   86402 cri.go:89] found id: ""
	I1104 12:10:55.932724   86402 logs.go:282] 0 containers: []
	W1104 12:10:55.932734   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:10:55.932741   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:10:55.932798   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:10:55.964729   86402 cri.go:89] found id: ""
	I1104 12:10:55.964758   86402 logs.go:282] 0 containers: []
	W1104 12:10:55.964767   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:10:55.964775   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:10:55.964823   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:10:55.997870   86402 cri.go:89] found id: ""
	I1104 12:10:55.997897   86402 logs.go:282] 0 containers: []
	W1104 12:10:55.997907   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:10:55.997915   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:10:55.997977   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:10:56.031707   86402 cri.go:89] found id: ""
	I1104 12:10:56.031736   86402 logs.go:282] 0 containers: []
	W1104 12:10:56.031744   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:10:56.031749   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:10:56.031805   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:10:56.070839   86402 cri.go:89] found id: ""
	I1104 12:10:56.070863   86402 logs.go:282] 0 containers: []
	W1104 12:10:56.070871   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:10:56.070877   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:10:56.070922   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:10:56.109364   86402 cri.go:89] found id: ""
	I1104 12:10:56.109393   86402 logs.go:282] 0 containers: []
	W1104 12:10:56.109404   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:10:56.109412   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:10:56.109474   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:10:56.143369   86402 cri.go:89] found id: ""
	I1104 12:10:56.143402   86402 logs.go:282] 0 containers: []
	W1104 12:10:56.143414   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:10:56.143424   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:10:56.143437   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:10:56.156924   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:10:56.156952   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:10:56.223624   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:10:56.223647   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:10:56.223659   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:10:56.302040   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:10:56.302082   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:10:56.343102   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:10:56.343150   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:10:56.209085   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:58.712250   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:57.056463   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:59.555744   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:56.850242   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:58.850955   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:10:58.896551   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:10:58.909034   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:10:58.909110   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:10:58.944520   86402 cri.go:89] found id: ""
	I1104 12:10:58.944550   86402 logs.go:282] 0 containers: []
	W1104 12:10:58.944559   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:10:58.944565   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:10:58.944612   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:10:58.980137   86402 cri.go:89] found id: ""
	I1104 12:10:58.980167   86402 logs.go:282] 0 containers: []
	W1104 12:10:58.980176   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:10:58.980181   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:10:58.980231   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:10:59.014505   86402 cri.go:89] found id: ""
	I1104 12:10:59.014536   86402 logs.go:282] 0 containers: []
	W1104 12:10:59.014545   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:10:59.014551   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:10:59.014602   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:10:59.050616   86402 cri.go:89] found id: ""
	I1104 12:10:59.050642   86402 logs.go:282] 0 containers: []
	W1104 12:10:59.050652   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:10:59.050659   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:10:59.050718   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:10:59.084328   86402 cri.go:89] found id: ""
	I1104 12:10:59.084358   86402 logs.go:282] 0 containers: []
	W1104 12:10:59.084369   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:10:59.084376   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:10:59.084449   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:10:59.116607   86402 cri.go:89] found id: ""
	I1104 12:10:59.116633   86402 logs.go:282] 0 containers: []
	W1104 12:10:59.116642   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:10:59.116649   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:10:59.116711   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:10:59.149727   86402 cri.go:89] found id: ""
	I1104 12:10:59.149754   86402 logs.go:282] 0 containers: []
	W1104 12:10:59.149765   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:10:59.149773   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:10:59.149832   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:10:59.182992   86402 cri.go:89] found id: ""
	I1104 12:10:59.183023   86402 logs.go:282] 0 containers: []
	W1104 12:10:59.183035   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:10:59.183045   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:10:59.183059   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:10:59.234826   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:10:59.234862   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:10:59.248401   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:10:59.248427   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:10:59.317143   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:10:59.317171   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:10:59.317186   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:10:59.397294   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:10:59.397336   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:11:01.208022   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:03.707297   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:01.556680   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:04.055902   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:01.350865   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:03.850510   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:01.933617   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:11:01.946458   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:11:01.946537   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:11:01.981652   86402 cri.go:89] found id: ""
	I1104 12:11:01.981682   86402 logs.go:282] 0 containers: []
	W1104 12:11:01.981693   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:11:01.981701   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:11:01.981757   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:11:02.014245   86402 cri.go:89] found id: ""
	I1104 12:11:02.014273   86402 logs.go:282] 0 containers: []
	W1104 12:11:02.014282   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:11:02.014287   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:11:02.014350   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:11:02.047386   86402 cri.go:89] found id: ""
	I1104 12:11:02.047409   86402 logs.go:282] 0 containers: []
	W1104 12:11:02.047420   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:11:02.047427   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:11:02.047488   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:11:02.086427   86402 cri.go:89] found id: ""
	I1104 12:11:02.086464   86402 logs.go:282] 0 containers: []
	W1104 12:11:02.086475   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:11:02.086483   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:11:02.086544   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:11:02.120219   86402 cri.go:89] found id: ""
	I1104 12:11:02.120246   86402 logs.go:282] 0 containers: []
	W1104 12:11:02.120255   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:11:02.120260   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:11:02.120318   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:11:02.153832   86402 cri.go:89] found id: ""
	I1104 12:11:02.153864   86402 logs.go:282] 0 containers: []
	W1104 12:11:02.153876   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:11:02.153884   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:11:02.153950   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:11:02.186237   86402 cri.go:89] found id: ""
	I1104 12:11:02.186266   86402 logs.go:282] 0 containers: []
	W1104 12:11:02.186278   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:11:02.186285   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:11:02.186351   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:11:02.219238   86402 cri.go:89] found id: ""
	I1104 12:11:02.219269   86402 logs.go:282] 0 containers: []
	W1104 12:11:02.219280   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:11:02.219290   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:11:02.219301   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:11:02.301062   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:11:02.301099   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:11:02.358585   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:11:02.358617   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:11:02.414153   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:11:02.414200   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:11:02.428429   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:11:02.428456   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:11:02.497040   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:11:04.998089   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:11:05.010890   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:11:05.010947   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:11:05.046483   86402 cri.go:89] found id: ""
	I1104 12:11:05.046513   86402 logs.go:282] 0 containers: []
	W1104 12:11:05.046523   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:11:05.046534   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:11:05.046594   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:11:05.079487   86402 cri.go:89] found id: ""
	I1104 12:11:05.079516   86402 logs.go:282] 0 containers: []
	W1104 12:11:05.079527   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:11:05.079535   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:11:05.079595   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:11:05.110968   86402 cri.go:89] found id: ""
	I1104 12:11:05.110997   86402 logs.go:282] 0 containers: []
	W1104 12:11:05.111004   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:11:05.111010   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:11:05.111057   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:11:05.143372   86402 cri.go:89] found id: ""
	I1104 12:11:05.143398   86402 logs.go:282] 0 containers: []
	W1104 12:11:05.143408   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:11:05.143415   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:11:05.143484   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:11:05.174691   86402 cri.go:89] found id: ""
	I1104 12:11:05.174717   86402 logs.go:282] 0 containers: []
	W1104 12:11:05.174730   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:11:05.174737   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:11:05.174802   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:11:05.210005   86402 cri.go:89] found id: ""
	I1104 12:11:05.210025   86402 logs.go:282] 0 containers: []
	W1104 12:11:05.210033   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:11:05.210041   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:11:05.210085   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:11:05.244874   86402 cri.go:89] found id: ""
	I1104 12:11:05.244899   86402 logs.go:282] 0 containers: []
	W1104 12:11:05.244908   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:11:05.244913   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:11:05.244956   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:11:05.276517   86402 cri.go:89] found id: ""
	I1104 12:11:05.276547   86402 logs.go:282] 0 containers: []
	W1104 12:11:05.276557   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:11:05.276568   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:11:05.276581   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:11:05.354057   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:11:05.354087   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:11:05.390848   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:11:05.390887   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:11:05.442659   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:11:05.442692   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:11:05.456290   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:11:05.456315   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:11:05.530310   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:11:06.207301   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:08.208333   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:06.056314   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:08.556910   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:06.350241   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:08.350774   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:10.351274   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:08.030545   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:11:08.043598   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:11:08.043654   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:11:08.081604   86402 cri.go:89] found id: ""
	I1104 12:11:08.081634   86402 logs.go:282] 0 containers: []
	W1104 12:11:08.081644   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:11:08.081652   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:11:08.081712   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:11:08.135357   86402 cri.go:89] found id: ""
	I1104 12:11:08.135388   86402 logs.go:282] 0 containers: []
	W1104 12:11:08.135398   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:11:08.135405   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:11:08.135470   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:11:08.173275   86402 cri.go:89] found id: ""
	I1104 12:11:08.173298   86402 logs.go:282] 0 containers: []
	W1104 12:11:08.173306   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:11:08.173311   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:11:08.173371   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:11:08.213415   86402 cri.go:89] found id: ""
	I1104 12:11:08.213439   86402 logs.go:282] 0 containers: []
	W1104 12:11:08.213448   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:11:08.213454   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:11:08.213507   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:11:08.244759   86402 cri.go:89] found id: ""
	I1104 12:11:08.244791   86402 logs.go:282] 0 containers: []
	W1104 12:11:08.244802   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:11:08.244809   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:11:08.244870   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:11:08.276643   86402 cri.go:89] found id: ""
	I1104 12:11:08.276666   86402 logs.go:282] 0 containers: []
	W1104 12:11:08.276675   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:11:08.276682   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:11:08.276751   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:11:08.308425   86402 cri.go:89] found id: ""
	I1104 12:11:08.308451   86402 logs.go:282] 0 containers: []
	W1104 12:11:08.308462   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:11:08.308469   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:11:08.308527   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:11:08.340645   86402 cri.go:89] found id: ""
	I1104 12:11:08.340675   86402 logs.go:282] 0 containers: []
	W1104 12:11:08.340687   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:11:08.340698   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:11:08.340712   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:11:08.413171   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:11:08.413196   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:11:08.413214   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:11:08.496208   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:11:08.496246   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:11:08.534527   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:11:08.534560   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:11:08.583515   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:11:08.583550   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:11:11.099000   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:11:11.112158   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:11:11.112236   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:11:11.145718   86402 cri.go:89] found id: ""
	I1104 12:11:11.145748   86402 logs.go:282] 0 containers: []
	W1104 12:11:11.145758   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:11:11.145765   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:11:11.145958   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:11:11.177270   86402 cri.go:89] found id: ""
	I1104 12:11:11.177301   86402 logs.go:282] 0 containers: []
	W1104 12:11:11.177317   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:11:11.177325   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:11:11.177396   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:11:11.209696   86402 cri.go:89] found id: ""
	I1104 12:11:11.209722   86402 logs.go:282] 0 containers: []
	W1104 12:11:11.209737   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:11:11.209742   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:11:11.209789   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:11:11.244034   86402 cri.go:89] found id: ""
	I1104 12:11:11.244061   86402 logs.go:282] 0 containers: []
	W1104 12:11:11.244069   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:11:11.244078   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:11:11.244135   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:11:11.276437   86402 cri.go:89] found id: ""
	I1104 12:11:11.276462   86402 logs.go:282] 0 containers: []
	W1104 12:11:11.276470   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:11:11.276476   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:11:11.276530   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:11:11.308954   86402 cri.go:89] found id: ""
	I1104 12:11:11.308980   86402 logs.go:282] 0 containers: []
	W1104 12:11:11.308988   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:11:11.308994   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:11:11.309057   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:11:11.342175   86402 cri.go:89] found id: ""
	I1104 12:11:11.342199   86402 logs.go:282] 0 containers: []
	W1104 12:11:11.342207   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:11:11.342211   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:11:11.342266   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:11:11.374810   86402 cri.go:89] found id: ""
	I1104 12:11:11.374839   86402 logs.go:282] 0 containers: []
	W1104 12:11:11.374851   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:11:11.374860   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:11:11.374875   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:11:11.443638   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:11:11.443667   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:11:11.443681   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:11:11.526996   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:11:11.527031   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:11:11.568297   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:11:11.568325   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:11:11.616229   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:11:11.616264   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:11:10.707934   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:12.708053   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:11.055469   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:13.055645   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:15.057348   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:12.851253   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:15.350857   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:14.130707   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:11:14.143045   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:11:14.143116   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:11:14.185422   86402 cri.go:89] found id: ""
	I1104 12:11:14.185461   86402 logs.go:282] 0 containers: []
	W1104 12:11:14.185471   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:11:14.185477   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:11:14.185525   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:11:14.219890   86402 cri.go:89] found id: ""
	I1104 12:11:14.219918   86402 logs.go:282] 0 containers: []
	W1104 12:11:14.219928   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:11:14.219938   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:11:14.219985   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:11:14.253256   86402 cri.go:89] found id: ""
	I1104 12:11:14.253286   86402 logs.go:282] 0 containers: []
	W1104 12:11:14.253296   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:11:14.253304   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:11:14.253364   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:11:14.286228   86402 cri.go:89] found id: ""
	I1104 12:11:14.286259   86402 logs.go:282] 0 containers: []
	W1104 12:11:14.286271   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:11:14.286279   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:11:14.286342   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:11:14.317065   86402 cri.go:89] found id: ""
	I1104 12:11:14.317091   86402 logs.go:282] 0 containers: []
	W1104 12:11:14.317101   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:11:14.317106   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:11:14.317168   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:11:14.348540   86402 cri.go:89] found id: ""
	I1104 12:11:14.348575   86402 logs.go:282] 0 containers: []
	W1104 12:11:14.348583   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:11:14.348589   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:11:14.348647   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:11:14.380824   86402 cri.go:89] found id: ""
	I1104 12:11:14.380849   86402 logs.go:282] 0 containers: []
	W1104 12:11:14.380858   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:11:14.380863   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:11:14.380924   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:11:14.413757   86402 cri.go:89] found id: ""
	I1104 12:11:14.413785   86402 logs.go:282] 0 containers: []
	W1104 12:11:14.413796   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:11:14.413806   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:11:14.413822   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:11:14.479311   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:11:14.479336   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:11:14.479349   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:11:14.572923   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:11:14.572959   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:11:14.620277   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:11:14.620359   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:11:14.674276   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:11:14.674310   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:11:15.208704   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:17.708523   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:17.555941   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:19.556233   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:17.351751   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:19.851087   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:17.187062   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:11:17.200179   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:11:17.200260   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:11:17.232208   86402 cri.go:89] found id: ""
	I1104 12:11:17.232231   86402 logs.go:282] 0 containers: []
	W1104 12:11:17.232238   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:11:17.232244   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:11:17.232298   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:11:17.266224   86402 cri.go:89] found id: ""
	I1104 12:11:17.266248   86402 logs.go:282] 0 containers: []
	W1104 12:11:17.266257   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:11:17.266262   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:11:17.266320   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:11:17.301909   86402 cri.go:89] found id: ""
	I1104 12:11:17.301940   86402 logs.go:282] 0 containers: []
	W1104 12:11:17.301948   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:11:17.301953   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:11:17.302005   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:11:17.339493   86402 cri.go:89] found id: ""
	I1104 12:11:17.339517   86402 logs.go:282] 0 containers: []
	W1104 12:11:17.339530   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:11:17.339537   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:11:17.339600   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:11:17.373879   86402 cri.go:89] found id: ""
	I1104 12:11:17.373927   86402 logs.go:282] 0 containers: []
	W1104 12:11:17.373938   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:11:17.373945   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:11:17.373996   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:11:17.405533   86402 cri.go:89] found id: ""
	I1104 12:11:17.405562   86402 logs.go:282] 0 containers: []
	W1104 12:11:17.405573   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:11:17.405583   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:11:17.405645   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:11:17.439421   86402 cri.go:89] found id: ""
	I1104 12:11:17.439451   86402 logs.go:282] 0 containers: []
	W1104 12:11:17.439460   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:11:17.439468   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:11:17.439532   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:11:17.474573   86402 cri.go:89] found id: ""
	I1104 12:11:17.474602   86402 logs.go:282] 0 containers: []
	W1104 12:11:17.474613   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:11:17.474623   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:11:17.474636   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:11:17.524497   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:11:17.524536   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:11:17.538421   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:11:17.538460   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:11:17.607299   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:11:17.607323   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:11:17.607337   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:11:17.684181   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:11:17.684224   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:11:20.223600   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:11:20.237793   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:11:20.237865   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:11:20.279656   86402 cri.go:89] found id: ""
	I1104 12:11:20.279682   86402 logs.go:282] 0 containers: []
	W1104 12:11:20.279693   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:11:20.279700   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:11:20.279767   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:11:20.337980   86402 cri.go:89] found id: ""
	I1104 12:11:20.338009   86402 logs.go:282] 0 containers: []
	W1104 12:11:20.338020   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:11:20.338027   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:11:20.338087   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:11:20.383183   86402 cri.go:89] found id: ""
	I1104 12:11:20.383217   86402 logs.go:282] 0 containers: []
	W1104 12:11:20.383226   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:11:20.383231   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:11:20.383282   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:11:20.416470   86402 cri.go:89] found id: ""
	I1104 12:11:20.416495   86402 logs.go:282] 0 containers: []
	W1104 12:11:20.416505   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:11:20.416512   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:11:20.416570   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:11:20.451968   86402 cri.go:89] found id: ""
	I1104 12:11:20.452000   86402 logs.go:282] 0 containers: []
	W1104 12:11:20.452011   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:11:20.452017   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:11:20.452074   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:11:20.484800   86402 cri.go:89] found id: ""
	I1104 12:11:20.484823   86402 logs.go:282] 0 containers: []
	W1104 12:11:20.484831   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:11:20.484837   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:11:20.484893   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:11:20.516263   86402 cri.go:89] found id: ""
	I1104 12:11:20.516292   86402 logs.go:282] 0 containers: []
	W1104 12:11:20.516300   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:11:20.516306   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:11:20.516364   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:11:20.548616   86402 cri.go:89] found id: ""
	I1104 12:11:20.548640   86402 logs.go:282] 0 containers: []
	W1104 12:11:20.548651   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:11:20.548661   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:11:20.548674   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:11:20.599338   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:11:20.599368   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:11:20.613116   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:11:20.613148   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:11:20.678898   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:11:20.678924   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:11:20.678936   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:11:20.757570   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:11:20.757606   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:11:20.206649   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:22.207379   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:24.207579   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:22.056670   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:24.555101   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:22.350891   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:24.351318   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:23.293912   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:11:23.307037   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:11:23.307110   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:11:23.341161   86402 cri.go:89] found id: ""
	I1104 12:11:23.341186   86402 logs.go:282] 0 containers: []
	W1104 12:11:23.341195   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:11:23.341200   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:11:23.341277   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:11:23.373462   86402 cri.go:89] found id: ""
	I1104 12:11:23.373491   86402 logs.go:282] 0 containers: []
	W1104 12:11:23.373503   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:11:23.373510   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:11:23.373568   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:11:23.404439   86402 cri.go:89] found id: ""
	I1104 12:11:23.404471   86402 logs.go:282] 0 containers: []
	W1104 12:11:23.404482   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:11:23.404489   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:11:23.404548   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:11:23.435224   86402 cri.go:89] found id: ""
	I1104 12:11:23.435256   86402 logs.go:282] 0 containers: []
	W1104 12:11:23.435267   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:11:23.435274   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:11:23.435336   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:11:23.472593   86402 cri.go:89] found id: ""
	I1104 12:11:23.472622   86402 logs.go:282] 0 containers: []
	W1104 12:11:23.472633   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:11:23.472641   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:11:23.472693   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:11:23.503413   86402 cri.go:89] found id: ""
	I1104 12:11:23.503438   86402 logs.go:282] 0 containers: []
	W1104 12:11:23.503447   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:11:23.503454   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:11:23.503516   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:11:23.537582   86402 cri.go:89] found id: ""
	I1104 12:11:23.537610   86402 logs.go:282] 0 containers: []
	W1104 12:11:23.537621   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:11:23.537628   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:11:23.537689   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:11:23.573799   86402 cri.go:89] found id: ""
	I1104 12:11:23.573824   86402 logs.go:282] 0 containers: []
	W1104 12:11:23.573831   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:11:23.573838   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:11:23.573851   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:11:23.649239   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:11:23.649273   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:11:23.686518   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:11:23.686548   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:11:23.738955   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:11:23.738987   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:11:23.751909   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:11:23.751935   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:11:23.827244   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:11:26.327902   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:11:26.339708   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:11:26.339784   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:11:26.369615   86402 cri.go:89] found id: ""
	I1104 12:11:26.369644   86402 logs.go:282] 0 containers: []
	W1104 12:11:26.369653   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:11:26.369659   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:11:26.369715   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:11:26.402027   86402 cri.go:89] found id: ""
	I1104 12:11:26.402056   86402 logs.go:282] 0 containers: []
	W1104 12:11:26.402065   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:11:26.402070   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:11:26.402123   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:11:26.433483   86402 cri.go:89] found id: ""
	I1104 12:11:26.433512   86402 logs.go:282] 0 containers: []
	W1104 12:11:26.433523   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:11:26.433529   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:11:26.433637   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:11:26.466403   86402 cri.go:89] found id: ""
	I1104 12:11:26.466442   86402 logs.go:282] 0 containers: []
	W1104 12:11:26.466453   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:11:26.466468   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:11:26.466524   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:11:26.499818   86402 cri.go:89] found id: ""
	I1104 12:11:26.499853   86402 logs.go:282] 0 containers: []
	W1104 12:11:26.499864   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:11:26.499871   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:11:26.499930   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:11:26.537782   86402 cri.go:89] found id: ""
	I1104 12:11:26.537809   86402 logs.go:282] 0 containers: []
	W1104 12:11:26.537822   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:11:26.537830   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:11:26.537890   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:11:26.574091   86402 cri.go:89] found id: ""
	I1104 12:11:26.574120   86402 logs.go:282] 0 containers: []
	W1104 12:11:26.574131   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:11:26.574138   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:11:26.574199   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:11:26.607554   86402 cri.go:89] found id: ""
	I1104 12:11:26.607584   86402 logs.go:282] 0 containers: []
	W1104 12:11:26.607596   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:11:26.607606   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:11:26.607620   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:11:26.657405   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:11:26.657443   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:11:26.670022   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:11:26.670046   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1104 12:11:26.707958   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:29.207380   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:26.556568   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:28.557276   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:26.852761   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:29.351303   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	W1104 12:11:26.736238   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:11:26.736266   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:11:26.736278   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:11:26.816277   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:11:26.816309   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:11:29.357639   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:11:29.371116   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:11:29.371204   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:11:29.405569   86402 cri.go:89] found id: ""
	I1104 12:11:29.405595   86402 logs.go:282] 0 containers: []
	W1104 12:11:29.405604   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:11:29.405611   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:11:29.405668   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:11:29.435669   86402 cri.go:89] found id: ""
	I1104 12:11:29.435697   86402 logs.go:282] 0 containers: []
	W1104 12:11:29.435709   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:11:29.435716   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:11:29.435781   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:11:29.476208   86402 cri.go:89] found id: ""
	I1104 12:11:29.476236   86402 logs.go:282] 0 containers: []
	W1104 12:11:29.476245   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:11:29.476251   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:11:29.476305   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:11:29.511446   86402 cri.go:89] found id: ""
	I1104 12:11:29.511474   86402 logs.go:282] 0 containers: []
	W1104 12:11:29.511483   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:11:29.511489   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:11:29.511541   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:11:29.543714   86402 cri.go:89] found id: ""
	I1104 12:11:29.543742   86402 logs.go:282] 0 containers: []
	W1104 12:11:29.543754   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:11:29.543761   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:11:29.543840   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:11:29.577429   86402 cri.go:89] found id: ""
	I1104 12:11:29.577456   86402 logs.go:282] 0 containers: []
	W1104 12:11:29.577466   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:11:29.577473   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:11:29.577534   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:11:29.608430   86402 cri.go:89] found id: ""
	I1104 12:11:29.608457   86402 logs.go:282] 0 containers: []
	W1104 12:11:29.608475   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:11:29.608483   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:11:29.608539   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:11:29.640029   86402 cri.go:89] found id: ""
	I1104 12:11:29.640057   86402 logs.go:282] 0 containers: []
	W1104 12:11:29.640068   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:11:29.640078   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:11:29.640092   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:11:29.691170   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:11:29.691202   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:11:29.704949   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:11:29.704987   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:11:29.766856   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:11:29.766884   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:11:29.766898   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:11:29.847487   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:11:29.847525   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:11:31.208725   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:33.709593   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:30.557500   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:33.056569   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:31.851101   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:34.350356   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:32.382925   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:11:32.395889   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:11:32.395943   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:11:32.428711   86402 cri.go:89] found id: ""
	I1104 12:11:32.428736   86402 logs.go:282] 0 containers: []
	W1104 12:11:32.428749   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:11:32.428755   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:11:32.428810   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:11:32.463269   86402 cri.go:89] found id: ""
	I1104 12:11:32.463295   86402 logs.go:282] 0 containers: []
	W1104 12:11:32.463307   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:11:32.463313   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:11:32.463372   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:11:32.496098   86402 cri.go:89] found id: ""
	I1104 12:11:32.496125   86402 logs.go:282] 0 containers: []
	W1104 12:11:32.496135   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:11:32.496142   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:11:32.496213   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:11:32.528729   86402 cri.go:89] found id: ""
	I1104 12:11:32.528760   86402 logs.go:282] 0 containers: []
	W1104 12:11:32.528771   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:11:32.528778   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:11:32.528860   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:11:32.567290   86402 cri.go:89] found id: ""
	I1104 12:11:32.567321   86402 logs.go:282] 0 containers: []
	W1104 12:11:32.567332   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:11:32.567338   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:11:32.567397   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:11:32.608932   86402 cri.go:89] found id: ""
	I1104 12:11:32.608962   86402 logs.go:282] 0 containers: []
	W1104 12:11:32.608973   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:11:32.608980   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:11:32.609037   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:11:32.641128   86402 cri.go:89] found id: ""
	I1104 12:11:32.641155   86402 logs.go:282] 0 containers: []
	W1104 12:11:32.641164   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:11:32.641171   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:11:32.641239   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:11:32.675651   86402 cri.go:89] found id: ""
	I1104 12:11:32.675682   86402 logs.go:282] 0 containers: []
	W1104 12:11:32.675694   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:11:32.675704   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:11:32.675719   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:11:32.742369   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:11:32.742406   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:11:32.742419   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:11:32.823371   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:11:32.823412   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:11:32.862243   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:11:32.862270   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:11:32.910961   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:11:32.910987   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:11:35.425742   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:11:35.438553   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:11:35.438615   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:11:35.475160   86402 cri.go:89] found id: ""
	I1104 12:11:35.475189   86402 logs.go:282] 0 containers: []
	W1104 12:11:35.475201   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:11:35.475209   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:11:35.475267   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:11:35.517193   86402 cri.go:89] found id: ""
	I1104 12:11:35.517239   86402 logs.go:282] 0 containers: []
	W1104 12:11:35.517252   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:11:35.517260   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:11:35.517329   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:11:35.552941   86402 cri.go:89] found id: ""
	I1104 12:11:35.552967   86402 logs.go:282] 0 containers: []
	W1104 12:11:35.552978   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:11:35.552985   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:11:35.553056   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:11:35.589960   86402 cri.go:89] found id: ""
	I1104 12:11:35.589983   86402 logs.go:282] 0 containers: []
	W1104 12:11:35.589994   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:11:35.590001   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:11:35.590063   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:11:35.624546   86402 cri.go:89] found id: ""
	I1104 12:11:35.624575   86402 logs.go:282] 0 containers: []
	W1104 12:11:35.624587   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:11:35.624595   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:11:35.624655   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:11:35.657855   86402 cri.go:89] found id: ""
	I1104 12:11:35.657885   86402 logs.go:282] 0 containers: []
	W1104 12:11:35.657896   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:11:35.657903   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:11:35.657957   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:11:35.691465   86402 cri.go:89] found id: ""
	I1104 12:11:35.691498   86402 logs.go:282] 0 containers: []
	W1104 12:11:35.691509   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:11:35.691516   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:11:35.691587   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:11:35.727520   86402 cri.go:89] found id: ""
	I1104 12:11:35.727548   86402 logs.go:282] 0 containers: []
	W1104 12:11:35.727558   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:11:35.727569   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:11:35.727584   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:11:35.777876   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:11:35.777912   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:11:35.790790   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:11:35.790817   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:11:35.856780   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:11:35.856805   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:11:35.856819   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:11:35.936769   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:11:35.936812   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:11:36.207096   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:38.707776   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:35.556694   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:38.055778   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:36.850946   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:39.350058   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:38.474827   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:11:38.488151   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:11:38.488221   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:11:38.523010   86402 cri.go:89] found id: ""
	I1104 12:11:38.523042   86402 logs.go:282] 0 containers: []
	W1104 12:11:38.523053   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:11:38.523061   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:11:38.523117   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:11:38.558065   86402 cri.go:89] found id: ""
	I1104 12:11:38.558093   86402 logs.go:282] 0 containers: []
	W1104 12:11:38.558102   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:11:38.558107   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:11:38.558153   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:11:38.590676   86402 cri.go:89] found id: ""
	I1104 12:11:38.590704   86402 logs.go:282] 0 containers: []
	W1104 12:11:38.590715   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:11:38.590723   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:11:38.590780   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:11:38.623762   86402 cri.go:89] found id: ""
	I1104 12:11:38.623793   86402 logs.go:282] 0 containers: []
	W1104 12:11:38.623804   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:11:38.623811   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:11:38.623870   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:11:38.655918   86402 cri.go:89] found id: ""
	I1104 12:11:38.655947   86402 logs.go:282] 0 containers: []
	W1104 12:11:38.655958   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:11:38.655966   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:11:38.656028   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:11:38.691200   86402 cri.go:89] found id: ""
	I1104 12:11:38.691228   86402 logs.go:282] 0 containers: []
	W1104 12:11:38.691238   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:11:38.691245   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:11:38.691302   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:11:38.724725   86402 cri.go:89] found id: ""
	I1104 12:11:38.724748   86402 logs.go:282] 0 containers: []
	W1104 12:11:38.724756   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:11:38.724761   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:11:38.724819   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:11:38.756333   86402 cri.go:89] found id: ""
	I1104 12:11:38.756360   86402 logs.go:282] 0 containers: []
	W1104 12:11:38.756370   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:11:38.756381   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:11:38.756395   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:11:38.807722   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:11:38.807756   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:11:38.821055   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:11:38.821079   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:11:38.886629   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:11:38.886656   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:11:38.886671   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:11:38.960958   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:11:38.960999   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:11:41.503471   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:11:41.515994   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:11:41.516065   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:11:41.549936   86402 cri.go:89] found id: ""
	I1104 12:11:41.549960   86402 logs.go:282] 0 containers: []
	W1104 12:11:41.549968   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:11:41.549975   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:11:41.550033   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:11:41.584565   86402 cri.go:89] found id: ""
	I1104 12:11:41.584590   86402 logs.go:282] 0 containers: []
	W1104 12:11:41.584602   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:11:41.584610   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:11:41.584660   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:11:41.616427   86402 cri.go:89] found id: ""
	I1104 12:11:41.616450   86402 logs.go:282] 0 containers: []
	W1104 12:11:41.616458   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:11:41.616463   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:11:41.616510   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:11:41.650835   86402 cri.go:89] found id: ""
	I1104 12:11:41.650864   86402 logs.go:282] 0 containers: []
	W1104 12:11:41.650875   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:11:41.650882   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:11:41.650946   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:11:40.707926   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:43.207969   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:40.555616   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:42.555839   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:44.556749   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:41.351131   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:43.851925   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:41.685899   86402 cri.go:89] found id: ""
	I1104 12:11:41.685921   86402 logs.go:282] 0 containers: []
	W1104 12:11:41.685928   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:11:41.685934   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:11:41.685979   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:11:41.718730   86402 cri.go:89] found id: ""
	I1104 12:11:41.718757   86402 logs.go:282] 0 containers: []
	W1104 12:11:41.718773   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:11:41.718782   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:11:41.718837   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:11:41.748843   86402 cri.go:89] found id: ""
	I1104 12:11:41.748875   86402 logs.go:282] 0 containers: []
	W1104 12:11:41.748887   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:11:41.748895   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:11:41.748963   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:11:41.780225   86402 cri.go:89] found id: ""
	I1104 12:11:41.780251   86402 logs.go:282] 0 containers: []
	W1104 12:11:41.780260   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:11:41.780268   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:11:41.780285   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:11:41.830864   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:11:41.830893   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:11:41.844252   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:11:41.844279   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:11:41.908514   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:11:41.908542   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:11:41.908554   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:11:41.988545   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:11:41.988582   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:11:44.527641   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:11:44.540026   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:11:44.540108   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:11:44.574530   86402 cri.go:89] found id: ""
	I1104 12:11:44.574559   86402 logs.go:282] 0 containers: []
	W1104 12:11:44.574570   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:11:44.574577   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:11:44.574638   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:11:44.606073   86402 cri.go:89] found id: ""
	I1104 12:11:44.606103   86402 logs.go:282] 0 containers: []
	W1104 12:11:44.606114   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:11:44.606121   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:11:44.606185   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:11:44.639750   86402 cri.go:89] found id: ""
	I1104 12:11:44.639775   86402 logs.go:282] 0 containers: []
	W1104 12:11:44.639784   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:11:44.639792   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:11:44.639850   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:11:44.673528   86402 cri.go:89] found id: ""
	I1104 12:11:44.673557   86402 logs.go:282] 0 containers: []
	W1104 12:11:44.673565   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:11:44.673573   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:11:44.673625   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:11:44.705928   86402 cri.go:89] found id: ""
	I1104 12:11:44.705956   86402 logs.go:282] 0 containers: []
	W1104 12:11:44.705966   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:11:44.705973   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:11:44.706032   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:11:44.736779   86402 cri.go:89] found id: ""
	I1104 12:11:44.736811   86402 logs.go:282] 0 containers: []
	W1104 12:11:44.736822   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:11:44.736830   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:11:44.736886   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:11:44.769929   86402 cri.go:89] found id: ""
	I1104 12:11:44.769956   86402 logs.go:282] 0 containers: []
	W1104 12:11:44.769964   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:11:44.769970   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:11:44.770015   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:11:44.800818   86402 cri.go:89] found id: ""
	I1104 12:11:44.800846   86402 logs.go:282] 0 containers: []
	W1104 12:11:44.800855   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:11:44.800863   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:11:44.800873   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:11:44.853610   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:11:44.853641   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:11:44.866656   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:11:44.866683   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:11:44.936386   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:11:44.936412   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:11:44.936425   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:11:45.011789   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:11:45.011823   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:11:45.707030   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:47.707464   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:49.711329   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:46.557112   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:49.055647   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:46.351055   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:48.850134   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:50.851867   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:47.548672   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:11:47.563082   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:11:47.563157   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:11:47.598722   86402 cri.go:89] found id: ""
	I1104 12:11:47.598748   86402 logs.go:282] 0 containers: []
	W1104 12:11:47.598756   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:11:47.598762   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:11:47.598809   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:11:47.633376   86402 cri.go:89] found id: ""
	I1104 12:11:47.633412   86402 logs.go:282] 0 containers: []
	W1104 12:11:47.633421   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:11:47.633428   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:11:47.633486   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:11:47.666059   86402 cri.go:89] found id: ""
	I1104 12:11:47.666087   86402 logs.go:282] 0 containers: []
	W1104 12:11:47.666095   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:11:47.666101   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:11:47.666147   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:11:47.700659   86402 cri.go:89] found id: ""
	I1104 12:11:47.700690   86402 logs.go:282] 0 containers: []
	W1104 12:11:47.700704   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:11:47.700711   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:11:47.700771   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:11:47.732901   86402 cri.go:89] found id: ""
	I1104 12:11:47.732927   86402 logs.go:282] 0 containers: []
	W1104 12:11:47.732934   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:11:47.732940   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:11:47.732984   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:11:47.765371   86402 cri.go:89] found id: ""
	I1104 12:11:47.765398   86402 logs.go:282] 0 containers: []
	W1104 12:11:47.765418   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:11:47.765425   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:11:47.765487   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:11:47.797043   86402 cri.go:89] found id: ""
	I1104 12:11:47.797077   86402 logs.go:282] 0 containers: []
	W1104 12:11:47.797089   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:11:47.797096   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:11:47.797159   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:11:47.828140   86402 cri.go:89] found id: ""
	I1104 12:11:47.828172   86402 logs.go:282] 0 containers: []
	W1104 12:11:47.828184   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:11:47.828194   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:11:47.828208   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:11:47.911398   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:11:47.911434   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:11:47.948042   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:11:47.948071   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:11:47.999603   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:11:47.999638   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:11:48.013818   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:11:48.013856   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:11:48.082679   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:11:50.583325   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:11:50.595272   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:11:50.595346   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:11:50.630857   86402 cri.go:89] found id: ""
	I1104 12:11:50.630883   86402 logs.go:282] 0 containers: []
	W1104 12:11:50.630892   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:11:50.630899   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:11:50.630965   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:11:50.663025   86402 cri.go:89] found id: ""
	I1104 12:11:50.663049   86402 logs.go:282] 0 containers: []
	W1104 12:11:50.663058   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:11:50.663063   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:11:50.663109   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:11:50.695371   86402 cri.go:89] found id: ""
	I1104 12:11:50.695402   86402 logs.go:282] 0 containers: []
	W1104 12:11:50.695413   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:11:50.695421   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:11:50.695480   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:11:50.728805   86402 cri.go:89] found id: ""
	I1104 12:11:50.728827   86402 logs.go:282] 0 containers: []
	W1104 12:11:50.728836   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:11:50.728841   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:11:50.728902   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:11:50.762837   86402 cri.go:89] found id: ""
	I1104 12:11:50.762868   86402 logs.go:282] 0 containers: []
	W1104 12:11:50.762878   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:11:50.762885   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:11:50.762941   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:11:50.802531   86402 cri.go:89] found id: ""
	I1104 12:11:50.802556   86402 logs.go:282] 0 containers: []
	W1104 12:11:50.802564   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:11:50.802569   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:11:50.802613   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:11:50.835124   86402 cri.go:89] found id: ""
	I1104 12:11:50.835161   86402 logs.go:282] 0 containers: []
	W1104 12:11:50.835173   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:11:50.835180   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:11:50.835234   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:11:50.869265   86402 cri.go:89] found id: ""
	I1104 12:11:50.869295   86402 logs.go:282] 0 containers: []
	W1104 12:11:50.869308   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:11:50.869318   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:11:50.869330   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:11:50.919371   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:11:50.919405   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:11:50.932165   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:11:50.932195   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:11:50.993935   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:11:50.993959   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:11:50.993972   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:11:51.071816   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:11:51.071848   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:11:52.208101   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:54.707467   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:51.056129   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:53.057025   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:53.349902   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:55.350302   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:53.608347   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:11:53.620842   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:11:53.620902   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:11:53.652870   86402 cri.go:89] found id: ""
	I1104 12:11:53.652896   86402 logs.go:282] 0 containers: []
	W1104 12:11:53.652909   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:11:53.652917   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:11:53.652980   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:11:53.684842   86402 cri.go:89] found id: ""
	I1104 12:11:53.684878   86402 logs.go:282] 0 containers: []
	W1104 12:11:53.684889   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:11:53.684897   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:11:53.684956   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:11:53.722505   86402 cri.go:89] found id: ""
	I1104 12:11:53.722531   86402 logs.go:282] 0 containers: []
	W1104 12:11:53.722539   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:11:53.722544   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:11:53.722603   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:11:53.753831   86402 cri.go:89] found id: ""
	I1104 12:11:53.753858   86402 logs.go:282] 0 containers: []
	W1104 12:11:53.753866   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:11:53.753872   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:11:53.753918   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:11:53.786112   86402 cri.go:89] found id: ""
	I1104 12:11:53.786139   86402 logs.go:282] 0 containers: []
	W1104 12:11:53.786150   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:11:53.786157   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:11:53.786218   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:11:53.820446   86402 cri.go:89] found id: ""
	I1104 12:11:53.820472   86402 logs.go:282] 0 containers: []
	W1104 12:11:53.820487   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:11:53.820493   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:11:53.820552   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:11:53.855631   86402 cri.go:89] found id: ""
	I1104 12:11:53.855655   86402 logs.go:282] 0 containers: []
	W1104 12:11:53.855665   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:11:53.855673   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:11:53.855727   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:11:53.887953   86402 cri.go:89] found id: ""
	I1104 12:11:53.887983   86402 logs.go:282] 0 containers: []
	W1104 12:11:53.887994   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:11:53.888004   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:11:53.888023   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:11:53.954408   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:11:53.954430   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:11:53.954442   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:11:54.028549   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:11:54.028584   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:11:54.070869   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:11:54.070895   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:11:54.123676   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:11:54.123715   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:11:56.639480   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:11:56.652651   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:11:56.652709   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:11:56.708211   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:59.207617   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:55.555992   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:58.056271   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:57.350474   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:59.850830   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:11:56.689397   86402 cri.go:89] found id: ""
	I1104 12:11:56.689425   86402 logs.go:282] 0 containers: []
	W1104 12:11:56.689443   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:11:56.689452   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:11:56.689517   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:11:56.725197   86402 cri.go:89] found id: ""
	I1104 12:11:56.725234   86402 logs.go:282] 0 containers: []
	W1104 12:11:56.725246   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:11:56.725254   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:11:56.725308   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:11:56.759043   86402 cri.go:89] found id: ""
	I1104 12:11:56.759073   86402 logs.go:282] 0 containers: []
	W1104 12:11:56.759084   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:11:56.759090   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:11:56.759141   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:11:56.792268   86402 cri.go:89] found id: ""
	I1104 12:11:56.792296   86402 logs.go:282] 0 containers: []
	W1104 12:11:56.792307   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:11:56.792314   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:11:56.792375   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:11:56.823668   86402 cri.go:89] found id: ""
	I1104 12:11:56.823692   86402 logs.go:282] 0 containers: []
	W1104 12:11:56.823702   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:11:56.823709   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:11:56.823769   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:11:56.861812   86402 cri.go:89] found id: ""
	I1104 12:11:56.861837   86402 logs.go:282] 0 containers: []
	W1104 12:11:56.861845   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:11:56.861851   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:11:56.861902   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:11:56.894037   86402 cri.go:89] found id: ""
	I1104 12:11:56.894067   86402 logs.go:282] 0 containers: []
	W1104 12:11:56.894075   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:11:56.894080   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:11:56.894133   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:11:56.925603   86402 cri.go:89] found id: ""
	I1104 12:11:56.925634   86402 logs.go:282] 0 containers: []
	W1104 12:11:56.925646   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:11:56.925656   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:11:56.925669   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:11:56.961504   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:11:56.961530   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:11:57.012666   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:11:57.012700   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:11:57.025887   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:11:57.025921   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:11:57.097219   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:11:57.097257   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:11:57.097272   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:11:59.671179   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:11:59.684642   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:11:59.684718   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:11:59.721599   86402 cri.go:89] found id: ""
	I1104 12:11:59.721622   86402 logs.go:282] 0 containers: []
	W1104 12:11:59.721631   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:11:59.721640   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:11:59.721693   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:11:59.757423   86402 cri.go:89] found id: ""
	I1104 12:11:59.757453   86402 logs.go:282] 0 containers: []
	W1104 12:11:59.757461   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:11:59.757466   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:11:59.757525   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:11:59.794036   86402 cri.go:89] found id: ""
	I1104 12:11:59.794071   86402 logs.go:282] 0 containers: []
	W1104 12:11:59.794081   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:11:59.794089   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:11:59.794148   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:11:59.830098   86402 cri.go:89] found id: ""
	I1104 12:11:59.830123   86402 logs.go:282] 0 containers: []
	W1104 12:11:59.830134   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:11:59.830142   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:11:59.830207   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:11:59.867791   86402 cri.go:89] found id: ""
	I1104 12:11:59.867815   86402 logs.go:282] 0 containers: []
	W1104 12:11:59.867823   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:11:59.867828   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:11:59.867879   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:11:59.903579   86402 cri.go:89] found id: ""
	I1104 12:11:59.903607   86402 logs.go:282] 0 containers: []
	W1104 12:11:59.903614   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:11:59.903620   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:11:59.903667   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:11:59.940955   86402 cri.go:89] found id: ""
	I1104 12:11:59.940977   86402 logs.go:282] 0 containers: []
	W1104 12:11:59.940984   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:11:59.940989   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:11:59.941034   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:11:59.977626   86402 cri.go:89] found id: ""
	I1104 12:11:59.977653   86402 logs.go:282] 0 containers: []
	W1104 12:11:59.977663   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:11:59.977674   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:11:59.977687   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:12:00.032280   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:12:00.032312   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:12:00.045965   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:12:00.045991   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:12:00.123578   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:12:00.123608   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:12:00.123625   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:12:00.208309   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:12:00.208340   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:12:01.707661   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:04.207140   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:00.555683   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:02.555797   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:04.556558   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:01.851646   85759 pod_ready.go:103] pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:01.851680   85759 pod_ready.go:82] duration metric: took 4m0.007179751s for pod "metrics-server-6867b74b74-knfd4" in "kube-system" namespace to be "Ready" ...
	E1104 12:12:01.851691   85759 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1104 12:12:01.851701   85759 pod_ready.go:39] duration metric: took 4m4.052369029s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
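The 4m duration above is the extra readiness wait giving up on the metrics-server pod, after which the run moves on to waiting for the apiserver process. An equivalent manual check (illustrative; the pod name and namespace are taken from the log, the kubectl context is a placeholder):

    # Inspect why the metrics-server pod never reported Ready (<profile> is a placeholder).
    kubectl --context <profile> -n kube-system get pod metrics-server-6867b74b74-knfd4 -o wide
    kubectl --context <profile> -n kube-system describe pod metrics-server-6867b74b74-knfd4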
	I1104 12:12:01.851721   85759 api_server.go:52] waiting for apiserver process to appear ...
	I1104 12:12:01.851752   85759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:12:01.851805   85759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:12:01.891029   85759 cri.go:89] found id: "6e7999c6e5a24f7d8706b6722e1fe66996ecec4550d4a901729cac1f3f108f28"
	I1104 12:12:01.891056   85759 cri.go:89] found id: ""
	I1104 12:12:01.891066   85759 logs.go:282] 1 containers: [6e7999c6e5a24f7d8706b6722e1fe66996ecec4550d4a901729cac1f3f108f28]
	I1104 12:12:01.891128   85759 ssh_runner.go:195] Run: which crictl
	I1104 12:12:01.895134   85759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:12:01.895243   85759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:12:01.928058   85759 cri.go:89] found id: "5b575c045ea6e47b4e5a1384b6b473570efefa6f645c9c75d1eedf037230bc06"
	I1104 12:12:01.928081   85759 cri.go:89] found id: ""
	I1104 12:12:01.928089   85759 logs.go:282] 1 containers: [5b575c045ea6e47b4e5a1384b6b473570efefa6f645c9c75d1eedf037230bc06]
	I1104 12:12:01.928134   85759 ssh_runner.go:195] Run: which crictl
	I1104 12:12:01.931906   85759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:12:01.931974   85759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:12:01.972023   85759 cri.go:89] found id: "d1f0c1ed5e8911dccb619576be93e1623936a2b05ccc1e80d52ffe8ba1292d27"
	I1104 12:12:01.972052   85759 cri.go:89] found id: ""
	I1104 12:12:01.972062   85759 logs.go:282] 1 containers: [d1f0c1ed5e8911dccb619576be93e1623936a2b05ccc1e80d52ffe8ba1292d27]
	I1104 12:12:01.972116   85759 ssh_runner.go:195] Run: which crictl
	I1104 12:12:01.980811   85759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:12:01.980894   85759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:12:02.024013   85759 cri.go:89] found id: "a5a0cb5f09f9920c57a05bcc9c16cdcab6806c42458b625d4d075a9d1bc3f80f"
	I1104 12:12:02.024038   85759 cri.go:89] found id: ""
	I1104 12:12:02.024046   85759 logs.go:282] 1 containers: [a5a0cb5f09f9920c57a05bcc9c16cdcab6806c42458b625d4d075a9d1bc3f80f]
	I1104 12:12:02.024108   85759 ssh_runner.go:195] Run: which crictl
	I1104 12:12:02.028571   85759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:12:02.028641   85759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:12:02.063545   85759 cri.go:89] found id: "512d8563ff2ef5d1f8021fa8815d00d2705c8df3b079a23ee6fc909af1c980f0"
	I1104 12:12:02.063570   85759 cri.go:89] found id: ""
	I1104 12:12:02.063580   85759 logs.go:282] 1 containers: [512d8563ff2ef5d1f8021fa8815d00d2705c8df3b079a23ee6fc909af1c980f0]
	I1104 12:12:02.063635   85759 ssh_runner.go:195] Run: which crictl
	I1104 12:12:02.067582   85759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:12:02.067652   85759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:12:02.100954   85759 cri.go:89] found id: "5751adaa2cf784f7619b281e11898b312ecc8186b36029e9e8a3b8e484cd703b"
	I1104 12:12:02.100979   85759 cri.go:89] found id: ""
	I1104 12:12:02.100989   85759 logs.go:282] 1 containers: [5751adaa2cf784f7619b281e11898b312ecc8186b36029e9e8a3b8e484cd703b]
	I1104 12:12:02.101038   85759 ssh_runner.go:195] Run: which crictl
	I1104 12:12:02.105121   85759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:12:02.105182   85759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:12:02.137206   85759 cri.go:89] found id: ""
	I1104 12:12:02.137249   85759 logs.go:282] 0 containers: []
	W1104 12:12:02.137260   85759 logs.go:284] No container was found matching "kindnet"
	I1104 12:12:02.137268   85759 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1104 12:12:02.137317   85759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1104 12:12:02.171499   85759 cri.go:89] found id: "95a9eb50a127a39e6276205bf440bcbc662ba25b24e029dc1a667f48f8481dde"
	I1104 12:12:02.171520   85759 cri.go:89] found id: "c7558f4e108716f1710271ea75be748ffd928b609788942a3658f0d2237ebcc7"
	I1104 12:12:02.171526   85759 cri.go:89] found id: ""
	I1104 12:12:02.171535   85759 logs.go:282] 2 containers: [95a9eb50a127a39e6276205bf440bcbc662ba25b24e029dc1a667f48f8481dde c7558f4e108716f1710271ea75be748ffd928b609788942a3658f0d2237ebcc7]
	I1104 12:12:02.171587   85759 ssh_runner.go:195] Run: which crictl
	I1104 12:12:02.175646   85759 ssh_runner.go:195] Run: which crictl
	I1104 12:12:02.179066   85759 logs.go:123] Gathering logs for kubelet ...
	I1104 12:12:02.179084   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:12:02.249087   85759 logs.go:123] Gathering logs for dmesg ...
	I1104 12:12:02.249126   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:12:02.262666   85759 logs.go:123] Gathering logs for kube-apiserver [6e7999c6e5a24f7d8706b6722e1fe66996ecec4550d4a901729cac1f3f108f28] ...
	I1104 12:12:02.262692   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6e7999c6e5a24f7d8706b6722e1fe66996ecec4550d4a901729cac1f3f108f28"
	I1104 12:12:02.316826   85759 logs.go:123] Gathering logs for kube-scheduler [a5a0cb5f09f9920c57a05bcc9c16cdcab6806c42458b625d4d075a9d1bc3f80f] ...
	I1104 12:12:02.316856   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a5a0cb5f09f9920c57a05bcc9c16cdcab6806c42458b625d4d075a9d1bc3f80f"
	I1104 12:12:02.351741   85759 logs.go:123] Gathering logs for kube-controller-manager [5751adaa2cf784f7619b281e11898b312ecc8186b36029e9e8a3b8e484cd703b] ...
	I1104 12:12:02.351766   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5751adaa2cf784f7619b281e11898b312ecc8186b36029e9e8a3b8e484cd703b"
	I1104 12:12:02.400377   85759 logs.go:123] Gathering logs for storage-provisioner [c7558f4e108716f1710271ea75be748ffd928b609788942a3658f0d2237ebcc7] ...
	I1104 12:12:02.400409   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c7558f4e108716f1710271ea75be748ffd928b609788942a3658f0d2237ebcc7"
	I1104 12:12:02.448029   85759 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:12:02.448059   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:12:02.975331   85759 logs.go:123] Gathering logs for container status ...
	I1104 12:12:02.975367   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:12:03.026697   85759 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:12:03.026739   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1104 12:12:03.140704   85759 logs.go:123] Gathering logs for etcd [5b575c045ea6e47b4e5a1384b6b473570efefa6f645c9c75d1eedf037230bc06] ...
	I1104 12:12:03.140753   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b575c045ea6e47b4e5a1384b6b473570efefa6f645c9c75d1eedf037230bc06"
	I1104 12:12:03.192394   85759 logs.go:123] Gathering logs for coredns [d1f0c1ed5e8911dccb619576be93e1623936a2b05ccc1e80d52ffe8ba1292d27] ...
	I1104 12:12:03.192427   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d1f0c1ed5e8911dccb619576be93e1623936a2b05ccc1e80d52ffe8ba1292d27"
	I1104 12:12:03.236040   85759 logs.go:123] Gathering logs for kube-proxy [512d8563ff2ef5d1f8021fa8815d00d2705c8df3b079a23ee6fc909af1c980f0] ...
	I1104 12:12:03.236071   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 512d8563ff2ef5d1f8021fa8815d00d2705c8df3b079a23ee6fc909af1c980f0"
	I1104 12:12:03.275166   85759 logs.go:123] Gathering logs for storage-provisioner [95a9eb50a127a39e6276205bf440bcbc662ba25b24e029dc1a667f48f8481dde] ...
	I1104 12:12:03.275194   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 95a9eb50a127a39e6276205bf440bcbc662ba25b24e029dc1a667f48f8481dde"
	I1104 12:12:05.813333   85759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:12:05.827697   85759 api_server.go:72] duration metric: took 4m15.741105379s to wait for apiserver process to appear ...
	I1104 12:12:05.827725   85759 api_server.go:88] waiting for apiserver healthz status ...
	I1104 12:12:05.827763   85759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:12:05.827826   85759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:12:05.869552   85759 cri.go:89] found id: "6e7999c6e5a24f7d8706b6722e1fe66996ecec4550d4a901729cac1f3f108f28"
	I1104 12:12:05.869580   85759 cri.go:89] found id: ""
	I1104 12:12:05.869590   85759 logs.go:282] 1 containers: [6e7999c6e5a24f7d8706b6722e1fe66996ecec4550d4a901729cac1f3f108f28]
	I1104 12:12:05.869642   85759 ssh_runner.go:195] Run: which crictl
	I1104 12:12:05.873890   85759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:12:05.873954   85759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:12:05.914131   85759 cri.go:89] found id: "5b575c045ea6e47b4e5a1384b6b473570efefa6f645c9c75d1eedf037230bc06"
	I1104 12:12:05.914153   85759 cri.go:89] found id: ""
	I1104 12:12:05.914161   85759 logs.go:282] 1 containers: [5b575c045ea6e47b4e5a1384b6b473570efefa6f645c9c75d1eedf037230bc06]
	I1104 12:12:05.914216   85759 ssh_runner.go:195] Run: which crictl
	I1104 12:12:05.920980   85759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:12:05.921042   85759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:12:05.960930   85759 cri.go:89] found id: "d1f0c1ed5e8911dccb619576be93e1623936a2b05ccc1e80d52ffe8ba1292d27"
	I1104 12:12:05.960953   85759 cri.go:89] found id: ""
	I1104 12:12:05.960962   85759 logs.go:282] 1 containers: [d1f0c1ed5e8911dccb619576be93e1623936a2b05ccc1e80d52ffe8ba1292d27]
	I1104 12:12:05.961018   85759 ssh_runner.go:195] Run: which crictl
	I1104 12:12:05.965591   85759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:12:05.965653   85759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:12:06.000500   85759 cri.go:89] found id: "a5a0cb5f09f9920c57a05bcc9c16cdcab6806c42458b625d4d075a9d1bc3f80f"
	I1104 12:12:06.000520   85759 cri.go:89] found id: ""
	I1104 12:12:06.000526   85759 logs.go:282] 1 containers: [a5a0cb5f09f9920c57a05bcc9c16cdcab6806c42458b625d4d075a9d1bc3f80f]
	I1104 12:12:06.000576   85759 ssh_runner.go:195] Run: which crictl
	I1104 12:12:06.004775   85759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:12:06.004835   85759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:12:06.042011   85759 cri.go:89] found id: "512d8563ff2ef5d1f8021fa8815d00d2705c8df3b079a23ee6fc909af1c980f0"
	I1104 12:12:06.042032   85759 cri.go:89] found id: ""
	I1104 12:12:06.042041   85759 logs.go:282] 1 containers: [512d8563ff2ef5d1f8021fa8815d00d2705c8df3b079a23ee6fc909af1c980f0]
	I1104 12:12:06.042102   85759 ssh_runner.go:195] Run: which crictl
	I1104 12:12:06.047885   85759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:12:06.047952   85759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:12:06.084318   85759 cri.go:89] found id: "5751adaa2cf784f7619b281e11898b312ecc8186b36029e9e8a3b8e484cd703b"
	I1104 12:12:06.084341   85759 cri.go:89] found id: ""
	I1104 12:12:06.084349   85759 logs.go:282] 1 containers: [5751adaa2cf784f7619b281e11898b312ecc8186b36029e9e8a3b8e484cd703b]
	I1104 12:12:06.084410   85759 ssh_runner.go:195] Run: which crictl
	I1104 12:12:06.088487   85759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:12:06.088564   85759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:12:06.127693   85759 cri.go:89] found id: ""
	I1104 12:12:06.127721   85759 logs.go:282] 0 containers: []
	W1104 12:12:06.127730   85759 logs.go:284] No container was found matching "kindnet"
	I1104 12:12:06.127736   85759 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1104 12:12:06.127780   85759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1104 12:12:06.165173   85759 cri.go:89] found id: "95a9eb50a127a39e6276205bf440bcbc662ba25b24e029dc1a667f48f8481dde"
	I1104 12:12:06.165199   85759 cri.go:89] found id: "c7558f4e108716f1710271ea75be748ffd928b609788942a3658f0d2237ebcc7"
	I1104 12:12:06.165206   85759 cri.go:89] found id: ""
	I1104 12:12:06.165215   85759 logs.go:282] 2 containers: [95a9eb50a127a39e6276205bf440bcbc662ba25b24e029dc1a667f48f8481dde c7558f4e108716f1710271ea75be748ffd928b609788942a3658f0d2237ebcc7]
	I1104 12:12:06.165302   85759 ssh_runner.go:195] Run: which crictl
	I1104 12:12:06.169479   85759 ssh_runner.go:195] Run: which crictl
	I1104 12:12:06.173154   85759 logs.go:123] Gathering logs for kubelet ...
	I1104 12:12:06.173177   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:12:02.746303   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:12:02.758892   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:12:02.758967   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:12:02.792775   86402 cri.go:89] found id: ""
	I1104 12:12:02.792803   86402 logs.go:282] 0 containers: []
	W1104 12:12:02.792815   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:12:02.792822   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:12:02.792878   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:12:02.831073   86402 cri.go:89] found id: ""
	I1104 12:12:02.831097   86402 logs.go:282] 0 containers: []
	W1104 12:12:02.831108   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:12:02.831115   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:12:02.831174   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:12:02.863530   86402 cri.go:89] found id: ""
	I1104 12:12:02.863557   86402 logs.go:282] 0 containers: []
	W1104 12:12:02.863568   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:12:02.863574   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:12:02.863641   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:12:02.894894   86402 cri.go:89] found id: ""
	I1104 12:12:02.894924   86402 logs.go:282] 0 containers: []
	W1104 12:12:02.894934   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:12:02.894942   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:12:02.894996   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:12:02.930052   86402 cri.go:89] found id: ""
	I1104 12:12:02.930081   86402 logs.go:282] 0 containers: []
	W1104 12:12:02.930092   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:12:02.930100   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:12:02.930160   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:12:02.964503   86402 cri.go:89] found id: ""
	I1104 12:12:02.964532   86402 logs.go:282] 0 containers: []
	W1104 12:12:02.964544   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:12:02.964551   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:12:02.964610   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:12:02.998065   86402 cri.go:89] found id: ""
	I1104 12:12:02.998088   86402 logs.go:282] 0 containers: []
	W1104 12:12:02.998096   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:12:02.998102   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:12:02.998148   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:12:03.033579   86402 cri.go:89] found id: ""
	I1104 12:12:03.033604   86402 logs.go:282] 0 containers: []
	W1104 12:12:03.033613   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:12:03.033621   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:12:03.033630   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:12:03.086215   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:12:03.086249   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:12:03.100100   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:12:03.100136   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:12:03.168116   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:12:03.168150   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:12:03.168165   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:12:03.253608   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:12:03.253642   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
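Every "describe nodes" attempt in these passes fails the same way: the v1.20.0 kubectl is pointed at localhost:8443 via /var/lib/minikube/kubeconfig, but the container probes show no kube-apiserver running, so the connection is refused. A quick confirmation from inside the node (illustrative):

    # Nothing should be listening on the apiserver port while the probes report 0 containers.
    sudo ss -ltnp | grep ':8443' || echo 'nothing listening on :8443'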
	I1104 12:12:05.792913   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:12:05.806494   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:12:05.806568   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:12:05.854379   86402 cri.go:89] found id: ""
	I1104 12:12:05.854406   86402 logs.go:282] 0 containers: []
	W1104 12:12:05.854417   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:12:05.854425   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:12:05.854503   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:12:05.886144   86402 cri.go:89] found id: ""
	I1104 12:12:05.886169   86402 logs.go:282] 0 containers: []
	W1104 12:12:05.886179   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:12:05.886186   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:12:05.886248   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:12:05.917462   86402 cri.go:89] found id: ""
	I1104 12:12:05.917482   86402 logs.go:282] 0 containers: []
	W1104 12:12:05.917492   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:12:05.917499   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:12:05.917550   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:12:05.954065   86402 cri.go:89] found id: ""
	I1104 12:12:05.954099   86402 logs.go:282] 0 containers: []
	W1104 12:12:05.954110   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:12:05.954120   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:12:05.954194   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:12:05.990935   86402 cri.go:89] found id: ""
	I1104 12:12:05.990966   86402 logs.go:282] 0 containers: []
	W1104 12:12:05.990977   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:12:05.990984   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:12:05.991050   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:12:06.032175   86402 cri.go:89] found id: ""
	I1104 12:12:06.032198   86402 logs.go:282] 0 containers: []
	W1104 12:12:06.032206   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:12:06.032211   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:12:06.032269   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:12:06.069215   86402 cri.go:89] found id: ""
	I1104 12:12:06.069262   86402 logs.go:282] 0 containers: []
	W1104 12:12:06.069275   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:12:06.069282   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:12:06.069340   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:12:06.103065   86402 cri.go:89] found id: ""
	I1104 12:12:06.103106   86402 logs.go:282] 0 containers: []
	W1104 12:12:06.103117   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:12:06.103127   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:12:06.103145   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:12:06.184111   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:12:06.184135   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:12:06.184149   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:12:06.272720   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:12:06.272760   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:12:06.315596   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:12:06.315636   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:12:06.376054   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:12:06.376110   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:12:06.214237   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:08.707098   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:07.056531   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:09.056763   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:06.252295   85759 logs.go:123] Gathering logs for kube-apiserver [6e7999c6e5a24f7d8706b6722e1fe66996ecec4550d4a901729cac1f3f108f28] ...
	I1104 12:12:06.252326   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6e7999c6e5a24f7d8706b6722e1fe66996ecec4550d4a901729cac1f3f108f28"
	I1104 12:12:06.302739   85759 logs.go:123] Gathering logs for etcd [5b575c045ea6e47b4e5a1384b6b473570efefa6f645c9c75d1eedf037230bc06] ...
	I1104 12:12:06.302769   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b575c045ea6e47b4e5a1384b6b473570efefa6f645c9c75d1eedf037230bc06"
	I1104 12:12:06.361279   85759 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:12:06.361307   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:12:06.811335   85759 logs.go:123] Gathering logs for container status ...
	I1104 12:12:06.811380   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:12:06.851356   85759 logs.go:123] Gathering logs for storage-provisioner [95a9eb50a127a39e6276205bf440bcbc662ba25b24e029dc1a667f48f8481dde] ...
	I1104 12:12:06.851387   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 95a9eb50a127a39e6276205bf440bcbc662ba25b24e029dc1a667f48f8481dde"
	I1104 12:12:06.888753   85759 logs.go:123] Gathering logs for storage-provisioner [c7558f4e108716f1710271ea75be748ffd928b609788942a3658f0d2237ebcc7] ...
	I1104 12:12:06.888789   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c7558f4e108716f1710271ea75be748ffd928b609788942a3658f0d2237ebcc7"
	I1104 12:12:06.922406   85759 logs.go:123] Gathering logs for dmesg ...
	I1104 12:12:06.922438   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:12:06.935028   85759 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:12:06.935057   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1104 12:12:07.033975   85759 logs.go:123] Gathering logs for coredns [d1f0c1ed5e8911dccb619576be93e1623936a2b05ccc1e80d52ffe8ba1292d27] ...
	I1104 12:12:07.034019   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d1f0c1ed5e8911dccb619576be93e1623936a2b05ccc1e80d52ffe8ba1292d27"
	I1104 12:12:07.068680   85759 logs.go:123] Gathering logs for kube-scheduler [a5a0cb5f09f9920c57a05bcc9c16cdcab6806c42458b625d4d075a9d1bc3f80f] ...
	I1104 12:12:07.068706   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a5a0cb5f09f9920c57a05bcc9c16cdcab6806c42458b625d4d075a9d1bc3f80f"
	I1104 12:12:07.105150   85759 logs.go:123] Gathering logs for kube-proxy [512d8563ff2ef5d1f8021fa8815d00d2705c8df3b079a23ee6fc909af1c980f0] ...
	I1104 12:12:07.105182   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 512d8563ff2ef5d1f8021fa8815d00d2705c8df3b079a23ee6fc909af1c980f0"
	I1104 12:12:07.139258   85759 logs.go:123] Gathering logs for kube-controller-manager [5751adaa2cf784f7619b281e11898b312ecc8186b36029e9e8a3b8e484cd703b] ...
	I1104 12:12:07.139290   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5751adaa2cf784f7619b281e11898b312ecc8186b36029e9e8a3b8e484cd703b"
	I1104 12:12:09.695630   85759 api_server.go:253] Checking apiserver healthz at https://192.168.39.47:8443/healthz ...
	I1104 12:12:09.701156   85759 api_server.go:279] https://192.168.39.47:8443/healthz returned 200:
	ok
	I1104 12:12:09.702414   85759 api_server.go:141] control plane version: v1.31.2
	I1104 12:12:09.702441   85759 api_server.go:131] duration metric: took 3.874707524s to wait for apiserver health ...
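The healthz probe above is a plain HTTPS GET against the apiserver; once it returns 200/ok, the control-plane version is read and the run proceeds to waiting for kube-system pods. Reproducing the probe by hand (illustrative; the address comes from the log, and -k skips certificate verification, so a 401/403 instead of "ok" is possible if anonymous access to /healthz is disabled):

    # Same endpoint minikube polls for apiserver health.
    curl -k https://192.168.39.47:8443/healthz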
	I1104 12:12:09.702451   85759 system_pods.go:43] waiting for kube-system pods to appear ...
	I1104 12:12:09.702475   85759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:12:09.702530   85759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:12:09.736867   85759 cri.go:89] found id: "6e7999c6e5a24f7d8706b6722e1fe66996ecec4550d4a901729cac1f3f108f28"
	I1104 12:12:09.736891   85759 cri.go:89] found id: ""
	I1104 12:12:09.736901   85759 logs.go:282] 1 containers: [6e7999c6e5a24f7d8706b6722e1fe66996ecec4550d4a901729cac1f3f108f28]
	I1104 12:12:09.736956   85759 ssh_runner.go:195] Run: which crictl
	I1104 12:12:09.741108   85759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:12:09.741176   85759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:12:09.780460   85759 cri.go:89] found id: "5b575c045ea6e47b4e5a1384b6b473570efefa6f645c9c75d1eedf037230bc06"
	I1104 12:12:09.780483   85759 cri.go:89] found id: ""
	I1104 12:12:09.780490   85759 logs.go:282] 1 containers: [5b575c045ea6e47b4e5a1384b6b473570efefa6f645c9c75d1eedf037230bc06]
	I1104 12:12:09.780535   85759 ssh_runner.go:195] Run: which crictl
	I1104 12:12:09.784698   85759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:12:09.784756   85759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:12:09.823042   85759 cri.go:89] found id: "d1f0c1ed5e8911dccb619576be93e1623936a2b05ccc1e80d52ffe8ba1292d27"
	I1104 12:12:09.823059   85759 cri.go:89] found id: ""
	I1104 12:12:09.823068   85759 logs.go:282] 1 containers: [d1f0c1ed5e8911dccb619576be93e1623936a2b05ccc1e80d52ffe8ba1292d27]
	I1104 12:12:09.823123   85759 ssh_runner.go:195] Run: which crictl
	I1104 12:12:09.826750   85759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:12:09.826803   85759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:12:09.859148   85759 cri.go:89] found id: "a5a0cb5f09f9920c57a05bcc9c16cdcab6806c42458b625d4d075a9d1bc3f80f"
	I1104 12:12:09.859175   85759 cri.go:89] found id: ""
	I1104 12:12:09.859185   85759 logs.go:282] 1 containers: [a5a0cb5f09f9920c57a05bcc9c16cdcab6806c42458b625d4d075a9d1bc3f80f]
	I1104 12:12:09.859245   85759 ssh_runner.go:195] Run: which crictl
	I1104 12:12:09.863676   85759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:12:09.863739   85759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:12:09.901737   85759 cri.go:89] found id: "512d8563ff2ef5d1f8021fa8815d00d2705c8df3b079a23ee6fc909af1c980f0"
	I1104 12:12:09.901766   85759 cri.go:89] found id: ""
	I1104 12:12:09.901783   85759 logs.go:282] 1 containers: [512d8563ff2ef5d1f8021fa8815d00d2705c8df3b079a23ee6fc909af1c980f0]
	I1104 12:12:09.901843   85759 ssh_runner.go:195] Run: which crictl
	I1104 12:12:09.905931   85759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:12:09.905993   85759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:12:09.942617   85759 cri.go:89] found id: "5751adaa2cf784f7619b281e11898b312ecc8186b36029e9e8a3b8e484cd703b"
	I1104 12:12:09.942637   85759 cri.go:89] found id: ""
	I1104 12:12:09.942644   85759 logs.go:282] 1 containers: [5751adaa2cf784f7619b281e11898b312ecc8186b36029e9e8a3b8e484cd703b]
	I1104 12:12:09.942687   85759 ssh_runner.go:195] Run: which crictl
	I1104 12:12:09.946420   85759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:12:09.946481   85759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:12:09.984891   85759 cri.go:89] found id: ""
	I1104 12:12:09.984921   85759 logs.go:282] 0 containers: []
	W1104 12:12:09.984932   85759 logs.go:284] No container was found matching "kindnet"
	I1104 12:12:09.984939   85759 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1104 12:12:09.985000   85759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1104 12:12:10.018332   85759 cri.go:89] found id: "95a9eb50a127a39e6276205bf440bcbc662ba25b24e029dc1a667f48f8481dde"
	I1104 12:12:10.018357   85759 cri.go:89] found id: "c7558f4e108716f1710271ea75be748ffd928b609788942a3658f0d2237ebcc7"
	I1104 12:12:10.018363   85759 cri.go:89] found id: ""
	I1104 12:12:10.018374   85759 logs.go:282] 2 containers: [95a9eb50a127a39e6276205bf440bcbc662ba25b24e029dc1a667f48f8481dde c7558f4e108716f1710271ea75be748ffd928b609788942a3658f0d2237ebcc7]
	I1104 12:12:10.018434   85759 ssh_runner.go:195] Run: which crictl
	I1104 12:12:10.022995   85759 ssh_runner.go:195] Run: which crictl
	I1104 12:12:10.026853   85759 logs.go:123] Gathering logs for etcd [5b575c045ea6e47b4e5a1384b6b473570efefa6f645c9c75d1eedf037230bc06] ...
	I1104 12:12:10.026878   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b575c045ea6e47b4e5a1384b6b473570efefa6f645c9c75d1eedf037230bc06"
	I1104 12:12:10.083384   85759 logs.go:123] Gathering logs for kube-controller-manager [5751adaa2cf784f7619b281e11898b312ecc8186b36029e9e8a3b8e484cd703b] ...
	I1104 12:12:10.083421   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5751adaa2cf784f7619b281e11898b312ecc8186b36029e9e8a3b8e484cd703b"
	I1104 12:12:10.136576   85759 logs.go:123] Gathering logs for storage-provisioner [95a9eb50a127a39e6276205bf440bcbc662ba25b24e029dc1a667f48f8481dde] ...
	I1104 12:12:10.136608   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 95a9eb50a127a39e6276205bf440bcbc662ba25b24e029dc1a667f48f8481dde"
	I1104 12:12:10.182808   85759 logs.go:123] Gathering logs for storage-provisioner [c7558f4e108716f1710271ea75be748ffd928b609788942a3658f0d2237ebcc7] ...
	I1104 12:12:10.182837   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c7558f4e108716f1710271ea75be748ffd928b609788942a3658f0d2237ebcc7"
	I1104 12:12:10.217017   85759 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:12:10.217047   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:12:10.598972   85759 logs.go:123] Gathering logs for container status ...
	I1104 12:12:10.599010   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:12:10.638827   85759 logs.go:123] Gathering logs for dmesg ...
	I1104 12:12:10.638868   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:12:10.652880   85759 logs.go:123] Gathering logs for kube-apiserver [6e7999c6e5a24f7d8706b6722e1fe66996ecec4550d4a901729cac1f3f108f28] ...
	I1104 12:12:10.652923   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6e7999c6e5a24f7d8706b6722e1fe66996ecec4550d4a901729cac1f3f108f28"
	I1104 12:12:10.700645   85759 logs.go:123] Gathering logs for coredns [d1f0c1ed5e8911dccb619576be93e1623936a2b05ccc1e80d52ffe8ba1292d27] ...
	I1104 12:12:10.700675   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d1f0c1ed5e8911dccb619576be93e1623936a2b05ccc1e80d52ffe8ba1292d27"
	I1104 12:12:10.734860   85759 logs.go:123] Gathering logs for kube-scheduler [a5a0cb5f09f9920c57a05bcc9c16cdcab6806c42458b625d4d075a9d1bc3f80f] ...
	I1104 12:12:10.734890   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a5a0cb5f09f9920c57a05bcc9c16cdcab6806c42458b625d4d075a9d1bc3f80f"
	I1104 12:12:10.774613   85759 logs.go:123] Gathering logs for kube-proxy [512d8563ff2ef5d1f8021fa8815d00d2705c8df3b079a23ee6fc909af1c980f0] ...
	I1104 12:12:10.774647   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 512d8563ff2ef5d1f8021fa8815d00d2705c8df3b079a23ee6fc909af1c980f0"
	I1104 12:12:10.808375   85759 logs.go:123] Gathering logs for kubelet ...
	I1104 12:12:10.808403   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:12:10.876130   85759 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:12:10.876165   85759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1104 12:12:08.890463   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:12:08.904272   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:12:08.904354   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:12:08.935677   86402 cri.go:89] found id: ""
	I1104 12:12:08.935701   86402 logs.go:282] 0 containers: []
	W1104 12:12:08.935710   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:12:08.935715   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:12:08.935761   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:12:08.966969   86402 cri.go:89] found id: ""
	I1104 12:12:08.966993   86402 logs.go:282] 0 containers: []
	W1104 12:12:08.967004   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:12:08.967011   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:12:08.967072   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:12:08.998753   86402 cri.go:89] found id: ""
	I1104 12:12:08.998778   86402 logs.go:282] 0 containers: []
	W1104 12:12:08.998786   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:12:08.998790   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:12:08.998852   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:12:09.031901   86402 cri.go:89] found id: ""
	I1104 12:12:09.031925   86402 logs.go:282] 0 containers: []
	W1104 12:12:09.031934   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:12:09.031940   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:12:09.032000   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:12:09.071478   86402 cri.go:89] found id: ""
	I1104 12:12:09.071500   86402 logs.go:282] 0 containers: []
	W1104 12:12:09.071508   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:12:09.071513   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:12:09.071564   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:12:09.107593   86402 cri.go:89] found id: ""
	I1104 12:12:09.107621   86402 logs.go:282] 0 containers: []
	W1104 12:12:09.107629   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:12:09.107635   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:12:09.107693   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:12:09.140899   86402 cri.go:89] found id: ""
	I1104 12:12:09.140923   86402 logs.go:282] 0 containers: []
	W1104 12:12:09.140934   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:12:09.140942   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:12:09.141000   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:12:09.174279   86402 cri.go:89] found id: ""
	I1104 12:12:09.174307   86402 logs.go:282] 0 containers: []
	W1104 12:12:09.174318   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:12:09.174330   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:12:09.174405   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:12:09.226340   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:12:09.226371   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:12:09.239573   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:12:09.239600   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:12:09.306180   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:12:09.306201   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:12:09.306212   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:12:09.385039   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:12:09.385072   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:12:13.475909   85759 system_pods.go:59] 8 kube-system pods found
	I1104 12:12:13.475946   85759 system_pods.go:61] "coredns-7c65d6cfc9-mf8xg" [c0162005-7971-4161-9575-9f36c13d54f2] Running
	I1104 12:12:13.475954   85759 system_pods.go:61] "etcd-embed-certs-325116" [4cfeeefb-d7e4-48b6-bea0-e9d967750770] Running
	I1104 12:12:13.475960   85759 system_pods.go:61] "kube-apiserver-embed-certs-325116" [69ad8209-af11-4479-b4f7-9991f98d74b9] Running
	I1104 12:12:13.475965   85759 system_pods.go:61] "kube-controller-manager-embed-certs-325116" [1ba1fbaf-e1e2-4ca7-aef5-84c4410143c4] Running
	I1104 12:12:13.475970   85759 system_pods.go:61] "kube-proxy-phzgx" [4ea64f2c-7568-486d-9941-f89ed4221f35] Running
	I1104 12:12:13.475975   85759 system_pods.go:61] "kube-scheduler-embed-certs-325116" [168359e4-eda1-4fb6-ab45-03e888466702] Running
	I1104 12:12:13.475985   85759 system_pods.go:61] "metrics-server-6867b74b74-knfd4" [5b3ef856-5b69-44b1-ae29-4a58bf235e41] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1104 12:12:13.475994   85759 system_pods.go:61] "storage-provisioner" [0dabcf5a-028b-4ab6-8af4-be25abaeb9b5] Running
	I1104 12:12:13.476008   85759 system_pods.go:74] duration metric: took 3.773548162s to wait for pod list to return data ...
	I1104 12:12:13.476020   85759 default_sa.go:34] waiting for default service account to be created ...
	I1104 12:12:13.478598   85759 default_sa.go:45] found service account: "default"
	I1104 12:12:13.478618   85759 default_sa.go:55] duration metric: took 2.591186ms for default service account to be created ...
	I1104 12:12:13.478628   85759 system_pods.go:116] waiting for k8s-apps to be running ...
	I1104 12:12:13.483285   85759 system_pods.go:86] 8 kube-system pods found
	I1104 12:12:13.483308   85759 system_pods.go:89] "coredns-7c65d6cfc9-mf8xg" [c0162005-7971-4161-9575-9f36c13d54f2] Running
	I1104 12:12:13.483314   85759 system_pods.go:89] "etcd-embed-certs-325116" [4cfeeefb-d7e4-48b6-bea0-e9d967750770] Running
	I1104 12:12:13.483318   85759 system_pods.go:89] "kube-apiserver-embed-certs-325116" [69ad8209-af11-4479-b4f7-9991f98d74b9] Running
	I1104 12:12:13.483322   85759 system_pods.go:89] "kube-controller-manager-embed-certs-325116" [1ba1fbaf-e1e2-4ca7-aef5-84c4410143c4] Running
	I1104 12:12:13.483325   85759 system_pods.go:89] "kube-proxy-phzgx" [4ea64f2c-7568-486d-9941-f89ed4221f35] Running
	I1104 12:12:13.483329   85759 system_pods.go:89] "kube-scheduler-embed-certs-325116" [168359e4-eda1-4fb6-ab45-03e888466702] Running
	I1104 12:12:13.483336   85759 system_pods.go:89] "metrics-server-6867b74b74-knfd4" [5b3ef856-5b69-44b1-ae29-4a58bf235e41] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1104 12:12:13.483340   85759 system_pods.go:89] "storage-provisioner" [0dabcf5a-028b-4ab6-8af4-be25abaeb9b5] Running
	I1104 12:12:13.483347   85759 system_pods.go:126] duration metric: took 4.713256ms to wait for k8s-apps to be running ...
	I1104 12:12:13.483355   85759 system_svc.go:44] waiting for kubelet service to be running ....
	I1104 12:12:13.483398   85759 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1104 12:12:13.497748   85759 system_svc.go:56] duration metric: took 14.381722ms WaitForService to wait for kubelet
	I1104 12:12:13.497812   85759 kubeadm.go:582] duration metric: took 4m23.411218278s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1104 12:12:13.497843   85759 node_conditions.go:102] verifying NodePressure condition ...
	I1104 12:12:13.500813   85759 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1104 12:12:13.500833   85759 node_conditions.go:123] node cpu capacity is 2
	I1104 12:12:13.500843   85759 node_conditions.go:105] duration metric: took 2.993771ms to run NodePressure ...
	I1104 12:12:13.500854   85759 start.go:241] waiting for startup goroutines ...
	I1104 12:12:13.500860   85759 start.go:246] waiting for cluster config update ...
	I1104 12:12:13.500870   85759 start.go:255] writing updated cluster config ...
	I1104 12:12:13.501122   85759 ssh_runner.go:195] Run: rm -f paused
	I1104 12:12:13.548293   85759 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1104 12:12:13.550203   85759 out.go:177] * Done! kubectl is now configured to use "embed-certs-325116" cluster and "default" namespace by default
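The readiness sequence logged above for the embed-certs-325116 profile (list kube-system pods, confirm the default service account, check the kubelet unit) can be approximated from outside the test harness with plain kubectl/systemctl calls. The sketch below is illustrative only and is not minikube's own code; the file name and helper are hypothetical, the context name comes from the log, and kubectl/systemctl are assumed to be on PATH (the systemctl check would run on the node, e.g. via `minikube ssh`).

// readiness_check.go: minimal sketch of the post-start checks seen in the log.
package main

import (
	"fmt"
	"os/exec"
)

// run executes a command and surfaces its combined output on failure.
func run(name string, args ...string) error {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%s %v: %v\n%s", name, args, err, out)
	}
	fmt.Printf("%s", out)
	return nil
}

func main() {
	// Mirror the "waiting for k8s-apps to be running" step: list kube-system pods.
	if err := run("kubectl", "--context", "embed-certs-325116", "get", "pods", "-n", "kube-system"); err != nil {
		fmt.Println(err)
	}
	// Mirror the default_sa.go step: the "default" service account must exist.
	if err := run("kubectl", "--context", "embed-certs-325116", "get", "serviceaccount", "default", "-n", "default"); err != nil {
		fmt.Println(err)
	}
	// Mirror the system_svc.go step: kubelet must be active (run on the node itself).
	if err := run("systemctl", "is-active", "--quiet", "kubelet"); err != nil {
		fmt.Println(err)
	}
}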
	I1104 12:12:10.707746   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:12.708477   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:11.555266   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:13.555498   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:11.924105   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:12:11.936623   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:12:11.936685   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:12:11.968026   86402 cri.go:89] found id: ""
	I1104 12:12:11.968056   86402 logs.go:282] 0 containers: []
	W1104 12:12:11.968067   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:12:11.968074   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:12:11.968139   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:12:12.001193   86402 cri.go:89] found id: ""
	I1104 12:12:12.001218   86402 logs.go:282] 0 containers: []
	W1104 12:12:12.001245   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:12:12.001252   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:12:12.001311   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:12:12.035167   86402 cri.go:89] found id: ""
	I1104 12:12:12.035190   86402 logs.go:282] 0 containers: []
	W1104 12:12:12.035199   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:12:12.035204   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:12:12.035250   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:12:12.068412   86402 cri.go:89] found id: ""
	I1104 12:12:12.068440   86402 logs.go:282] 0 containers: []
	W1104 12:12:12.068450   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:12:12.068458   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:12:12.068515   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:12:12.099965   86402 cri.go:89] found id: ""
	I1104 12:12:12.099991   86402 logs.go:282] 0 containers: []
	W1104 12:12:12.100002   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:12:12.100009   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:12:12.100066   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:12:12.133413   86402 cri.go:89] found id: ""
	I1104 12:12:12.133442   86402 logs.go:282] 0 containers: []
	W1104 12:12:12.133453   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:12:12.133460   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:12:12.133520   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:12:12.169007   86402 cri.go:89] found id: ""
	I1104 12:12:12.169036   86402 logs.go:282] 0 containers: []
	W1104 12:12:12.169046   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:12:12.169053   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:12:12.169112   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:12:12.200592   86402 cri.go:89] found id: ""
	I1104 12:12:12.200621   86402 logs.go:282] 0 containers: []
	W1104 12:12:12.200635   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:12:12.200643   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:12:12.200657   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:12:12.244609   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:12:12.244644   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:12:12.299770   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:12:12.299804   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:12:12.324354   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:12:12.324395   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:12:12.385605   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:12:12.385632   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:12:12.385661   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:12:14.964867   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:12:14.977918   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:12:14.977991   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:12:15.012865   86402 cri.go:89] found id: ""
	I1104 12:12:15.012894   86402 logs.go:282] 0 containers: []
	W1104 12:12:15.012906   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:12:15.012913   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:12:15.012977   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:12:15.046548   86402 cri.go:89] found id: ""
	I1104 12:12:15.046574   86402 logs.go:282] 0 containers: []
	W1104 12:12:15.046583   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:12:15.046589   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:12:15.046636   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:12:15.079310   86402 cri.go:89] found id: ""
	I1104 12:12:15.079336   86402 logs.go:282] 0 containers: []
	W1104 12:12:15.079347   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:12:15.079353   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:12:15.079412   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:12:15.110595   86402 cri.go:89] found id: ""
	I1104 12:12:15.110625   86402 logs.go:282] 0 containers: []
	W1104 12:12:15.110636   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:12:15.110648   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:12:15.110716   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:12:15.143362   86402 cri.go:89] found id: ""
	I1104 12:12:15.143391   86402 logs.go:282] 0 containers: []
	W1104 12:12:15.143403   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:12:15.143410   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:12:15.143533   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:12:15.173973   86402 cri.go:89] found id: ""
	I1104 12:12:15.174000   86402 logs.go:282] 0 containers: []
	W1104 12:12:15.174009   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:12:15.174017   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:12:15.174081   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:12:15.205021   86402 cri.go:89] found id: ""
	I1104 12:12:15.205049   86402 logs.go:282] 0 containers: []
	W1104 12:12:15.205060   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:12:15.205067   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:12:15.205113   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:12:15.240190   86402 cri.go:89] found id: ""
	I1104 12:12:15.240220   86402 logs.go:282] 0 containers: []
	W1104 12:12:15.240231   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:12:15.240249   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:12:15.240263   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:12:15.290208   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:12:15.290241   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:12:15.305216   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:12:15.305258   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:12:15.375713   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:12:15.375735   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:12:15.375746   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:12:15.456517   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:12:15.456552   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:12:15.209380   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:17.708299   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:16.056359   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:18.556166   86301 pod_ready.go:103] pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:20.050834   86301 pod_ready.go:82] duration metric: took 4m0.001048639s for pod "metrics-server-6867b74b74-2wl94" in "kube-system" namespace to be "Ready" ...
	E1104 12:12:20.050863   86301 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1104 12:12:20.050874   86301 pod_ready.go:39] duration metric: took 4m5.585310983s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
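The four-minute wait that just expired can be reproduced outside the harness with `kubectl wait` against the pod named in the log. This is a hedged stand-in, not the test code: the file name is hypothetical, and the pod and context names are copied from the log lines above (adjust them for another profile).

// pod_wait.go: illustrative equivalent of the Ready wait that timed out above.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("kubectl", "--context", "default-k8s-diff-port-036892",
		"wait", "--for=condition=Ready",
		"pod/metrics-server-6867b74b74-2wl94", "-n", "kube-system", "--timeout=240s")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		// The pod never became Ready within the timeout, so kubectl exits non-zero,
		// matching the "context deadline exceeded" result in the log.
		fmt.Println("wait failed:", err)
	}
}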
	I1104 12:12:20.050889   86301 api_server.go:52] waiting for apiserver process to appear ...
	I1104 12:12:20.050919   86301 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:12:20.050968   86301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:12:20.088440   86301 cri.go:89] found id: "2e1787441f88b7be6fbfb08abfa621dbc26e984bbd40544c92bf02bae7c7709a"
	I1104 12:12:20.088466   86301 cri.go:89] found id: ""
	I1104 12:12:20.088476   86301 logs.go:282] 1 containers: [2e1787441f88b7be6fbfb08abfa621dbc26e984bbd40544c92bf02bae7c7709a]
	I1104 12:12:20.088523   86301 ssh_runner.go:195] Run: which crictl
	I1104 12:12:20.092502   86301 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:12:20.092575   86301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:12:20.126599   86301 cri.go:89] found id: "1bc906f9e4e940157a2bb10397ebc3ba90f93123792d5de597633f6c5b3c64d7"
	I1104 12:12:20.126621   86301 cri.go:89] found id: ""
	I1104 12:12:20.126629   86301 logs.go:282] 1 containers: [1bc906f9e4e940157a2bb10397ebc3ba90f93123792d5de597633f6c5b3c64d7]
	I1104 12:12:20.126687   86301 ssh_runner.go:195] Run: which crictl
	I1104 12:12:20.130617   86301 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:12:20.130686   86301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:12:20.169664   86301 cri.go:89] found id: "51442200af1bb501b83cd7280eeee93590e5cb241e91cf5af1608a6eccfdf5a1"
	I1104 12:12:20.169687   86301 cri.go:89] found id: ""
	I1104 12:12:20.169696   86301 logs.go:282] 1 containers: [51442200af1bb501b83cd7280eeee93590e5cb241e91cf5af1608a6eccfdf5a1]
	I1104 12:12:20.169750   86301 ssh_runner.go:195] Run: which crictl
	I1104 12:12:20.173881   86301 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:12:20.173920   86301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:12:20.209271   86301 cri.go:89] found id: "c33ea99d25624abd85dcac43e17e1dd6dcea3a7b6333e91e8dc99ad02c037e07"
	I1104 12:12:20.209292   86301 cri.go:89] found id: ""
	I1104 12:12:20.209299   86301 logs.go:282] 1 containers: [c33ea99d25624abd85dcac43e17e1dd6dcea3a7b6333e91e8dc99ad02c037e07]
	I1104 12:12:20.209354   86301 ssh_runner.go:195] Run: which crictl
	I1104 12:12:20.214187   86301 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:12:20.214254   86301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:12:20.248683   86301 cri.go:89] found id: "9e60ae78d5610ae63ce7016b58d60da05791a055324eea67efd0feb374bdd4b4"
	I1104 12:12:20.248702   86301 cri.go:89] found id: ""
	I1104 12:12:20.248709   86301 logs.go:282] 1 containers: [9e60ae78d5610ae63ce7016b58d60da05791a055324eea67efd0feb374bdd4b4]
	I1104 12:12:20.248757   86301 ssh_runner.go:195] Run: which crictl
	I1104 12:12:20.252501   86301 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:12:20.252574   86301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:12:20.286367   86301 cri.go:89] found id: "1346cefb5059440d0c41bba9a5526748e6c783b8f935f34dcf419f728abcd35e"
	I1104 12:12:20.286406   86301 cri.go:89] found id: ""
	I1104 12:12:20.286415   86301 logs.go:282] 1 containers: [1346cefb5059440d0c41bba9a5526748e6c783b8f935f34dcf419f728abcd35e]
	I1104 12:12:20.286491   86301 ssh_runner.go:195] Run: which crictl
	I1104 12:12:17.992855   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:12:18.011370   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:12:18.011446   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:12:18.054937   86402 cri.go:89] found id: ""
	I1104 12:12:18.054961   86402 logs.go:282] 0 containers: []
	W1104 12:12:18.054968   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:12:18.054974   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:12:18.055026   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:12:18.107769   86402 cri.go:89] found id: ""
	I1104 12:12:18.107802   86402 logs.go:282] 0 containers: []
	W1104 12:12:18.107814   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:12:18.107821   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:12:18.107887   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:12:18.141932   86402 cri.go:89] found id: ""
	I1104 12:12:18.141959   86402 logs.go:282] 0 containers: []
	W1104 12:12:18.141968   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:12:18.141974   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:12:18.142021   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:12:18.174322   86402 cri.go:89] found id: ""
	I1104 12:12:18.174345   86402 logs.go:282] 0 containers: []
	W1104 12:12:18.174353   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:12:18.174361   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:12:18.174514   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:12:18.206742   86402 cri.go:89] found id: ""
	I1104 12:12:18.206766   86402 logs.go:282] 0 containers: []
	W1104 12:12:18.206776   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:12:18.206782   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:12:18.206840   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:12:18.240322   86402 cri.go:89] found id: ""
	I1104 12:12:18.240345   86402 logs.go:282] 0 containers: []
	W1104 12:12:18.240358   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:12:18.240363   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:12:18.240420   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:12:18.272081   86402 cri.go:89] found id: ""
	I1104 12:12:18.272110   86402 logs.go:282] 0 containers: []
	W1104 12:12:18.272121   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:12:18.272128   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:12:18.272211   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:12:18.308604   86402 cri.go:89] found id: ""
	I1104 12:12:18.308629   86402 logs.go:282] 0 containers: []
	W1104 12:12:18.308637   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:12:18.308646   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:12:18.308655   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:12:18.392854   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:12:18.392892   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:12:18.429632   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:12:18.429665   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:12:18.481082   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:12:18.481120   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:12:18.494730   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:12:18.494758   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:12:18.562098   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:12:21.063223   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:12:21.075655   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:12:21.075714   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:12:21.117762   86402 cri.go:89] found id: ""
	I1104 12:12:21.117794   86402 logs.go:282] 0 containers: []
	W1104 12:12:21.117807   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:12:21.117817   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:12:21.117881   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:12:21.153256   86402 cri.go:89] found id: ""
	I1104 12:12:21.153281   86402 logs.go:282] 0 containers: []
	W1104 12:12:21.153289   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:12:21.153295   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:12:21.153355   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:12:21.191477   86402 cri.go:89] found id: ""
	I1104 12:12:21.191519   86402 logs.go:282] 0 containers: []
	W1104 12:12:21.191539   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:12:21.191547   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:12:21.191618   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:12:21.228378   86402 cri.go:89] found id: ""
	I1104 12:12:21.228411   86402 logs.go:282] 0 containers: []
	W1104 12:12:21.228424   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:12:21.228431   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:12:21.228495   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:12:21.265452   86402 cri.go:89] found id: ""
	I1104 12:12:21.265483   86402 logs.go:282] 0 containers: []
	W1104 12:12:21.265493   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:12:21.265501   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:12:21.265561   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:12:21.301073   86402 cri.go:89] found id: ""
	I1104 12:12:21.301099   86402 logs.go:282] 0 containers: []
	W1104 12:12:21.301108   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:12:21.301114   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:12:21.301182   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:12:21.337952   86402 cri.go:89] found id: ""
	I1104 12:12:21.337977   86402 logs.go:282] 0 containers: []
	W1104 12:12:21.337986   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:12:21.337996   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:12:21.338053   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:12:21.371895   86402 cri.go:89] found id: ""
	I1104 12:12:21.371920   86402 logs.go:282] 0 containers: []
	W1104 12:12:21.371929   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:12:21.371937   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:12:21.371950   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:12:21.429757   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:12:21.429789   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:12:21.444365   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:12:21.444418   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:12:21.510971   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:12:21.510990   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:12:21.511002   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:12:21.593605   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:12:21.593639   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
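The repeated "connection to the server localhost:8443 was refused" failures above are consistent with the empty container listings for this node: every `crictl ps --name=kube-apiserver` query returns no containers, so nothing is serving the apiserver port when `kubectl describe nodes` runs. A quick way to confirm that from the node is to probe the port directly; the sketch below is illustrative (file name hypothetical), not part of the test code, and assumes it is run on the affected node.

// apiserver_probe.go: check whether anything is listening on the apiserver port.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		// Matches the refused connections reported by kubectl in the log.
		fmt.Println("apiserver port not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on localhost:8443")
}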
	I1104 12:12:20.208004   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:22.706901   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:24.708795   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:20.290832   86301 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:12:20.290885   86301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:12:20.324359   86301 cri.go:89] found id: ""
	I1104 12:12:20.324383   86301 logs.go:282] 0 containers: []
	W1104 12:12:20.324391   86301 logs.go:284] No container was found matching "kindnet"
	I1104 12:12:20.324397   86301 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1104 12:12:20.324442   86301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1104 12:12:20.364466   86301 cri.go:89] found id: "9e9ecf7280a07a47b274097b461b9f3467e388751471e9da1adfae3166380516"
	I1104 12:12:20.364488   86301 cri.go:89] found id: "f8d8096ede6a877ac87491c90844f6cccb8fad4309f4a01e86bf1f7e3a4c9823"
	I1104 12:12:20.364492   86301 cri.go:89] found id: ""
	I1104 12:12:20.364500   86301 logs.go:282] 2 containers: [9e9ecf7280a07a47b274097b461b9f3467e388751471e9da1adfae3166380516 f8d8096ede6a877ac87491c90844f6cccb8fad4309f4a01e86bf1f7e3a4c9823]
	I1104 12:12:20.364557   86301 ssh_runner.go:195] Run: which crictl
	I1104 12:12:20.368440   86301 ssh_runner.go:195] Run: which crictl
	I1104 12:12:20.371967   86301 logs.go:123] Gathering logs for kube-scheduler [c33ea99d25624abd85dcac43e17e1dd6dcea3a7b6333e91e8dc99ad02c037e07] ...
	I1104 12:12:20.371991   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c33ea99d25624abd85dcac43e17e1dd6dcea3a7b6333e91e8dc99ad02c037e07"
	I1104 12:12:20.405547   86301 logs.go:123] Gathering logs for kube-proxy [9e60ae78d5610ae63ce7016b58d60da05791a055324eea67efd0feb374bdd4b4] ...
	I1104 12:12:20.405572   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e60ae78d5610ae63ce7016b58d60da05791a055324eea67efd0feb374bdd4b4"
	I1104 12:12:20.446936   86301 logs.go:123] Gathering logs for storage-provisioner [9e9ecf7280a07a47b274097b461b9f3467e388751471e9da1adfae3166380516] ...
	I1104 12:12:20.446962   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e9ecf7280a07a47b274097b461b9f3467e388751471e9da1adfae3166380516"
	I1104 12:12:20.485811   86301 logs.go:123] Gathering logs for container status ...
	I1104 12:12:20.485838   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:12:20.530775   86301 logs.go:123] Gathering logs for kubelet ...
	I1104 12:12:20.530803   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:12:20.599495   86301 logs.go:123] Gathering logs for dmesg ...
	I1104 12:12:20.599542   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:12:20.614511   86301 logs.go:123] Gathering logs for kube-apiserver [2e1787441f88b7be6fbfb08abfa621dbc26e984bbd40544c92bf02bae7c7709a] ...
	I1104 12:12:20.614543   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2e1787441f88b7be6fbfb08abfa621dbc26e984bbd40544c92bf02bae7c7709a"
	I1104 12:12:20.659277   86301 logs.go:123] Gathering logs for coredns [51442200af1bb501b83cd7280eeee93590e5cb241e91cf5af1608a6eccfdf5a1] ...
	I1104 12:12:20.659316   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 51442200af1bb501b83cd7280eeee93590e5cb241e91cf5af1608a6eccfdf5a1"
	I1104 12:12:20.694675   86301 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:12:20.694707   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:12:21.187670   86301 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:12:21.187705   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1104 12:12:21.308477   86301 logs.go:123] Gathering logs for etcd [1bc906f9e4e940157a2bb10397ebc3ba90f93123792d5de597633f6c5b3c64d7] ...
	I1104 12:12:21.308501   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1bc906f9e4e940157a2bb10397ebc3ba90f93123792d5de597633f6c5b3c64d7"
	I1104 12:12:21.365526   86301 logs.go:123] Gathering logs for kube-controller-manager [1346cefb5059440d0c41bba9a5526748e6c783b8f935f34dcf419f728abcd35e] ...
	I1104 12:12:21.365562   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1346cefb5059440d0c41bba9a5526748e6c783b8f935f34dcf419f728abcd35e"
	I1104 12:12:21.431350   86301 logs.go:123] Gathering logs for storage-provisioner [f8d8096ede6a877ac87491c90844f6cccb8fad4309f4a01e86bf1f7e3a4c9823] ...
	I1104 12:12:21.431381   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f8d8096ede6a877ac87491c90844f6cccb8fad4309f4a01e86bf1f7e3a4c9823"
	I1104 12:12:23.969966   86301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:12:23.984866   86301 api_server.go:72] duration metric: took 4m16.75797908s to wait for apiserver process to appear ...
	I1104 12:12:23.984895   86301 api_server.go:88] waiting for apiserver healthz status ...
	I1104 12:12:23.984937   86301 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:12:23.984989   86301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:12:24.022326   86301 cri.go:89] found id: "2e1787441f88b7be6fbfb08abfa621dbc26e984bbd40544c92bf02bae7c7709a"
	I1104 12:12:24.022348   86301 cri.go:89] found id: ""
	I1104 12:12:24.022357   86301 logs.go:282] 1 containers: [2e1787441f88b7be6fbfb08abfa621dbc26e984bbd40544c92bf02bae7c7709a]
	I1104 12:12:24.022428   86301 ssh_runner.go:195] Run: which crictl
	I1104 12:12:24.027288   86301 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:12:24.027377   86301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:12:24.064963   86301 cri.go:89] found id: "1bc906f9e4e940157a2bb10397ebc3ba90f93123792d5de597633f6c5b3c64d7"
	I1104 12:12:24.064986   86301 cri.go:89] found id: ""
	I1104 12:12:24.064993   86301 logs.go:282] 1 containers: [1bc906f9e4e940157a2bb10397ebc3ba90f93123792d5de597633f6c5b3c64d7]
	I1104 12:12:24.065045   86301 ssh_runner.go:195] Run: which crictl
	I1104 12:12:24.072027   86301 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:12:24.072089   86301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:12:24.106618   86301 cri.go:89] found id: "51442200af1bb501b83cd7280eeee93590e5cb241e91cf5af1608a6eccfdf5a1"
	I1104 12:12:24.106648   86301 cri.go:89] found id: ""
	I1104 12:12:24.106659   86301 logs.go:282] 1 containers: [51442200af1bb501b83cd7280eeee93590e5cb241e91cf5af1608a6eccfdf5a1]
	I1104 12:12:24.106719   86301 ssh_runner.go:195] Run: which crictl
	I1104 12:12:24.110696   86301 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:12:24.110762   86301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:12:24.148575   86301 cri.go:89] found id: "c33ea99d25624abd85dcac43e17e1dd6dcea3a7b6333e91e8dc99ad02c037e07"
	I1104 12:12:24.148600   86301 cri.go:89] found id: ""
	I1104 12:12:24.148621   86301 logs.go:282] 1 containers: [c33ea99d25624abd85dcac43e17e1dd6dcea3a7b6333e91e8dc99ad02c037e07]
	I1104 12:12:24.148687   86301 ssh_runner.go:195] Run: which crictl
	I1104 12:12:24.152673   86301 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:12:24.152741   86301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:12:24.187739   86301 cri.go:89] found id: "9e60ae78d5610ae63ce7016b58d60da05791a055324eea67efd0feb374bdd4b4"
	I1104 12:12:24.187763   86301 cri.go:89] found id: ""
	I1104 12:12:24.187771   86301 logs.go:282] 1 containers: [9e60ae78d5610ae63ce7016b58d60da05791a055324eea67efd0feb374bdd4b4]
	I1104 12:12:24.187817   86301 ssh_runner.go:195] Run: which crictl
	I1104 12:12:24.191551   86301 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:12:24.191610   86301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:12:24.229634   86301 cri.go:89] found id: "1346cefb5059440d0c41bba9a5526748e6c783b8f935f34dcf419f728abcd35e"
	I1104 12:12:24.229656   86301 cri.go:89] found id: ""
	I1104 12:12:24.229667   86301 logs.go:282] 1 containers: [1346cefb5059440d0c41bba9a5526748e6c783b8f935f34dcf419f728abcd35e]
	I1104 12:12:24.229720   86301 ssh_runner.go:195] Run: which crictl
	I1104 12:12:24.234342   86301 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:12:24.234426   86301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:12:24.268339   86301 cri.go:89] found id: ""
	I1104 12:12:24.268363   86301 logs.go:282] 0 containers: []
	W1104 12:12:24.268370   86301 logs.go:284] No container was found matching "kindnet"
	I1104 12:12:24.268375   86301 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1104 12:12:24.268426   86301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1104 12:12:24.302347   86301 cri.go:89] found id: "9e9ecf7280a07a47b274097b461b9f3467e388751471e9da1adfae3166380516"
	I1104 12:12:24.302369   86301 cri.go:89] found id: "f8d8096ede6a877ac87491c90844f6cccb8fad4309f4a01e86bf1f7e3a4c9823"
	I1104 12:12:24.302374   86301 cri.go:89] found id: ""
	I1104 12:12:24.302382   86301 logs.go:282] 2 containers: [9e9ecf7280a07a47b274097b461b9f3467e388751471e9da1adfae3166380516 f8d8096ede6a877ac87491c90844f6cccb8fad4309f4a01e86bf1f7e3a4c9823]
	I1104 12:12:24.302446   86301 ssh_runner.go:195] Run: which crictl
	I1104 12:12:24.306761   86301 ssh_runner.go:195] Run: which crictl
	I1104 12:12:24.310867   86301 logs.go:123] Gathering logs for coredns [51442200af1bb501b83cd7280eeee93590e5cb241e91cf5af1608a6eccfdf5a1] ...
	I1104 12:12:24.310888   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 51442200af1bb501b83cd7280eeee93590e5cb241e91cf5af1608a6eccfdf5a1"
	I1104 12:12:24.353396   86301 logs.go:123] Gathering logs for kube-controller-manager [1346cefb5059440d0c41bba9a5526748e6c783b8f935f34dcf419f728abcd35e] ...
	I1104 12:12:24.353421   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1346cefb5059440d0c41bba9a5526748e6c783b8f935f34dcf419f728abcd35e"
	I1104 12:12:24.408025   86301 logs.go:123] Gathering logs for storage-provisioner [9e9ecf7280a07a47b274097b461b9f3467e388751471e9da1adfae3166380516] ...
	I1104 12:12:24.408054   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e9ecf7280a07a47b274097b461b9f3467e388751471e9da1adfae3166380516"
	I1104 12:12:24.446150   86301 logs.go:123] Gathering logs for container status ...
	I1104 12:12:24.446177   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:12:24.495479   86301 logs.go:123] Gathering logs for kubelet ...
	I1104 12:12:24.495505   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:12:24.568973   86301 logs.go:123] Gathering logs for dmesg ...
	I1104 12:12:24.569008   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:12:24.585522   86301 logs.go:123] Gathering logs for kube-apiserver [2e1787441f88b7be6fbfb08abfa621dbc26e984bbd40544c92bf02bae7c7709a] ...
	I1104 12:12:24.585552   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2e1787441f88b7be6fbfb08abfa621dbc26e984bbd40544c92bf02bae7c7709a"
	I1104 12:12:24.630483   86301 logs.go:123] Gathering logs for etcd [1bc906f9e4e940157a2bb10397ebc3ba90f93123792d5de597633f6c5b3c64d7] ...
	I1104 12:12:24.630516   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1bc906f9e4e940157a2bb10397ebc3ba90f93123792d5de597633f6c5b3c64d7"
	I1104 12:12:24.675828   86301 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:12:24.675865   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:12:25.094412   86301 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:12:25.094457   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1104 12:12:25.191547   86301 logs.go:123] Gathering logs for kube-scheduler [c33ea99d25624abd85dcac43e17e1dd6dcea3a7b6333e91e8dc99ad02c037e07] ...
	I1104 12:12:25.191576   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c33ea99d25624abd85dcac43e17e1dd6dcea3a7b6333e91e8dc99ad02c037e07"
	I1104 12:12:25.227482   86301 logs.go:123] Gathering logs for kube-proxy [9e60ae78d5610ae63ce7016b58d60da05791a055324eea67efd0feb374bdd4b4] ...
	I1104 12:12:25.227509   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e60ae78d5610ae63ce7016b58d60da05791a055324eea67efd0feb374bdd4b4"
	I1104 12:12:25.261150   86301 logs.go:123] Gathering logs for storage-provisioner [f8d8096ede6a877ac87491c90844f6cccb8fad4309f4a01e86bf1f7e3a4c9823] ...
	I1104 12:12:25.261184   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f8d8096ede6a877ac87491c90844f6cccb8fad4309f4a01e86bf1f7e3a4c9823"
	I1104 12:12:24.130961   86402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:12:24.143387   86402 kubeadm.go:597] duration metric: took 4m4.25221988s to restartPrimaryControlPlane
	W1104 12:12:24.143472   86402 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1104 12:12:24.143499   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1104 12:12:27.207964   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:29.208705   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:27.799329   86301 api_server.go:253] Checking apiserver healthz at https://192.168.72.130:8444/healthz ...
	I1104 12:12:27.803543   86301 api_server.go:279] https://192.168.72.130:8444/healthz returned 200:
	ok
	I1104 12:12:27.804545   86301 api_server.go:141] control plane version: v1.31.2
	I1104 12:12:27.804568   86301 api_server.go:131] duration metric: took 3.819666619s to wait for apiserver health ...
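The healthz probe logged here is a plain HTTPS GET that is expected to return 200 with the body "ok". A minimal sketch follows, with the URL taken from the log; the file name is hypothetical, and InsecureSkipVerify is used only to keep the example short (a faithful client would trust the cluster CA instead).

// healthz_probe.go: minimal sketch of the apiserver health check seen above.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.72.130:8444/healthz")
	if err != nil {
		fmt.Println("healthz request failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// A healthy apiserver answers "200: ok", as in the log lines above.
	fmt.Printf("%d: %s\n", resp.StatusCode, body)
}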
	I1104 12:12:27.804576   86301 system_pods.go:43] waiting for kube-system pods to appear ...
	I1104 12:12:27.804596   86301 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:12:27.804639   86301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:12:27.842317   86301 cri.go:89] found id: "2e1787441f88b7be6fbfb08abfa621dbc26e984bbd40544c92bf02bae7c7709a"
	I1104 12:12:27.842339   86301 cri.go:89] found id: ""
	I1104 12:12:27.842348   86301 logs.go:282] 1 containers: [2e1787441f88b7be6fbfb08abfa621dbc26e984bbd40544c92bf02bae7c7709a]
	I1104 12:12:27.842403   86301 ssh_runner.go:195] Run: which crictl
	I1104 12:12:27.846107   86301 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:12:27.846167   86301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:12:27.878833   86301 cri.go:89] found id: "1bc906f9e4e940157a2bb10397ebc3ba90f93123792d5de597633f6c5b3c64d7"
	I1104 12:12:27.878854   86301 cri.go:89] found id: ""
	I1104 12:12:27.878864   86301 logs.go:282] 1 containers: [1bc906f9e4e940157a2bb10397ebc3ba90f93123792d5de597633f6c5b3c64d7]
	I1104 12:12:27.878923   86301 ssh_runner.go:195] Run: which crictl
	I1104 12:12:27.882562   86301 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:12:27.882614   86301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:12:27.914077   86301 cri.go:89] found id: "51442200af1bb501b83cd7280eeee93590e5cb241e91cf5af1608a6eccfdf5a1"
	I1104 12:12:27.914098   86301 cri.go:89] found id: ""
	I1104 12:12:27.914106   86301 logs.go:282] 1 containers: [51442200af1bb501b83cd7280eeee93590e5cb241e91cf5af1608a6eccfdf5a1]
	I1104 12:12:27.914150   86301 ssh_runner.go:195] Run: which crictl
	I1104 12:12:27.917756   86301 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:12:27.917807   86301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:12:27.949534   86301 cri.go:89] found id: "c33ea99d25624abd85dcac43e17e1dd6dcea3a7b6333e91e8dc99ad02c037e07"
	I1104 12:12:27.949555   86301 cri.go:89] found id: ""
	I1104 12:12:27.949562   86301 logs.go:282] 1 containers: [c33ea99d25624abd85dcac43e17e1dd6dcea3a7b6333e91e8dc99ad02c037e07]
	I1104 12:12:27.949606   86301 ssh_runner.go:195] Run: which crictl
	I1104 12:12:27.953176   86301 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:12:27.953235   86301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:12:27.984491   86301 cri.go:89] found id: "9e60ae78d5610ae63ce7016b58d60da05791a055324eea67efd0feb374bdd4b4"
	I1104 12:12:27.984509   86301 cri.go:89] found id: ""
	I1104 12:12:27.984516   86301 logs.go:282] 1 containers: [9e60ae78d5610ae63ce7016b58d60da05791a055324eea67efd0feb374bdd4b4]
	I1104 12:12:27.984566   86301 ssh_runner.go:195] Run: which crictl
	I1104 12:12:27.988283   86301 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:12:27.988342   86301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:12:28.022752   86301 cri.go:89] found id: "1346cefb5059440d0c41bba9a5526748e6c783b8f935f34dcf419f728abcd35e"
	I1104 12:12:28.022775   86301 cri.go:89] found id: ""
	I1104 12:12:28.022783   86301 logs.go:282] 1 containers: [1346cefb5059440d0c41bba9a5526748e6c783b8f935f34dcf419f728abcd35e]
	I1104 12:12:28.022829   86301 ssh_runner.go:195] Run: which crictl
	I1104 12:12:28.026702   86301 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:12:28.026767   86301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:12:28.062501   86301 cri.go:89] found id: ""
	I1104 12:12:28.062534   86301 logs.go:282] 0 containers: []
	W1104 12:12:28.062545   86301 logs.go:284] No container was found matching "kindnet"
	I1104 12:12:28.062556   86301 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1104 12:12:28.062608   86301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1104 12:12:28.097167   86301 cri.go:89] found id: "9e9ecf7280a07a47b274097b461b9f3467e388751471e9da1adfae3166380516"
	I1104 12:12:28.097195   86301 cri.go:89] found id: "f8d8096ede6a877ac87491c90844f6cccb8fad4309f4a01e86bf1f7e3a4c9823"
	I1104 12:12:28.097201   86301 cri.go:89] found id: ""
	I1104 12:12:28.097211   86301 logs.go:282] 2 containers: [9e9ecf7280a07a47b274097b461b9f3467e388751471e9da1adfae3166380516 f8d8096ede6a877ac87491c90844f6cccb8fad4309f4a01e86bf1f7e3a4c9823]
	I1104 12:12:28.097276   86301 ssh_runner.go:195] Run: which crictl
	I1104 12:12:28.101192   86301 ssh_runner.go:195] Run: which crictl
	I1104 12:12:28.104712   86301 logs.go:123] Gathering logs for dmesg ...
	I1104 12:12:28.104731   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:12:28.118886   86301 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:12:28.118911   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1104 12:12:28.220480   86301 logs.go:123] Gathering logs for etcd [1bc906f9e4e940157a2bb10397ebc3ba90f93123792d5de597633f6c5b3c64d7] ...
	I1104 12:12:28.220512   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1bc906f9e4e940157a2bb10397ebc3ba90f93123792d5de597633f6c5b3c64d7"
	I1104 12:12:28.264205   86301 logs.go:123] Gathering logs for coredns [51442200af1bb501b83cd7280eeee93590e5cb241e91cf5af1608a6eccfdf5a1] ...
	I1104 12:12:28.264239   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 51442200af1bb501b83cd7280eeee93590e5cb241e91cf5af1608a6eccfdf5a1"
	I1104 12:12:28.299241   86301 logs.go:123] Gathering logs for kube-scheduler [c33ea99d25624abd85dcac43e17e1dd6dcea3a7b6333e91e8dc99ad02c037e07] ...
	I1104 12:12:28.299274   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c33ea99d25624abd85dcac43e17e1dd6dcea3a7b6333e91e8dc99ad02c037e07"
	I1104 12:12:28.339817   86301 logs.go:123] Gathering logs for kube-proxy [9e60ae78d5610ae63ce7016b58d60da05791a055324eea67efd0feb374bdd4b4] ...
	I1104 12:12:28.339847   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e60ae78d5610ae63ce7016b58d60da05791a055324eea67efd0feb374bdd4b4"
	I1104 12:12:28.377987   86301 logs.go:123] Gathering logs for container status ...
	I1104 12:12:28.378014   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:12:28.416746   86301 logs.go:123] Gathering logs for kubelet ...
	I1104 12:12:28.416772   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:12:28.484743   86301 logs.go:123] Gathering logs for kube-apiserver [2e1787441f88b7be6fbfb08abfa621dbc26e984bbd40544c92bf02bae7c7709a] ...
	I1104 12:12:28.484777   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2e1787441f88b7be6fbfb08abfa621dbc26e984bbd40544c92bf02bae7c7709a"
	I1104 12:12:28.532089   86301 logs.go:123] Gathering logs for kube-controller-manager [1346cefb5059440d0c41bba9a5526748e6c783b8f935f34dcf419f728abcd35e] ...
	I1104 12:12:28.532128   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1346cefb5059440d0c41bba9a5526748e6c783b8f935f34dcf419f728abcd35e"
	I1104 12:12:28.589039   86301 logs.go:123] Gathering logs for storage-provisioner [9e9ecf7280a07a47b274097b461b9f3467e388751471e9da1adfae3166380516] ...
	I1104 12:12:28.589072   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e9ecf7280a07a47b274097b461b9f3467e388751471e9da1adfae3166380516"
	I1104 12:12:28.623955   86301 logs.go:123] Gathering logs for storage-provisioner [f8d8096ede6a877ac87491c90844f6cccb8fad4309f4a01e86bf1f7e3a4c9823] ...
	I1104 12:12:28.623987   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f8d8096ede6a877ac87491c90844f6cccb8fad4309f4a01e86bf1f7e3a4c9823"
	I1104 12:12:28.657953   86301 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:12:28.657986   86301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:12:31.547595   86301 system_pods.go:59] 8 kube-system pods found
	I1104 12:12:31.547624   86301 system_pods.go:61] "coredns-7c65d6cfc9-zw2tv" [71ce75a4-f051-4014-9ed0-7b275ea940a9] Running
	I1104 12:12:31.547629   86301 system_pods.go:61] "etcd-default-k8s-diff-port-036892" [7e46d97c-96b5-4301-b98a-f33b69937411] Running
	I1104 12:12:31.547633   86301 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-036892" [483cebd0-7ceb-4bf4-b1f9-e33be61b136e] Running
	I1104 12:12:31.547637   86301 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-036892" [c2dc4343-177a-4a4c-8a25-47078ec664f1] Running
	I1104 12:12:31.547640   86301 system_pods.go:61] "kube-proxy-j2srm" [9450cebd-aefb-4f1a-bb99-7d1dab054dd7] Running
	I1104 12:12:31.547643   86301 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-036892" [505d8202-5e02-4abd-9eff-163810a91eb2] Running
	I1104 12:12:31.547649   86301 system_pods.go:61] "metrics-server-6867b74b74-2wl94" [7f7cc9c1-420c-480e-b6b7-1a2027bf2f9d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1104 12:12:31.547653   86301 system_pods.go:61] "storage-provisioner" [18745f89-fc15-4a4c-b68b-7e80cd4f393b] Running
	I1104 12:12:31.547661   86301 system_pods.go:74] duration metric: took 3.743079115s to wait for pod list to return data ...
	I1104 12:12:31.547667   86301 default_sa.go:34] waiting for default service account to be created ...
	I1104 12:12:31.550088   86301 default_sa.go:45] found service account: "default"
	I1104 12:12:31.550108   86301 default_sa.go:55] duration metric: took 2.435317ms for default service account to be created ...
	I1104 12:12:31.550114   86301 system_pods.go:116] waiting for k8s-apps to be running ...
	I1104 12:12:31.554898   86301 system_pods.go:86] 8 kube-system pods found
	I1104 12:12:31.554924   86301 system_pods.go:89] "coredns-7c65d6cfc9-zw2tv" [71ce75a4-f051-4014-9ed0-7b275ea940a9] Running
	I1104 12:12:31.554929   86301 system_pods.go:89] "etcd-default-k8s-diff-port-036892" [7e46d97c-96b5-4301-b98a-f33b69937411] Running
	I1104 12:12:31.554933   86301 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-036892" [483cebd0-7ceb-4bf4-b1f9-e33be61b136e] Running
	I1104 12:12:31.554937   86301 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-036892" [c2dc4343-177a-4a4c-8a25-47078ec664f1] Running
	I1104 12:12:31.554941   86301 system_pods.go:89] "kube-proxy-j2srm" [9450cebd-aefb-4f1a-bb99-7d1dab054dd7] Running
	I1104 12:12:31.554945   86301 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-036892" [505d8202-5e02-4abd-9eff-163810a91eb2] Running
	I1104 12:12:31.554952   86301 system_pods.go:89] "metrics-server-6867b74b74-2wl94" [7f7cc9c1-420c-480e-b6b7-1a2027bf2f9d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1104 12:12:31.554955   86301 system_pods.go:89] "storage-provisioner" [18745f89-fc15-4a4c-b68b-7e80cd4f393b] Running
	I1104 12:12:31.554962   86301 system_pods.go:126] duration metric: took 4.842911ms to wait for k8s-apps to be running ...
	I1104 12:12:31.554968   86301 system_svc.go:44] waiting for kubelet service to be running ....
	I1104 12:12:31.555008   86301 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1104 12:12:31.568927   86301 system_svc.go:56] duration metric: took 13.948557ms WaitForService to wait for kubelet
	I1104 12:12:31.568958   86301 kubeadm.go:582] duration metric: took 4m24.342075873s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1104 12:12:31.568987   86301 node_conditions.go:102] verifying NodePressure condition ...
	I1104 12:12:31.571962   86301 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1104 12:12:31.571983   86301 node_conditions.go:123] node cpu capacity is 2
	I1104 12:12:31.571993   86301 node_conditions.go:105] duration metric: took 3.000591ms to run NodePressure ...
	I1104 12:12:31.572004   86301 start.go:241] waiting for startup goroutines ...
	I1104 12:12:31.572010   86301 start.go:246] waiting for cluster config update ...
	I1104 12:12:31.572019   86301 start.go:255] writing updated cluster config ...
	I1104 12:12:31.572277   86301 ssh_runner.go:195] Run: rm -f paused
	I1104 12:12:31.620935   86301 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1104 12:12:31.623672   86301 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-036892" cluster and "default" namespace by default
	I1104 12:12:28.876306   86402 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.732783523s)
	I1104 12:12:28.876377   86402 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1104 12:12:28.890455   86402 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1104 12:12:28.899660   86402 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1104 12:12:28.908658   86402 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1104 12:12:28.908675   86402 kubeadm.go:157] found existing configuration files:
	
	I1104 12:12:28.908715   86402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1104 12:12:28.916955   86402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1104 12:12:28.917013   86402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1104 12:12:28.927198   86402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1104 12:12:28.936868   86402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1104 12:12:28.936924   86402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1104 12:12:28.947246   86402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1104 12:12:28.956962   86402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1104 12:12:28.957015   86402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1104 12:12:28.967293   86402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1104 12:12:28.976975   86402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1104 12:12:28.977030   86402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1104 12:12:28.988547   86402 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1104 12:12:29.198333   86402 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1104 12:12:31.709511   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:34.207341   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:36.707962   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:39.208138   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:41.208806   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:43.707896   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:46.207316   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:48.707107   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:50.707644   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:52.708268   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:54.708517   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:57.206564   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:12:59.207122   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:13:01.207195   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:13:03.207617   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:13:05.707763   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:13:07.708314   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:13:09.708374   85500 pod_ready.go:103] pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace has status "Ready":"False"
	I1104 12:13:10.702085   85500 pod_ready.go:82] duration metric: took 4m0.000587313s for pod "metrics-server-6867b74b74-2lxlg" in "kube-system" namespace to be "Ready" ...
	E1104 12:13:10.702115   85500 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1104 12:13:10.702126   85500 pod_ready.go:39] duration metric: took 4m5.542549912s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1104 12:13:10.702144   85500 api_server.go:52] waiting for apiserver process to appear ...
	I1104 12:13:10.702191   85500 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:13:10.702246   85500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:13:10.743079   85500 cri.go:89] found id: "e74398c77b3ca0adb85872c2f97209b51f4c13968147f500a41590f72d758dea"
	I1104 12:13:10.743102   85500 cri.go:89] found id: ""
	I1104 12:13:10.743110   85500 logs.go:282] 1 containers: [e74398c77b3ca0adb85872c2f97209b51f4c13968147f500a41590f72d758dea]
	I1104 12:13:10.743176   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:13:10.747213   85500 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:13:10.747275   85500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:13:10.781435   85500 cri.go:89] found id: "1390676564c7e4039f73139668fc6a3c321c186b612f1f1837e21788a5c0aa82"
	I1104 12:13:10.781465   85500 cri.go:89] found id: ""
	I1104 12:13:10.781474   85500 logs.go:282] 1 containers: [1390676564c7e4039f73139668fc6a3c321c186b612f1f1837e21788a5c0aa82]
	I1104 12:13:10.781597   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:13:10.785383   85500 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:13:10.785453   85500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:13:10.825927   85500 cri.go:89] found id: "6dcd13443296397949a6314fd6007ac667c05b70bd43f14e2e9b54f3313440de"
	I1104 12:13:10.825956   85500 cri.go:89] found id: ""
	I1104 12:13:10.825965   85500 logs.go:282] 1 containers: [6dcd13443296397949a6314fd6007ac667c05b70bd43f14e2e9b54f3313440de]
	I1104 12:13:10.826023   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:13:10.829834   85500 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:13:10.829899   85500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:13:10.872447   85500 cri.go:89] found id: "5546d06c4d51e3fa5699a7351286904174305e9c759397f8d2f640c6ce17d456"
	I1104 12:13:10.872468   85500 cri.go:89] found id: ""
	I1104 12:13:10.872475   85500 logs.go:282] 1 containers: [5546d06c4d51e3fa5699a7351286904174305e9c759397f8d2f640c6ce17d456]
	I1104 12:13:10.872524   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:13:10.876428   85500 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:13:10.876483   85500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:13:10.911092   85500 cri.go:89] found id: "33418a9cb2f8aea69efe66db3f38e3a66b699fea5455b72e6b94484067b704a3"
	I1104 12:13:10.911125   85500 cri.go:89] found id: ""
	I1104 12:13:10.911134   85500 logs.go:282] 1 containers: [33418a9cb2f8aea69efe66db3f38e3a66b699fea5455b72e6b94484067b704a3]
	I1104 12:13:10.911190   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:13:10.915021   85500 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:13:10.915076   85500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:13:10.950838   85500 cri.go:89] found id: "9c3fa7870c72468d3a6283be42fb2724275d8c5937f6728c8bc97103c58b2ebd"
	I1104 12:13:10.950863   85500 cri.go:89] found id: ""
	I1104 12:13:10.950873   85500 logs.go:282] 1 containers: [9c3fa7870c72468d3a6283be42fb2724275d8c5937f6728c8bc97103c58b2ebd]
	I1104 12:13:10.950935   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:13:10.954889   85500 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:13:10.954938   85500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:13:10.991580   85500 cri.go:89] found id: ""
	I1104 12:13:10.991609   85500 logs.go:282] 0 containers: []
	W1104 12:13:10.991618   85500 logs.go:284] No container was found matching "kindnet"
	I1104 12:13:10.991625   85500 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1104 12:13:10.991689   85500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1104 12:13:11.031428   85500 cri.go:89] found id: "d4f6c824f92eecdb4ebc13e783585c25292f412e0f0a0d7b9fd0eab092fa8e41"
	I1104 12:13:11.031469   85500 cri.go:89] found id: "162e3330ff77f795d15e11b3abf1d3019490bff78148c8d6b470ef1507dcb67d"
	I1104 12:13:11.031474   85500 cri.go:89] found id: ""
	I1104 12:13:11.031484   85500 logs.go:282] 2 containers: [d4f6c824f92eecdb4ebc13e783585c25292f412e0f0a0d7b9fd0eab092fa8e41 162e3330ff77f795d15e11b3abf1d3019490bff78148c8d6b470ef1507dcb67d]
	I1104 12:13:11.031557   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:13:11.035810   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:13:11.039555   85500 logs.go:123] Gathering logs for coredns [6dcd13443296397949a6314fd6007ac667c05b70bd43f14e2e9b54f3313440de] ...
	I1104 12:13:11.039582   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6dcd13443296397949a6314fd6007ac667c05b70bd43f14e2e9b54f3313440de"
	I1104 12:13:11.076837   85500 logs.go:123] Gathering logs for kube-scheduler [5546d06c4d51e3fa5699a7351286904174305e9c759397f8d2f640c6ce17d456] ...
	I1104 12:13:11.076865   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5546d06c4d51e3fa5699a7351286904174305e9c759397f8d2f640c6ce17d456"
	I1104 12:13:11.114534   85500 logs.go:123] Gathering logs for kube-proxy [33418a9cb2f8aea69efe66db3f38e3a66b699fea5455b72e6b94484067b704a3] ...
	I1104 12:13:11.114561   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33418a9cb2f8aea69efe66db3f38e3a66b699fea5455b72e6b94484067b704a3"
	I1104 12:13:11.148897   85500 logs.go:123] Gathering logs for storage-provisioner [d4f6c824f92eecdb4ebc13e783585c25292f412e0f0a0d7b9fd0eab092fa8e41] ...
	I1104 12:13:11.148935   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d4f6c824f92eecdb4ebc13e783585c25292f412e0f0a0d7b9fd0eab092fa8e41"
	I1104 12:13:11.184480   85500 logs.go:123] Gathering logs for kubelet ...
	I1104 12:13:11.184511   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:13:11.256197   85500 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:13:11.256237   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1104 12:13:11.368984   85500 logs.go:123] Gathering logs for kube-apiserver [e74398c77b3ca0adb85872c2f97209b51f4c13968147f500a41590f72d758dea] ...
	I1104 12:13:11.369014   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e74398c77b3ca0adb85872c2f97209b51f4c13968147f500a41590f72d758dea"
	I1104 12:13:11.414219   85500 logs.go:123] Gathering logs for etcd [1390676564c7e4039f73139668fc6a3c321c186b612f1f1837e21788a5c0aa82] ...
	I1104 12:13:11.414253   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1390676564c7e4039f73139668fc6a3c321c186b612f1f1837e21788a5c0aa82"
	I1104 12:13:11.455746   85500 logs.go:123] Gathering logs for storage-provisioner [162e3330ff77f795d15e11b3abf1d3019490bff78148c8d6b470ef1507dcb67d] ...
	I1104 12:13:11.455776   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 162e3330ff77f795d15e11b3abf1d3019490bff78148c8d6b470ef1507dcb67d"
	I1104 12:13:11.491699   85500 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:13:11.491726   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:13:11.962368   85500 logs.go:123] Gathering logs for dmesg ...
	I1104 12:13:11.962400   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:13:11.975564   85500 logs.go:123] Gathering logs for kube-controller-manager [9c3fa7870c72468d3a6283be42fb2724275d8c5937f6728c8bc97103c58b2ebd] ...
	I1104 12:13:11.975590   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9c3fa7870c72468d3a6283be42fb2724275d8c5937f6728c8bc97103c58b2ebd"
	I1104 12:13:12.031427   85500 logs.go:123] Gathering logs for container status ...
	I1104 12:13:12.031461   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:13:14.572933   85500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 12:13:14.588140   85500 api_server.go:72] duration metric: took 4m17.141131339s to wait for apiserver process to appear ...
	I1104 12:13:14.588168   85500 api_server.go:88] waiting for apiserver healthz status ...
	I1104 12:13:14.588196   85500 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:13:14.588243   85500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:13:14.621509   85500 cri.go:89] found id: "e74398c77b3ca0adb85872c2f97209b51f4c13968147f500a41590f72d758dea"
	I1104 12:13:14.621534   85500 cri.go:89] found id: ""
	I1104 12:13:14.621543   85500 logs.go:282] 1 containers: [e74398c77b3ca0adb85872c2f97209b51f4c13968147f500a41590f72d758dea]
	I1104 12:13:14.621601   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:13:14.626328   85500 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:13:14.626384   85500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:13:14.662052   85500 cri.go:89] found id: "1390676564c7e4039f73139668fc6a3c321c186b612f1f1837e21788a5c0aa82"
	I1104 12:13:14.662079   85500 cri.go:89] found id: ""
	I1104 12:13:14.662115   85500 logs.go:282] 1 containers: [1390676564c7e4039f73139668fc6a3c321c186b612f1f1837e21788a5c0aa82]
	I1104 12:13:14.662174   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:13:14.666018   85500 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:13:14.666089   85500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:13:14.702872   85500 cri.go:89] found id: "6dcd13443296397949a6314fd6007ac667c05b70bd43f14e2e9b54f3313440de"
	I1104 12:13:14.702897   85500 cri.go:89] found id: ""
	I1104 12:13:14.702910   85500 logs.go:282] 1 containers: [6dcd13443296397949a6314fd6007ac667c05b70bd43f14e2e9b54f3313440de]
	I1104 12:13:14.702968   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:13:14.706809   85500 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:13:14.706883   85500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:13:14.744985   85500 cri.go:89] found id: "5546d06c4d51e3fa5699a7351286904174305e9c759397f8d2f640c6ce17d456"
	I1104 12:13:14.745005   85500 cri.go:89] found id: ""
	I1104 12:13:14.745012   85500 logs.go:282] 1 containers: [5546d06c4d51e3fa5699a7351286904174305e9c759397f8d2f640c6ce17d456]
	I1104 12:13:14.745058   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:13:14.749441   85500 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:13:14.749497   85500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:13:14.781617   85500 cri.go:89] found id: "33418a9cb2f8aea69efe66db3f38e3a66b699fea5455b72e6b94484067b704a3"
	I1104 12:13:14.781644   85500 cri.go:89] found id: ""
	I1104 12:13:14.781653   85500 logs.go:282] 1 containers: [33418a9cb2f8aea69efe66db3f38e3a66b699fea5455b72e6b94484067b704a3]
	I1104 12:13:14.781709   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:13:14.785971   85500 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:13:14.786046   85500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:13:14.819002   85500 cri.go:89] found id: "9c3fa7870c72468d3a6283be42fb2724275d8c5937f6728c8bc97103c58b2ebd"
	I1104 12:13:14.819029   85500 cri.go:89] found id: ""
	I1104 12:13:14.819038   85500 logs.go:282] 1 containers: [9c3fa7870c72468d3a6283be42fb2724275d8c5937f6728c8bc97103c58b2ebd]
	I1104 12:13:14.819101   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:13:14.823075   85500 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:13:14.823143   85500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:13:14.858936   85500 cri.go:89] found id: ""
	I1104 12:13:14.858965   85500 logs.go:282] 0 containers: []
	W1104 12:13:14.858977   85500 logs.go:284] No container was found matching "kindnet"
	I1104 12:13:14.858984   85500 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1104 12:13:14.859048   85500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1104 12:13:14.898303   85500 cri.go:89] found id: "d4f6c824f92eecdb4ebc13e783585c25292f412e0f0a0d7b9fd0eab092fa8e41"
	I1104 12:13:14.898327   85500 cri.go:89] found id: "162e3330ff77f795d15e11b3abf1d3019490bff78148c8d6b470ef1507dcb67d"
	I1104 12:13:14.898333   85500 cri.go:89] found id: ""
	I1104 12:13:14.898341   85500 logs.go:282] 2 containers: [d4f6c824f92eecdb4ebc13e783585c25292f412e0f0a0d7b9fd0eab092fa8e41 162e3330ff77f795d15e11b3abf1d3019490bff78148c8d6b470ef1507dcb67d]
	I1104 12:13:14.898402   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:13:14.902325   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:13:14.905855   85500 logs.go:123] Gathering logs for kubelet ...
	I1104 12:13:14.905880   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:13:14.973356   85500 logs.go:123] Gathering logs for dmesg ...
	I1104 12:13:14.973389   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:13:14.988655   85500 logs.go:123] Gathering logs for kube-scheduler [5546d06c4d51e3fa5699a7351286904174305e9c759397f8d2f640c6ce17d456] ...
	I1104 12:13:14.988696   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5546d06c4d51e3fa5699a7351286904174305e9c759397f8d2f640c6ce17d456"
	I1104 12:13:15.023407   85500 logs.go:123] Gathering logs for kube-controller-manager [9c3fa7870c72468d3a6283be42fb2724275d8c5937f6728c8bc97103c58b2ebd] ...
	I1104 12:13:15.023443   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9c3fa7870c72468d3a6283be42fb2724275d8c5937f6728c8bc97103c58b2ebd"
	I1104 12:13:15.078974   85500 logs.go:123] Gathering logs for storage-provisioner [162e3330ff77f795d15e11b3abf1d3019490bff78148c8d6b470ef1507dcb67d] ...
	I1104 12:13:15.079007   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 162e3330ff77f795d15e11b3abf1d3019490bff78148c8d6b470ef1507dcb67d"
	I1104 12:13:15.114147   85500 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:13:15.114180   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:13:15.559434   85500 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:13:15.559477   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1104 12:13:15.666481   85500 logs.go:123] Gathering logs for kube-apiserver [e74398c77b3ca0adb85872c2f97209b51f4c13968147f500a41590f72d758dea] ...
	I1104 12:13:15.666509   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e74398c77b3ca0adb85872c2f97209b51f4c13968147f500a41590f72d758dea"
	I1104 12:13:15.728066   85500 logs.go:123] Gathering logs for etcd [1390676564c7e4039f73139668fc6a3c321c186b612f1f1837e21788a5c0aa82] ...
	I1104 12:13:15.728101   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1390676564c7e4039f73139668fc6a3c321c186b612f1f1837e21788a5c0aa82"
	I1104 12:13:15.769721   85500 logs.go:123] Gathering logs for coredns [6dcd13443296397949a6314fd6007ac667c05b70bd43f14e2e9b54f3313440de] ...
	I1104 12:13:15.769759   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6dcd13443296397949a6314fd6007ac667c05b70bd43f14e2e9b54f3313440de"
	I1104 12:13:15.802131   85500 logs.go:123] Gathering logs for kube-proxy [33418a9cb2f8aea69efe66db3f38e3a66b699fea5455b72e6b94484067b704a3] ...
	I1104 12:13:15.802170   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33418a9cb2f8aea69efe66db3f38e3a66b699fea5455b72e6b94484067b704a3"
	I1104 12:13:15.837613   85500 logs.go:123] Gathering logs for storage-provisioner [d4f6c824f92eecdb4ebc13e783585c25292f412e0f0a0d7b9fd0eab092fa8e41] ...
	I1104 12:13:15.837639   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d4f6c824f92eecdb4ebc13e783585c25292f412e0f0a0d7b9fd0eab092fa8e41"
	I1104 12:13:15.874374   85500 logs.go:123] Gathering logs for container status ...
	I1104 12:13:15.874407   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:13:18.413199   85500 api_server.go:253] Checking apiserver healthz at https://192.168.61.91:8443/healthz ...
	I1104 12:13:18.418522   85500 api_server.go:279] https://192.168.61.91:8443/healthz returned 200:
	ok
	I1104 12:13:18.419487   85500 api_server.go:141] control plane version: v1.31.2
	I1104 12:13:18.419512   85500 api_server.go:131] duration metric: took 3.831337085s to wait for apiserver health ...
	I1104 12:13:18.419521   85500 system_pods.go:43] waiting for kube-system pods to appear ...
	I1104 12:13:18.419549   85500 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:13:18.419605   85500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:13:18.453835   85500 cri.go:89] found id: "e74398c77b3ca0adb85872c2f97209b51f4c13968147f500a41590f72d758dea"
	I1104 12:13:18.453856   85500 cri.go:89] found id: ""
	I1104 12:13:18.453865   85500 logs.go:282] 1 containers: [e74398c77b3ca0adb85872c2f97209b51f4c13968147f500a41590f72d758dea]
	I1104 12:13:18.453927   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:13:18.458136   85500 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:13:18.458198   85500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:13:18.496587   85500 cri.go:89] found id: "1390676564c7e4039f73139668fc6a3c321c186b612f1f1837e21788a5c0aa82"
	I1104 12:13:18.496623   85500 cri.go:89] found id: ""
	I1104 12:13:18.496634   85500 logs.go:282] 1 containers: [1390676564c7e4039f73139668fc6a3c321c186b612f1f1837e21788a5c0aa82]
	I1104 12:13:18.496691   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:13:18.500451   85500 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:13:18.500523   85500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:13:18.532756   85500 cri.go:89] found id: "6dcd13443296397949a6314fd6007ac667c05b70bd43f14e2e9b54f3313440de"
	I1104 12:13:18.532785   85500 cri.go:89] found id: ""
	I1104 12:13:18.532795   85500 logs.go:282] 1 containers: [6dcd13443296397949a6314fd6007ac667c05b70bd43f14e2e9b54f3313440de]
	I1104 12:13:18.532857   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:13:18.537239   85500 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:13:18.537293   85500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:13:18.569348   85500 cri.go:89] found id: "5546d06c4d51e3fa5699a7351286904174305e9c759397f8d2f640c6ce17d456"
	I1104 12:13:18.569374   85500 cri.go:89] found id: ""
	I1104 12:13:18.569382   85500 logs.go:282] 1 containers: [5546d06c4d51e3fa5699a7351286904174305e9c759397f8d2f640c6ce17d456]
	I1104 12:13:18.569440   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:13:18.573491   85500 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:13:18.573563   85500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:13:18.606857   85500 cri.go:89] found id: "33418a9cb2f8aea69efe66db3f38e3a66b699fea5455b72e6b94484067b704a3"
	I1104 12:13:18.606886   85500 cri.go:89] found id: ""
	I1104 12:13:18.606896   85500 logs.go:282] 1 containers: [33418a9cb2f8aea69efe66db3f38e3a66b699fea5455b72e6b94484067b704a3]
	I1104 12:13:18.606951   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:13:18.611158   85500 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:13:18.611229   85500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:13:18.645448   85500 cri.go:89] found id: "9c3fa7870c72468d3a6283be42fb2724275d8c5937f6728c8bc97103c58b2ebd"
	I1104 12:13:18.645467   85500 cri.go:89] found id: ""
	I1104 12:13:18.645474   85500 logs.go:282] 1 containers: [9c3fa7870c72468d3a6283be42fb2724275d8c5937f6728c8bc97103c58b2ebd]
	I1104 12:13:18.645527   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:13:18.649014   85500 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:13:18.649062   85500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:13:18.693641   85500 cri.go:89] found id: ""
	I1104 12:13:18.693668   85500 logs.go:282] 0 containers: []
	W1104 12:13:18.693676   85500 logs.go:284] No container was found matching "kindnet"
	I1104 12:13:18.693681   85500 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1104 12:13:18.693728   85500 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1104 12:13:18.733668   85500 cri.go:89] found id: "d4f6c824f92eecdb4ebc13e783585c25292f412e0f0a0d7b9fd0eab092fa8e41"
	I1104 12:13:18.733690   85500 cri.go:89] found id: "162e3330ff77f795d15e11b3abf1d3019490bff78148c8d6b470ef1507dcb67d"
	I1104 12:13:18.733695   85500 cri.go:89] found id: ""
	I1104 12:13:18.733702   85500 logs.go:282] 2 containers: [d4f6c824f92eecdb4ebc13e783585c25292f412e0f0a0d7b9fd0eab092fa8e41 162e3330ff77f795d15e11b3abf1d3019490bff78148c8d6b470ef1507dcb67d]
	I1104 12:13:18.733745   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:13:18.737419   85500 ssh_runner.go:195] Run: which crictl
	I1104 12:13:18.740993   85500 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:13:18.741014   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1104 12:13:19.135942   85500 logs.go:123] Gathering logs for kubelet ...
	I1104 12:13:19.135980   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:13:19.206586   85500 logs.go:123] Gathering logs for dmesg ...
	I1104 12:13:19.206623   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:13:19.222135   85500 logs.go:123] Gathering logs for etcd [1390676564c7e4039f73139668fc6a3c321c186b612f1f1837e21788a5c0aa82] ...
	I1104 12:13:19.222164   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1390676564c7e4039f73139668fc6a3c321c186b612f1f1837e21788a5c0aa82"
	I1104 12:13:19.262746   85500 logs.go:123] Gathering logs for kube-scheduler [5546d06c4d51e3fa5699a7351286904174305e9c759397f8d2f640c6ce17d456] ...
	I1104 12:13:19.262774   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5546d06c4d51e3fa5699a7351286904174305e9c759397f8d2f640c6ce17d456"
	I1104 12:13:19.298259   85500 logs.go:123] Gathering logs for kube-proxy [33418a9cb2f8aea69efe66db3f38e3a66b699fea5455b72e6b94484067b704a3] ...
	I1104 12:13:19.298287   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33418a9cb2f8aea69efe66db3f38e3a66b699fea5455b72e6b94484067b704a3"
	I1104 12:13:19.338304   85500 logs.go:123] Gathering logs for storage-provisioner [d4f6c824f92eecdb4ebc13e783585c25292f412e0f0a0d7b9fd0eab092fa8e41] ...
	I1104 12:13:19.338332   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d4f6c824f92eecdb4ebc13e783585c25292f412e0f0a0d7b9fd0eab092fa8e41"
	I1104 12:13:19.375163   85500 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:13:19.375195   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1104 12:13:19.478206   85500 logs.go:123] Gathering logs for kube-apiserver [e74398c77b3ca0adb85872c2f97209b51f4c13968147f500a41590f72d758dea] ...
	I1104 12:13:19.478234   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e74398c77b3ca0adb85872c2f97209b51f4c13968147f500a41590f72d758dea"
	I1104 12:13:19.526261   85500 logs.go:123] Gathering logs for coredns [6dcd13443296397949a6314fd6007ac667c05b70bd43f14e2e9b54f3313440de] ...
	I1104 12:13:19.526291   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6dcd13443296397949a6314fd6007ac667c05b70bd43f14e2e9b54f3313440de"
	I1104 12:13:19.559922   85500 logs.go:123] Gathering logs for kube-controller-manager [9c3fa7870c72468d3a6283be42fb2724275d8c5937f6728c8bc97103c58b2ebd] ...
	I1104 12:13:19.559954   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9c3fa7870c72468d3a6283be42fb2724275d8c5937f6728c8bc97103c58b2ebd"
	I1104 12:13:19.609848   85500 logs.go:123] Gathering logs for storage-provisioner [162e3330ff77f795d15e11b3abf1d3019490bff78148c8d6b470ef1507dcb67d] ...
	I1104 12:13:19.609879   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 162e3330ff77f795d15e11b3abf1d3019490bff78148c8d6b470ef1507dcb67d"
	I1104 12:13:19.648804   85500 logs.go:123] Gathering logs for container status ...
	I1104 12:13:19.648829   85500 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:13:22.210690   85500 system_pods.go:59] 8 kube-system pods found
	I1104 12:13:22.210718   85500 system_pods.go:61] "coredns-7c65d6cfc9-vv4kq" [f2518f86-9653-4e98-9193-9d2a76838117] Running
	I1104 12:13:22.210723   85500 system_pods.go:61] "etcd-no-preload-908370" [cc23ebc2-c49f-403c-8128-98bb08459592] Running
	I1104 12:13:22.210727   85500 system_pods.go:61] "kube-apiserver-no-preload-908370" [37532b3e-f683-4420-a5e4-280744f2bdf9] Running
	I1104 12:13:22.210730   85500 system_pods.go:61] "kube-controller-manager-no-preload-908370" [81d30255-758e-4661-bec2-c6aa6773923a] Running
	I1104 12:13:22.210733   85500 system_pods.go:61] "kube-proxy-w9hbz" [9d494697-ff2b-4600-9c11-b704de9be2a3] Running
	I1104 12:13:22.210737   85500 system_pods.go:61] "kube-scheduler-no-preload-908370" [9b0ff34e-1795-4f7c-b511-822a02c4af7b] Running
	I1104 12:13:22.210752   85500 system_pods.go:61] "metrics-server-6867b74b74-2lxlg" [bf328856-ad19-47b3-a40d-282cd4fdec4b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1104 12:13:22.210758   85500 system_pods.go:61] "storage-provisioner" [d11c9416-6236-4c81-9626-d5e040acea8a] Running
	I1104 12:13:22.210768   85500 system_pods.go:74] duration metric: took 3.791240483s to wait for pod list to return data ...
	I1104 12:13:22.210780   85500 default_sa.go:34] waiting for default service account to be created ...
	I1104 12:13:22.213688   85500 default_sa.go:45] found service account: "default"
	I1104 12:13:22.213709   85500 default_sa.go:55] duration metric: took 2.921691ms for default service account to be created ...
	I1104 12:13:22.213717   85500 system_pods.go:116] waiting for k8s-apps to be running ...
	I1104 12:13:22.219436   85500 system_pods.go:86] 8 kube-system pods found
	I1104 12:13:22.219466   85500 system_pods.go:89] "coredns-7c65d6cfc9-vv4kq" [f2518f86-9653-4e98-9193-9d2a76838117] Running
	I1104 12:13:22.219475   85500 system_pods.go:89] "etcd-no-preload-908370" [cc23ebc2-c49f-403c-8128-98bb08459592] Running
	I1104 12:13:22.219480   85500 system_pods.go:89] "kube-apiserver-no-preload-908370" [37532b3e-f683-4420-a5e4-280744f2bdf9] Running
	I1104 12:13:22.219489   85500 system_pods.go:89] "kube-controller-manager-no-preload-908370" [81d30255-758e-4661-bec2-c6aa6773923a] Running
	I1104 12:13:22.219495   85500 system_pods.go:89] "kube-proxy-w9hbz" [9d494697-ff2b-4600-9c11-b704de9be2a3] Running
	I1104 12:13:22.219501   85500 system_pods.go:89] "kube-scheduler-no-preload-908370" [9b0ff34e-1795-4f7c-b511-822a02c4af7b] Running
	I1104 12:13:22.219512   85500 system_pods.go:89] "metrics-server-6867b74b74-2lxlg" [bf328856-ad19-47b3-a40d-282cd4fdec4b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1104 12:13:22.219523   85500 system_pods.go:89] "storage-provisioner" [d11c9416-6236-4c81-9626-d5e040acea8a] Running
	I1104 12:13:22.219537   85500 system_pods.go:126] duration metric: took 5.813462ms to wait for k8s-apps to be running ...
	I1104 12:13:22.219551   85500 system_svc.go:44] waiting for kubelet service to be running ....
	I1104 12:13:22.219612   85500 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1104 12:13:22.232887   85500 system_svc.go:56] duration metric: took 13.328078ms WaitForService to wait for kubelet
	I1104 12:13:22.232918   85500 kubeadm.go:582] duration metric: took 4m24.785911082s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1104 12:13:22.232941   85500 node_conditions.go:102] verifying NodePressure condition ...
	I1104 12:13:22.235641   85500 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1104 12:13:22.235662   85500 node_conditions.go:123] node cpu capacity is 2
	I1104 12:13:22.235675   85500 node_conditions.go:105] duration metric: took 2.728232ms to run NodePressure ...
	I1104 12:13:22.235687   85500 start.go:241] waiting for startup goroutines ...
	I1104 12:13:22.235695   85500 start.go:246] waiting for cluster config update ...
	I1104 12:13:22.235707   85500 start.go:255] writing updated cluster config ...
	I1104 12:13:22.235962   85500 ssh_runner.go:195] Run: rm -f paused
	I1104 12:13:22.284583   85500 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1104 12:13:22.287448   85500 out.go:177] * Done! kubectl is now configured to use "no-preload-908370" cluster and "default" namespace by default
	I1104 12:14:25.090113   86402 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1104 12:14:25.090254   86402 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1104 12:14:25.091997   86402 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1104 12:14:25.092065   86402 kubeadm.go:310] [preflight] Running pre-flight checks
	I1104 12:14:25.092204   86402 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1104 12:14:25.092341   86402 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1104 12:14:25.092480   86402 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1104 12:14:25.092569   86402 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1104 12:14:25.094485   86402 out.go:235]   - Generating certificates and keys ...
	I1104 12:14:25.094582   86402 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1104 12:14:25.094664   86402 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1104 12:14:25.094799   86402 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1104 12:14:25.094891   86402 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1104 12:14:25.095003   86402 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1104 12:14:25.095086   86402 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1104 12:14:25.095186   86402 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1104 12:14:25.095240   86402 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1104 12:14:25.095319   86402 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1104 12:14:25.095403   86402 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1104 12:14:25.095481   86402 kubeadm.go:310] [certs] Using the existing "sa" key
	I1104 12:14:25.095554   86402 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1104 12:14:25.095614   86402 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1104 12:14:25.095676   86402 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1104 12:14:25.095752   86402 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1104 12:14:25.095828   86402 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1104 12:14:25.095970   86402 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1104 12:14:25.096102   86402 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1104 12:14:25.096169   86402 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1104 12:14:25.096262   86402 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1104 12:14:25.097799   86402 out.go:235]   - Booting up control plane ...
	I1104 12:14:25.097920   86402 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1104 12:14:25.098018   86402 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1104 12:14:25.098126   86402 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1104 12:14:25.098211   86402 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1104 12:14:25.098333   86402 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1104 12:14:25.098393   86402 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1104 12:14:25.098487   86402 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1104 12:14:25.098633   86402 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1104 12:14:25.098690   86402 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1104 12:14:25.098940   86402 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1104 12:14:25.099074   86402 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1104 12:14:25.099307   86402 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1104 12:14:25.099370   86402 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1104 12:14:25.099528   86402 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1104 12:14:25.099582   86402 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1104 12:14:25.099740   86402 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1104 12:14:25.099758   86402 kubeadm.go:310] 
	I1104 12:14:25.099815   86402 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1104 12:14:25.099880   86402 kubeadm.go:310] 		timed out waiting for the condition
	I1104 12:14:25.099889   86402 kubeadm.go:310] 
	I1104 12:14:25.099923   86402 kubeadm.go:310] 	This error is likely caused by:
	I1104 12:14:25.099952   86402 kubeadm.go:310] 		- The kubelet is not running
	I1104 12:14:25.100036   86402 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1104 12:14:25.100044   86402 kubeadm.go:310] 
	I1104 12:14:25.100197   86402 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1104 12:14:25.100237   86402 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1104 12:14:25.100267   86402 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1104 12:14:25.100273   86402 kubeadm.go:310] 
	I1104 12:14:25.100367   86402 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1104 12:14:25.100454   86402 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1104 12:14:25.100468   86402 kubeadm.go:310] 
	I1104 12:14:25.100600   86402 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1104 12:14:25.100718   86402 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1104 12:14:25.100821   86402 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1104 12:14:25.100903   86402 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1104 12:14:25.100970   86402 kubeadm.go:310] 
	W1104 12:14:25.101033   86402 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1104 12:14:25.101071   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1104 12:14:25.536184   86402 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1104 12:14:25.550453   86402 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1104 12:14:25.560308   86402 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1104 12:14:25.560327   86402 kubeadm.go:157] found existing configuration files:
	
	I1104 12:14:25.560368   86402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1104 12:14:25.569106   86402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1104 12:14:25.569189   86402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1104 12:14:25.578395   86402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1104 12:14:25.587402   86402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1104 12:14:25.587473   86402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1104 12:14:25.596827   86402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1104 12:14:25.605359   86402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1104 12:14:25.605420   86402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1104 12:14:25.614266   86402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1104 12:14:25.622522   86402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1104 12:14:25.622582   86402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1104 12:14:25.631876   86402 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1104 12:14:25.701080   86402 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1104 12:14:25.701168   86402 kubeadm.go:310] [preflight] Running pre-flight checks
	I1104 12:14:25.833997   86402 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1104 12:14:25.834138   86402 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1104 12:14:25.834258   86402 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1104 12:14:26.009165   86402 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1104 12:14:26.011976   86402 out.go:235]   - Generating certificates and keys ...
	I1104 12:14:26.012090   86402 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1104 12:14:26.012183   86402 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1104 12:14:26.012333   86402 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1104 12:14:26.012422   86402 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1104 12:14:26.012532   86402 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1104 12:14:26.012619   86402 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1104 12:14:26.012689   86402 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1104 12:14:26.012748   86402 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1104 12:14:26.012851   86402 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1104 12:14:26.012978   86402 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1104 12:14:26.013025   86402 kubeadm.go:310] [certs] Using the existing "sa" key
	I1104 12:14:26.013102   86402 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1104 12:14:26.399153   86402 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1104 12:14:26.470449   86402 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1104 12:14:27.078991   86402 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1104 12:14:27.181622   86402 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1104 12:14:27.205149   86402 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1104 12:14:27.205300   86402 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1104 12:14:27.205383   86402 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1104 12:14:27.355614   86402 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1104 12:14:27.357678   86402 out.go:235]   - Booting up control plane ...
	I1104 12:14:27.357840   86402 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1104 12:14:27.363942   86402 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1104 12:14:27.365004   86402 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1104 12:14:27.367237   86402 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1104 12:14:27.368087   86402 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1104 12:15:07.369845   86402 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1104 12:15:07.370222   86402 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1104 12:15:07.370464   86402 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1104 12:15:12.370802   86402 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1104 12:15:12.371041   86402 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1104 12:15:22.371417   86402 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1104 12:15:22.371584   86402 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1104 12:15:42.371725   86402 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1104 12:15:42.371932   86402 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1104 12:16:22.370871   86402 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1104 12:16:22.371150   86402 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1104 12:16:22.371181   86402 kubeadm.go:310] 
	I1104 12:16:22.371222   86402 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1104 12:16:22.371297   86402 kubeadm.go:310] 		timed out waiting for the condition
	I1104 12:16:22.371309   86402 kubeadm.go:310] 
	I1104 12:16:22.371371   86402 kubeadm.go:310] 	This error is likely caused by:
	I1104 12:16:22.371435   86402 kubeadm.go:310] 		- The kubelet is not running
	I1104 12:16:22.371576   86402 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1104 12:16:22.371588   86402 kubeadm.go:310] 
	I1104 12:16:22.371726   86402 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1104 12:16:22.371784   86402 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1104 12:16:22.371863   86402 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1104 12:16:22.371879   86402 kubeadm.go:310] 
	I1104 12:16:22.372004   86402 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1104 12:16:22.372155   86402 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1104 12:16:22.372172   86402 kubeadm.go:310] 
	I1104 12:16:22.372338   86402 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1104 12:16:22.372435   86402 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1104 12:16:22.372566   86402 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1104 12:16:22.372680   86402 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1104 12:16:22.372718   86402 kubeadm.go:310] 
	I1104 12:16:22.372948   86402 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1104 12:16:22.373110   86402 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1104 12:16:22.373289   86402 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1104 12:16:22.373328   86402 kubeadm.go:394] duration metric: took 8m2.53443537s to StartCluster
	I1104 12:16:22.373379   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1104 12:16:22.373431   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1104 12:16:22.410373   86402 cri.go:89] found id: ""
	I1104 12:16:22.410409   86402 logs.go:282] 0 containers: []
	W1104 12:16:22.410418   86402 logs.go:284] No container was found matching "kube-apiserver"
	I1104 12:16:22.410424   86402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1104 12:16:22.410485   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1104 12:16:22.447939   86402 cri.go:89] found id: ""
	I1104 12:16:22.447963   86402 logs.go:282] 0 containers: []
	W1104 12:16:22.447971   86402 logs.go:284] No container was found matching "etcd"
	I1104 12:16:22.447977   86402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1104 12:16:22.448021   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1104 12:16:22.479234   86402 cri.go:89] found id: ""
	I1104 12:16:22.479263   86402 logs.go:282] 0 containers: []
	W1104 12:16:22.479274   86402 logs.go:284] No container was found matching "coredns"
	I1104 12:16:22.479280   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1104 12:16:22.479341   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1104 12:16:22.512783   86402 cri.go:89] found id: ""
	I1104 12:16:22.512814   86402 logs.go:282] 0 containers: []
	W1104 12:16:22.512825   86402 logs.go:284] No container was found matching "kube-scheduler"
	I1104 12:16:22.512832   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1104 12:16:22.512895   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1104 12:16:22.549483   86402 cri.go:89] found id: ""
	I1104 12:16:22.549510   86402 logs.go:282] 0 containers: []
	W1104 12:16:22.549520   86402 logs.go:284] No container was found matching "kube-proxy"
	I1104 12:16:22.549527   86402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1104 12:16:22.549593   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1104 12:16:22.582339   86402 cri.go:89] found id: ""
	I1104 12:16:22.582382   86402 logs.go:282] 0 containers: []
	W1104 12:16:22.582393   86402 logs.go:284] No container was found matching "kube-controller-manager"
	I1104 12:16:22.582402   86402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1104 12:16:22.582471   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1104 12:16:22.613545   86402 cri.go:89] found id: ""
	I1104 12:16:22.613574   86402 logs.go:282] 0 containers: []
	W1104 12:16:22.613585   86402 logs.go:284] No container was found matching "kindnet"
	I1104 12:16:22.613593   86402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1104 12:16:22.613656   86402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1104 12:16:22.644488   86402 cri.go:89] found id: ""
	I1104 12:16:22.644517   86402 logs.go:282] 0 containers: []
	W1104 12:16:22.644528   86402 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1104 12:16:22.644539   86402 logs.go:123] Gathering logs for container status ...
	I1104 12:16:22.644551   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1104 12:16:22.681138   86402 logs.go:123] Gathering logs for kubelet ...
	I1104 12:16:22.681169   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1104 12:16:22.734551   86402 logs.go:123] Gathering logs for dmesg ...
	I1104 12:16:22.734586   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1104 12:16:22.750140   86402 logs.go:123] Gathering logs for describe nodes ...
	I1104 12:16:22.750178   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1104 12:16:22.837631   86402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1104 12:16:22.837657   86402 logs.go:123] Gathering logs for CRI-O ...
	I1104 12:16:22.837673   86402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W1104 12:16:22.961154   86402 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1104 12:16:22.961221   86402 out.go:270] * 
	W1104 12:16:22.961295   86402 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1104 12:16:22.961310   86402 out.go:270] * 
	W1104 12:16:22.962053   86402 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1104 12:16:22.965021   86402 out.go:201] 
	W1104 12:16:22.966262   86402 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1104 12:16:22.966326   86402 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1104 12:16:22.966377   86402 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1104 12:16:22.967953   86402 out.go:201] 
	
	
	==> CRI-O <==
	Nov 04 12:28:23 old-k8s-version-589257 crio[626]: time="2024-11-04 12:28:23.256351484Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730723303256329477,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=be6a5eaf-a347-4430-9f7f-13b7d4f67146 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 04 12:28:23 old-k8s-version-589257 crio[626]: time="2024-11-04 12:28:23.256975216Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d1627417-5061-4102-b34d-90a1b08ec0f4 name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 12:28:23 old-k8s-version-589257 crio[626]: time="2024-11-04 12:28:23.257032110Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d1627417-5061-4102-b34d-90a1b08ec0f4 name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 12:28:23 old-k8s-version-589257 crio[626]: time="2024-11-04 12:28:23.257063886Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=d1627417-5061-4102-b34d-90a1b08ec0f4 name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 12:28:23 old-k8s-version-589257 crio[626]: time="2024-11-04 12:28:23.287180673Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5c3499c3-ee3f-4bef-bb10-bf9b0823b53c name=/runtime.v1.RuntimeService/Version
	Nov 04 12:28:23 old-k8s-version-589257 crio[626]: time="2024-11-04 12:28:23.287249210Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5c3499c3-ee3f-4bef-bb10-bf9b0823b53c name=/runtime.v1.RuntimeService/Version
	Nov 04 12:28:23 old-k8s-version-589257 crio[626]: time="2024-11-04 12:28:23.288129654Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e6fcc710-c4b7-4530-8501-54d98b55bac9 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 04 12:28:23 old-k8s-version-589257 crio[626]: time="2024-11-04 12:28:23.288510561Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730723303288484089,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e6fcc710-c4b7-4530-8501-54d98b55bac9 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 04 12:28:23 old-k8s-version-589257 crio[626]: time="2024-11-04 12:28:23.288902934Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=806a409d-6ff4-4a3a-b38f-cff122ad8a28 name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 12:28:23 old-k8s-version-589257 crio[626]: time="2024-11-04 12:28:23.288987198Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=806a409d-6ff4-4a3a-b38f-cff122ad8a28 name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 12:28:23 old-k8s-version-589257 crio[626]: time="2024-11-04 12:28:23.289022761Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=806a409d-6ff4-4a3a-b38f-cff122ad8a28 name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 12:28:23 old-k8s-version-589257 crio[626]: time="2024-11-04 12:28:23.318621288Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=53d81d8e-b9d6-415c-b977-ff88299c5ac0 name=/runtime.v1.RuntimeService/Version
	Nov 04 12:28:23 old-k8s-version-589257 crio[626]: time="2024-11-04 12:28:23.318708557Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=53d81d8e-b9d6-415c-b977-ff88299c5ac0 name=/runtime.v1.RuntimeService/Version
	Nov 04 12:28:23 old-k8s-version-589257 crio[626]: time="2024-11-04 12:28:23.319635100Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ed616b92-842b-4394-aee7-643c7230819b name=/runtime.v1.ImageService/ImageFsInfo
	Nov 04 12:28:23 old-k8s-version-589257 crio[626]: time="2024-11-04 12:28:23.319984744Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730723303319963220,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ed616b92-842b-4394-aee7-643c7230819b name=/runtime.v1.ImageService/ImageFsInfo
	Nov 04 12:28:23 old-k8s-version-589257 crio[626]: time="2024-11-04 12:28:23.320498717Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=146edbc1-1910-44f1-a9f8-ed301c2aacfb name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 12:28:23 old-k8s-version-589257 crio[626]: time="2024-11-04 12:28:23.320695649Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=146edbc1-1910-44f1-a9f8-ed301c2aacfb name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 12:28:23 old-k8s-version-589257 crio[626]: time="2024-11-04 12:28:23.320745081Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=146edbc1-1910-44f1-a9f8-ed301c2aacfb name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 12:28:23 old-k8s-version-589257 crio[626]: time="2024-11-04 12:28:23.349855845Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=515ad3c4-cda3-4a56-af6c-583d0942e295 name=/runtime.v1.RuntimeService/Version
	Nov 04 12:28:23 old-k8s-version-589257 crio[626]: time="2024-11-04 12:28:23.349941151Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=515ad3c4-cda3-4a56-af6c-583d0942e295 name=/runtime.v1.RuntimeService/Version
	Nov 04 12:28:23 old-k8s-version-589257 crio[626]: time="2024-11-04 12:28:23.351118225Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2158838f-0779-4306-b07b-b12a3462dbd1 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 04 12:28:23 old-k8s-version-589257 crio[626]: time="2024-11-04 12:28:23.351527712Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730723303351498778,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2158838f-0779-4306-b07b-b12a3462dbd1 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 04 12:28:23 old-k8s-version-589257 crio[626]: time="2024-11-04 12:28:23.352117730Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=11d31685-6073-4f7f-8a48-4a73f0771085 name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 12:28:23 old-k8s-version-589257 crio[626]: time="2024-11-04 12:28:23.352187619Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=11d31685-6073-4f7f-8a48-4a73f0771085 name=/runtime.v1.RuntimeService/ListContainers
	Nov 04 12:28:23 old-k8s-version-589257 crio[626]: time="2024-11-04 12:28:23.352222362Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=11d31685-6073-4f7f-8a48-4a73f0771085 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Nov 4 12:07] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051714] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037451] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Nov 4 12:08] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.909177] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.435497] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.440051] systemd-fstab-generator[554]: Ignoring "noauto" option for root device
	[  +0.115131] systemd-fstab-generator[566]: Ignoring "noauto" option for root device
	[  +0.206664] systemd-fstab-generator[580]: Ignoring "noauto" option for root device
	[  +0.118752] systemd-fstab-generator[592]: Ignoring "noauto" option for root device
	[  +0.257608] systemd-fstab-generator[617]: Ignoring "noauto" option for root device
	[  +6.231117] systemd-fstab-generator[880]: Ignoring "noauto" option for root device
	[  +0.063384] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.883713] systemd-fstab-generator[1002]: Ignoring "noauto" option for root device
	[ +13.758834] kauditd_printk_skb: 46 callbacks suppressed
	[Nov 4 12:12] systemd-fstab-generator[5108]: Ignoring "noauto" option for root device
	[Nov 4 12:14] systemd-fstab-generator[5387]: Ignoring "noauto" option for root device
	[  +0.067248] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 12:28:23 up 20 min,  0 users,  load average: 0.09, 0.03, 0.01
	Linux old-k8s-version-589257 5.10.207 #1 SMP Wed Oct 30 13:38:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Nov 04 12:28:20 old-k8s-version-589257 kubelet[6949]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:138 +0x185
	Nov 04 12:28:20 old-k8s-version-589257 kubelet[6949]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run.func1()
	Nov 04 12:28:20 old-k8s-version-589257 kubelet[6949]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:222 +0x70
	Nov 04 12:28:20 old-k8s-version-589257 kubelet[6949]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc0006d46f0)
	Nov 04 12:28:20 old-k8s-version-589257 kubelet[6949]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x5f
	Nov 04 12:28:20 old-k8s-version-589257 kubelet[6949]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000c5fef0, 0x4f0ac20, 0xc000977950, 0x1, 0xc0001020c0)
	Nov 04 12:28:20 old-k8s-version-589257 kubelet[6949]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0xad
	Nov 04 12:28:20 old-k8s-version-589257 kubelet[6949]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc000257180, 0xc0001020c0)
	Nov 04 12:28:20 old-k8s-version-589257 kubelet[6949]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Nov 04 12:28:20 old-k8s-version-589257 kubelet[6949]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Nov 04 12:28:20 old-k8s-version-589257 kubelet[6949]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Nov 04 12:28:20 old-k8s-version-589257 kubelet[6949]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000b92400, 0xc000b84b20)
	Nov 04 12:28:20 old-k8s-version-589257 kubelet[6949]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Nov 04 12:28:20 old-k8s-version-589257 kubelet[6949]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Nov 04 12:28:20 old-k8s-version-589257 kubelet[6949]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Nov 04 12:28:20 old-k8s-version-589257 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Nov 04 12:28:20 old-k8s-version-589257 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Nov 04 12:28:21 old-k8s-version-589257 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 145.
	Nov 04 12:28:21 old-k8s-version-589257 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Nov 04 12:28:21 old-k8s-version-589257 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Nov 04 12:28:21 old-k8s-version-589257 kubelet[6957]: I1104 12:28:21.565435    6957 server.go:416] Version: v1.20.0
	Nov 04 12:28:21 old-k8s-version-589257 kubelet[6957]: I1104 12:28:21.565739    6957 server.go:837] Client rotation is on, will bootstrap in background
	Nov 04 12:28:21 old-k8s-version-589257 kubelet[6957]: I1104 12:28:21.567926    6957 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Nov 04 12:28:21 old-k8s-version-589257 kubelet[6957]: I1104 12:28:21.568826    6957 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Nov 04 12:28:21 old-k8s-version-589257 kubelet[6957]: W1104 12:28:21.568865    6957 manager.go:159] Cannot detect current cgroup on cgroup v2
	

                                                
                                                
-- /stdout --
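The kubelet log above ends with the unit crash-looping (restart counter at 145) and the warning "Cannot detect current cgroup on cgroup v2", which lines up with the K8S_KUBELET_NOT_RUNNING exit earlier in the trace. For reference, the checks that kubeadm itself recommends can be replayed against this profile's node over SSH; a minimal sketch, assuming the profile name and CRI-O socket path exactly as they appear in the log above:

	# Hypothetical follow-up against the failed profile; names/paths are taken from the log above.
	minikube ssh -p old-k8s-version-589257 "sudo systemctl status kubelet --no-pager"
	minikube ssh -p old-k8s-version-589257 "sudo journalctl -xeu kubelet | tail -n 100"
	minikube ssh -p old-k8s-version-589257 "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"
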
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-589257 -n old-k8s-version-589257
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-589257 -n old-k8s-version-589257: exit status 2 (252.214255ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-589257" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (175.09s)
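The log's own suggestion is to retry the start with the kubelet cgroup driver pinned to systemd. A minimal reproduction sketch; the driver and runtime flags here are assumptions based on the KVM_Linux_crio job name and are not copied from this excerpt, only the --extra-config override comes from the suggestion in the log:

	# Assumed flags (kvm2 driver, cri-o runtime, v1.20.0 as printed by kubeadm init above).
	minikube start -p old-k8s-version-589257 \
	  --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 \
	  --extra-config=kubelet.cgroup-driver=systemd
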

                                                
                                    

Test pass (252/320)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 43.35
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.14
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.12
12 TestDownloadOnly/v1.31.2/json-events 5.78
13 TestDownloadOnly/v1.31.2/preload-exists 0
17 TestDownloadOnly/v1.31.2/LogsDuration 0.06
18 TestDownloadOnly/v1.31.2/DeleteAll 0.13
19 TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds 0.12
21 TestBinaryMirror 0.59
22 TestOffline 101.81
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 125.87
31 TestAddons/serial/GCPAuth/Namespaces 1.74
32 TestAddons/serial/GCPAuth/FakeCredentials 8.47
35 TestAddons/parallel/Registry 16.4
37 TestAddons/parallel/InspektorGadget 11.67
40 TestAddons/parallel/CSI 63.12
41 TestAddons/parallel/Headlamp 17.69
42 TestAddons/parallel/CloudSpanner 6.5
43 TestAddons/parallel/LocalPath 60
44 TestAddons/parallel/NvidiaDevicePlugin 6.47
45 TestAddons/parallel/Yakd 11.69
48 TestCertOptions 59.02
49 TestCertExpiration 307.15
51 TestForceSystemdFlag 61.47
52 TestForceSystemdEnv 40.18
54 TestKVMDriverInstallOrUpdate 3.77
58 TestErrorSpam/setup 40.62
59 TestErrorSpam/start 0.36
60 TestErrorSpam/status 0.75
61 TestErrorSpam/pause 1.53
62 TestErrorSpam/unpause 1.66
63 TestErrorSpam/stop 5.06
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 54.95
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 34.35
70 TestFunctional/serial/KubeContext 0.04
71 TestFunctional/serial/KubectlGetPods 0.08
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.41
75 TestFunctional/serial/CacheCmd/cache/add_local 1.95
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
77 TestFunctional/serial/CacheCmd/cache/list 0.05
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.22
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.73
80 TestFunctional/serial/CacheCmd/cache/delete 0.09
81 TestFunctional/serial/MinikubeKubectlCmd 0.1
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
83 TestFunctional/serial/ExtraConfig 32.18
84 TestFunctional/serial/ComponentHealth 0.06
85 TestFunctional/serial/LogsCmd 1.32
86 TestFunctional/serial/LogsFileCmd 1.36
87 TestFunctional/serial/InvalidService 4.3
89 TestFunctional/parallel/ConfigCmd 0.36
90 TestFunctional/parallel/DashboardCmd 13.81
91 TestFunctional/parallel/DryRun 0.29
92 TestFunctional/parallel/InternationalLanguage 0.15
93 TestFunctional/parallel/StatusCmd 0.92
97 TestFunctional/parallel/ServiceCmdConnect 13.73
98 TestFunctional/parallel/AddonsCmd 0.14
99 TestFunctional/parallel/PersistentVolumeClaim 33.98
101 TestFunctional/parallel/SSHCmd 0.42
102 TestFunctional/parallel/CpCmd 1.31
103 TestFunctional/parallel/MySQL 22.51
104 TestFunctional/parallel/FileSync 0.22
105 TestFunctional/parallel/CertSync 1.34
109 TestFunctional/parallel/NodeLabels 0.07
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.44
113 TestFunctional/parallel/License 0.3
115 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.47
116 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
118 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.21
119 TestFunctional/parallel/ServiceCmd/DeployApp 12.23
120 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
121 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
125 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.22
126 TestFunctional/parallel/ProfileCmd/profile_not_create 0.45
127 TestFunctional/parallel/ProfileCmd/profile_list 0.43
128 TestFunctional/parallel/ProfileCmd/profile_json_output 0.42
129 TestFunctional/parallel/MountCmd/any-port 9.4
130 TestFunctional/parallel/ServiceCmd/List 0.47
131 TestFunctional/parallel/ServiceCmd/JSONOutput 0.54
132 TestFunctional/parallel/ServiceCmd/HTTPS 0.5
133 TestFunctional/parallel/ServiceCmd/Format 0.37
134 TestFunctional/parallel/ServiceCmd/URL 0.4
135 TestFunctional/parallel/ImageCommands/ImageListShort 0.27
136 TestFunctional/parallel/ImageCommands/ImageListTable 0.39
137 TestFunctional/parallel/ImageCommands/ImageListJson 0.23
138 TestFunctional/parallel/ImageCommands/ImageListYaml 0.31
139 TestFunctional/parallel/ImageCommands/ImageBuild 3.38
140 TestFunctional/parallel/ImageCommands/Setup 1.49
141 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.34
142 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.85
143 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.61
144 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.55
145 TestFunctional/parallel/MountCmd/specific-port 1.97
146 TestFunctional/parallel/ImageCommands/ImageRemove 0.58
147 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.18
148 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.84
149 TestFunctional/parallel/MountCmd/VerifyCleanup 1.83
150 TestFunctional/parallel/Version/short 0.05
151 TestFunctional/parallel/Version/components 0.68
152 TestFunctional/parallel/UpdateContextCmd/no_changes 0.1
153 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.09
154 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.1
155 TestFunctional/delete_echo-server_images 0.03
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.01
161 TestMultiControlPlane/serial/StartCluster 188.99
162 TestMultiControlPlane/serial/DeployApp 5.83
163 TestMultiControlPlane/serial/PingHostFromPods 1.16
164 TestMultiControlPlane/serial/AddWorkerNode 55.58
165 TestMultiControlPlane/serial/NodeLabels 0.07
166 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.83
167 TestMultiControlPlane/serial/CopyFile 12.41
180 TestJSONOutput/start/Command 77.25
181 TestJSONOutput/start/Audit 0
183 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
184 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
186 TestJSONOutput/pause/Command 0.65
187 TestJSONOutput/pause/Audit 0
189 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
190 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/unpause/Command 0.58
193 TestJSONOutput/unpause/Audit 0
195 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
196 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/stop/Command 6.58
199 TestJSONOutput/stop/Audit 0
201 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
202 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
203 TestErrorJSONOutput 0.2
208 TestMainNoArgs 0.05
209 TestMinikubeProfile 83.19
212 TestMountStart/serial/StartWithMountFirst 23.98
213 TestMountStart/serial/VerifyMountFirst 0.37
214 TestMountStart/serial/StartWithMountSecond 23.63
215 TestMountStart/serial/VerifyMountSecond 0.39
216 TestMountStart/serial/DeleteFirst 0.68
217 TestMountStart/serial/VerifyMountPostDelete 0.39
218 TestMountStart/serial/Stop 1.28
219 TestMountStart/serial/RestartStopped 23.37
220 TestMountStart/serial/VerifyMountPostStop 0.38
223 TestMultiNode/serial/FreshStart2Nodes 109.07
224 TestMultiNode/serial/DeployApp2Nodes 5.1
225 TestMultiNode/serial/PingHostFrom2Pods 0.76
226 TestMultiNode/serial/AddNode 50.83
227 TestMultiNode/serial/MultiNodeLabels 0.06
228 TestMultiNode/serial/ProfileList 0.55
229 TestMultiNode/serial/CopyFile 6.96
230 TestMultiNode/serial/StopNode 2.15
231 TestMultiNode/serial/StartAfterStop 37.48
233 TestMultiNode/serial/DeleteNode 2.12
235 TestMultiNode/serial/RestartMultiNode 198.86
236 TestMultiNode/serial/ValidateNameConflict 41.06
243 TestScheduledStopUnix 115.15
247 TestRunningBinaryUpgrade 125.74
252 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
253 TestNoKubernetes/serial/StartWithK8s 115.47
261 TestNetworkPlugins/group/false 4.62
265 TestNoKubernetes/serial/StartWithStopK8s 38.93
266 TestNoKubernetes/serial/Start 26.54
267 TestNoKubernetes/serial/VerifyK8sNotRunning 0.2
268 TestNoKubernetes/serial/ProfileList 1.31
269 TestNoKubernetes/serial/Stop 1.28
270 TestNoKubernetes/serial/StartNoArgs 41.95
271 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.21
272 TestStoppedBinaryUpgrade/Setup 0.43
273 TestStoppedBinaryUpgrade/Upgrade 133.9
282 TestPause/serial/Start 57.29
283 TestNetworkPlugins/group/auto/Start 75.01
284 TestStoppedBinaryUpgrade/MinikubeLogs 0.81
285 TestNetworkPlugins/group/kindnet/Start 136.34
286 TestNetworkPlugins/group/calico/Start 371.51
287 TestPause/serial/SecondStartNoReconfiguration 81.37
288 TestNetworkPlugins/group/auto/KubeletFlags 0.27
289 TestNetworkPlugins/group/auto/NetCatPod 11.6
290 TestNetworkPlugins/group/auto/DNS 0.13
291 TestNetworkPlugins/group/auto/Localhost 0.13
292 TestNetworkPlugins/group/auto/HairPin 0.11
293 TestNetworkPlugins/group/custom-flannel/Start 165.63
294 TestPause/serial/Pause 0.84
295 TestPause/serial/VerifyStatus 0.29
296 TestPause/serial/Unpause 0.67
297 TestPause/serial/PauseAgain 0.83
298 TestPause/serial/DeletePaused 0.91
299 TestPause/serial/VerifyDeletedResources 0.68
300 TestNetworkPlugins/group/enable-default-cni/Start 57.08
301 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
302 TestNetworkPlugins/group/kindnet/KubeletFlags 0.25
303 TestNetworkPlugins/group/kindnet/NetCatPod 11.31
304 TestNetworkPlugins/group/kindnet/DNS 0.14
305 TestNetworkPlugins/group/kindnet/Localhost 0.11
306 TestNetworkPlugins/group/kindnet/HairPin 0.11
307 TestNetworkPlugins/group/flannel/Start 80.98
308 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.19
309 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.22
310 TestNetworkPlugins/group/enable-default-cni/DNS 15.91
311 TestNetworkPlugins/group/enable-default-cni/Localhost 0.13
312 TestNetworkPlugins/group/enable-default-cni/HairPin 0.12
313 TestNetworkPlugins/group/bridge/Start 55.86
314 TestNetworkPlugins/group/flannel/ControllerPod 6.01
315 TestNetworkPlugins/group/flannel/KubeletFlags 0.24
316 TestNetworkPlugins/group/flannel/NetCatPod 12.27
317 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.22
318 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.3
319 TestNetworkPlugins/group/flannel/DNS 0.15
320 TestNetworkPlugins/group/flannel/Localhost 0.13
321 TestNetworkPlugins/group/flannel/HairPin 0.12
322 TestNetworkPlugins/group/custom-flannel/DNS 0.17
323 TestNetworkPlugins/group/custom-flannel/Localhost 0.13
324 TestNetworkPlugins/group/custom-flannel/HairPin 0.13
325 TestNetworkPlugins/group/bridge/KubeletFlags 0.24
326 TestNetworkPlugins/group/bridge/NetCatPod 11.26
330 TestStartStop/group/no-preload/serial/FirstStart 88.43
331 TestNetworkPlugins/group/bridge/DNS 0.16
332 TestNetworkPlugins/group/bridge/Localhost 0.12
333 TestNetworkPlugins/group/bridge/HairPin 0.15
335 TestStartStop/group/embed-certs/serial/FirstStart 81.22
336 TestStartStop/group/no-preload/serial/DeployApp 10.28
337 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.97
339 TestStartStop/group/embed-certs/serial/DeployApp 10.29
340 TestNetworkPlugins/group/calico/ControllerPod 6.01
341 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.92
343 TestNetworkPlugins/group/calico/KubeletFlags 0.21
344 TestNetworkPlugins/group/calico/NetCatPod 11.31
345 TestNetworkPlugins/group/calico/DNS 0.14
346 TestNetworkPlugins/group/calico/Localhost 0.12
347 TestNetworkPlugins/group/calico/HairPin 0.12
349 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 50.22
350 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.25
351 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.89
355 TestStartStop/group/no-preload/serial/SecondStart 642.42
358 TestStartStop/group/embed-certs/serial/SecondStart 562.66
360 TestStartStop/group/old-k8s-version/serial/Stop 6.29
361 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 491.65
362 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.18
373 TestStartStop/group/newest-cni/serial/FirstStart 49.11
374 TestStartStop/group/newest-cni/serial/DeployApp 0
375 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.06
376 TestStartStop/group/newest-cni/serial/Stop 7.31
377 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
378 TestStartStop/group/newest-cni/serial/SecondStart 35.08
379 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
380 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
381 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.23
382 TestStartStop/group/newest-cni/serial/Pause 2.4
383 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.22
384 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.31

TestDownloadOnly/v1.20.0/json-events (43.35s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-779038 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-779038 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (43.352680467s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (43.35s)

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I1104 10:37:32.032859   27218 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
I1104 10:37:32.032952   27218 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-779038
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-779038: exit status 85 (61.446244ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-779038 | jenkins | v1.34.0 | 04 Nov 24 10:36 UTC |          |
	|         | -p download-only-779038        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/11/04 10:36:48
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1104 10:36:48.719883   27230 out.go:345] Setting OutFile to fd 1 ...
	I1104 10:36:48.719978   27230 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1104 10:36:48.719982   27230 out.go:358] Setting ErrFile to fd 2...
	I1104 10:36:48.719986   27230 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1104 10:36:48.720137   27230 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19906-19898/.minikube/bin
	W1104 10:36:48.720249   27230 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19906-19898/.minikube/config/config.json: open /home/jenkins/minikube-integration/19906-19898/.minikube/config/config.json: no such file or directory
	I1104 10:36:48.720787   27230 out.go:352] Setting JSON to true
	I1104 10:36:48.721687   27230 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":4760,"bootTime":1730711849,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1104 10:36:48.721778   27230 start.go:139] virtualization: kvm guest
	I1104 10:36:48.724136   27230 out.go:97] [download-only-779038] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W1104 10:36:48.724238   27230 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19906-19898/.minikube/cache/preloaded-tarball: no such file or directory
	I1104 10:36:48.724263   27230 notify.go:220] Checking for updates...
	I1104 10:36:48.725663   27230 out.go:169] MINIKUBE_LOCATION=19906
	I1104 10:36:48.726997   27230 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1104 10:36:48.728287   27230 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19906-19898/kubeconfig
	I1104 10:36:48.729594   27230 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19906-19898/.minikube
	I1104 10:36:48.731007   27230 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1104 10:36:48.733495   27230 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1104 10:36:48.733743   27230 driver.go:394] Setting default libvirt URI to qemu:///system
	I1104 10:36:48.834682   27230 out.go:97] Using the kvm2 driver based on user configuration
	I1104 10:36:48.834720   27230 start.go:297] selected driver: kvm2
	I1104 10:36:48.834726   27230 start.go:901] validating driver "kvm2" against <nil>
	I1104 10:36:48.835054   27230 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1104 10:36:48.835163   27230 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19906-19898/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1104 10:36:48.849587   27230 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1104 10:36:48.849647   27230 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1104 10:36:48.850367   27230 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I1104 10:36:48.850563   27230 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1104 10:36:48.850595   27230 cni.go:84] Creating CNI manager for ""
	I1104 10:36:48.850649   27230 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1104 10:36:48.850660   27230 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1104 10:36:48.850724   27230 start.go:340] cluster config:
	{Name:download-only-779038 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-779038 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1104 10:36:48.850938   27230 iso.go:125] acquiring lock: {Name:mk00c7dd6e02d348844f079f7574057e15cae010 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1104 10:36:48.852808   27230 out.go:97] Downloading VM boot image ...
	I1104 10:36:48.852838   27230 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso.sha256 -> /home/jenkins/minikube-integration/19906-19898/.minikube/cache/iso/amd64/minikube-v1.34.0-1730282777-19883-amd64.iso
	I1104 10:37:25.316174   27230 out.go:97] Starting "download-only-779038" primary control-plane node in "download-only-779038" cluster
	I1104 10:37:25.316211   27230 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1104 10:37:25.350564   27230 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1104 10:37:25.350593   27230 cache.go:56] Caching tarball of preloaded images
	I1104 10:37:25.350766   27230 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1104 10:37:25.352470   27230 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I1104 10:37:25.352496   27230 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I1104 10:37:25.380423   27230 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/19906-19898/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-779038 host does not exist
	  To start a cluster, run: "minikube start -p download-only-779038"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-779038
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

TestDownloadOnly/v1.31.2/json-events (5.78s)

=== RUN   TestDownloadOnly/v1.31.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-440707 --force --alsologtostderr --kubernetes-version=v1.31.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-440707 --force --alsologtostderr --kubernetes-version=v1.31.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (5.782422631s)
--- PASS: TestDownloadOnly/v1.31.2/json-events (5.78s)

TestDownloadOnly/v1.31.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.2/preload-exists
I1104 10:37:38.136267   27218 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
I1104 10:37:38.136338   27218 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19906-19898/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.2/preload-exists (0.00s)

TestDownloadOnly/v1.31.2/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.31.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-440707
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-440707: exit status 85 (61.318906ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-779038 | jenkins | v1.34.0 | 04 Nov 24 10:36 UTC |                     |
	|         | -p download-only-779038        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 04 Nov 24 10:37 UTC | 04 Nov 24 10:37 UTC |
	| delete  | -p download-only-779038        | download-only-779038 | jenkins | v1.34.0 | 04 Nov 24 10:37 UTC | 04 Nov 24 10:37 UTC |
	| start   | -o=json --download-only        | download-only-440707 | jenkins | v1.34.0 | 04 Nov 24 10:37 UTC |                     |
	|         | -p download-only-440707        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/11/04 10:37:32
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1104 10:37:32.392716   27537 out.go:345] Setting OutFile to fd 1 ...
	I1104 10:37:32.392815   27537 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1104 10:37:32.392823   27537 out.go:358] Setting ErrFile to fd 2...
	I1104 10:37:32.392827   27537 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1104 10:37:32.393013   27537 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19906-19898/.minikube/bin
	I1104 10:37:32.393536   27537 out.go:352] Setting JSON to true
	I1104 10:37:32.394327   27537 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":4803,"bootTime":1730711849,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1104 10:37:32.394418   27537 start.go:139] virtualization: kvm guest
	I1104 10:37:32.396831   27537 out.go:97] [download-only-440707] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1104 10:37:32.396941   27537 notify.go:220] Checking for updates...
	I1104 10:37:32.398257   27537 out.go:169] MINIKUBE_LOCATION=19906
	I1104 10:37:32.399716   27537 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1104 10:37:32.401041   27537 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19906-19898/kubeconfig
	I1104 10:37:32.402404   27537 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19906-19898/.minikube
	I1104 10:37:32.403619   27537 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1104 10:37:32.405988   27537 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1104 10:37:32.406186   27537 driver.go:394] Setting default libvirt URI to qemu:///system
	I1104 10:37:32.437218   27537 out.go:97] Using the kvm2 driver based on user configuration
	I1104 10:37:32.437250   27537 start.go:297] selected driver: kvm2
	I1104 10:37:32.437258   27537 start.go:901] validating driver "kvm2" against <nil>
	I1104 10:37:32.437587   27537 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1104 10:37:32.437660   27537 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19906-19898/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1104 10:37:32.454610   27537 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1104 10:37:32.454672   27537 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1104 10:37:32.455171   27537 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I1104 10:37:32.455323   27537 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1104 10:37:32.455349   27537 cni.go:84] Creating CNI manager for ""
	I1104 10:37:32.455394   27537 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1104 10:37:32.455402   27537 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1104 10:37:32.455441   27537 start.go:340] cluster config:
	{Name:download-only-440707 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:download-only-440707 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1104 10:37:32.455523   27537 iso.go:125] acquiring lock: {Name:mk00c7dd6e02d348844f079f7574057e15cae010 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1104 10:37:32.457290   27537 out.go:97] Starting "download-only-440707" primary control-plane node in "download-only-440707" cluster
	I1104 10:37:32.457308   27537 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1104 10:37:32.528595   27537 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.2/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1104 10:37:32.528624   27537 cache.go:56] Caching tarball of preloaded images
	I1104 10:37:32.528760   27537 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1104 10:37:32.530982   27537 out.go:97] Downloading Kubernetes v1.31.2 preload ...
	I1104 10:37:32.531006   27537 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 ...
	I1104 10:37:32.563044   27537 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.2/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4?checksum=md5:fc069bc1785feafa8477333f3a79092d -> /home/jenkins/minikube-integration/19906-19898/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1104 10:37:36.774688   27537 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 ...
	I1104 10:37:36.774785   27537 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19906-19898/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 ...
	
	
	* The control-plane node download-only-440707 host does not exist
	  To start a cluster, run: "minikube start -p download-only-440707"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.2/LogsDuration (0.06s)

TestDownloadOnly/v1.31.2/DeleteAll (0.13s)

=== RUN   TestDownloadOnly/v1.31.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.2/DeleteAll (0.13s)

TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds (0.12s)

=== RUN   TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-440707
--- PASS: TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds (0.12s)

TestBinaryMirror (0.59s)

=== RUN   TestBinaryMirror
I1104 10:37:38.692169   27218 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-739738 --alsologtostderr --binary-mirror http://127.0.0.1:45149 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-739738" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-739738
--- PASS: TestBinaryMirror (0.59s)

TestOffline (101.81s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-263124 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-263124 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m40.74641625s)
helpers_test.go:175: Cleaning up "offline-crio-263124" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-263124
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-263124: (1.059297227s)
--- PASS: TestOffline (101.81s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-746456
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-746456: exit status 85 (59.980976ms)

                                                
                                                
-- stdout --
	* Profile "addons-746456" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-746456"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-746456
addons_test.go:950: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-746456: exit status 85 (59.518891ms)

                                                
                                                
-- stdout --
	* Profile "addons-746456" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-746456"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestAddons/Setup (125.87s)

=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-amd64 start -p addons-746456 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-amd64 start -p addons-746456 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m5.866818952s)
--- PASS: TestAddons/Setup (125.87s)

TestAddons/serial/GCPAuth/Namespaces (1.74s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-746456 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-746456 get secret gcp-auth -n new-namespace
addons_test.go:583: (dbg) Non-zero exit: kubectl --context addons-746456 get secret gcp-auth -n new-namespace: exit status 1 (88.849578ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): secrets "gcp-auth" not found

                                                
                                                
** /stderr **
addons_test.go:575: (dbg) Run:  kubectl --context addons-746456 logs -l app=gcp-auth -n gcp-auth
I1104 10:39:45.706964   27218 retry.go:31] will retry after 1.463160096s: %!w(<nil>): gcp-auth container logs: 
-- stdout --
	2024/11/04 10:39:44 GCP Auth Webhook started!
	2024/11/04 10:39:45 Ready to marshal response ...
	2024/11/04 10:39:45 Ready to write response ...

                                                
                                                
-- /stdout --
addons_test.go:583: (dbg) Run:  kubectl --context addons-746456 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (1.74s)

TestAddons/serial/GCPAuth/FakeCredentials (8.47s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:614: (dbg) Run:  kubectl --context addons-746456 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-746456 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [cbb88fd7-9ca0-443f-811a-4fb498e9f134] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [cbb88fd7-9ca0-443f-811a-4fb498e9f134] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 8.004490583s
addons_test.go:633: (dbg) Run:  kubectl --context addons-746456 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-746456 describe sa gcp-auth-test
addons_test.go:683: (dbg) Run:  kubectl --context addons-746456 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (8.47s)

TestAddons/parallel/Registry (16.4s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 3.874103ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-gh6ft" [8fa29892-d576-414b-9dbb-a78812ace5fd] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.009094217s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-r9qc2" [f8e1cbae-d518-45fa-8228-27e32339f030] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.004223849s
addons_test.go:331: (dbg) Run:  kubectl --context addons-746456 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-746456 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-746456 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.60280966s)
addons_test.go:350: (dbg) Run:  out/minikube-linux-amd64 -p addons-746456 ip
2024/11/04 10:40:19 [DEBUG] GET http://192.168.39.4:5000
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-746456 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.40s)

TestAddons/parallel/InspektorGadget (11.67s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-lkgdl" [5731c12c-25ef-4fca-910b-d52eabd38ad5] Running
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004489633s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-746456 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-746456 addons disable inspektor-gadget --alsologtostderr -v=1: (5.662693091s)
--- PASS: TestAddons/parallel/InspektorGadget (11.67s)

TestAddons/parallel/CSI (63.12s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:488: csi-hostpath-driver pods stabilized in 7.118968ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-746456 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-746456 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-746456 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-746456 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-746456 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-746456 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-746456 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-746456 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-746456 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-746456 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-746456 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-746456 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-746456 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-746456 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-746456 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-746456 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-746456 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-746456 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-746456 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-746456 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-746456 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-746456 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-746456 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-746456 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-746456 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-746456 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-746456 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-746456 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-746456 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-746456 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-746456 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [eee51d16-27f9-4a20-a064-cc7aa943f0f5] Pending
helpers_test.go:344: "task-pv-pod" [eee51d16-27f9-4a20-a064-cc7aa943f0f5] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [eee51d16-27f9-4a20-a064-cc7aa943f0f5] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 12.004464896s
addons_test.go:511: (dbg) Run:  kubectl --context addons-746456 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-746456 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-746456 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-746456 delete pod task-pv-pod
addons_test.go:527: (dbg) Run:  kubectl --context addons-746456 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-746456 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-746456 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-746456 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-746456 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-746456 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-746456 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-746456 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-746456 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [ca745810-fc5d-4ea0-99a5-5e2abe634e9a] Pending
helpers_test.go:344: "task-pv-pod-restore" [ca745810-fc5d-4ea0-99a5-5e2abe634e9a] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [ca745810-fc5d-4ea0-99a5-5e2abe634e9a] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003966276s
addons_test.go:553: (dbg) Run:  kubectl --context addons-746456 delete pod task-pv-pod-restore
addons_test.go:557: (dbg) Run:  kubectl --context addons-746456 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-746456 delete volumesnapshot new-snapshot-demo
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-746456 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-746456 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-746456 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.782407302s)
--- PASS: TestAddons/parallel/CSI (63.12s)

TestAddons/parallel/Headlamp (17.69s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:747: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-746456 --alsologtostderr -v=1
I1104 10:40:03.990658   27218 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-vj7j2" [30176b8e-97bc-49eb-9b68-d2652c673a50] Pending
helpers_test.go:344: "headlamp-7b5c95b59d-vj7j2" [30176b8e-97bc-49eb-9b68-d2652c673a50] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-vj7j2" [30176b8e-97bc-49eb-9b68-d2652c673a50] Running
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.003845341s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-746456 addons disable headlamp --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-746456 addons disable headlamp --alsologtostderr -v=1: (5.850675068s)
--- PASS: TestAddons/parallel/Headlamp (17.69s)

TestAddons/parallel/CloudSpanner (6.5s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-dc5db94f4-k8g5x" [9923c3c7-3ae3-4254-a6cf-5a747b90f240] Running
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003746263s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-746456 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.50s)

TestAddons/parallel/LocalPath (60s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:888: (dbg) Run:  kubectl --context addons-746456 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:894: (dbg) Run:  kubectl --context addons-746456 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:898: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-746456 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-746456 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-746456 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-746456 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-746456 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-746456 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-746456 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-746456 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-746456 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-746456 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-746456 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-746456 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [ec3b860c-2048-426d-9936-d71369e7ab46] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [ec3b860c-2048-426d-9936-d71369e7ab46] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [ec3b860c-2048-426d-9936-d71369e7ab46] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.005063019s
addons_test.go:906: (dbg) Run:  kubectl --context addons-746456 get pvc test-pvc -o=json
addons_test.go:915: (dbg) Run:  out/minikube-linux-amd64 -p addons-746456 ssh "cat /opt/local-path-provisioner/pvc-805b188f-c328-4e68-8920-c8c6b1f9c108_default_test-pvc/file1"
addons_test.go:927: (dbg) Run:  kubectl --context addons-746456 delete pod test-local-path
addons_test.go:931: (dbg) Run:  kubectl --context addons-746456 delete pvc test-pvc
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-746456 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-746456 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.215519907s)
--- PASS: TestAddons/parallel/LocalPath (60.00s)
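
For context, the repeated jsonpath polls above amount to a wait loop on the PVC's phase. Below is a minimal Go sketch of that loop, shelling out to kubectl the same way helpers_test.go does; the context name, PVC name and the target phase "Bound" are taken or inferred from the log, not from the test source.

	// pvc_wait.go: hypothetical helper, not part of the minikube test suite.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// pvcPhase returns the PVC's .status.phase via the same kubectl
	// invocation recorded in the log above.
	func pvcPhase(ctx, name, ns string) (string, error) {
		out, err := exec.Command("kubectl", "--context", ctx,
			"get", "pvc", name, "-n", ns,
			"-o", "jsonpath={.status.phase}").Output()
		return strings.TrimSpace(string(out)), err
	}

	func main() {
		deadline := time.Now().Add(5 * time.Minute) // the test waits 5m0s
		for time.Now().Before(deadline) {
			phase, err := pvcPhase("addons-746456", "test-pvc", "default")
			if err == nil && phase == "Bound" {
				fmt.Println("PVC is Bound")
				return
			}
			time.Sleep(5 * time.Second) // retry on a short interval
		}
		fmt.Println("timed out waiting for test-pvc to bind")
	}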

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (6.47s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-646xz" [2de93991-ff75-4ba5-814e-4fbe32bd9b24] Running
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003656473s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-746456 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.47s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (11.69s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-ll6zv" [b9ac7507-cc18-43fb-b54b-82f4de9ba4a8] Running
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004220081s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-746456 addons disable yakd --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-746456 addons disable yakd --alsologtostderr -v=1: (5.679952254s)
--- PASS: TestAddons/parallel/Yakd (11.69s)

                                                
                                    
x
+
TestCertOptions (59.02s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-530572 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-530572 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (57.591753447s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-530572 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-530572 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-530572 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-530572" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-530572
--- PASS: TestCertOptions (59.02s)
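
The openssl step above checks that the values passed through --apiserver-ips, --apiserver-names and --apiserver-port ended up in the generated API server certificate. A minimal Go sketch of the same inspection with crypto/x509 (not the test's own code); the certificate path is the one shown in the log and normally lives inside the minikube VM, so the file would first have to be copied out or the program run on the node.

	// san_check.go: hypothetical SAN inspection, assuming the cert path from the log.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)

	func main() {
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			panic("no PEM block in apiserver.crt")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// The --apiserver-names values (localhost, www.google.com) should
		// appear among the DNS SANs, and the --apiserver-ips values
		// (127.0.0.1, 192.168.15.15) among the IP SANs.
		fmt.Println("DNS SANs:", cert.DNSNames)
		fmt.Println("IP SANs: ", cert.IPAddresses)
	}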

                                                
                                    
x
+
TestCertExpiration (307.15s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-292397 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-292397 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m26.728827709s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-292397 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-292397 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (39.346103029s)
helpers_test.go:175: Cleaning up "cert-expiration-292397" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-292397
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-292397: (1.070710977s)
--- PASS: TestCertExpiration (307.15s)

                                                
                                    
x
+
TestForceSystemdFlag (61.47s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-293007 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-293007 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m0.477456458s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-293007 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-293007" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-293007
--- PASS: TestForceSystemdFlag (61.47s)
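
The "cat /etc/crio/crio.conf.d/02-crio.conf" step above is presumably verifying that --force-systemd switched CRI-O to the systemd cgroup manager. A rough Go sketch of such a check; the exact key it looks for, cgroup_manager = "systemd", is an assumption here and not something the log itself shows.

	// cgroup_check.go: hypothetical check, assuming the drop-in path from the log
	// and a cgroup_manager = "systemd" line inside it.
	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		data, err := os.ReadFile("/etc/crio/crio.conf.d/02-crio.conf")
		if err != nil {
			panic(err)
		}
		if strings.Contains(string(data), `cgroup_manager = "systemd"`) {
			fmt.Println("CRI-O is configured for the systemd cgroup manager")
		} else {
			fmt.Println("systemd cgroup manager not found in the drop-in")
		}
	}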

                                                
                                    
x
+
TestForceSystemdEnv (40.18s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-302428 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-302428 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (39.367413611s)
helpers_test.go:175: Cleaning up "force-systemd-env-302428" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-302428
--- PASS: TestForceSystemdEnv (40.18s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (3.77s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
I1104 11:53:19.487737   27218 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1104 11:53:19.487895   27218 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W1104 11:53:19.522046   27218 install.go:62] docker-machine-driver-kvm2: exit status 1
W1104 11:53:19.522346   27218 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I1104 11:53:19.522398   27218 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2816862122/001/docker-machine-driver-kvm2
I1104 11:53:19.741890   27218 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate2816862122/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x530dd40 0x530dd40 0x530dd40 0x530dd40 0x530dd40 0x530dd40 0x530dd40] Decompressors:map[bz2:0xc0004b50c0 gz:0xc0004b50c8 tar:0xc0004b5070 tar.bz2:0xc0004b5080 tar.gz:0xc0004b5090 tar.xz:0xc0004b50a0 tar.zst:0xc0004b50b0 tbz2:0xc0004b5080 tgz:0xc0004b5090 txz:0xc0004b50a0 tzst:0xc0004b50b0 xz:0xc0004b50d0 zip:0xc0004b50e0 zst:0xc0004b50d8] Getters:map[file:0xc001f3a6a0 http:0xc000892550 https:0xc0008925a0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I1104 11:53:19.741963   27218 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2816862122/001/docker-machine-driver-kvm2
I1104 11:53:21.592008   27218 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1104 11:53:21.592104   27218 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I1104 11:53:21.621172   27218 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W1104 11:53:21.621217   27218 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W1104 11:53:21.621304   27218 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I1104 11:53:21.621339   27218 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2816862122/002/docker-machine-driver-kvm2
I1104 11:53:21.674852   27218 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate2816862122/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x530dd40 0x530dd40 0x530dd40 0x530dd40 0x530dd40 0x530dd40 0x530dd40] Decompressors:map[bz2:0xc0004b50c0 gz:0xc0004b50c8 tar:0xc0004b5070 tar.bz2:0xc0004b5080 tar.gz:0xc0004b5090 tar.xz:0xc0004b50a0 tar.zst:0xc0004b50b0 tbz2:0xc0004b5080 tgz:0xc0004b5090 txz:0xc0004b50a0 tzst:0xc0004b50b0 xz:0xc0004b50d0 zip:0xc0004b50e0 zst:0xc0004b50d8] Getters:map[file:0xc001e8f530 http:0xc000719810 https:0xc000719860] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I1104 11:53:21.674909   27218 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2816862122/002/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (3.77s)
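
The log above shows the driver download falling back from the arch-suffixed release asset to the common name after the checksum file returns 404. A simplified Go sketch of that try-then-fall-back pattern; it is not minikube's download code (it retries on any failed download rather than only on a missing .sha256 file), and the URLs and destination path simply mirror the ones in the log.

	// driver_fallback.go: hypothetical, simplified fallback download.
	package main

	import (
		"fmt"
		"io"
		"net/http"
		"os"
	)

	// fetch downloads url to dst, treating any non-200 status as an error.
	func fetch(url, dst string) error {
		resp, err := http.Get(url)
		if err != nil {
			return err
		}
		defer resp.Body.Close()
		if resp.StatusCode != http.StatusOK {
			return fmt.Errorf("bad response code: %d", resp.StatusCode)
		}
		f, err := os.Create(dst)
		if err != nil {
			return err
		}
		defer f.Close()
		_, err = io.Copy(f, resp.Body)
		return err
	}

	func main() {
		base := "https://github.com/kubernetes/minikube/releases/download/v1.3.0/"
		dst := "/tmp/docker-machine-driver-kvm2"
		// Arch-specific asset first, then the common name, as in the log.
		if err := fetch(base+"docker-machine-driver-kvm2-amd64", dst); err != nil {
			fmt.Println("arch-specific download failed:", err, "- trying the common version")
			if err := fetch(base+"docker-machine-driver-kvm2", dst); err != nil {
				panic(err)
			}
		}
		fmt.Println("driver downloaded to", dst)
	}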

                                                
                                    
x
+
TestErrorSpam/setup (40.62s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-842934 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-842934 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-842934 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-842934 --driver=kvm2  --container-runtime=crio: (40.622505135s)
--- PASS: TestErrorSpam/setup (40.62s)

                                                
                                    
x
+
TestErrorSpam/start (0.36s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-842934 --log_dir /tmp/nospam-842934 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-842934 --log_dir /tmp/nospam-842934 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-842934 --log_dir /tmp/nospam-842934 start --dry-run
--- PASS: TestErrorSpam/start (0.36s)

                                                
                                    
x
+
TestErrorSpam/status (0.75s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-842934 --log_dir /tmp/nospam-842934 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-842934 --log_dir /tmp/nospam-842934 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-842934 --log_dir /tmp/nospam-842934 status
--- PASS: TestErrorSpam/status (0.75s)

                                                
                                    
x
+
TestErrorSpam/pause (1.53s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-842934 --log_dir /tmp/nospam-842934 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-842934 --log_dir /tmp/nospam-842934 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-842934 --log_dir /tmp/nospam-842934 pause
--- PASS: TestErrorSpam/pause (1.53s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.66s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-842934 --log_dir /tmp/nospam-842934 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-842934 --log_dir /tmp/nospam-842934 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-842934 --log_dir /tmp/nospam-842934 unpause
--- PASS: TestErrorSpam/unpause (1.66s)

                                                
                                    
x
+
TestErrorSpam/stop (5.06s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-842934 --log_dir /tmp/nospam-842934 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-842934 --log_dir /tmp/nospam-842934 stop: (1.577702297s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-842934 --log_dir /tmp/nospam-842934 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-842934 --log_dir /tmp/nospam-842934 stop: (1.787849172s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-842934 --log_dir /tmp/nospam-842934 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-842934 --log_dir /tmp/nospam-842934 stop: (1.690005293s)
--- PASS: TestErrorSpam/stop (5.06s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19906-19898/.minikube/files/etc/test/nested/copy/27218/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (54.95s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-762465 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E1104 10:49:47.409034   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/addons-746456/client.crt: no such file or directory" logger="UnhandledError"
E1104 10:49:47.415475   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/addons-746456/client.crt: no such file or directory" logger="UnhandledError"
E1104 10:49:47.426837   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/addons-746456/client.crt: no such file or directory" logger="UnhandledError"
E1104 10:49:47.448244   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/addons-746456/client.crt: no such file or directory" logger="UnhandledError"
E1104 10:49:47.489644   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/addons-746456/client.crt: no such file or directory" logger="UnhandledError"
E1104 10:49:47.571082   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/addons-746456/client.crt: no such file or directory" logger="UnhandledError"
E1104 10:49:47.732602   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/addons-746456/client.crt: no such file or directory" logger="UnhandledError"
E1104 10:49:48.053995   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/addons-746456/client.crt: no such file or directory" logger="UnhandledError"
E1104 10:49:48.696311   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/addons-746456/client.crt: no such file or directory" logger="UnhandledError"
E1104 10:49:49.977926   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/addons-746456/client.crt: no such file or directory" logger="UnhandledError"
E1104 10:49:52.539759   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/addons-746456/client.crt: no such file or directory" logger="UnhandledError"
E1104 10:49:57.662455   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/addons-746456/client.crt: no such file or directory" logger="UnhandledError"
E1104 10:50:07.903887   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/addons-746456/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-762465 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (54.945161031s)
--- PASS: TestFunctional/serial/StartWithProxy (54.95s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (34.35s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1104 10:50:11.043607   27218 config.go:182] Loaded profile config "functional-762465": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-762465 --alsologtostderr -v=8
E1104 10:50:28.385607   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/addons-746456/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-762465 --alsologtostderr -v=8: (34.350642646s)
functional_test.go:663: soft start took 34.351259208s for "functional-762465" cluster.
I1104 10:50:45.394584   27218 config.go:182] Loaded profile config "functional-762465": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestFunctional/serial/SoftStart (34.35s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-762465 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (3.41s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-762465 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-762465 cache add registry.k8s.io/pause:3.1: (1.124775658s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-762465 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-762465 cache add registry.k8s.io/pause:3.3: (1.174885173s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-762465 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-762465 cache add registry.k8s.io/pause:latest: (1.113612224s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.41s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (1.95s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-762465 /tmp/TestFunctionalserialCacheCmdcacheadd_local3093400937/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-762465 cache add minikube-local-cache-test:functional-762465
functional_test.go:1089: (dbg) Done: out/minikube-linux-amd64 -p functional-762465 cache add minikube-local-cache-test:functional-762465: (1.642849884s)
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-762465 cache delete minikube-local-cache-test:functional-762465
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-762465
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.95s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-762465 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.73s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-762465 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-762465 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-762465 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (213.677391ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-762465 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-linux-amd64 -p functional-762465 cache reload: (1.038363298s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-762465 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.73s)
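
The sequence above is: remove the image with crictl, confirm "crictl inspecti" now fails, run "cache reload", then confirm the image is present again. A small Go sketch of that check-and-repair cycle, shelling out to the same binary, profile and image shown in the log (a hypothetical helper, not functional_test.go).

	// cache_reload.go: hypothetical check-and-repair helper.
	package main

	import (
		"fmt"
		"os/exec"
	)

	// run invokes the minikube binary used throughout this report.
	func run(args ...string) error {
		return exec.Command("out/minikube-linux-amd64", args...).Run()
	}

	// imagePresent mirrors the check above: "crictl inspecti" exits
	// non-zero when the image is missing from the node's runtime.
	func imagePresent(profile, image string) bool {
		return run("-p", profile, "ssh", "sudo", "crictl", "inspecti", image) == nil
	}

	func main() {
		const profile = "functional-762465"
		const image = "registry.k8s.io/pause:latest"

		if !imagePresent(profile, image) {
			// "cache reload" pushes the cached images back into the runtime.
			if err := run("-p", profile, "cache", "reload"); err != nil {
				panic(err)
			}
		}
		fmt.Println("image present:", imagePresent(profile, image))
	}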

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-762465 kubectl -- --context functional-762465 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-762465 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (32.18s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-762465 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1104 10:51:09.347644   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/addons-746456/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-762465 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (32.182096094s)
functional_test.go:761: restart took 32.182200386s for "functional-762465" cluster.
I1104 10:51:25.402849   27218 config.go:182] Loaded profile config "functional-762465": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestFunctional/serial/ExtraConfig (32.18s)

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-762465 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.32s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-762465 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-amd64 -p functional-762465 logs: (1.318605008s)
--- PASS: TestFunctional/serial/LogsCmd (1.32s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.36s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-762465 logs --file /tmp/TestFunctionalserialLogsFileCmd3032108051/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-amd64 -p functional-762465 logs --file /tmp/TestFunctionalserialLogsFileCmd3032108051/001/logs.txt: (1.361536757s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.36s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (4.3s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-762465 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-762465
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-762465: exit status 115 (275.593095ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.244:30983 |
	|-----------|-------------|-------------|-----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-762465 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.30s)

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-762465 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-762465 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-762465 config get cpus: exit status 14 (61.487339ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-762465 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-762465 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-762465 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-762465 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-762465 config get cpus: exit status 14 (69.87528ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.36s)
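
The two non-zero exits above show that "config get" on a key that is not set returns exit status 14. A short Go sketch of how a caller could detect that, using the binary path, profile and key from the log; nothing beyond "key not found" is assumed about code 14.

	// config_get.go: hypothetical caller that distinguishes "unset" from other failures.
	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-amd64", "-p", "functional-762465",
			"config", "get", "cpus")
		err := cmd.Run()

		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			// Exit status 14 is what the log records when the key is unset.
			fmt.Println("cpus is not set; exit code:", exitErr.ExitCode())
			return
		}
		if err != nil {
			panic(err) // the binary could not be started at all
		}
		fmt.Println("cpus is set")
	}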

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (13.81s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-762465 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-762465 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 36262: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (13.81s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-762465 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-762465 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (144.229532ms)

                                                
                                                
-- stdout --
	* [functional-762465] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19906
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19906-19898/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19906-19898/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1104 10:51:48.476396   36125 out.go:345] Setting OutFile to fd 1 ...
	I1104 10:51:48.476638   36125 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1104 10:51:48.476647   36125 out.go:358] Setting ErrFile to fd 2...
	I1104 10:51:48.476651   36125 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1104 10:51:48.476831   36125 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19906-19898/.minikube/bin
	I1104 10:51:48.477320   36125 out.go:352] Setting JSON to false
	I1104 10:51:48.478398   36125 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":5659,"bootTime":1730711849,"procs":230,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1104 10:51:48.478468   36125 start.go:139] virtualization: kvm guest
	I1104 10:51:48.480635   36125 out.go:177] * [functional-762465] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1104 10:51:48.482028   36125 out.go:177]   - MINIKUBE_LOCATION=19906
	I1104 10:51:48.482053   36125 notify.go:220] Checking for updates...
	I1104 10:51:48.484496   36125 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1104 10:51:48.485940   36125 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19906-19898/kubeconfig
	I1104 10:51:48.487090   36125 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19906-19898/.minikube
	I1104 10:51:48.488260   36125 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1104 10:51:48.489362   36125 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1104 10:51:48.490897   36125 config.go:182] Loaded profile config "functional-762465": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1104 10:51:48.491328   36125 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 10:51:48.491366   36125 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 10:51:48.507620   36125 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36413
	I1104 10:51:48.508101   36125 main.go:141] libmachine: () Calling .GetVersion
	I1104 10:51:48.508643   36125 main.go:141] libmachine: Using API Version  1
	I1104 10:51:48.508667   36125 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 10:51:48.509044   36125 main.go:141] libmachine: () Calling .GetMachineName
	I1104 10:51:48.509262   36125 main.go:141] libmachine: (functional-762465) Calling .DriverName
	I1104 10:51:48.510921   36125 driver.go:394] Setting default libvirt URI to qemu:///system
	I1104 10:51:48.511366   36125 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 10:51:48.511456   36125 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 10:51:48.530340   36125 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44783
	I1104 10:51:48.530836   36125 main.go:141] libmachine: () Calling .GetVersion
	I1104 10:51:48.531294   36125 main.go:141] libmachine: Using API Version  1
	I1104 10:51:48.531310   36125 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 10:51:48.531885   36125 main.go:141] libmachine: () Calling .GetMachineName
	I1104 10:51:48.532034   36125 main.go:141] libmachine: (functional-762465) Calling .DriverName
	I1104 10:51:48.566354   36125 out.go:177] * Using the kvm2 driver based on existing profile
	I1104 10:51:48.567481   36125 start.go:297] selected driver: kvm2
	I1104 10:51:48.567498   36125 start.go:901] validating driver "kvm2" against &{Name:functional-762465 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-762465 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.244 Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1104 10:51:48.567616   36125 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1104 10:51:48.569585   36125 out.go:201] 
	W1104 10:51:48.570726   36125 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1104 10:51:48.571862   36125 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-762465 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.29s)

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-762465 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-762465 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (147.757489ms)

                                                
                                                
-- stdout --
	* [functional-762465] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19906
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19906-19898/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19906-19898/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1104 10:51:48.444904   36114 out.go:345] Setting OutFile to fd 1 ...
	I1104 10:51:48.445010   36114 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1104 10:51:48.445019   36114 out.go:358] Setting ErrFile to fd 2...
	I1104 10:51:48.445023   36114 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1104 10:51:48.445318   36114 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19906-19898/.minikube/bin
	I1104 10:51:48.445832   36114 out.go:352] Setting JSON to false
	I1104 10:51:48.446800   36114 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":5659,"bootTime":1730711849,"procs":229,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1104 10:51:48.446894   36114 start.go:139] virtualization: kvm guest
	I1104 10:51:48.450374   36114 out.go:177] * [functional-762465] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	I1104 10:51:48.452251   36114 out.go:177]   - MINIKUBE_LOCATION=19906
	I1104 10:51:48.452251   36114 notify.go:220] Checking for updates...
	I1104 10:51:48.453595   36114 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1104 10:51:48.455475   36114 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19906-19898/kubeconfig
	I1104 10:51:48.456864   36114 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19906-19898/.minikube
	I1104 10:51:48.458226   36114 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1104 10:51:48.459531   36114 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1104 10:51:48.461294   36114 config.go:182] Loaded profile config "functional-762465": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1104 10:51:48.461902   36114 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 10:51:48.461973   36114 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 10:51:48.479213   36114 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44089
	I1104 10:51:48.479661   36114 main.go:141] libmachine: () Calling .GetVersion
	I1104 10:51:48.480201   36114 main.go:141] libmachine: Using API Version  1
	I1104 10:51:48.480227   36114 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 10:51:48.480540   36114 main.go:141] libmachine: () Calling .GetMachineName
	I1104 10:51:48.480726   36114 main.go:141] libmachine: (functional-762465) Calling .DriverName
	I1104 10:51:48.480921   36114 driver.go:394] Setting default libvirt URI to qemu:///system
	I1104 10:51:48.481242   36114 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 10:51:48.481284   36114 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 10:51:48.496427   36114 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34107
	I1104 10:51:48.496823   36114 main.go:141] libmachine: () Calling .GetVersion
	I1104 10:51:48.497326   36114 main.go:141] libmachine: Using API Version  1
	I1104 10:51:48.497351   36114 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 10:51:48.497647   36114 main.go:141] libmachine: () Calling .GetMachineName
	I1104 10:51:48.497782   36114 main.go:141] libmachine: (functional-762465) Calling .DriverName
	I1104 10:51:48.532865   36114 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I1104 10:51:48.534114   36114 start.go:297] selected driver: kvm2
	I1104 10:51:48.534129   36114 start.go:901] validating driver "kvm2" against &{Name:functional-762465 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-762465 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.244 Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1104 10:51:48.534249   36114 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1104 10:51:48.536336   36114 out.go:201] 
	W1104 10:51:48.537574   36114 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1104 10:51:48.538759   36114 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.15s)
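
The French stderr above is the expected localized output: RSRC_INSUFFICIENT_REQ_MEMORY means the requested 250MiB is below the 1800MB usable minimum, so the dry-run exits with status 23. A minimal sketch for reproducing the same check by hand; the LC_ALL/LANG settings are an assumption about how the French locale is selected, since the logged command line does not show it:

# Assumption: minikube picks the locale up from the environment.
LC_ALL=fr_FR.UTF-8 LANG=fr_FR.UTF-8 \
  out/minikube-linux-amd64 start -p functional-762465 --dry-run \
  --memory 250MB --alsologtostderr --driver=kvm2 --container-runtime=crio
echo "exit code: $?"   # expected: 23 (RSRC_INSUFFICIENT_REQ_MEMORY)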

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (0.92s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-762465 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-762465 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-762465 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.92s)
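
The status check exercises three output modes: the default view, a Go-template format string, and JSON. A small sketch of the same calls, assuming the template fields .Host, .Kubelet, .APIServer and .Kubeconfig shown in the logged command are the ones of interest:

out/minikube-linux-amd64 -p functional-762465 status
out/minikube-linux-amd64 -p functional-762465 status \
  -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
out/minikube-linux-amd64 -p functional-762465 status -o json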

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (13.73s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-762465 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-762465 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-n2gpb" [be11d8e3-730a-4eaf-908f-366b4a343e8d] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-n2gpb" [be11d8e3-730a-4eaf-908f-366b4a343e8d] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 13.004799057s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-762465 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.39.244:32551
functional_test.go:1675: http://192.168.39.244:32551: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-67bdd5bbb4-n2gpb

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.244:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.39.244:32551
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (13.73s)
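
For reference, the flow above can be reproduced manually: deploy the echoserver image, expose it as a NodePort on 8080, resolve the node URL through minikube, and query it. A sketch using the same names as the test; the wait and curl steps are additions, the rest mirrors the logged commands:

kubectl --context functional-762465 create deployment hello-node-connect \
  --image=registry.k8s.io/echoserver:1.8
kubectl --context functional-762465 expose deployment hello-node-connect \
  --type=NodePort --port=8080
kubectl --context functional-762465 wait --for=condition=available \
  deployment/hello-node-connect --timeout=120s
URL=$(out/minikube-linux-amd64 -p functional-762465 service hello-node-connect --url)
curl -s "$URL"    # echoserver reports hostname, request headers and body, as above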

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-762465 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-762465 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (33.98s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [3e55c73e-7240-4c0a-ac7e-7b7f64e6305d] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003307852s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-762465 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-762465 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-762465 get pvc myclaim -o=json
I1104 10:51:39.413706   27218 retry.go:31] will retry after 1.246620315s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:230bc9cd-82ae-463b-97d7-ee6067046860 ResourceVersion:708 Generation:0 CreationTimestamp:2024-11-04 10:51:39 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
pv.kubernetes.io/bind-completed:yes pv.kubernetes.io/bound-by-controller:yes volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName:pvc-230bc9cd-82ae-463b-97d7-ee6067046860 StorageClassName:0xc0018cc190 VolumeMode:0xc0018cc1a0 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-762465 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-762465 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [02465ca5-3a16-4853-bb40-2138a5657b43] Pending
helpers_test.go:344: "sp-pod" [02465ca5-3a16-4853-bb40-2138a5657b43] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [02465ca5-3a16-4853-bb40-2138a5657b43] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.004512118s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-762465 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-762465 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-762465 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [4678cce5-803a-43b1-838c-b8378ee1d3c6] Pending
helpers_test.go:344: "sp-pod" [4678cce5-803a-43b1-838c-b8378ee1d3c6] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [4678cce5-803a-43b1-838c-b8378ee1d3c6] Running
2024/11/04 10:52:01 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.070771995s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-762465 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (33.98s)
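
The claim applied from testdata/storage-provisioner/pvc.yaml can be read back from the last-applied-configuration annotation in the retry message above. A sketch that recreates the same PVC and runs the test's write/read check; the heredoc is reconstructed from that annotation rather than copied from the testdata file, and the sp-pod steps assume the test's pod.yaml has already been applied:

kubectl --context functional-762465 apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 500Mi
  volumeMode: Filesystem
EOF
# The storage-provisioner addon binds the claim; then the pod exercises the volume.
kubectl --context functional-762465 get pvc myclaim -o jsonpath='{.status.phase}'
kubectl --context functional-762465 exec sp-pod -- touch /tmp/mount/foo
kubectl --context functional-762465 exec sp-pod -- ls /tmp/mount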

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-762465 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-762465 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.42s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (1.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-762465 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-762465 ssh -n functional-762465 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-762465 cp functional-762465:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3560719317/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-762465 ssh -n functional-762465 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-762465 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-762465 ssh -n functional-762465 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.31s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (22.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-762465 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-w5jjg" [fbee88ff-a247-479c-a6a4-14c14248236f] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-w5jjg" [fbee88ff-a247-479c-a6a4-14c14248236f] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 20.004347242s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-762465 exec mysql-6cdb49bbb-w5jjg -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-762465 exec mysql-6cdb49bbb-w5jjg -- mysql -ppassword -e "show databases;": exit status 1 (113.84661ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1104 10:52:17.920289   27218 retry.go:31] will retry after 974.402194ms: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-762465 exec mysql-6cdb49bbb-w5jjg -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-762465 exec mysql-6cdb49bbb-w5jjg -- mysql -ppassword -e "show databases;": exit status 1 (109.950519ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1104 10:52:19.005598   27218 retry.go:31] will retry after 975.652698ms: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-762465 exec mysql-6cdb49bbb-w5jjg -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (22.51s)
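
The two non-zero exits above are expected: the pod reports Running before mysqld has finished creating its socket, so the test retries with backoff (retry.go) until the query succeeds. A rough shell equivalent of that retry around the same kubectl exec call; the loop bound and sleep are illustrative, not the test's actual backoff:

for i in $(seq 1 10); do
  if kubectl --context functional-762465 exec mysql-6cdb49bbb-w5jjg -- \
       mysql -ppassword -e "show databases;"; then
    break                        # socket is up, query succeeded
  fi
  sleep 2                        # mysqld still initializing; try again
done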

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/27218/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-762465 ssh "sudo cat /etc/test/nested/copy/27218/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.22s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (1.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/27218.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-762465 ssh "sudo cat /etc/ssl/certs/27218.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/27218.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-762465 ssh "sudo cat /usr/share/ca-certificates/27218.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-762465 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/272182.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-762465 ssh "sudo cat /etc/ssl/certs/272182.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/272182.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-762465 ssh "sudo cat /usr/share/ca-certificates/272182.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-762465 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.34s)
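
The test checks each synced certificate both under its file name (27218.pem / 272182.pem) and under a hashed name (51391683.0 / 3ec20f2e.0) in /etc/ssl/certs. A sketch of how to confirm that pairing from inside the VM, assuming the hashed names are the usual OpenSSL subject-hash (c_rehash-style) links:

# The printed hash should match the .0 file name checked above.
out/minikube-linux-amd64 -p functional-762465 ssh \
  "openssl x509 -noout -hash -in /usr/share/ca-certificates/27218.pem"
out/minikube-linux-amd64 -p functional-762465 ssh \
  "sudo ls -l /etc/ssl/certs/51391683.0"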

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-762465 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-762465 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-762465 ssh "sudo systemctl is-active docker": exit status 1 (222.087538ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-762465 ssh "sudo systemctl is-active containerd"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-762465 ssh "sudo systemctl is-active containerd": exit status 1 (213.075359ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.44s)
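
The "exit status 1 ... ssh: Process exited with status 3" pairs are the expected result here: systemctl is-active prints "inactive" and returns 3 for a stopped unit, and minikube ssh surfaces that remote failure as its own non-zero exit. A quick way to see the same thing directly; the crio unit name is an assumption about the active runtime's service:

out/minikube-linux-amd64 -p functional-762465 ssh "sudo systemctl is-active docker"
echo "exit: $?"    # non-zero, docker is disabled under the crio runtime
out/minikube-linux-amd64 -p functional-762465 ssh "sudo systemctl is-active crio"
echo "exit: $?"    # expected 0 for the active runtime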

                                                
                                    
x
+
TestFunctional/parallel/License (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.30s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-762465 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-762465 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-762465 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-762465 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 34972: os: process already finished
helpers_test.go:502: unable to terminate pid 34990: os: process already finished
helpers_test.go:502: unable to terminate pid 35076: os: process already finished
helpers_test.go:508: unable to kill pid 34933: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.47s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-762465 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-762465 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [d34dddbd-6c6e-4864-bb15-3e7f799d4f1f] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [d34dddbd-6c6e-4864-bb15-3e7f799d4f1f] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.003666442s
I1104 10:51:43.503392   27218 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.21s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (12.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-762465 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-762465 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-xr8hx" [03334c0a-fb87-4aef-ab12-fb63425b58ae] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-xr8hx" [03334c0a-fb87-4aef-ab12-fb63425b58ae] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 12.003751845s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (12.23s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-762465 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.100.90.139 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)
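
The tunnel subtests chain together: start a tunnel, wait for nginx-svc to be assigned a LoadBalancer ingress IP (10.100.90.139 here), then reach it directly from the host. A sketch of the same sequence; the curl is an addition on top of the logged commands:

out/minikube-linux-amd64 -p functional-762465 tunnel --alsologtostderr &
TUNNEL_PID=$!
kubectl --context functional-762465 get svc nginx-svc \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
curl -sI http://10.100.90.139      # served by nginx-svc through the tunnel
kill $TUNNEL_PID                   # equivalent of the DeleteTunnel step below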

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-762465 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 35485: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.22s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.45s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "375.938706ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "52.764861ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.43s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "374.63464ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "46.582537ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (9.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-762465 /tmp/TestFunctionalparallelMountCmdany-port2565791720/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1730717505139002570" to /tmp/TestFunctionalparallelMountCmdany-port2565791720/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1730717505139002570" to /tmp/TestFunctionalparallelMountCmdany-port2565791720/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1730717505139002570" to /tmp/TestFunctionalparallelMountCmdany-port2565791720/001/test-1730717505139002570
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-762465 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-762465 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (247.081762ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1104 10:51:45.386441   27218 retry.go:31] will retry after 368.002879ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-762465 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-762465 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Nov  4 10:51 created-by-test
-rw-r--r-- 1 docker docker 24 Nov  4 10:51 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Nov  4 10:51 test-1730717505139002570
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-762465 ssh cat /mount-9p/test-1730717505139002570
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-762465 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [f4d6b5a4-d50b-4b9a-8337-b9e8790a057e] Pending
helpers_test.go:344: "busybox-mount" [f4d6b5a4-d50b-4b9a-8337-b9e8790a057e] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [f4d6b5a4-d50b-4b9a-8337-b9e8790a057e] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [f4d6b5a4-d50b-4b9a-8337-b9e8790a057e] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 7.004414541s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-762465 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-762465 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-762465 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-762465 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-762465 /tmp/TestFunctionalparallelMountCmdany-port2565791720/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (9.40s)
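
The mount test drives a 9p mount from a host temp directory into the VM at /mount-9p; the first findmnt probe fails simply because the mount is still coming up, hence the single retry. A condensed sketch of the same round trip, with a generic host directory standing in for the test's temp path:

HOST_DIR=$(mktemp -d)              # stand-in for the test's temp dir
out/minikube-linux-amd64 mount -p functional-762465 "$HOST_DIR":/mount-9p --alsologtostderr -v=1 &
MOUNT_PID=$!
out/minikube-linux-amd64 -p functional-762465 ssh "findmnt -T /mount-9p | grep 9p"
out/minikube-linux-amd64 -p functional-762465 ssh -- ls -la /mount-9p
out/minikube-linux-amd64 -p functional-762465 ssh "sudo umount -f /mount-9p"
kill $MOUNT_PID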

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-762465 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.47s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-762465 service list -o json
functional_test.go:1494: Took "537.782234ms" to run "out/minikube-linux-amd64 -p functional-762465 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.54s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-762465 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.39.244:31366
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.50s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-762465 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.37s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-762465 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.39.244:31366
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.40s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-762465 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-762465 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.2
registry.k8s.io/kube-proxy:v1.31.2
registry.k8s.io/kube-controller-manager:v1.31.2
registry.k8s.io/kube-apiserver:v1.31.2
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
localhost/minikube-local-cache-test:functional-762465
localhost/kicbase/echo-server:functional-762465
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20241007-36f62932
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-762465 image ls --format short --alsologtostderr:
I1104 10:51:58.668628   37209 out.go:345] Setting OutFile to fd 1 ...
I1104 10:51:58.668742   37209 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1104 10:51:58.668752   37209 out.go:358] Setting ErrFile to fd 2...
I1104 10:51:58.668758   37209 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1104 10:51:58.668967   37209 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19906-19898/.minikube/bin
I1104 10:51:58.669598   37209 config.go:182] Loaded profile config "functional-762465": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1104 10:51:58.669708   37209 config.go:182] Loaded profile config "functional-762465": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1104 10:51:58.670058   37209 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1104 10:51:58.670106   37209 main.go:141] libmachine: Launching plugin server for driver kvm2
I1104 10:51:58.685448   37209 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45861
I1104 10:51:58.685910   37209 main.go:141] libmachine: () Calling .GetVersion
I1104 10:51:58.686495   37209 main.go:141] libmachine: Using API Version  1
I1104 10:51:58.686516   37209 main.go:141] libmachine: () Calling .SetConfigRaw
I1104 10:51:58.686857   37209 main.go:141] libmachine: () Calling .GetMachineName
I1104 10:51:58.687056   37209 main.go:141] libmachine: (functional-762465) Calling .GetState
I1104 10:51:58.688765   37209 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1104 10:51:58.688806   37209 main.go:141] libmachine: Launching plugin server for driver kvm2
I1104 10:51:58.703642   37209 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38105
I1104 10:51:58.704143   37209 main.go:141] libmachine: () Calling .GetVersion
I1104 10:51:58.704580   37209 main.go:141] libmachine: Using API Version  1
I1104 10:51:58.704600   37209 main.go:141] libmachine: () Calling .SetConfigRaw
I1104 10:51:58.704913   37209 main.go:141] libmachine: () Calling .GetMachineName
I1104 10:51:58.705163   37209 main.go:141] libmachine: (functional-762465) Calling .DriverName
I1104 10:51:58.705370   37209 ssh_runner.go:195] Run: systemctl --version
I1104 10:51:58.705394   37209 main.go:141] libmachine: (functional-762465) Calling .GetSSHHostname
I1104 10:51:58.708181   37209 main.go:141] libmachine: (functional-762465) DBG | domain functional-762465 has defined MAC address 52:54:00:c3:4d:0a in network mk-functional-762465
I1104 10:51:58.708546   37209 main.go:141] libmachine: (functional-762465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:4d:0a", ip: ""} in network mk-functional-762465: {Iface:virbr1 ExpiryTime:2024-11-04 11:49:30 +0000 UTC Type:0 Mac:52:54:00:c3:4d:0a Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:functional-762465 Clientid:01:52:54:00:c3:4d:0a}
I1104 10:51:58.708575   37209 main.go:141] libmachine: (functional-762465) DBG | domain functional-762465 has defined IP address 192.168.39.244 and MAC address 52:54:00:c3:4d:0a in network mk-functional-762465
I1104 10:51:58.708771   37209 main.go:141] libmachine: (functional-762465) Calling .GetSSHPort
I1104 10:51:58.708961   37209 main.go:141] libmachine: (functional-762465) Calling .GetSSHKeyPath
I1104 10:51:58.709111   37209 main.go:141] libmachine: (functional-762465) Calling .GetSSHUsername
I1104 10:51:58.709282   37209 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/functional-762465/id_rsa Username:docker}
I1104 10:51:58.821356   37209 ssh_runner.go:195] Run: sudo crictl images --output json
I1104 10:51:58.892410   37209 main.go:141] libmachine: Making call to close driver server
I1104 10:51:58.892422   37209 main.go:141] libmachine: (functional-762465) Calling .Close
I1104 10:51:58.892708   37209 main.go:141] libmachine: (functional-762465) DBG | Closing plugin on server side
I1104 10:51:58.892708   37209 main.go:141] libmachine: Successfully made call to close driver server
I1104 10:51:58.892747   37209 main.go:141] libmachine: Making call to close connection to plugin binary
I1104 10:51:58.892757   37209 main.go:141] libmachine: Making call to close driver server
I1104 10:51:58.892773   37209 main.go:141] libmachine: (functional-762465) Calling .Close
I1104 10:51:58.892980   37209 main.go:141] libmachine: Successfully made call to close driver server
I1104 10:51:58.893004   37209 main.go:141] libmachine: Making call to close connection to plugin binary
I1104 10:51:58.893017   37209 main.go:141] libmachine: (functional-762465) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.27s)
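
As the stderr above shows, image ls is a thin wrapper: minikube opens an SSH session to the node and runs sudo crictl images --output json, then renders the result in the requested format. The same data can be pulled directly, which helps when a formatted view hides a field:

# Raw CRI image list, the source for the short/table/json views.
out/minikube-linux-amd64 -p functional-762465 ssh "sudo crictl images --output json"
# Formatted views offered by minikube itself:
out/minikube-linux-amd64 -p functional-762465 image ls --format short
out/minikube-linux-amd64 -p functional-762465 image ls --format table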

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-762465 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-762465 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| docker.io/library/nginx                 | alpine             | cb8f91112b6b5 | 48.4MB |
| localhost/kicbase/echo-server           | functional-762465  | 9056ab77afb8e | 4.94MB |
| localhost/my-image                      | functional-762465  | ab9877224663a | 1.47MB |
| registry.k8s.io/kube-scheduler          | v1.31.2            | 847c7bc1a5418 | 68.5MB |
| registry.k8s.io/pause                   | 3.10               | 873ed75102791 | 742kB  |
| docker.io/kindest/kindnetd              | v20241007-36f62932 | 3a5bc24055c9e | 95MB   |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| localhost/minikube-local-cache-test     | functional-762465  | 7b04ac8446736 | 3.33kB |
| registry.k8s.io/etcd                    | 3.5.15-0           | 2e96e5913fc06 | 149MB  |
| docker.io/library/nginx                 | latest             | 3b25b682ea82b | 196MB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| gcr.io/k8s-minikube/busybox             | latest             | beae173ccac6a | 1.46MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/kube-apiserver          | v1.31.2            | 9499c9960544e | 95.3MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| registry.k8s.io/kube-controller-manager | v1.31.2            | 0486b6c53a1b5 | 89.5MB |
| registry.k8s.io/kube-proxy              | v1.31.2            | 505d571f5fd56 | 92.8MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/coredns/coredns         | v1.11.3            | c69fa2e9cbf5f | 63.3MB |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-762465 image ls --format table --alsologtostderr:
I1104 10:52:02.634802   37372 out.go:345] Setting OutFile to fd 1 ...
I1104 10:52:02.634902   37372 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1104 10:52:02.634912   37372 out.go:358] Setting ErrFile to fd 2...
I1104 10:52:02.634920   37372 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1104 10:52:02.635171   37372 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19906-19898/.minikube/bin
I1104 10:52:02.635940   37372 config.go:182] Loaded profile config "functional-762465": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1104 10:52:02.636105   37372 config.go:182] Loaded profile config "functional-762465": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1104 10:52:02.636614   37372 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1104 10:52:02.636660   37372 main.go:141] libmachine: Launching plugin server for driver kvm2
I1104 10:52:02.653150   37372 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44921
I1104 10:52:02.653668   37372 main.go:141] libmachine: () Calling .GetVersion
I1104 10:52:02.654162   37372 main.go:141] libmachine: Using API Version  1
I1104 10:52:02.654187   37372 main.go:141] libmachine: () Calling .SetConfigRaw
I1104 10:52:02.654534   37372 main.go:141] libmachine: () Calling .GetMachineName
I1104 10:52:02.654699   37372 main.go:141] libmachine: (functional-762465) Calling .GetState
I1104 10:52:02.656445   37372 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1104 10:52:02.656490   37372 main.go:141] libmachine: Launching plugin server for driver kvm2
I1104 10:52:02.672639   37372 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33707
I1104 10:52:02.673104   37372 main.go:141] libmachine: () Calling .GetVersion
I1104 10:52:02.673632   37372 main.go:141] libmachine: Using API Version  1
I1104 10:52:02.673656   37372 main.go:141] libmachine: () Calling .SetConfigRaw
I1104 10:52:02.674538   37372 main.go:141] libmachine: () Calling .GetMachineName
I1104 10:52:02.674739   37372 main.go:141] libmachine: (functional-762465) Calling .DriverName
I1104 10:52:02.674966   37372 ssh_runner.go:195] Run: systemctl --version
I1104 10:52:02.674989   37372 main.go:141] libmachine: (functional-762465) Calling .GetSSHHostname
I1104 10:52:02.678290   37372 main.go:141] libmachine: (functional-762465) DBG | domain functional-762465 has defined MAC address 52:54:00:c3:4d:0a in network mk-functional-762465
I1104 10:52:02.678779   37372 main.go:141] libmachine: (functional-762465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:4d:0a", ip: ""} in network mk-functional-762465: {Iface:virbr1 ExpiryTime:2024-11-04 11:49:30 +0000 UTC Type:0 Mac:52:54:00:c3:4d:0a Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:functional-762465 Clientid:01:52:54:00:c3:4d:0a}
I1104 10:52:02.678806   37372 main.go:141] libmachine: (functional-762465) DBG | domain functional-762465 has defined IP address 192.168.39.244 and MAC address 52:54:00:c3:4d:0a in network mk-functional-762465
I1104 10:52:02.678928   37372 main.go:141] libmachine: (functional-762465) Calling .GetSSHPort
I1104 10:52:02.679069   37372 main.go:141] libmachine: (functional-762465) Calling .GetSSHKeyPath
I1104 10:52:02.679224   37372 main.go:141] libmachine: (functional-762465) Calling .GetSSHUsername
I1104 10:52:02.679383   37372 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/functional-762465/id_rsa Username:docker}
I1104 10:52:02.769479   37372 ssh_runner.go:195] Run: sudo crictl images --output json
I1104 10:52:02.970009   37372 main.go:141] libmachine: Making call to close driver server
I1104 10:52:02.970027   37372 main.go:141] libmachine: (functional-762465) Calling .Close
I1104 10:52:02.970371   37372 main.go:141] libmachine: Successfully made call to close driver server
I1104 10:52:02.970382   37372 main.go:141] libmachine: (functional-762465) DBG | Closing plugin on server side
I1104 10:52:02.970388   37372 main.go:141] libmachine: Making call to close connection to plugin binary
I1104 10:52:02.970416   37372 main.go:141] libmachine: Making call to close driver server
I1104 10:52:02.970424   37372 main.go:141] libmachine: (functional-762465) Calling .Close
I1104 10:52:02.970687   37372 main.go:141] libmachine: (functional-762465) DBG | Closing plugin on server side
I1104 10:52:02.970754   37372 main.go:141] libmachine: Successfully made call to close driver server
I1104 10:52:02.970806   37372 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.39s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-762465 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-762465 image ls --format json --alsologtostderr:
[{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"ab98772246
63a89c80f71f407f3f77c6d5ac6c722db64c9ca9944b26cce02824","repoDigests":["localhost/my-image@sha256:016a0427ec1a9a0257caf1863bb8b286288f7c879913f6347b488f7a0381c22d"],"repoTags":["localhost/my-image:functional-762465"],"size":"1468599"},{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e","registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"63273227"},{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":["registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d","registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"149009664"},{"id":"847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd285
6","repoDigests":["registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282","registry.k8s.io/kube-scheduler@sha256:a40aba236dfcd0fe9d1258dcb9d22a82d83e9ea6d35f7d3574949b12df928fe5"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.2"],"size":"68457798"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"742080"},{"id":"e35de3654bdd60d3f998753baad216bba1c106d6a636a26b9d27afbcc164a9b3","repoDigests":["docker.io/library/c87d9dcee7ef30b89ad526d94be13e4262e9dbf662376de5edc1159e1fcfbf83-tmp@sha256:dd7dcbc260a5d5575576d93d1024922602ed715b98a4fa609e2b733792f6852d"],"repoTags":[],"size":"1466018"},{"id":"cb8f91112b6b50ead202f48bbf81cec4b34c254417254efd94c803f7dd718045","repoDigests":["docker.io/l
ibrary/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250","docker.io/library/nginx@sha256:ae136e431e76e12e5d84979ea5e2ffff4dd9589c2435c8bb9e33e6c3960111d3"],"repoTags":["docker.io/library/nginx:alpine"],"size":"48414943"},{"id":"3b25b682ea82b2db3cc4fd48db818be788ee3f902ac7378090cf2624ec2442df","repoDigests":["docker.io/library/nginx@sha256:28402db69fec7c17e179ea87882667f1e054391138f77ffaf0c3eb388efc3ffb","docker.io/library/nginx@sha256:367678a80c0be120f67f3adfccc2f408bd2c1319ed98c1975ac88e750d0efe26"],"repoTags":["docker.io/library/nginx:latest"],"size":"195818008"},{"id":"7b04ac8446736ef3a0228d2c1e74d93dcd926be08f378eb2bda4264be3eed414","repoDigests":["localhost/minikube-local-cache-test@sha256:71bc5c3133ce67c0f825253908e4412d25047e497c602c04156305ce9fc5ed64"],"repoTags":["localhost/minikube-local-cache-test:functional-762465"],"size":"3330"},{"id":"0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:4ba1
6ce7d80945dc4bb8e85ac0794c6171bfa8a55c94fe5be415afb4c3eb938c","registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.2"],"size":"89474374"},{"id":"505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38","repoDigests":["registry.k8s.io/kube-proxy@sha256:22535649599e9f22b1b857afcbd9a8b36be238b2b3ea68e47f60bedcea48cd3b","registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.2"],"size":"92783513"},{"id":"3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52","repoDigests":["docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387","docker.io/kindest/kindnetd@sha256:e1b7077a015216fd2772941babf3d3204abecf98b97d82ecd149d00212c55fa7"],"repoTags":["docker.io/kindest/kindnetd:v20241007-36f62932"],"size":"94965812"},{"id":"07655ddf2eebe5d250f7a72c25f
638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d0
3e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-762465"],"size":"4943877"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173","repoDigests":["registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0","registry.k8s.io/kube-apiserver@sha256:a4fdc0ebc2950d76f2859c5
f38f2d05b760ed09fd8006d7a8d98dd9b30bc55da"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.2"],"size":"95274464"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-762465 image ls --format json --alsologtostderr:
I1104 10:52:02.410599   37324 out.go:345] Setting OutFile to fd 1 ...
I1104 10:52:02.410712   37324 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1104 10:52:02.410722   37324 out.go:358] Setting ErrFile to fd 2...
I1104 10:52:02.410726   37324 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1104 10:52:02.410912   37324 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19906-19898/.minikube/bin
I1104 10:52:02.411447   37324 config.go:182] Loaded profile config "functional-762465": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1104 10:52:02.411544   37324 config.go:182] Loaded profile config "functional-762465": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1104 10:52:02.411904   37324 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1104 10:52:02.411944   37324 main.go:141] libmachine: Launching plugin server for driver kvm2
I1104 10:52:02.426197   37324 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44027
I1104 10:52:02.426657   37324 main.go:141] libmachine: () Calling .GetVersion
I1104 10:52:02.427294   37324 main.go:141] libmachine: Using API Version  1
I1104 10:52:02.427318   37324 main.go:141] libmachine: () Calling .SetConfigRaw
I1104 10:52:02.427681   37324 main.go:141] libmachine: () Calling .GetMachineName
I1104 10:52:02.427893   37324 main.go:141] libmachine: (functional-762465) Calling .GetState
I1104 10:52:02.429778   37324 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1104 10:52:02.429814   37324 main.go:141] libmachine: Launching plugin server for driver kvm2
I1104 10:52:02.444853   37324 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34829
I1104 10:52:02.445247   37324 main.go:141] libmachine: () Calling .GetVersion
I1104 10:52:02.445717   37324 main.go:141] libmachine: Using API Version  1
I1104 10:52:02.445741   37324 main.go:141] libmachine: () Calling .SetConfigRaw
I1104 10:52:02.446140   37324 main.go:141] libmachine: () Calling .GetMachineName
I1104 10:52:02.446308   37324 main.go:141] libmachine: (functional-762465) Calling .DriverName
I1104 10:52:02.446503   37324 ssh_runner.go:195] Run: systemctl --version
I1104 10:52:02.446528   37324 main.go:141] libmachine: (functional-762465) Calling .GetSSHHostname
I1104 10:52:02.449446   37324 main.go:141] libmachine: (functional-762465) DBG | domain functional-762465 has defined MAC address 52:54:00:c3:4d:0a in network mk-functional-762465
I1104 10:52:02.449780   37324 main.go:141] libmachine: (functional-762465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:4d:0a", ip: ""} in network mk-functional-762465: {Iface:virbr1 ExpiryTime:2024-11-04 11:49:30 +0000 UTC Type:0 Mac:52:54:00:c3:4d:0a Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:functional-762465 Clientid:01:52:54:00:c3:4d:0a}
I1104 10:52:02.449807   37324 main.go:141] libmachine: (functional-762465) DBG | domain functional-762465 has defined IP address 192.168.39.244 and MAC address 52:54:00:c3:4d:0a in network mk-functional-762465
I1104 10:52:02.449937   37324 main.go:141] libmachine: (functional-762465) Calling .GetSSHPort
I1104 10:52:02.450122   37324 main.go:141] libmachine: (functional-762465) Calling .GetSSHKeyPath
I1104 10:52:02.450250   37324 main.go:141] libmachine: (functional-762465) Calling .GetSSHUsername
I1104 10:52:02.450375   37324 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/functional-762465/id_rsa Username:docker}
I1104 10:52:02.530759   37324 ssh_runner.go:195] Run: sudo crictl images --output json
I1104 10:52:02.572435   37324 main.go:141] libmachine: Making call to close driver server
I1104 10:52:02.572446   37324 main.go:141] libmachine: (functional-762465) Calling .Close
I1104 10:52:02.572716   37324 main.go:141] libmachine: (functional-762465) DBG | Closing plugin on server side
I1104 10:52:02.572743   37324 main.go:141] libmachine: Successfully made call to close driver server
I1104 10:52:02.572763   37324 main.go:141] libmachine: Making call to close connection to plugin binary
I1104 10:52:02.572777   37324 main.go:141] libmachine: Making call to close driver server
I1104 10:52:02.572790   37324 main.go:141] libmachine: (functional-762465) Calling .Close
I1104 10:52:02.573007   37324 main.go:141] libmachine: Successfully made call to close driver server
I1104 10:52:02.573021   37324 main.go:141] libmachine: Making call to close connection to plugin binary
I1104 10:52:02.573040   37324 main.go:141] libmachine: (functional-762465) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)
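Note: the stdout above from "image ls --format json" is a single JSON array of image records with id, repoDigests, repoTags, and size fields. The following Go sketch, which is not part of the test suite, shows one way to decode that shape; the binary path and profile name are taken from this particular run and would differ elsewhere.

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// imageRecord mirrors the fields visible in the JSON output above.
type imageRecord struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // bytes, reported as a string
}

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-762465",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		log.Fatal(err)
	}
	var images []imageRecord
	if err := json.Unmarshal(out, &images); err != nil {
		log.Fatal(err)
	}
	for _, img := range images {
		tag := "<untagged>"
		if len(img.RepoTags) > 0 {
			tag = img.RepoTags[0]
		}
		fmt.Printf("%-60s %s bytes\n", tag, img.Size)
	}
}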

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-762465 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-762465 image ls --format yaml --alsologtostderr:
- id: 847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282
- registry.k8s.io/kube-scheduler@sha256:a40aba236dfcd0fe9d1258dcb9d22a82d83e9ea6d35f7d3574949b12df928fe5
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.2
size: "68457798"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
- registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "63273227"
- id: 0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:4ba16ce7d80945dc4bb8e85ac0794c6171bfa8a55c94fe5be415afb4c3eb938c
- registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.2
size: "89474374"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests:
- registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "149009664"
- id: 9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0
- registry.k8s.io/kube-apiserver@sha256:a4fdc0ebc2950d76f2859c5f38f2d05b760ed09fd8006d7a8d98dd9b30bc55da
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.2
size: "95274464"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "742080"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 7b04ac8446736ef3a0228d2c1e74d93dcd926be08f378eb2bda4264be3eed414
repoDigests:
- localhost/minikube-local-cache-test@sha256:71bc5c3133ce67c0f825253908e4412d25047e497c602c04156305ce9fc5ed64
repoTags:
- localhost/minikube-local-cache-test:functional-762465
size: "3330"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38
repoDigests:
- registry.k8s.io/kube-proxy@sha256:22535649599e9f22b1b857afcbd9a8b36be238b2b3ea68e47f60bedcea48cd3b
- registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe
repoTags:
- registry.k8s.io/kube-proxy:v1.31.2
size: "92783513"
- id: cb8f91112b6b50ead202f48bbf81cec4b34c254417254efd94c803f7dd718045
repoDigests:
- docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250
- docker.io/library/nginx@sha256:ae136e431e76e12e5d84979ea5e2ffff4dd9589c2435c8bb9e33e6c3960111d3
repoTags:
- docker.io/library/nginx:alpine
size: "48414943"
- id: 3b25b682ea82b2db3cc4fd48db818be788ee3f902ac7378090cf2624ec2442df
repoDigests:
- docker.io/library/nginx@sha256:28402db69fec7c17e179ea87882667f1e054391138f77ffaf0c3eb388efc3ffb
- docker.io/library/nginx@sha256:367678a80c0be120f67f3adfccc2f408bd2c1319ed98c1975ac88e750d0efe26
repoTags:
- docker.io/library/nginx:latest
size: "195818008"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-762465
size: "4943877"
- id: 3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52
repoDigests:
- docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387
- docker.io/kindest/kindnetd@sha256:e1b7077a015216fd2772941babf3d3204abecf98b97d82ecd149d00212c55fa7
repoTags:
- docker.io/kindest/kindnetd:v20241007-36f62932
size: "94965812"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-762465 image ls --format yaml --alsologtostderr:
I1104 10:51:58.953986   37232 out.go:345] Setting OutFile to fd 1 ...
I1104 10:51:58.954306   37232 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1104 10:51:58.954323   37232 out.go:358] Setting ErrFile to fd 2...
I1104 10:51:58.954330   37232 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1104 10:51:58.954618   37232 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19906-19898/.minikube/bin
I1104 10:51:58.955440   37232 config.go:182] Loaded profile config "functional-762465": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1104 10:51:58.955588   37232 config.go:182] Loaded profile config "functional-762465": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1104 10:51:58.956174   37232 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1104 10:51:58.956231   37232 main.go:141] libmachine: Launching plugin server for driver kvm2
I1104 10:51:58.970908   37232 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37159
I1104 10:51:58.971414   37232 main.go:141] libmachine: () Calling .GetVersion
I1104 10:51:58.971974   37232 main.go:141] libmachine: Using API Version  1
I1104 10:51:58.971996   37232 main.go:141] libmachine: () Calling .SetConfigRaw
I1104 10:51:58.972321   37232 main.go:141] libmachine: () Calling .GetMachineName
I1104 10:51:58.972523   37232 main.go:141] libmachine: (functional-762465) Calling .GetState
I1104 10:51:58.974425   37232 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1104 10:51:58.974485   37232 main.go:141] libmachine: Launching plugin server for driver kvm2
I1104 10:51:58.988982   37232 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44947
I1104 10:51:58.989381   37232 main.go:141] libmachine: () Calling .GetVersion
I1104 10:51:58.989851   37232 main.go:141] libmachine: Using API Version  1
I1104 10:51:58.989872   37232 main.go:141] libmachine: () Calling .SetConfigRaw
I1104 10:51:58.990212   37232 main.go:141] libmachine: () Calling .GetMachineName
I1104 10:51:58.990394   37232 main.go:141] libmachine: (functional-762465) Calling .DriverName
I1104 10:51:58.990599   37232 ssh_runner.go:195] Run: systemctl --version
I1104 10:51:58.990621   37232 main.go:141] libmachine: (functional-762465) Calling .GetSSHHostname
I1104 10:51:58.993172   37232 main.go:141] libmachine: (functional-762465) DBG | domain functional-762465 has defined MAC address 52:54:00:c3:4d:0a in network mk-functional-762465
I1104 10:51:58.993567   37232 main.go:141] libmachine: (functional-762465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:4d:0a", ip: ""} in network mk-functional-762465: {Iface:virbr1 ExpiryTime:2024-11-04 11:49:30 +0000 UTC Type:0 Mac:52:54:00:c3:4d:0a Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:functional-762465 Clientid:01:52:54:00:c3:4d:0a}
I1104 10:51:58.993595   37232 main.go:141] libmachine: (functional-762465) DBG | domain functional-762465 has defined IP address 192.168.39.244 and MAC address 52:54:00:c3:4d:0a in network mk-functional-762465
I1104 10:51:58.993712   37232 main.go:141] libmachine: (functional-762465) Calling .GetSSHPort
I1104 10:51:58.993899   37232 main.go:141] libmachine: (functional-762465) Calling .GetSSHKeyPath
I1104 10:51:58.994051   37232 main.go:141] libmachine: (functional-762465) Calling .GetSSHUsername
I1104 10:51:58.994210   37232 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/functional-762465/id_rsa Username:docker}
I1104 10:51:59.106769   37232 ssh_runner.go:195] Run: sudo crictl images --output json
I1104 10:51:59.201090   37232 main.go:141] libmachine: Making call to close driver server
I1104 10:51:59.201106   37232 main.go:141] libmachine: (functional-762465) Calling .Close
I1104 10:51:59.201478   37232 main.go:141] libmachine: (functional-762465) DBG | Closing plugin on server side
I1104 10:51:59.201509   37232 main.go:141] libmachine: Successfully made call to close driver server
I1104 10:51:59.201523   37232 main.go:141] libmachine: Making call to close connection to plugin binary
I1104 10:51:59.201531   37232 main.go:141] libmachine: Making call to close driver server
I1104 10:51:59.201539   37232 main.go:141] libmachine: (functional-762465) Calling .Close
I1104 10:51:59.201770   37232 main.go:141] libmachine: Successfully made call to close driver server
I1104 10:51:59.201787   37232 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.31s)
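Note: the YAML listing above carries the same records as the JSON listing. A minimal decoding sketch for that form, assuming the gopkg.in/yaml.v3 module and the same binary and profile as this run (explicit yaml tags are needed because the keys are camelCase):

package main

import (
	"fmt"
	"log"
	"os/exec"

	"gopkg.in/yaml.v3"
)

// imageRecord matches the keys shown in the YAML output above.
type imageRecord struct {
	ID          string   `yaml:"id"`
	RepoDigests []string `yaml:"repoDigests"`
	RepoTags    []string `yaml:"repoTags"`
	Size        string   `yaml:"size"`
}

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-762465",
		"image", "ls", "--format", "yaml").Output()
	if err != nil {
		log.Fatal(err)
	}
	var images []imageRecord
	if err := yaml.Unmarshal(out, &images); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%d images reported by the runtime\n", len(images))
}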

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (3.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-762465 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-762465 ssh pgrep buildkitd: exit status 1 (231.549413ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-762465 image build -t localhost/my-image:functional-762465 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-762465 image build -t localhost/my-image:functional-762465 testdata/build --alsologtostderr: (2.916291577s)
functional_test.go:320: (dbg) Stdout: out/minikube-linux-amd64 -p functional-762465 image build -t localhost/my-image:functional-762465 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> e35de3654bd
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-762465
--> ab987722466
Successfully tagged localhost/my-image:functional-762465
ab9877224663a89c80f71f407f3f77c6d5ac6c722db64c9ca9944b26cce02824
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-762465 image build -t localhost/my-image:functional-762465 testdata/build --alsologtostderr:
I1104 10:51:59.525781   37302 out.go:345] Setting OutFile to fd 1 ...
I1104 10:51:59.525965   37302 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1104 10:51:59.525979   37302 out.go:358] Setting ErrFile to fd 2...
I1104 10:51:59.525985   37302 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1104 10:51:59.526368   37302 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19906-19898/.minikube/bin
I1104 10:51:59.527350   37302 config.go:182] Loaded profile config "functional-762465": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1104 10:51:59.527954   37302 config.go:182] Loaded profile config "functional-762465": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1104 10:51:59.528455   37302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1104 10:51:59.528505   37302 main.go:141] libmachine: Launching plugin server for driver kvm2
I1104 10:51:59.545556   37302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46591
I1104 10:51:59.546121   37302 main.go:141] libmachine: () Calling .GetVersion
I1104 10:51:59.546682   37302 main.go:141] libmachine: Using API Version  1
I1104 10:51:59.546701   37302 main.go:141] libmachine: () Calling .SetConfigRaw
I1104 10:51:59.547074   37302 main.go:141] libmachine: () Calling .GetMachineName
I1104 10:51:59.547305   37302 main.go:141] libmachine: (functional-762465) Calling .GetState
I1104 10:51:59.549439   37302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1104 10:51:59.549489   37302 main.go:141] libmachine: Launching plugin server for driver kvm2
I1104 10:51:59.564827   37302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35743
I1104 10:51:59.565501   37302 main.go:141] libmachine: () Calling .GetVersion
I1104 10:51:59.566026   37302 main.go:141] libmachine: Using API Version  1
I1104 10:51:59.566061   37302 main.go:141] libmachine: () Calling .SetConfigRaw
I1104 10:51:59.566401   37302 main.go:141] libmachine: () Calling .GetMachineName
I1104 10:51:59.566608   37302 main.go:141] libmachine: (functional-762465) Calling .DriverName
I1104 10:51:59.566814   37302 ssh_runner.go:195] Run: systemctl --version
I1104 10:51:59.566846   37302 main.go:141] libmachine: (functional-762465) Calling .GetSSHHostname
I1104 10:51:59.569838   37302 main.go:141] libmachine: (functional-762465) DBG | domain functional-762465 has defined MAC address 52:54:00:c3:4d:0a in network mk-functional-762465
I1104 10:51:59.570282   37302 main.go:141] libmachine: (functional-762465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:4d:0a", ip: ""} in network mk-functional-762465: {Iface:virbr1 ExpiryTime:2024-11-04 11:49:30 +0000 UTC Type:0 Mac:52:54:00:c3:4d:0a Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:functional-762465 Clientid:01:52:54:00:c3:4d:0a}
I1104 10:51:59.570304   37302 main.go:141] libmachine: (functional-762465) DBG | domain functional-762465 has defined IP address 192.168.39.244 and MAC address 52:54:00:c3:4d:0a in network mk-functional-762465
I1104 10:51:59.570568   37302 main.go:141] libmachine: (functional-762465) Calling .GetSSHPort
I1104 10:51:59.570709   37302 main.go:141] libmachine: (functional-762465) Calling .GetSSHKeyPath
I1104 10:51:59.570820   37302 main.go:141] libmachine: (functional-762465) Calling .GetSSHUsername
I1104 10:51:59.570926   37302 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/functional-762465/id_rsa Username:docker}
I1104 10:51:59.683687   37302 build_images.go:161] Building image from path: /tmp/build.1901152729.tar
I1104 10:51:59.683761   37302 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1104 10:51:59.693374   37302 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1901152729.tar
I1104 10:51:59.697572   37302 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1901152729.tar: stat -c "%s %y" /var/lib/minikube/build/build.1901152729.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1901152729.tar': No such file or directory
I1104 10:51:59.697610   37302 ssh_runner.go:362] scp /tmp/build.1901152729.tar --> /var/lib/minikube/build/build.1901152729.tar (3072 bytes)
I1104 10:51:59.722892   37302 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1901152729
I1104 10:51:59.732090   37302 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1901152729 -xf /var/lib/minikube/build/build.1901152729.tar
I1104 10:51:59.741512   37302 crio.go:315] Building image: /var/lib/minikube/build/build.1901152729
I1104 10:51:59.741583   37302 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-762465 /var/lib/minikube/build/build.1901152729 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1104 10:52:02.330370   37302 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-762465 /var/lib/minikube/build/build.1901152729 --cgroup-manager=cgroupfs: (2.588760439s)
I1104 10:52:02.330460   37302 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1901152729
I1104 10:52:02.341377   37302 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1901152729.tar
I1104 10:52:02.351060   37302 build_images.go:217] Built localhost/my-image:functional-762465 from /tmp/build.1901152729.tar
I1104 10:52:02.351089   37302 build_images.go:133] succeeded building to: functional-762465
I1104 10:52:02.351094   37302 build_images.go:134] failed building to: 
I1104 10:52:02.351112   37302 main.go:141] libmachine: Making call to close driver server
I1104 10:52:02.351123   37302 main.go:141] libmachine: (functional-762465) Calling .Close
I1104 10:52:02.351385   37302 main.go:141] libmachine: (functional-762465) DBG | Closing plugin on server side
I1104 10:52:02.351451   37302 main.go:141] libmachine: Successfully made call to close driver server
I1104 10:52:02.351468   37302 main.go:141] libmachine: Making call to close connection to plugin binary
I1104 10:52:02.351482   37302 main.go:141] libmachine: Making call to close driver server
I1104 10:52:02.351492   37302 main.go:141] libmachine: (functional-762465) Calling .Close
I1104 10:52:02.351784   37302 main.go:141] libmachine: (functional-762465) DBG | Closing plugin on server side
I1104 10:52:02.351786   37302 main.go:141] libmachine: Successfully made call to close driver server
I1104 10:52:02.351824   37302 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-762465 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.38s)
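Note: as the stderr above shows, with the crio runtime the build context is copied into the guest as a tar and built there with "sudo podman build" (buildkitd is not running, hence the failed pgrep). A small sketch, not the test's implementation, that repeats the same build-and-verify sequence against this run's binary and profile:

package main

import (
	"log"
	"os/exec"
	"strings"
)

func main() {
	// mk runs the same minikube binary used throughout this report.
	mk := func(args ...string) string {
		out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
		if err != nil {
			log.Fatalf("minikube %v: %v\n%s", args, err, out)
		}
		return string(out)
	}
	// Build from the same context directory the test uses, then confirm the
	// image is visible to the container runtime.
	mk("-p", "functional-762465", "image", "build",
		"-t", "localhost/my-image:functional-762465", "testdata/build")
	if !strings.Contains(mk("-p", "functional-762465", "image", "ls"), "localhost/my-image") {
		log.Fatal("built image not visible in image ls output")
	}
	log.Println("localhost/my-image:functional-762465 is present in the runtime")
}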

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (1.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.470316167s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-762465
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.49s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-762465 image load --daemon kicbase/echo-server:functional-762465 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-amd64 -p functional-762465 image load --daemon kicbase/echo-server:functional-762465 --alsologtostderr: (1.136243116s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-762465 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.34s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.85s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-762465 image load --daemon kicbase/echo-server:functional-762465 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-762465 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.85s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-762465
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-762465 image load --daemon kicbase/echo-server:functional-762465 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-762465 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.61s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-762465 image save kicbase/echo-server:functional-762465 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.55s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (1.97s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-762465 /tmp/TestFunctionalparallelMountCmdspecific-port2167901154/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-762465 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-762465 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (227.70866ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1104 10:51:54.770671   27218 retry.go:31] will retry after 651.016938ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-762465 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-762465 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-762465 /tmp/TestFunctionalparallelMountCmdspecific-port2167901154/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-762465 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-762465 ssh "sudo umount -f /mount-9p": exit status 1 (212.083251ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-762465 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-762465 /tmp/TestFunctionalparallelMountCmdspecific-port2167901154/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.97s)
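Note: the "will retry after ..." lines above come from the test harness polling until the 9p mount is visible. The helper below only mirrors that retry-with-fixed-wait pattern as an illustration; it is not minikube's retry.go implementation, and the probe in main is hypothetical.

package main

import (
	"fmt"
	"time"
)

// retryUntil keeps calling fn, sleeping wait between failures, up to attempts tries.
func retryUntil(attempts int, wait time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		fmt.Printf("will retry after %v: %v\n", wait, err)
		time.Sleep(wait)
	}
	return err
}

func main() {
	// Hypothetical probe that succeeds on the second attempt, standing in for
	// "findmnt -T /mount-9p | grep 9p" over ssh.
	calls := 0
	err := retryUntil(3, 650*time.Millisecond, func() error {
		calls++
		if calls < 2 {
			return fmt.Errorf("mount not ready yet")
		}
		return nil
	})
	fmt.Println("result:", err)
}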

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (0.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-762465 image rm kicbase/echo-server:functional-762465 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-762465 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.58s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-762465 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-762465 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.18s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.84s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-762465
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-762465 image save --daemon kicbase/echo-server:functional-762465 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-762465
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.84s)
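Note: taken together, ImageSaveToFile, ImageRemove, ImageLoadFromFile, and ImageSaveDaemon above exercise a save/remove/load round trip for the echo-server image. A condensed sketch of the file-based part of that sequence; the tarball path here is hypothetical (the test wrote into its Jenkins workspace), while the subcommands and image name are the ones shown above.

package main

import (
	"log"
	"os/exec"
)

// run invokes the minikube binary used in this report and aborts on failure.
func run(args ...string) {
	out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
	if err != nil {
		log.Fatalf("minikube %v: %v\n%s", args, err, out)
	}
}

func main() {
	const profile = "functional-762465"
	const tarball = "/tmp/echo-server-save.tar" // hypothetical path
	run("-p", profile, "image", "save", "kicbase/echo-server:"+profile, tarball)
	run("-p", profile, "image", "rm", "kicbase/echo-server:"+profile)
	run("-p", profile, "image", "load", tarball)
	run("-p", profile, "image", "ls")
}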

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (1.83s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-762465 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2821617294/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-762465 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2821617294/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-762465 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2821617294/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-762465 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-762465 ssh "findmnt -T" /mount1: exit status 1 (287.501182ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1104 10:51:56.801609   27218 retry.go:31] will retry after 623.872041ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-762465 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-762465 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-762465 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-762465 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-762465 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2821617294/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-762465 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2821617294/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-762465 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2821617294/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.83s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-762465 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (0.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-762465 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.68s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-762465 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.10s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-762465 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-762465 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.10s)

                                                
                                    
x
+
TestFunctional/delete_echo-server_images (0.03s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-762465
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-762465
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-762465
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StartCluster (188.99s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-931571 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1104 10:52:31.269132   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/addons-746456/client.crt: no such file or directory" logger="UnhandledError"
E1104 10:54:47.411111   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/addons-746456/client.crt: no such file or directory" logger="UnhandledError"
E1104 10:55:15.110707   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/addons-746456/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-931571 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m8.349353854s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-931571 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (188.99s)
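Note: the start command above brings up an HA profile (--ha) with multiple control-plane nodes before the status check. A small sketch, under the assumption that kubectl is on PATH and the kubeconfig context from this run (ha-931571) exists, that simply counts the nodes the cluster reports:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("kubectl", "--context", "ha-931571",
		"get", "nodes", "--no-headers").Output()
	if err != nil {
		log.Fatal(err)
	}
	lines := strings.Split(strings.TrimSpace(string(out)), "\n")
	fmt.Printf("cluster reports %d node(s)\n", len(lines))
	for _, l := range lines {
		fmt.Println(" ", l)
	}
}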

                                                
                                    
x
+
TestMultiControlPlane/serial/DeployApp (5.83s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-931571 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-931571 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-931571 -- rollout status deployment/busybox: (3.820343084s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-931571 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-931571 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-931571 -- exec busybox-7dff88458-lqgb9 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-931571 -- exec busybox-7dff88458-nslmz -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-931571 -- exec busybox-7dff88458-w9wmp -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-931571 -- exec busybox-7dff88458-lqgb9 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-931571 -- exec busybox-7dff88458-nslmz -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-931571 -- exec busybox-7dff88458-w9wmp -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-931571 -- exec busybox-7dff88458-lqgb9 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-931571 -- exec busybox-7dff88458-nslmz -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-931571 -- exec busybox-7dff88458-w9wmp -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.83s)
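Note: the steps above roll out a busybox deployment and then run nslookup in each pod for three names of increasing specificity. The sketch below repeats that probe for one pod; the pod name is the one that happened to exist in this run and is therefore ephemeral, and kubectl on PATH is assumed.

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	names := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}
	for _, name := range names {
		out, err := exec.Command("kubectl", "--context", "ha-931571", "exec",
			"busybox-7dff88458-lqgb9", "--", "nslookup", name).CombinedOutput()
		if err != nil {
			log.Fatalf("nslookup %s: %v\n%s", name, err, out)
		}
		fmt.Printf("%s resolved\n", name)
	}
}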

                                                
                                    
x
+
TestMultiControlPlane/serial/PingHostFromPods (1.16s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-931571 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-931571 -- exec busybox-7dff88458-lqgb9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-931571 -- exec busybox-7dff88458-lqgb9 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-931571 -- exec busybox-7dff88458-nslmz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-931571 -- exec busybox-7dff88458-nslmz -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-931571 -- exec busybox-7dff88458-w9wmp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-931571 -- exec busybox-7dff88458-w9wmp -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.16s)
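Note: the awk/cut pipeline above extracts the address of host.minikube.internal from nslookup output before pinging it. A minimal Go sketch of the same lookup; it only resolves when run where cluster DNS (or the guest's /etc/hosts entry) is visible, such as inside one of the busybox pods, and the expected address comes from this run's network.

package main

import (
	"fmt"
	"log"
	"net"
)

func main() {
	addrs, err := net.LookupHost("host.minikube.internal")
	if err != nil {
		log.Fatal(err)
	}
	for _, a := range addrs {
		fmt.Println("host.minikube.internal ->", a) // 192.168.39.1 in this run
	}
}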

                                                
                                    
x
+
TestMultiControlPlane/serial/AddWorkerNode (55.58s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-931571 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-931571 -v=7 --alsologtostderr: (54.76861395s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-931571 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (55.58s)

                                                
                                    
x
+
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-931571 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.83s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
E1104 10:56:33.165086   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/functional-762465/client.crt: no such file or directory" logger="UnhandledError"
E1104 10:56:33.171504   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/functional-762465/client.crt: no such file or directory" logger="UnhandledError"
E1104 10:56:33.182915   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/functional-762465/client.crt: no such file or directory" logger="UnhandledError"
E1104 10:56:33.204321   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/functional-762465/client.crt: no such file or directory" logger="UnhandledError"
E1104 10:56:33.245751   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/functional-762465/client.crt: no such file or directory" logger="UnhandledError"
E1104 10:56:33.327188   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/functional-762465/client.crt: no such file or directory" logger="UnhandledError"
E1104 10:56:33.488698   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/functional-762465/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.83s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (12.41s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-931571 status --output json -v=7 --alsologtostderr
E1104 10:56:33.810287   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/functional-762465/client.crt: no such file or directory" logger="UnhandledError"
E1104 10:56:34.452155   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/functional-762465/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-931571 cp testdata/cp-test.txt ha-931571:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-931571 ssh -n ha-931571 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-931571 cp ha-931571:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2369318263/001/cp-test_ha-931571.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-931571 ssh -n ha-931571 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-931571 cp ha-931571:/home/docker/cp-test.txt ha-931571-m02:/home/docker/cp-test_ha-931571_ha-931571-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-931571 ssh -n ha-931571 "sudo cat /home/docker/cp-test.txt"
E1104 10:56:35.734455   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/functional-762465/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-931571 ssh -n ha-931571-m02 "sudo cat /home/docker/cp-test_ha-931571_ha-931571-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-931571 cp ha-931571:/home/docker/cp-test.txt ha-931571-m03:/home/docker/cp-test_ha-931571_ha-931571-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-931571 ssh -n ha-931571 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-931571 ssh -n ha-931571-m03 "sudo cat /home/docker/cp-test_ha-931571_ha-931571-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-931571 cp ha-931571:/home/docker/cp-test.txt ha-931571-m04:/home/docker/cp-test_ha-931571_ha-931571-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-931571 ssh -n ha-931571 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-931571 ssh -n ha-931571-m04 "sudo cat /home/docker/cp-test_ha-931571_ha-931571-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-931571 cp testdata/cp-test.txt ha-931571-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-931571 ssh -n ha-931571-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-931571 cp ha-931571-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2369318263/001/cp-test_ha-931571-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-931571 ssh -n ha-931571-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-931571 cp ha-931571-m02:/home/docker/cp-test.txt ha-931571:/home/docker/cp-test_ha-931571-m02_ha-931571.txt
E1104 10:56:38.295696   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/functional-762465/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-931571 ssh -n ha-931571-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-931571 ssh -n ha-931571 "sudo cat /home/docker/cp-test_ha-931571-m02_ha-931571.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-931571 cp ha-931571-m02:/home/docker/cp-test.txt ha-931571-m03:/home/docker/cp-test_ha-931571-m02_ha-931571-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-931571 ssh -n ha-931571-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-931571 ssh -n ha-931571-m03 "sudo cat /home/docker/cp-test_ha-931571-m02_ha-931571-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-931571 cp ha-931571-m02:/home/docker/cp-test.txt ha-931571-m04:/home/docker/cp-test_ha-931571-m02_ha-931571-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-931571 ssh -n ha-931571-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-931571 ssh -n ha-931571-m04 "sudo cat /home/docker/cp-test_ha-931571-m02_ha-931571-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-931571 cp testdata/cp-test.txt ha-931571-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-931571 ssh -n ha-931571-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-931571 cp ha-931571-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2369318263/001/cp-test_ha-931571-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-931571 ssh -n ha-931571-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-931571 cp ha-931571-m03:/home/docker/cp-test.txt ha-931571:/home/docker/cp-test_ha-931571-m03_ha-931571.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-931571 ssh -n ha-931571-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-931571 ssh -n ha-931571 "sudo cat /home/docker/cp-test_ha-931571-m03_ha-931571.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-931571 cp ha-931571-m03:/home/docker/cp-test.txt ha-931571-m02:/home/docker/cp-test_ha-931571-m03_ha-931571-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-931571 ssh -n ha-931571-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-931571 ssh -n ha-931571-m02 "sudo cat /home/docker/cp-test_ha-931571-m03_ha-931571-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-931571 cp ha-931571-m03:/home/docker/cp-test.txt ha-931571-m04:/home/docker/cp-test_ha-931571-m03_ha-931571-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-931571 ssh -n ha-931571-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-931571 ssh -n ha-931571-m04 "sudo cat /home/docker/cp-test_ha-931571-m03_ha-931571-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-931571 cp testdata/cp-test.txt ha-931571-m04:/home/docker/cp-test.txt
E1104 10:56:43.417442   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/functional-762465/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-931571 ssh -n ha-931571-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-931571 cp ha-931571-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2369318263/001/cp-test_ha-931571-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-931571 ssh -n ha-931571-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-931571 cp ha-931571-m04:/home/docker/cp-test.txt ha-931571:/home/docker/cp-test_ha-931571-m04_ha-931571.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-931571 ssh -n ha-931571-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-931571 ssh -n ha-931571 "sudo cat /home/docker/cp-test_ha-931571-m04_ha-931571.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-931571 cp ha-931571-m04:/home/docker/cp-test.txt ha-931571-m02:/home/docker/cp-test_ha-931571-m04_ha-931571-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-931571 ssh -n ha-931571-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-931571 ssh -n ha-931571-m02 "sudo cat /home/docker/cp-test_ha-931571-m04_ha-931571-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-931571 cp ha-931571-m04:/home/docker/cp-test.txt ha-931571-m03:/home/docker/cp-test_ha-931571-m04_ha-931571-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-931571 ssh -n ha-931571-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-931571 ssh -n ha-931571-m03 "sudo cat /home/docker/cp-test_ha-931571-m04_ha-931571-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (12.41s)
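
Each cp/ssh pair above is one round trip: stage testdata/cp-test.txt on a node (or copy it between nodes) with minikube cp, then read it back over ssh with sudo cat. A rough standalone sketch of a single round trip, using the same binary, profile, and node names as this run (illustrative only; the real assertions live in helpers_test.go):

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

func main() {
	local, err := os.ReadFile("testdata/cp-test.txt")
	if err != nil {
		panic(err)
	}

	// Stage the file on a node: minikube cp <src> <node>:<dst>.
	if err := exec.Command("out/minikube-linux-amd64", "-p", "ha-931571", "cp",
		"testdata/cp-test.txt", "ha-931571-m02:/home/docker/cp-test.txt").Run(); err != nil {
		panic(err)
	}

	// Read it back the way the helpers above do: ssh into the node and sudo cat the file.
	remote, err := exec.Command("out/minikube-linux-amd64", "-p", "ha-931571", "ssh",
		"-n", "ha-931571-m02", "sudo cat /home/docker/cp-test.txt").Output()
	if err != nil {
		panic(err)
	}

	fmt.Println("round trip intact:", bytes.Equal(bytes.TrimSpace(local), bytes.TrimSpace(remote)))
}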

                                                
                                    
TestJSONOutput/start/Command (77.25s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-069996 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-069996 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (1m17.245621356s)
--- PASS: TestJSONOutput/start/Command (77.25s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.65s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-069996 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.65s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.58s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-069996 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.58s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (6.58s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-069996 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-069996 --output=json --user=testUser: (6.584487049s)
--- PASS: TestJSONOutput/stop/Command (6.58s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.2s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-846494 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-846494 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (63.794707ms)

-- stdout --
	{"specversion":"1.0","id":"eed260e2-bd91-4562-8f49-41194a8c6d0a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-846494] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"ee9d3af5-198d-4e92-8a77-79cee9fb6783","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19906"}}
	{"specversion":"1.0","id":"176fc555-0e00-4431-b4fb-558213b06cd9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"f19192d0-fa27-4554-97ec-d91b3f00f578","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19906-19898/kubeconfig"}}
	{"specversion":"1.0","id":"40a0e941-0614-4fab-97f2-91555da7b554","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19906-19898/.minikube"}}
	{"specversion":"1.0","id":"4f80a73e-9a3e-4eff-9dce-e0af3e59b917","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"dca43116-3dbc-4f0a-8f6a-3050277ac860","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"02986312-1992-4573-9a41-7cecc918bd5b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-846494" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-846494
--- PASS: TestErrorJSONOutput (0.20s)
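
Each line in the stdout block above is a self-contained CloudEvents-style JSON object with specversion, type, and a string-valued data map; the last event, of type io.k8s.sigs.minikube.error, carries the DRV_UNSUPPORTED_OS name and the exit code 56 reported above. A small decoder for lines of that shape (an illustrative sketch, not minikube's or json_output_test.go's own parser):

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
	"strings"
)

// event mirrors the fields visible in the lines above; the data values are all strings.
type event struct {
	SpecVersion string            `json:"specversion"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin) // pipe the --output=json lines in on stdin
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if !strings.HasPrefix(line, "{") {
			continue // ignore anything that is not a JSON event line
		}
		var ev event
		if err := json.Unmarshal([]byte(line), &ev); err != nil {
			continue
		}
		switch ev.Type {
		case "io.k8s.sigs.minikube.step":
			fmt.Printf("step %s/%s: %s\n", ev.Data["currentstep"], ev.Data["totalsteps"], ev.Data["message"])
		case "io.k8s.sigs.minikube.error":
			fmt.Printf("error %s (exit code %s): %s\n", ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
		}
	}
}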

                                                
                                    
TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
TestMinikubeProfile (83.19s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-721082 --driver=kvm2  --container-runtime=crio
E1104 11:24:47.409208   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/addons-746456/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-721082 --driver=kvm2  --container-runtime=crio: (42.569380849s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-731170 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-731170 --driver=kvm2  --container-runtime=crio: (37.817819403s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-721082
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-731170
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-731170" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-731170
helpers_test.go:175: Cleaning up "first-721082" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-721082
--- PASS: TestMinikubeProfile (83.19s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (23.98s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-673389 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-673389 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (22.977158434s)
E1104 11:26:33.165374   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/functional-762465/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestMountStart/serial/StartWithMountFirst (23.98s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.37s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-673389 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-673389 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.37s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (23.63s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-689615 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-689615 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (22.628771376s)
--- PASS: TestMountStart/serial/StartWithMountSecond (23.63s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.39s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-689615 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-689615 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.39s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.68s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-673389 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.68s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.39s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-689615 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-689615 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.39s)

                                                
                                    
TestMountStart/serial/Stop (1.28s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-689615
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-689615: (1.278147434s)
--- PASS: TestMountStart/serial/Stop (1.28s)

                                                
                                    
TestMountStart/serial/RestartStopped (23.37s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-689615
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-689615: (22.366315014s)
--- PASS: TestMountStart/serial/RestartStopped (23.37s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.38s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-689615 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-689615 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.38s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (109.07s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-453447 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-453447 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m48.682989905s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-453447 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (109.07s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (5.1s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-453447 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-453447 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-453447 -- rollout status deployment/busybox: (3.657567552s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-453447 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-453447 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-453447 -- exec busybox-7dff88458-4kmjz -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-453447 -- exec busybox-7dff88458-cgg7m -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-453447 -- exec busybox-7dff88458-4kmjz -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-453447 -- exec busybox-7dff88458-cgg7m -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-453447 -- exec busybox-7dff88458-4kmjz -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-453447 -- exec busybox-7dff88458-cgg7m -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.10s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.76s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-453447 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-453447 -- exec busybox-7dff88458-4kmjz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-453447 -- exec busybox-7dff88458-4kmjz -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-453447 -- exec busybox-7dff88458-cgg7m -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-453447 -- exec busybox-7dff88458-cgg7m -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.76s)

                                                
                                    
TestMultiNode/serial/AddNode (50.83s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-453447 -v 3 --alsologtostderr
E1104 11:29:36.234342   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/functional-762465/client.crt: no such file or directory" logger="UnhandledError"
E1104 11:29:47.409304   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/addons-746456/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-453447 -v 3 --alsologtostderr: (50.293221408s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-453447 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (50.83s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-453447 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.55s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.55s)

                                                
                                    
TestMultiNode/serial/CopyFile (6.96s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-453447 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-453447 cp testdata/cp-test.txt multinode-453447:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-453447 ssh -n multinode-453447 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-453447 cp multinode-453447:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1426244323/001/cp-test_multinode-453447.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-453447 ssh -n multinode-453447 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-453447 cp multinode-453447:/home/docker/cp-test.txt multinode-453447-m02:/home/docker/cp-test_multinode-453447_multinode-453447-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-453447 ssh -n multinode-453447 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-453447 ssh -n multinode-453447-m02 "sudo cat /home/docker/cp-test_multinode-453447_multinode-453447-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-453447 cp multinode-453447:/home/docker/cp-test.txt multinode-453447-m03:/home/docker/cp-test_multinode-453447_multinode-453447-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-453447 ssh -n multinode-453447 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-453447 ssh -n multinode-453447-m03 "sudo cat /home/docker/cp-test_multinode-453447_multinode-453447-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-453447 cp testdata/cp-test.txt multinode-453447-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-453447 ssh -n multinode-453447-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-453447 cp multinode-453447-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1426244323/001/cp-test_multinode-453447-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-453447 ssh -n multinode-453447-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-453447 cp multinode-453447-m02:/home/docker/cp-test.txt multinode-453447:/home/docker/cp-test_multinode-453447-m02_multinode-453447.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-453447 ssh -n multinode-453447-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-453447 ssh -n multinode-453447 "sudo cat /home/docker/cp-test_multinode-453447-m02_multinode-453447.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-453447 cp multinode-453447-m02:/home/docker/cp-test.txt multinode-453447-m03:/home/docker/cp-test_multinode-453447-m02_multinode-453447-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-453447 ssh -n multinode-453447-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-453447 ssh -n multinode-453447-m03 "sudo cat /home/docker/cp-test_multinode-453447-m02_multinode-453447-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-453447 cp testdata/cp-test.txt multinode-453447-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-453447 ssh -n multinode-453447-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-453447 cp multinode-453447-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1426244323/001/cp-test_multinode-453447-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-453447 ssh -n multinode-453447-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-453447 cp multinode-453447-m03:/home/docker/cp-test.txt multinode-453447:/home/docker/cp-test_multinode-453447-m03_multinode-453447.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-453447 ssh -n multinode-453447-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-453447 ssh -n multinode-453447 "sudo cat /home/docker/cp-test_multinode-453447-m03_multinode-453447.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-453447 cp multinode-453447-m03:/home/docker/cp-test.txt multinode-453447-m02:/home/docker/cp-test_multinode-453447-m03_multinode-453447-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-453447 ssh -n multinode-453447-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-453447 ssh -n multinode-453447-m02 "sudo cat /home/docker/cp-test_multinode-453447-m03_multinode-453447-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (6.96s)

                                                
                                    
TestMultiNode/serial/StopNode (2.15s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-453447 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-453447 node stop m03: (1.342437983s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-453447 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-453447 status: exit status 7 (401.672415ms)

-- stdout --
	multinode-453447
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-453447-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-453447-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-453447 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-453447 status --alsologtostderr: exit status 7 (405.04981ms)

-- stdout --
	multinode-453447
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-453447-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-453447-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1104 11:30:20.584236   55969 out.go:345] Setting OutFile to fd 1 ...
	I1104 11:30:20.584329   55969 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1104 11:30:20.584336   55969 out.go:358] Setting ErrFile to fd 2...
	I1104 11:30:20.584340   55969 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1104 11:30:20.584506   55969 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19906-19898/.minikube/bin
	I1104 11:30:20.584660   55969 out.go:352] Setting JSON to false
	I1104 11:30:20.584680   55969 mustload.go:65] Loading cluster: multinode-453447
	I1104 11:30:20.584779   55969 notify.go:220] Checking for updates...
	I1104 11:30:20.585036   55969 config.go:182] Loaded profile config "multinode-453447": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1104 11:30:20.585054   55969 status.go:174] checking status of multinode-453447 ...
	I1104 11:30:20.585493   55969 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 11:30:20.585545   55969 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 11:30:20.600673   55969 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41085
	I1104 11:30:20.601157   55969 main.go:141] libmachine: () Calling .GetVersion
	I1104 11:30:20.601777   55969 main.go:141] libmachine: Using API Version  1
	I1104 11:30:20.601803   55969 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 11:30:20.602219   55969 main.go:141] libmachine: () Calling .GetMachineName
	I1104 11:30:20.602468   55969 main.go:141] libmachine: (multinode-453447) Calling .GetState
	I1104 11:30:20.604084   55969 status.go:371] multinode-453447 host status = "Running" (err=<nil>)
	I1104 11:30:20.604097   55969 host.go:66] Checking if "multinode-453447" exists ...
	I1104 11:30:20.604421   55969 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 11:30:20.604465   55969 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 11:30:20.620042   55969 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45101
	I1104 11:30:20.620524   55969 main.go:141] libmachine: () Calling .GetVersion
	I1104 11:30:20.621003   55969 main.go:141] libmachine: Using API Version  1
	I1104 11:30:20.621027   55969 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 11:30:20.621319   55969 main.go:141] libmachine: () Calling .GetMachineName
	I1104 11:30:20.621488   55969 main.go:141] libmachine: (multinode-453447) Calling .GetIP
	I1104 11:30:20.624378   55969 main.go:141] libmachine: (multinode-453447) DBG | domain multinode-453447 has defined MAC address 52:54:00:9c:5b:45 in network mk-multinode-453447
	I1104 11:30:20.624890   55969 main.go:141] libmachine: (multinode-453447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:5b:45", ip: ""} in network mk-multinode-453447: {Iface:virbr1 ExpiryTime:2024-11-04 12:27:39 +0000 UTC Type:0 Mac:52:54:00:9c:5b:45 Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:multinode-453447 Clientid:01:52:54:00:9c:5b:45}
	I1104 11:30:20.624933   55969 main.go:141] libmachine: (multinode-453447) DBG | domain multinode-453447 has defined IP address 192.168.39.86 and MAC address 52:54:00:9c:5b:45 in network mk-multinode-453447
	I1104 11:30:20.625052   55969 host.go:66] Checking if "multinode-453447" exists ...
	I1104 11:30:20.625519   55969 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 11:30:20.625567   55969 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 11:30:20.641071   55969 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45543
	I1104 11:30:20.641552   55969 main.go:141] libmachine: () Calling .GetVersion
	I1104 11:30:20.642098   55969 main.go:141] libmachine: Using API Version  1
	I1104 11:30:20.642120   55969 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 11:30:20.642460   55969 main.go:141] libmachine: () Calling .GetMachineName
	I1104 11:30:20.642644   55969 main.go:141] libmachine: (multinode-453447) Calling .DriverName
	I1104 11:30:20.642812   55969 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1104 11:30:20.642849   55969 main.go:141] libmachine: (multinode-453447) Calling .GetSSHHostname
	I1104 11:30:20.645647   55969 main.go:141] libmachine: (multinode-453447) DBG | domain multinode-453447 has defined MAC address 52:54:00:9c:5b:45 in network mk-multinode-453447
	I1104 11:30:20.646077   55969 main.go:141] libmachine: (multinode-453447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:5b:45", ip: ""} in network mk-multinode-453447: {Iface:virbr1 ExpiryTime:2024-11-04 12:27:39 +0000 UTC Type:0 Mac:52:54:00:9c:5b:45 Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:multinode-453447 Clientid:01:52:54:00:9c:5b:45}
	I1104 11:30:20.646102   55969 main.go:141] libmachine: (multinode-453447) DBG | domain multinode-453447 has defined IP address 192.168.39.86 and MAC address 52:54:00:9c:5b:45 in network mk-multinode-453447
	I1104 11:30:20.646217   55969 main.go:141] libmachine: (multinode-453447) Calling .GetSSHPort
	I1104 11:30:20.646358   55969 main.go:141] libmachine: (multinode-453447) Calling .GetSSHKeyPath
	I1104 11:30:20.646509   55969 main.go:141] libmachine: (multinode-453447) Calling .GetSSHUsername
	I1104 11:30:20.646686   55969 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/multinode-453447/id_rsa Username:docker}
	I1104 11:30:20.727907   55969 ssh_runner.go:195] Run: systemctl --version
	I1104 11:30:20.733339   55969 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1104 11:30:20.746338   55969 kubeconfig.go:125] found "multinode-453447" server: "https://192.168.39.86:8443"
	I1104 11:30:20.746376   55969 api_server.go:166] Checking apiserver status ...
	I1104 11:30:20.746424   55969 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1104 11:30:20.759352   55969 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1051/cgroup
	W1104 11:30:20.768429   55969 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1051/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1104 11:30:20.768475   55969 ssh_runner.go:195] Run: ls
	I1104 11:30:20.773252   55969 api_server.go:253] Checking apiserver healthz at https://192.168.39.86:8443/healthz ...
	I1104 11:30:20.777428   55969 api_server.go:279] https://192.168.39.86:8443/healthz returned 200:
	ok
	I1104 11:30:20.777451   55969 status.go:463] multinode-453447 apiserver status = Running (err=<nil>)
	I1104 11:30:20.777462   55969 status.go:176] multinode-453447 status: &{Name:multinode-453447 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1104 11:30:20.777488   55969 status.go:174] checking status of multinode-453447-m02 ...
	I1104 11:30:20.777769   55969 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 11:30:20.777804   55969 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 11:30:20.793776   55969 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35973
	I1104 11:30:20.794289   55969 main.go:141] libmachine: () Calling .GetVersion
	I1104 11:30:20.794812   55969 main.go:141] libmachine: Using API Version  1
	I1104 11:30:20.794834   55969 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 11:30:20.795162   55969 main.go:141] libmachine: () Calling .GetMachineName
	I1104 11:30:20.795324   55969 main.go:141] libmachine: (multinode-453447-m02) Calling .GetState
	I1104 11:30:20.797030   55969 status.go:371] multinode-453447-m02 host status = "Running" (err=<nil>)
	I1104 11:30:20.797047   55969 host.go:66] Checking if "multinode-453447-m02" exists ...
	I1104 11:30:20.797567   55969 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 11:30:20.797619   55969 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 11:30:20.812652   55969 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46123
	I1104 11:30:20.813084   55969 main.go:141] libmachine: () Calling .GetVersion
	I1104 11:30:20.813630   55969 main.go:141] libmachine: Using API Version  1
	I1104 11:30:20.813656   55969 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 11:30:20.813941   55969 main.go:141] libmachine: () Calling .GetMachineName
	I1104 11:30:20.814113   55969 main.go:141] libmachine: (multinode-453447-m02) Calling .GetIP
	I1104 11:30:20.816864   55969 main.go:141] libmachine: (multinode-453447-m02) DBG | domain multinode-453447-m02 has defined MAC address 52:54:00:a3:01:36 in network mk-multinode-453447
	I1104 11:30:20.817291   55969 main.go:141] libmachine: (multinode-453447-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:01:36", ip: ""} in network mk-multinode-453447: {Iface:virbr1 ExpiryTime:2024-11-04 12:28:41 +0000 UTC Type:0 Mac:52:54:00:a3:01:36 Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:multinode-453447-m02 Clientid:01:52:54:00:a3:01:36}
	I1104 11:30:20.817325   55969 main.go:141] libmachine: (multinode-453447-m02) DBG | domain multinode-453447-m02 has defined IP address 192.168.39.80 and MAC address 52:54:00:a3:01:36 in network mk-multinode-453447
	I1104 11:30:20.817467   55969 host.go:66] Checking if "multinode-453447-m02" exists ...
	I1104 11:30:20.817819   55969 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 11:30:20.817879   55969 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 11:30:20.832536   55969 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41487
	I1104 11:30:20.832936   55969 main.go:141] libmachine: () Calling .GetVersion
	I1104 11:30:20.833422   55969 main.go:141] libmachine: Using API Version  1
	I1104 11:30:20.833441   55969 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 11:30:20.833731   55969 main.go:141] libmachine: () Calling .GetMachineName
	I1104 11:30:20.833887   55969 main.go:141] libmachine: (multinode-453447-m02) Calling .DriverName
	I1104 11:30:20.834034   55969 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1104 11:30:20.834056   55969 main.go:141] libmachine: (multinode-453447-m02) Calling .GetSSHHostname
	I1104 11:30:20.836445   55969 main.go:141] libmachine: (multinode-453447-m02) DBG | domain multinode-453447-m02 has defined MAC address 52:54:00:a3:01:36 in network mk-multinode-453447
	I1104 11:30:20.836818   55969 main.go:141] libmachine: (multinode-453447-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:01:36", ip: ""} in network mk-multinode-453447: {Iface:virbr1 ExpiryTime:2024-11-04 12:28:41 +0000 UTC Type:0 Mac:52:54:00:a3:01:36 Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:multinode-453447-m02 Clientid:01:52:54:00:a3:01:36}
	I1104 11:30:20.836847   55969 main.go:141] libmachine: (multinode-453447-m02) DBG | domain multinode-453447-m02 has defined IP address 192.168.39.80 and MAC address 52:54:00:a3:01:36 in network mk-multinode-453447
	I1104 11:30:20.837003   55969 main.go:141] libmachine: (multinode-453447-m02) Calling .GetSSHPort
	I1104 11:30:20.837161   55969 main.go:141] libmachine: (multinode-453447-m02) Calling .GetSSHKeyPath
	I1104 11:30:20.837287   55969 main.go:141] libmachine: (multinode-453447-m02) Calling .GetSSHUsername
	I1104 11:30:20.837409   55969 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19906-19898/.minikube/machines/multinode-453447-m02/id_rsa Username:docker}
	I1104 11:30:20.911975   55969 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1104 11:30:20.925075   55969 status.go:176] multinode-453447-m02 status: &{Name:multinode-453447-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1104 11:30:20.925105   55969 status.go:174] checking status of multinode-453447-m03 ...
	I1104 11:30:20.925498   55969 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1104 11:30:20.925537   55969 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1104 11:30:20.940721   55969 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35287
	I1104 11:30:20.941244   55969 main.go:141] libmachine: () Calling .GetVersion
	I1104 11:30:20.941784   55969 main.go:141] libmachine: Using API Version  1
	I1104 11:30:20.941805   55969 main.go:141] libmachine: () Calling .SetConfigRaw
	I1104 11:30:20.942202   55969 main.go:141] libmachine: () Calling .GetMachineName
	I1104 11:30:20.942377   55969 main.go:141] libmachine: (multinode-453447-m03) Calling .GetState
	I1104 11:30:20.944000   55969 status.go:371] multinode-453447-m03 host status = "Stopped" (err=<nil>)
	I1104 11:30:20.944011   55969 status.go:384] host is not running, skipping remaining checks
	I1104 11:30:20.944016   55969 status.go:176] multinode-453447-m03 status: &{Name:multinode-453447-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.15s)
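
The status check logged above is what this minikube build does per node of a multi-node profile: it asks libvirt for the machine state and, for running nodes, opens an SSH session to probe the kubelet; a "Stopped" host short-circuits the remaining checks ("host is not running, skipping remaining checks"). A minimal sketch of reproducing the same probes by hand, assuming this build's "minikube ssh" accepts the -n/--node flag for worker nodes:
    # per-node Host/Kubelet/APIServer summary for the whole profile
    out/minikube-linux-amd64 -p multinode-453447 status --alsologtostderr
    # the same kubelet probe the status command runs over SSH on the m02 worker
    out/minikube-linux-amd64 ssh -p multinode-453447 -n multinode-453447-m02 "sudo systemctl is-active --quiet service kubelet"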

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (37.48s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-453447 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-453447 node start m03 -v=7 --alsologtostderr: (36.886040869s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-453447 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (37.48s)
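
The sequence exercised here is the standard way to bring a stopped worker back: "node start" restarts only that machine and rejoins it, and the follow-up status and kubectl calls confirm every node reports Ready again. A minimal sketch, reusing the profile and node names from the log above:
    out/minikube-linux-amd64 -p multinode-453447 node start m03 -v=7 --alsologtostderr   # restart only the m03 worker
    out/minikube-linux-amd64 -p multinode-453447 status -v=7 --alsologtostderr           # all nodes should report Running
    kubectl get nodes                                                                    # m03 should return to Ready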

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (2.12s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-453447 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-453447 node delete m03: (1.571913877s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-453447 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.12s)
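
The quoted go-template above prints one Ready-condition status per node so the test can assert the remaining nodes stay healthy after the delete. A jsonpath equivalent that may be easier to read (a sketch, not a command the test itself runs):
    kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'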

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (198.86s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-453447 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1104 11:39:30.479215   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/addons-746456/client.crt: no such file or directory" logger="UnhandledError"
E1104 11:39:47.412273   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/addons-746456/client.crt: no such file or directory" logger="UnhandledError"
E1104 11:41:33.164798   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/functional-762465/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-453447 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m18.354942681s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-453447 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (198.86s)

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (41.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-453447
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-453447-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-453447-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (62.77733ms)

                                                
                                                
-- stdout --
	* [multinode-453447-m02] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19906
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19906-19898/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19906-19898/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-453447-m02' is duplicated with machine name 'multinode-453447-m02' in profile 'multinode-453447'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-453447-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-453447-m03 --driver=kvm2  --container-runtime=crio: (39.97580114s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-453447
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-453447: exit status 80 (207.519022ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-453447 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-453447-m03 already exists in multinode-453447-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-453447-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (41.06s)
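
Both non-zero exits above are the intended behavior under test: a name of the form <profile>-m02 collides with a machine that already belongs to the multinode-453447 profile, so it cannot be reused as a new profile name, and "node add" refuses to add a node whose name is taken by another profile. A minimal sketch of the rule, reusing the commands from the log:
    # rejected with MK_USAGE (exit 14): the -m02 suffix collides with a machine of profile multinode-453447
    out/minikube-linux-amd64 start -p multinode-453447-m02 --driver=kvm2 --container-runtime=crio
    # the supported way to grow an existing profile (it fails here only because a separate
    # multinode-453447-m03 profile was created just above)
    out/minikube-linux-amd64 node add -p multinode-453447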

                                                
                                    
x
+
TestScheduledStopUnix (115.15s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-968642 --memory=2048 --driver=kvm2  --container-runtime=crio
E1104 11:46:16.238569   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/functional-762465/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-968642 --memory=2048 --driver=kvm2  --container-runtime=crio: (43.554612656s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-968642 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-968642 -n scheduled-stop-968642
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-968642 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1104 11:46:20.200071   27218 retry.go:31] will retry after 96.939µs: open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/scheduled-stop-968642/pid: no such file or directory
I1104 11:46:20.201265   27218 retry.go:31] will retry after 165.5µs: open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/scheduled-stop-968642/pid: no such file or directory
I1104 11:46:20.202439   27218 retry.go:31] will retry after 177.471µs: open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/scheduled-stop-968642/pid: no such file or directory
I1104 11:46:20.203611   27218 retry.go:31] will retry after 290.217µs: open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/scheduled-stop-968642/pid: no such file or directory
I1104 11:46:20.204778   27218 retry.go:31] will retry after 684.436µs: open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/scheduled-stop-968642/pid: no such file or directory
I1104 11:46:20.205939   27218 retry.go:31] will retry after 413.085µs: open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/scheduled-stop-968642/pid: no such file or directory
I1104 11:46:20.207105   27218 retry.go:31] will retry after 1.178891ms: open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/scheduled-stop-968642/pid: no such file or directory
I1104 11:46:20.209296   27218 retry.go:31] will retry after 2.163691ms: open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/scheduled-stop-968642/pid: no such file or directory
I1104 11:46:20.212522   27218 retry.go:31] will retry after 3.292721ms: open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/scheduled-stop-968642/pid: no such file or directory
I1104 11:46:20.216736   27218 retry.go:31] will retry after 3.02739ms: open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/scheduled-stop-968642/pid: no such file or directory
I1104 11:46:20.219869   27218 retry.go:31] will retry after 7.154255ms: open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/scheduled-stop-968642/pid: no such file or directory
I1104 11:46:20.228119   27218 retry.go:31] will retry after 12.663893ms: open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/scheduled-stop-968642/pid: no such file or directory
I1104 11:46:20.241355   27218 retry.go:31] will retry after 18.186025ms: open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/scheduled-stop-968642/pid: no such file or directory
I1104 11:46:20.260610   27218 retry.go:31] will retry after 17.772416ms: open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/scheduled-stop-968642/pid: no such file or directory
I1104 11:46:20.278846   27218 retry.go:31] will retry after 29.075725ms: open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/scheduled-stop-968642/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-968642 --cancel-scheduled
E1104 11:46:33.166401   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/functional-762465/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-968642 -n scheduled-stop-968642
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-968642
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-968642 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-968642
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-968642: exit status 7 (64.690815ms)

                                                
                                                
-- stdout --
	scheduled-stop-968642
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-968642 -n scheduled-stop-968642
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-968642 -n scheduled-stop-968642: exit status 7 (63.882972ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-968642" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-968642
--- PASS: TestScheduledStopUnix (115.15s)
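
The workflow being tested is the scheduled-stop feature: --schedule arms a delayed stop of the profile, --cancel-scheduled disarms it, and the TimeToStop field of "status" reflects whether a stop is still pending. A minimal sketch with the same profile name as above:
    out/minikube-linux-amd64 stop -p scheduled-stop-968642 --schedule 5m                                          # arm a stop 5 minutes out
    out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-968642 -n scheduled-stop-968642    # show the pending delay
    out/minikube-linux-amd64 stop -p scheduled-stop-968642 --cancel-scheduled                                     # disarm it again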

                                                
                                    
x
+
TestRunningBinaryUpgrade (125.74s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.3503227296 start -p running-upgrade-975889 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.3503227296 start -p running-upgrade-975889 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (53.096313951s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-975889 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-975889 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m10.897345019s)
helpers_test.go:175: Cleaning up "running-upgrade-975889" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-975889
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-975889: (1.219610383s)
--- PASS: TestRunningBinaryUpgrade (125.74s)
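
This test performs an in-place upgrade: the cluster is created with an old release binary (v1.26.0, downloaded by the test to a temp path) and then, without stopping it, restarted by the freshly built binary against the same profile. A minimal sketch of that sequence, using the temp path from this run:
    /tmp/minikube-v1.26.0.3503227296 start -p running-upgrade-975889 --memory=2200 --vm-driver=kvm2 --container-runtime=crio              # old release creates the cluster
    out/minikube-linux-amd64 start -p running-upgrade-975889 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio  # new binary takes over the running profile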

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-278038 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-278038 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (95.826007ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-278038] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19906
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19906-19898/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19906-19898/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
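
The MK_USAGE failure above is the expected guard: --no-kubernetes and --kubernetes-version are mutually exclusive, and the hint in the stderr shows how to clear a version pinned in the global config. A minimal sketch of the fix suggested by the error message:
    out/minikube-linux-amd64 config unset kubernetes-version                                                          # drop any globally pinned version
    out/minikube-linux-amd64 start -p NoKubernetes-278038 --no-kubernetes --driver=kvm2 --container-runtime=crio      # then start without Kubernetes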

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (115.47s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-278038 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-278038 --driver=kvm2  --container-runtime=crio: (1m55.235886929s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-278038 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (115.47s)

                                                
                                    
x
+
TestNetworkPlugins/group/false (4.62s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-528108 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-528108 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (971.406848ms)

                                                
                                                
-- stdout --
	* [false-528108] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19906
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19906-19898/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19906-19898/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1104 11:48:14.601805   64071 out.go:345] Setting OutFile to fd 1 ...
	I1104 11:48:14.601924   64071 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1104 11:48:14.601932   64071 out.go:358] Setting ErrFile to fd 2...
	I1104 11:48:14.601939   64071 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1104 11:48:14.602231   64071 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19906-19898/.minikube/bin
	I1104 11:48:14.602926   64071 out.go:352] Setting JSON to false
	I1104 11:48:14.604183   64071 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":9046,"bootTime":1730711849,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1104 11:48:14.604307   64071 start.go:139] virtualization: kvm guest
	I1104 11:48:14.606755   64071 out.go:177] * [false-528108] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1104 11:48:14.608224   64071 out.go:177]   - MINIKUBE_LOCATION=19906
	I1104 11:48:14.608282   64071 notify.go:220] Checking for updates...
	I1104 11:48:14.610898   64071 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1104 11:48:14.612292   64071 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19906-19898/kubeconfig
	I1104 11:48:14.613481   64071 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19906-19898/.minikube
	I1104 11:48:14.614774   64071 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1104 11:48:14.615991   64071 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1104 11:48:14.617733   64071 config.go:182] Loaded profile config "NoKubernetes-278038": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1104 11:48:14.617880   64071 config.go:182] Loaded profile config "kubernetes-upgrade-313751": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1104 11:48:14.618003   64071 config.go:182] Loaded profile config "offline-crio-263124": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1104 11:48:14.618109   64071 driver.go:394] Setting default libvirt URI to qemu:///system
	I1104 11:48:15.512771   64071 out.go:177] * Using the kvm2 driver based on user configuration
	I1104 11:48:15.513945   64071 start.go:297] selected driver: kvm2
	I1104 11:48:15.513958   64071 start.go:901] validating driver "kvm2" against <nil>
	I1104 11:48:15.513969   64071 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1104 11:48:15.515943   64071 out.go:201] 
	W1104 11:48:15.517040   64071 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1104 11:48:15.518281   64071 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-528108 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-528108

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-528108

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-528108

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-528108

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-528108

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-528108

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-528108

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-528108

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-528108

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-528108

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-528108" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-528108"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-528108" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-528108"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-528108" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-528108"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-528108

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-528108" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-528108"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-528108" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-528108"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-528108" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-528108" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-528108" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-528108" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-528108" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-528108" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-528108" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-528108" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-528108" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-528108"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-528108" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-528108"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-528108" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-528108"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-528108" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-528108"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-528108" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-528108"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-528108" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-528108" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-528108" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-528108" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-528108"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-528108" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-528108"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-528108" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-528108"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-528108" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-528108"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-528108" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-528108"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-528108

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-528108" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-528108"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-528108" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-528108"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-528108" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-528108"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-528108" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-528108"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-528108" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-528108"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-528108" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-528108"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-528108" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-528108"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-528108" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-528108"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-528108" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-528108"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-528108" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-528108"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-528108" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-528108"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-528108" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-528108"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-528108" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-528108"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-528108" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-528108"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-528108" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-528108"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-528108" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-528108"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-528108" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-528108"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-528108" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-528108"

                                                
                                                
----------------------- debugLogs end: false-528108 [took: 3.476430894s] --------------------------------
helpers_test.go:175: Cleaning up "false-528108" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-528108
--- PASS: TestNetworkPlugins/group/false (4.62s)
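
The exit-14 failure at the top of this block is the point of the test: with the crio runtime a CNI is mandatory, so --cni=false is rejected before any VM is created, and the debugLogs dump that follows only confirms no profile was left behind. A minimal sketch of CNI values that are accepted in the runs later in this report:
    out/minikube-linux-amd64 start -p kindnet-528108 --cni=kindnet --driver=kvm2 --container-runtime=crio                            # built-in kindnet CNI
    out/minikube-linux-amd64 start -p custom-flannel-528108 --cni=testdata/kube-flannel.yaml --driver=kvm2 --container-runtime=crio  # custom CNI manifest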

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (38.93s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-278038 --no-kubernetes --driver=kvm2  --container-runtime=crio
E1104 11:49:47.409414   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/addons-746456/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-278038 --no-kubernetes --driver=kvm2  --container-runtime=crio: (37.916311266s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-278038 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-278038 status -o json: exit status 2 (220.39777ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-278038","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-278038
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (38.93s)
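
The JSON status above is how the test verifies that rerunning start with --no-kubernetes on an existing profile keeps the VM but shuts down the control plane: Host stays Running while Kubelet and APIServer report Stopped, which is also why the status command itself exits 2. A minimal sketch of checking a single field, assuming jq is available on the host:
    out/minikube-linux-amd64 -p NoKubernetes-278038 status -o json | jq -r .Kubelet   # prints "Stopped"; the status command still exits 2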

                                                
                                    
x
+
TestNoKubernetes/serial/Start (26.54s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-278038 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-278038 --no-kubernetes --driver=kvm2  --container-runtime=crio: (26.538137551s)
--- PASS: TestNoKubernetes/serial/Start (26.54s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-278038 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-278038 "sudo systemctl is-active --quiet service kubelet": exit status 1 (197.227076ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.20s)

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (1.31s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.31s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.28s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-278038
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-278038: (1.282809044s)
--- PASS: TestNoKubernetes/serial/Stop (1.28s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (41.95s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-278038 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-278038 --driver=kvm2  --container-runtime=crio: (41.954234694s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (41.95s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-278038 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-278038 "sudo systemctl is-active --quiet service kubelet": exit status 1 (208.940159ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (0.43s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.43s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (133.9s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.1340808401 start -p stopped-upgrade-894910 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
E1104 11:51:33.168282   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/functional-762465/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.1340808401 start -p stopped-upgrade-894910 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m15.05629925s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.1340808401 -p stopped-upgrade-894910 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.1340808401 -p stopped-upgrade-894910 stop: (1.379258314s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-894910 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-894910 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (57.460848416s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (133.90s)
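
Unlike the running-upgrade case earlier, this test stops the cluster with the old binary before the new binary restarts it, which is the more typical upgrade path for users. A minimal sketch of the sequence, with the temp path this run used for the old release:
    /tmp/minikube-v1.26.0.1340808401 start -p stopped-upgrade-894910 --memory=2200 --vm-driver=kvm2 --container-runtime=crio           # old release creates the cluster
    /tmp/minikube-v1.26.0.1340808401 -p stopped-upgrade-894910 stop                                                                    # stop it with the old binary
    out/minikube-linux-amd64 start -p stopped-upgrade-894910 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio  # new binary restarts the stopped profile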

                                                
                                    
x
+
TestPause/serial/Start (57.29s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-706038 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-706038 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (57.294497077s)
--- PASS: TestPause/serial/Start (57.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (75.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-528108 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-528108 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m15.00611296s)
--- PASS: TestNetworkPlugins/group/auto/Start (75.01s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (0.81s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-894910
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.81s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (136.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-528108 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-528108 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (2m16.336685725s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (136.34s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (371.51s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-528108 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-528108 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (6m11.508241952s)
--- PASS: TestNetworkPlugins/group/calico/Start (371.51s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (81.37s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-706038 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-706038 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m21.327545064s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (81.37s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-528108 "pgrep -a kubelet"
I1104 11:54:46.985149   27218 config.go:182] Loaded profile config "auto-528108": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (11.6s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-528108 replace --force -f testdata/netcat-deployment.yaml
E1104 11:54:47.408941   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/addons-746456/client.crt: no such file or directory" logger="UnhandledError"
I1104 11:54:47.551157   27218 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
I1104 11:54:47.566675   27218 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-q5x9d" [81c8d2fe-eadf-4bbd-89ec-eb546875ee35] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-q5x9d" [81c8d2fe-eadf-4bbd-89ec-eb546875ee35] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.004069672s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.60s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-528108 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.13s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-528108 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-528108 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.11s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (165.63s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-528108 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-528108 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (2m45.634818102s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (165.63s)

                                                
                                    
TestPause/serial/Pause (0.84s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-706038 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.84s)

                                                
                                    
TestPause/serial/VerifyStatus (0.29s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-706038 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-706038 --output=json --layout=cluster: exit status 2 (290.090128ms)

                                                
                                                
-- stdout --
	{"Name":"pause-706038","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-706038","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.29s)

                                                
                                    
TestPause/serial/Unpause (0.67s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-706038 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.67s)

                                                
                                    
TestPause/serial/PauseAgain (0.83s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-706038 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.83s)

                                                
                                    
TestPause/serial/DeletePaused (0.91s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-706038 --alsologtostderr -v=5
--- PASS: TestPause/serial/DeletePaused (0.91s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.68s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.68s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (57.08s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-528108 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-528108 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (57.076611554s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (57.08s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-8dc79" [7aa2af67-1e92-4d94-abd4-c5e7be79a484] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004881911s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-528108 "pgrep -a kubelet"
I1104 11:55:57.087017   27218 config.go:182] Loaded profile config "kindnet-528108": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.25s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (11.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-528108 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-g9j4m" [7319892b-f4a2-4c89-bba6-934692b71dd7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-g9j4m" [7319892b-f4a2-4c89-bba6-934692b71dd7] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.003027405s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.31s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-528108 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-528108 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-528108 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.11s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (80.98s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-528108 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
E1104 11:56:33.164789   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/functional-762465/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-528108 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m20.981359825s)
--- PASS: TestNetworkPlugins/group/flannel/Start (80.98s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-528108 "pgrep -a kubelet"
I1104 11:56:43.561407   27218 config.go:182] Loaded profile config "enable-default-cni-528108": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.19s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-528108 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-l5ws4" [b32f6760-b5f8-42ef-9754-20bfe66d49df] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-l5ws4" [b32f6760-b5f8-42ef-9754-20bfe66d49df] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.004217605s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.22s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (15.91s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-528108 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Non-zero exit: kubectl --context enable-default-cni-528108 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.13975022s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1104 11:57:07.922958   27218 retry.go:31] will retry after 599.457684ms: exit status 1
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-528108 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (15.91s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-528108 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-528108 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (55.86s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-528108 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-528108 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (55.858664324s)
--- PASS: TestNetworkPlugins/group/bridge/Start (55.86s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-n6nmx" [9d61d857-c374-4485-8e7c-348da13c554b] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004910323s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-528108 "pgrep -a kubelet"
I1104 11:57:52.741808   27218 config.go:182] Loaded profile config "flannel-528108": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.24s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (12.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-528108 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-9cg77" [b5e28c19-a7a9-45c4-87ac-62c2d9519592] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-9cg77" [b5e28c19-a7a9-45c4-87ac-62c2d9519592] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 12.00519927s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (12.27s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-528108 "pgrep -a kubelet"
I1104 11:58:00.986148   27218 config.go:182] Loaded profile config "custom-flannel-528108": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.22s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (10.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-528108 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-mdrwm" [381f4687-c134-4cb3-92ac-eb9b8dae9f73] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-mdrwm" [381f4687-c134-4cb3-92ac-eb9b8dae9f73] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.004665307s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.30s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-528108 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-528108 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-528108 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.12s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-528108 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-528108 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-528108 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-528108 "pgrep -a kubelet"
I1104 11:58:19.769924   27218 config.go:182] Loaded profile config "bridge-528108": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.24s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (11.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-528108 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-65c29" [e8e952f5-6c30-4687-b119-cb108183879b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-65c29" [e8e952f5-6c30-4687-b119-cb108183879b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.004447655s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.26s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (88.43s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-908370 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-908370 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2: (1m28.425260107s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (88.43s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-528108 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-528108 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-528108 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (81.22s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-325116 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2
E1104 11:59:47.409443   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/addons-746456/client.crt: no such file or directory" logger="UnhandledError"
E1104 11:59:47.537103   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/auto-528108/client.crt: no such file or directory" logger="UnhandledError"
E1104 11:59:47.543548   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/auto-528108/client.crt: no such file or directory" logger="UnhandledError"
E1104 11:59:47.555042   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/auto-528108/client.crt: no such file or directory" logger="UnhandledError"
E1104 11:59:47.576473   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/auto-528108/client.crt: no such file or directory" logger="UnhandledError"
E1104 11:59:47.617898   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/auto-528108/client.crt: no such file or directory" logger="UnhandledError"
E1104 11:59:47.699369   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/auto-528108/client.crt: no such file or directory" logger="UnhandledError"
E1104 11:59:47.861011   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/auto-528108/client.crt: no such file or directory" logger="UnhandledError"
E1104 11:59:48.183083   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/auto-528108/client.crt: no such file or directory" logger="UnhandledError"
E1104 11:59:48.824801   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/auto-528108/client.crt: no such file or directory" logger="UnhandledError"
E1104 11:59:50.106308   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/auto-528108/client.crt: no such file or directory" logger="UnhandledError"
E1104 11:59:52.667688   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/auto-528108/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-325116 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2: (1m21.223028993s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (81.22s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (10.28s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-908370 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [211134d2-72ed-4243-818e-81755db54f57] Pending
E1104 11:59:57.789932   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/auto-528108/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [211134d2-72ed-4243-818e-81755db54f57] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [211134d2-72ed-4243-818e-81755db54f57] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.003839492s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-908370 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.28s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.97s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-908370 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1104 12:00:08.031464   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/auto-528108/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-908370 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.97s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (10.29s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-325116 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [faedbe05-e667-443f-9df2-18bb9bf19f99] Pending
helpers_test.go:344: "busybox" [faedbe05-e667-443f-9df2-18bb9bf19f99] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [faedbe05-e667-443f-9df2-18bb9bf19f99] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.003704912s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-325116 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.29s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-jhj7c" [d25c41b7-40fa-48e2-afff-6d04561ef649] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.00394217s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.92s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-325116 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-325116 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.92s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-528108 "pgrep -a kubelet"
I1104 12:00:21.162938   27218 config.go:182] Loaded profile config "calico-528108": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.21s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (11.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-528108 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-tl9hl" [61696c4d-f09d-4f35-97c1-5d0334ca85e2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-tl9hl" [61696c4d-f09d-4f35-97c1-5d0334ca85e2] Running
E1104 12:00:28.513584   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/auto-528108/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.004407949s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.31s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-528108 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-528108 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-528108 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.12s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (50.22s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-036892 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2
E1104 12:00:50.828737   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/kindnet-528108/client.crt: no such file or directory" logger="UnhandledError"
E1104 12:00:50.835137   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/kindnet-528108/client.crt: no such file or directory" logger="UnhandledError"
E1104 12:00:50.846572   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/kindnet-528108/client.crt: no such file or directory" logger="UnhandledError"
E1104 12:00:50.867972   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/kindnet-528108/client.crt: no such file or directory" logger="UnhandledError"
E1104 12:00:50.909376   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/kindnet-528108/client.crt: no such file or directory" logger="UnhandledError"
E1104 12:00:50.990863   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/kindnet-528108/client.crt: no such file or directory" logger="UnhandledError"
E1104 12:00:51.152617   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/kindnet-528108/client.crt: no such file or directory" logger="UnhandledError"
E1104 12:00:51.474498   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/kindnet-528108/client.crt: no such file or directory" logger="UnhandledError"
E1104 12:00:52.116267   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/kindnet-528108/client.crt: no such file or directory" logger="UnhandledError"
E1104 12:00:53.397707   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/kindnet-528108/client.crt: no such file or directory" logger="UnhandledError"
E1104 12:00:55.959020   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/kindnet-528108/client.crt: no such file or directory" logger="UnhandledError"
E1104 12:01:01.080689   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/kindnet-528108/client.crt: no such file or directory" logger="UnhandledError"
E1104 12:01:09.475741   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/auto-528108/client.crt: no such file or directory" logger="UnhandledError"
E1104 12:01:11.322434   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/kindnet-528108/client.crt: no such file or directory" logger="UnhandledError"
E1104 12:01:31.803867   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/kindnet-528108/client.crt: no such file or directory" logger="UnhandledError"
E1104 12:01:33.164945   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/functional-762465/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-036892 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2: (50.223656946s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (50.22s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.25s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-036892 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [ddc847de-e4e6-4c3d-b91d-835709a0fc1e] Pending
helpers_test.go:344: "busybox" [ddc847de-e4e6-4c3d-b91d-835709a0fc1e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [ddc847de-e4e6-4c3d-b91d-835709a0fc1e] Running
E1104 12:01:43.769632   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/enable-default-cni-528108/client.crt: no such file or directory" logger="UnhandledError"
E1104 12:01:43.776099   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/enable-default-cni-528108/client.crt: no such file or directory" logger="UnhandledError"
E1104 12:01:43.787469   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/enable-default-cni-528108/client.crt: no such file or directory" logger="UnhandledError"
E1104 12:01:43.808866   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/enable-default-cni-528108/client.crt: no such file or directory" logger="UnhandledError"
E1104 12:01:43.850890   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/enable-default-cni-528108/client.crt: no such file or directory" logger="UnhandledError"
E1104 12:01:43.932420   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/enable-default-cni-528108/client.crt: no such file or directory" logger="UnhandledError"
E1104 12:01:44.094003   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/enable-default-cni-528108/client.crt: no such file or directory" logger="UnhandledError"
E1104 12:01:44.415424   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/enable-default-cni-528108/client.crt: no such file or directory" logger="UnhandledError"
E1104 12:01:45.057713   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/enable-default-cni-528108/client.crt: no such file or directory" logger="UnhandledError"
E1104 12:01:46.339881   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/enable-default-cni-528108/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.003936301s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-036892 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.25s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.89s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-036892 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-036892 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.89s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (642.42s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-908370 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-908370 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2: (10m42.15689386s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-908370 -n no-preload-908370
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (642.42s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (562.66s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-325116 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2
E1104 12:02:51.632438   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/flannel-528108/client.crt: no such file or directory" logger="UnhandledError"
E1104 12:02:56.240460   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/functional-762465/client.crt: no such file or directory" logger="UnhandledError"
E1104 12:02:56.754682   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/flannel-528108/client.crt: no such file or directory" logger="UnhandledError"
E1104 12:03:01.267999   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/custom-flannel-528108/client.crt: no such file or directory" logger="UnhandledError"
E1104 12:03:01.274389   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/custom-flannel-528108/client.crt: no such file or directory" logger="UnhandledError"
E1104 12:03:01.285757   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/custom-flannel-528108/client.crt: no such file or directory" logger="UnhandledError"
E1104 12:03:01.307150   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/custom-flannel-528108/client.crt: no such file or directory" logger="UnhandledError"
E1104 12:03:01.348584   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/custom-flannel-528108/client.crt: no such file or directory" logger="UnhandledError"
E1104 12:03:01.430079   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/custom-flannel-528108/client.crt: no such file or directory" logger="UnhandledError"
E1104 12:03:01.591874   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/custom-flannel-528108/client.crt: no such file or directory" logger="UnhandledError"
E1104 12:03:01.913560   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/custom-flannel-528108/client.crt: no such file or directory" logger="UnhandledError"
E1104 12:03:02.555703   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/custom-flannel-528108/client.crt: no such file or directory" logger="UnhandledError"
E1104 12:03:03.837957   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/custom-flannel-528108/client.crt: no such file or directory" logger="UnhandledError"
E1104 12:03:05.710246   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/enable-default-cni-528108/client.crt: no such file or directory" logger="UnhandledError"
E1104 12:03:06.400064   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/custom-flannel-528108/client.crt: no such file or directory" logger="UnhandledError"
E1104 12:03:06.996145   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/flannel-528108/client.crt: no such file or directory" logger="UnhandledError"
E1104 12:03:11.521350   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/custom-flannel-528108/client.crt: no such file or directory" logger="UnhandledError"
E1104 12:03:20.020132   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/bridge-528108/client.crt: no such file or directory" logger="UnhandledError"
E1104 12:03:20.026512   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/bridge-528108/client.crt: no such file or directory" logger="UnhandledError"
E1104 12:03:20.038221   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/bridge-528108/client.crt: no such file or directory" logger="UnhandledError"
E1104 12:03:20.059619   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/bridge-528108/client.crt: no such file or directory" logger="UnhandledError"
E1104 12:03:20.101086   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/bridge-528108/client.crt: no such file or directory" logger="UnhandledError"
E1104 12:03:20.182566   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/bridge-528108/client.crt: no such file or directory" logger="UnhandledError"
E1104 12:03:20.344727   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/bridge-528108/client.crt: no such file or directory" logger="UnhandledError"
E1104 12:03:20.666403   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/bridge-528108/client.crt: no such file or directory" logger="UnhandledError"
E1104 12:03:21.308153   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/bridge-528108/client.crt: no such file or directory" logger="UnhandledError"
E1104 12:03:21.763119   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/custom-flannel-528108/client.crt: no such file or directory" logger="UnhandledError"
E1104 12:03:22.589762   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/bridge-528108/client.crt: no such file or directory" logger="UnhandledError"
E1104 12:03:25.151539   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/bridge-528108/client.crt: no such file or directory" logger="UnhandledError"
E1104 12:03:27.477870   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/flannel-528108/client.crt: no such file or directory" logger="UnhandledError"
E1104 12:03:30.272886   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/bridge-528108/client.crt: no such file or directory" logger="UnhandledError"
E1104 12:03:34.687981   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/kindnet-528108/client.crt: no such file or directory" logger="UnhandledError"
E1104 12:03:40.514298   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/bridge-528108/client.crt: no such file or directory" logger="UnhandledError"
E1104 12:03:42.244817   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/custom-flannel-528108/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-325116 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2: (9m22.410837691s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-325116 -n embed-certs-325116
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (562.66s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (6.29s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-589257 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-589257 --alsologtostderr -v=3: (6.289586217s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (6.29s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (491.65s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-036892 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-036892 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2: (8m11.396203785s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-036892 -n default-k8s-diff-port-036892
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (491.65s)
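The second start above passes --apiserver-port=8444, so the restarted profile should publish its API server on port 8444 rather than the default 8443. Below is a minimal sketch of that check, assuming kubectl is on PATH and the profile's cluster entry in the kubeconfig carries the profile name; the jsonpath query and the strict suffix check are illustrative and are not part of the test suite.

-- example (illustrative Go sketch) --
// Minimal sketch: confirm the kubeconfig entry written by
// "minikube start ... --apiserver-port=8444" points at port 8444.
// Assumes kubectl is installed and the named cluster entry exists.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	cluster := "default-k8s-diff-port-036892" // profile/cluster name from the log above

	out, err := exec.Command("kubectl", "config", "view",
		"-o", fmt.Sprintf(`jsonpath={.clusters[?(@.name=="%s")].cluster.server}`, cluster),
	).Output()
	if err != nil {
		panic(err)
	}

	server := strings.TrimSpace(string(out))
	if !strings.HasSuffix(server, ":8444") {
		panic(fmt.Sprintf("expected API server on port 8444, got %q", server))
	}
	fmt.Println("API server address:", server)
}
-- /example --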

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-589257 -n old-k8s-version-589257
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-589257 -n old-k8s-version-589257: exit status 7 (63.314145ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-589257 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)
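In the run above, "out/minikube-linux-amd64 status --format={{.Host}}" prints "Stopped" and exits with status 7 for the halted profile, and the test treats that exit code as acceptable before enabling the dashboard addon against the stopped profile. Below is a minimal sketch of the same sequence outside the test harness; the exit-code-7 expectation is taken from this run's output, and the program structure is illustrative rather than minikube's actual test code.

-- example (illustrative Go sketch) --
// Minimal sketch (not minikube's test code): tolerate "status" exiting 7 on a
// stopped profile, then enable the dashboard addon as the test above does.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	profile := "old-k8s-version-589257" // profile name from the log above

	status := exec.Command("out/minikube-linux-amd64",
		"status", "--format={{.Host}}", "-p", profile, "-n", profile)
	out, err := status.CombinedOutput()

	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 7 {
		// In this run, exit status 7 accompanied the "Stopped" state; the test
		// logs "status error: exit status 7 (may be ok)" and keeps going.
		fmt.Printf("host stopped (exit 7): %s", out)
	} else if err != nil {
		panic(err)
	}

	enable := exec.Command("out/minikube-linux-amd64",
		"addons", "enable", "dashboard", "-p", profile,
		"--images=MetricsScraper=registry.k8s.io/echoserver:1.4")
	if eout, err := enable.CombinedOutput(); err != nil {
		panic(fmt.Sprintf("enable dashboard failed: %v\n%s", err, eout))
	}
}
-- /example --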

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (49.11s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-374564 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-374564 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2: (49.111313793s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (49.11s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.06s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-374564 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-374564 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.057590108s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.06s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (7.31s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-374564 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-374564 --alsologtostderr -v=3: (7.312523264s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (7.31s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-374564 -n newest-cni-374564
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-374564 -n newest-cni-374564: exit status 7 (64.274186ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-374564 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (35.08s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-374564 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2
E1104 12:29:30.485598   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/addons-746456/client.crt: no such file or directory" logger="UnhandledError"
E1104 12:29:47.409135   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/addons-746456/client.crt: no such file or directory" logger="UnhandledError"
E1104 12:29:47.536823   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/auto-528108/client.crt: no such file or directory" logger="UnhandledError"
E1104 12:29:57.167067   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/no-preload-908370/client.crt: no such file or directory" logger="UnhandledError"
E1104 12:29:57.173490   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/no-preload-908370/client.crt: no such file or directory" logger="UnhandledError"
E1104 12:29:57.184958   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/no-preload-908370/client.crt: no such file or directory" logger="UnhandledError"
E1104 12:29:57.206373   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/no-preload-908370/client.crt: no such file or directory" logger="UnhandledError"
E1104 12:29:57.247763   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/no-preload-908370/client.crt: no such file or directory" logger="UnhandledError"
E1104 12:29:57.329203   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/no-preload-908370/client.crt: no such file or directory" logger="UnhandledError"
E1104 12:29:57.490877   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/no-preload-908370/client.crt: no such file or directory" logger="UnhandledError"
E1104 12:29:57.812666   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/no-preload-908370/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-374564 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2: (34.694286228s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-374564 -n newest-cni-374564
E1104 12:29:58.454259   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/no-preload-908370/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (35.08s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-374564 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241007-36f62932
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (2.4s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-374564 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-374564 -n newest-cni-374564
E1104 12:29:59.735652   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/no-preload-908370/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-374564 -n newest-cni-374564: exit status 2 (234.998303ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-374564 -n newest-cni-374564
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-374564 -n newest-cni-374564: exit status 2 (228.53938ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-374564 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-374564 -n newest-cni-374564
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-374564 -n newest-cni-374564
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.40s)
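While newest-cni-374564 is paused above, the {{.APIServer}} status field reports "Paused" and {{.Kubelet}} reports "Stopped", each with exit status 2, which the test tolerates before unpausing. Below is a minimal sketch of that pause/unpause round trip; the profile name comes from the log, and treating exit status 2 as the expected code for a paused component is an assumption based on this run rather than a documented contract.

-- example (illustrative Go sketch) --
// Minimal sketch of the pause -> status -> unpause flow exercised above.
// Not the real test; expectations mirror this run's output only.
package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

// statusField runs "minikube status" for a single Go-template field and
// returns the trimmed output plus the command's exit code.
func statusField(profile, field string) (string, int, error) {
	cmd := exec.Command("out/minikube-linux-amd64",
		"status", "--format={{."+field+"}}", "-p", profile, "-n", profile)
	out, err := cmd.CombinedOutput()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		// Non-zero exit: the test above logs this as "(may be ok)" and continues.
		return strings.TrimSpace(string(out)), ee.ExitCode(), nil
	}
	return strings.TrimSpace(string(out)), 0, err
}

func main() {
	profile := "newest-cni-374564" // profile name from the log above

	run := func(args ...string) {
		if out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput(); err != nil {
			panic(fmt.Sprintf("%v failed: %v\n%s", args, err, out))
		}
	}
	expect := func(field, want string) {
		got, code, err := statusField(profile, field)
		if err != nil {
			panic(err)
		}
		// In this run, exit status 2 accompanied the paused state.
		fmt.Printf("%s=%q (exit %d), want %q\n", field, got, code, want)
	}

	run("pause", "-p", profile, "--alsologtostderr", "-v=1")
	expect("APIServer", "Paused")
	expect("Kubelet", "Stopped")
	run("unpause", "-p", profile, "--alsologtostderr", "-v=1")
}
-- /example --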

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-036892 image list --format=json
E1104 12:30:38.143555   27218 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19906-19898/.minikube/profiles/no-preload-908370/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241007-36f62932
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.22s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (2.31s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-036892 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-036892 -n default-k8s-diff-port-036892
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-036892 -n default-k8s-diff-port-036892: exit status 2 (235.164316ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-036892 -n default-k8s-diff-port-036892
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-036892 -n default-k8s-diff-port-036892: exit status 2 (243.93466ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-036892 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-036892 -n default-k8s-diff-port-036892
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-036892 -n default-k8s-diff-port-036892
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.31s)

                                                
                                    

Test skip (34/320)

TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.2/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.2/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.2/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
TestAddons/serial/Volcano (0.27s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:789: skipping: crio not supported
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-746456 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.27s)

                                                
                                    
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:698: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:972: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
TestNetworkPlugins/group/kubenet (2.92s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-528108 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-528108

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-528108

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-528108

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-528108

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-528108

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-528108

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-528108

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-528108

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-528108

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-528108

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-528108" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-528108"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-528108" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-528108"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-528108" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-528108"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-528108

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-528108" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-528108"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-528108" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-528108"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-528108" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-528108" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-528108" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-528108" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-528108" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-528108" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-528108" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-528108" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-528108" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-528108"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-528108" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-528108"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-528108" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-528108"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-528108" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-528108"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-528108" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-528108"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-528108" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-528108" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-528108" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-528108" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-528108"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-528108" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-528108"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-528108" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-528108"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-528108" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-528108"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-528108" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-528108"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-528108

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-528108" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-528108"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-528108" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-528108"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-528108" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-528108"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-528108" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-528108"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-528108" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-528108"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-528108" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-528108"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-528108" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-528108"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-528108" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-528108"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-528108" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-528108"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-528108" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-528108"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-528108" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-528108"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-528108" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-528108"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-528108" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-528108"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-528108" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-528108"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-528108" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-528108"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-528108" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-528108"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-528108" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-528108"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-528108" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-528108"

                                                
                                                
----------------------- debugLogs end: kubenet-528108 [took: 2.757177378s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-528108" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-528108
--- SKIP: TestNetworkPlugins/group/kubenet (2.92s)
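The debugLogs dump above is a battery of labelled probes run against the never-started kubenet-528108 profile: kubectl queries against the named context plus host-level checks, each printing whatever the command returned (here, "context was not found" and "Profile ... not found" errors). Below is a small illustrative sketch of that collection pattern; it is not the helper the suite actually uses, and the probe list, the use of minikube ssh for host checks, and the exact command shapes are assumptions.

-- example (illustrative Go sketch) --
// Illustrative per-profile diagnostics dump in the style of the debugLogs
// output above: label each probe, run it, print the result, keep going on error.
// Not minikube's actual debugLogs helper.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	profile := "kubenet-528108" // context/profile name from the log above

	probes := []struct {
		label string
		cmd   []string
	}{
		{"k8s: kubectl config", []string{"kubectl", "config", "view"}},
		{"k8s: coredns logs", []string{"kubectl", "--context", profile, "logs", "-n", "kube-system", "-l", "k8s-app=kube-dns"}},
		{"host: ip a s", []string{"out/minikube-linux-amd64", "-p", profile, "ssh", "ip a s"}},
		{"host: /etc/resolv.conf", []string{"out/minikube-linux-amd64", "-p", profile, "ssh", "cat /etc/resolv.conf"}},
	}

	for _, p := range probes {
		fmt.Printf(">>> %s:\n", p.label)
		out, err := exec.Command(p.cmd[0], p.cmd[1:]...).CombinedOutput()
		fmt.Print(string(out))
		if err != nil {
			fmt.Printf("(probe failed: %v)\n", err)
		}
		fmt.Println()
	}
}
-- /example --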

                                                
                                    
TestNetworkPlugins/group/cilium (5.39s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-528108 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-528108

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-528108

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-528108

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-528108

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-528108

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-528108

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-528108

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-528108

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-528108

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-528108

>>> host: /etc/nsswitch.conf:
* Profile "cilium-528108" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-528108"

>>> host: /etc/hosts:
* Profile "cilium-528108" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-528108"

>>> host: /etc/resolv.conf:
* Profile "cilium-528108" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-528108"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-528108

>>> host: crictl pods:
* Profile "cilium-528108" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-528108"

>>> host: crictl containers:
* Profile "cilium-528108" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-528108"

>>> k8s: describe netcat deployment:
error: context "cilium-528108" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-528108" does not exist

>>> k8s: netcat logs:
error: context "cilium-528108" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-528108" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-528108" does not exist

>>> k8s: coredns logs:
error: context "cilium-528108" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-528108" does not exist

>>> k8s: api server logs:
error: context "cilium-528108" does not exist

>>> host: /etc/cni:
* Profile "cilium-528108" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-528108"

>>> host: ip a s:
* Profile "cilium-528108" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-528108"

>>> host: ip r s:
* Profile "cilium-528108" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-528108"

>>> host: iptables-save:
* Profile "cilium-528108" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-528108"

>>> host: iptables table nat:
* Profile "cilium-528108" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-528108"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-528108

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-528108

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-528108" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-528108" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-528108

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-528108

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-528108" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-528108" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-528108" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-528108" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-528108" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-528108" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-528108"

>>> host: kubelet daemon config:
* Profile "cilium-528108" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-528108"

>>> k8s: kubelet logs:
* Profile "cilium-528108" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-528108"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-528108" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-528108"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-528108" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-528108"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-528108

>>> host: docker daemon status:
* Profile "cilium-528108" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-528108"

>>> host: docker daemon config:
* Profile "cilium-528108" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-528108"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-528108" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-528108"

>>> host: docker system info:
* Profile "cilium-528108" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-528108"

>>> host: cri-docker daemon status:
* Profile "cilium-528108" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-528108"

>>> host: cri-docker daemon config:
* Profile "cilium-528108" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-528108"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-528108" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-528108"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-528108" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-528108"

>>> host: cri-dockerd version:
* Profile "cilium-528108" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-528108"

>>> host: containerd daemon status:
* Profile "cilium-528108" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-528108"

>>> host: containerd daemon config:
* Profile "cilium-528108" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-528108"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-528108" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-528108"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-528108" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-528108"

>>> host: containerd config dump:
* Profile "cilium-528108" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-528108"

>>> host: crio daemon status:
* Profile "cilium-528108" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-528108"

>>> host: crio daemon config:
* Profile "cilium-528108" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-528108"

>>> host: /etc/crio:
* Profile "cilium-528108" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-528108"

>>> host: crio config:
* Profile "cilium-528108" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-528108"

----------------------- debugLogs end: cilium-528108 [took: 5.219139257s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-528108" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-528108
--- SKIP: TestNetworkPlugins/group/cilium (5.39s)

x
+
TestStartStop/group/disable-driver-mounts (0.15s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-457408" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-457408
--- SKIP: TestStartStop/group/disable-driver-mounts (0.15s)